tf.compat.v1.tpu.shard Shards computation for parallel execution. tf.compat.v1.tpu.shard( computation, inputs=None, num_shards=1, input_shard_axes=None, outputs_from_all_shards=True, output_shard_axes=None, infeed_queue=None, device_assignment=None, name=None, xla_options=None ) inputs must be a list of Tensors or None (equivalent to an empty list), each of which has a corresponding split axis (from input_shard_axes). Each input is split into num_shards pieces along the corresponding axis, and computation is applied to each shard in parallel. Tensors are broadcast to all shards if they are lexically captured by computation. e.g., x = tf.constant(7) def computation(): return x + 3 ... = shard(computation, ...) If outputs_from_all_shards is true, the outputs from all shards of computation are concatenated back together along their output_shard_axes. Otherwise, each output is taken from an arbitrary shard. Inputs and outputs of the computation must be at least rank-1 Tensors. Args computation A Python function that builds a computation to apply to each shard of the input. inputs A list of input tensors or None (equivalent to an empty list). Each input tensor has a corresponding shard axis, given by input_shard_axes, along which its size must be divisible by num_shards. num_shards The number of shards. input_shard_axes A list of dimensions along which to shard inputs, or None. None means "shard all inputs along dimension 0". If not None, there must be one dimension per input. outputs_from_all_shards Boolean or list of boolean. For each output, if True, outputs from all shards are concatenated along the corresponding output_shard_axes entry. Otherwise, each output is taken from an arbitrary shard. If the argument is a boolean, the argument's value is used for each output. output_shard_axes A list of dimensions along which to concatenate the outputs of computation, or None. None means "concatenate all outputs along dimension 0". If not None, there must be one dimension per output. Ignored if outputs_from_all_shards is False. infeed_queue If not None, the InfeedQueue to use to augment the inputs of computation. device_assignment If not None, a DeviceAssignment describing the mapping of logical cores in the computation to physical cores in the TPU topology. Uses a default device assignment if None. The DeviceAssignment may be omitted if each shard of the computation uses only one core, and there is either only one shard, or the number of shards is equal to the number of cores in the TPU system. name (Deprecated) Does nothing. xla_options An instance of tpu.XLAOptions which indicates the options passed to the XLA compiler. Use None for default options. Returns A list of output tensors. Raises ValueError If num_shards <= 0. ValueError If len(input_shard_axes) != len(inputs). ValueError If len(output_shard_axes) != len(outputs from computation).
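A minimal sketch of calling shard, assuming a TPU system is available to execute the compiled graph; the [8, 8] placeholder and the double_fn computation below are illustrative stand-ins, not part of the API:

import tensorflow as tf

def double_fn(shard_input):
    # With num_shards=2 and input_shard_axes=[0], each shard sees a [4, 8] slice.
    return [shard_input * 2.0]

x = tf.compat.v1.placeholder(tf.float32, shape=[8, 8])
# Split x into 2 pieces along dimension 0, run double_fn on each piece in
# parallel, and concatenate the per-shard outputs back along dimension 0.
outputs = tf.compat.v1.tpu.shard(
    double_fn,
    inputs=[x],
    num_shards=2,
    input_shard_axes=[0],
    output_shard_axes=[0])

Evaluating outputs requires a session connected to a TPU worker, between initialize_system and shutdown_system.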
tensorflow.compat.v1.tpu.shard
tf.compat.v1.tpu.shutdown_system Shuts down a running distributed TPU system. tf.compat.v1.tpu.shutdown_system( job=None ) Args job The job (the XXX in TensorFlow device specification /job:XXX) that contains the TPU devices that will be shut down. If job=None, it is assumed there is only one job in the TensorFlow flock, and an error will be returned if this assumption does not hold.
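A hedged sketch of how shutdown_system is typically paired with initialize_system; the grpc://tpu-worker:8470 master address is a placeholder for a real TPU worker:

import tensorflow as tf

init_op = tf.compat.v1.tpu.initialize_system()
shutdown_op = tf.compat.v1.tpu.shutdown_system()

with tf.compat.v1.Session("grpc://tpu-worker:8470") as sess:
    sess.run(init_op)
    # ... run TPU computations here ...
    sess.run(shutdown_op)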
tensorflow.compat.v1.tpu.shutdown_system
tf.compat.v1.tpu.XLAOptions XLA compilation options. tf.compat.v1.tpu.XLAOptions( use_spmd_for_xla_partitioning=True ) Attributes use_spmd_for_xla_partitioning Boolean. Whether to use XLA's SPMD partitioner instead of MPMD partitioner when compiler partitioning is requested.
tensorflow.compat.v1.tpu.xlaoptions
Module: tf.compat.v1.train Support for training models. See the Training guide. Modules experimental module: Public API for tf.train.experimental namespace. queue_runner module: Public API for tf.train.queue_runner namespace. Classes class AdadeltaOptimizer: Optimizer that implements the Adadelta algorithm. class AdagradDAOptimizer: Adagrad Dual Averaging algorithm for sparse linear models. class AdagradOptimizer: Optimizer that implements the Adagrad algorithm. class AdamOptimizer: Optimizer that implements the Adam algorithm. class BytesList: A ProtocolMessage class Checkpoint: Groups trackable objects, saving and restoring them. class CheckpointManager: Manages multiple checkpoints by keeping some and deleting unneeded ones. class CheckpointOptions: Options for constructing a Checkpoint. class CheckpointSaverHook: Saves checkpoints every N steps or seconds. class CheckpointSaverListener: Interface for listeners that take action before or after checkpoint save. class ChiefSessionCreator: Creates a tf.compat.v1.Session for a chief. class ClusterDef: A ProtocolMessage class ClusterSpec: Represents a cluster as a set of "tasks", organized into "jobs". class Coordinator: A coordinator for threads. class Example: A ProtocolMessage class ExponentialMovingAverage: Maintains moving averages of variables by employing an exponential decay. class Feature: A ProtocolMessage class FeatureList: A ProtocolMessage class FeatureLists: A ProtocolMessage class Features: A ProtocolMessage class FeedFnHook: Runs feed_fn and sets the feed_dict accordingly. class FinalOpsHook: A hook which evaluates Tensors at the end of a session. class FloatList: A ProtocolMessage class FtrlOptimizer: Optimizer that implements the FTRL algorithm. class GlobalStepWaiterHook: Delays execution until global step reaches wait_until_step. class GradientDescentOptimizer: Optimizer that implements the gradient descent algorithm. class Int64List: A ProtocolMessage class JobDef: A ProtocolMessage class LoggingTensorHook: Prints the given tensors every N local steps, every N seconds, or at end. class LooperThread: A thread that runs code repeatedly, optionally on a timer. class MomentumOptimizer: Optimizer that implements the Momentum algorithm. class MonitoredSession: Session-like object that handles initialization, recovery and hooks. class NanLossDuringTrainingError: Unspecified run-time error. class NanTensorHook: Monitors the loss tensor and stops training if loss is NaN. class Optimizer: Base class for optimizers. class ProfilerHook: Captures CPU/GPU profiling information every N steps or seconds. class ProximalAdagradOptimizer: Optimizer that implements the Proximal Adagrad algorithm. class ProximalGradientDescentOptimizer: Optimizer that implements the proximal gradient descent algorithm. class QueueRunner: Holds a list of enqueue operations for a queue, each to be run in a thread. class RMSPropOptimizer: Optimizer that implements the RMSProp algorithm (Tielemans et al. class Saver: Saves and restores variables. class SaverDef: A ProtocolMessage class Scaffold: Structure to create or gather pieces commonly needed to train a model. class SecondOrStepTimer: Timer that triggers at most once every N seconds or once every N steps. class SequenceExample: A ProtocolMessage class Server: An in-process TensorFlow server, for use in distributed training. class ServerDef: A ProtocolMessage class SessionCreator: A factory for tf.Session. class SessionManager: Training helper that restores from checkpoint and creates session. 
class SessionRunArgs: Represents arguments to be added to a Session.run() call. class SessionRunContext: Provides information about the session.run() call being made. class SessionRunHook: Hook to extend calls to MonitoredSession.run(). class SessionRunValues: Contains the results of Session.run(). class SingularMonitoredSession: Session-like object that handles initialization, restoring, and hooks. class StepCounterHook: Hook that counts steps per second. class StopAtStepHook: Hook that requests stop at a specified step. class SummarySaverHook: Saves summaries every N steps. class Supervisor: A training helper that checkpoints models and computes summaries. class SyncReplicasOptimizer: Class to synchronize, aggregate gradients and pass them to the optimizer. class VocabInfo: Vocabulary information for warm-starting. class WorkerSessionCreator: Creates a tf.compat.v1.Session for a worker. Functions MonitoredTrainingSession(...): Creates a MonitoredSession for training. NewCheckpointReader(...): A function that returns a CheckPointReader. add_queue_runner(...): Adds a QueueRunner to a collection in the graph. (deprecated) assert_global_step(...): Asserts global_step_tensor is a scalar int Variable or Tensor. basic_train_loop(...): Basic loop to train a model. batch(...): Creates batches of tensors in tensors. (deprecated) batch_join(...): Runs a list of tensors to fill a queue to create batches of examples. (deprecated) checkpoint_exists(...): Checks whether a V1 or V2 checkpoint exists with the specified prefix. (deprecated) checkpoints_iterator(...): Continuously yield new checkpoint files as they appear. cosine_decay(...): Applies cosine decay to the learning rate. cosine_decay_restarts(...): Applies cosine decay with restarts to the learning rate. create_global_step(...): Create global step tensor in graph. do_quantize_training_on_graphdef(...): A general quantization scheme is being developed in tf.contrib.quantize. (deprecated) exponential_decay(...): Applies exponential decay to the learning rate. export_meta_graph(...): Returns MetaGraphDef proto. generate_checkpoint_state_proto(...): Generates a checkpoint state proto. get_checkpoint_mtimes(...): Returns the mtimes (modification timestamps) of the checkpoints. (deprecated) get_checkpoint_state(...): Returns CheckpointState proto from the "checkpoint" file. get_global_step(...): Get the global step tensor. get_or_create_global_step(...): Returns and create (if necessary) the global step tensor. global_step(...): Small helper to get the global step. import_meta_graph(...): Recreates a Graph saved in a MetaGraphDef proto. init_from_checkpoint(...): Replaces tf.Variable initializers so they load from a checkpoint file. input_producer(...): Output the rows of input_tensor to a queue for an input pipeline. (deprecated) inverse_time_decay(...): Applies inverse time decay to the initial learning rate. latest_checkpoint(...): Finds the filename of latest saved checkpoint file. limit_epochs(...): Returns tensor num_epochs times and then raises an OutOfRange error. (deprecated) linear_cosine_decay(...): Applies linear cosine decay to the learning rate. list_variables(...): Lists the checkpoint keys and shapes of variables in a checkpoint. load_checkpoint(...): Returns CheckpointReader for checkpoint found in ckpt_dir_or_file. load_variable(...): Returns the tensor value of the given variable in the checkpoint. match_filenames_once(...): Save the list of files matching pattern, so it is only computed once. 
maybe_batch(...): Conditionally creates batches of tensors based on keep_input. (deprecated) maybe_batch_join(...): Runs a list of tensors to conditionally fill a queue to create batches. (deprecated) maybe_shuffle_batch(...): Creates batches by randomly shuffling conditionally-enqueued tensors. (deprecated) maybe_shuffle_batch_join(...): Creates batches by randomly shuffling conditionally-enqueued tensors. (deprecated) natural_exp_decay(...): Applies natural exponential decay to the initial learning rate. noisy_linear_cosine_decay(...): Applies noisy linear cosine decay to the learning rate. piecewise_constant(...): Piecewise constant from boundaries and interval values. piecewise_constant_decay(...): Piecewise constant from boundaries and interval values. polynomial_decay(...): Applies a polynomial decay to the learning rate. range_input_producer(...): Produces the integers from 0 to limit-1 in a queue. (deprecated) remove_checkpoint(...): Removes a checkpoint given by checkpoint_prefix. (deprecated) replica_device_setter(...): Returns a device function to use when building a Graph for replicas. sdca_fprint(...): Computes fingerprints of the input strings. sdca_optimizer(...): Distributed version of the Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models. sdca_shrink_l1(...): Applies L1 regularization shrink step on the parameters. shuffle_batch(...): Creates batches by randomly shuffling tensors. (deprecated) shuffle_batch_join(...): Creates batches by randomly shuffling tensors. (deprecated) slice_input_producer(...): Produces a slice of each Tensor in tensor_list. (deprecated) start_queue_runners(...): Starts all queue runners collected in the graph. (deprecated) string_input_producer(...): Outputs strings (e.g. filenames) to a queue for an input pipeline. (deprecated) summary_iterator(...): Returns an iterator for reading Event protocol buffers from an event file. update_checkpoint_state(...): Updates the content of the 'checkpoint' file. (deprecated) warm_start(...): Warm-starts a model using the given settings. write_graph(...): Writes a graph proto to a file.
tensorflow.compat.v1.train
tf.compat.v1.train.AdadeltaOptimizer Optimizer that implements the Adadelta algorithm. Inherits From: Optimizer tf.compat.v1.train.AdadeltaOptimizer( learning_rate=0.001, rho=0.95, epsilon=1e-08, use_locking=False, name='Adadelta' ) References: ADADELTA - An Adaptive Learning Rate Method: Zeiler, 2012 (pdf) Args learning_rate A Tensor or a floating point value. The learning rate. To match the exact form in the original paper use 1.0. rho A Tensor or a floating point value. The decay rate. epsilon A Tensor or a floating point value. A constant epsilon used to better conditioning the grad update. use_locking If True use locks for update operations. name Optional name prefix for the operations created when applying gradients. Defaults to "Adadelta". Methods apply_gradients View source apply_gradients( grads_and_vars, global_step=None, name=None ) Apply gradients to variables. This is the second part of minimize(). It returns an Operation that applies gradients. Args grads_and_vars List of (gradient, variable) pairs as returned by compute_gradients(). global_step Optional Variable to increment by one after the variables have been updated. name Optional name for the returned operation. Default to the name passed to the Optimizer constructor. Returns An Operation that applies the specified gradients. If global_step was not None, that operation also increments global_step. Raises TypeError If grads_and_vars is malformed. ValueError If none of the variables have gradients. RuntimeError If you should use _distributed_apply() instead. compute_gradients View source compute_gradients( loss, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None ) Compute gradients of loss for the variables in var_list. This is the first part of minimize(). It returns a list of (gradient, variable) pairs where "gradient" is the gradient for "variable". Note that "gradient" can be a Tensor, an IndexedSlices, or None if there is no gradient for the given variable. Args loss A Tensor containing the value to minimize or a callable taking no arguments which returns the value to minimize. When eager execution is enabled it must be a callable. var_list Optional list or tuple of tf.Variable to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. grad_loss Optional. A Tensor holding the gradient computed for loss. Returns A list of (gradient, variable) pairs. Variable is always present, but gradient can be None. Raises TypeError If var_list contains anything else than Variable objects. ValueError If some arguments are invalid. RuntimeError If called with eager execution enabled and loss is not callable. Eager Compatibility When eager execution is enabled, gate_gradients, aggregation_method, and colocate_gradients_with_ops are ignored. get_name View source get_name() get_slot View source get_slot( var, name ) Return a slot named name created for var by the Optimizer. Some Optimizer subclasses use additional variables. For example Momentum and Adagrad use variables to accumulate updates. This method gives access to these Variable objects if for some reason you need them. 
Use get_slot_names() to get the list of slot names created by the Optimizer. Args var A variable passed to minimize() or apply_gradients(). name A string. Returns The Variable for the slot if it was created, None otherwise. get_slot_names View source get_slot_names() Return a list of the names of slots created by the Optimizer. See get_slot(). Returns A list of strings. minimize View source minimize( loss, global_step=None, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None ) Add operations to minimize loss by updating var_list. This method simply combines calls compute_gradients() and apply_gradients(). If you want to process the gradient before applying them call compute_gradients() and apply_gradients() explicitly instead of using this function. Args loss A Tensor containing the value to minimize. global_step Optional Variable to increment by one after the variables have been updated. var_list Optional list or tuple of Variable objects to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. name Optional name for the returned operation. grad_loss Optional. A Tensor holding the gradient computed for loss. Returns An Operation that updates the variables in var_list. If global_step was not None, that operation also increments global_step. Raises ValueError If some of the variables are not Variable objects. Eager Compatibility When eager execution is enabled, loss should be a Python function that takes no arguments and computes the value to be minimized. Minimization (and gradient computation) is done with respect to the elements of var_list if not None, else with respect to any trainable variables created during the execution of the loss function. gate_gradients, aggregation_method, colocate_gradients_with_ops and grad_loss are ignored when eager execution is enabled. variables View source variables() A list of variables which encode the current state of Optimizer. Includes slot variables and additional global variables created by the optimizer in the current default graph. Returns A list of variables. Class Variables GATE_GRAPH 2 GATE_NONE 0 GATE_OP 1
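A minimal graph-mode sketch of minimize with a global_step; the toy quadratic loss stands in for a real model and is not part of the API:

import tensorflow as tf

w = tf.compat.v1.get_variable("w", initializer=5.0)
loss = tf.square(w - 3.0)  # toy loss with its minimum at w == 3
global_step = tf.compat.v1.train.get_or_create_global_step()

# learning_rate=1.0 matches the form in the original Adadelta paper.
opt = tf.compat.v1.train.AdadeltaOptimizer(learning_rate=1.0, rho=0.95)
train_op = opt.minimize(loss, global_step=global_step)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    for _ in range(100):
        sess.run(train_op)
    print(sess.run([w, global_step]))  # global_step is now 100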
tensorflow.compat.v1.train.adadeltaoptimizer
tf.compat.v1.train.AdagradDAOptimizer Adagrad Dual Averaging algorithm for sparse linear models. Inherits From: Optimizer tf.compat.v1.train.AdagradDAOptimizer( learning_rate, global_step, initial_gradient_squared_accumulator_value=0.1, l1_regularization_strength=0.0, l2_regularization_strength=0.0, use_locking=False, name='AdagradDA' ) This optimizer takes care of regularization of unseen features in a mini batch by updating them when they are seen with a closed form update rule that is equivalent to having updated them on every mini-batch. AdagradDA is typically used when there is a need for large sparsity in the trained model. This optimizer only guarantees sparsity for linear models. Be careful when using AdagradDA for deep networks as it will require careful initialization of the gradient accumulators for it to train. References: Adaptive Subgradient Methods for Online Learning and Stochastic Optimization :Duchi et al., 2011 (pdf) Args learning_rate A Tensor or a floating point value. The learning rate. global_step A Tensor containing the current training step number. initial_gradient_squared_accumulator_value A floating point value. Starting value for the accumulators, must be positive. l1_regularization_strength A float value, must be greater than or equal to zero. l2_regularization_strength A float value, must be greater than or equal to zero. use_locking If True use locks for update operations. name Optional name prefix for the operations created when applying gradients. Defaults to "AdagradDA". Raises ValueError If the initial_gradient_squared_accumulator_value is invalid. Methods apply_gradients View source apply_gradients( grads_and_vars, global_step=None, name=None ) Apply gradients to variables. This is the second part of minimize(). It returns an Operation that applies gradients. Args grads_and_vars List of (gradient, variable) pairs as returned by compute_gradients(). global_step Optional Variable to increment by one after the variables have been updated. name Optional name for the returned operation. Default to the name passed to the Optimizer constructor. Returns An Operation that applies the specified gradients. If global_step was not None, that operation also increments global_step. Raises TypeError If grads_and_vars is malformed. ValueError If none of the variables have gradients. RuntimeError If you should use _distributed_apply() instead. compute_gradients View source compute_gradients( loss, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None ) Compute gradients of loss for the variables in var_list. This is the first part of minimize(). It returns a list of (gradient, variable) pairs where "gradient" is the gradient for "variable". Note that "gradient" can be a Tensor, an IndexedSlices, or None if there is no gradient for the given variable. Args loss A Tensor containing the value to minimize or a callable taking no arguments which returns the value to minimize. When eager execution is enabled it must be a callable. var_list Optional list or tuple of tf.Variable to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. 
grad_loss Optional. A Tensor holding the gradient computed for loss. Returns A list of (gradient, variable) pairs. Variable is always present, but gradient can be None. Raises TypeError If var_list contains anything else than Variable objects. ValueError If some arguments are invalid. RuntimeError If called with eager execution enabled and loss is not callable. Eager Compatibility When eager execution is enabled, gate_gradients, aggregation_method, and colocate_gradients_with_ops are ignored. get_name View source get_name() get_slot View source get_slot( var, name ) Return a slot named name created for var by the Optimizer. Some Optimizer subclasses use additional variables. For example Momentum and Adagrad use variables to accumulate updates. This method gives access to these Variable objects if for some reason you need them. Use get_slot_names() to get the list of slot names created by the Optimizer. Args var A variable passed to minimize() or apply_gradients(). name A string. Returns The Variable for the slot if it was created, None otherwise. get_slot_names View source get_slot_names() Return a list of the names of slots created by the Optimizer. See get_slot(). Returns A list of strings. minimize View source minimize( loss, global_step=None, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None ) Add operations to minimize loss by updating var_list. This method simply combines calls compute_gradients() and apply_gradients(). If you want to process the gradient before applying them call compute_gradients() and apply_gradients() explicitly instead of using this function. Args loss A Tensor containing the value to minimize. global_step Optional Variable to increment by one after the variables have been updated. var_list Optional list or tuple of Variable objects to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. name Optional name for the returned operation. grad_loss Optional. A Tensor holding the gradient computed for loss. Returns An Operation that updates the variables in var_list. If global_step was not None, that operation also increments global_step. Raises ValueError If some of the variables are not Variable objects. Eager Compatibility When eager execution is enabled, loss should be a Python function that takes no arguments and computes the value to be minimized. Minimization (and gradient computation) is done with respect to the elements of var_list if not None, else with respect to any trainable variables created during the execution of the loss function. gate_gradients, aggregation_method, colocate_gradients_with_ops and grad_loss are ignored when eager execution is enabled. variables View source variables() A list of variables which encode the current state of Optimizer. Includes slot variables and additional global variables created by the optimizer in the current default graph. Returns A list of variables. Class Variables GATE_GRAPH 2 GATE_NONE 0 GATE_OP 1
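A sketch of wiring AdagradDA into a toy linear model; the shapes, placeholders, and regularization strength below are illustrative assumptions:

import tensorflow as tf

global_step = tf.compat.v1.train.get_or_create_global_step()
weights = tf.compat.v1.get_variable(
    "weights", shape=[10], initializer=tf.compat.v1.zeros_initializer())
x = tf.compat.v1.placeholder(tf.float32, shape=[None, 10])
y = tf.compat.v1.placeholder(tf.float32, shape=[None])
loss = tf.reduce_mean(tf.square(tf.reduce_sum(x * weights, axis=1) - y))

# The optimizer itself consumes the global_step tensor; passing it to
# minimize() as well increments it after each update.
opt = tf.compat.v1.train.AdagradDAOptimizer(
    learning_rate=0.1,
    global_step=global_step,
    l1_regularization_strength=0.01)
train_op = opt.minimize(loss, global_step=global_step)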
tensorflow.compat.v1.train.adagraddaoptimizer
tf.compat.v1.train.AdagradOptimizer Optimizer that implements the Adagrad algorithm. Inherits From: Optimizer tf.compat.v1.train.AdagradOptimizer( learning_rate, initial_accumulator_value=0.1, use_locking=False, name='Adagrad' ) References: Adaptive Subgradient Methods for Online Learning and Stochastic Optimization :Duchi et al., 2011 (pdf) Args learning_rate A Tensor or a floating point value. The learning rate. initial_accumulator_value A floating point value. Starting value for the accumulators, must be positive. use_locking If True use locks for update operations. name Optional name prefix for the operations created when applying gradients. Defaults to "Adagrad". Raises ValueError If the initial_accumulator_value is invalid. Methods apply_gradients View source apply_gradients( grads_and_vars, global_step=None, name=None ) Apply gradients to variables. This is the second part of minimize(). It returns an Operation that applies gradients. Args grads_and_vars List of (gradient, variable) pairs as returned by compute_gradients(). global_step Optional Variable to increment by one after the variables have been updated. name Optional name for the returned operation. Default to the name passed to the Optimizer constructor. Returns An Operation that applies the specified gradients. If global_step was not None, that operation also increments global_step. Raises TypeError If grads_and_vars is malformed. ValueError If none of the variables have gradients. RuntimeError If you should use _distributed_apply() instead. compute_gradients View source compute_gradients( loss, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None ) Compute gradients of loss for the variables in var_list. This is the first part of minimize(). It returns a list of (gradient, variable) pairs where "gradient" is the gradient for "variable". Note that "gradient" can be a Tensor, an IndexedSlices, or None if there is no gradient for the given variable. Args loss A Tensor containing the value to minimize or a callable taking no arguments which returns the value to minimize. When eager execution is enabled it must be a callable. var_list Optional list or tuple of tf.Variable to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. grad_loss Optional. A Tensor holding the gradient computed for loss. Returns A list of (gradient, variable) pairs. Variable is always present, but gradient can be None. Raises TypeError If var_list contains anything else than Variable objects. ValueError If some arguments are invalid. RuntimeError If called with eager execution enabled and loss is not callable. Eager Compatibility When eager execution is enabled, gate_gradients, aggregation_method, and colocate_gradients_with_ops are ignored. get_name View source get_name() get_slot View source get_slot( var, name ) Return a slot named name created for var by the Optimizer. Some Optimizer subclasses use additional variables. For example Momentum and Adagrad use variables to accumulate updates. This method gives access to these Variable objects if for some reason you need them. 
Use get_slot_names() to get the list of slot names created by the Optimizer. Args var A variable passed to minimize() or apply_gradients(). name A string. Returns The Variable for the slot if it was created, None otherwise. get_slot_names View source get_slot_names() Return a list of the names of slots created by the Optimizer. See get_slot(). Returns A list of strings. minimize View source minimize( loss, global_step=None, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None ) Add operations to minimize loss by updating var_list. This method simply combines calls compute_gradients() and apply_gradients(). If you want to process the gradient before applying them call compute_gradients() and apply_gradients() explicitly instead of using this function. Args loss A Tensor containing the value to minimize. global_step Optional Variable to increment by one after the variables have been updated. var_list Optional list or tuple of Variable objects to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. name Optional name for the returned operation. grad_loss Optional. A Tensor holding the gradient computed for loss. Returns An Operation that updates the variables in var_list. If global_step was not None, that operation also increments global_step. Raises ValueError If some of the variables are not Variable objects. Eager Compatibility When eager execution is enabled, loss should be a Python function that takes no arguments and computes the value to be minimized. Minimization (and gradient computation) is done with respect to the elements of var_list if not None, else with respect to any trainable variables created during the execution of the loss function. gate_gradients, aggregation_method, colocate_gradients_with_ops and grad_loss are ignored when eager execution is enabled. variables View source variables() A list of variables which encode the current state of Optimizer. Includes slot variables and additional global variables created by the optimizer in the current default graph. Returns A list of variables. Class Variables GATE_GRAPH 2 GATE_NONE 0 GATE_OP 1
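A sketch showing the per-variable accumulator slot that Adagrad creates, accessed via get_slot; the toy loss and the slot name are assumptions based on the usual slot naming:

import tensorflow as tf

w = tf.compat.v1.get_variable("w", initializer=[1.0, 2.0])
loss = tf.reduce_sum(tf.square(w))

opt = tf.compat.v1.train.AdagradOptimizer(
    learning_rate=0.1, initial_accumulator_value=0.1)
train_op = opt.minimize(loss)

# Adagrad keeps one slot per variable; get_slot_names() lists the names
# (typically ["accumulator"]) and get_slot() returns the slot variable.
print(opt.get_slot_names())
accumulator = opt.get_slot(w, "accumulator")

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    sess.run(train_op)
    print(sess.run(accumulator))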
tensorflow.compat.v1.train.adagradoptimizer
tf.compat.v1.train.AdamOptimizer Optimizer that implements the Adam algorithm. Inherits From: Optimizer tf.compat.v1.train.AdamOptimizer( learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, use_locking=False, name='Adam' ) References: Adam - A Method for Stochastic Optimization: Kingma et al., 2015 (pdf) Args learning_rate A Tensor or a floating point value. The learning rate. beta1 A float value or a constant float tensor. The exponential decay rate for the 1st moment estimates. beta2 A float value or a constant float tensor. The exponential decay rate for the 2nd moment estimates. epsilon A small constant for numerical stability. This epsilon is "epsilon hat" in the Kingma and Ba paper (in the formula just before Section 2.1), not the epsilon in Algorithm 1 of the paper. use_locking If True use locks for update operations. name Optional name for the operations created when applying gradients. Defaults to "Adam". Methods apply_gradients View source apply_gradients( grads_and_vars, global_step=None, name=None ) Apply gradients to variables. This is the second part of minimize(). It returns an Operation that applies gradients. Args grads_and_vars List of (gradient, variable) pairs as returned by compute_gradients(). global_step Optional Variable to increment by one after the variables have been updated. name Optional name for the returned operation. Default to the name passed to the Optimizer constructor. Returns An Operation that applies the specified gradients. If global_step was not None, that operation also increments global_step. Raises TypeError If grads_and_vars is malformed. ValueError If none of the variables have gradients. RuntimeError If you should use _distributed_apply() instead. compute_gradients View source compute_gradients( loss, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None ) Compute gradients of loss for the variables in var_list. This is the first part of minimize(). It returns a list of (gradient, variable) pairs where "gradient" is the gradient for "variable". Note that "gradient" can be a Tensor, an IndexedSlices, or None if there is no gradient for the given variable. Args loss A Tensor containing the value to minimize or a callable taking no arguments which returns the value to minimize. When eager execution is enabled it must be a callable. var_list Optional list or tuple of tf.Variable to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. grad_loss Optional. A Tensor holding the gradient computed for loss. Returns A list of (gradient, variable) pairs. Variable is always present, but gradient can be None. Raises TypeError If var_list contains anything else than Variable objects. ValueError If some arguments are invalid. RuntimeError If called with eager execution enabled and loss is not callable. Eager Compatibility When eager execution is enabled, gate_gradients, aggregation_method, and colocate_gradients_with_ops are ignored. get_name View source get_name() get_slot View source get_slot( var, name ) Return a slot named name created for var by the Optimizer. 
Some Optimizer subclasses use additional variables. For example Momentum and Adagrad use variables to accumulate updates. This method gives access to these Variable objects if for some reason you need them. Use get_slot_names() to get the list of slot names created by the Optimizer. Args var A variable passed to minimize() or apply_gradients(). name A string. Returns The Variable for the slot if it was created, None otherwise. get_slot_names View source get_slot_names() Return a list of the names of slots created by the Optimizer. See get_slot(). Returns A list of strings. minimize View source minimize( loss, global_step=None, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None ) Add operations to minimize loss by updating var_list. This method simply combines calls compute_gradients() and apply_gradients(). If you want to process the gradient before applying them call compute_gradients() and apply_gradients() explicitly instead of using this function. Args loss A Tensor containing the value to minimize. global_step Optional Variable to increment by one after the variables have been updated. var_list Optional list or tuple of Variable objects to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. name Optional name for the returned operation. grad_loss Optional. A Tensor holding the gradient computed for loss. Returns An Operation that updates the variables in var_list. If global_step was not None, that operation also increments global_step. Raises ValueError If some of the variables are not Variable objects. Eager Compatibility When eager execution is enabled, loss should be a Python function that takes no arguments and computes the value to be minimized. Minimization (and gradient computation) is done with respect to the elements of var_list if not None, else with respect to any trainable variables created during the execution of the loss function. gate_gradients, aggregation_method, colocate_gradients_with_ops and grad_loss are ignored when eager execution is enabled. variables View source variables() A list of variables which encode the current state of Optimizer. Includes slot variables and additional global variables created by the optimizer in the current default graph. Returns A list of variables. Class Variables GATE_GRAPH 2 GATE_NONE 0 GATE_OP 1
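A sketch of the two-step compute_gradients / apply_gradients path described above, using gradient clipping purely as an illustrative transformation:

import tensorflow as tf

w = tf.compat.v1.get_variable("w", initializer=[0.5, -0.5])
loss = tf.reduce_sum(tf.square(w - 1.0))
global_step = tf.compat.v1.train.get_or_create_global_step()

opt = tf.compat.v1.train.AdamOptimizer(learning_rate=0.001)

# First part of minimize(): obtain (gradient, variable) pairs ...
grads_and_vars = opt.compute_gradients(loss)
# ... optionally transform the gradients (clipping is just an example) ...
clipped = [(tf.clip_by_norm(g, 1.0), v)
           for g, v in grads_and_vars if g is not None]
# ... second part: apply them, incrementing global_step.
train_op = opt.apply_gradients(clipped, global_step=global_step)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    sess.run(train_op)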
tensorflow.compat.v1.train.adamoptimizer
tf.compat.v1.train.add_queue_runner Adds a QueueRunner to a collection in the graph. (deprecated) View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.train.queue_runner.add_queue_runner tf.compat.v1.train.add_queue_runner( qr, collection=tf.GraphKeys.QUEUE_RUNNERS ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: To construct input pipelines, use the tf.data module. When building a complex model that uses many queues it is often difficult to gather all the queue runners that need to be run. This convenience function allows you to add a queue runner to a well known collection in the graph. The companion method start_queue_runners() can be used to start threads for all the collected queue runners. Args qr A QueueRunner. collection A GraphKey specifying the graph collection to add the queue runner to. Defaults to GraphKeys.QUEUE_RUNNERS.
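A sketch of the deprecated queue-runner pattern this function supports; the single-scalar FIFOQueue is a toy stand-in for a real input pipeline:

import tensorflow as tf

queue = tf.compat.v1.FIFOQueue(capacity=32, dtypes=[tf.float32])
enqueue_op = queue.enqueue([tf.random.uniform([])])

# Register a runner so start_queue_runners() can launch threads that keep
# the queue filled in the background.
qr = tf.compat.v1.train.QueueRunner(queue, [enqueue_op] * 2)
tf.compat.v1.train.add_queue_runner(qr)

with tf.compat.v1.Session() as sess:
    coord = tf.compat.v1.train.Coordinator()
    threads = tf.compat.v1.train.start_queue_runners(sess=sess, coord=coord)
    print(sess.run(queue.dequeue()))
    coord.request_stop()
    coord.join(threads)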
tensorflow.compat.v1.train.add_queue_runner
tf.compat.v1.train.assert_global_step Asserts global_step_tensor is a scalar int Variable or Tensor. tf.compat.v1.train.assert_global_step( global_step_tensor ) Args global_step_tensor Tensor to test.
tensorflow.compat.v1.train.assert_global_step
tf.compat.v1.train.basic_train_loop Basic loop to train a model. tf.compat.v1.train.basic_train_loop( supervisor, train_step_fn, args=None, kwargs=None, master='' ) Calls train_step_fn in a loop to train a model. The function is called as: train_step_fn(session, *args, **kwargs) It is passed a tf.compat.v1.Session in addition to args and kwargs. The function typically runs one training step in the session. Args supervisor tf.compat.v1.train.Supervisor to run the training services. train_step_fn Callable to execute one training step. Called repeatedly as train_step_fn(session, *args, **kwargs). args Optional positional arguments passed to train_step_fn. kwargs Optional keyword arguments passed to train_step_fn. master Master to use to create the training session. Defaults to "", which causes the session to be created in the local process.
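A sketch of a train_step_fn driven by basic_train_loop; the toy loss, stopping threshold, and /tmp/train_logs directory are illustrative assumptions:

import tensorflow as tf

w = tf.compat.v1.get_variable("w", initializer=10.0)
loss = tf.square(w)
train_op = tf.compat.v1.train.GradientDescentOptimizer(0.1).minimize(loss)

def train_step_fn(session, supervisor):
    # One training step per call; ask the supervisor to stop when converged.
    _, loss_value = session.run([train_op, loss])
    if loss_value < 1e-3:
        supervisor.request_stop()

sv = tf.compat.v1.train.Supervisor(logdir="/tmp/train_logs")
tf.compat.v1.train.basic_train_loop(sv, train_step_fn, args=(sv,))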
tensorflow.compat.v1.train.basic_train_loop
tf.compat.v1.train.batch Creates batches of tensors in tensors. (deprecated) tf.compat.v1.train.batch( tensors, batch_size, num_threads=1, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, shared_name=None, name=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.batch(batch_size) (or padded_batch(...) if dynamic_pad=True). The argument tensors can be a list or a dictionary of tensors. The value returned by the function will be of the same type as tensors. This function is implemented using a queue. A QueueRunner for the queue is added to the current Graph's QUEUE_RUNNER collection. If enqueue_many is False, tensors is assumed to represent a single example. An input tensor with shape [x, y, z] will be output as a tensor with shape [batch_size, x, y, z]. If enqueue_many is True, tensors is assumed to represent a batch of examples, where the first dimension is indexed by example, and all members of tensors should have the same size in the first dimension. If an input tensor has shape [*, x, y, z], the output will have shape [batch_size, x, y, z]. The capacity argument controls the how long the prefetching is allowed to grow the queues. The returned operation is a dequeue operation and will throw tf.errors.OutOfRangeError if the input queue is exhausted. If this operation is feeding another input queue, its queue runner will catch this exception, however, if this operation is used in your main thread you are responsible for catching this yourself. Note: If dynamic_pad is False, you must ensure that either (i) the shapes argument is passed, or (ii) all of the tensors in tensors must have fully-defined shapes. ValueError will be raised if neither of these conditions holds. If dynamic_pad is True, it is sufficient that the rank of the tensors is known, but individual dimensions may have shape None. In this case, for each enqueue the dimensions with value None may have a variable length; upon dequeue, the output tensors will be padded on the right to the maximum shape of the tensors in the current minibatch. For numbers, this padding takes value 0. For strings, this padding is the empty string. See PaddingFIFOQueue for more info. If allow_smaller_final_batch is True, a smaller batch value than batch_size is returned when the queue is closed and there are not enough elements to fill the batch, otherwise the pending elements are discarded. In addition, all output tensors' static shapes, as accessed via the shape property will have a first Dimension value of None, and operations that depend on fixed batch_size would fail. Args tensors The list or dictionary of tensors to enqueue. batch_size The new batch size pulled from the queue. num_threads The number of threads enqueuing tensors. The batching will be nondeterministic if num_threads > 1. capacity An integer. The maximum number of elements in the queue. enqueue_many Whether each tensor in tensors is a single example. shapes (Optional) The shapes for each example. Defaults to the inferred shapes for tensors. dynamic_pad Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes. allow_smaller_final_batch (Optional) Boolean. If True, allow the final batch to be smaller if there are insufficient items left in the queue. shared_name (Optional). 
If set, this queue will be shared under the given name across multiple sessions. name (Optional) A name for the operations. Returns A list or dictionary of tensors with the same types as tensors (except if the input is a list of one element, then it returns a tensor, not a list). Raises ValueError If the shapes are not specified, and cannot be inferred from the elements of tensors. Eager Compatibility Input pipelines based on Queues are not supported when eager execution is enabled. Please use the tf.data API to ingest data under eager execution.
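As the deprecation notice suggests, a tf.data pipeline replaces this function; a minimal sketch with an in-memory tensor standing in for a real input source:

import tensorflow as tf

features = tf.random.uniform([100, 28, 28])          # stand-in input source
dataset = tf.data.Dataset.from_tensor_slices(features)
dataset = dataset.batch(32)                          # replaces train.batch

iterator = tf.compat.v1.data.make_one_shot_iterator(dataset)
next_batch = iterator.get_next()  # shape [32, 28, 28]; the final batch is smaller

with tf.compat.v1.Session() as sess:
    print(sess.run(next_batch).shape)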
tensorflow.compat.v1.train.batch
tf.compat.v1.train.batch_join Runs a list of tensors to fill a queue to create batches of examples. (deprecated) tf.compat.v1.train.batch_join( tensors_list, batch_size, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, shared_name=None, name=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.interleave(...).batch(batch_size) (or padded_batch(...) if dynamic_pad=True). The tensors_list argument is a list of tuples of tensors, or a list of dictionaries of tensors. Each element in the list is treated similarly to the tensors argument of tf.compat.v1.train.batch(). Warning: This function is nondeterministic, since it starts a separate thread for each tensor. Enqueues a different list of tensors in different threads. Implemented using a queue -- a QueueRunner for the queue is added to the current Graph's QUEUE_RUNNER collection. len(tensors_list) threads will be started, with thread i enqueuing the tensors from tensors_list[i]. tensors_list[i1][j] must match tensors_list[i2][j] in type and shape, except in the first dimension if enqueue_many is true. If enqueue_many is False, each tensors_list[i] is assumed to represent a single example. An input tensor x will be output as a tensor with shape [batch_size] + x.shape. If enqueue_many is True, tensors_list[i] is assumed to represent a batch of examples, where the first dimension is indexed by example, and all members of tensors_list[i] should have the same size in the first dimension. The slices of any input tensor x are treated as examples, and the output tensors will have shape [batch_size] + x.shape[1:]. The capacity argument controls the how long the prefetching is allowed to grow the queues. The returned operation is a dequeue operation and will throw tf.errors.OutOfRangeError if the input queue is exhausted. If this operation is feeding another input queue, its queue runner will catch this exception, however, if this operation is used in your main thread you are responsible for catching this yourself. Note: If dynamic_pad is False, you must ensure that either (i) the shapes argument is passed, or (ii) all of the tensors in tensors_list must have fully-defined shapes. ValueError will be raised if neither of these conditions holds. If dynamic_pad is True, it is sufficient that the rank of the tensors is known, but individual dimensions may have value None. In this case, for each enqueue the dimensions with value None may have a variable length; upon dequeue, the output tensors will be padded on the right to the maximum shape of the tensors in the current minibatch. For numbers, this padding takes value 0. For strings, this padding is the empty string. See PaddingFIFOQueue for more info. If allow_smaller_final_batch is True, a smaller batch value than batch_size is returned when the queue is closed and there are not enough elements to fill the batch, otherwise the pending elements are discarded. In addition, all output tensors' static shapes, as accessed via the shape property will have a first Dimension value of None, and operations that depend on fixed batch_size would fail. Args tensors_list A list of tuples or dictionaries of tensors to enqueue. batch_size An integer. The new batch size pulled from the queue. capacity An integer. The maximum number of elements in the queue. enqueue_many Whether each tensor in tensor_list_list is a single example. 
shapes (Optional) The shapes for each example. Defaults to the inferred shapes for tensor_list_list[i]. dynamic_pad Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes. allow_smaller_final_batch (Optional) Boolean. If True, allow the final batch to be smaller if there are insufficient items left in the queue. shared_name (Optional) If set, this queue will be shared under the given name across multiple sessions. name (Optional) A name for the operations. Returns A list or dictionary of tensors with the same number and types as tensors_list[i]. Raises ValueError If the shapes are not specified, and cannot be inferred from the elements of tensor_list_list. Eager Compatibility Input pipelines based on Queues are not supported when eager execution is enabled. Please use the tf.data API to ingest data under eager execution.
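A sketch of the tf.data replacement named in the deprecation notice, interleaving reads before batching; the TFRecord file names are placeholders, not real data:

import tensorflow as tf

filenames = tf.data.Dataset.from_tensor_slices(
    ["/tmp/data-00000-of-00002.tfrecord", "/tmp/data-00001-of-00002.tfrecord"])
# interleave(...).batch(...) replaces the multi-threaded enqueuing that
# batch_join performed with one thread per element of tensors_list.
dataset = filenames.interleave(
    tf.data.TFRecordDataset,
    cycle_length=2,
    num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.batch(32)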
tensorflow.compat.v1.train.batch_join
tf.compat.v1.train.Checkpoint Groups trackable objects, saving and restoring them. tf.compat.v1.train.Checkpoint( **kwargs ) Checkpoint's constructor accepts keyword arguments whose values are types that contain trackable state, such as tf.compat.v1.train.Optimizer implementations, tf.Variable, tf.keras.Layer implementations, or tf.keras.Model implementations. It saves these values with a checkpoint, and maintains a save_counter for numbering checkpoints. Example usage when graph building: import tensorflow as tf import os checkpoint_directory = "/tmp/training_checkpoints" checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt") checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model) status = checkpoint.restore(tf.train.latest_checkpoint(checkpoint_directory)) train_op = optimizer.minimize( ... ) status.assert_consumed() # Optional sanity checks. with tf.compat.v1.Session() as session: # Use the Session to restore variables, or initialize them if # tf.train.latest_checkpoint returned None. status.initialize_or_restore(session) for _ in range(num_training_steps): session.run(train_op) checkpoint.save(file_prefix=checkpoint_prefix) Example usage with eager execution enabled: import tensorflow as tf import os tf.compat.v1.enable_eager_execution() checkpoint_directory = "/tmp/training_checkpoints" checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt") checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model) status = checkpoint.restore(tf.train.latest_checkpoint(checkpoint_directory)) for _ in range(num_training_steps): optimizer.minimize( ... ) # Variables will be restored on creation. status.assert_consumed() # Optional sanity checks. checkpoint.save(file_prefix=checkpoint_prefix) Checkpoint.save and Checkpoint.restore write and read object-based checkpoints, in contrast to tf.compat.v1.train.Saver which writes and reads variable.name based checkpoints. Object-based checkpointing saves a graph of dependencies between Python objects (Layers, Optimizers, Variables, etc.) with named edges, and this graph is used to match variables when restoring a checkpoint. It can be more robust to changes in the Python program, and helps to support restore-on-create for variables when executing eagerly. Prefer tf.train.Checkpoint over tf.compat.v1.train.Saver for new code. Checkpoint objects have dependencies on the objects passed as keyword arguments to their constructors, and each dependency is given a name that is identical to the name of the keyword argument for which it was created. TensorFlow classes like Layers and Optimizers will automatically add dependencies on their variables (e.g. "kernel" and "bias" for tf.keras.layers.Dense). Inheriting from tf.keras.Model makes managing dependencies easy in user-defined classes, since Model hooks into attribute assignment. For example: class Regress(tf.keras.Model): def __init__(self): super(Regress, self).__init__() self.input_transform = tf.keras.layers.Dense(10) # ... def call(self, inputs): x = self.input_transform(inputs) # ... This Model has a dependency named "input_transform" on its Dense layer, which in turn depends on its variables. As a result, saving an instance of Regress using tf.train.Checkpoint will also save all the variables created by the Dense layer. When variables are assigned to multiple workers, each worker writes its own section of the checkpoint. These sections are then merged/re-indexed to behave as a single checkpoint. 
This avoids copying all variables to one worker, but does require that all workers see a common filesystem. While tf.keras.Model.save_weights and tf.train.Checkpoint.save save in the same format, note that the root of the resulting checkpoint is the object the save method is attached to. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model's variables. See the guide to training checkpoints for details. Prefer tf.train.Checkpoint over tf.keras.Model.save_weights for training checkpoints. Args **kwargs Keyword arguments are set as attributes of this object, and are saved with the checkpoint. Values must be trackable objects. Raises ValueError If objects in kwargs are not trackable. Attributes save_counter Incremented when save() is called. Used to number checkpoints. Methods restore View source restore( save_path ) Restore a training checkpoint. Restores this Checkpoint and any objects it depends on. When executing eagerly, either assigns values immediately if variables to restore have been created already, or defers restoration until the variables are created. Dependencies added after this call will be matched if they have a corresponding object in the checkpoint (the restore request will queue in any trackable object waiting for the expected dependency to be added). When graph building, restoration ops are added to the graph but not run immediately. To ensure that loading is complete and no more assignments will take place, use the assert_consumed() method of the status object returned by restore: checkpoint = tf.train.Checkpoint( ... ) checkpoint.restore(path).assert_consumed() An exception will be raised if any Python objects in the dependency graph were not found in the checkpoint, or if any checkpointed values do not have a matching Python object. When graph building, assert_consumed() indicates that all of the restore ops that will be created for this checkpoint have been created. They can be run via the run_restore_ops() method of the status object: checkpoint.restore(path).assert_consumed().run_restore_ops() If the checkpoint has not been consumed completely, then the list of restore ops will grow as more objects are added to the dependency graph. Name-based tf.compat.v1.train.Saver checkpoints can be loaded using this method. Names are used to match variables. No restore ops are created/run until run_restore_ops() or initialize_or_restore() are called on the returned status object when graph building, but there is restore-on-creation when executing eagerly. Re-encode name-based checkpoints using tf.train.Checkpoint.save as soon as possible. Args save_path The path to the checkpoint, as returned by save or tf.train.latest_checkpoint. If None (as when there is no latest checkpoint for tf.train.latest_checkpoint to return), returns an object which may run initializers for objects in the dependency graph. If the checkpoint was written by the name-based tf.compat.v1.train.Saver, names are used to match variables. Returns A load status object, which can be used to make assertions about the status of a checkpoint restoration and run initialization/restore ops. The returned status object has the following methods: assert_consumed(): Raises an exception if any variables are unmatched: either checkpointed values which don't have a matching Python object or Python objects in the dependency graph with no values in the checkpoint. 
This method returns the status object, and so may be chained with initialize_or_restore or run_restore_ops. assert_existing_objects_matched(): Raises an exception if any existing Python objects in the dependency graph are unmatched. Unlike assert_consumed, this assertion will pass if values in the checkpoint have no corresponding Python objects. For example a tf.keras.Layer object which has not yet been built, and so has not created any variables, will pass this assertion but fail assert_consumed. Useful when loading part of a larger checkpoint into a new Python program, e.g. a training checkpoint with a tf.compat.v1.train.Optimizer was saved but only the state required for inference is being loaded. This method returns the status object, and so may be chained with initialize_or_restore or run_restore_ops. assert_nontrivial_match(): Asserts that something aside from the root object was matched. This is a very weak assertion, but is useful for sanity checking in library code where objects may exist in the checkpoint which haven't been created in Python and some Python objects may not have a checkpointed value. expect_partial(): Silence warnings about incomplete checkpoint restores. Warnings are otherwise printed for unused parts of the checkpoint file or object when the Checkpoint object is deleted (often at program shutdown). initialize_or_restore(session=None): When graph building, runs variable initializers if save_path is None, but otherwise runs restore operations. If no session is explicitly specified, the default session is used. No effect when executing eagerly (variables are initialized or restored eagerly). run_restore_ops(session=None): When graph building, runs restore operations. If no session is explicitly specified, the default session is used. No effect when executing eagerly (restore operations are run eagerly). May only be called when save_path is not None. save View source save( file_prefix, session=None ) Saves a training checkpoint and provides basic checkpoint management. The saved checkpoint includes variables created by this object and any trackable objects it depends on at the time Checkpoint.save() is called. save is a basic convenience wrapper around the write method, sequentially numbering checkpoints using save_counter and updating the metadata used by tf.train.latest_checkpoint. More advanced checkpoint management, for example garbage collection and custom numbering, may be provided by other utilities which also wrap write (tf.train.CheckpointManager for example). Args file_prefix A prefix to use for the checkpoint filenames (/path/to/directory/and_a_prefix). Names are generated based on this prefix and Checkpoint.save_counter. session The session to evaluate variables in. Ignored when executing eagerly. If not provided when graph building, the default session is used. Returns The full path to the checkpoint. write View source write( file_prefix, session=None ) Writes a training checkpoint. The checkpoint includes variables created by this object and any trackable objects it depends on at the time Checkpoint.write() is called. write does not number checkpoints, increment save_counter, or update the metadata used by tf.train.latest_checkpoint. It is primarily intended for use by higher level checkpoint management utilities. save provides a very basic implementation of these features. Args file_prefix A prefix to use for the checkpoint filenames (/path/to/directory/and_a_prefix). session The session to evaluate variables in. Ignored when executing eagerly. 
If not provided when graph building, the default session is used. Returns The full path to the checkpoint (i.e. file_prefix).
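A minimal graph-mode sketch of the save/restore cycle described above, assuming graph building with an explicit session; the variable names and the /tmp file prefix are illustrative only, not part of the API.

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Attach the objects to track as attributes of the Checkpoint.
step = tf.Variable(0, dtype=tf.int64, name='step')
weights = tf.Variable(tf.zeros([4, 4]), name='weights')
checkpoint = tf.train.Checkpoint(step=step, weights=weights)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # save() numbers checkpoints via save_counter and updates the metadata
    # read by tf.train.latest_checkpoint.
    save_path = checkpoint.save('/tmp/tf1_ckpt_example', session=sess)
    # When graph building, restore ops must be run explicitly.
    checkpoint.restore(save_path).assert_consumed().run_restore_ops(session=sess)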
tensorflow.compat.v1.train.checkpoint
tf.compat.v1.train.checkpoint_exists Checks whether a V1 or V2 checkpoint exists with the specified prefix. (deprecated) tf.compat.v1.train.checkpoint_exists( checkpoint_prefix ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use standard file APIs to check for files with this prefix. This is the recommended way to check if a checkpoint exists, since it takes into account the naming difference between V1 and V2 formats. Args checkpoint_prefix the prefix of a V1 or V2 checkpoint, with V2 taking priority. Typically the result of Saver.save() or that of tf.train.latest_checkpoint(), regardless of sharded/non-sharded or V1/V2. Returns A bool, true if a checkpoint referred to by checkpoint_prefix exists.
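A short illustrative check, assuming a prefix previously produced by Saver.save(); the /tmp/train_dir path is hypothetical.

import tensorflow.compat.v1 as tf

ckpt_prefix = tf.train.latest_checkpoint('/tmp/train_dir')  # None if nothing was saved there
if ckpt_prefix is not None and tf.train.checkpoint_exists(ckpt_prefix):
    print('Found checkpoint files for prefix:', ckpt_prefix)
else:
    print('No usable checkpoint found in /tmp/train_dir')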
tensorflow.compat.v1.train.checkpoint_exists
tf.compat.v1.train.ChiefSessionCreator Creates a tf.compat.v1.Session for a chief. Inherits From: SessionCreator tf.compat.v1.train.ChiefSessionCreator( scaffold=None, master='', config=None, checkpoint_dir=None, checkpoint_filename_with_path=None ) Args scaffold A Scaffold used for gathering or building supportive ops. If not specified a default one is created. It's used to finalize the graph. master String representation of the TensorFlow master to use. config ConfigProto proto used to configure the session. checkpoint_dir A string. Optional path to a directory where to restore variables. checkpoint_filename_with_path Full file name path to the checkpoint file. Methods create_session View source create_session()
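A minimal sketch of pairing ChiefSessionCreator with tf.compat.v1.train.MonitoredSession, assuming graph mode; the loss, step count, and checkpoint directory are illustrative (the directory is only consulted if a checkpoint is present).

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

w = tf.Variable(1.0)
loss = tf.square(w - 3.0)
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

# The creator finalizes the graph via its Scaffold and initializes or
# restores variables before the session is handed back.
session_creator = tf.train.ChiefSessionCreator(checkpoint_dir='/tmp/train_dir')
with tf.train.MonitoredSession(session_creator=session_creator) as sess:
    for _ in range(5):
        sess.run(train_op)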
tensorflow.compat.v1.train.chiefsessioncreator
tf.compat.v1.train.cosine_decay Applies cosine decay to the learning rate. tf.compat.v1.train.cosine_decay( learning_rate, global_step, decay_steps, alpha=0.0, name=None ) When training a model, it is often recommended to lower the learning rate as the training progresses. This function applies a cosine decay function to a provided initial learning rate. It requires a global_step value to compute the decayed learning rate. You can just pass a TensorFlow variable that you increment at each training step. The function returns the decayed learning rate. It is computed as: global_step = min(global_step, decay_steps) cosine_decay = 0.5 * (1 + cos(pi * global_step / decay_steps)) decayed = (1 - alpha) * cosine_decay + alpha decayed_learning_rate = learning_rate * decayed Example usage: decay_steps = 1000 lr_decayed = cosine_decay(learning_rate, global_step, decay_steps) Args learning_rate A scalar float32 or float64 Tensor or a Python number. The initial learning rate. global_step A scalar int32 or int64 Tensor or a Python number. Global step to use for the decay computation. decay_steps A scalar int32 or int64 Tensor or a Python number. Number of steps to decay over. alpha A scalar float32 or float64 Tensor or a Python number. Minimum learning rate value as a fraction of learning_rate. name String. Optional name of the operation. Defaults to 'CosineDecay'. Returns A scalar Tensor of the same type as learning_rate. The decayed learning rate. Raises ValueError if global_step is not supplied. References: Stochastic Gradient Descent with Warm Restarts: Loshchilov et al., 2017 (pdf) Eager Compatibility When eager execution is enabled, this function returns a function which in turn returns the decayed learning rate Tensor. This can be useful for changing the learning rate value across different invocations of optimizer functions.
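A slightly fuller sketch than the usage snippet above, assuming graph mode; it wires the decayed rate into an optimizer and lets minimize() advance global_step, which in turn moves the cosine schedule forward. The loss and constants are illustrative.

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

global_step = tf.train.get_or_create_global_step()
learning_rate = tf.train.cosine_decay(
    learning_rate=0.1, global_step=global_step,
    decay_steps=1000, alpha=0.01)

w = tf.Variable(5.0)
loss = tf.square(w)
# Passing global_step to minimize() increments it on every training step.
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(
    loss, global_step=global_step)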
tensorflow.compat.v1.train.cosine_decay
tf.compat.v1.train.cosine_decay_restarts Applies cosine decay with restarts to the learning rate. tf.compat.v1.train.cosine_decay_restarts( learning_rate, global_step, first_decay_steps, t_mul=2.0, m_mul=1.0, alpha=0.0, name=None ) When training a model, it is often recommended to lower the learning rate as the training progresses. This function applies a cosine decay function with restarts to a provided initial learning rate. It requires a global_step value to compute the decayed learning rate. You can just pass a TensorFlow variable that you increment at each training step. The function returns the decayed learning rate while taking into account possible warm restarts. The learning rate multiplier first decays from 1 to alpha for first_decay_steps steps. Then, a warm restart is performed. Each new warm restart runs for t_mul times more steps and with m_mul times smaller initial learning rate. Example usage: first_decay_steps = 1000 lr_decayed = cosine_decay_restarts(learning_rate, global_step, first_decay_steps) Args learning_rate A scalar float32 or float64 Tensor or a Python number. The initial learning rate. global_step A scalar int32 or int64 Tensor or a Python number. Global step to use for the decay computation. first_decay_steps A scalar int32 or int64 Tensor or a Python number. Number of steps to decay over. t_mul A scalar float32 or float64 Tensor or a Python number. Used to derive the number of iterations in the i-th period m_mul A scalar float32 or float64 Tensor or a Python number. Used to derive the initial learning rate of the i-th period: alpha A scalar float32 or float64 Tensor or a Python number. Minimum learning rate value as a fraction of the learning_rate. name String. Optional name of the operation. Defaults to 'SGDRDecay'. Returns A scalar Tensor of the same type as learning_rate. The decayed learning rate. Raises ValueError if global_step is not supplied. References: Stochastic Gradient Descent with Warm Restarts: Loshchilov et al., 2017 (pdf) Eager Compatibility When eager execution is enabled, this function returns a function which in turn returns the decayed learning rate Tensor. This can be useful for changing the learning rate value across different invocations of optimizer functions.
tensorflow.compat.v1.train.cosine_decay_restarts
tf.compat.v1.train.create_global_step Create global step tensor in graph. tf.compat.v1.train.create_global_step( graph=None ) Args graph The graph in which to create the global step tensor. If missing, use default graph. Returns Global step tensor. Raises ValueError if global step tensor is already defined.
tensorflow.compat.v1.train.create_global_step
tf.compat.v1.train.do_quantize_training_on_graphdef A general quantization scheme is being developed in tf.contrib.quantize. (deprecated) tf.compat.v1.train.do_quantize_training_on_graphdef( input_graph, num_bits ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: The GraphDef quantized training rewriter is deprecated in the long term. Consider using tf.contrib.quantize instead, though since it is in the tf.contrib namespace, it is not subject to backward compatibility guarantees. Args input_graph A GraphDef. num_bits The number of bits to use for quantized training. Returns The graph with quantize training done.
tensorflow.compat.v1.train.do_quantize_training_on_graphdef
Module: tf.compat.v1.train.experimental Public API for tf.train.experimental namespace. Classes class DynamicLossScale: Loss scale that dynamically adjusts itself. class FixedLossScale: Loss scale with a fixed value. class LossScale: Base class for all TF1 loss scales. class MixedPrecisionLossScaleOptimizer: An optimizer that applies loss scaling. class PythonState: A mixin for putting Python state in an object-based checkpoint. Functions disable_mixed_precision_graph_rewrite(...): Disables the mixed precision graph rewrite. enable_mixed_precision_graph_rewrite(...): Enable mixed precision via a graph rewrite.
tensorflow.compat.v1.train.experimental
tf.compat.v1.train.exponential_decay Applies exponential decay to the learning rate. tf.compat.v1.train.exponential_decay( learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None ) When training a model, it is often recommended to lower the learning rate as the training progresses. This function applies an exponential decay function to a provided initial learning rate. It requires a global_step value to compute the decayed learning rate. You can just pass a TensorFlow variable that you increment at each training step. The function returns the decayed learning rate. It is computed as: decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps) If the argument staircase is True, then global_step / decay_steps is an integer division and the decayed learning rate follows a staircase function. Example: decay every 100000 steps with a base of 0.96: ... global_step = tf.Variable(0, trainable=False) starter_learning_rate = 0.1 learning_rate = tf.compat.v1.train.exponential_decay(starter_learning_rate, global_step, 100000, 0.96, staircase=True) # Passing global_step to minimize() will increment it at each step. learning_step = ( tf.compat.v1.train.GradientDescentOptimizer(learning_rate) .minimize(...my loss..., global_step=global_step) ) Args learning_rate A scalar float32 or float64 Tensor or a Python number. The initial learning rate. global_step A scalar int32 or int64 Tensor or a Python number. Global step to use for the decay computation. Must not be negative. decay_steps A scalar int32 or int64 Tensor or a Python number. Must be positive. See the decay computation above. decay_rate A scalar float32 or float64 Tensor or a Python number. The decay rate. staircase Boolean. If True decay the learning rate at discrete intervals name String. Optional name of the operation. Defaults to 'ExponentialDecay'. Returns A scalar Tensor of the same type as learning_rate. The decayed learning rate. Raises ValueError if global_step is not supplied. Eager Compatibility When eager execution is enabled, this function returns a function which in turn returns the decayed learning rate Tensor. This can be useful for changing the learning rate value across different invocations of optimizer functions.
tensorflow.compat.v1.train.exponential_decay
tf.compat.v1.train.export_meta_graph Returns MetaGraphDef proto. tf.compat.v1.train.export_meta_graph( filename=None, meta_info_def=None, graph_def=None, saver_def=None, collection_list=None, as_text=False, graph=None, export_scope=None, clear_devices=False, clear_extraneous_savers=False, strip_default_attrs=False, save_debug_info=False, **kwargs ) Optionally writes it to filename. This function exports the graph, saver, and collection objects into a MetaGraphDef protocol buffer with the intention of it being imported at a later time or location to restart training, run inference, or be a subgraph. Args filename Optional filename including the path for writing the generated MetaGraphDef protocol buffer. meta_info_def MetaInfoDef protocol buffer. graph_def GraphDef protocol buffer. saver_def SaverDef protocol buffer. collection_list List of string keys to collect. as_text If True, writes the MetaGraphDef as an ASCII proto. graph The Graph to export. If None, use the default graph. export_scope Optional string. Name scope under which to extract the subgraph. The scope name will be stripped from the node definitions for easy import later into new name scopes. If None, the whole graph is exported. graph_def and export_scope cannot both be specified. clear_devices Whether or not to clear the device field for an Operation or Tensor during export. clear_extraneous_savers Remove any Saver-related information from the graph (both Save/Restore ops and SaverDefs) that are not associated with the provided SaverDef. strip_default_attrs Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes. save_debug_info If True, save the GraphDebugInfo to a separate file, which is written to the same directory as filename, with _debug added before the file extension. **kwargs Optional keyed arguments. Returns A MetaGraphDef proto. Raises ValueError When the GraphDef is larger than 2GB. RuntimeError If called with eager execution enabled. Eager Compatibility Exporting/importing meta graphs is not supported unless both graph_def and graph are provided. No graph exists when eager execution is enabled.
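A small graph-mode sketch of exporting and re-importing a MetaGraphDef; the tensor names and the /tmp/example.meta filename are illustrative.

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.placeholder(tf.float32, shape=[None], name='x')
y = tf.identity(2.0 * x, name='y')

# Export the current default graph to a MetaGraphDef file.
tf.train.export_meta_graph(filename='/tmp/example.meta')

# In a fresh graph, the same structure can be recreated with import_meta_graph.
with tf.Graph().as_default():
    tf.train.import_meta_graph('/tmp/example.meta')
    with tf.Session() as sess:
        print(sess.run('y:0', feed_dict={'x:0': [1.0, 2.0]}))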
tensorflow.compat.v1.train.export_meta_graph
tf.compat.v1.train.FtrlOptimizer Optimizer that implements the FTRL algorithm. Inherits From: Optimizer tf.compat.v1.train.FtrlOptimizer( learning_rate, learning_rate_power=-0.5, initial_accumulator_value=0.1, l1_regularization_strength=0.0, l2_regularization_strength=0.0, use_locking=False, name='Ftrl', accum_name=None, linear_name=None, l2_shrinkage_regularization_strength=0.0, beta=None ) This version has support for both online L2 (McMahan et al., 2013) and shrinkage-type L2, which is the addition of an L2 penalty to the loss function. References: Ad-click prediction: McMahan et al., 2013 (pdf) Args learning_rate A float value or a constant float Tensor. learning_rate_power A float value, must be less than or equal to zero. Controls how the learning rate decreases during training. Use zero for a fixed learning rate. See section 3.1 in (McMahan et al., 2013). initial_accumulator_value The starting value for accumulators. Only zero or positive values are allowed. l1_regularization_strength A float value, must be greater than or equal to zero. l2_regularization_strength A float value, must be greater than or equal to zero. use_locking If True use locks for update operations. name Optional name prefix for the operations created when applying gradients. Defaults to "Ftrl". accum_name The suffix for the variable that keeps the gradient squared accumulator. If not present, defaults to name. linear_name The suffix for the variable that keeps the linear gradient accumulator. If not present, defaults to name + "1". l2_shrinkage_regularization_strength A float value, must be greater than or equal to zero. This differs from L2 above in that the L2 above is a stabilization penalty, whereas this L2 shrinkage is a magnitude penalty. The FTRL formulation can be written as: w_{t+1} = argmin_w(\hat{g}_{1:t} * w + L1 * ||w||_1 + L2 * ||w||_2^2), where \hat{g} = g + (2 * L2_shrinkage * w), and g is the gradient of the loss function w.r.t. the weights w. Specifically, in the absence of L1 regularization, it is equivalent to the following update rule: w_{t+1} = w_t - lr_t / (beta + 2 * L2 * lr_t) * g_t - 2 * L2_shrinkage * lr_t / (beta + 2 * L2 * lr_t) * w_t where lr_t is the learning rate at t. When input is sparse, shrinkage will only happen on the active weights. beta A float value; corresponds to the beta parameter in the paper. Raises ValueError If one of the arguments is invalid. Methods apply_gradients View source apply_gradients( grads_and_vars, global_step=None, name=None ) Apply gradients to variables. This is the second part of minimize(). It returns an Operation that applies gradients. Args grads_and_vars List of (gradient, variable) pairs as returned by compute_gradients(). global_step Optional Variable to increment by one after the variables have been updated. name Optional name for the returned operation. Defaults to the name passed to the Optimizer constructor. Returns An Operation that applies the specified gradients. If global_step was not None, that operation also increments global_step. Raises TypeError If grads_and_vars is malformed. ValueError If none of the variables have gradients. RuntimeError If you should use _distributed_apply() instead. compute_gradients View source compute_gradients( loss, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None ) Compute gradients of loss for the variables in var_list. This is the first part of minimize(). It returns a list of (gradient, variable) pairs where "gradient" is the gradient for "variable".
Note that "gradient" can be a Tensor, an IndexedSlices, or None if there is no gradient for the given variable. Args loss A Tensor containing the value to minimize or a callable taking no arguments which returns the value to minimize. When eager execution is enabled it must be a callable. var_list Optional list or tuple of tf.Variable to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. grad_loss Optional. A Tensor holding the gradient computed for loss. Returns A list of (gradient, variable) pairs. Variable is always present, but gradient can be None. Raises TypeError If var_list contains anything else than Variable objects. ValueError If some arguments are invalid. RuntimeError If called with eager execution enabled and loss is not callable. Eager Compatibility When eager execution is enabled, gate_gradients, aggregation_method, and colocate_gradients_with_ops are ignored. get_name View source get_name() get_slot View source get_slot( var, name ) Return a slot named name created for var by the Optimizer. Some Optimizer subclasses use additional variables. For example Momentum and Adagrad use variables to accumulate updates. This method gives access to these Variable objects if for some reason you need them. Use get_slot_names() to get the list of slot names created by the Optimizer. Args var A variable passed to minimize() or apply_gradients(). name A string. Returns The Variable for the slot if it was created, None otherwise. get_slot_names View source get_slot_names() Return a list of the names of slots created by the Optimizer. See get_slot(). Returns A list of strings. minimize View source minimize( loss, global_step=None, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None ) Add operations to minimize loss by updating var_list. This method simply combines calls compute_gradients() and apply_gradients(). If you want to process the gradient before applying them call compute_gradients() and apply_gradients() explicitly instead of using this function. Args loss A Tensor containing the value to minimize. global_step Optional Variable to increment by one after the variables have been updated. var_list Optional list or tuple of Variable objects to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. name Optional name for the returned operation. grad_loss Optional. A Tensor holding the gradient computed for loss. Returns An Operation that updates the variables in var_list. If global_step was not None, that operation also increments global_step. Raises ValueError If some of the variables are not Variable objects. 
Eager Compatibility When eager execution is enabled, loss should be a Python function that takes no arguments and computes the value to be minimized. Minimization (and gradient computation) is done with respect to the elements of var_list if not None, else with respect to any trainable variables created during the execution of the loss function. gate_gradients, aggregation_method, colocate_gradients_with_ops and grad_loss are ignored when eager execution is enabled. variables View source variables() A list of variables which encode the current state of Optimizer. Includes slot variables and additional global variables created by the optimizer in the current default graph. Returns A list of variables. Class Variables GATE_GRAPH 2 GATE_NONE 0 GATE_OP 1
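A compact graph-mode sketch of fitting a tiny least-squares problem with FtrlOptimizer; the data, step count, and regularization strengths are arbitrary illustrative values.

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

w = tf.Variable(tf.zeros([3]))
x = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
y = tf.constant([1.0, 2.0])
# Simple squared-error loss; FTRL adds its own L1/L2 regularization.
loss = tf.reduce_mean(tf.square(tf.reduce_sum(x * w, axis=1) - y))

optimizer = tf.train.FtrlOptimizer(
    learning_rate=0.1,
    l1_regularization_strength=0.001,
    l2_regularization_strength=0.001)
train_op = optimizer.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_op)
    print(sess.run(loss))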
tensorflow.compat.v1.train.ftrloptimizer
tf.compat.v1.train.generate_checkpoint_state_proto Generates a checkpoint state proto. tf.compat.v1.train.generate_checkpoint_state_proto( save_dir, model_checkpoint_path, all_model_checkpoint_paths=None, all_model_checkpoint_timestamps=None, last_preserved_timestamp=None ) Args save_dir Directory where the model was saved. model_checkpoint_path The checkpoint file. all_model_checkpoint_paths List of strings. Paths to all not-yet-deleted checkpoints, sorted from oldest to newest. If this is a non-empty list, the last element must be equal to model_checkpoint_path. These paths are also saved in the CheckpointState proto. all_model_checkpoint_timestamps A list of floats, indicating the number of seconds since the Epoch when each checkpoint was generated. last_preserved_timestamp A float, indicating the number of seconds since the Epoch when the last preserved checkpoint was written, e.g. due to a keep_checkpoint_every_n_hours parameter (see tf.train.CheckpointManager for an implementation). Returns CheckpointState proto with model_checkpoint_path and all_model_checkpoint_paths updated to either absolute paths or relative paths to the current save_dir. Raises ValueError If all_model_checkpoint_timestamps was provided but its length does not match all_model_checkpoint_paths.
tensorflow.compat.v1.train.generate_checkpoint_state_proto
tf.compat.v1.train.get_checkpoint_mtimes Returns the mtimes (modification timestamps) of the checkpoints. (deprecated) tf.compat.v1.train.get_checkpoint_mtimes( checkpoint_prefixes ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use standard file utilities to get mtimes. Globs for the checkpoints pointed to by checkpoint_prefixes. If the files exist, collect their mtime. Both V2 and V1 checkpoints are considered, in that priority. This is the recommended way to get the mtimes, since it takes into account the naming difference between V1 and V2 formats. Note: If not all checkpoints exist, the length of the returned mtimes list will be smaller than the length of checkpoint_prefixes list, so mapping checkpoints to corresponding mtimes will not be possible. Args checkpoint_prefixes a list of checkpoint paths, typically the results of Saver.save() or those of tf.train.latest_checkpoint(), regardless of sharded/non-sharded or V1/V2. Returns A list of mtimes (in microseconds) of the found checkpoints.
tensorflow.compat.v1.train.get_checkpoint_mtimes
tf.compat.v1.train.get_global_step Get the global step tensor. tf.compat.v1.train.get_global_step( graph=None ) The global step tensor must be an integer variable. We first try to find it in the collection GLOBAL_STEP, or by name global_step:0. Args graph The graph to find the global step in. If missing, use default graph. Returns The global step variable, or None if none was found. Raises TypeError If the global step tensor has a non-integer type, or if it is not a Variable.
tensorflow.compat.v1.train.get_global_step
tf.compat.v1.train.get_or_create_global_step Returns and creates (if necessary) the global step tensor. tf.compat.v1.train.get_or_create_global_step( graph=None ) Args graph The graph in which to create the global step tensor. If missing, uses the default graph. Returns The global step tensor.
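A brief graph-mode sketch combining the global-step helpers above; the printed name is only indicative of the default naming.

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Creates the global step the first time; later calls return the same tensor.
global_step = tf.train.get_or_create_global_step()

# get_global_step() now finds the same variable via the GLOBAL_STEP collection.
print(tf.train.get_global_step().name)  # e.g. 'global_step:0'

with tf.Session() as sess:
    sess.run(global_step.initializer)
    print(sess.run(global_step))  # 0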
tensorflow.compat.v1.train.get_or_create_global_step
tf.compat.v1.train.global_step Small helper to get the global step. tf.compat.v1.train.global_step( sess, global_step_tensor ) # Create a variable to hold the global_step. global_step_tensor = tf.Variable(10, trainable=False, name='global_step') # Create a session. sess = tf.compat.v1.Session() # Initialize the variable sess.run(global_step_tensor.initializer) # Get the variable value. print('global_step: %s' % tf.compat.v1.train.global_step(sess, global_step_tensor)) global_step: 10 Args sess A TensorFlow Session object. global_step_tensor Tensor or the name of the operation that contains the global step. Returns The global step value.
tensorflow.compat.v1.train.global_step
tf.compat.v1.train.GradientDescentOptimizer Optimizer that implements the gradient descent algorithm. Inherits From: Optimizer tf.compat.v1.train.GradientDescentOptimizer( learning_rate, use_locking=False, name='GradientDescent' ) Args learning_rate A Tensor or a floating point value. The learning rate to use. use_locking If True use locks for update operations. name Optional name prefix for the operations created when applying gradients. Defaults to "GradientDescent". Methods apply_gradients View source apply_gradients( grads_and_vars, global_step=None, name=None ) Apply gradients to variables. This is the second part of minimize(). It returns an Operation that applies gradients. Args grads_and_vars List of (gradient, variable) pairs as returned by compute_gradients(). global_step Optional Variable to increment by one after the variables have been updated. name Optional name for the returned operation. Default to the name passed to the Optimizer constructor. Returns An Operation that applies the specified gradients. If global_step was not None, that operation also increments global_step. Raises TypeError If grads_and_vars is malformed. ValueError If none of the variables have gradients. RuntimeError If you should use _distributed_apply() instead. compute_gradients View source compute_gradients( loss, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None ) Compute gradients of loss for the variables in var_list. This is the first part of minimize(). It returns a list of (gradient, variable) pairs where "gradient" is the gradient for "variable". Note that "gradient" can be a Tensor, an IndexedSlices, or None if there is no gradient for the given variable. Args loss A Tensor containing the value to minimize or a callable taking no arguments which returns the value to minimize. When eager execution is enabled it must be a callable. var_list Optional list or tuple of tf.Variable to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. grad_loss Optional. A Tensor holding the gradient computed for loss. Returns A list of (gradient, variable) pairs. Variable is always present, but gradient can be None. Raises TypeError If var_list contains anything else than Variable objects. ValueError If some arguments are invalid. RuntimeError If called with eager execution enabled and loss is not callable. Eager Compatibility When eager execution is enabled, gate_gradients, aggregation_method, and colocate_gradients_with_ops are ignored. get_name View source get_name() get_slot View source get_slot( var, name ) Return a slot named name created for var by the Optimizer. Some Optimizer subclasses use additional variables. For example Momentum and Adagrad use variables to accumulate updates. This method gives access to these Variable objects if for some reason you need them. Use get_slot_names() to get the list of slot names created by the Optimizer. Args var A variable passed to minimize() or apply_gradients(). name A string. Returns The Variable for the slot if it was created, None otherwise. 
get_slot_names View source get_slot_names() Return a list of the names of slots created by the Optimizer. See get_slot(). Returns A list of strings. minimize View source minimize( loss, global_step=None, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None ) Add operations to minimize loss by updating var_list. This method simply combines calls compute_gradients() and apply_gradients(). If you want to process the gradient before applying them call compute_gradients() and apply_gradients() explicitly instead of using this function. Args loss A Tensor containing the value to minimize. global_step Optional Variable to increment by one after the variables have been updated. var_list Optional list or tuple of Variable objects to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. name Optional name for the returned operation. grad_loss Optional. A Tensor holding the gradient computed for loss. Returns An Operation that updates the variables in var_list. If global_step was not None, that operation also increments global_step. Raises ValueError If some of the variables are not Variable objects. Eager Compatibility When eager execution is enabled, loss should be a Python function that takes no arguments and computes the value to be minimized. Minimization (and gradient computation) is done with respect to the elements of var_list if not None, else with respect to any trainable variables created during the execution of the loss function. gate_gradients, aggregation_method, colocate_gradients_with_ops and grad_loss are ignored when eager execution is enabled. variables View source variables() A list of variables which encode the current state of Optimizer. Includes slot variables and additional global variables created by the optimizer in the current default graph. Returns A list of variables. Class Variables GATE_GRAPH 2 GATE_NONE 0 GATE_OP 1
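A minimal graph-mode sketch of the compute_gradients() / apply_gradients() split that minimize() combines; the quadratic loss and step counts are illustrative.

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

w = tf.Variable(4.0)
loss = tf.square(w - 1.0)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)

# Equivalent to optimizer.minimize(loss), but with access to the gradients
# in between, e.g. for clipping or logging.
grads_and_vars = optimizer.compute_gradients(loss)
train_op = optimizer.apply_gradients(grads_and_vars)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(50):
        sess.run(train_op)
    print(sess.run(w))  # approaches 1.0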
tensorflow.compat.v1.train.gradientdescentoptimizer
tf.compat.v1.train.import_meta_graph Recreates a Graph saved in a MetaGraphDef proto. tf.compat.v1.train.import_meta_graph( meta_graph_or_file, clear_devices=False, import_scope=None, **kwargs ) This function takes a MetaGraphDef protocol buffer as input. If the argument is a file containing a MetaGraphDef protocol buffer , it constructs a protocol buffer from the file content. The function then adds all the nodes from the graph_def field to the current graph, recreates all the collections, and returns a saver constructed from the saver_def field. In combination with export_meta_graph(), this function can be used to Serialize a graph along with other Python objects such as QueueRunner, Variable into a MetaGraphDef. Restart training from a saved graph and checkpoints. Run inference from a saved graph and checkpoints. ... # Create a saver. saver = tf.compat.v1.train.Saver(...variables...) # Remember the training_op we want to run by adding it to a collection. tf.compat.v1.add_to_collection('train_op', train_op) sess = tf.compat.v1.Session() for step in xrange(1000000): sess.run(train_op) if step % 1000 == 0: # Saves checkpoint, which by default also exports a meta_graph # named 'my-model-global_step.meta'. saver.save(sess, 'my-model', global_step=step) Later we can continue training from this saved meta_graph without building the model from scratch. with tf.Session() as sess: new_saver = tf.train.import_meta_graph('my-save-dir/my-model-10000.meta') new_saver.restore(sess, 'my-save-dir/my-model-10000') # tf.get_collection() returns a list. In this example we only want # the first one. train_op = tf.get_collection('train_op')[0] for step in xrange(1000000): sess.run(train_op) Note: Restarting training from saved meta_graph only works if the device assignments have not changed. Example: Variables, placeholders, and independent operations can also be stored, as shown in the following example. # Saving contents and operations. v1 = tf.placeholder(tf.float32, name="v1") v2 = tf.placeholder(tf.float32, name="v2") v3 = tf.math.multiply(v1, v2) vx = tf.Variable(10.0, name="vx") v4 = tf.add(v3, vx, name="v4") saver = tf.train.Saver([vx]) sess = tf.Session() sess.run(tf.global_variables_initializer()) sess.run(vx.assign(tf.add(vx, vx))) result = sess.run(v4, feed_dict={v1:12.0, v2:3.3}) print(result) saver.save(sess, "./model_ex1") Later this model can be restored and contents loaded. # Restoring variables and running operations. saver = tf.train.import_meta_graph("./model_ex1.meta") sess = tf.Session() saver.restore(sess, "./model_ex1") result = sess.run("v4:0", feed_dict={"v1:0": 12.0, "v2:0": 3.3}) print(result) Args meta_graph_or_file MetaGraphDef protocol buffer or filename (including the path) containing a MetaGraphDef. clear_devices Whether or not to clear the device field for an Operation or Tensor during import. import_scope Optional string. Name scope to add. Only used when initializing from protocol buffer. **kwargs Optional keyed arguments. Returns A saver constructed from saver_def in MetaGraphDef or None. A None value is returned if no variables exist in the MetaGraphDef (i.e., there are no variables to restore). Raises RuntimeError If called with eager execution enabled. Eager Compatibility Exporting/importing meta graphs is not supported. No graph exists when eager execution is enabled.
tensorflow.compat.v1.train.import_meta_graph
tf.compat.v1.train.init_from_checkpoint Replaces tf.Variable initializers so they load from a checkpoint file. tf.compat.v1.train.init_from_checkpoint( ckpt_dir_or_file, assignment_map ) Values are not loaded immediately, but when the initializer is run (typically by running a tf.compat.v1.global_variables_initializer op). Note: This overrides default initialization ops of specified variables and redefines dtype. Assignment map supports following syntax: 'checkpoint_scope_name/': 'scope_name/' - will load all variables in current scope_name from checkpoint_scope_name with matching tensor names. 'checkpoint_scope_name/some_other_variable': 'scope_name/variable_name' - will initialize scope_name/variable_name variable from checkpoint_scope_name/some_other_variable. 'scope_variable_name': variable - will initialize given tf.Variable object with tensor 'scope_variable_name' from the checkpoint. 'scope_variable_name': list(variable) - will initialize list of partitioned variables with tensor 'scope_variable_name' from the checkpoint. '/': 'scope_name/' - will load all variables in current scope_name from checkpoint's root (e.g. no scope). Supports loading into partitioned variables, which are represented as '<variable>/part_<part #>'. Example: # Say, '/tmp/model.ckpt' has the following tensors: # -- name='old_scope_1/var1', shape=[20, 2] # -- name='old_scope_1/var2', shape=[50, 4] # -- name='old_scope_2/var3', shape=[100, 100] # Create new model's variables with tf.compat.v1.variable_scope('new_scope_1'): var1 = tf.compat.v1.get_variable('var1', shape=[20, 2], initializer=tf.compat.v1.zeros_initializer()) with tf.compat.v1.variable_scope('new_scope_2'): var2 = tf.compat.v1.get_variable('var2', shape=[50, 4], initializer=tf.compat.v1.zeros_initializer()) # Partition into 5 variables along the first axis. var3 = tf.compat.v1.get_variable(name='var3', shape=[100, 100], initializer=tf.compat.v1.zeros_initializer(), partitioner=lambda shape, dtype: [5, 1]) # Initialize all variables in `new_scope_1` from `old_scope_1`. init_from_checkpoint('/tmp/model.ckpt', {'old_scope_1/': 'new_scope_1'}) # Use names to specify which variables to initialize from checkpoint. init_from_checkpoint('/tmp/model.ckpt', {'old_scope_1/var1': 'new_scope_1/var1', 'old_scope_1/var2': 'new_scope_2/var2'}) # Or use tf.Variable objects to identify what to initialize. init_from_checkpoint('/tmp/model.ckpt', {'old_scope_1/var1': var1, 'old_scope_1/var2': var2}) # Initialize partitioned variables using variable's name init_from_checkpoint('/tmp/model.ckpt', {'old_scope_2/var3': 'new_scope_2/var3'}) # Or specify the list of tf.Variable objects. init_from_checkpoint('/tmp/model.ckpt', {'old_scope_2/var3': var3._get_variable_list()}) Args ckpt_dir_or_file Directory with checkpoints file or path to checkpoint. assignment_map Dict, where keys are names of the variables in the checkpoint and values are current variables or names of current variables (in default graph). Raises ValueError If missing variables in current graph, or if missing checkpoints or tensors in checkpoints.
tensorflow.compat.v1.train.init_from_checkpoint
tf.compat.v1.train.input_producer Output the rows of input_tensor to a queue for an input pipeline. (deprecated) tf.compat.v1.train.input_producer( input_tensor, element_shape=None, num_epochs=None, shuffle=True, seed=None, capacity=32, shared_name=None, summary_name=None, name=None, cancel_op=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensor_slices(input_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs). If shuffle=False, omit the .shuffle(...). Note: if num_epochs is not None, this function creates local counter epochs. Use local_variables_initializer() to initialize local variables. Args input_tensor A tensor with the rows to produce. Must be at least one-dimensional. Must either have a fully-defined shape, or element_shape must be defined. element_shape (Optional.) A TensorShape representing the shape of a row of input_tensor, if it cannot be inferred. num_epochs (Optional.) An integer. If specified input_producer produces each row of input_tensor num_epochs times before generating an OutOfRange error. If not specified, input_producer can cycle through the rows of input_tensor an unlimited number of times. shuffle (Optional.) A boolean. If true, the rows are randomly shuffled within each epoch. seed (Optional.) An integer. The seed to use if shuffle is true. capacity (Optional.) The capacity of the queue to be used for buffering the input. shared_name (Optional.) If set, this queue will be shared under the given name across multiple sessions. summary_name (Optional.) If set, a scalar summary for the current queue size will be generated, using this name as part of the tag. name (Optional.) A name for queue. cancel_op (Optional.) Cancel op for the queue Returns A queue with the output rows. A QueueRunner for the queue is added to the current QUEUE_RUNNER collection of the current graph. Raises ValueError If the shape of the input cannot be inferred from the arguments. RuntimeError If called with eager execution enabled. Eager Compatibility Input pipelines based on Queues are not supported when eager execution is enabled. Please use the tf.data API to ingest data under eager execution.
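Since this API is deprecated, here is a sketch of the tf.data replacement suggested in the deprecation note, assuming TF 2.x eager execution; the tensor contents are illustrative.

import tensorflow as tf

rows = tf.constant([[1, 2], [3, 4], [5, 6]])
num_epochs = 2

dataset = (tf.data.Dataset.from_tensor_slices(rows)
           .shuffle(buffer_size=rows.shape[0])  # shuffle within each epoch
           .repeat(num_epochs))
for row in dataset:
    print(row.numpy())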
tensorflow.compat.v1.train.input_producer
tf.compat.v1.train.inverse_time_decay Applies inverse time decay to the initial learning rate. tf.compat.v1.train.inverse_time_decay( learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None ) When training a model, it is often recommended to lower the learning rate as the training progresses. This function applies an inverse decay function to a provided initial learning rate. It requires a global_step value to compute the decayed learning rate. You can just pass a TensorFlow variable that you increment at each training step. The function returns the decayed learning rate. It is computed as: decayed_learning_rate = learning_rate / (1 + decay_rate * global_step / decay_steps) or, if staircase is True, as: decayed_learning_rate = learning_rate / (1 + decay_rate * floor(global_step / decay_steps)) Example: decay 1/t with a rate of 0.5: ... global_step = tf.Variable(0, trainable=False) learning_rate = 0.1 decay_steps = 1.0 decay_rate = 0.5 learning_rate = tf.compat.v1.train.inverse_time_decay(learning_rate, global_step, decay_steps, decay_rate) # Passing global_step to minimize() will increment it at each step. learning_step = ( tf.compat.v1.train.GradientDescentOptimizer(learning_rate) .minimize(...my loss..., global_step=global_step) ) Args learning_rate A scalar float32 or float64 Tensor or a Python number. The initial learning rate. global_step A Python number. Global step to use for the decay computation. Must not be negative. decay_steps How often to apply decay. decay_rate A Python number. The decay rate. staircase Whether to apply decay in a discrete staircase, as opposed to continuous, fashion. name String. Optional name of the operation. Defaults to 'InverseTimeDecay'. Returns A scalar Tensor of the same type as learning_rate. The decayed learning rate. Raises ValueError if global_step is not supplied. Eager Compatibility When eager execution is enabled, this function returns a function which in turn returns the decayed learning rate Tensor. This can be useful for changing the learning rate value across different invocations of optimizer functions.
tensorflow.compat.v1.train.inverse_time_decay
tf.compat.v1.train.limit_epochs Returns tensor num_epochs times and then raises an OutOfRange error. (deprecated) tf.compat.v1.train.limit_epochs( tensor, num_epochs=None, name=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensors(tensor).repeat(num_epochs). Note: creates local counter epochs. Use local_variables_initializer() to initialize local variables. Args tensor Any Tensor. num_epochs A positive integer (optional). If specified, limits the number of steps the output tensor may be evaluated. name A name for the operations (optional). Returns tensor or OutOfRange. Raises ValueError if num_epochs is invalid.
tensorflow.compat.v1.train.limit_epochs
tf.compat.v1.train.linear_cosine_decay Applies linear cosine decay to the learning rate. tf.compat.v1.train.linear_cosine_decay( learning_rate, global_step, decay_steps, num_periods=0.5, alpha=0.0, beta=0.001, name=None ) Note that linear cosine decay is more aggressive than cosine decay and larger initial learning rates can typically be used. When training a model, it is often recommended to lower the learning rate as the training progresses. This function applies a linear cosine decay function to a provided initial learning rate. It requires a global_step value to compute the decayed learning rate. You can just pass a TensorFlow variable that you increment at each training step. The function returns the decayed learning rate. It is computed as: global_step = min(global_step, decay_steps) linear_decay = (decay_steps - global_step) / decay_steps cosine_decay = 0.5 * ( 1 + cos(pi * 2 * num_periods * global_step / decay_steps)) decayed = (alpha + linear_decay) * cosine_decay + beta decayed_learning_rate = learning_rate * decayed Example usage: decay_steps = 1000 lr_decayed = linear_cosine_decay(learning_rate, global_step, decay_steps) Args learning_rate A scalar float32 or float64 Tensor or a Python number. The initial learning rate. global_step A scalar int32 or int64 Tensor or a Python number. Global step to use for the decay computation. decay_steps A scalar int32 or int64 Tensor or a Python number. Number of steps to decay over. num_periods Number of periods in the cosine part of the decay. See computation above. alpha See computation above. beta See computation above. name String. Optional name of the operation. Defaults to 'LinearCosineDecay'. Returns A scalar Tensor of the same type as learning_rate. The decayed learning rate. Raises ValueError if global_step is not supplied. References: Neural Optimizer Search with Reinforcement Learning: Bello et al., 2017 (pdf) Stochastic Gradient Descent with Warm Restarts: Loshchilov et al., 2017 (pdf) Eager Compatibility When eager execution is enabled, this function returns a function which in turn returns the decayed learning rate Tensor. This can be useful for changing the learning rate value across different invocations of optimizer functions.
tensorflow.compat.v1.train.linear_cosine_decay
tf.compat.v1.train.LooperThread A thread that runs code repeatedly, optionally on a timer. tf.compat.v1.train.LooperThread( coord, timer_interval_secs, target=None, args=None, kwargs=None ) This thread class is intended to be used with a Coordinator. It repeatedly runs code specified either as target and args or by the run_loop() method. Before each run the thread checks if the coordinator has requested stop. In that case the looper thread terminates immediately. If the code being run raises an exception, that exception is reported to the coordinator and the thread terminates. The coordinator will then request all the other threads it coordinates to stop. You typically pass looper threads to the supervisor Join() method. Args coord A Coordinator. timer_interval_secs Time boundaries at which to call Run(), or None if it should be called back to back. target Optional callable object that will be executed in the thread. args Optional arguments to pass to target when calling it. kwargs Optional keyword arguments to pass to target when calling it. Raises ValueError If one of the arguments is invalid. Attributes daemon A boolean value indicating whether this thread is a daemon thread. This must be set before start() is called, otherwise RuntimeError is raised. Its initial value is inherited from the creating thread; the main thread is not a daemon thread and therefore all threads created in the main thread default to daemon = False. The entire Python program exits when only daemon threads are left. ident Thread identifier of this thread or None if it has not been started. This is a nonzero integer. See the get_ident() function. Thread identifiers may be recycled when a thread exits and another thread is created. The identifier is available even after the thread has exited. name A string used for identification purposes only. It has no semantics. Multiple threads may be given the same name. The initial name is set by the constructor. Methods getName getName() isAlive isAlive() Return whether the thread is alive. This method is deprecated, use is_alive() instead. isDaemon isDaemon() is_alive is_alive() Return whether the thread is alive. This method returns True just before the run() method starts until just after the run() method terminates. The module function enumerate() returns a list of all alive threads. join join( timeout=None ) Wait until the thread terminates. This blocks the calling thread until the thread whose join() method is called terminates -- either normally or through an unhandled exception or until the optional timeout occurs. When the timeout argument is present and not None, it should be a floating point number specifying a timeout for the operation in seconds (or fractions thereof). As join() always returns None, you must call is_alive() after join() to decide whether a timeout happened -- if the thread is still alive, the join() call timed out. When the timeout argument is not present or None, the operation will block until the thread terminates. A thread can be join()ed many times. join() raises a RuntimeError if an attempt is made to join the current thread as that would cause a deadlock. It is also an error to join() a thread before it has been started and attempts to do so raises the same exception. loop View source @staticmethod loop( coord, timer_interval_secs, target, args=None, kwargs=None ) Start a LooperThread that calls a function periodically. If timer_interval_secs is None the thread calls target(args) repeatedly. 
Otherwise target(args) is called every timer_interval_secs seconds. The thread terminates when a stop of the coordinator is requested. Args coord A Coordinator. timer_interval_secs Number. Time boundaries at which to call target. target A callable object. args Optional arguments to pass to target when calling it. kwargs Optional keyword arguments to pass to target when calling it. Returns The started thread. run View source run() Method representing the thread's activity. You may override this method in a subclass. The standard run() method invokes the callable object passed to the object's constructor as the target argument, if any, with sequential and keyword arguments taken from the args and kwargs arguments, respectively. run_loop View source run_loop() Called at 'timer_interval_secs' boundaries. setDaemon setDaemon( daemonic ) setName setName( name ) start start() Start the thread's activity. It must be called at most once per thread object. It arranges for the object's run() method to be invoked in a separate thread of control. This method will raise a RuntimeError if called more than once on the same thread object. start_loop View source start_loop() Called when the thread starts. stop_loop View source stop_loop() Called when the thread stops.
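A small sketch of driving a LooperThread from a Coordinator, as described above; the loop body, interval, and sleep duration are illustrative.

import time
import tensorflow.compat.v1 as tf

coord = tf.train.Coordinator()

def heartbeat():
    # Called every timer_interval_secs until the coordinator requests a stop.
    print('still running')

thread = tf.train.LooperThread.loop(coord, timer_interval_secs=0.5,
                                    target=heartbeat)
time.sleep(2)
coord.request_stop()   # asks the looper to terminate
coord.join([thread])   # waits for it to finish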
tensorflow.compat.v1.train.looperthread
tf.compat.v1.train.maybe_batch Conditionally creates batches of tensors based on keep_input. (deprecated) tf.compat.v1.train.maybe_batch( tensors, keep_input, batch_size, num_threads=1, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, shared_name=None, name=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.filter(...).batch(batch_size) (or padded_batch(...) if dynamic_pad=True). See docstring in batch for more details. Args tensors The list or dictionary of tensors to enqueue. keep_input A bool Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates True, then tensors are all added to the queue. If it is a vector and enqueue_many is True, then each example is added to the queue only if the corresponding value in keep_input is True. This tensor essentially acts as a filtering mechanism. batch_size The new batch size pulled from the queue. num_threads The number of threads enqueuing tensors. The batching will be nondeterministic if num_threads > 1. capacity An integer. The maximum number of elements in the queue. enqueue_many Whether each tensor in tensors is a single example. shapes (Optional) The shapes for each example. Defaults to the inferred shapes for tensors. dynamic_pad Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes. allow_smaller_final_batch (Optional) Boolean. If True, allow the final batch to be smaller if there are insufficient items left in the queue. shared_name (Optional). If set, this queue will be shared under the given name across multiple sessions. name (Optional) A name for the operations. Returns A list or dictionary of tensors with the same types as tensors. Raises ValueError If the shapes are not specified, and cannot be inferred from the elements of tensors.
tensorflow.compat.v1.train.maybe_batch
tf.compat.v1.train.maybe_batch_join Runs a list of tensors to conditionally fill a queue to create batches. (deprecated) tf.compat.v1.train.maybe_batch_join( tensors_list, keep_input, batch_size, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, shared_name=None, name=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.interleave(...).filter(...).batch(batch_size) (or padded_batch(...) if dynamic_pad=True). See docstring in batch_join for more details. Args tensors_list A list of tuples or dictionaries of tensors to enqueue. keep_input A bool Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates True, then tensors are all added to the queue. If it is a vector and enqueue_many is True, then each example is added to the queue only if the corresponding value in keep_input is True. This tensor essentially acts as a filtering mechanism. batch_size An integer. The new batch size pulled from the queue. capacity An integer. The maximum number of elements in the queue. enqueue_many Whether each tensor in tensor_list_list is a single example. shapes (Optional) The shapes for each example. Defaults to the inferred shapes for tensor_list_list[i]. dynamic_pad Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes. allow_smaller_final_batch (Optional) Boolean. If True, allow the final batch to be smaller if there are insufficient items left in the queue. shared_name (Optional) If set, this queue will be shared under the given name across multiple sessions. name (Optional) A name for the operations. Returns A list or dictionary of tensors with the same number and types as tensors_list[i]. Raises ValueError If the shapes are not specified, and cannot be inferred from the elements of tensor_list_list.
tensorflow.compat.v1.train.maybe_batch_join
tf.compat.v1.train.maybe_shuffle_batch Creates batches by randomly shuffling conditionally-enqueued tensors. (deprecated) tf.compat.v1.train.maybe_shuffle_batch( tensors, batch_size, capacity, min_after_dequeue, keep_input, num_threads=1, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.filter(...).shuffle(min_after_dequeue).batch(batch_size). See docstring in shuffle_batch for more details. Args tensors The list or dictionary of tensors to enqueue. batch_size The new batch size pulled from the queue. capacity An integer. The maximum number of elements in the queue. min_after_dequeue Minimum number elements in the queue after a dequeue, used to ensure a level of mixing of elements. keep_input A bool Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates True, then tensors are all added to the queue. If it is a vector and enqueue_many is True, then each example is added to the queue only if the corresponding value in keep_input is True. This tensor essentially acts as a filtering mechanism. num_threads The number of threads enqueuing tensor_list. seed Seed for the random shuffling within the queue. enqueue_many Whether each tensor in tensor_list is a single example. shapes (Optional) The shapes for each example. Defaults to the inferred shapes for tensor_list. allow_smaller_final_batch (Optional) Boolean. If True, allow the final batch to be smaller if there are insufficient items left in the queue. shared_name (Optional) If set, this queue will be shared under the given name across multiple sessions. name (Optional) A name for the operations. Returns A list or dictionary of tensors with the types as tensors. Raises ValueError If the shapes are not specified, and cannot be inferred from the elements of tensors. Eager Compatibility Input pipelines based on Queues are not supported when eager execution is enabled. Please use the tf.data API to ingest data under eager execution.
tensorflow.compat.v1.train.maybe_shuffle_batch
tf.compat.v1.train.maybe_shuffle_batch_join Create batches by randomly shuffling conditionally-enqueued tensors. (deprecated) tf.compat.v1.train.maybe_shuffle_batch_join( tensors_list, batch_size, capacity, min_after_dequeue, keep_input, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.interleave(...).filter(...).shuffle(min_after_dequeue).batch(batch_size). See docstring in shuffle_batch_join for more details. Args tensors_list A list of tuples or dictionaries of tensors to enqueue. batch_size An integer. The new batch size pulled from the queue. capacity An integer. The maximum number of elements in the queue. min_after_dequeue Minimum number elements in the queue after a dequeue, used to ensure a level of mixing of elements. keep_input A bool Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates True, then tensors are all added to the queue. If it is a vector and enqueue_many is True, then each example is added to the queue only if the corresponding value in keep_input is True. This tensor essentially acts as a filtering mechanism. seed Seed for the random shuffling within the queue. enqueue_many Whether each tensor in tensor_list_list is a single example. shapes (Optional) The shapes for each example. Defaults to the inferred shapes for tensors_list[i]. allow_smaller_final_batch (Optional) Boolean. If True, allow the final batch to be smaller if there are insufficient items left in the queue. shared_name (optional). If set, this queue will be shared under the given name across multiple sessions. name (Optional) A name for the operations. Returns A list or dictionary of tensors with the same number and types as tensors_list[i]. Raises ValueError If the shapes are not specified, and cannot be inferred from the elements of tensors_list. Eager Compatibility Input pipelines based on Queues are not supported when eager execution is enabled. Please use the tf.data API to ingest data under eager execution.
tensorflow.compat.v1.train.maybe_shuffle_batch_join
tf.compat.v1.train.MomentumOptimizer Optimizer that implements the Momentum algorithm. Inherits From: Optimizer tf.compat.v1.train.MomentumOptimizer( learning_rate, momentum, use_locking=False, name='Momentum', use_nesterov=False ) Computes (if use_nesterov = False): accumulation = momentum * accumulation + gradient variable -= learning_rate * accumulation Note that in the dense version of this algorithm, accumulation is updated and applied regardless of a gradient's value, whereas the sparse version (when the gradient is an IndexedSlices, typically because of tf.gather or an embedding) only updates variable slices and corresponding accumulation terms when that part of the variable was used in the forward pass. Args learning_rate A Tensor or a floating point value. The learning rate. momentum A Tensor or a floating point value. The momentum. use_locking If True use locks for update operations. name Optional name prefix for the operations created when applying gradients. Defaults to "Momentum". use_nesterov If True use Nesterov Momentum. See (Sutskever et al., 2013). This implementation always computes gradients at the value of the variable(s) passed to the optimizer. Using Nesterov Momentum makes the variable(s) track the values called theta_t + mu*v_t in the paper. This implementation is an approximation of the original formula, valid for high values of momentum. It will compute the "adjusted gradient" in NAG by assuming that the new gradient will be estimated by the current average gradient plus the product of momentum and the change in the average gradient. Methods apply_gradients View source apply_gradients( grads_and_vars, global_step=None, name=None ) Apply gradients to variables. This is the second part of minimize(). It returns an Operation that applies gradients. Args grads_and_vars List of (gradient, variable) pairs as returned by compute_gradients(). global_step Optional Variable to increment by one after the variables have been updated. name Optional name for the returned operation. Default to the name passed to the Optimizer constructor. Returns An Operation that applies the specified gradients. If global_step was not None, that operation also increments global_step. Raises TypeError If grads_and_vars is malformed. ValueError If none of the variables have gradients. RuntimeError If you should use _distributed_apply() instead. compute_gradients View source compute_gradients( loss, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None ) Compute gradients of loss for the variables in var_list. This is the first part of minimize(). It returns a list of (gradient, variable) pairs where "gradient" is the gradient for "variable". Note that "gradient" can be a Tensor, an IndexedSlices, or None if there is no gradient for the given variable. Args loss A Tensor containing the value to minimize or a callable taking no arguments which returns the value to minimize. When eager execution is enabled it must be a callable. var_list Optional list or tuple of tf.Variable to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. grad_loss Optional. 
A Tensor holding the gradient computed for loss. Returns A list of (gradient, variable) pairs. Variable is always present, but gradient can be None. Raises TypeError If var_list contains anything else than Variable objects. ValueError If some arguments are invalid. RuntimeError If called with eager execution enabled and loss is not callable. Eager Compatibility When eager execution is enabled, gate_gradients, aggregation_method, and colocate_gradients_with_ops are ignored. get_name View source get_name() get_slot View source get_slot( var, name ) Return a slot named name created for var by the Optimizer. Some Optimizer subclasses use additional variables. For example Momentum and Adagrad use variables to accumulate updates. This method gives access to these Variable objects if for some reason you need them. Use get_slot_names() to get the list of slot names created by the Optimizer. Args var A variable passed to minimize() or apply_gradients(). name A string. Returns The Variable for the slot if it was created, None otherwise. get_slot_names View source get_slot_names() Return a list of the names of slots created by the Optimizer. See get_slot(). Returns A list of strings. minimize View source minimize( loss, global_step=None, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None ) Add operations to minimize loss by updating var_list. This method simply combines calls compute_gradients() and apply_gradients(). If you want to process the gradient before applying them call compute_gradients() and apply_gradients() explicitly instead of using this function. Args loss A Tensor containing the value to minimize. global_step Optional Variable to increment by one after the variables have been updated. var_list Optional list or tuple of Variable objects to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. name Optional name for the returned operation. grad_loss Optional. A Tensor holding the gradient computed for loss. Returns An Operation that updates the variables in var_list. If global_step was not None, that operation also increments global_step. Raises ValueError If some of the variables are not Variable objects. Eager Compatibility When eager execution is enabled, loss should be a Python function that takes no arguments and computes the value to be minimized. Minimization (and gradient computation) is done with respect to the elements of var_list if not None, else with respect to any trainable variables created during the execution of the loss function. gate_gradients, aggregation_method, colocate_gradients_with_ops and grad_loss are ignored when eager execution is enabled. variables View source variables() A list of variables which encode the current state of Optimizer. Includes slot variables and additional global variables created by the optimizer in the current default graph. Returns A list of variables. Class Variables GATE_GRAPH 2 GATE_NONE 0 GATE_OP 1
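A minimal TF1-style usage sketch, assuming a toy quadratic loss; the variable, learning rate, and momentum values are illustrative. It applies the update accumulation = momentum * accumulation + gradient; variable -= learning_rate * accumulation described above.

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.get_variable("x", initializer=5.0)
loss = tf.square(x - 2.0)   # toy objective with its minimum at x = 2

opt = tf.compat.v1.train.MomentumOptimizer(
    learning_rate=0.1, momentum=0.9, use_nesterov=True)
train_op = opt.minimize(loss)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    for _ in range(100):
        sess.run(train_op)
    print(sess.run(x))   # converges towards 2.0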
tensorflow.compat.v1.train.momentumoptimizer
tf.compat.v1.train.MonitoredSession Session-like object that handles initialization, recovery and hooks. tf.compat.v1.train.MonitoredSession( session_creator=None, hooks=None, stop_grace_period_secs=120 ) Example usage: saver_hook = CheckpointSaverHook(...) summary_hook = SummarySaverHook(...) with MonitoredSession(session_creator=ChiefSessionCreator(...), hooks=[saver_hook, summary_hook]) as sess: while not sess.should_stop(): sess.run(train_op) Initialization: At creation time the monitored session does following things in given order: calls hook.begin() for each given hook finalizes the graph via scaffold.finalize() create session initializes the model via initialization ops provided by Scaffold restores variables if a checkpoint exists launches queue runners calls hook.after_create_session() Run: When run() is called, the monitored session does following things: calls hook.before_run() calls TensorFlow session.run() with merged fetches and feed_dict calls hook.after_run() returns result of session.run() asked by user if AbortedError or UnavailableError occurs, it recovers or reinitializes the session before executing the run() call again Exit: At the close(), the monitored session does following things in order: calls hook.end() closes the queue runners and the session suppresses OutOfRange error which indicates that all inputs have been processed if the monitored_session is used as a context How to set tf.compat.v1.Session arguments: In most cases you can set session arguments as follows: MonitoredSession( session_creator=ChiefSessionCreator(master=..., config=...)) In distributed setting for a non-chief worker, you can use following: MonitoredSession( session_creator=WorkerSessionCreator(master=..., config=...)) See MonitoredTrainingSession for an example usage based on chief or worker. Note: This is not a tf.compat.v1.Session. For example, it cannot do following: it cannot be set as default session. it cannot be sent to saver.save. it cannot be sent to tf.train.start_queue_runners. Args session_creator A factory object to create session. Typically a ChiefSessionCreator which is the default one. hooks An iterable of `SessionRunHook' objects. Returns A MonitoredSession object. Attributes graph The graph that was launched in this session. Child Classes class StepContext Methods close View source close() run View source run( fetches, feed_dict=None, options=None, run_metadata=None ) Run ops in the monitored session. This method is completely compatible with the tf.Session.run() method. Args fetches Same as tf.Session.run(). feed_dict Same as tf.Session.run(). options Same as tf.Session.run(). run_metadata Same as tf.Session.run(). Returns Same as tf.Session.run(). run_step_fn View source run_step_fn( step_fn ) Run ops using a step function. Args step_fn A function or a method with a single argument of type StepContext. The function may use methods of the argument to perform computations with access to a raw session. The returned value of the step_fn will be returned from run_step_fn, unless a stop is requested. In that case, the next should_stop call will return True. 
Example usage: with tf.Graph().as_default(): c = tf.compat.v1.placeholder(dtypes.float32) v = tf.add(c, 4.0) w = tf.add(c, 0.5) def step_fn(step_context): a = step_context.session.run(fetches=v, feed_dict={c: 0.5}) if a <= 4.5: step_context.request_stop() return step_context.run_with_hooks(fetches=w, feed_dict={c: 0.1}) with tf.MonitoredSession() as session: while not session.should_stop(): a = session.run_step_fn(step_fn) Hooks interact with the run_with_hooks() call inside the step_fn as they do with a MonitoredSession.run call. Returns Returns the returned value of step_fn. Raises StopIteration if step_fn has called request_stop(). It may be caught by with tf.MonitoredSession() to close the session. ValueError if step_fn doesn't have a single argument called step_context. It may also optionally have self for cases when it belongs to an object. should_stop View source should_stop() __enter__ View source __enter__() __exit__ View source __exit__( exception_type, exception_value, traceback )
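A self-contained variant of the usage pattern above, assuming a toy loss and using StopAtStepHook to end the loop; the hook choice and hyperparameters are illustrative.

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

global_step = tf.compat.v1.train.get_or_create_global_step()
w = tf.compat.v1.get_variable("w", initializer=3.0)
loss = tf.square(w)
train_op = tf.compat.v1.train.GradientDescentOptimizer(0.1).minimize(
    loss, global_step=global_step)

# The hook requests a stop once global_step reaches 100, after which
# should_stop() returns True and the loop exits.
hooks = [tf.compat.v1.train.StopAtStepHook(last_step=100)]
with tf.compat.v1.train.MonitoredSession(
        session_creator=tf.compat.v1.train.ChiefSessionCreator(),
        hooks=hooks) as sess:
    while not sess.should_stop():
        sess.run(train_op)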
tensorflow.compat.v1.train.monitoredsession
tf.compat.v1.train.MonitoredSession.StepContext Control flow instrument for the step_fn from run_step_fn(). View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.train.SingularMonitoredSession.StepContext tf.compat.v1.train.MonitoredSession.StepContext( session, run_with_hooks_fn ) Users of step_fn may perform run() calls without running hooks by accessing the session. A run() call with hooks may be performed using run_with_hooks(). Computation flow can be interrupted using request_stop(). Args session An instance of tf.compat.v1.Session. run_with_hooks_fn A function for running fetches and hooks. Attributes session Methods request_stop View source request_stop() Exit the training loop by causing should_stop() to return True. Causes step_fn to exit by raising an exception. Raises StopIteration run_with_hooks View source run_with_hooks( *args, **kwargs ) Same as MonitoredSession.run. Accepts the same arguments.
tensorflow.compat.v1.train.monitoredsession.stepcontext
tf.compat.v1.train.MonitoredTrainingSession Creates a MonitoredSession for training. tf.compat.v1.train.MonitoredTrainingSession( master='', is_chief=True, checkpoint_dir=None, scaffold=None, hooks=None, chief_only_hooks=None, save_checkpoint_secs=USE_DEFAULT, save_summaries_steps=USE_DEFAULT, save_summaries_secs=USE_DEFAULT, config=None, stop_grace_period_secs=120, log_step_count_steps=100, max_wait_secs=7200, save_checkpoint_steps=USE_DEFAULT, summary_dir=None, save_graph_def=True ) For a chief, this utility sets proper session initializer/restorer. It also creates hooks related to checkpoint and summary saving. For workers, this utility sets proper session creator which waits for the chief to initialize/restore. Please check tf.compat.v1.train.MonitoredSession for more information. Args master String the TensorFlow master to use. is_chief If True, it will take care of initialization and recovery the underlying TensorFlow session. If False, it will wait on a chief to initialize or recover the TensorFlow session. checkpoint_dir A string. Optional path to a directory where to restore variables. scaffold A Scaffold used for gathering or building supportive ops. If not specified, a default one is created. It's used to finalize the graph. hooks Optional list of SessionRunHook objects. chief_only_hooks list of SessionRunHook objects. Activate these hooks if is_chief==True, ignore otherwise. save_checkpoint_secs The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If both save_checkpoint_steps and save_checkpoint_secs are set to None, then the default checkpoint saver isn't used. If both are provided, then only save_checkpoint_secs is used. Default 600. save_summaries_steps The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If both save_summaries_steps and save_summaries_secs are set to None, then the default summary saver isn't used. Default 100. save_summaries_secs The frequency, in secs, that the summaries are written to disk using a default summary saver. If both save_summaries_steps and save_summaries_secs are set to None, then the default summary saver isn't used. Default not enabled. config an instance of tf.compat.v1.ConfigProto proto used to configure the session. It's the config argument of constructor of tf.compat.v1.Session. stop_grace_period_secs Number of seconds given to threads to stop after close() has been called. log_step_count_steps The frequency, in number of global steps, that the global step/sec is logged. max_wait_secs Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up. save_checkpoint_steps The frequency, in number of global steps, that a checkpoint is saved using a default checkpoint saver. If both save_checkpoint_steps and save_checkpoint_secs are set to None, then the default checkpoint saver isn't used. If both are provided, then only save_checkpoint_secs is used. Default not enabled. summary_dir A string. Optional path to a directory where to save summaries. If None, checkpoint_dir is used instead. save_graph_def Whether to save the GraphDef and MetaGraphDef to checkpoint_dir. The GraphDef is saved after the session is created as graph.pbtxt. MetaGraphDefs are saved out for every checkpoint as model.ckpt-*.meta. Returns A MonitoredSession object.
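A minimal sketch of a chief worker using this helper; the checkpoint directory, hook, optimizer, and toy train_op below are illustrative assumptions, not part of the API above.

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

global_step = tf.compat.v1.train.get_or_create_global_step()
w = tf.compat.v1.get_variable("w", initializer=4.0)
train_op = tf.compat.v1.train.AdamOptimizer(0.05).minimize(
    tf.square(w), global_step=global_step)

# checkpoint_dir is illustrative; checkpoints and summaries are written there.
with tf.compat.v1.train.MonitoredTrainingSession(
        checkpoint_dir="/tmp/mts_example",
        hooks=[tf.compat.v1.train.StopAtStepHook(last_step=200)],
        save_checkpoint_secs=60,
        save_summaries_steps=50) as sess:
    while not sess.should_stop():
        sess.run(train_op)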
tensorflow.compat.v1.train.monitoredtrainingsession
tf.compat.v1.train.natural_exp_decay Applies natural exponential decay to the initial learning rate. tf.compat.v1.train.natural_exp_decay( learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None ) When training a model, it is often recommended to lower the learning rate as the training progresses. This function applies an exponential decay function to a provided initial learning rate. It requires an global_step value to compute the decayed learning rate. You can just pass a TensorFlow variable that you increment at each training step. The function returns the decayed learning rate. It is computed as: decayed_learning_rate = learning_rate * exp(-decay_rate * global_step / decay_step) or, if staircase is True, as: decayed_learning_rate = learning_rate * exp(-decay_rate * floor(global_step / decay_step)) Example: decay exponentially with a base of 0.96: ... global_step = tf.Variable(0, trainable=False) learning_rate = 0.1 decay_steps = 5 k = 0.5 learning_rate = tf.compat.v1.train.natural_exp_decay(learning_rate, global_step, decay_steps, k) # Passing global_step to minimize() will increment it at each step. learning_step = ( tf.compat.v1.train.GradientDescentOptimizer(learning_rate) .minimize(...my loss..., global_step=global_step) ) Args learning_rate A scalar float32 or float64 Tensor or a Python number. The initial learning rate. global_step A Python number. Global step to use for the decay computation. Must not be negative. decay_steps How often to apply decay. decay_rate A Python number. The decay rate. staircase Whether to apply decay in a discrete staircase, as opposed to continuous, fashion. name String. Optional name of the operation. Defaults to 'ExponentialTimeDecay'. Returns A scalar Tensor of the same type as learning_rate. The decayed learning rate. Raises ValueError if global_step is not supplied. Eager Compatibility When eager execution is enabled, this function returns a function which in turn returns the decayed learning rate Tensor. This can be useful for changing the learning rate value across different invocations of optimizer functions.
tensorflow.compat.v1.train.natural_exp_decay
tf.compat.v1.train.NewCheckpointReader A function that returns a CheckpointReader. tf.compat.v1.train.NewCheckpointReader( filepattern ) Args filepattern The checkpoint file name or prefix. Returns A CheckpointReader object.
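A short usage sketch; the checkpoint prefix and the variable name looked up below are assumptions for illustration.

import tensorflow as tf

# Assumes a checkpoint was previously written under this prefix.
reader = tf.compat.v1.train.NewCheckpointReader("/tmp/model.ckpt-1000")

# List the variables stored in the checkpoint, then read one of them.
for name, shape in reader.get_variable_to_shape_map().items():
    print(name, shape)

global_step = reader.get_tensor("global_step")  # assumed to exist in the file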
tensorflow.compat.v1.train.newcheckpointreader
tf.compat.v1.train.noisy_linear_cosine_decay Applies noisy linear cosine decay to the learning rate. tf.compat.v1.train.noisy_linear_cosine_decay( learning_rate, global_step, decay_steps, initial_variance=1.0, variance_decay=0.55, num_periods=0.5, alpha=0.0, beta=0.001, name=None ) Note that linear cosine decay is more aggressive than cosine decay and larger initial learning rates can typically be used. When training a model, it is often recommended to lower the learning rate as the training progresses. This function applies a noisy linear cosine decay function to a provided initial learning rate. It requires a global_step value to compute the decayed learning rate. You can just pass a TensorFlow variable that you increment at each training step. The function returns the decayed learning rate. It is computed as: global_step = min(global_step, decay_steps) linear_decay = (decay_steps - global_step) / decay_steps) cosine_decay = 0.5 * ( 1 + cos(pi * 2 * num_periods * global_step / decay_steps)) decayed = (alpha + linear_decay + eps_t) * cosine_decay + beta decayed_learning_rate = learning_rate * decayed where eps_t is 0-centered gaussian noise with variance initial_variance / (1 + global_step) ** variance_decay Example usage: decay_steps = 1000 lr_decayed = noisy_linear_cosine_decay( learning_rate, global_step, decay_steps) Args learning_rate A scalar float32 or float64 Tensor or a Python number. The initial learning rate. global_step A scalar int32 or int64 Tensor or a Python number. Global step to use for the decay computation. decay_steps A scalar int32 or int64 Tensor or a Python number. Number of steps to decay over. initial_variance initial variance for the noise. See computation above. variance_decay decay for the noise's variance. See computation above. num_periods Number of periods in the cosine part of the decay. See computation above. alpha See computation above. beta See computation above. name String. Optional name of the operation. Defaults to 'NoisyLinearCosineDecay'. Returns A scalar Tensor of the same type as learning_rate. The decayed learning rate. Raises ValueError if global_step is not supplied. References: Neural Optimizer Search with Reinforcement Learning: Bello et al., 2017 (pdf) Stochastic Gradient Descent with Warm Restarts: Loshchilov et al., 2017 (pdf) Eager Compatibility When eager execution is enabled, this function returns a function which in turn returns the decayed learning rate Tensor. This can be useful for changing the learning rate value across different invocations of optimizer functions.
tensorflow.compat.v1.train.noisy_linear_cosine_decay
tf.compat.v1.train.Optimizer Base class for optimizers. tf.compat.v1.train.Optimizer( use_locking, name ) This class defines the API to add Ops to train a model. You never use this class directly, but instead instantiate one of its subclasses such as GradientDescentOptimizer, AdagradOptimizer, or MomentumOptimizer. Usage # Create an optimizer with the desired parameters. opt = GradientDescentOptimizer(learning_rate=0.1) # Add Ops to the graph to minimize a cost by updating a list of variables. # "cost" is a Tensor, and the list of variables contains tf.Variable # objects. opt_op = opt.minimize(cost, var_list=<list of variables>) In the training program you will just have to run the returned Op. # Execute opt_op to do one step of training: opt_op.run() Processing gradients before applying them. Calling minimize() takes care of both computing the gradients and applying them to the variables. If you want to process the gradients before applying them you can instead use the optimizer in three steps: Compute the gradients with compute_gradients(). Process the gradients as you wish. Apply the processed gradients with apply_gradients(). Example: # Create an optimizer. opt = GradientDescentOptimizer(learning_rate=0.1) # Compute the gradients for a list of variables. grads_and_vars = opt.compute_gradients(loss, <list of variables>) # grads_and_vars is a list of tuples (gradient, variable). Do whatever you # need to the 'gradient' part, for example cap them, etc. capped_grads_and_vars = [(MyCapper(gv[0]), gv[1]) for gv in grads_and_vars] # Ask the optimizer to apply the capped gradients. opt.apply_gradients(capped_grads_and_vars) Gating Gradients Both minimize() and compute_gradients() accept a gate_gradients argument that controls the degree of parallelism during the application of the gradients. The possible values are: GATE_NONE, GATE_OP, and GATE_GRAPH. GATE_NONE: Compute and apply gradients in parallel. This provides the maximum parallelism in execution, at the cost of some non-reproducibility in the results. For example the two gradients of matmul depend on the input values: With GATE_NONE one of the gradients could be applied to one of the inputs before the other gradient is computed resulting in non-reproducible results. GATE_OP: For each Op, make sure all gradients are computed before they are used. This prevents race conditions for Ops that generate gradients for multiple inputs where the gradients depend on the inputs. GATE_GRAPH: Make sure all gradients for all variables are computed before any one of them is used. This provides the least parallelism but can be useful if you want to process all gradients before applying any of them. Slots Some optimizer subclasses, such as MomentumOptimizer and AdagradOptimizer allocate and manage additional variables associated with the variables to train. These are called Slots. Slots have names and you can ask the optimizer for the names of the slots that it uses. Once you have a slot name you can ask the optimizer for the variable it created to hold the slot value. This can be useful if you want to log debug a training algorithm, report stats about the slots, etc. Args use_locking Bool. If True apply use locks to prevent concurrent updates to variables. name A non-empty string. The name to use for accumulators created for the optimizer. Raises ValueError If name is malformed. Methods apply_gradients View source apply_gradients( grads_and_vars, global_step=None, name=None ) Apply gradients to variables. This is the second part of minimize(). 
It returns an Operation that applies gradients. Args grads_and_vars List of (gradient, variable) pairs as returned by compute_gradients(). global_step Optional Variable to increment by one after the variables have been updated. name Optional name for the returned operation. Default to the name passed to the Optimizer constructor. Returns An Operation that applies the specified gradients. If global_step was not None, that operation also increments global_step. Raises TypeError If grads_and_vars is malformed. ValueError If none of the variables have gradients. RuntimeError If you should use _distributed_apply() instead. compute_gradients View source compute_gradients( loss, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None ) Compute gradients of loss for the variables in var_list. This is the first part of minimize(). It returns a list of (gradient, variable) pairs where "gradient" is the gradient for "variable". Note that "gradient" can be a Tensor, an IndexedSlices, or None if there is no gradient for the given variable. Args loss A Tensor containing the value to minimize or a callable taking no arguments which returns the value to minimize. When eager execution is enabled it must be a callable. var_list Optional list or tuple of tf.Variable to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. grad_loss Optional. A Tensor holding the gradient computed for loss. Returns A list of (gradient, variable) pairs. Variable is always present, but gradient can be None. Raises TypeError If var_list contains anything else than Variable objects. ValueError If some arguments are invalid. RuntimeError If called with eager execution enabled and loss is not callable. Eager Compatibility When eager execution is enabled, gate_gradients, aggregation_method, and colocate_gradients_with_ops are ignored. get_name View source get_name() get_slot View source get_slot( var, name ) Return a slot named name created for var by the Optimizer. Some Optimizer subclasses use additional variables. For example Momentum and Adagrad use variables to accumulate updates. This method gives access to these Variable objects if for some reason you need them. Use get_slot_names() to get the list of slot names created by the Optimizer. Args var A variable passed to minimize() or apply_gradients(). name A string. Returns The Variable for the slot if it was created, None otherwise. get_slot_names View source get_slot_names() Return a list of the names of slots created by the Optimizer. See get_slot(). Returns A list of strings. minimize View source minimize( loss, global_step=None, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None ) Add operations to minimize loss by updating var_list. This method simply combines calls compute_gradients() and apply_gradients(). If you want to process the gradient before applying them call compute_gradients() and apply_gradients() explicitly instead of using this function. Args loss A Tensor containing the value to minimize. 
global_step Optional Variable to increment by one after the variables have been updated. var_list Optional list or tuple of Variable objects to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. name Optional name for the returned operation. grad_loss Optional. A Tensor holding the gradient computed for loss. Returns An Operation that updates the variables in var_list. If global_step was not None, that operation also increments global_step. Raises ValueError If some of the variables are not Variable objects. Eager Compatibility When eager execution is enabled, loss should be a Python function that takes no arguments and computes the value to be minimized. Minimization (and gradient computation) is done with respect to the elements of var_list if not None, else with respect to any trainable variables created during the execution of the loss function. gate_gradients, aggregation_method, colocate_gradients_with_ops and grad_loss are ignored when eager execution is enabled. variables View source variables() A list of variables which encode the current state of Optimizer. Includes slot variables and additional global variables created by the optimizer in the current default graph. Returns A list of variables. Class Variables GATE_GRAPH 2 GATE_NONE 0 GATE_OP 1
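As a variation on the gradient-processing pattern shown above, the sketch below clips gradients by global norm between compute_gradients() and apply_gradients(); the variable, loss, and clip norm are illustrative.

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

w = tf.compat.v1.get_variable("w", initializer=[1.0, -2.0])
loss = tf.reduce_sum(tf.square(w))

opt = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=0.1)
grads_and_vars = opt.compute_gradients(loss)

# Clip all gradients jointly by their global norm before applying them.
grads, variables = zip(*grads_and_vars)
clipped, _ = tf.clip_by_global_norm(list(grads), clip_norm=1.0)
train_op = opt.apply_gradients(list(zip(clipped, variables)))

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    sess.run(train_op)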
tensorflow.compat.v1.train.optimizer
tf.compat.v1.train.piecewise_constant Piecewise constant from boundaries and interval values. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.train.piecewise_constant_decay tf.compat.v1.train.piecewise_constant( x, boundaries, values, name=None ) Example: use a learning rate that's 1.0 for the first 100001 steps, 0.5 for the next 10000 steps, and 0.1 for any additional steps. global_step = tf.Variable(0, trainable=False) boundaries = [100000, 110000] values = [1.0, 0.5, 0.1] learning_rate = tf.compat.v1.train.piecewise_constant(global_step, boundaries, values) # Later, whenever we perform an optimization step, we increment global_step. Args x A 0-D scalar Tensor. Must be one of the following types: float32, float64, uint8, int8, int16, int32, int64. boundaries A list of Tensors or ints or floats with strictly increasing entries, and with all elements having the same type as x. values A list of Tensors or floats or ints that specifies the values for the intervals defined by boundaries. It should have one more element than boundaries, and all elements should have the same type. name A string. Optional name of the operation. Defaults to 'PiecewiseConstant'. Returns A 0-D Tensor. Its value is values[0] when x <= boundaries[0], values[1] when x > boundaries[0] and x <= boundaries[1], ..., and values[-1] when x > boundaries[-1]. Raises ValueError if types of x and boundaries do not match, or types of all values do not match or the number of elements in the lists does not match. Eager Compatibility When eager execution is enabled, this function returns a function which in turn returns the decayed learning rate Tensor. This can be useful for changing the learning rate value across different invocations of optimizer functions.
tensorflow.compat.v1.train.piecewise_constant
tf.compat.v1.train.polynomial_decay Applies a polynomial decay to the learning rate. tf.compat.v1.train.polynomial_decay( learning_rate, global_step, decay_steps, end_learning_rate=0.0001, power=1.0, cycle=False, name=None ) It is commonly observed that a monotonically decreasing learning rate, whose degree of change is carefully chosen, results in a better performing model. This function applies a polynomial decay function to a provided initial learning_rate to reach an end_learning_rate in the given decay_steps. It requires a global_step value to compute the decayed learning rate. You can just pass a TensorFlow variable that you increment at each training step. The function returns the decayed learning rate. It is computed as: global_step = min(global_step, decay_steps) decayed_learning_rate = (learning_rate - end_learning_rate) * (1 - global_step / decay_steps) ^ (power) + end_learning_rate If cycle is True then a multiple of decay_steps is used, the first one that is bigger than global_steps. decay_steps = decay_steps * ceil(global_step / decay_steps) decayed_learning_rate = (learning_rate - end_learning_rate) * (1 - global_step / decay_steps) ^ (power) + end_learning_rate Example: decay from 0.1 to 0.01 in 10000 steps using sqrt (i.e. power=0.5): ... global_step = tf.Variable(0, trainable=False) starter_learning_rate = 0.1 end_learning_rate = 0.01 decay_steps = 10000 learning_rate = tf.compat.v1.train.polynomial_decay(starter_learning_rate, global_step, decay_steps, end_learning_rate, power=0.5) # Passing global_step to minimize() will increment it at each step. learning_step = ( tf.compat.v1.train.GradientDescentOptimizer(learning_rate) .minimize(...my loss..., global_step=global_step) ) Args learning_rate A scalar float32 or float64 Tensor or a Python number. The initial learning rate. global_step A scalar int32 or int64 Tensor or a Python number. Global step to use for the decay computation. Must not be negative. decay_steps A scalar int32 or int64 Tensor or a Python number. Must be positive. See the decay computation above. end_learning_rate A scalar float32 or float64 Tensor or a Python number. The minimal end learning rate. power A scalar float32 or float64 Tensor or a Python number. The power of the polynomial. Defaults to linear, 1.0. cycle A boolean, whether or not it should cycle beyond decay_steps. name String. Optional name of the operation. Defaults to 'PolynomialDecay'. Returns A scalar Tensor of the same type as learning_rate. The decayed learning rate. Raises ValueError if global_step is not supplied. Eager Compatibility When eager execution is enabled, this function returns a function which in turn returns the decayed learning rate Tensor. This can be useful for changing the learning rate value across different invocations of optimizer functions.
tensorflow.compat.v1.train.polynomial_decay
tf.compat.v1.train.ProximalAdagradOptimizer Optimizer that implements the Proximal Adagrad algorithm. Inherits From: Optimizer tf.compat.v1.train.ProximalAdagradOptimizer( learning_rate, initial_accumulator_value=0.1, l1_regularization_strength=0.0, l2_regularization_strength=0.0, use_locking=False, name='ProximalAdagrad' ) References: Adaptive Subgradient Methods for Online Learning and Stochastic Optimization: Duchi et al., 2011 (pdf) Efficient Learning using Forward-Backward Splitting: Duchi et al., 2009 (pdf) Args learning_rate A Tensor or a floating point value. The learning rate. initial_accumulator_value A floating point value. Starting value for the accumulators, must be positive. l1_regularization_strength A float value, must be greater than or equal to zero. l2_regularization_strength A float value, must be greater than or equal to zero. use_locking If True use locks for update operations. name Optional name prefix for the operations created when applying gradients. Defaults to "Adagrad". Raises ValueError If the initial_accumulator_value is invalid. Methods apply_gradients View source apply_gradients( grads_and_vars, global_step=None, name=None ) Apply gradients to variables. This is the second part of minimize(). It returns an Operation that applies gradients. Args grads_and_vars List of (gradient, variable) pairs as returned by compute_gradients(). global_step Optional Variable to increment by one after the variables have been updated. name Optional name for the returned operation. Default to the name passed to the Optimizer constructor. Returns An Operation that applies the specified gradients. If global_step was not None, that operation also increments global_step. Raises TypeError If grads_and_vars is malformed. ValueError If none of the variables have gradients. RuntimeError If you should use _distributed_apply() instead. compute_gradients View source compute_gradients( loss, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None ) Compute gradients of loss for the variables in var_list. This is the first part of minimize(). It returns a list of (gradient, variable) pairs where "gradient" is the gradient for "variable". Note that "gradient" can be a Tensor, an IndexedSlices, or None if there is no gradient for the given variable. Args loss A Tensor containing the value to minimize or a callable taking no arguments which returns the value to minimize. When eager execution is enabled it must be a callable. var_list Optional list or tuple of tf.Variable to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. grad_loss Optional. A Tensor holding the gradient computed for loss. Returns A list of (gradient, variable) pairs. Variable is always present, but gradient can be None. Raises TypeError If var_list contains anything else than Variable objects. ValueError If some arguments are invalid. RuntimeError If called with eager execution enabled and loss is not callable. Eager Compatibility When eager execution is enabled, gate_gradients, aggregation_method, and colocate_gradients_with_ops are ignored. 
get_name View source get_name() get_slot View source get_slot( var, name ) Return a slot named name created for var by the Optimizer. Some Optimizer subclasses use additional variables. For example Momentum and Adagrad use variables to accumulate updates. This method gives access to these Variable objects if for some reason you need them. Use get_slot_names() to get the list of slot names created by the Optimizer. Args var A variable passed to minimize() or apply_gradients(). name A string. Returns The Variable for the slot if it was created, None otherwise. get_slot_names View source get_slot_names() Return a list of the names of slots created by the Optimizer. See get_slot(). Returns A list of strings. minimize View source minimize( loss, global_step=None, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None ) Add operations to minimize loss by updating var_list. This method simply combines calls compute_gradients() and apply_gradients(). If you want to process the gradient before applying them call compute_gradients() and apply_gradients() explicitly instead of using this function. Args loss A Tensor containing the value to minimize. global_step Optional Variable to increment by one after the variables have been updated. var_list Optional list or tuple of Variable objects to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. name Optional name for the returned operation. grad_loss Optional. A Tensor holding the gradient computed for loss. Returns An Operation that updates the variables in var_list. If global_step was not None, that operation also increments global_step. Raises ValueError If some of the variables are not Variable objects. Eager Compatibility When eager execution is enabled, loss should be a Python function that takes no arguments and computes the value to be minimized. Minimization (and gradient computation) is done with respect to the elements of var_list if not None, else with respect to any trainable variables created during the execution of the loss function. gate_gradients, aggregation_method, colocate_gradients_with_ops and grad_loss are ignored when eager execution is enabled. variables View source variables() A list of variables which encode the current state of Optimizer. Includes slot variables and additional global variables created by the optimizer in the current default graph. Returns A list of variables. Class Variables GATE_GRAPH 2 GATE_NONE 0 GATE_OP 1
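A minimal TF1-style sketch with illustrative regularization strengths and a toy loss; the values are assumptions, not recommendations.

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

w = tf.compat.v1.get_variable("w", initializer=[2.0, -3.0])
loss = tf.reduce_sum(tf.square(w - 1.0))

opt = tf.compat.v1.train.ProximalAdagradOptimizer(
    learning_rate=0.1,
    l1_regularization_strength=0.001,
    l2_regularization_strength=0.001)
train_op = opt.minimize(loss)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    for _ in range(100):
        sess.run(train_op)
    print(sess.run(w))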
tensorflow.compat.v1.train.proximaladagradoptimizer
tf.compat.v1.train.ProximalGradientDescentOptimizer Optimizer that implements the proximal gradient descent algorithm. Inherits From: Optimizer tf.compat.v1.train.ProximalGradientDescentOptimizer( learning_rate, l1_regularization_strength=0.0, l2_regularization_strength=0.0, use_locking=False, name='ProximalGradientDescent' ) References: Efficient Learning using Forward-Backward Splitting: Duchi et al., 2009 (pdf) Args learning_rate A Tensor or a floating point value. The learning rate to use. l1_regularization_strength A float value, must be greater than or equal to zero. l2_regularization_strength A float value, must be greater than or equal to zero. use_locking If True use locks for update operations. name Optional name prefix for the operations created when applying gradients. Defaults to "GradientDescent". Methods apply_gradients View source apply_gradients( grads_and_vars, global_step=None, name=None ) Apply gradients to variables. This is the second part of minimize(). It returns an Operation that applies gradients. Args grads_and_vars List of (gradient, variable) pairs as returned by compute_gradients(). global_step Optional Variable to increment by one after the variables have been updated. name Optional name for the returned operation. Default to the name passed to the Optimizer constructor. Returns An Operation that applies the specified gradients. If global_step was not None, that operation also increments global_step. Raises TypeError If grads_and_vars is malformed. ValueError If none of the variables have gradients. RuntimeError If you should use _distributed_apply() instead. compute_gradients View source compute_gradients( loss, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None ) Compute gradients of loss for the variables in var_list. This is the first part of minimize(). It returns a list of (gradient, variable) pairs where "gradient" is the gradient for "variable". Note that "gradient" can be a Tensor, an IndexedSlices, or None if there is no gradient for the given variable. Args loss A Tensor containing the value to minimize or a callable taking no arguments which returns the value to minimize. When eager execution is enabled it must be a callable. var_list Optional list or tuple of tf.Variable to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. grad_loss Optional. A Tensor holding the gradient computed for loss. Returns A list of (gradient, variable) pairs. Variable is always present, but gradient can be None. Raises TypeError If var_list contains anything else than Variable objects. ValueError If some arguments are invalid. RuntimeError If called with eager execution enabled and loss is not callable. Eager Compatibility When eager execution is enabled, gate_gradients, aggregation_method, and colocate_gradients_with_ops are ignored. get_name View source get_name() get_slot View source get_slot( var, name ) Return a slot named name created for var by the Optimizer. Some Optimizer subclasses use additional variables. For example Momentum and Adagrad use variables to accumulate updates. 
This method gives access to these Variable objects if for some reason you need them. Use get_slot_names() to get the list of slot names created by the Optimizer. Args var A variable passed to minimize() or apply_gradients(). name A string. Returns The Variable for the slot if it was created, None otherwise. get_slot_names View source get_slot_names() Return a list of the names of slots created by the Optimizer. See get_slot(). Returns A list of strings. minimize View source minimize( loss, global_step=None, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None ) Add operations to minimize loss by updating var_list. This method simply combines calls compute_gradients() and apply_gradients(). If you want to process the gradient before applying them call compute_gradients() and apply_gradients() explicitly instead of using this function. Args loss A Tensor containing the value to minimize. global_step Optional Variable to increment by one after the variables have been updated. var_list Optional list or tuple of Variable objects to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. name Optional name for the returned operation. grad_loss Optional. A Tensor holding the gradient computed for loss. Returns An Operation that updates the variables in var_list. If global_step was not None, that operation also increments global_step. Raises ValueError If some of the variables are not Variable objects. Eager Compatibility When eager execution is enabled, loss should be a Python function that takes no arguments and computes the value to be minimized. Minimization (and gradient computation) is done with respect to the elements of var_list if not None, else with respect to any trainable variables created during the execution of the loss function. gate_gradients, aggregation_method, colocate_gradients_with_ops and grad_loss are ignored when eager execution is enabled. variables View source variables() A list of variables which encode the current state of Optimizer. Includes slot variables and additional global variables created by the optimizer in the current default graph. Returns A list of variables. Class Variables GATE_GRAPH 2 GATE_NONE 0 GATE_OP 1
tensorflow.compat.v1.train.proximalgradientdescentoptimizer
tf.compat.v1.train.QueueRunner Holds a list of enqueue operations for a queue, each to be run in a thread. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.train.queue_runner.QueueRunner tf.compat.v1.train.QueueRunner( queue=None, enqueue_ops=None, close_op=None, cancel_op=None, queue_closed_exception_types=None, queue_runner_def=None, import_scope=None ) Queues are a convenient TensorFlow mechanism to compute tensors asynchronously using multiple threads. For example in the canonical 'Input Reader' setup one set of threads generates filenames in a queue; a second set of threads read records from the files, processes them, and enqueues tensors on a second queue; a third set of threads dequeues these input records to construct batches and runs them through training operations. There are several delicate issues when running multiple threads that way: closing the queues in sequence as the input is exhausted, correctly catching and reporting exceptions, etc. The QueueRunner, combined with the Coordinator, helps handle these issues. Args queue A Queue. enqueue_ops List of enqueue ops to run in threads later. close_op Op to close the queue. Pending enqueue ops are preserved. cancel_op Op to close the queue and cancel pending enqueue ops. queue_closed_exception_types Optional tuple of Exception types that indicate that the queue has been closed when raised during an enqueue operation. Defaults to (tf.errors.OutOfRangeError,). Another common case includes (tf.errors.OutOfRangeError, tf.errors.CancelledError), when some of the enqueue ops may dequeue from other Queues. queue_runner_def Optional QueueRunnerDef protocol buffer. If specified, recreates the QueueRunner from its contents. queue_runner_def and the other arguments are mutually exclusive. import_scope Optional string. Name scope to add. Only used when initializing from protocol buffer. Raises ValueError If both queue_runner_def and queue are both specified. ValueError If queue or enqueue_ops are not provided when not restoring from queue_runner_def. RuntimeError If eager execution is enabled. Eager Compatibility QueueRunners are not compatible with eager execution. Instead, please use tf.data to get data into your model. Attributes cancel_op close_op enqueue_ops exceptions_raised Exceptions raised but not handled by the QueueRunner threads. Exceptions raised in queue runner threads are handled in one of two ways depending on whether or not a Coordinator was passed to create_threads(): With a Coordinator, exceptions are reported to the coordinator and forgotten by the QueueRunner. Without a Coordinator, exceptions are captured by the QueueRunner and made available in this exceptions_raised property. name The string name of the underlying Queue. queue queue_closed_exception_types Methods create_threads View source create_threads( sess, coord=None, daemon=False, start=False ) Create threads to run the enqueue ops for the given session. This method requires a session in which the graph was launched. It creates a list of threads, optionally starting them. There is one thread for each op passed in enqueue_ops. The coord argument is an optional coordinator that the threads will use to terminate together and report exceptions. If a coordinator is given, this method starts an additional thread to close the queue when the coordinator requests a stop. If previously created threads for the given session are still running, no new threads will be created. Args sess A Session. 
coord Optional Coordinator object for reporting errors and checking stop conditions. daemon Boolean. If True, make the threads daemon threads. start Boolean. If True, starts the threads. If False, the caller must call the start() method of the returned threads. Returns A list of threads. from_proto View source @staticmethod from_proto( queue_runner_def, import_scope=None ) Returns a QueueRunner object created from queue_runner_def. to_proto View source to_proto( export_scope=None ) Converts this QueueRunner to a QueueRunnerDef protocol buffer. Args export_scope Optional string. Name scope to remove. Returns A QueueRunnerDef protocol buffer, or None if the QueueRunner is not in the specified name scope.
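A small end-to-end sketch of a QueueRunner driven by a Coordinator; the queue contents (random scalars) and thread count are illustrative.

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

queue = tf.compat.v1.FIFOQueue(capacity=10, dtypes=[tf.float32])
enqueue_op = queue.enqueue(tf.random.uniform([]))

# Two threads will repeatedly run the same enqueue op.
qr = tf.compat.v1.train.QueueRunner(queue, [enqueue_op] * 2)
coord = tf.train.Coordinator()

with tf.compat.v1.Session() as sess:
    threads = qr.create_threads(sess, coord=coord, start=True)
    for _ in range(5):
        print(sess.run(queue.dequeue()))
    # Requesting a stop lets the runner close the queue and unblock enqueuers.
    coord.request_stop()
    coord.join(threads)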
tensorflow.compat.v1.train.queuerunner
Module: tf.compat.v1.train.queue_runner Public API for tf.train.queue_runner namespace. Classes class QueueRunner: Holds a list of enqueue operations for a queue, each to be run in a thread. Functions add_queue_runner(...): Adds a QueueRunner to a collection in the graph. (deprecated) start_queue_runners(...): Starts all queue runners collected in the graph. (deprecated)
tensorflow.compat.v1.train.queue_runner
tf.compat.v1.train.range_input_producer Produces the integers from 0 to limit-1 in a queue. (deprecated) tf.compat.v1.train.range_input_producer( limit, num_epochs=None, shuffle=True, seed=None, capacity=32, shared_name=None, name=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.range(limit).shuffle(limit).repeat(num_epochs). If shuffle=False, omit the .shuffle(...). Note: if num_epochs is not None, this function creates local counter epochs. Use local_variables_initializer() to initialize local variables. Args limit An int32 scalar tensor. num_epochs An integer (optional). If specified, range_input_producer produces each integer num_epochs times before generating an OutOfRange error. If not specified, range_input_producer can cycle through the integers an unlimited number of times. shuffle Boolean. If true, the integers are randomly shuffled within each epoch. seed An integer (optional). Seed used if shuffle == True. capacity An integer. Sets the queue capacity. shared_name (optional). If set, this queue will be shared under the given name across multiple sessions. name A name for the operations (optional). Returns A Queue with the output integers. A QueueRunner for the Queue is added to the current Graph's QUEUE_RUNNER collection. Eager Compatibility Input pipelines based on Queues are not supported when eager execution is enabled. Please use the tf.data API to ingest data under eager execution.
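The suggested tf.data replacement, sketched with illustrative values for limit and num_epochs.

import tensorflow as tf

limit = 10
num_epochs = 2

# Equivalent of range_input_producer(limit, num_epochs, shuffle=True):
dataset = tf.data.Dataset.range(limit).shuffle(limit).repeat(num_epochs)
# With shuffle=False, omit the .shuffle(...) call.

for value in dataset:
    print(value.numpy())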
tensorflow.compat.v1.train.range_input_producer
tf.compat.v1.train.remove_checkpoint Removes a checkpoint given by checkpoint_prefix. (deprecated) tf.compat.v1.train.remove_checkpoint( checkpoint_prefix, checkpoint_format_version=tf.train.SaverDef.V2, meta_graph_suffix='meta' ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use standard file APIs to delete files with this prefix. Args checkpoint_prefix The prefix of a V1 or V2 checkpoint. Typically the result of Saver.save() or that of tf.train.latest_checkpoint(), regardless of sharded/non-sharded or V1/V2. checkpoint_format_version SaverDef.CheckpointFormatVersion, defaults to SaverDef.V2. meta_graph_suffix Suffix for MetaGraphDef file. Defaults to 'meta'.
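Following the deprecation note, a rough sketch of removing a V2 checkpoint's files with standard file APIs; the prefix is illustrative.

import tensorflow as tf

checkpoint_prefix = "/tmp/model.ckpt-1000"  # illustrative prefix

# A V2 checkpoint is stored as <prefix>.index plus <prefix>.data-* shards,
# and a <prefix>.meta file may also exist. Delete whatever matches the prefix.
for path in tf.io.gfile.glob(checkpoint_prefix + ".*"):
    tf.io.gfile.remove(path)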
tensorflow.compat.v1.train.remove_checkpoint
tf.compat.v1.train.replica_device_setter Return a device function to use when building a Graph for replicas. tf.compat.v1.train.replica_device_setter( ps_tasks=0, ps_device='/job:ps', worker_device='/job:worker', merge_devices=True, cluster=None, ps_ops=None, ps_strategy=None ) Device Functions are used in with tf.device(device_function): statement to automatically assign devices to Operation objects as they are constructed, Device constraints are added from the inner-most context first, working outwards. The merging behavior adds constraints to fields that are yet unset by a more inner context. Currently the fields are (job, task, cpu/gpu). If cluster is None, and ps_tasks is 0, the returned function is a no-op. Otherwise, the value of ps_tasks is derived from cluster. By default, only Variable ops are placed on ps tasks, and the placement strategy is round-robin over all ps tasks. A custom ps_strategy may be used to do more intelligent placement, such as tf.contrib.training.GreedyLoadBalancingStrategy. For example, # To build a cluster with two ps jobs on hosts ps0 and ps1, and 3 worker # jobs on hosts worker0, worker1 and worker2. cluster_spec = { "ps": ["ps0:2222", "ps1:2222"], "worker": ["worker0:2222", "worker1:2222", "worker2:2222"]} with tf.device(tf.compat.v1.train.replica_device_setter(cluster=cluster_spec)): # Build your graph v1 = tf.Variable(...) # assigned to /job:ps/task:0 v2 = tf.Variable(...) # assigned to /job:ps/task:1 v3 = tf.Variable(...) # assigned to /job:ps/task:0 # Run compute Args ps_tasks Number of tasks in the ps job. Ignored if cluster is provided. ps_device String. Device of the ps job. If empty no ps job is used. Defaults to ps. worker_device String. Device of the worker job. If empty no worker job is used. merge_devices Boolean. If True, merges or only sets a device if the device constraint is completely unset. merges device specification rather than overriding them. cluster ClusterDef proto or ClusterSpec. ps_ops List of strings representing Operation types that need to be placed on ps devices. If None, defaults to STANDARD_PS_OPS. ps_strategy A callable invoked for every ps Operation (i.e. matched by ps_ops), that takes the Operation and returns the ps task index to use. If None, defaults to a round-robin strategy across all ps devices. Returns A function to pass to tf.device(). Raises TypeError if cluster is not a dictionary or ClusterDef protocol buffer, or if ps_strategy is provided but not a callable.
tensorflow.compat.v1.train.replica_device_setter
tf.compat.v1.train.RMSPropOptimizer Optimizer that implements the RMSProp algorithm (Tielemans et al. Inherits From: Optimizer tf.compat.v1.train.RMSPropOptimizer( learning_rate, decay=0.9, momentum=0.0, epsilon=1e-10, use_locking=False, centered=False, name='RMSProp' ) 2012). References: Coursera slide 29: Hinton, 2012 (pdf) Args learning_rate A Tensor or a floating point value. The learning rate. decay Discounting factor for the history/coming gradient momentum A scalar tensor. epsilon Small value to avoid zero denominator. use_locking If True use locks for update operation. centered If True, gradients are normalized by the estimated variance of the gradient; if False, by the uncentered second moment. Setting this to True may help with training, but is slightly more expensive in terms of computation and memory. Defaults to False. name Optional name prefix for the operations created when applying gradients. Defaults to "RMSProp". Methods apply_gradients View source apply_gradients( grads_and_vars, global_step=None, name=None ) Apply gradients to variables. This is the second part of minimize(). It returns an Operation that applies gradients. Args grads_and_vars List of (gradient, variable) pairs as returned by compute_gradients(). global_step Optional Variable to increment by one after the variables have been updated. name Optional name for the returned operation. Default to the name passed to the Optimizer constructor. Returns An Operation that applies the specified gradients. If global_step was not None, that operation also increments global_step. Raises TypeError If grads_and_vars is malformed. ValueError If none of the variables have gradients. RuntimeError If you should use _distributed_apply() instead. compute_gradients View source compute_gradients( loss, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None ) Compute gradients of loss for the variables in var_list. This is the first part of minimize(). It returns a list of (gradient, variable) pairs where "gradient" is the gradient for "variable". Note that "gradient" can be a Tensor, an IndexedSlices, or None if there is no gradient for the given variable. Args loss A Tensor containing the value to minimize or a callable taking no arguments which returns the value to minimize. When eager execution is enabled it must be a callable. var_list Optional list or tuple of tf.Variable to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. grad_loss Optional. A Tensor holding the gradient computed for loss. Returns A list of (gradient, variable) pairs. Variable is always present, but gradient can be None. Raises TypeError If var_list contains anything else than Variable objects. ValueError If some arguments are invalid. RuntimeError If called with eager execution enabled and loss is not callable. Eager Compatibility When eager execution is enabled, gate_gradients, aggregation_method, and colocate_gradients_with_ops are ignored. get_name View source get_name() get_slot View source get_slot( var, name ) Return a slot named name created for var by the Optimizer. 
Some Optimizer subclasses use additional variables. For example Momentum and Adagrad use variables to accumulate updates. This method gives access to these Variable objects if for some reason you need them. Use get_slot_names() to get the list of slot names created by the Optimizer. Args var A variable passed to minimize() or apply_gradients(). name A string. Returns The Variable for the slot if it was created, None otherwise. get_slot_names View source get_slot_names() Return a list of the names of slots created by the Optimizer. See get_slot(). Returns A list of strings. minimize View source minimize( loss, global_step=None, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None ) Add operations to minimize loss by updating var_list. This method simply combines calls compute_gradients() and apply_gradients(). If you want to process the gradient before applying them call compute_gradients() and apply_gradients() explicitly instead of using this function. Args loss A Tensor containing the value to minimize. global_step Optional Variable to increment by one after the variables have been updated. var_list Optional list or tuple of Variable objects to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. name Optional name for the returned operation. grad_loss Optional. A Tensor holding the gradient computed for loss. Returns An Operation that updates the variables in var_list. If global_step was not None, that operation also increments global_step. Raises ValueError If some of the variables are not Variable objects. Eager Compatibility When eager execution is enabled, loss should be a Python function that takes no arguments and computes the value to be minimized. Minimization (and gradient computation) is done with respect to the elements of var_list if not None, else with respect to any trainable variables created during the execution of the loss function. gate_gradients, aggregation_method, colocate_gradients_with_ops and grad_loss are ignored when eager execution is enabled. variables View source variables() A list of variables which encode the current state of Optimizer. Includes slot variables and additional global variables created by the optimizer in the current default graph. Returns A list of variables. Class Variables GATE_GRAPH 2 GATE_NONE 0 GATE_OP 1
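As a minimal graph-mode sketch of typical usage (the toy variable, quadratic loss, and learning rate below are illustrative, not part of the API):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Toy quadratic loss whose minimum is at w == 3.0.
w = tf.compat.v1.get_variable('w', initializer=5.0)
loss = tf.square(w - 3.0)

optimizer = tf.compat.v1.train.RMSPropOptimizer(learning_rate=0.1, decay=0.9)
train_op = optimizer.minimize(loss)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    for _ in range(200):
        sess.run(train_op)
    print(sess.run(w))  # approaches 3.0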
tensorflow.compat.v1.train.rmspropoptimizer
tf.compat.v1.train.Saver Saves and restores variables. tf.compat.v1.train.Saver( var_list=None, reshape=False, sharded=False, max_to_keep=5, keep_checkpoint_every_n_hours=10000.0, name=None, restore_sequentially=False, saver_def=None, builder=None, defer_build=False, allow_empty=False, write_version=tf.train.SaverDef.V2, pad_step_number=False, save_relative_paths=False, filename=None ) See Variables for an overview of variables, saving and restoring. The Saver class adds ops to save and restore variables to and from checkpoints. It also provides convenience methods to run these ops. Checkpoints are binary files in a proprietary format which map variable names to tensor values. The best way to examine the contents of a checkpoint is to load it using a Saver. Savers can automatically number checkpoint filenames with a provided counter. This lets you keep multiple checkpoints at different steps while training a model. For example you can number the checkpoint filenames with the training step number. To avoid filling up disks, savers manage checkpoint files automatically. For example, they can keep only the N most recent files, or one checkpoint for every N hours of training. You number checkpoint filenames by passing a value to the optional global_step argument to save(): saver.save(sess, 'my-model', global_step=0) ==> filename: 'my-model-0' ... saver.save(sess, 'my-model', global_step=1000) ==> filename: 'my-model-1000' Additionally, optional arguments to the Saver() constructor let you control the proliferation of checkpoint files on disk: max_to_keep indicates the maximum number of recent checkpoint files to keep. As new files are created, older files are deleted. If None or 0, no checkpoints are deleted from the filesystem but only the last one is kept in the checkpoint file. Defaults to 5 (that is, the 5 most recent checkpoint files are kept.) keep_checkpoint_every_n_hours: In addition to keeping the most recent max_to_keep checkpoint files, you might want to keep one checkpoint file for every N hours of training. This can be useful if you want to later analyze how a model progressed during a long training session. For example, passing keep_checkpoint_every_n_hours=2 ensures that you keep one checkpoint file for every 2 hours of training. The default value of 10,000 hours effectively disables the feature. Note that you still have to call the save() method to save the model. Passing these arguments to the constructor will not save variables automatically for you. A training program that saves regularly looks like: ... # Create a saver. saver = tf.compat.v1.train.Saver(...variables...) # Launch the graph and train, saving the model every 1,000 steps. sess = tf.compat.v1.Session() for step in xrange(1000000): sess.run(..training_op..) if step % 1000 == 0: # Append the step number to the checkpoint name: saver.save(sess, 'my-model', global_step=step) In addition to checkpoint files, savers keep a protocol buffer on disk with the list of recent checkpoints. This is used to manage numbered checkpoint files and by latest_checkpoint(), which makes it easy to discover the path to the most recent checkpoint. That protocol buffer is stored in a file named 'checkpoint' next to the checkpoint files. If you create several savers, you can specify a different filename for the protocol buffer file in the call to save(). Args var_list A list of Variable/SaveableObject, or a dictionary mapping names to SaveableObjects. If None, defaults to the list of all saveable objects. 
reshape If True, allows restoring parameters from a checkpoint where the variables have a different shape. sharded If True, shard the checkpoints, one per device. max_to_keep Maximum number of recent checkpoints to keep. Defaults to 5. keep_checkpoint_every_n_hours How often to keep checkpoints. Defaults to 10,000 hours. name String. Optional name to use as a prefix when adding operations. restore_sequentially A Bool, which if true, causes restore of different variables to happen sequentially within each device. This can lower memory usage when restoring very large models. saver_def Optional SaverDef proto to use instead of running the builder. This is only useful for specialty code that wants to recreate a Saver object for a previously built Graph that had a Saver. The saver_def proto should be the one returned by the as_saver_def() call of the Saver that was created for that Graph. builder Optional SaverBuilder to use if a saver_def was not provided. Defaults to BulkSaverBuilder(). defer_build If True, defer adding the save and restore ops to the build() call. In that case build() should be called before finalizing the graph or using the saver. allow_empty If False (default), raises an error if there are no variables in the graph. Otherwise, construct the saver anyway and make it a no-op. write_version Controls what format to use when saving checkpoints. It also affects certain filepath matching logic. The V2 format is the recommended choice: it is much more optimized than V1 in terms of memory required and latency incurred during restore. Regardless of this flag, the Saver is able to restore from both V2 and V1 checkpoints. pad_step_number If True, pads the global step number in the checkpoint filepaths to some fixed width (8 by default). This is turned off by default. save_relative_paths If True, will write relative paths to the checkpoint state file. This is needed if the user wants to copy the checkpoint directory and reload from the copied directory. filename If known at graph construction time, filename used for variable loading/saving. Raises TypeError If var_list is invalid. ValueError If any of the keys or values in var_list are not unique. RuntimeError If eager execution is enabled and var_list does not specify a list of variables to save. Attributes last_checkpoints List of not-yet-deleted checkpoint filenames. You can pass any of the returned values to restore(). Methods as_saver_def View source as_saver_def() Generates a SaverDef representation of this saver. Returns A SaverDef proto. build View source build() export_meta_graph View source export_meta_graph( filename=None, collection_list=None, as_text=False, export_scope=None, clear_devices=False, clear_extraneous_savers=False, strip_default_attrs=False, save_debug_info=False ) Writes MetaGraphDef to save_path/filename. Args filename Optional meta_graph filename including the path. collection_list List of string keys to collect. as_text If True, writes the meta_graph as an ASCII proto. export_scope Optional string. Name scope to remove. clear_devices Whether or not to clear the device field for an Operation or Tensor during export. clear_extraneous_savers Remove any Saver-related information from the graph (both Save/Restore ops and SaverDefs) that are not associated with this Saver. strip_default_attrs Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes.
save_debug_info If True, save the GraphDebugInfo to a separate file, which in the same directory of filename and with _debug added before the file extension. Returns A MetaGraphDef proto. from_proto View source @staticmethod from_proto( saver_def, import_scope=None ) Returns a Saver object created from saver_def. Args saver_def a SaverDef protocol buffer. import_scope Optional string. Name scope to use. Returns A Saver built from saver_def. recover_last_checkpoints View source recover_last_checkpoints( checkpoint_paths ) Recovers the internal saver state after a crash. This method is useful for recovering the "self._last_checkpoints" state. Globs for the checkpoints pointed to by checkpoint_paths. If the files exist, use their mtime as the checkpoint timestamp. Args checkpoint_paths a list of checkpoint paths. restore View source restore( sess, save_path ) Restores previously saved variables. This method runs the ops added by the constructor for restoring variables. It requires a session in which the graph was launched. The variables to restore do not have to have been initialized, as restoring is itself a way to initialize variables. The save_path argument is typically a value previously returned from a save() call, or a call to latest_checkpoint(). Args sess A Session to use to restore the parameters. None in eager mode. save_path Path where parameters were previously saved. Raises ValueError If save_path is None or not a valid checkpoint. save View source save( sess, save_path, global_step=None, latest_filename=None, meta_graph_suffix='meta', write_meta_graph=True, write_state=True, strip_default_attrs=False, save_debug_info=False ) Saves variables. This method runs the ops added by the constructor for saving variables. It requires a session in which the graph was launched. The variables to save must also have been initialized. The method returns the path prefix of the newly created checkpoint files. This string can be passed directly to a call to restore(). Args sess A Session to use to save the variables. save_path String. Prefix of filenames created for the checkpoint. global_step If provided the global step number is appended to save_path to create the checkpoint filenames. The optional argument can be a Tensor, a Tensor name or an integer. latest_filename Optional name for the protocol buffer file that will contains the list of most recent checkpoints. That file, kept in the same directory as the checkpoint files, is automatically managed by the saver to keep track of recent checkpoints. Defaults to 'checkpoint'. meta_graph_suffix Suffix for MetaGraphDef file. Defaults to 'meta'. write_meta_graph Boolean indicating whether or not to write the meta graph file. write_state Boolean indicating whether or not to write the CheckpointStateProto. strip_default_attrs Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes. save_debug_info If True, save the GraphDebugInfo to a separate file, which in the same directory of save_path and with _debug added before the file extension. This is only enabled when write_meta_graph is True Returns A string: path prefix used for the checkpoint files. If the saver is sharded, this string ends with: '-?????-of-nnnnn' where 'nnnnn' is the number of shards created. If the saver is empty, returns None. Raises TypeError If sess is not a Session. ValueError If latest_filename contains path components, or if it collides with save_path. 
RuntimeError If save and restore ops weren't built. set_last_checkpoints View source set_last_checkpoints( last_checkpoints ) DEPRECATED: Use set_last_checkpoints_with_time. Sets the list of old checkpoint filenames. Args last_checkpoints A list of checkpoint filenames. Raises AssertionError If last_checkpoints is not a list. set_last_checkpoints_with_time View source set_last_checkpoints_with_time( last_checkpoints_with_time ) Sets the list of old checkpoint filenames and timestamps. Args last_checkpoints_with_time A list of tuples of checkpoint filenames and timestamps. Raises AssertionError If last_checkpoints_with_time is not a list. to_proto View source to_proto( export_scope=None ) Converts this Saver to a SaverDef protocol buffer. Args export_scope Optional string. Name scope to remove. Returns A SaverDef protocol buffer.
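The save/restore round trip described above, as a minimal sketch (the variable, checkpoint path, and step value are illustrative):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

v = tf.compat.v1.get_variable('v', shape=[2], initializer=tf.compat.v1.zeros_initializer())
update = v.assign_add([1.0, 1.0])
saver = tf.compat.v1.train.Saver(max_to_keep=3)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    sess.run(update)
    ckpt_prefix = saver.save(sess, '/tmp/my-model', global_step=1)  # e.g. '/tmp/my-model-1'

# Restoring does not require initializing the variables first.
with tf.compat.v1.Session() as sess:
    saver.restore(sess, ckpt_prefix)
    print(sess.run(v))  # [1. 1.]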
tensorflow.compat.v1.train.saver
tf.compat.v1.train.SaverDef A ProtocolMessage Attributes filename_tensor_name string filename_tensor_name keep_checkpoint_every_n_hours float keep_checkpoint_every_n_hours max_to_keep int32 max_to_keep restore_op_name string restore_op_name save_tensor_name string save_tensor_name sharded bool sharded version CheckpointFormatVersion version Class Variables CheckpointFormatVersion LEGACY 0 V1 1 V2 2
tensorflow.compat.v1.train.saverdef
tf.compat.v1.train.Scaffold Structure to create or gather pieces commonly needed to train a model. tf.compat.v1.train.Scaffold( init_op=None, init_feed_dict=None, init_fn=None, ready_op=None, ready_for_local_init_op=None, local_init_op=None, summary_op=None, saver=None, copy_from_scaffold=None, local_init_feed_dict=None ) When you build a model for training you usually need ops to initialize variables, a Saver to checkpoint them, an op to collect summaries for the visualizer, and so on. Various libraries built on top of the core TensorFlow library take care of creating some or all of these pieces and storing them in well known collections in the graph. The Scaffold class helps pick these pieces from the graph collections, creating and adding them to the collections if needed. If you call the scaffold constructor without any arguments, it will pick pieces from the collections, creating default ones if needed when scaffold.finalize() is called. You can pass arguments to the constructor to provide your own pieces. Pieces that you pass to the constructor are not added to the graph collections. The following pieces are directly accessible as attributes of the Scaffold object: saver: A tf.compat.v1.train.Saver object taking care of saving the variables. Picked from and stored into the SAVERS collection in the graph by default. init_op: An op to run to initialize the variables. Picked from and stored into the INIT_OP collection in the graph by default. ready_op: An op to verify that the variables are initialized. Picked from and stored into the READY_OP collection in the graph by default. ready_for_local_init_op: An op to verify that global state has been initialized and it is alright to run local_init_op. Picked from and stored into the READY_FOR_LOCAL_INIT_OP collection in the graph by default. This is needed when the initialization of local variables depends on the values of global variables. local_init_op: An op to initialize the local variables. Picked from and stored into the LOCAL_INIT_OP collection in the graph by default. summary_op: An op to run and merge the summaries in the graph. Picked from and stored into the SUMMARY_OP collection in the graph by default. You can also pass the following additional pieces to the constructor: init_feed_dict: A session feed dictionary that should be used when running the init op. init_fn: A callable to run after the init op to perform additional initializations. The callable will be called as init_fn(scaffold, session). Args init_op Optional op for initializing variables. init_feed_dict Optional session feed dictionary to use when running the init_op. init_fn Optional function to use to initialize the model after running the init_op. Will be called as init_fn(scaffold, session). ready_op Optional op to verify that the variables are initialized. Must return an empty 1D string tensor when the variables are initialized, or a non-empty 1D string tensor listing the names of the non-initialized variables. ready_for_local_init_op Optional op to verify that the global variables are initialized and local_init_op can be run. Must return an empty 1D string tensor when the global variables are initialized, or a non-empty 1D string tensor listing the names of the non-initialized global variables. local_init_op Optional op to initialize local variables. summary_op Optional op to gather all summaries. Must return a scalar string tensor containing a serialized Summary proto. saver Optional tf.compat.v1.train.Saver object to use to save and restore variables. 
May also be a tf.train.Checkpoint object, in which case object-based checkpoints are saved. This will also load some object-based checkpoints saved from elsewhere, but that loading may be fragile since it uses fixed keys rather than performing a full graph-based match. For example if a variable has two paths from the Checkpoint object because two Model objects share the Layer object that owns it, removing one Model may change the keys and break checkpoint loading through this API, whereas a graph-based match would match the variable through the other Model. copy_from_scaffold Optional scaffold object to copy fields from. Its fields will be overwritten by the provided fields in this function. local_init_feed_dict Optional session feed dictionary to use when running the local_init_op. Attributes init_feed_dict init_fn init_op local_init_feed_dict local_init_op ready_for_local_init_op ready_op saver summary_op Methods default_local_init_op View source @staticmethod default_local_init_op() Returns an op that groups the default local init ops. This op is used during session initialization when a Scaffold is initialized without specifying the local_init_op arg. It includes tf.compat.v1.local_variables_initializer, tf.compat.v1.tables_initializer, and also initializes local session resources. Returns The default Scaffold local init op. finalize View source finalize() Creates operations if needed and finalizes the graph. get_or_default View source @staticmethod get_or_default( arg_name, collection_key, default_constructor ) Get from cache or create a default operation.
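A short sketch of passing a custom init_fn through a Scaffold; MonitoredTrainingSession is used here only as one consumer of Scaffold, and the variable and print statement are illustrative:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

w = tf.compat.v1.get_variable('w', initializer=0.0)
train_op = w.assign_add(1.0)

def extra_init(scaffold, session):
    # Called once after the init_op has run; both arguments are supplied by the framework.
    print('custom initialization done')

scaffold = tf.compat.v1.train.Scaffold(init_fn=extra_init)

# Session creation calls scaffold.finalize() and runs the initialization pieces.
with tf.compat.v1.train.MonitoredTrainingSession(scaffold=scaffold) as sess:
    print(sess.run(train_op))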
tensorflow.compat.v1.train.scaffold
tf.compat.v1.train.sdca_fprint Computes fingerprints of the input strings. tf.compat.v1.train.sdca_fprint( input, name=None ) Args input A Tensor of type string. vector of strings to compute fingerprints on. name A name for the operation (optional). Returns A Tensor of type int64.
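A tiny illustrative call (the input strings are arbitrary; only the documented string-in, int64-out contract is assumed):

import tensorflow as tf

strings = tf.constant(['example_a', 'example_b'])
fingerprints = tf.compat.v1.train.sdca_fprint(strings)
print(fingerprints.dtype)  # tf.int64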
tensorflow.compat.v1.train.sdca_fprint
tf.compat.v1.train.sdca_optimizer Distributed version of Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models with L1 + L2 regularization. tf.compat.v1.train.sdca_optimizer( sparse_example_indices, sparse_feature_indices, sparse_feature_values, dense_features, example_weights, example_labels, sparse_indices, sparse_weights, dense_weights, example_state_data, loss_type, l1, l2, num_loss_partitions, num_inner_iterations, adaptative=True, name=None ) As the global optimization objective is strongly convex, the optimizer optimizes the dual objective at each step. The optimizer applies each update one example at a time. Examples are sampled uniformly, and the optimizer is learning-rate free and enjoys a linear convergence rate. Proximal Stochastic Dual Coordinate Ascent. Shai Shalev-Shwartz, Tong Zhang. 2012 $$\text{Loss Objective} = \sum_i f_i(w x_i) + \frac{l_2}{2}\,\|w\|_2^2 + l_1\,\|w\|_1$$ Adding vs. Averaging in Distributed Primal-Dual Optimization. Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtarik, Martin Takac. 2015 Stochastic Dual Coordinate Ascent with Adaptive Probabilities. Dominik Csiba, Zheng Qu, Peter Richtarik. 2015 Args sparse_example_indices A list of Tensor objects with type int64. a list of vectors which contain example indices. sparse_feature_indices A list with the same length as sparse_example_indices of Tensor objects with type int64. a list of vectors which contain feature indices. sparse_feature_values A list of Tensor objects with type float32. a list of vectors which contain the feature values associated with each feature group. dense_features A list of Tensor objects with type float32. a list of matrices which contain the dense feature values. example_weights A Tensor of type float32. a vector which contains the weight associated with each example. example_labels A Tensor of type float32. a vector which contains the label/target associated with each example. sparse_indices A list with the same length as sparse_example_indices of Tensor objects with type int64. a list of vectors where each value is the indices which has corresponding weights in sparse_weights. This field may be omitted for the dense approach. sparse_weights A list with the same length as sparse_example_indices of Tensor objects with type float32. a list of vectors where each value is the weight associated with a sparse feature group. dense_weights A list with the same length as dense_features of Tensor objects with type float32. a list of vectors where the values are the weights associated with a dense feature group. example_state_data A Tensor of type float32. a list of vectors containing the example state data. loss_type A string from: "logistic_loss", "squared_loss", "hinge_loss", "smooth_hinge_loss", "poisson_loss". Type of the primal loss. Currently SdcaSolver supports logistic, squared and hinge losses. l1 A float. Symmetric l1 regularization strength. l2 A float. Symmetric l2 regularization strength. num_loss_partitions An int that is >= 1. Number of partitions of the global loss function. num_inner_iterations An int that is >= 1. Number of iterations per mini-batch. adaptative An optional bool. Defaults to True. Whether to use Adaptive SDCA for the inner loop. name A name for the operation (optional). Returns A tuple of Tensor objects (out_example_state_data, out_delta_sparse_weights, out_delta_dense_weights). out_example_state_data A Tensor of type float32. out_delta_sparse_weights A list with the same length as sparse_example_indices of Tensor objects with type float32.
out_delta_dense_weights A list with the same length as dense_features of Tensor objects with type float32.
tensorflow.compat.v1.train.sdca_optimizer
tf.compat.v1.train.sdca_shrink_l1 Applies L1 regularization shrink step on the parameters. tf.compat.v1.train.sdca_shrink_l1( weights, l1, l2, name=None ) Args weights A list of Tensor objects with type mutable float32. a list of vectors where each value is the weight associated with a feature group. l1 A float. Symmetric l1 regularization strength. l2 A float. Symmetric l2 regularization strength. Should be a positive float. name A name for the operation (optional). Returns The created Operation.
tensorflow.compat.v1.train.sdca_shrink_l1
tf.compat.v1.train.SessionCreator A factory for tf.Session. Methods create_session View source @abc.abstractmethod create_session()
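Since this is an abstract factory, a concrete subclass only needs to implement create_session(). A hypothetical minimal implementation (the class name and constructor arguments are illustrative):

import tensorflow as tf

class PlainSessionCreator(tf.compat.v1.train.SessionCreator):
    """Creates an ordinary local tf.compat.v1.Session."""

    def __init__(self, target='', config=None):
        self._target = target
        self._config = config

    def create_session(self):
        return tf.compat.v1.Session(target=self._target, config=self._config)

# A creator like this can be handed to tf.compat.v1.train.MonitoredSession, which
# accepts a session_creator argument:
# sess = tf.compat.v1.train.MonitoredSession(session_creator=PlainSessionCreator())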
tensorflow.compat.v1.train.sessioncreator
tf.compat.v1.train.SessionManager Training helper that restores from a checkpoint and creates a session. tf.compat.v1.train.SessionManager( local_init_op=None, ready_op=None, ready_for_local_init_op=None, graph=None, recovery_wait_secs=30, local_init_run_options=None, local_init_feed_dict=None ) This class is a small wrapper that takes care of session creation and checkpoint recovery. It also provides functions to facilitate coordination among multiple training threads or processes: Checkpointing trained variables as the training progresses. Initializing variables on startup, restoring them from the most recent checkpoint after a crash, or waiting for checkpoints to become available. Usage: with tf.Graph().as_default(): ...add operations to the graph... # Create a SessionManager that will checkpoint the model in '/tmp/mydir'. sm = SessionManager() sess = sm.prepare_session(master, init_op, saver, checkpoint_dir) # Use the session to train the graph. while True: sess.run(<my_train_op>) prepare_session() initializes or restores a model. It requires init_op and saver as arguments. A second process could wait for the model to be ready by doing the following: with tf.Graph().as_default(): ...add operations to the graph... # Create a SessionManager that will wait for the model to become ready. sm = SessionManager() sess = sm.wait_for_session(master) # Use the session to train the graph. while True: sess.run(<my_train_op>) wait_for_session() waits for a model to be initialized by other processes. Args local_init_op An Operation run immediately after session creation. Usually used to initialize tables and local variables. ready_op An Operation to check if the model is initialized. ready_for_local_init_op An Operation to check if the model is ready to run local_init_op. graph The Graph that the model will use. recovery_wait_secs Seconds between checks for the model to be ready. local_init_run_options RunOptions to be passed to session.run when executing the local_init_op. local_init_feed_dict Optional session feed dictionary to use when running the local_init_op. Raises ValueError If ready_for_local_init_op is not None but local_init_op is None. Methods prepare_session View source prepare_session( master, init_op=None, saver=None, checkpoint_dir=None, checkpoint_filename_with_path=None, wait_for_checkpoint=False, max_wait_secs=7200, config=None, init_feed_dict=None, init_fn=None ) Creates a Session. Makes sure the model is ready to be used. Creates a Session on 'master'. If a saver object is passed in, and checkpoint_dir points to a directory containing valid checkpoint files, then it will try to recover the model from checkpoint. If no checkpoint files are available, and wait_for_checkpoint is True, then the process would check every recovery_wait_secs, up to max_wait_secs, for recovery to succeed. If the model cannot be recovered successfully then it is initialized by running the init_op and calling init_fn if they are provided. The local_init_op is also run after init_op and init_fn, regardless of whether the model was recovered successfully, but only if ready_for_local_init_op passes. If the model is recovered from a checkpoint it is assumed that all global variables have been initialized; in particular, neither init_op nor init_fn will be executed. It is an error if the model cannot be recovered and no init_op, init_fn, or local_init_op is passed. Args master String representation of the TensorFlow master to use. init_op Optional Operation used to initialize the model.
saver A Saver object used to restore a model. checkpoint_dir Path to the checkpoint files. The latest checkpoint in the dir will be used to restore. checkpoint_filename_with_path Full file name path to the checkpoint file. wait_for_checkpoint Whether to wait for checkpoint to become available. max_wait_secs Maximum time to wait for checkpoints to become available. config Optional ConfigProto proto used to configure the session. init_feed_dict Optional dictionary that maps Tensor objects to feed values. This feed dictionary is passed to the session run() call when running the init op. init_fn Optional callable used to initialize the model. Called after the optional init_op is called. The callable must accept one argument, the session being initialized. Returns A Session object that can be used to drive the model. Raises RuntimeError If the model cannot be initialized or recovered. ValueError If both checkpoint_dir and checkpoint_filename_with_path are set. recover_session View source recover_session( master, saver=None, checkpoint_dir=None, checkpoint_filename_with_path=None, wait_for_checkpoint=False, max_wait_secs=7200, config=None ) Creates a Session, recovering if possible. Creates a new session on 'master'. If the session is not initialized and can be recovered from a checkpoint, recover it. Args master String representation of the TensorFlow master to use. saver A Saver object used to restore a model. checkpoint_dir Path to the checkpoint files. The latest checkpoint in the dir will be used to restore. checkpoint_filename_with_path Full file name path to the checkpoint file. wait_for_checkpoint Whether to wait for checkpoint to become available. max_wait_secs Maximum time to wait for checkpoints to become available. config Optional ConfigProto proto used to configure the session. Returns A pair (sess, initialized) where 'initialized' is True if the session could be recovered and initialized, False otherwise. Raises ValueError If both checkpoint_dir and checkpoint_filename_with_path are set. wait_for_session View source wait_for_session( master, config=None, max_wait_secs=float('Inf') ) Creates a new Session and waits for model to be ready. Creates a new Session on 'master'. Waits for the model to be initialized or recovered from a checkpoint. It's expected that another thread or process will make the model ready, and that this is intended to be used by threads/processes that participate in a distributed training configuration where a different thread/process is responsible for initializing or recovering the model being trained. NB: The amount of time this method waits for the session is bounded by max_wait_secs. By default, this function will wait indefinitely. Args master String representation of the TensorFlow master to use. config Optional ConfigProto proto used to configure the session. max_wait_secs Maximum time to wait for the session to become available. Returns A Session. May be None if the operation exceeds the timeout specified by config.operation_timeout_in_ms. Raises tf.DeadlineExceededError if the session is not available after max_wait_secs.
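A compact sketch of prepare_session as described above: it restores from the newest checkpoint in checkpoint_dir when one exists and otherwise falls back to running init_op (the variable and directory are illustrative):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

with tf.Graph().as_default():
    v = tf.compat.v1.get_variable('v', initializer=0.0)
    init_op = tf.compat.v1.global_variables_initializer()
    saver = tf.compat.v1.train.Saver()

    sm = tf.compat.v1.train.SessionManager()
    sess = sm.prepare_session(
        '',                      # empty master string: in-process session
        init_op=init_op,
        saver=saver,
        checkpoint_dir='/tmp/sm_demo')
    print(sess.run(v))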
tensorflow.compat.v1.train.sessionmanager
tf.compat.v1.train.shuffle_batch Creates batches by randomly shuffling tensors. (deprecated) tf.compat.v1.train.shuffle_batch( tensors, batch_size, capacity, min_after_dequeue, num_threads=1, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.shuffle(min_after_dequeue).batch(batch_size). This function adds the following to the current Graph: A shuffling queue into which tensors from tensors are enqueued. A dequeue_many operation to create batches from the queue. A QueueRunner to QUEUE_RUNNER collection, to enqueue the tensors from tensors. If enqueue_many is False, tensors is assumed to represent a single example. An input tensor with shape [x, y, z] will be output as a tensor with shape [batch_size, x, y, z]. If enqueue_many is True, tensors is assumed to represent a batch of examples, where the first dimension is indexed by example, and all members of tensors should have the same size in the first dimension. If an input tensor has shape [*, x, y, z], the output will have shape [batch_size, x, y, z]. The capacity argument controls the how long the prefetching is allowed to grow the queues. The returned operation is a dequeue operation and will throw tf.errors.OutOfRangeError if the input queue is exhausted. If this operation is feeding another input queue, its queue runner will catch this exception, however, if this operation is used in your main thread you are responsible for catching this yourself. For example: # Creates batches of 32 images and 32 labels. image_batch, label_batch = tf.compat.v1.train.shuffle_batch( [single_image, single_label], batch_size=32, num_threads=4, capacity=50000, min_after_dequeue=10000) Note: You must ensure that either (i) the shapes argument is passed, or (ii) all of the tensors in tensors must have fully-defined shapes. ValueError will be raised if neither of these conditions holds. If allow_smaller_final_batch is True, a smaller batch value than batch_size is returned when the queue is closed and there are not enough elements to fill the batch, otherwise the pending elements are discarded. In addition, all output tensors' static shapes, as accessed via the shape property will have a first Dimension value of None, and operations that depend on fixed batch_size would fail. Args tensors The list or dictionary of tensors to enqueue. batch_size The new batch size pulled from the queue. capacity An integer. The maximum number of elements in the queue. min_after_dequeue Minimum number elements in the queue after a dequeue, used to ensure a level of mixing of elements. num_threads The number of threads enqueuing tensor_list. seed Seed for the random shuffling within the queue. enqueue_many Whether each tensor in tensor_list is a single example. shapes (Optional) The shapes for each example. Defaults to the inferred shapes for tensor_list. allow_smaller_final_batch (Optional) Boolean. If True, allow the final batch to be smaller if there are insufficient items left in the queue. shared_name (Optional) If set, this queue will be shared under the given name across multiple sessions. name (Optional) A name for the operations. Returns A list or dictionary of tensors with the types as tensors. Raises ValueError If the shapes are not specified, and cannot be inferred from the elements of tensors. 
Eager Compatibility Input pipelines based on Queues are not supported when eager execution is enabled. Please use the tf.data API to ingest data under eager execution.
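Since this function is deprecated, here is a sketch of the tf.data replacement suggested above; the dummy tensors stand in for real decoded examples:

import tensorflow as tf

# Dummy per-example data standing in for single_image / single_label.
images = tf.random.uniform([1000, 28, 28, 1])
labels = tf.random.uniform([1000], maxval=10, dtype=tf.int32)

# Rough equivalent of shuffle_batch(..., batch_size=32, min_after_dequeue=10000):
dataset = (tf.data.Dataset.from_tensor_slices((images, labels))
           .shuffle(buffer_size=10000)
           .batch(32))

for image_batch, label_batch in dataset.take(1):
    print(image_batch.shape, label_batch.shape)  # (32, 28, 28, 1) (32,)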
tensorflow.compat.v1.train.shuffle_batch
tf.compat.v1.train.shuffle_batch_join Create batches by randomly shuffling tensors. (deprecated) tf.compat.v1.train.shuffle_batch_join( tensors_list, batch_size, capacity, min_after_dequeue, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.interleave(...).shuffle(min_after_dequeue).batch(batch_size). The tensors_list argument is a list of tuples of tensors, or a list of dictionaries of tensors. Each element in the list is treated similarly to the tensors argument of tf.compat.v1.train.shuffle_batch(). This version enqueues a different list of tensors in different threads. It adds the following to the current Graph: A shuffling queue into which tensors from tensors_list are enqueued. A dequeue_many operation to create batches from the queue. A QueueRunner to QUEUE_RUNNER collection, to enqueue the tensors from tensors_list. len(tensors_list) threads will be started, with thread i enqueuing the tensors from tensors_list[i]. tensors_list[i1][j] must match tensors_list[i2][j] in type and shape, except in the first dimension if enqueue_many is true. If enqueue_many is False, each tensors_list[i] is assumed to represent a single example. An input tensor with shape [x, y, z] will be output as a tensor with shape [batch_size, x, y, z]. If enqueue_many is True, tensors_list[i] is assumed to represent a batch of examples, where the first dimension is indexed by example, and all members of tensors_list[i] should have the same size in the first dimension. If an input tensor has shape [*, x, y, z], the output will have shape [batch_size, x, y, z]. The capacity argument controls the how long the prefetching is allowed to grow the queues. The returned operation is a dequeue operation and will throw tf.errors.OutOfRangeError if the input queue is exhausted. If this operation is feeding another input queue, its queue runner will catch this exception, however, if this operation is used in your main thread you are responsible for catching this yourself. If allow_smaller_final_batch is True, a smaller batch value than batch_size is returned when the queue is closed and there are not enough elements to fill the batch, otherwise the pending elements are discarded. In addition, all output tensors' static shapes, as accessed via the shape property will have a first Dimension value of None, and operations that depend on fixed batch_size would fail. Args tensors_list A list of tuples or dictionaries of tensors to enqueue. batch_size An integer. The new batch size pulled from the queue. capacity An integer. The maximum number of elements in the queue. min_after_dequeue Minimum number elements in the queue after a dequeue, used to ensure a level of mixing of elements. seed Seed for the random shuffling within the queue. enqueue_many Whether each tensor in tensor_list_list is a single example. shapes (Optional) The shapes for each example. Defaults to the inferred shapes for tensors_list[i]. allow_smaller_final_batch (Optional) Boolean. If True, allow the final batch to be smaller if there are insufficient items left in the queue. shared_name (optional). If set, this queue will be shared under the given name across multiple sessions. name (Optional) A name for the operations. Returns A list or dictionary of tensors with the same number and types as tensors_list[i]. 
Raises ValueError If the shapes are not specified, and cannot be inferred from the elements of tensors_list. Eager Compatibility Input pipelines based on Queues are not supported when eager execution is enabled. Please use the tf.data API to ingest data under eager execution.
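A sketch of the tf.data replacement suggested above, where interleave plays the role of the multiple enqueuing threads; the two integer "sources" are purely illustrative:

import tensorflow as tf

def make_source(offset):
    # One source per element of tensors_list in the queue-based version.
    return tf.data.Dataset.from_tensor_slices(tf.range(offset, offset + 100))

sources = tf.data.Dataset.from_tensor_slices([0, 1000])
dataset = (sources.interleave(make_source, cycle_length=2)
           .shuffle(buffer_size=50)
           .batch(8))

for batch in dataset.take(1):
    print(batch.numpy())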
tensorflow.compat.v1.train.shuffle_batch_join
tf.compat.v1.train.SingularMonitoredSession Session-like object that handles initialization, restoring, and hooks. tf.compat.v1.train.SingularMonitoredSession( hooks=None, scaffold=None, master='', config=None, checkpoint_dir=None, stop_grace_period_secs=120, checkpoint_filename_with_path=None ) Please note that this utility is not recommended for distributed settings. For distributed settings, please use tf.compat.v1.train.MonitoredSession. The differences between MonitoredSession and SingularMonitoredSession are: MonitoredSession handles AbortedError and UnavailableError for distributed settings, but SingularMonitoredSession does not. MonitoredSession can be created in chief or worker modes. SingularMonitoredSession is always created as chief. You can access the raw tf.compat.v1.Session object used by SingularMonitoredSession, whereas in MonitoredSession the raw session is private. This can be used: To run without hooks. To save and restore. All other functionality is identical. Example usage: saver_hook = CheckpointSaverHook(...) summary_hook = SummarySaverHook(...) with SingularMonitoredSession(hooks=[saver_hook, summary_hook]) as sess: while not sess.should_stop(): sess.run(train_op) Initialization: At creation time the hooked session does the following things in the given order: calls hook.begin() for each given hook finalizes the graph via scaffold.finalize() creates the session initializes the model via initialization ops provided by Scaffold restores variables if a checkpoint exists launches queue runners Run: When run() is called, the hooked session does the following things: calls hook.before_run() calls TensorFlow session.run() with merged fetches and feed_dict calls hook.after_run() returns the result of session.run() asked by the user Exit: At close(), the hooked session does the following things in order: calls hook.end() closes the queue runners and the session suppresses the OutOfRange error which indicates that all inputs have been processed, if the SingularMonitoredSession is used as a context. Args hooks An iterable of SessionRunHook objects. scaffold A Scaffold used for gathering or building supportive ops. If not specified a default one is created. It's used to finalize the graph. master String representation of the TensorFlow master to use. config ConfigProto proto used to configure the session. checkpoint_dir A string. Optional path to a directory where to restore variables. stop_grace_period_secs Number of seconds given to threads to stop after close() has been called. checkpoint_filename_with_path A string. Optional path to a checkpoint file from which to restore variables. Attributes graph The graph that was launched in this session. Child Classes class StepContext Methods close View source close() raw_session View source raw_session() Returns the underlying TensorFlow Session object. run View source run( fetches, feed_dict=None, options=None, run_metadata=None ) Run ops in the monitored session. This method is completely compatible with the tf.Session.run() method. Args fetches Same as tf.Session.run(). feed_dict Same as tf.Session.run(). options Same as tf.Session.run(). run_metadata Same as tf.Session.run(). Returns Same as tf.Session.run(). run_step_fn View source run_step_fn( step_fn ) Run ops using a step function. Args step_fn A function or a method with a single argument of type StepContext.
The function may use methods of the argument to perform computations with access to a raw session. The returned value of the step_fn will be returned from run_step_fn, unless a stop is requested. In that case, the next should_stop call will return True. Example usage: with tf.Graph().as_default(): c = tf.compat.v1.placeholder(dtypes.float32) v = tf.add(c, 4.0) w = tf.add(c, 0.5) def step_fn(step_context): a = step_context.session.run(fetches=v, feed_dict={c: 0.5}) if a <= 4.5: step_context.request_stop() return step_context.run_with_hooks(fetches=w, feed_dict={c: 0.1}) with tf.MonitoredSession() as session: while not session.should_stop(): a = session.run_step_fn(step_fn) Hooks interact with the run_with_hooks() call inside the step_fn as they do with a MonitoredSession.run call. Returns Returns the returned value of step_fn. Raises StopIteration if step_fn has called request_stop(). It may be caught by with tf.MonitoredSession() to close the session. ValueError if step_fn doesn't have a single argument called step_context. It may also optionally have self for cases when it belongs to an object. should_stop View source should_stop() __enter__ View source __enter__() __exit__ View source __exit__( exception_type, exception_value, traceback )
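A short sketch that uses raw_session() for an explicit save, one of the documented reasons to prefer SingularMonitoredSession over MonitoredSession; the checkpoint path is illustrative:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

with tf.Graph().as_default():
    step = tf.compat.v1.train.get_or_create_global_step()
    train_op = tf.compat.v1.assign_add(step, 1)
    saver = tf.compat.v1.train.Saver()

    with tf.compat.v1.train.SingularMonitoredSession() as sess:
        for _ in range(5):
            sess.run(train_op)
        # The raw tf.compat.v1.Session is accessible here, unlike in MonitoredSession.
        saver.save(sess.raw_session(), '/tmp/singular-demo')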
tensorflow.compat.v1.train.singularmonitoredsession
tf.compat.v1.train.slice_input_producer Produces a slice of each Tensor in tensor_list. (deprecated) tf.compat.v1.train.slice_input_producer( tensor_list, num_epochs=None, shuffle=True, seed=None, capacity=32, shared_name=None, name=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensor_slices(tuple(tensor_list)).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs). If shuffle=False, omit the .shuffle(...). Implemented using a Queue -- a QueueRunner for the Queue is added to the current Graph's QUEUE_RUNNER collection. Args tensor_list A list of Tensor objects. Every Tensor in tensor_list must have the same size in the first dimension. num_epochs An integer (optional). If specified, slice_input_producer produces each slice num_epochs times before generating an OutOfRange error. If not specified, slice_input_producer can cycle through the slices an unlimited number of times. shuffle Boolean. If true, the integers are randomly shuffled within each epoch. seed An integer (optional). Seed used if shuffle == True. capacity An integer. Sets the queue capacity. shared_name (optional). If set, this queue will be shared under the given name across multiple sessions. name A name for the operations (optional). Returns A list of tensors, one for each element of tensor_list. If the tensor in tensor_list has shape [N, a, b, .., z], then the corresponding output tensor will have shape [a, b, ..., z]. Raises ValueError if slice_input_producer produces nothing from tensor_list. Eager Compatibility Input pipelines based on Queues are not supported when eager execution is enabled. Please use the tf.data API to ingest data under eager execution.
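A sketch of the suggested tf.data replacement: from_tensor_slices emits one slice per element, and shuffle/repeat cover the shuffle and num_epochs arguments (the tensors are illustrative):

import tensorflow as tf

features = tf.random.uniform([100, 4])
labels = tf.range(100)

dataset = (tf.data.Dataset.from_tensor_slices((features, labels))
           .shuffle(buffer_size=100)   # shuffle=True
           .repeat(2))                 # num_epochs=2

for feature, label in dataset.take(3):
    print(feature.shape, label.numpy())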
tensorflow.compat.v1.train.slice_input_producer
tf.compat.v1.train.start_queue_runners Starts all queue runners collected in the graph. (deprecated) View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.train.queue_runner.start_queue_runners tf.compat.v1.train.start_queue_runners( sess=None, coord=None, daemon=True, start=True, collection=tf.GraphKeys.QUEUE_RUNNERS ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: To construct input pipelines, use the tf.data module. This is a companion method to add_queue_runner(). It just starts threads for all queue runners collected in the graph. It returns the list of all threads. Args sess Session used to run the queue ops. Defaults to the default session. coord Optional Coordinator for coordinating the started threads. daemon Whether the threads should be marked as daemons, meaning they don't block program exit. start Set to False to only create the threads, not start them. collection A GraphKey specifying the graph collection to get the queue runners from. Defaults to GraphKeys.QUEUE_RUNNERS. Raises ValueError if sess is None and there isn't any default session. TypeError if sess is not a tf.compat.v1.Session object. Returns A list of threads. Raises RuntimeError If called with eager execution enabled. ValueError If called without a default tf.compat.v1.Session registered. Eager Compatibility Not compatible with eager execution. To ingest data under eager execution, use the tf.data API instead.
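For completeness, a sketch of the classic coordinator pattern this function was used with; the producer and its strings are illustrative, and new code should use tf.data instead:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

with tf.Graph().as_default():
    # The deprecated producer registers a QueueRunner in the QUEUE_RUNNERS collection.
    queue = tf.compat.v1.train.string_input_producer(['a.txt', 'b.txt'], num_epochs=1)
    item = queue.dequeue()

    with tf.compat.v1.Session() as sess:
        sess.run(tf.compat.v1.local_variables_initializer())  # the num_epochs counter
        coord = tf.compat.v1.train.Coordinator()
        threads = tf.compat.v1.train.start_queue_runners(sess=sess, coord=coord)
        try:
            while not coord.should_stop():
                print(sess.run(item))
        except tf.errors.OutOfRangeError:
            pass
        finally:
            coord.request_stop()
            coord.join(threads)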
tensorflow.compat.v1.train.start_queue_runners
tf.compat.v1.train.string_input_producer Output strings (e.g. filenames) to a queue for an input pipeline. (deprecated) tf.compat.v1.train.string_input_producer( string_tensor, num_epochs=None, shuffle=True, seed=None, capacity=32, shared_name=None, name=None, cancel_op=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensor_slices(string_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs). If shuffle=False, omit the .shuffle(...). Note: if num_epochs is not None, this function creates local counter epochs. Use local_variables_initializer() to initialize local variables. Args string_tensor A 1-D string tensor with the strings to produce. num_epochs An integer (optional). If specified, string_input_producer produces each string from string_tensor num_epochs times before generating an OutOfRange error. If not specified, string_input_producer can cycle through the strings in string_tensor an unlimited number of times. shuffle Boolean. If true, the strings are randomly shuffled within each epoch. seed An integer (optional). Seed used if shuffle == True. capacity An integer. Sets the queue capacity. shared_name (optional). If set, this queue will be shared under the given name across multiple sessions. All sessions open to the device which has this queue will be able to access it via the shared_name. Using this in a distributed setting means each name will only be seen by one of the sessions which has access to this operation. name A name for the operations (optional). cancel_op Cancel op for the queue (optional). Returns A queue with the output strings. A QueueRunner for the Queue is added to the current Graph's QUEUE_RUNNER collection. Raises ValueError If the string_tensor is a null Python list. At runtime, will fail with an assertion if string_tensor becomes a null tensor. Eager Compatibility Input pipelines based on Queues are not supported when eager execution is enabled. Please use the tf.data API to ingest data under eager execution.
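A sketch of the suggested tf.data replacement for a filename queue; the filenames are illustrative and need not exist for the names themselves to be produced:

import tensorflow as tf

filenames = ['shard-00000.tfrecord', 'shard-00001.tfrecord']  # illustrative names

# Rough equivalent of string_input_producer(filenames, shuffle=True, num_epochs=2).
# The result is typically fed into a reader such as tf.data.TFRecordDataset.
files = (tf.data.Dataset.from_tensor_slices(filenames)
         .shuffle(buffer_size=len(filenames))
         .repeat(2))

for f in files.take(4):
    print(f.numpy())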
tensorflow.compat.v1.train.string_input_producer
tf.compat.v1.train.summary_iterator Returns an iterator for reading Event protocol buffers from an event file. tf.compat.v1.train.summary_iterator( path ) You can use this function to read events written to an event file. It returns a Python iterator that yields Event protocol buffers. Example: Print the contents of an events file. for e in tf.compat.v1.train.summary_iterator(path to events file): print(e) Example: Print selected summary values. # This example supposes that the events file contains summaries with a # summary value tag 'loss'. These could have been added by calling # `add_summary()`, passing the output of a scalar summary op created # with: `tf.compat.v1.summary.scalar('loss', loss_tensor)`. for e in tf.compat.v1.train.summary_iterator(path to events file): for v in e.summary.value: if v.tag == 'loss': print(v.simple_value) Example: Continuously check for new summary values. summaries = tf.compat.v1.train.summary_iterator(path to events file) while True: for e in summaries: for v in e.summary.value: if v.tag == 'loss': print(v.simple_value) # Wait for a bit before checking the file for any new events time.sleep(wait time) See the protocol buffer definitions of Event and Summary for more information about their attributes. Args path The path to an event file created by a SummaryWriter. Returns An iterator that yields Event protocol buffers.
tensorflow.compat.v1.train.summary_iterator
tf.compat.v1.train.Supervisor A training helper that checkpoints models and computes summaries. tf.compat.v1.train.Supervisor( graph=None, ready_op=USE_DEFAULT, ready_for_local_init_op=USE_DEFAULT, is_chief=True, init_op=USE_DEFAULT, init_feed_dict=None, local_init_op=USE_DEFAULT, logdir=None, summary_op=USE_DEFAULT, saver=USE_DEFAULT, global_step=USE_DEFAULT, save_summaries_secs=120, save_model_secs=600, recovery_wait_secs=30, stop_grace_secs=120, checkpoint_basename='model.ckpt', session_manager=None, summary_writer=USE_DEFAULT, init_fn=None, local_init_run_options=None ) This class is deprecated. Please use tf.compat.v1.train.MonitoredTrainingSession instead. The Supervisor is a small wrapper around a Coordinator, a Saver, and a SessionManager that takes care of common needs of TensorFlow training programs. Use for a single program with tf.Graph().as_default(): ...add operations to the graph... # Create a Supervisor that will checkpoint the model in '/tmp/mydir'. sv = Supervisor(logdir='/tmp/mydir') # Get a TensorFlow session managed by the supervisor. with sv.managed_session(FLAGS.master) as sess: # Use the session to train the graph. while not sv.should_stop(): sess.run(<my_train_op>) Within the with sv.managed_session() block all variables in the graph have been initialized. In addition, a few services have been started to checkpoint the model and add summaries to the event log. If the program crashes and is restarted, the managed session automatically reinitialize variables from the most recent checkpoint. The supervisor is notified of any exception raised by one of the services. After an exception is raised, should_stop() returns True. In that case the training loop should also stop. This is why the training loop has to check for sv.should_stop(). Exceptions that indicate that the training inputs have been exhausted, tf.errors.OutOfRangeError, also cause sv.should_stop() to return True but are not re-raised from the with block: they indicate a normal termination. Use for multiple replicas To train with replicas you deploy the same program in a Cluster. One of the tasks must be identified as the chief: the task that handles initialization, checkpoints, summaries, and recovery. The other tasks depend on the chief for these services. The only change you have to do to the single program code is to indicate if the program is running as the chief. # Choose a task as the chief. This could be based on server_def.task_index, # or job_def.name, or job_def.tasks. It's entirely up to the end user. # But there can be only one *chief*. is_chief = (server_def.task_index == 0) server = tf.distribute.Server(server_def) with tf.Graph().as_default(): ...add operations to the graph... # Create a Supervisor that uses log directory on a shared file system. # Indicate if you are the 'chief' sv = Supervisor(logdir='/shared_directory/...', is_chief=is_chief) # Get a Session in a TensorFlow server on the cluster. with sv.managed_session(server.target) as sess: # Use the session to train the graph. while not sv.should_stop(): sess.run(<my_train_op>) In the chief task, the Supervisor works exactly as in the first example above. In the other tasks sv.managed_session() waits for the Model to have been initialized before returning a session to the training code. The non-chief tasks depend on the chief task for initializing the model. If one of the tasks crashes and restarts, managed_session() checks if the Model is initialized. 
If yes, it just creates a session and returns it to the training code that proceeds normally. If the model needs to be initialized, the chief task takes care of reinitializing it; the other tasks just wait for the model to have been initialized. Note: This modified program still works fine as a single program. The single program marks itself as the chief. What master string to use Whether you are running on your machine or in the cluster you can use the following values for the --master flag: Specifying '' requests an in-process session that does not use RPC. Specifying 'local' requests a session that uses the RPC-based "Master interface" to run TensorFlow programs. See tf.train.Server.create_local_server for details. Specifying 'grpc://hostname:port' requests a session that uses the RPC interface to a specific host, and also allows the in-process master to access remote tensorflow workers. Often, it is appropriate to pass server.target (for some tf.distribute.Server named `server). Advanced use Launching additional services managed_session() launches the Checkpoint and Summary services (threads). If you need more services to run you can simply launch them in the block controlled by managed_session(). Example: Start a thread to print losses. We want this thread to run every 60 seconds, so we launch it with sv.loop(). ... sv = Supervisor(logdir='/tmp/mydir') with sv.managed_session(FLAGS.master) as sess: sv.loop(60, print_loss, (sess, )) while not sv.should_stop(): sess.run(my_train_op) Launching fewer services managed_session() launches the "summary" and "checkpoint" threads which use either the optionally summary_op and saver passed to the constructor, or default ones created automatically by the supervisor. If you want to run your own summary and checkpointing logic, disable these services by passing None to the summary_op and saver parameters. Example: Create summaries manually every 100 steps in the chief. # Create a Supervisor with no automatic summaries. sv = Supervisor(logdir='/tmp/mydir', is_chief=is_chief, summary_op=None) # As summary_op was None, managed_session() does not start the # summary thread. with sv.managed_session(FLAGS.master) as sess: for step in xrange(1000000): if sv.should_stop(): break if is_chief and step % 100 == 0: # Create the summary every 100 chief steps. sv.summary_computed(sess, sess.run(my_summary_op)) else: # Train normally sess.run(my_train_op) Custom model initialization managed_session() only supports initializing the model by running an init_op or restoring from the latest checkpoint. If you have special initialization needs, see how to specify a local_init_op when creating the supervisor. You can also use the SessionManager directly to create a session and check if it could be initialized automatically. Args graph A Graph. The graph that the model will use. Defaults to the default Graph. The supervisor may add operations to the graph before creating a session, but the graph should not be modified by the caller after passing it to the supervisor. ready_op 1-D string Tensor. This tensor is evaluated by supervisors in prepare_or_wait_for_session() to check if the model is ready to use. The model is considered ready if it returns an empty array. Defaults to the tensor returned from tf.compat.v1.report_uninitialized_variables() If None, the model is not checked for readiness. ready_for_local_init_op 1-D string Tensor. This tensor is evaluated by supervisors in prepare_or_wait_for_session() to check if the model is ready to run the local_init_op. 
The model is considered ready if it returns an empty array. Defaults to None. If None, the model is not checked for readiness before running local_init_op. is_chief If True, create a chief supervisor in charge of initializing and restoring the model. If False, create a supervisor that relies on a chief supervisor for inits and restore. init_op Operation. Used by chief supervisors to initialize the model when it cannot be recovered. Defaults to an Operation that initializes all global variables. If None, no initialization is done automatically unless you pass a value for init_fn, see below. init_feed_dict A dictionary that maps Tensor objects to feed values. This feed dictionary will be used when init_op is evaluated. local_init_op Operation. Used by all supervisors to run initializations that should run for every new supervisor instance. By default these are table initializers and initializers for local variables. If None, no further per supervisor-instance initialization is done automatically. logdir A string. Optional path to a directory in which to checkpoint the model and log events for the visualizer. Used by chief supervisors. The directory will be created if it does not exist. summary_op An Operation that returns a Summary for the event logs. Used by chief supervisors if a logdir was specified. Defaults to the operation returned from summary.merge_all(). If None, summaries are not computed automatically. saver A Saver object. Used by chief supervisors if a logdir was specified. Defaults to the Saver returned by Saver(). If None, the model is not saved automatically. global_step An integer Tensor of size 1 that counts steps. The value from 'global_step' is used in summaries and checkpoint filenames. Defaults to the op named 'global_step' in the graph if it exists, is of rank 1, size 1, and of type tf.int32 or tf.int64. If None, the global step is not recorded in summaries and checkpoint files. Used by chief supervisors if a logdir was specified. save_summaries_secs Number of seconds between the computation of summaries for the event log. Defaults to 120 seconds. Pass 0 to disable summaries. save_model_secs Number of seconds between the creation of model checkpoints. Defaults to 600 seconds. Pass 0 to disable checkpoints. recovery_wait_secs Number of seconds between checks that the model is ready. Used by supervisors when waiting for a chief supervisor to initialize or restore the model. Defaults to 30 seconds. stop_grace_secs Grace period, in seconds, given to running threads to stop when stop() is called. Defaults to 120 seconds. checkpoint_basename The basename for checkpoint saving. session_manager SessionManager, which manages Session creation and recovery. If it is None, a default SessionManager will be created with the set of arguments passed in for backwards compatibility. summary_writer SummaryWriter to use or USE_DEFAULT. Can be None to indicate that no summaries should be written. init_fn Optional callable used to initialize the model. Called after the optional init_op is called. The callable must accept one argument, the session being initialized. local_init_run_options RunOptions to be passed as the SessionManager local_init_run_options parameter. Raises RuntimeError If called with eager execution enabled. Attributes coord Return the Coordinator used by the Supervisor. The Coordinator can be useful if you want to run multiple threads during your training. global_step Return the global_step Tensor used by the supervisor.
init_feed_dict Return the feed dictionary used when evaluating the init_op. init_op Return the Init Op used by the supervisor. is_chief Return True if this is a chief supervisor. ready_for_local_init_op ready_op Return the Ready Op used by the supervisor. save_model_secs Return the delay between checkpoints. save_path Return the save path used by the supervisor. save_summaries_secs Return the delay between summary computations. saver Return the Saver used by the supervisor. session_manager Return the SessionManager used by the Supervisor. summary_op Return the Summary Tensor used by the chief supervisor. summary_writer Return the SummaryWriter used by the chief supervisor. Methods Loop View source Loop( timer_interval_secs, target, args=None, kwargs=None ) Start a LooperThread that calls a function periodically. If timer_interval_secs is None the thread calls target(*args, **kwargs) repeatedly. Otherwise it calls it every timer_interval_secs seconds. The thread terminates when a stop is requested. The started thread is added to the list of threads managed by the supervisor so it does not need to be passed to the stop() method. Args timer_interval_secs Number. Time boundaries at which to call target. target A callable object. args Optional arguments to pass to target when calling it. kwargs Optional keyword arguments to pass to target when calling it. Returns The started thread. PrepareSession View source PrepareSession( master='', config=None, wait_for_checkpoint=False, max_wait_secs=7200, start_standard_services=True ) Make sure the model is ready to be used. Create a session on 'master', recovering or initializing the model as needed, or wait for a session to be ready. If running as the chief and start_standard_services is set to True, also call the session manager to start the standard services. Args master name of the TensorFlow master to use. See the tf.compat.v1.Session constructor for how this is interpreted. config Optional ConfigProto proto used to configure the session, which is passed as-is to create the session. wait_for_checkpoint Whether we should wait for the availability of a checkpoint before creating a Session. Defaults to False. max_wait_secs Maximum time to wait for the session to become available. start_standard_services Whether to start the standard services and the queue runners. Returns A Session object that can be used to drive the model. RequestStop View source RequestStop( ex=None ) Request that the coordinator stop the threads. See Coordinator.request_stop(). Args ex Optional Exception, or Python exc_info tuple as returned by sys.exc_info(). If this is the first call to request_stop() the corresponding exception is recorded and re-raised from join(). ShouldStop View source ShouldStop() Check if the coordinator was told to stop. See Coordinator.should_stop(). Returns True if the coordinator was told to stop, False otherwise. StartQueueRunners View source StartQueueRunners( sess, queue_runners=None ) Start threads for QueueRunners. Note that the queue runners collected in the graph key QUEUE_RUNNERS are already started automatically when you create a session with the supervisor, so unless you have non-collected queue runners to start you do not need to call this explicitly. Args sess A Session. queue_runners A list of QueueRunners. If not specified, we'll use the list of queue runners gathered in the graph under the key GraphKeys.QUEUE_RUNNERS. Returns The list of threads started for the QueueRunners. Raises RuntimeError If called with eager execution enabled.
Eager Compatibility Queues are not compatible with eager execution. To ingest data when eager execution is enabled, use the tf.data API. StartStandardServices View source StartStandardServices( sess ) Start the standard services for 'sess'. This starts services in the background. The services started depend on the parameters to the constructor and may include: A Summary thread computing summaries every save_summaries_secs. A Checkpoint thread saving the model every save_model_secs. A StepCounter thread that measures step time. Args sess A Session. Returns A list of threads that are running the standard services. You can use the Supervisor's Coordinator to join these threads with: sv.coord.join() Raises RuntimeError If called with a non-chief Supervisor. ValueError If no logdir was passed to the constructor, as the services need a log directory. Stop View source Stop( threads=None, close_summary_writer=True, ignore_live_threads=False ) Stop the services and the coordinator. This does not close the session. Args threads Optional list of threads to join with the coordinator. If None, defaults to the threads running the standard services, the threads started for QueueRunners, and the threads started by the loop() method. To wait on additional threads, pass the list in this parameter. close_summary_writer Whether to close the summary_writer. Defaults to True if the summary writer was created by the supervisor, False otherwise. ignore_live_threads If True, ignores threads that remain running after a grace period when joining threads via the coordinator, instead of raising a RuntimeError. StopOnException View source StopOnException() Context handler to stop the supervisor when an exception is raised. See Coordinator.stop_on_exception(). Returns A context handler. SummaryComputed View source SummaryComputed( sess, summary, global_step=None ) Indicate that a summary was computed. Args sess A Session object. summary A Summary proto, or a string holding a serialized summary proto. global_step Int. The global step this summary is associated with. If None, it will try to fetch the current step. Raises TypeError if 'summary' is not a Summary proto or a string. RuntimeError if the Supervisor was created without a logdir. WaitForStop View source WaitForStop() Block waiting for the coordinator to stop. loop View source loop( timer_interval_secs, target, args=None, kwargs=None ) Start a LooperThread that calls a function periodically. If timer_interval_secs is None the thread calls target(*args, **kwargs) repeatedly. Otherwise it calls it every timer_interval_secs seconds. The thread terminates when a stop is requested. The started thread is added to the list of threads managed by the supervisor so it does not need to be passed to the stop() method. Args timer_interval_secs Number. Time boundaries at which to call target. target A callable object. args Optional arguments to pass to target when calling it. kwargs Optional keyword arguments to pass to target when calling it. Returns The started thread. managed_session View source @contextlib.contextmanager managed_session( master='', config=None, start_standard_services=True, close_summary_writer=True ) Returns a context manager for a managed session. This context manager creates and automatically recovers a session. It optionally starts the standard services that handle checkpoints and summaries. It monitors exceptions raised from the with block or from the services and stops the supervisor as needed.
The context manager is typically used as follows: def train(): sv = tf.compat.v1.train.Supervisor(...) with sv.managed_session(<master>) as sess: for step in range(..): if sv.should_stop(): break sess.run(<my training op>) ...do other things needed at each training step... An exception raised from the with block or one of the service threads is raised again when the block exits. This is done after stopping all threads and closing the session. For example, an AbortedError exception, raised in case of preemption of one of the workers in a distributed model, is raised again when the block exits. If you want to retry the training loop in case of preemption you can do it as follows: def main(...): while True: try: train() except tf.errors.AbortedError: pass As a special case, exceptions used for control flow, such as OutOfRangeError which reports that input queues are exhausted, are not raised again from the with block: they indicate a clean termination of the training loop and are considered normal termination. Args master name of the TensorFlow master to use. See the tf.compat.v1.Session constructor for how this is interpreted. config Optional ConfigProto proto used to configure the session. Passed as-is to create the session. start_standard_services Whether to start the standard services, such as checkpoint, summary and step counter. close_summary_writer Whether to close the summary writer when closing the session. Defaults to True. Returns A context manager that yields a Session restored from the latest checkpoint or initialized from scratch if no checkpoint exists. The session is closed when the with block exits. prepare_or_wait_for_session View source prepare_or_wait_for_session( master='', config=None, wait_for_checkpoint=False, max_wait_secs=7200, start_standard_services=True ) Make sure the model is ready to be used. Create a session on 'master', recovering or initializing the model as needed, or wait for a session to be ready. If running as the chief and start_standard_services is set to True, also call the session manager to start the standard services. Args master name of the TensorFlow master to use. See the tf.compat.v1.Session constructor for how this is interpreted. config Optional ConfigProto proto used to configure the session, which is passed as-is to create the session. wait_for_checkpoint Whether we should wait for the availability of a checkpoint before creating a Session. Defaults to False. max_wait_secs Maximum time to wait for the session to become available. start_standard_services Whether to start the standard services and the queue runners. Returns A Session object that can be used to drive the model. request_stop View source request_stop( ex=None ) Request that the coordinator stop the threads. See Coordinator.request_stop(). Args ex Optional Exception, or Python exc_info tuple as returned by sys.exc_info(). If this is the first call to request_stop() the corresponding exception is recorded and re-raised from join(). should_stop View source should_stop() Check if the coordinator was told to stop. See Coordinator.should_stop(). Returns True if the coordinator was told to stop, False otherwise. start_queue_runners View source start_queue_runners( sess, queue_runners=None ) Start threads for QueueRunners. Note that the queue runners collected in the graph key QUEUE_RUNNERS are already started automatically when you create a session with the supervisor, so unless you have non-collected queue runners to start you do not need to call this explicitly. Args sess A Session.
queue_runners A list of QueueRunners. If not specified, we'll use the list of queue runners gathered in the graph under the key GraphKeys.QUEUE_RUNNERS. Returns The list of threads started for the QueueRunners. Raises RuntimeError If called with eager execution enabled. Eager Compatibility Queues are not compatible with eager execution. To ingest data when eager execution is enabled, use the tf.data API. start_standard_services View source start_standard_services( sess ) Start the standard services for 'sess'. This starts services in the background. The services started depend on the parameters to the constructor and may include: A Summary thread computing summaries every save_summaries_secs. A Checkpoint thread saving the model every save_model_secs. A StepCounter thread that measures step time. Args sess A Session. Returns A list of threads that are running the standard services. You can use the Supervisor's Coordinator to join these threads with: sv.coord.join() Raises RuntimeError If called with a non-chief Supervisor. ValueError If no logdir was passed to the constructor, as the services need a log directory. stop View source stop( threads=None, close_summary_writer=True, ignore_live_threads=False ) Stop the services and the coordinator. This does not close the session. Args threads Optional list of threads to join with the coordinator. If None, defaults to the threads running the standard services, the threads started for QueueRunners, and the threads started by the loop() method. To wait on additional threads, pass the list in this parameter. close_summary_writer Whether to close the summary_writer. Defaults to True if the summary writer was created by the supervisor, False otherwise. ignore_live_threads If True, ignores threads that remain running after a grace period when joining threads via the coordinator, instead of raising a RuntimeError. stop_on_exception View source stop_on_exception() Context handler to stop the supervisor when an exception is raised. See Coordinator.stop_on_exception(). Returns A context handler. summary_computed View source summary_computed( sess, summary, global_step=None ) Indicate that a summary was computed. Args sess A Session object. summary A Summary proto, or a string holding a serialized summary proto. global_step Int. The global step this summary is associated with. If None, it will try to fetch the current step. Raises TypeError if 'summary' is not a Summary proto or a string. RuntimeError if the Supervisor was created without a logdir. wait_for_stop View source wait_for_stop() Block waiting for the coordinator to stop. Class Variables USE_DEFAULT 0
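Example: The custom-initialization path described above can also be handled through init_fn. A minimal, hedged sketch; pretrained_saver, my_train_op and the checkpoint path are placeholders assumed to be defined elsewhere, not part of this API:
def restore_pretrained(sess):
  # init_fn receives the session being initialized and runs after init_op.
  pretrained_saver.restore(sess, '/tmp/pretrained/model.ckpt')

sv = tf.compat.v1.train.Supervisor(logdir='/tmp/mydir', init_fn=restore_pretrained)
with sv.managed_session('') as sess:
  while not sv.should_stop():
    sess.run(my_train_op)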
tensorflow.compat.v1.train.supervisor
tf.compat.v1.train.SyncReplicasOptimizer Class to synchronize, aggregate gradients and pass them to the optimizer. Inherits From: Optimizer tf.compat.v1.train.SyncReplicasOptimizer( opt, replicas_to_aggregate, total_num_replicas=None, variable_averages=None, variables_to_average=None, use_locking=False, name='sync_replicas' ) This class is deprecated. For synchronous training, please use Distribution Strategies. In a typical asynchronous training environment, it's common to have some stale gradients. For example, with N-replica asynchronous training, gradients will be applied to the variables N times independently. Depending on each replica's training speed, some gradients might be calculated from copies of the variable from several steps back (N-1 steps on average). This optimizer avoids stale gradients by collecting gradients from all replicas, averaging them, then applying them to the variables in one shot, after which replicas can fetch the new variables and continue. The following accumulators/queue are created: N gradient accumulators, one per variable to train. Gradients are pushed to them and the chief worker will wait until enough gradients are collected and then average them before applying to variables. The accumulator will drop all stale gradients (more details in the accumulator op). 1 token queue where the optimizer pushes the new global_step value after all variables are updated. The following local variable is created: sync_rep_local_step, one per replica. Compared against the global_step in each accumulator to check for staleness of the gradients. The optimizer adds nodes to the graph to collect gradients and pause the trainers until variables are updated. For the Parameter Server job: An accumulator is created for each variable, and each replica pushes the gradients into the accumulators instead of directly applying them to the variables. Each accumulator averages once enough gradients (replicas_to_aggregate) have been accumulated. Apply the averaged gradients to the variables. Only after all variables have been updated, increment the global step. Only after step 4, push global_step into the token_queue, once for each worker replica. The workers can now fetch the global step, use it to update their local_step variables and start the next batch. Please note that some workers can consume multiple minibatches, while some may not consume even one. This is because each worker fetches minibatches as long as a token exists. If one worker is stuck for some reason and does not consume a token, another worker can use it. For the replicas: Start a step: fetch variables and compute gradients. Once the gradients have been computed, push them into the gradient accumulators. Each accumulator will check the staleness and drop the stale ones. After pushing all the gradients, dequeue an updated value of global_step from the token queue and record that step to its local_step variable. Note that this is effectively a barrier. Start the next batch. Usage # Create any optimizer to update the variables, say a simple SGD: opt = GradientDescentOptimizer(learning_rate=0.1) # Wrap the optimizer with sync_replicas_optimizer with 50 replicas: at each # step the optimizer collects 50 gradients before applying to variables. # Note that if you want to have 2 backup replicas, you can change # total_num_replicas=52 and make sure this number matches how many physical # replicas you started in your job.
opt = tf.compat.v1.train.SyncReplicasOptimizer(opt, replicas_to_aggregate=50, total_num_replicas=50) # Some models have startup_delays to help stabilize the model but when using # sync_replicas training, set it to 0. # Now you can call `minimize()` or `compute_gradients()` and # `apply_gradients()` normally training_op = opt.minimize(total_loss, global_step=self.global_step) # You can create the hook which handles initialization and queues. sync_replicas_hook = opt.make_session_run_hook(is_chief) In the training program, every worker will run the train_op as if not synchronized. with training.MonitoredTrainingSession( master=workers[worker_id].target, is_chief=is_chief, hooks=[sync_replicas_hook]) as mon_sess: while not mon_sess.should_stop(): mon_sess.run(training_op) To use SyncReplicasOptimizer with an Estimator, you need to pass sync_replicas_hook when calling fit. my_estimator = DNNClassifier(..., optimizer=opt) my_estimator.fit(..., hooks=[sync_replicas_hook]) Args opt The actual optimizer that will be used to compute and apply the gradients. Must be one of the Optimizer classes. replicas_to_aggregate number of replicas to aggregate for each variable update. total_num_replicas Total number of tasks/workers/replicas, could be different from replicas_to_aggregate. If total_num_replicas > replicas_to_aggregate: it is backup_replicas + replicas_to_aggregate. If total_num_replicas < replicas_to_aggregate: Replicas compute multiple batches per update to variables. variable_averages Optional ExponentialMovingAverage object, used to maintain moving averages for the variables passed in variables_to_average. variables_to_average a list of variables that need to be averaged. Only needed if variable_averages is passed in. use_locking If True, use locks for the update operation. name string. Optional name of the returned operation. Methods apply_gradients View source apply_gradients( grads_and_vars, global_step=None, name=None ) Apply gradients to variables. This contains most of the synchronization implementation and also wraps the apply_gradients() from the real optimizer. Args grads_and_vars List of (gradient, variable) pairs as returned by compute_gradients(). global_step Optional Variable to increment by one after the variables have been updated. name Optional name for the returned operation. Defaults to the name passed to the Optimizer constructor. Returns train_op The op to dequeue a token so the replicas can exit this batch and start the next one. This is executed by each replica. Raises ValueError If grads_and_vars is empty. ValueError If global_step is not provided, because the staleness cannot be checked otherwise. compute_gradients View source compute_gradients( *args, **kwargs ) Compute gradients of "loss" for the variables in "var_list". This simply wraps the compute_gradients() from the real optimizer. The gradients will be aggregated in apply_gradients() so that the user can modify the gradients, for example clipping with a per-replica global norm, if needed. The global norm with aggregated gradients can be bad as one replica's huge gradients can hurt the gradients from other replicas. Args *args Arguments for compute_gradients(). **kwargs Keyword arguments for compute_gradients(). Returns A list of (gradient, variable) pairs. get_chief_queue_runner View source get_chief_queue_runner() Returns the QueueRunner for the chief to execute. This includes the operations to synchronize replicas: aggregate gradients, apply to variables, increment global step, insert tokens into the token queue.
Note that this can only be called after calling apply_gradients(), which actually generates this queue runner. Returns A QueueRunner for the chief to execute. Raises ValueError If this is called before apply_gradients(). get_init_tokens_op View source get_init_tokens_op( num_tokens=-1 ) Returns the op to fill the sync_token_queue with the tokens. This is supposed to be executed at the beginning of the chief/sync thread so that even if the total_num_replicas is less than replicas_to_aggregate, the model can still proceed as the replicas can compute multiple steps per variable update. Make sure: num_tokens >= replicas_to_aggregate - total_num_replicas. Args num_tokens Number of tokens to add to the queue. Returns An op for the chief/sync replica to fill the token queue. Raises ValueError If this is called before apply_gradients(). ValueError If num_tokens is smaller than replicas_to_aggregate - total_num_replicas. get_name View source get_name() get_slot View source get_slot( *args, **kwargs ) Return a slot named "name" created for "var" by the Optimizer. This simply wraps the get_slot() from the actual optimizer. Args *args Arguments for get_slot(). **kwargs Keyword arguments for get_slot(). Returns The Variable for the slot if it was created, None otherwise. get_slot_names View source get_slot_names( *args, **kwargs ) Return a list of the names of slots created by the Optimizer. This simply wraps the get_slot_names() from the actual optimizer. Args *args Arguments for get_slot_names(). **kwargs Keyword arguments for get_slot_names(). Returns A list of strings. make_session_run_hook View source make_session_run_hook( is_chief, num_tokens=-1 ) Creates a hook to handle SyncReplicasHook ops such as initialization. minimize View source minimize( loss, global_step=None, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None ) Add operations to minimize loss by updating var_list. This method simply combines calls to compute_gradients() and apply_gradients(). If you want to process the gradients before applying them, call compute_gradients() and apply_gradients() explicitly instead of using this function. Args loss A Tensor containing the value to minimize. global_step Optional Variable to increment by one after the variables have been updated. var_list Optional list or tuple of Variable objects to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. name Optional name for the returned operation. grad_loss Optional. A Tensor holding the gradient computed for loss. Returns An Operation that updates the variables in var_list. If global_step was not None, that operation also increments global_step. Raises ValueError If some of the variables are not Variable objects. Eager Compatibility When eager execution is enabled, loss should be a Python function that takes no arguments and computes the value to be minimized. Minimization (and gradient computation) is done with respect to the elements of var_list if not None, else with respect to any trainable variables created during the execution of the loss function.
gate_gradients, aggregation_method, colocate_gradients_with_ops and grad_loss are ignored when eager execution is enabled. variables View source variables() Fetches a list of optimizer variables in the default graph. This wraps variables() from the actual optimizer. It does not include the SyncReplicasOptimizer's local step. Returns A list of variables. Class Variables GATE_GRAPH 2 GATE_NONE 0 GATE_OP 1
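Example: For a lower-level setup that does not use make_session_run_hook, the chief can wire up the token queue and the chief queue runner itself. A hedged, chief-side sketch only; it assumes opt is the wrapped SyncReplicasOptimizer, training_op came from opt.minimize() (so apply_gradients has already run), and server and is_chief are defined elsewhere:
coord = tf.compat.v1.train.Coordinator()
chief_queue_runner = opt.get_chief_queue_runner()  # only valid after apply_gradients()
init_tokens_op = opt.get_init_tokens_op()

with tf.compat.v1.Session(server.target) as sess:
  if is_chief:
    # The chief initializes variables, fills the token queue and starts the
    # thread that aggregates gradients and advances the global step.
    sess.run(tf.compat.v1.global_variables_initializer())
    sess.run(init_tokens_op)
    chief_queue_runner.create_threads(sess, coord=coord, start=True)
  while not coord.should_stop():
    sess.run(training_op)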
tensorflow.compat.v1.train.syncreplicasoptimizer
tf.compat.v1.train.update_checkpoint_state Updates the content of the 'checkpoint' file. (deprecated) tf.compat.v1.train.update_checkpoint_state( save_dir, model_checkpoint_path, all_model_checkpoint_paths=None, latest_filename=None, all_model_checkpoint_timestamps=None, last_preserved_timestamp=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.train.CheckpointManager to manage checkpoints rather than manually editing the Checkpoint proto. This updates the checkpoint file containing a CheckpointState proto. Args save_dir Directory where the model was saved. model_checkpoint_path The checkpoint file. all_model_checkpoint_paths List of strings. Paths to all not-yet-deleted checkpoints, sorted from oldest to newest. If this is a non-empty list, the last element must be equal to model_checkpoint_path. These paths are also saved in the CheckpointState proto. latest_filename Optional name of the checkpoint file. Defaults to 'checkpoint'. all_model_checkpoint_timestamps Optional list of timestamps (floats, seconds since the Epoch) indicating when the checkpoints in all_model_checkpoint_paths were created. last_preserved_timestamp A float, indicating the number of seconds since the Epoch when the last preserved checkpoint was written, e.g. due to a keep_checkpoint_every_n_hours parameter (see tf.train.CheckpointManager for an implementation). Raises RuntimeError If any of the model checkpoint paths conflict with the file containing CheckpointState.
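Example: A minimal sketch of a manual update; the directory and checkpoint names are hypothetical placeholders:
tf.compat.v1.train.update_checkpoint_state(
    save_dir='/tmp/train',
    model_checkpoint_path='/tmp/train/model.ckpt-2000',
    all_model_checkpoint_paths=['/tmp/train/model.ckpt-1000',
                                '/tmp/train/model.ckpt-2000'])
# tf.train.latest_checkpoint('/tmp/train') would now report model.ckpt-2000.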
tensorflow.compat.v1.train.update_checkpoint_state
tf.compat.v1.train.warm_start Warm-starts a model using the given settings. tf.compat.v1.train.warm_start( ckpt_to_initialize_from, vars_to_warm_start='.*', var_name_to_vocab_info=None, var_name_to_prev_var_name=None ) If you are using a tf.estimator.Estimator, this will automatically be called during training. Args ckpt_to_initialize_from [Required] A string specifying the directory with checkpoint file(s) or path to checkpoint from which to warm-start the model parameters. vars_to_warm_start [Optional] One of the following: A regular expression (string) that captures which variables to warm-start (see tf.compat.v1.get_collection). This expression will only consider variables in the TRAINABLE_VARIABLES collection -- if you need to warm-start non-TRAINABLE vars (such as optimizer accumulators or batch norm statistics), please use the option below. A list of strings, each a regex scope provided to tf.compat.v1.get_collection with GLOBAL_VARIABLES (please see tf.compat.v1.get_collection). For backwards compatibility reasons, this is separate from the single-string argument type. A list of Variables to warm-start. If you do not have access to the Variable objects at the call site, please use the above option. None, in which case only TRAINABLE variables specified in var_name_to_vocab_info will be warm-started. Defaults to '.*', which warm-starts all variables in the TRAINABLE_VARIABLES collection. Note that this excludes variables such as accumulators and moving statistics from batch norm. var_name_to_vocab_info [Optional] Dict of variable names (strings) to tf.estimator.VocabInfo. The variable names should be "full" variables, not the names of the partitions. If not explicitly provided, the variable is assumed to have no (changes to) vocabulary. var_name_to_prev_var_name [Optional] Dict of variable names (strings) to name of the previously-trained variable in ckpt_to_initialize_from. If not explicitly provided, the name of the variable is assumed to be the same between the previous checkpoint and the current model. Note that this has no effect on the set of variables that is warm-started, and only controls name mapping (use vars_to_warm_start for controlling what variables to warm-start). Raises ValueError If the WarmStartSettings contains prev_var_name or VocabInfo configuration for variable names that are not used. This is to ensure a stronger check for variable configuration than relying on users to examine the logs.
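Example: A hedged sketch of warm-starting only selected layers; the checkpoint directory and the variable-name regexes are illustrative, not prescribed by this API:
# Warm-start the embedding and dense-layer kernels from a previous run,
# matched as regex scopes against the GLOBAL_VARIABLES collection.
tf.compat.v1.train.warm_start(
    ckpt_to_initialize_from='/tmp/prev_run',
    vars_to_warm_start=['input_layer/.*embedding.*', 'dense/kernel'])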
tensorflow.compat.v1.train.warm_start
tf.compat.v1.train.WorkerSessionCreator Creates a tf.compat.v1.Session for a worker. Inherits From: SessionCreator tf.compat.v1.train.WorkerSessionCreator( scaffold=None, master='', config=None, max_wait_secs=(30 * 60) ) Args scaffold A Scaffold used for gathering or building supportive ops. If not specified a default one is created. It's used to finalize the graph. master String representation of the TensorFlow master to use. config ConfigProto proto used to configure the session. max_wait_secs Maximum time to wait for the session to become available. Methods create_session View source create_session()
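Example: A minimal sketch of a non-chief worker creating its session; the master address is a placeholder. The call blocks until the chief has initialized the model or max_wait_secs elapses:
creator = tf.compat.v1.train.WorkerSessionCreator(
    master='grpc://chief-host:2222', max_wait_secs=600)
sess = creator.create_session()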
tensorflow.compat.v1.train.workersessioncreator
tf.compat.v1.trainable_variables Returns all variables created with trainable=True. tf.compat.v1.trainable_variables( scope=None ) When passed trainable=True, the Variable() constructor automatically adds new variables to the graph collection GraphKeys.TRAINABLE_VARIABLES. This convenience function returns the contents of that collection. Args scope (Optional.) A string. If supplied, the resulting list is filtered to include only items whose name attribute matches scope using re.match. Items without a name attribute are never returned if a scope is supplied. The choice of re.match means that a scope without special tokens filters by prefix. Returns A list of Variable objects.
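Example: A short sketch in graph mode; the variable names are illustrative:
with tf.compat.v1.variable_scope('dense'):
  w = tf.compat.v1.get_variable('kernel', shape=[3, 2])   # added to TRAINABLE_VARIABLES
step = tf.compat.v1.get_variable('global_step', shape=[], dtype=tf.int64,
                                 initializer=tf.compat.v1.zeros_initializer(),
                                 trainable=False)          # not collected
tf.compat.v1.trainable_variables()          # -> [w]
tf.compat.v1.trainable_variables('dense')   # -> [w]; filtered by prefix via re.match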
tensorflow.compat.v1.trainable_variables
tf.compat.v1.transpose Transposes a. tf.compat.v1.transpose( a, perm=None, name='transpose', conjugate=False ) Permutes the dimensions according to perm. The returned tensor's dimension i will correspond to the input dimension perm[i]. If perm is not given, it is set to (n-1...0), where n is the rank of the input tensor. Hence by default, this operation performs a regular matrix transpose on 2-D input Tensors. If conjugate is True and a.dtype is either complex64 or complex128 then the values of a are conjugated and transposed. For example: x = tf.constant([[1, 2, 3], [4, 5, 6]]) tf.transpose(x) # [[1, 4] # [2, 5] # [3, 6]] # Equivalently tf.transpose(x, perm=[1, 0]) # [[1, 4] # [2, 5] # [3, 6]] # If x is complex, setting conjugate=True gives the conjugate transpose x = tf.constant([[1 + 1j, 2 + 2j, 3 + 3j], [4 + 4j, 5 + 5j, 6 + 6j]]) tf.transpose(x, conjugate=True) # [[1 - 1j, 4 - 4j], # [2 - 2j, 5 - 5j], # [3 - 3j, 6 - 6j]] # 'perm' is more useful for n-dimensional tensors, for n > 2 x = tf.constant([[[ 1, 2, 3], [ 4, 5, 6]], [[ 7, 8, 9], [10, 11, 12]]]) # Take the transpose of the matrices in dimension-0 # (this common operation has a shorthand `linalg.matrix_transpose`) tf.transpose(x, perm=[0, 2, 1]) # [[[1, 4], # [2, 5], # [3, 6]], # [[7, 10], # [8, 11], # [9, 12]]] Args a A Tensor. perm A permutation of the dimensions of a. name A name for the operation (optional). conjugate Optional bool. Setting it to True is mathematically equivalent to tf.math.conj(tf.transpose(input)). Returns A transposed Tensor. Numpy Compatibility In numpy transposes are memory-efficient constant time operations as they simply return a new view of the same data with adjusted strides. TensorFlow does not support strides, so transpose returns a new tensor with the items permuted.
tensorflow.compat.v1.transpose
tf.compat.v1.truncated_normal_initializer Initializer that generates a truncated normal distribution. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.initializers.truncated_normal tf.compat.v1.truncated_normal_initializer( mean=0.0, stddev=1.0, seed=None, dtype=tf.dtypes.float32 ) These values are similar to values from a random_normal_initializer except that values more than two standard deviations from the mean are discarded and re-drawn. This is the recommended initializer for neural network weights and filters. Args mean a python scalar or a scalar tensor. Mean of the random values to generate. stddev a python scalar or a scalar tensor. Standard deviation of the random values to generate. seed A Python integer. Used to create random seeds. See tf.compat.v1.set_random_seed for behavior. dtype Default data type, used if no dtype argument is provided when calling the initializer. Only floating point types are supported. Methods from_config View source @classmethod from_config( config ) Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1) config = initializer.get_config() initializer = RandomUniform.from_config(config) Args config A Python dictionary. It will typically be the output of get_config. Returns An Initializer instance. get_config View source get_config() Returns the configuration of the initializer as a JSON-serializable dict. Returns A JSON-serializable Python dict. __call__ View source __call__( shape, dtype=None, partition_info=None ) Returns a tensor object initialized as specified by the initializer. Args shape Shape of the tensor. dtype Optional dtype of the tensor. If not provided use the initializer dtype. partition_info Optional information about the possible partitioning of a tensor.
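Example: A short sketch of using the initializer for a weight matrix; the variable name and shape are illustrative:
init = tf.compat.v1.truncated_normal_initializer(mean=0.0, stddev=0.05, seed=42)
w = tf.compat.v1.get_variable('w', shape=[784, 256], initializer=init)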
tensorflow.compat.v1.truncated_normal_initializer
tf.compat.v1.tuple Group tensors together. tf.compat.v1.tuple( tensors, name=None, control_inputs=None ) This creates a tuple of tensors with the same values as the tensors argument, except that the value of each tensor is only returned after the values of all tensors have been computed. control_inputs contains additional ops that have to finish before this op finishes, but whose outputs are not returned. This can be used as a "join" mechanism for parallel computations: all the argument tensors can be computed in parallel, but the values of any tensor returned by tuple are only available after all the parallel computations are done. See also tf.group and tf.control_dependencies. Args tensors A list of Tensors or IndexedSlices, some entries can be None. name (optional) A name to use as a name_scope for the operation. control_inputs List of additional ops to finish before returning. Returns Same as tensors. Raises ValueError If tensors does not contain any Tensor or IndexedSlices. TypeError If control_inputs is not a list of Operation or Tensor objects.
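Example: A minimal sketch of tf.compat.v1.tuple as a join point in graph mode: both returned tensors become available only after a and b have been computed and the extra update_op has run:
a = tf.constant(1.0) * 2.0
b = tf.constant(3.0) + 4.0
counter = tf.Variable(0)
update_op = counter.assign_add(1)   # extra op that must finish first
a_out, b_out = tf.compat.v1.tuple([a, b], control_inputs=[update_op])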
tensorflow.compat.v1.tuple
Module: tf.compat.v1.types Public TensorFlow type definitions. For details, see https://github.com/tensorflow/community/blob/master/rfcs/20200211-tf-types.md Modules experimental module: Public API for tf.types.experimental namespace.
tensorflow.compat.v1.types
Module: tf.compat.v1.types.experimental Public API for tf.types.experimental namespace. Type Aliases TensorLike: Union of all types that can be converted to a tf.Tensor by tf.convert_to_tensor.
tensorflow.compat.v1.types.experimental
tf.compat.v1.uniform_unit_scaling_initializer Initializer that generates tensors without scaling variance. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.initializers.uniform_unit_scaling tf.compat.v1.uniform_unit_scaling_initializer( factor=1.0, seed=None, dtype=tf.dtypes.float32 ) When initializing a deep network, it is in principle advantageous to keep the scale of the input variance constant, so it does not explode or diminish by reaching the final layer. If the input is x and the operation x * W, and we want to initialize W uniformly at random, we need to pick W from [-sqrt(3) / sqrt(dim), sqrt(3) / sqrt(dim)] to keep the scale intact, where dim = W.shape[0] (the size of the input). A similar calculation for convolutional networks gives an analogous result with dim equal to the product of the first 3 dimensions. When nonlinearities are present, we need to multiply this by a constant factor. See (Sussillo et al., 2014) for deeper motivation, experiments and the calculation of constants. In section 2.3 there, the constants were numerically computed: for a linear layer it's 1.0, relu: ~1.43, tanh: ~1.15. Args factor Float. A multiplicative factor by which the values will be scaled. seed A Python integer. Used to create random seeds. See tf.compat.v1.set_random_seed for behavior. dtype Default data type, used if no dtype argument is provided when calling the initializer. Only floating point types are supported. References: Sussillo et al., 2014 (pdf) Methods from_config View source @classmethod from_config( config ) Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1) config = initializer.get_config() initializer = RandomUniform.from_config(config) Args config A Python dictionary. It will typically be the output of get_config. Returns An Initializer instance. get_config View source get_config() Returns the configuration of the initializer as a JSON-serializable dict. Returns A JSON-serializable Python dict. __call__ View source __call__( shape, dtype=None, partition_info=None ) Returns a tensor object initialized as specified by the initializer. Args shape Shape of the tensor. dtype Optional dtype of the tensor. If not provided use the initializer dtype. partition_info Optional information about the possible partitioning of a tensor.
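Example: A brief sketch using the relu factor mentioned above; the variable name and shape are illustrative:
init = tf.compat.v1.uniform_unit_scaling_initializer(factor=1.43)
w = tf.compat.v1.get_variable('relu_layer_w', shape=[128, 64], initializer=init)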
tensorflow.compat.v1.uniform_unit_scaling_initializer
Module: tf.compat.v1.user_ops Public API for tf.user_ops namespace. Functions my_fact(...): Example of overriding the generated code for an Op.
tensorflow.compat.v1.user_ops
tf.compat.v1.user_ops.my_fact Example of overriding the generated code for an Op. tf.compat.v1.user_ops.my_fact()
tensorflow.compat.v1.user_ops.my_fact
tf.compat.v1.Variable See the Variables Guide. Inherits From: Variable tf.compat.v1.Variable( initial_value=None, trainable=None, collections=None, validate_shape=True, caching_device=None, name=None, variable_def=None, dtype=None, expected_shape=None, import_scope=None, constraint=None, use_resource=None, synchronization=tf.VariableSynchronization.AUTO, aggregation=tf.compat.v1.VariableAggregation.NONE, shape=None ) A variable maintains state in the graph across calls to run(). You add a variable to the graph by constructing an instance of the class Variable. The Variable() constructor requires an initial value for the variable, which can be a Tensor of any type and shape. The initial value defines the type and shape of the variable. After construction, the type and shape of the variable are fixed. The value can be changed using one of the assign methods. If you want to change the shape of a variable later you have to use an assign Op with validate_shape=False. Just like any Tensor, variables created with Variable() can be used as inputs for other Ops in the graph. Additionally, all the operators overloaded for the Tensor class are carried over to variables, so you can also add nodes to the graph by just doing arithmetic on variables. import tensorflow as tf # Create a variable. w = tf.Variable(<initial-value>, name=<optional-name>) # Use the variable in the graph like any Tensor. y = tf.matmul(w, ...another variable or tensor...) # The overloaded operators are available too. z = tf.sigmoid(w + y) # Assign a new value to the variable with `assign()` or a related method. w.assign(w + 1.0) w.assign_add(1.0) When you launch the graph, variables have to be explicitly initialized before you can run Ops that use their value. You can initialize a variable by running its initializer op, restoring the variable from a save file, or simply running an assign Op that assigns a value to the variable. In fact, the variable initializer op is just an assign Op that assigns the variable's initial value to the variable itself. # Launch the graph in a session. with tf.compat.v1.Session() as sess: # Run the variable initializer. sess.run(w.initializer) # ...you now can run ops that use the value of 'w'... The most common initialization pattern is to use the convenience function global_variables_initializer() to add an Op to the graph that initializes all the variables. You then run that Op after launching the graph. # Add an Op to initialize global variables. init_op = tf.compat.v1.global_variables_initializer() # Launch the graph in a session. with tf.compat.v1.Session() as sess: # Run the Op that initializes global variables. sess.run(init_op) # ...you can now run any Op that uses variable values... If you need to create a variable with an initial value dependent on another variable, use the other variable's initialized_value(). This ensures that variables are initialized in the right order. All variables are automatically collected in the graph where they are created. By default, the constructor adds the new variable to the graph collection GraphKeys.GLOBAL_VARIABLES. The convenience function global_variables() returns the contents of that collection. When building a machine learning model it is often convenient to distinguish between variables holding the trainable model parameters and other variables such as a global step variable used to count training steps. To make this easier, the variable constructor supports a trainable=<bool> parameter. 
If True, the new variable is also added to the graph collection GraphKeys.TRAINABLE_VARIABLES. The convenience function trainable_variables() returns the contents of this collection. The various Optimizer classes use this collection as the default list of variables to optimize. Warning: tf.Variable objects by default have a non-intuitive memory model. A Variable is represented internally as a mutable Tensor which can non-deterministically alias other Tensors in a graph. The set of operations which consume a Variable and can lead to aliasing is undetermined and can change across TensorFlow versions. Avoid writing code which relies on the value of a Variable either changing or not changing as other operations happen. For example, using Variable objects or simple functions thereof as predicates in a tf.cond is dangerous and error-prone: v = tf.Variable(True) tf.cond(v, lambda: v.assign(False), my_false_fn) # Note: this is broken. Here, adding use_resource=True when constructing the variable will fix any nondeterminism issues: v = tf.Variable(True, use_resource=True) tf.cond(v, lambda: v.assign(False), my_false_fn) To use the replacement for variables which does not have these issues: Add use_resource=True when constructing tf.Variable; Call tf.compat.v1.get_variable_scope().set_use_resource(True) inside a tf.compat.v1.variable_scope before the tf.compat.v1.get_variable() call. Args initial_value A Tensor, or Python object convertible to a Tensor, which is the initial value for the Variable. The initial value must have a shape specified unless validate_shape is set to False. Can also be a callable with no argument that returns the initial value when called. In that case, dtype must be specified. (Note that initializer functions from init_ops.py must first be bound to a shape before being used here.) trainable If True, also adds the variable to the graph collection GraphKeys.TRAINABLE_VARIABLES. This collection is used as the default list of variables to use by the Optimizer classes. Defaults to True, unless synchronization is set to ON_READ, in which case it defaults to False. collections List of graph collection keys. The new variable is added to these collections. Defaults to [GraphKeys.GLOBAL_VARIABLES]. validate_shape If False, allows the variable to be initialized with a value of unknown shape. If True, the default, the shape of initial_value must be known. caching_device Optional device string describing where the Variable should be cached for reading. Defaults to the Variable's device. If not None, caches on another device. Typical use is to cache on the device where the Ops using the Variable reside, to deduplicate copying through Switch and other conditional statements. name Optional name for the variable. Defaults to 'Variable' and gets uniquified automatically. variable_def VariableDef protocol buffer. If not None, recreates the Variable object with its contents, referencing the variable's nodes in the graph, which must already exist. The graph is not changed. variable_def and the other arguments are mutually exclusive. dtype If set, initial_value will be converted to the given type. If None, either the datatype will be kept (if initial_value is a Tensor), or convert_to_tensor will decide. expected_shape A TensorShape. If set, initial_value is expected to have this shape. import_scope Optional string. Name scope to add to the Variable. Only used when initializing from protocol buffer.
constraint An optional projection function to be applied to the variable after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected Tensor representing the value of the variable and return the Tensor for the projected value (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training. use_resource whether to use resource variables. synchronization Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize. aggregation Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation. shape (optional) The shape of this variable. If None, the shape of initial_value will be used. When setting this argument to tf.TensorShape(None) (representing an unspecified shape), the variable can be assigned with values of different shapes. Raises ValueError If both variable_def and initial_value are specified. ValueError If the initial value is not specified, or does not have a shape and validate_shape is True. RuntimeError If eager execution is enabled. Attributes aggregation constraint Returns the constraint function associated with this variable. device The device of this variable. dtype The DType of this variable. graph The Graph of this variable. initial_value Returns the Tensor used as the initial value for the variable. Note that this is different from initialized_value() which runs the op that initializes the variable before returning its value. This method returns the tensor that is used by the op that initializes the variable. initializer The initializer operation for this variable. name The name of this variable. op The Operation of this variable. shape The TensorShape of this variable. synchronization trainable Child Classes class SaveSliceInfo Methods assign View source assign( value, use_locking=False, name=None, read_value=True ) Assigns a new value to the variable. This is essentially a shortcut for assign(self, value). Args value A Tensor. The new value for this variable. use_locking If True, use locking during the assignment. name The name of the operation to be created. read_value if True, will return something which evaluates to the new value of the variable; if False will return the assign op. Returns The updated variable. If read_value is false, instead returns None in Eager mode and the assign op in graph mode. assign_add View source assign_add( delta, use_locking=False, name=None, read_value=True ) Adds a value to this variable. This is essentially a shortcut for assign_add(self, delta). Args delta A Tensor. The value to add to this variable. use_locking If True, use locking during the operation. name The name of the operation to be created. read_value if True, will return something which evaluates to the new value of the variable; if False will return the assign op. Returns The updated variable. If read_value is false, instead returns None in Eager mode and the assign op in graph mode. assign_sub View source assign_sub( delta, use_locking=False, name=None, read_value=True ) Subtracts a value from this variable. This is essentially a shortcut for assign_sub(self, delta). Args delta A Tensor. The value to subtract from this variable.
use_locking If True, use locking during the operation. name The name of the operation to be created. read_value if True, will return something which evaluates to the new value of the variable; if False will return the assign op. Returns The updated variable. If read_value is false, instead returns None in Eager mode and the assign op in graph mode. batch_scatter_update View source batch_scatter_update( sparse_delta, use_locking=False, name=None ) Assigns tf.IndexedSlices to this variable batch-wise. Analogous to batch_gather. This assumes that this variable and the sparse_delta IndexedSlices have a series of leading dimensions that are the same for all of them, and the updates are performed on the last dimension of indices. In other words, the dimensions should be the following: num_prefix_dims = sparse_delta.indices.ndims - 1 batch_dim = num_prefix_dims + 1 sparse_delta.updates.shape = sparse_delta.indices.shape + var.shape[ batch_dim:] where sparse_delta.updates.shape[:num_prefix_dims] == sparse_delta.indices.shape[:num_prefix_dims] == var.shape[:num_prefix_dims] And the operation performed can be expressed as: var[i_1, ..., i_n, sparse_delta.indices[i_1, ..., i_n, j]] = sparse_delta.updates[ i_1, ..., i_n, j] When sparse_delta.indices is a 1D tensor, this operation is equivalent to scatter_update. To avoid this operation one can loop over the first ndims of the variable and use scatter_update on the subtensors that result from slicing the first dimension. This is a valid option for ndims = 1, but less efficient than this implementation. Args sparse_delta tf.IndexedSlices to be assigned to this variable. use_locking If True, use locking during the operation. name the name of the operation. Returns The updated variable. Raises TypeError if sparse_delta is not an IndexedSlices. count_up_to View source count_up_to( limit ) Increments this variable until it reaches limit. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Prefer Dataset.range instead. When that Op is run it tries to increment the variable by 1. If incrementing the variable would bring it above limit then the Op raises the exception OutOfRangeError. If no error is raised, the Op outputs the value of the variable before the increment. This is essentially a shortcut for count_up_to(self, limit). Args limit value at which incrementing the variable raises an error. Returns A Tensor that will hold the variable value before the increment. If no other Op modifies this variable, the values produced will all be distinct. eval View source eval( session=None ) In a session, computes and returns the value of this variable. This is not a graph construction method, it does not add ops to the graph. This convenience method requires a session where the graph containing this variable has been launched. If no session is passed, the default session is used. See tf.compat.v1.Session for more information on launching a graph and on sessions. v = tf.Variable([1, 2]) init = tf.compat.v1.global_variables_initializer() with tf.compat.v1.Session() as sess: sess.run(init) # Usage passing the session explicitly. print(v.eval(sess)) # Usage with the default session. The 'with' block # above makes 'sess' the default session. print(v.eval()) Args session The session to use to evaluate this variable. If none, the default session is used. Returns A numpy ndarray with a copy of the value of this variable.
experimental_ref View source experimental_ref() DEPRECATED FUNCTION Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use ref() instead. from_proto View source @staticmethod from_proto( variable_def, import_scope=None ) Returns a Variable object created from variable_def. gather_nd View source gather_nd( indices, name=None ) Gather slices from params into a Tensor with shape specified by indices. See tf.gather_nd for details. Args indices A Tensor. Must be one of the following types: int32, int64. Index tensor. name A name for the operation (optional). Returns A Tensor. Has the same type as params. get_shape View source get_shape() Alias of Variable.shape. initialized_value View source initialized_value() Returns the value of the initialized variable. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts. You should use this instead of the variable itself to initialize another variable with a value that depends on the value of this variable. # Initialize 'v' with a random tensor. v = tf.Variable(tf.random.truncated_normal([10, 40])) # Use `initialized_value` to guarantee that `v` has been # initialized before its value is used to initialize `w`. # The random values are picked only once. w = tf.Variable(v.initialized_value() * 2.0) Returns A Tensor holding the value of this variable after its initializer has run. load View source load( value, session=None ) Load new value into this variable. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Prefer Variable.assign which has equivalent behavior in 2.X. Writes new value to variable's memory. Doesn't add ops to the graph. This convenience method requires a session where the graph containing this variable has been launched. If no session is passed, the default session is used. See tf.compat.v1.Session for more information on launching a graph and on sessions. v = tf.Variable([1, 2]) init = tf.compat.v1.global_variables_initializer() with tf.compat.v1.Session() as sess: sess.run(init) # Usage passing the session explicitly. v.load([2, 3], sess) print(v.eval(sess)) # prints [2 3] # Usage with the default session. The 'with' block # above makes 'sess' the default session. v.load([3, 4]) print(v.eval()) # prints [3 4] Args value New variable value. session The session to use to evaluate this variable. If none, the default session is used. Raises ValueError If no session is passed and there is no default session. read_value View source read_value() Returns the value of this variable, read in the current context. Can be different from value() if it's on another device, with control dependencies, etc. Returns A Tensor containing the value of the variable. ref View source ref() Returns a hashable reference object to this Variable. The primary use case for this API is to put variables in a set/dictionary. We can't put variables in a set/dictionary as variable.__hash__() is no longer available starting in TensorFlow 2.0. The following will raise an exception starting in 2.0 x = tf.Variable(5) y = tf.Variable(10) z = tf.Variable(10) variable_set = {x, y, z} Traceback (most recent call last): TypeError: Variable is unhashable. Instead, use tensor.ref() as the key.
variable_dict = {x: 'five', y: 'ten'} Traceback (most recent call last): TypeError: Variable is unhashable. Instead, use tensor.ref() as the key. Instead, we can use variable.ref(). variable_set = {x.ref(), y.ref(), z.ref()} x.ref() in variable_set True variable_dict = {x.ref(): 'five', y.ref(): 'ten', z.ref(): 'ten'} variable_dict[y.ref()] 'ten' Also, the reference object provides a .deref() function that returns the original Variable. x = tf.Variable(5) x.ref().deref() <tf.Variable 'Variable:0' shape=() dtype=int32, numpy=5> scatter_add View source scatter_add( sparse_delta, use_locking=False, name=None ) Adds tf.IndexedSlices to this variable. Args sparse_delta tf.IndexedSlices to be added to this variable. use_locking If True, use locking during the operation. name the name of the operation. Returns The updated variable. Raises TypeError if sparse_delta is not an IndexedSlices. scatter_div View source scatter_div( sparse_delta, use_locking=False, name=None ) Divide this variable by tf.IndexedSlices. Args sparse_delta tf.IndexedSlices to divide this variable by. use_locking If True, use locking during the operation. name the name of the operation. Returns The updated variable. Raises TypeError if sparse_delta is not an IndexedSlices. scatter_max View source scatter_max( sparse_delta, use_locking=False, name=None ) Updates this variable with the max of tf.IndexedSlices and itself. Args sparse_delta tf.IndexedSlices to use as an argument of max with this variable. use_locking If True, use locking during the operation. name the name of the operation. Returns The updated variable. Raises TypeError if sparse_delta is not an IndexedSlices. scatter_min View source scatter_min( sparse_delta, use_locking=False, name=None ) Updates this variable with the min of tf.IndexedSlices and itself. Args sparse_delta tf.IndexedSlices to use as an argument of min with this variable. use_locking If True, use locking during the operation. name the name of the operation. Returns The updated variable. Raises TypeError if sparse_delta is not an IndexedSlices. scatter_mul View source scatter_mul( sparse_delta, use_locking=False, name=None ) Multiply this variable by tf.IndexedSlices. Args sparse_delta tf.IndexedSlices to multiply this variable by. use_locking If True, use locking during the operation. name the name of the operation. Returns The updated variable. Raises TypeError if sparse_delta is not an IndexedSlices. scatter_nd_add View source scatter_nd_add( indices, updates, name=None ) Applies sparse addition to individual values or slices in a Variable. The Variable has rank P and indices is a Tensor of rank Q. indices must be an integer tensor, containing indices into self. It must be shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P. The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of self. updates is a Tensor of rank Q-1+P-K with shape: [d_0, ..., d_{Q-2}, self.shape[K], ..., self.shape[P-1]]. For example, say we want to add 4 scattered elements to a rank-1 tensor with 8 elements. In Python, that update would look like this: v = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8]) indices = tf.constant([[4], [3], [1], [7]]) updates = tf.constant([9, 10, 11, 12]) add = v.scatter_nd_add(indices, updates) with tf.compat.v1.Session() as sess: print(sess.run(add)) The resulting update to v would look like this: [1, 13, 3, 14, 14, 6, 7, 20] See tf.scatter_nd for more details about how to make updates to slices.
Args indices The indices to be used in the operation. updates The values to be used in the operation. name the name of the operation. Returns The updated variable. scatter_nd_sub View source scatter_nd_sub( indices, updates, name=None ) Applies sparse subtraction to individual values or slices in a Variable. Assume the variable has rank P and indices is a Tensor of rank Q. indices must be an integer tensor, containing indices into self. It must be shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P. The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of self. updates is a Tensor of rank Q-1+P-K with shape: [d_0, ..., d_{Q-2}, self.shape[K], ..., self.shape[P-1]]. For example, say we want to subtract 4 scattered elements from a rank-1 tensor with 8 elements. In Python, that update would look like this: v = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8]) indices = tf.constant([[4], [3], [1], [7]]) updates = tf.constant([9, 10, 11, 12]) op = v.scatter_nd_sub(indices, updates) with tf.compat.v1.Session() as sess: print(sess.run(op)) The resulting update to v would look like this: [1, -9, 3, -6, -6, 6, 7, -4] See tf.scatter_nd for more details about how to make updates to slices. Args indices The indices to be used in the operation. updates The values to be used in the operation. name the name of the operation. Returns The updated variable. scatter_nd_update View source scatter_nd_update( indices, updates, name=None ) Applies sparse assignment to individual values or slices in a Variable. The Variable has rank P and indices is a Tensor of rank Q. indices must be an integer tensor, containing indices into self. It must be shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P. The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of self. updates is a Tensor of rank Q-1+P-K with shape: [d_0, ..., d_{Q-2}, self.shape[K], ..., self.shape[P-1]]. For example, say we want to assign 4 scattered elements to a rank-1 tensor with 8 elements. In Python, that update would look like this: v = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8]) indices = tf.constant([[4], [3], [1], [7]]) updates = tf.constant([9, 10, 11, 12]) op = v.scatter_nd_update(indices, updates) with tf.compat.v1.Session() as sess: print(sess.run(op)) The resulting update to v would look like this: [1, 11, 3, 10, 9, 6, 7, 12] See tf.scatter_nd for more details about how to make updates to slices. Args indices The indices to be used in the operation. updates The values to be used in the operation. name the name of the operation. Returns The updated variable. scatter_sub View source scatter_sub( sparse_delta, use_locking=False, name=None ) Subtracts tf.IndexedSlices from this variable. Args sparse_delta tf.IndexedSlices to be subtracted from this variable. use_locking If True, use locking during the operation. name the name of the operation. Returns The updated variable. Raises TypeError if sparse_delta is not an IndexedSlices. scatter_update View source scatter_update( sparse_delta, use_locking=False, name=None ) Assigns tf.IndexedSlices to this variable. Args sparse_delta tf.IndexedSlices to be assigned to this variable. use_locking If True, use locking during the operation. name the name of the operation. Returns The updated variable. Raises TypeError if sparse_delta is not an IndexedSlices. set_shape View source set_shape( shape ) Overrides the shape for this variable.
Args shape the TensorShape representing the overridden shape. sparse_read View source sparse_read( indices, name=None ) Gather slices from params axis axis according to indices. This function supports a subset of tf.gather, see tf.gather for details on usage. Args indices The index Tensor. Must be one of the following types: int32, int64. Must be in range [0, params.shape[axis]). name A name for the operation (optional). Returns A Tensor. Has the same type as params. to_proto View source to_proto( export_scope=None ) Converts a Variable to a VariableDef protocol buffer. Args export_scope Optional string. Name scope to remove. Returns A VariableDef protocol buffer, or None if the Variable is not in the specified name scope. value View source value() Returns the last snapshot of this variable. You usually do not need to call this method as all ops that need the value of the variable call it automatically through a convert_to_tensor() call. Returns a Tensor which holds the value of the variable. You can not assign a new value to this tensor as it is not a reference to the variable. To avoid copies, if the consumer of the returned value is on the same device as the variable, this actually returns the live value of the variable, not a copy. Updates to the variable are seen by the consumer. If the consumer is on a different device it will get a copy of the variable. Returns A Tensor containing the value of the variable. __abs__ View source __abs__( x, name=None ) Computes the absolute value of a tensor. Given a tensor of integer or floating-point values, this operation returns a tensor of the same type, where each element contains the absolute value of the corresponding element in the input. Given a tensor x of complex numbers, this operation returns a tensor of type float32 or float64 that is the absolute value of each element in x. For a complex number \(a + bj\), its absolute value is computed as \(\sqrt{a^2 + b^2}\). For example: # real number x = tf.constant([-2.25, 3.25]) tf.abs(x) <tf.Tensor: shape=(2,), dtype=float32, numpy=array([2.25, 3.25], dtype=float32)> # complex number x = tf.constant([[-2.25 + 4.75j], [-3.25 + 5.75j]]) tf.abs(x) <tf.Tensor: shape=(2, 1), dtype=float64, numpy= array([[5.25594901], [6.60492241]])> Args x A Tensor or SparseTensor of type float16, float32, float64, int32, int64, complex64 or complex128. name A name for the operation (optional). Returns A Tensor or SparseTensor of the same size, type and sparsity as x, with absolute values. Note, for complex64 or complex128 input, the returned Tensor will be of type float32 or float64, respectively. __add__ View source __add__( x, y ) The operation invoked by the Tensor.add operator. Purpose in the API: This method is exposed in TensorFlow's API so that library developers can register dispatching for <a href="https://www.tensorflow.org/api_docs/python/tf/Tensor#__add__"><code>Tensor.__add__</code></a> to allow it to handle custom composite tensors & other custom objects. The API symbol is not intended to be called by users directly and does appear in TensorFlow's generated documentation. Args x The left-hand side of the + operator. y The right-hand side of the + operator. name an optional name for the operation. Returns The result of the elementwise + operation. __and__ View source __and__( x, y ) __div__ View source __div__( x, y ) Divides x / y elementwise (using Python 2 division operator semantics). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. 
Instructions for updating: Deprecated in favor of operator or tf.math.divide. Note: Prefer using the Tensor division operator or tf.divide which obey Python 3 division operator semantics. This function divides x and y, forcing Python 2 semantics. That is, if x and y are both integers then the result will be an integer. This is in contrast to Python 3, where division with / is always a float while division with // is always an integer. Args x Tensor numerator of real numeric type. y Tensor denominator of real numeric type. name A name for the operation (optional). Returns x / y returns the quotient of x and y. __eq__ View source __eq__( other ) Compares two variables element-wise for equality. __floordiv__ View source __floordiv__( x, y ) Divides x / y elementwise, rounding toward the most negative integer. The same as tf.compat.v1.div(x,y) for integers, but uses tf.floor(tf.compat.v1.div(x,y)) for floating point arguments so that the result is always an integer (though possibly an integer represented as floating point). This op is generated by x // y floor division in Python 3 and in Python 2.7 with from __future__ import division. x and y must have the same type, and the result will have the same type as well. Args x Tensor numerator of real numeric type. y Tensor denominator of real numeric type. name A name for the operation (optional). Returns x / y rounded down. Raises TypeError If the inputs are complex. __ge__ __ge__( x, y, name=None ) Returns the truth value of (x >= y) element-wise. Note: math.greater_equal supports broadcasting. More about broadcasting here Example: x = tf.constant([5, 4, 6, 7]) y = tf.constant([5, 2, 5, 10]) tf.math.greater_equal(x, y) ==> [True, True, True, False] x = tf.constant([5, 4, 6, 7]) y = tf.constant([5]) tf.math.greater_equal(x, y) ==> [True, False, True, True] Args x A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. y A Tensor. Must have the same type as x. name A name for the operation (optional). Returns A Tensor of type bool. __getitem__ View source __getitem__( var, slice_spec ) Creates a slice helper object given a variable. This allows creating a sub-tensor from part of the current contents of a variable. See tf.Tensor.getitem for detailed examples of slicing. This function in addition also allows assignment to a sliced range. This is similar to __setitem__ functionality in Python. However, the syntax is different so that the user can capture the assignment operation for grouping or passing to sess.run(). For example, import tensorflow as tf A = tf.Variable([[1,2,3], [4,5,6], [7,8,9]], dtype=tf.float32) with tf.compat.v1.Session() as sess: sess.run(tf.compat.v1.global_variables_initializer()) print(sess.run(A[:2, :2])) # => [[1,2], [4,5]] op = A[:2,:2].assign(22. * tf.ones((2, 2))) print(sess.run(op)) # => [[22, 22, 3], [22, 22, 6], [7,8,9]] Note that assignments currently do not support NumPy broadcasting semantics. Args var An ops.Variable object. slice_spec The arguments to Tensor.getitem. Returns The appropriate slice of "tensor", based on "slice_spec". As an operator. The operator also has a assign() method that can be used to generate an assignment operator. Raises ValueError If a slice range is negative size. TypeError TypeError: If the slice indices aren't int, slice, ellipsis, tf.newaxis or int32/int64 tensors. __gt__ __gt__( x, y, name=None ) Returns the truth value of (x > y) element-wise. Note: math.greater supports broadcasting. 
More about broadcasting here Example: x = tf.constant([5, 4, 6]) y = tf.constant([5, 2, 5]) tf.math.greater(x, y) ==> [False, True, True] x = tf.constant([5, 4, 6]) y = tf.constant([5]) tf.math.greater(x, y) ==> [False, False, True] Args x A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. y A Tensor. Must have the same type as x. name A name for the operation (optional). Returns A Tensor of type bool. __invert__ View source __invert__( x, name=None ) __iter__ View source __iter__() Dummy method to prevent iteration. Do not call. NOTE(mrry): If we register getitem as an overloaded operator, Python will valiantly attempt to iterate over the variable's Tensor from 0 to infinity. Declaring this method prevents this unintended behavior. Raises TypeError when invoked. __le__ __le__( x, y, name=None ) Returns the truth value of (x <= y) element-wise. Note: math.less_equal supports broadcasting. More about broadcasting here Example: x = tf.constant([5, 4, 6]) y = tf.constant([5]) tf.math.less_equal(x, y) ==> [True, True, False] x = tf.constant([5, 4, 6]) y = tf.constant([5, 6, 6]) tf.math.less_equal(x, y) ==> [True, True, True] Args x A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. y A Tensor. Must have the same type as x. name A name for the operation (optional). Returns A Tensor of type bool. __lt__ __lt__( x, y, name=None ) Returns the truth value of (x < y) element-wise. Note: math.less supports broadcasting. More about broadcasting here Example: x = tf.constant([5, 4, 6]) y = tf.constant([5]) tf.math.less(x, y) ==> [False, True, False] x = tf.constant([5, 4, 6]) y = tf.constant([5, 6, 7]) tf.math.less(x, y) ==> [False, True, True] Args x A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. y A Tensor. Must have the same type as x. name A name for the operation (optional). Returns A Tensor of type bool. __matmul__ View source __matmul__( x, y ) Multiplies matrix a by matrix b, producing a * b. The inputs must, following any transpositions, be tensors of rank >= 2 where the inner 2 dimensions specify valid matrix multiplication dimensions, and any further outer dimensions specify matching batch size. Both matrices must be of the same type. The supported types are: float16, float32, float64, int32, complex64, complex128. Either matrix can be transposed or adjointed (conjugated and transposed) on the fly by setting one of the corresponding flag to True. These are False by default. If one or both of the matrices contain a lot of zeros, a more efficient multiplication algorithm can be used by setting the corresponding a_is_sparse or b_is_sparse flag to True. These are False by default. This optimization is only available for plain matrices (rank-2 tensors) with datatypes bfloat16 or float32. 
A simple 2-D tensor matrix multiplication: a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) a # 2-D tensor <tf.Tensor: shape=(2, 3), dtype=int32, numpy= array([[1, 2, 3], [4, 5, 6]], dtype=int32)> b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2]) b # 2-D tensor <tf.Tensor: shape=(3, 2), dtype=int32, numpy= array([[ 7, 8], [ 9, 10], [11, 12]], dtype=int32)> c = tf.matmul(a, b) c # `a` * `b` <tf.Tensor: shape=(2, 2), dtype=int32, numpy= array([[ 58, 64], [139, 154]], dtype=int32)> A batch matrix multiplication with batch shape [2]: a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3]) a # 3-D tensor <tf.Tensor: shape=(2, 2, 3), dtype=int32, numpy= array([[[ 1, 2, 3], [ 4, 5, 6]], [[ 7, 8, 9], [10, 11, 12]]], dtype=int32)> b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2]) b # 3-D tensor <tf.Tensor: shape=(2, 3, 2), dtype=int32, numpy= array([[[13, 14], [15, 16], [17, 18]], [[19, 20], [21, 22], [23, 24]]], dtype=int32)> c = tf.matmul(a, b) c # `a` * `b` <tf.Tensor: shape=(2, 2, 2), dtype=int32, numpy= array([[[ 94, 100], [229, 244]], [[508, 532], [697, 730]]], dtype=int32)> Since python >= 3.5 the @ operator is supported (see PEP 465). In TensorFlow, it simply calls the tf.matmul() function, so the following lines are equivalent: d = a @ b @ [[10], [11]] d = tf.matmul(tf.matmul(a, b), [[10], [11]]) Args a tf.Tensor of type float16, float32, float64, int32, complex64, complex128 and rank > 1. b tf.Tensor with same type and rank as a. transpose_a If True, a is transposed before multiplication. transpose_b If True, b is transposed before multiplication. adjoint_a If True, a is conjugated and transposed before multiplication. adjoint_b If True, b is conjugated and transposed before multiplication. a_is_sparse If True, a is treated as a sparse matrix. Notice, this does not support tf.sparse.SparseTensor, it just makes optimizations that assume most values in a are zero. See tf.sparse.sparse_dense_matmul for some support for tf.sparse.SparseTensor multiplication. b_is_sparse If True, b is treated as a sparse matrix. Notice, this does not support tf.sparse.SparseTensor, it just makes optimizations that assume most values in a are zero. See tf.sparse.sparse_dense_matmul for some support for tf.sparse.SparseTensor multiplication. name Name for the operation (optional). Returns A tf.Tensor of the same type as a and b where each inner-most matrix is the product of the corresponding matrices in a and b, e.g. if all transpose or adjoint attributes are False: output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j]), for all indices i, j. Note This is matrix product, not element-wise product. Raises ValueError If transpose_a and adjoint_a, or transpose_b and adjoint_b are both set to True. __mod__ View source __mod__( x, y ) Returns element-wise remainder of division. When x < 0 xor y < 0 is true, this follows Python semantics in that the result here is consistent with a flooring divide. E.g. floor(x / y) * y + mod(x, y) = x. Note: math.floormod supports broadcasting. More about broadcasting here Args x A Tensor. Must be one of the following types: int32, int64, uint64, bfloat16, half, float32, float64. y A Tensor. Must have the same type as x. name A name for the operation (optional). Returns A Tensor. Has the same type as x. __mul__ View source __mul__( x, y ) Dispatches cwise mul for "DenseDense" and "DenseSparse". __ne__ View source __ne__( other ) Compares two variables element-wise for equality. 
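As a quick illustration of the overloaded operators above (a minimal sketch, assuming a TF 2.x runtime with eager execution): __eq__ compares element-wise and __matmul__ maps to tf.matmul.

import tensorflow as tf

v = tf.Variable([1., 2., 3.])
w = tf.Variable([1., 0., 3.])
v == w                          # => [True, False, True]  (element-wise comparison)

m = tf.Variable([[1., 2.], [3., 4.]])
m @ tf.constant([[1.], [1.]])   # => [[3.], [7.]]  (matrix product via __matmul__)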
__neg__ __neg__( x, name=None ) Computes numerical negative value element-wise. I.e., \(y = -x\). Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int8, int16, int32, int64, complex64, complex128. name A name for the operation (optional). Returns A Tensor. Has the same type as x. __or__ View source __or__( x, y ) __pow__ View source __pow__( x, y ) Computes the power of one value to another. Given a tensor x and a tensor y, this operation computes \(x^y\) for corresponding elements in x and y. For example: x = tf.constant([[2, 2], [3, 3]]) y = tf.constant([[8, 16], [2, 3]]) tf.pow(x, y) # [[256, 65536], [9, 27]] Args x A Tensor of type float16, float32, float64, int32, int64, complex64, or complex128. y A Tensor of type float16, float32, float64, int32, int64, complex64, or complex128. name A name for the operation (optional). Returns A Tensor. __radd__ View source __radd__( y, x ) The operation invoked by the Tensor.add operator. Purpose in the API: This method is exposed in TensorFlow's API so that library developers can register dispatching for <a href="https://www.tensorflow.org/api_docs/python/tf/Tensor#__add__"><code>Tensor.__add__</code></a> to allow it to handle custom composite tensors & other custom objects. The API symbol is not intended to be called by users directly and does appear in TensorFlow's generated documentation. Args x The left-hand side of the + operator. y The right-hand side of the + operator. name an optional name for the operation. Returns The result of the elementwise + operation. __rand__ View source __rand__( y, x ) __rdiv__ View source __rdiv__( y, x ) Divides x / y elementwise (using Python 2 division operator semantics). (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Deprecated in favor of operator or tf.math.divide. Note: Prefer using the Tensor division operator or tf.divide which obey Python 3 division operator semantics. This function divides x and y, forcing Python 2 semantics. That is, if x and y are both integers then the result will be an integer. This is in contrast to Python 3, where division with / is always a float while division with // is always an integer. Args x Tensor numerator of real numeric type. y Tensor denominator of real numeric type. name A name for the operation (optional). Returns x / y returns the quotient of x and y. __rfloordiv__ View source __rfloordiv__( y, x ) Divides x / y elementwise, rounding toward the most negative integer. The same as tf.compat.v1.div(x,y) for integers, but uses tf.floor(tf.compat.v1.div(x,y)) for floating point arguments so that the result is always an integer (though possibly an integer represented as floating point). This op is generated by x // y floor division in Python 3 and in Python 2.7 with from __future__ import division. x and y must have the same type, and the result will have the same type as well. Args x Tensor numerator of real numeric type. y Tensor denominator of real numeric type. name A name for the operation (optional). Returns x / y rounded down. Raises TypeError If the inputs are complex. __rmatmul__ View source __rmatmul__( y, x ) Multiplies matrix a by matrix b, producing a * b. The inputs must, following any transpositions, be tensors of rank >= 2 where the inner 2 dimensions specify valid matrix multiplication dimensions, and any further outer dimensions specify matching batch size. Both matrices must be of the same type. 
The supported types are: float16, float32, float64, int32, complex64, complex128. Either matrix can be transposed or adjointed (conjugated and transposed) on the fly by setting one of the corresponding flag to True. These are False by default. If one or both of the matrices contain a lot of zeros, a more efficient multiplication algorithm can be used by setting the corresponding a_is_sparse or b_is_sparse flag to True. These are False by default. This optimization is only available for plain matrices (rank-2 tensors) with datatypes bfloat16 or float32. A simple 2-D tensor matrix multiplication: a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) a # 2-D tensor <tf.Tensor: shape=(2, 3), dtype=int32, numpy= array([[1, 2, 3], [4, 5, 6]], dtype=int32)> b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2]) b # 2-D tensor <tf.Tensor: shape=(3, 2), dtype=int32, numpy= array([[ 7, 8], [ 9, 10], [11, 12]], dtype=int32)> c = tf.matmul(a, b) c # `a` * `b` <tf.Tensor: shape=(2, 2), dtype=int32, numpy= array([[ 58, 64], [139, 154]], dtype=int32)> A batch matrix multiplication with batch shape [2]: a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3]) a # 3-D tensor <tf.Tensor: shape=(2, 2, 3), dtype=int32, numpy= array([[[ 1, 2, 3], [ 4, 5, 6]], [[ 7, 8, 9], [10, 11, 12]]], dtype=int32)> b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2]) b # 3-D tensor <tf.Tensor: shape=(2, 3, 2), dtype=int32, numpy= array([[[13, 14], [15, 16], [17, 18]], [[19, 20], [21, 22], [23, 24]]], dtype=int32)> c = tf.matmul(a, b) c # `a` * `b` <tf.Tensor: shape=(2, 2, 2), dtype=int32, numpy= array([[[ 94, 100], [229, 244]], [[508, 532], [697, 730]]], dtype=int32)> Since python >= 3.5 the @ operator is supported (see PEP 465). In TensorFlow, it simply calls the tf.matmul() function, so the following lines are equivalent: d = a @ b @ [[10], [11]] d = tf.matmul(tf.matmul(a, b), [[10], [11]]) Args a tf.Tensor of type float16, float32, float64, int32, complex64, complex128 and rank > 1. b tf.Tensor with same type and rank as a. transpose_a If True, a is transposed before multiplication. transpose_b If True, b is transposed before multiplication. adjoint_a If True, a is conjugated and transposed before multiplication. adjoint_b If True, b is conjugated and transposed before multiplication. a_is_sparse If True, a is treated as a sparse matrix. Notice, this does not support tf.sparse.SparseTensor, it just makes optimizations that assume most values in a are zero. See tf.sparse.sparse_dense_matmul for some support for tf.sparse.SparseTensor multiplication. b_is_sparse If True, b is treated as a sparse matrix. Notice, this does not support tf.sparse.SparseTensor, it just makes optimizations that assume most values in a are zero. See tf.sparse.sparse_dense_matmul for some support for tf.sparse.SparseTensor multiplication. name Name for the operation (optional). Returns A tf.Tensor of the same type as a and b where each inner-most matrix is the product of the corresponding matrices in a and b, e.g. if all transpose or adjoint attributes are False: output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j]), for all indices i, j. Note This is matrix product, not element-wise product. Raises ValueError If transpose_a and adjoint_a, or transpose_b and adjoint_b are both set to True. __rmod__ View source __rmod__( y, x ) Returns element-wise remainder of division. When x < 0 xor y < 0 is true, this follows Python semantics in that the result here is consistent with a flooring divide. E.g. 
floor(x / y) * y + mod(x, y) = x. Note: math.floormod supports broadcasting. More about broadcasting here Args x A Tensor. Must be one of the following types: int32, int64, uint64, bfloat16, half, float32, float64. y A Tensor. Must have the same type as x. name A name for the operation (optional). Returns A Tensor. Has the same type as x. __rmul__ View source __rmul__( y, x ) Dispatches cwise mul for "DenseDense" and "DenseSparse". __ror__ View source __ror__( y, x ) __rpow__ View source __rpow__( y, x ) Computes the power of one value to another. Given a tensor x and a tensor y, this operation computes \(x^y\) for corresponding elements in x and y. For example: x = tf.constant([[2, 2], [3, 3]]) y = tf.constant([[8, 16], [2, 3]]) tf.pow(x, y) # [[256, 65536], [9, 27]] Args x A Tensor of type float16, float32, float64, int32, int64, complex64, or complex128. y A Tensor of type float16, float32, float64, int32, int64, complex64, or complex128. name A name for the operation (optional). Returns A Tensor. __rsub__ View source __rsub__( y, x ) Returns x - y element-wise. Note: Subtract supports broadcasting. More about broadcasting here Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128, uint32. y A Tensor. Must have the same type as x. name A name for the operation (optional). Returns A Tensor. Has the same type as x. __rtruediv__ View source __rtruediv__( y, x ) Divides x / y elementwise (using Python 3 division operator semantics). Note: Prefer using the Tensor operator or tf.divide which obey Python division operator semantics. This function forces Python 3 division operator semantics where all integer arguments are cast to floating types first. This op is generated by normal x / y division in Python 3 and in Python 2.7 with from __future__ import division. If you want integer division that rounds down, use x // y or tf.math.floordiv. x and y must have the same numeric type. If the inputs are floating point, the output will have the same type. If the inputs are integral, the inputs are cast to float32 for int8 and int16 and float64 for int32 and int64 (matching the behavior of Numpy). Args x Tensor numerator of numeric type. y Tensor denominator of numeric type. name A name for the operation (optional). Returns x / y evaluated in floating point. Raises TypeError If x and y have different dtypes. __rxor__ View source __rxor__( y, x ) __sub__ View source __sub__( x, y ) Returns x - y element-wise. Note: Subtract supports broadcasting. More about broadcasting here Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128, uint32. y A Tensor. Must have the same type as x. name A name for the operation (optional). Returns A Tensor. Has the same type as x. __truediv__ View source __truediv__( x, y ) Divides x / y elementwise (using Python 3 division operator semantics). Note: Prefer using the Tensor operator or tf.divide which obey Python division operator semantics. This function forces Python 3 division operator semantics where all integer arguments are cast to floating types first. This op is generated by normal x / y division in Python 3 and in Python 2.7 with from __future__ import division. If you want integer division that rounds down, use x // y or tf.math.floordiv. x and y must have the same numeric type. If the inputs are floating point, the output will have the same type. 
If the inputs are integral, the inputs are cast to float32 for int8 and int16 and float64 for int32 and int64 (matching the behavior of Numpy). Args x Tensor numerator of numeric type. y Tensor denominator of numeric type. name A name for the operation (optional). Returns x / y evaluated in floating point. Raises TypeError If x and y have different dtypes. __xor__ View source __xor__( x, y )
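Supplementing the gather_nd, sparse_read, and __getitem__ entries above, a minimal eager-mode sketch (assuming a TF 2.x runtime):

import tensorflow as tf

v = tf.Variable([[1, 2], [3, 4], [5, 6]])
v.sparse_read([2, 0])           # => [[5, 6], [1, 2]]  (rows 2 and 0)
v.gather_nd([[0, 1], [2, 0]])   # => [2, 5]            (elements v[0, 1] and v[2, 0])
v[1:, 0]                        # => [3, 5]            (slicing via __getitem__)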
tensorflow.compat.v1.variable
tf.compat.v1.VariableAggregation Indicates how a distributed variable will be aggregated. tf.distribute.Strategy distributes a model by making multiple copies (called "replicas") acting data-parallel on different elements of the input batch. When performing some variable-update operation, say var.assign_add(x), in a model, we need to resolve how to combine the different values for x computed in the different replicas. NONE: This is the default, giving an error if you use a variable-update operation with multiple replicas. SUM: Add the updates across replicas. MEAN: Take the arithmetic mean ("average") of the updates across replicas. ONLY_FIRST_REPLICA: This is for when every replica is performing the same update, but we only want to perform the update once. Used, e.g., for the global step counter. ONLY_FIRST_TOWER: Deprecated alias for ONLY_FIRST_REPLICA. Class Variables MEAN tf.compat.v1.VariableAggregation NONE tf.compat.v1.VariableAggregation ONLY_FIRST_REPLICA tf.compat.v1.VariableAggregation SUM tf.compat.v1.VariableAggregation
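A minimal sketch of attaching an aggregation policy at creation time (assumptions: a TF 2.x runtime; tf.distribute.MirroredStrategy falls back to the CPU when no GPUs are visible):

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    # Updates to `v` coming from different replicas will be averaged.
    v = tf.Variable(0.0, aggregation=tf.compat.v1.VariableAggregation.MEAN)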
tensorflow.compat.v1.variableaggregation
tf.compat.v1.VariableScope Variable scope object to carry defaults to provide to get_variable. tf.compat.v1.VariableScope( reuse, name='', initializer=None, regularizer=None, caching_device=None, partitioner=None, custom_getter=None, name_scope='', dtype=tf.dtypes.float32, use_resource=None, constraint=None ) Many of the arguments we need for get_variable in a variable store are most easily handled with a context. This object is used for the defaults. Attributes name name of the current scope, used as prefix in get_variable. initializer default initializer passed to get_variable. regularizer default regularizer passed to get_variable. reuse Boolean, None, or tf.compat.v1.AUTO_REUSE, setting the reuse in get_variable. When eager execution is enabled this argument is always forced to be False. caching_device string, callable, or None: the caching device passed to get_variable. partitioner callable or None: the partitioner passed to get_variable. custom_getter default custom getter passed to get_variable. name_scope The name passed to tf.name_scope. dtype default type passed to get_variable (defaults to DT_FLOAT). use_resource if False, create a normal Variable; if True create an experimental ResourceVariable with well-defined semantics. Defaults to False (will later change to True). When eager execution is enabled this argument is always forced to be True. constraint An optional projection function to be applied to the variable after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected Tensor representing the value of the variable and return the Tensor for the projected value (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training. original_name_scope Methods get_collection View source get_collection( name ) Get this scope's variables. get_variable View source get_variable( var_store, name, shape=None, dtype=None, initializer=None, regularizer=None, reuse=None, trainable=None, collections=None, caching_device=None, partitioner=None, validate_shape=True, use_resource=None, custom_getter=None, constraint=None, synchronization=tf.VariableSynchronization.AUTO, aggregation=tf.compat.v1.VariableAggregation.NONE ) Gets an existing variable with this name or create a new one. global_variables View source global_variables() Get this scope's global variables. local_variables View source local_variables() Get this scope's local variables. reuse_variables View source reuse_variables() Reuse variables in this scope. set_caching_device View source set_caching_device( caching_device ) Set caching_device for this scope. set_custom_getter View source set_custom_getter( custom_getter ) Set custom getter for this scope. set_dtype View source set_dtype( dtype ) Set data type for this scope. set_initializer View source set_initializer( initializer ) Set initializer for this scope. set_partitioner View source set_partitioner( partitioner ) Set partitioner for this scope. set_regularizer View source set_regularizer( regularizer ) Set regularizer for this scope. set_use_resource View source set_use_resource( use_resource ) Sets whether to use ResourceVariables for this scope. trainable_variables View source trainable_variables() Get this scope's trainable variables.
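A minimal sketch of working with the current VariableScope object (this sketch assumes TF 1.x-style graph building, enabled here via tf.compat.v1.disable_eager_execution()):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # VariableScope defaults apply when building graphs

with tf.compat.v1.variable_scope("encoder"):
    scope = tf.compat.v1.get_variable_scope()       # the current VariableScope
    scope.set_initializer(tf.compat.v1.ones_initializer())
    v = tf.compat.v1.get_variable("v", shape=[2])   # picks up the ones initializer
    scope.reuse_variables()                         # switch this scope to reuse mode
    assert tf.compat.v1.get_variable("v", shape=[2]) is v
    print(scope.name)                               # "encoder"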
tensorflow.compat.v1.variablescope
tf.compat.v1.variables_initializer Returns an Op that initializes a list of variables. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.initializers.variables tf.compat.v1.variables_initializer( var_list, name='init' ) After you launch the graph in a session, you can run the returned Op to initialize all the variables in var_list. This Op runs all the initializers of the variables in var_list in parallel. Calling variables_initializer() is equivalent to passing the list of initializers to Group(). If var_list is empty, however, the function still returns an Op that can be run. That Op just has no effect. Args var_list List of Variable objects to initialize. name Optional name for the returned operation. Returns An Op that runs the initializers of all the specified variables.
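A minimal graph-mode sketch (assuming eager execution is disabled so that the initializer Op is run explicitly in a session):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

a = tf.Variable(1.0, name="a")
b = tf.Variable(2.0, name="b")
init_ab = tf.compat.v1.variables_initializer([a, b], name="init_ab")
with tf.compat.v1.Session() as sess:
    sess.run(init_ab)            # runs both initializers in parallel
    print(sess.run([a, b]))      # [1.0, 2.0]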
tensorflow.compat.v1.variables_initializer
tf.compat.v1.variable_axis_size_partitioner Get a partitioner for VariableScope to keep shards below max_shard_bytes. tf.compat.v1.variable_axis_size_partitioner( max_shard_bytes, axis=0, bytes_per_string_element=16, max_shards=None ) This partitioner will shard a Variable along one axis, attempting to keep the maximum shard size below max_shard_bytes. In practice, this is not always possible when sharding along only one axis. When this happens, this axis is sharded as much as possible (i.e., every slice along that axis becomes a separate shard). If the partitioner hits the max_shards limit, then each shard may end up larger than max_shard_bytes. By default max_shards equals None and no limit on the number of shards is enforced. One reasonable value for max_shard_bytes is (64 << 20) - 1, or almost 64MB, to keep below the protobuf byte limit. Args max_shard_bytes The maximum size any given shard is allowed to be. axis The axis to partition along. Default: outermost axis. bytes_per_string_element If the Variable is of type string, this provides an estimate of how large each scalar in the Variable is. max_shards The maximum number of shards (an int); if set, this limit takes precedence over max_shard_bytes. Returns A partition function usable as the partitioner argument to variable_scope and get_variable. Raises ValueError If any of the byte counts are non-positive.
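A minimal graph-mode sketch (assumptions: eager execution disabled; the shard size is chosen purely for illustration):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Keep every shard at or below 1 KiB. A [256, 4] float32 variable occupies
# 4 KiB, so it is split along axis 0 into 4 shards of shape [64, 4].
partitioner = tf.compat.v1.variable_axis_size_partitioner(max_shard_bytes=1024)
with tf.compat.v1.variable_scope("embeddings", partitioner=partitioner):
    table = tf.compat.v1.get_variable("table", shape=[256, 4], dtype=tf.float32)
# `table` is a PartitionedVariable; ops such as tf.nn.embedding_lookup accept it
# directly and respect the partitioning.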
tensorflow.compat.v1.variable_axis_size_partitioner
tf.compat.v1.variable_creator_scope Scope which defines a variable creation function to be used by variable(). @tf_contextlib.contextmanager tf.compat.v1.variable_creator_scope( variable_creator ) variable_creator is expected to be a function with the following signature: def variable_creator(next_creator, **kwargs) The creator is supposed to eventually call the next_creator to create a variable if it does want to create a variable and not call Variable or ResourceVariable directly. This helps make creators composable. A creator may choose to create multiple variables, return already existing variables, or simply register that a variable was created and defer to the next creators in line. Creators can also modify the keyword arguments seen by the next creators. Custom getters in the variable scope will eventually resolve down to these custom creators when they do create variables. The valid keyword arguments in kwargs are: initial_value: A Tensor, or Python object convertible to a Tensor, which is the initial value for the Variable. The initial value must have a shape specified unless validate_shape is set to False. Can also be a callable with no argument that returns the initial value when called. In that case, dtype must be specified. (Note that initializer functions from init_ops.py must first be bound to a shape before being used here.) trainable: If True, the default, also adds the variable to the graph collection GraphKeys.TRAINABLE_VARIABLES. This collection is used as the default list of variables to use by the Optimizer classes. trainable defaults to True, unless synchronization is set to ON_READ, in which case it defaults to False. collections: List of graph collections keys. The new variable is added to these collections. Defaults to [GraphKeys.GLOBAL_VARIABLES]. validate_shape: If False, allows the variable to be initialized with a value of unknown shape. If True, the default, the shape of initial_value must be known. caching_device: Optional device string describing where the Variable should be cached for reading. Defaults to the Variable's device. If not None, caches on another device. Typical use is to cache on the device where the Ops using the Variable reside, to deduplicate copying through Switch and other conditional statements. name: Optional name for the variable. Defaults to 'Variable' and gets uniquified automatically. dtype: If set, initial_value will be converted to the given type. If None, either the datatype will be kept (if initial_value is a Tensor), or convert_to_tensor will decide. constraint: A constraint function to be applied to the variable after updates by some algorithms. use_resource: if True, a ResourceVariable is always created. synchronization: Indicates when a distributed variable will be synchronized. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize. aggregation: Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation. This set may grow over time, so it's important that the signature of creators is as mentioned above. Args variable_creator the passed creator. Yields A scope in which the creator is active
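A minimal sketch of a composable creator that records the requested variable names and then defers to the next creator in line (the creator name and the `created` list are illustrative, not part of the API):

import tensorflow as tf

created = []

def logging_creator(next_creator, **kwargs):
    created.append(kwargs.get("name"))   # inspect (or modify) the keyword arguments
    return next_creator(**kwargs)        # defer actual creation to the next creator

with tf.compat.v1.variable_creator_scope(logging_creator):
    v = tf.Variable(1.0, name="v")
    w = tf.Variable([2.0, 3.0], name="w")

print(created)   # ['v', 'w']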
tensorflow.compat.v1.variable_creator_scope
tf.compat.v1.variable_op_scope Deprecated: context manager for defining an op that creates variables. @tf_contextlib.contextmanager tf.compat.v1.variable_op_scope( values, name_or_scope, default_name=None, initializer=None, regularizer=None, caching_device=None, partitioner=None, custom_getter=None, reuse=None, dtype=None, use_resource=None, constraint=None )
tensorflow.compat.v1.variable_op_scope
tf.compat.v1.variable_scope A context manager for defining ops that creates variables (layers). tf.compat.v1.variable_scope( name_or_scope, default_name=None, values=None, initializer=None, regularizer=None, caching_device=None, partitioner=None, custom_getter=None, reuse=None, dtype=None, use_resource=None, constraint=None, auxiliary_name_scope=True ) This context manager validates that the (optional) values are from the same graph, ensures that graph is the default graph, and pushes a name scope and a variable scope. If name_or_scope is not None, it is used as is. If name_or_scope is None, then default_name is used. In that case, if the same name has been previously used in the same scope, it will be made unique by appending _N to it. Variable scope allows you to create new variables and to share already created ones while providing checks to not create or share by accident. For details, see the Variable Scope How To, here we present only a few basic examples. Simple example of how to create a new variable: with tf.compat.v1.variable_scope("foo"): with tf.compat.v1.variable_scope("bar"): v = tf.compat.v1.get_variable("v", [1]) assert v.name == "foo/bar/v:0" Simple example of how to reenter a premade variable scope safely: with tf.compat.v1.variable_scope("foo") as vs: pass # Re-enter the variable scope. with tf.compat.v1.variable_scope(vs, auxiliary_name_scope=False) as vs1: # Restore the original name_scope. with tf.name_scope(vs1.original_name_scope): v = tf.compat.v1.get_variable("v", [1]) assert v.name == "foo/v:0" c = tf.constant([1], name="c") assert c.name == "foo/c:0" Keep in mind that the counters for default_name are discarded once the parent scope is exited. Therefore when the code re-enters the scope (for instance by saving it), all nested default_name counters will be restarted. For instance: with tf.compat.v1.variable_scope("foo") as vs: with tf.compat.v1.variable_scope(None, default_name="bar"): v = tf.compat.v1.get_variable("a", [1]) assert v.name == "foo/bar/a:0", v.name with tf.compat.v1.variable_scope(None, default_name="bar"): v = tf.compat.v1.get_variable("b", [1]) assert v.name == "foo/bar_1/b:0" with tf.compat.v1.variable_scope(vs): with tf.compat.v1.variable_scope(None, default_name="bar"): v = tf.compat.v1.get_variable("c", [1]) assert v.name == "foo/bar/c:0" # Uses bar instead of bar_2! Basic example of sharing a variable AUTO_REUSE: def foo(): with tf.compat.v1.variable_scope("foo", reuse=tf.compat.v1.AUTO_REUSE): v = tf.compat.v1.get_variable("v", [1]) return v v1 = foo() # Creates v. v2 = foo() # Gets the same, existing v. assert v1 == v2 Basic example of sharing a variable with reuse=True: with tf.compat.v1.variable_scope("foo"): v = tf.compat.v1.get_variable("v", [1]) with tf.compat.v1.variable_scope("foo", reuse=True): v1 = tf.compat.v1.get_variable("v", [1]) assert v1 == v Sharing a variable by capturing a scope and setting reuse: with tf.compat.v1.variable_scope("foo") as scope: v = tf.compat.v1.get_variable("v", [1]) scope.reuse_variables() v1 = tf.compat.v1.get_variable("v", [1]) assert v1 == v To prevent accidental sharing of variables, we raise an exception when getting an existing variable in a non-reusing scope. with tf.compat.v1.variable_scope("foo"): v = tf.compat.v1.get_variable("v", [1]) v1 = tf.compat.v1.get_variable("v", [1]) # Raises ValueError("... v already exists ..."). Similarly, we raise an exception when trying to get a variable that does not exist in reuse mode. 
with tf.compat.v1.variable_scope("foo", reuse=True): v = tf.compat.v1.get_variable("v", [1]) # Raises ValueError("... v does not exists ..."). Note that the reuse flag is inherited: if we open a reusing scope, then all its sub-scopes become reusing as well. A note about name scoping: Setting reuse does not impact the naming of other ops such as mult. See related discussion on github#6189 Note that up to and including version 1.0, it was allowed (though explicitly discouraged) to pass False to the reuse argument, yielding undocumented behaviour slightly different from None. Starting at 1.1.0 passing None and False as reuse has exactly the same effect. A note about using variable scopes in multi-threaded environment: Variable scopes are thread local, so one thread will not see another thread's current scope. Also, when using default_name, unique scopes names are also generated only on a per thread basis. If the same name was used within a different thread, that doesn't prevent a new thread from creating the same scope. However, the underlying variable store is shared across threads (within the same graph). As such, if another thread tries to create a new variable with the same name as a variable created by a previous thread, it will fail unless reuse is True. Further, each thread starts with an empty variable scope. So if you wish to preserve name prefixes from a scope from the main thread, you should capture the main thread's scope and re-enter it in each thread. For e.g. main_thread_scope = variable_scope.get_variable_scope() # Thread's target function: def thread_target_fn(captured_scope): with variable_scope.variable_scope(captured_scope): # .... regular code for this thread thread = threading.Thread(target=thread_target_fn, args=(main_thread_scope,)) Args name_or_scope string or VariableScope: the scope to open. default_name The default name to use if the name_or_scope argument is None, this name will be uniquified. If name_or_scope is provided it won't be used and therefore it is not required and can be None. values The list of Tensor arguments that are passed to the op function. initializer default initializer for variables within this scope. regularizer default regularizer for variables within this scope. caching_device default caching device for variables within this scope. partitioner default partitioner for variables within this scope. custom_getter default custom getter for variables within this scope. reuse True, None, or tf.compat.v1.AUTO_REUSE; if True, we go into reuse mode for this scope as well as all sub-scopes; if tf.compat.v1.AUTO_REUSE, we create variables if they do not exist, and return them otherwise; if None, we inherit the parent scope's reuse flag. When eager execution is enabled, new variables are always created unless an EagerVariableStore or template is currently active. dtype type of variables created in this scope (defaults to the type in the passed scope, or inherited from parent scope). use_resource If False, all variables will be regular Variables. If True, experimental ResourceVariables with well-defined semantics will be used instead. Defaults to False (will later change to True). When eager execution is enabled this argument is always forced to be True. constraint An optional projection function to be applied to the variable after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). 
The function must take as input the unprojected Tensor representing the value of the variable and return the Tensor for the projected value (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training. auxiliary_name_scope If True, we create an auxiliary name scope with the scope. If False, we don't create it. Note that the argument is not inherited, and it only takes effect for once when creating. You should only use it for re-entering a premade variable scope. Raises ValueError when trying to reuse within a create scope, or create within a reuse scope. TypeError when the types of some arguments are not appropriate. Methods __enter__ View source __enter__() __exit__ View source __exit__( type_arg, value_arg, traceback_arg )
tensorflow.compat.v1.variable_scope
tf.compat.v1.verify_tensor_all_finite Assert that the tensor does not contain any NaN's or Inf's. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.debugging.assert_all_finite tf.compat.v1.verify_tensor_all_finite( t=None, msg=None, name=None, x=None, message=None ) Args t Tensor to check. msg Message to log on failure. name A name for this operation (optional). x Alias for t. message Alias for msg. Returns Same tensor as t.
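A minimal sketch (assuming a TF 2.x runtime, where the check runs as soon as the op executes):

import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])
checked = tf.compat.v1.verify_tensor_all_finite(x, msg="x contains NaN or Inf")
# `checked` carries the same values as `x`; had `x` contained a NaN or Inf,
# evaluating the check would raise an InvalidArgumentError with that message.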
tensorflow.compat.v1.verify_tensor_all_finite
Module: tf.compat.v1.version Public API for tf.version namespace. Other Members COMPILER_VERSION '7.3.1 20180303' GIT_VERSION 'v2.4.0-rc4-71-g582c8d236cb' GRAPH_DEF_VERSION 561 GRAPH_DEF_VERSION_MIN_CONSUMER 0 GRAPH_DEF_VERSION_MIN_PRODUCER 0 VERSION '2.4.0'
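The members are plain constants and can be read directly, for example:

import tensorflow as tf

print(tf.compat.v1.version.VERSION)      # e.g. '2.4.0'
print(tf.compat.v1.version.GIT_VERSION)  # git tag of the build
print(tf.compat.v1.version.GRAPH_DEF_VERSION)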
tensorflow.compat.v1.version
tf.compat.v1.where Return the elements, either from x or y, depending on the condition. tf.compat.v1.where( condition, x=None, y=None, name=None ) If both x and y are None, then this operation returns the coordinates of true elements of condition. The coordinates are returned in a 2-D tensor where the first dimension (rows) represents the number of true elements, and the second dimension (columns) represents the coordinates of the true elements. Keep in mind, the shape of the output tensor can vary depending on how many true values there are in input. Indices are output in row-major order. If both non-None, x and y must have the same shape. The condition tensor must be a scalar if x and y are scalar. If x and y are tensors of higher rank, then condition must be either a vector with size matching the first dimension of x, or must have the same shape as x. The condition tensor acts as a mask that chooses, based on the value at each element, whether the corresponding element / row in the output should be taken from x (if true) or y (if false). If condition is a vector and x and y are higher rank matrices, then it chooses which row (outer dimension) to copy from x and y. If condition has the same shape as x and y, then it chooses which element to copy from x and y. Args condition A Tensor of type bool x A Tensor which may have the same shape as condition. If condition is rank 1, x may have higher rank, but its first dimension must match the size of condition. y A tensor with the same shape and type as x. name A name of the operation (optional) Returns A Tensor with the same type and shape as x, y if they are non-None. Otherwise, a Tensor with shape (num_true, rank(condition)). Raises ValueError When exactly one of x or y is non-None.
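A minimal sketch of both modes (assuming a TF 2.x runtime with eager execution):

import tensorflow as tf

# With only `condition`: coordinates of the True elements.
cond = tf.constant([[True, False], [False, True]])
tf.compat.v1.where(cond)   # => [[0, 0], [1, 1]]

# With `x` and `y`: element-wise selection driven by the condition.
x = tf.constant([1, 2, 3, 4])
y = tf.constant([10, 20, 30, 40])
tf.compat.v1.where(tf.constant([True, False, True, False]), x, y)   # => [1, 20, 3, 40]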
tensorflow.compat.v1.where
tf.compat.v1.while_loop Repeat body while the condition cond is true. tf.compat.v1.while_loop( cond, body, loop_vars, shape_invariants=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None, maximum_iterations=None, return_same_structure=False ) cond is a callable returning a boolean scalar tensor. body is a callable returning a (possibly nested) tuple, namedtuple or list of tensors of the same arity (length and structure) and types as loop_vars. loop_vars is a (possibly nested) tuple, namedtuple or list of tensors that is passed to both cond and body. cond and body both take as many arguments as there are loop_vars. In addition to regular Tensors or IndexedSlices, the body may accept and return TensorArray objects. The flows of the TensorArray objects will be appropriately forwarded between loops and during gradient calculations. Note that while_loop calls cond and body exactly once (inside the call to while_loop, and not at all during Session.run()). while_loop stitches together the graph fragments created during the cond and body calls with some additional graph nodes to create the graph flow that repeats body until cond returns false. For correctness, tf.while_loop() strictly enforces shape invariants for the loop variables. A shape invariant is a (possibly partial) shape that is unchanged across the iterations of the loop. An error will be raised if the shape of a loop variable after an iteration is determined to be more general than or incompatible with its shape invariant. For example, a shape of [11, None] is more general than a shape of [11, 17], and [11, 21] is not compatible with [11, 17]. By default (if the argument shape_invariants is not specified), it is assumed that the initial shape of each tensor in loop_vars is the same in every iteration. The shape_invariants argument allows the caller to specify a less specific shape invariant for each loop variable, which is needed if the shape varies between iterations. The tf.Tensor.set_shape function may also be used in the body function to indicate that the output loop variable has a particular shape. The shape invariant for SparseTensor and IndexedSlices are treated specially as follows: a) If a loop variable is a SparseTensor, the shape invariant must be TensorShape([r]) where r is the rank of the dense tensor represented by the sparse tensor. It means the shapes of the three tensors of the SparseTensor are ([None], [None, r], [r]). NOTE: The shape invariant here is the shape of the SparseTensor.dense_shape property. It must be the shape of a vector. b) If a loop variable is an IndexedSlices, the shape invariant must be a shape invariant of the values tensor of the IndexedSlices. It means the shapes of the three tensors of the IndexedSlices are (shape, [shape[0]], [shape.ndims]). while_loop implements non-strict semantics, enabling multiple iterations to run in parallel. The maximum number of parallel iterations can be controlled by parallel_iterations, which gives users some control over memory consumption and execution order. For correct programs, while_loop should return the same result for any parallel_iterations > 0. For training, TensorFlow stores the tensors that are produced in the forward inference and are needed in back propagation. These tensors are a main source of memory consumption and often cause OOM errors when training on GPUs. When the flag swap_memory is true, we swap out these tensors from GPU to CPU. 
This for example allows us to train RNN models with very long sequences and large batches. Args cond A callable that represents the termination condition of the loop. body A callable that represents the loop body. loop_vars A (possibly nested) tuple, namedtuple or list of numpy array, Tensor, and TensorArray objects. shape_invariants The shape invariants for the loop variables. parallel_iterations The number of iterations allowed to run in parallel. It must be a positive integer. back_prop Whether backprop is enabled for this while loop. swap_memory Whether GPU-CPU memory swap is enabled for this loop. name Optional name prefix for the returned tensors. maximum_iterations Optional maximum number of iterations of the while loop to run. If provided, the cond output is AND-ed with an additional condition ensuring the number of iterations executed is no greater than maximum_iterations. return_same_structure If True, output has same structure as loop_vars. If eager execution is enabled, this is ignored (and always treated as True). Returns The output tensors for the loop variables after the loop. If return_same_structure is True, the return value has the same structure as loop_vars. If return_same_structure is False, the return value is a Tensor, TensorArray or IndexedSlice if the length of loop_vars is 1, or a list otherwise. Raises TypeError if cond or body is not callable. ValueError if loop_vars is empty. Example: i = tf.constant(0) c = lambda i: tf.less(i, 10) b = lambda i: tf.add(i, 1) r = tf.while_loop(c, b, [i]) Example with nesting and a namedtuple: import collections Pair = collections.namedtuple('Pair', 'j, k') ijk_0 = (tf.constant(0), Pair(tf.constant(1), tf.constant(2))) c = lambda i, p: i < 10 b = lambda i, p: (i + 1, Pair((p.j + p.k), (p.j - p.k))) ijk_final = tf.while_loop(c, b, ijk_0) Example using shape_invariants: i0 = tf.constant(0) m0 = tf.ones([2, 2]) c = lambda i, m: i < 10 b = lambda i, m: [i+1, tf.concat([m, m], axis=0)] tf.while_loop( c, b, loop_vars=[i0, m0], shape_invariants=[i0.get_shape(), tf.TensorShape([None, 2])]) Example which demonstrates non-strict semantics: In the following example, the final value of the counter i does not depend on x. So the while_loop can increment the counter parallel to updates of x. However, because the loop counter at one loop iteration depends on the value at the previous iteration, the loop counter itself cannot be incremented in parallel. Hence if we just want the final value of the counter (which we print on the line print(sess.run(i))), then x will never be incremented, but the counter will be updated on a single thread. Conversely, if we want the value of the output (which we print on the line print(sess.run(out).shape)), then the counter may be incremented on its own thread, while x can be incremented in parallel on a separate thread. In the extreme case, it is conceivable that the thread incrementing the counter runs until completion before x is incremented even a single time. The only thing that can never happen is that the thread updating x can never get ahead of the counter thread because the thread incrementing x depends on the value of the counter. import tensorflow as tf n = 10000 x = tf.constant(list(range(n))) c = lambda i, x: i < n b = lambda i, x: (tf.compat.v1.Print(i + 1, [i]), tf.compat.v1.Print(x + 1, [i], "x:")) i, out = tf.while_loop(c, b, (0, x)) with tf.compat.v1.Session() as sess: print(sess.run(i)) # prints [0] ... [9999] # The following line may increment the counter and x in parallel. 
# The counter thread may get ahead of the other thread, but not the # other way around. So you may see things like # [9996] x:[9987] # meaning that the counter thread is on iteration 9996, # while the other thread is on iteration 9987 print(sess.run(out).shape)
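Example using maximum_iterations (a minimal sketch, assuming eager execution so the loop runs immediately):

import tensorflow as tf

i = tf.constant(0)
c = lambda i: tf.less(i, 100)   # on its own, the loop would run 100 iterations
b = lambda i: tf.add(i, 1)
r = tf.compat.v1.while_loop(c, b, [i], maximum_iterations=10)
# r == 10: the loop is cut off after 10 iterations by the extra condition.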
tensorflow.compat.v1.while_loop