tf.distribute.MultiWorkerMirroredStrategy A distribution strategy for synchronous training on multiple workers. Inherits From: Strategy
tf.distribute.MultiWorkerMirroredStrategy(
cluster_resolver=None, communication_options=None
)
This strategy implements synchronous distributed training across multiple workers, each with potentially multiple GPUs. Similar to tf.distribute.MirroredStrategy, it replicates all variables and computations to each local device. The difference is that it uses a distributed collective implementation (e.g. all-reduce), so that multiple workers can work together.

You need to launch your program on each worker and configure cluster_resolver correctly. For example, if you are using tf.distribute.cluster_resolver.TFConfigClusterResolver, each worker needs to have its corresponding task_type and task_id set in the TF_CONFIG environment variable. An example TF_CONFIG on worker-0 of a two-worker cluster is:

TF_CONFIG = '{"cluster": {"worker": ["localhost:12345", "localhost:23456"]}, "task": {"type": "worker", "index": 0}}'
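A sketch of the same configuration done programmatically on worker 0, before the strategy is constructed (the hosts, ports, and index are placeholders for a real cluster):

import json
import os

os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["localhost:12345", "localhost:23456"]},
    "task": {"type": "worker", "index": 0}  # worker-1 would set index to 1
})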
Your program runs on each worker as-is. Note that collectives require each worker to participate. All tf.distribute and non-tf.distribute APIs may use collectives internally, e.g. checkpointing and saving, since reading a tf.Variable with tf.VariableSynchronization.ON_READ all-reduces the value. Therefore it's recommended to run exactly the same program on each worker; dispatching based on the task_type or task_id of the worker is error-prone.

cluster_resolver.num_accelerators() determines the number of GPUs the strategy uses. If it's zero, the strategy uses the CPU. All workers need to use the same number of devices, otherwise the behavior is undefined.

This strategy is not intended for TPU; use tf.distribute.TPUStrategy instead.

After setting up TF_CONFIG, using this strategy is similar to using tf.distribute.MirroredStrategy and tf.distribute.TPUStrategy.

strategy = tf.distribute.MultiWorkerMirroredStrategy()
with strategy.scope():
  model = tf.keras.Sequential([
    tf.keras.layers.Dense(2, input_shape=(5,)),
  ])
  optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

def dataset_fn(ctx):
  x = np.random.random((2, 5)).astype(np.float32)
  y = np.random.randint(2, size=(2, 1))
  dataset = tf.data.Dataset.from_tensor_slices((x, y))
  return dataset.repeat().batch(1, drop_remainder=True)
dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)

model.compile()
model.fit(dist_dataset)
You can also write your own training loop:

@tf.function
def train_step(iterator):
  def step_fn(inputs):
    features, labels = inputs
    with tf.GradientTape() as tape:
      logits = model(features, training=True)
      loss = tf.keras.losses.sparse_categorical_crossentropy(
          labels, logits)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
  strategy.run(step_fn, args=(next(iterator),))

# Create an iterator from the distributed dataset defined above.
iterator = iter(dist_dataset)
for _ in range(NUM_STEP):
  train_step(iterator)
See Multi-worker training with Keras for a detailed tutorial.

Saving
You need to save and checkpoint on all workers instead of just one, because variables with synchronization=ON_READ trigger aggregation during saving. It's recommended to save to a different path on each worker to avoid race conditions; each worker saves the same thing. See the Multi-worker training with Keras tutorial for examples.
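A minimal sketch of per-worker save paths, assuming a model built under the strategy's scope and a resolver that exposes task_id (the directory layout is illustrative, not mandated by the API):

import os

task_id = strategy.cluster_resolver.task_id
base_dir = "/tmp/ckpt"  # hypothetical base path
# Every worker saves; non-chief workers write to a per-worker subdirectory
# so that no two workers race on the same files.
write_dir = base_dir if task_id == 0 else os.path.join(
    base_dir, "worker_{}".format(task_id))
checkpoint = tf.train.Checkpoint(model=model)
checkpoint.write(os.path.join(write_dir, "ckpt"))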
Known Issues
- tf.distribute.cluster_resolver.TFConfigClusterResolver does not return the correct number of accelerators. The strategy uses all available GPUs if cluster_resolver is tf.distribute.cluster_resolver.TFConfigClusterResolver or None.
- In eager mode, the strategy needs to be created before calling any other TensorFlow API.
Args
cluster_resolver optional tf.distribute.cluster_resolver.ClusterResolver. If None, tf.distribute.cluster_resolver.TFConfigClusterResolver is used.
communication_options optional tf.distribute.experimental.CommunicationOptions. This configures the default options for cross device communications. It can be overridden by options provided to the communication APIs like tf.distribute.ReplicaContext.all_reduce. See tf.distribute.experimental.CommunicationOptions for details.
Attributes
cluster_resolver Returns the cluster resolver associated with this strategy. As a multi-worker strategy, tf.distribute.experimental.MultiWorkerMirroredStrategy provides the associated tf.distribute.cluster_resolver.ClusterResolver. If the user provides one in __init__, that instance is returned; if the user does not, a default TFConfigClusterResolver is provided.
extended tf.distribute.StrategyExtended with additional methods.
num_replicas_in_sync Returns number of replicas over which gradients are aggregated.
Methods
distribute_datasets_from_function View source
distribute_datasets_from_function(
dataset_fn, options=None
)
Distributes tf.data.Dataset instances created by calls to dataset_fn. The argument dataset_fn that users pass in is an input function that has a tf.distribute.InputContext argument and returns a tf.data.Dataset instance. It is expected that the returned dataset from dataset_fn is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. tf.distribute.Strategy.distribute_datasets_from_function does not batch or shard the tf.data.Dataset instance returned from the input function. dataset_fn will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the Dataset every step).

This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, tf.distribute.experimental_distribute_dataset does batching and sharding for you.) For example, where experimental_distribute_dataset is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in experimental_distribute_dataset). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed.

The dataset_fn should take a tf.distribute.InputContext instance where information about batching and input replication can be accessed. You can use the element_spec property of the tf.distribute.DistributedDataset returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function. Follow tf.distribute.DistributedDataset.element_spec to see an example.

Key Point: The tf.data.Dataset returned by dataset_fn should have a per-replica batch size, unlike experimental_distribute_dataset, which uses the global batch size. This may be computed using input_context.get_per_replica_batch_size.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
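A minimal sketch of such a dataset_fn, batching to the per-replica size and sharding by input pipeline (the synthetic dataset and global batch size are placeholders):

global_batch_size = 16  # hypothetical

def dataset_fn(input_context):
  # Per-replica batch size = global batch size / number of replicas in sync.
  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
  dataset = tf.data.Dataset.range(100)
  # Shard manually so each input pipeline reads a disjoint slice.
  dataset = dataset.shard(input_context.num_input_pipelines,
                          input_context.input_pipeline_id)
  return dataset.batch(batch_size)

dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)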
Args
dataset_fn A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_distribute_dataset View source
experimental_distribute_dataset(
dataset, options=None
)
Creates tf.distribute.DistributedDataset from tf.data.Dataset. The returned tf.distribute.DistributedDataset can be iterated over similar to regular datasets. NOTE: The user cannot add any more transformations to a tf.distribute.DistributedDataset. You can only create an iterator or examine the tf.TypeSpec of the data generated by it. See API docs of tf.distribute.DistributedDataset to learn more. The following is an example:
global_batch_size = 2
# Passing the devices is optional.
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
# Create a dataset
dataset = tf.data.Dataset.range(4).batch(global_batch_size)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)
@tf.function
def replica_fn(input):
  return input*2

result = []
# Iterate over the `tf.distribute.DistributedDataset`
for x in dist_dataset:
  # process dataset elements
  result.append(strategy.run(replica_fn, args=(x,)))
print(result)
[PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])>
}, PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])>
}]
Three key actions happening under the hood of this method are batching, sharding, and prefetching. In the code snippet above, dataset is batched by global_batch_size, and calling experimental_distribute_dataset on it rebatches dataset to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. x is a tf.distribute.DistributedValues containing data for all replicas, and each replica gets data of the new batch size. tf.distribute.Strategy.run will take care of feeding the right per-replica data in x to the right replica_fn executed on each replica.

Sharding includes autosharding across multiple workers and within every worker. First, in multi-worker distributed training (i.e. when you use tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right tf.data.experimental.AutoShardPolicy is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using tf.data.experimental.DistributeOptions. Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding.
Note: for autosharding across multiple workers, the default mode is tf.data.experimental.AutoShardPolicy.AUTO. This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. tf.data.TFRecordDataset, tf.data.TextLineDataset, etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. However, if you have less than one input file per worker, we suggest that you disable dataset autosharding across workers by setting the tf.data.experimental.DistributeOptions.auto_shard_policy to be tf.data.experimental.AutoShardPolicy.OFF.
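As a sketch of the note above, autosharding can be turned off on the dataset before distributing it (the dataset here is a stand-in):

dataset = tf.data.Dataset.range(8).batch(2)
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = (
    tf.data.experimental.AutoShardPolicy.OFF)
dataset = dataset.with_options(options)
dist_dataset = strategy.experimental_distribute_dataset(dataset)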
By default, this method adds a prefetch transformation at the end of the user provided tf.data.Dataset instance. The argument to the prefetch transformation which is buffer_size is equal to the number of replicas in sync. If the above batch splitting and dataset sharding logic is undesirable, please use tf.distribute.Strategy.distribute_datasets_from_function instead, which does not do any automatic batching or sharding for you.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset tf.data.Dataset that will be sharded across all replicas using the rules stated above.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_distribute_values_from_function View source
experimental_distribute_values_from_function(
value_fn
)
Generates tf.distribute.DistributedValues from value_fn. This function is to generate tf.distribute.DistributedValues to pass into run, reduce, or other methods that take distributed values when not using datasets.
Args
value_fn The function to run to generate values. It is called for each replica with tf.distribute.ValueContext as the sole argument. It must return a Tensor or a type that can be converted to a Tensor.
Returns A tf.distribute.DistributedValues containing a value for each replica.
Example usage: Return constant value per replica:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
  return tf.constant(1.)
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(<tf.Tensor: shape=(), dtype=float32, numpy=1.0>,
<tf.Tensor: shape=(), dtype=float32, numpy=1.0>)
Distribute values in array based on replica_id:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
array_value = np.array([3., 2., 1.])
def value_fn(ctx):
  return array_value[ctx.replica_id_in_sync_group]
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(3.0, 2.0)
Specify values using num_replicas_in_sync:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
  return ctx.num_replicas_in_sync
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(2, 2)
Place values on devices and distribute:

strategy = tf.distribute.TPUStrategy()
worker_devices = strategy.extended.worker_devices
multiple_values = []
for i in range(strategy.num_replicas_in_sync):
  with tf.device(worker_devices[i]):
    multiple_values.append(tf.constant(1.0))

def value_fn(ctx):
  return multiple_values[ctx.replica_id_in_sync_group]
distributed_values = strategy.experimental_distribute_values_from_function(
    value_fn)
experimental_local_results View source
experimental_local_results(
value
)
Returns the list of all local per-replica values contained in value.
Note: This only returns values on the worker initiated by this client. When using a tf.distribute.Strategy like tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker will be its own client, and this function will only return values computed on that worker.
Args
value A value returned by experimental_run(), run(), extended.call_for_each_replica(), or a variable created in scope.
Returns A tuple of values contained in value. If value represents a single value, this returns (value,).
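For example, a small sketch with a two-GPU MirroredStrategy (the device names assume two local GPUs):

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
per_replica = strategy.run(lambda: tf.constant(1.0))
# local_values is a tuple with one tensor per local replica: (1.0, 1.0)
local_values = strategy.experimental_local_results(per_replica)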
gather View source
gather(
value, axis
)
Gather value across replicas along axis to the current device. Given a tf.distribute.DistributedValues or tf.Tensor-like object value, this API gathers and concatenates value across replicas along the axis-th dimension. The result is copied to the "current" device which would typically be the CPU of the worker on which the program is running. For tf.distribute.TPUStrategy, it is the first TPU host. For multi-client MultiWorkerMirroredStrategy, this is CPU of each worker. This API can only be called in the cross-replica context. For a counterpart in the replica context, see tf.distribute.ReplicaContext.all_gather.
Note: For all strategies except tf.distribute.TPUStrategy, the input value on different replicas must have the same rank, and their shapes must be the same in all dimensions except the axis-th dimension. In other words, their shapes cannot be different in a dimension d where d does not equal to the axis argument. For example, given a tf.distribute.DistributedValues with component tensors of shape (1, 2, 3) and (1, 3, 3) on two replicas, you can call gather(..., axis=1, ...) on it, but not gather(..., axis=0, ...) or gather(..., axis=2, ...). However, for tf.distribute.TPUStrategy.gather, all tensors must have exactly the same rank and same shape.
Note: Given a tf.distribute.DistributedValues value, its component tensors must have a non-zero rank. Otherwise, consider using tf.expand_dims before gathering them.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# A DistributedValues with component tensor of shape (2, 1) on each replica
distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(tf.constant([[1], [2]])))
@tf.function
def run():
  return strategy.gather(distributed_values, axis=0)
run()
<tf.Tensor: shape=(4, 1), dtype=int32, numpy=
array([[1],
[2],
[1],
[2]], dtype=int32)>
Consider the following example for more combinations:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1", "GPU:2", "GPU:3"])
single_tensor = tf.reshape(tf.range(6), shape=(1,2,3))
distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(single_tensor))
@tf.function
def run(axis):
  return strategy.gather(distributed_values, axis=axis)
axis=0
run(axis)
<tf.Tensor: shape=(4, 2, 3), dtype=int32, numpy=
array([[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]]], dtype=int32)>
axis=1
run(axis)
<tf.Tensor: shape=(1, 8, 3), dtype=int32, numpy=
array([[[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5]]], dtype=int32)>
axis=2
run(axis)
<tf.Tensor: shape=(1, 2, 12), dtype=int32, numpy=
array([[[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2],
[3, 4, 5, 3, 4, 5, 3, 4, 5, 3, 4, 5]]], dtype=int32)>
Args
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with tf.distribute.OneDeviceStrategy or the default strategy. The tensors that constitute the DistributedValues can only be dense tensors with non-zero rank, NOT a tf.IndexedSlices.
axis 0-D int32 Tensor. Dimension along which to gather. Must be in the range [0, rank(value)).
Returns A Tensor that's the concatenation of value across replicas along axis dimension.
reduce View source
reduce(
reduce_op, value, axis
)
Reduce value across replicas and return result on current device.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
  i = tf.distribute.get_replica_context().replica_id_in_sync_group
  return tf.identity(i)
per_replica_result = strategy.run(step_fn)
total = strategy.reduce("SUM", per_replica_result, axis=None)
total
<tf.Tensor: shape=(), dtype=int32, numpy=1>
To see how this would look with multiple replicas, consider the same example with MirroredStrategy with 2 GPUs:

strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
def step_fn():
  i = tf.distribute.get_replica_context().replica_id_in_sync_group
  return tf.identity(i)
per_replica_result = strategy.run(step_fn)
# Check devices on which per replica result is:
strategy.experimental_local_results(per_replica_result)[0].device
# /job:localhost/replica:0/task:0/device:GPU:0
strategy.experimental_local_results(per_replica_result)[1].device
# /job:localhost/replica:0/task:0/device:GPU:1
total = strategy.reduce("SUM", per_replica_result, axis=None)
# Check device on which reduced result is:
total.device
# /job:localhost/replica:0/task:0/device:CPU:0
This API is typically used for aggregating the results returned from different replicas, for reporting etc. For example, loss computed from different replicas can be averaged using this API before printing.
Note: The result is copied to the "current" device - which would typically be the CPU of the worker on which the program is running. For TPUStrategy, it is the first TPU host. For multi client MultiWorkerMirroredStrategy, this is CPU of each worker.
There are a number of different tf.distribute APIs for reducing values across replicas:
tf.distribute.ReplicaContext.all_reduce: This differs from Strategy.reduce in that it is for replica context and does not copy the results to the host device. all_reduce should be typically used for reductions inside the training step such as gradients.
tf.distribute.StrategyExtended.reduce_to and tf.distribute.StrategyExtended.batch_reduce_to: These APIs are more advanced versions of Strategy.reduce as they allow customizing the destination of the result. They are also called in cross-replica context.

What should axis be? Given a per-replica value returned by run, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly. For example, if you have a global batch size of 8 and 2 replicas, values for examples [0, 1, 2, 3] will be on replica 0 and [4, 5, 6, 7] will be on replica 1. With axis=None, reduce will aggregate only across replicas, returning [0+4, 1+5, 2+6, 3+7]. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss).

strategy.reduce("sum", per_replica_result, axis=None)
Sometimes, you will want to aggregate across both the global batch and all replicas. You can get this behavior by specifying the batch dimension as the axis, typically axis=0. In this case it would return a scalar 0+1+2+3+4+5+6+7.

strategy.reduce("sum", per_replica_result, axis=0)
If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify axis=0. If you specify tf.distribute.ReduceOp.MEAN, using axis=0 will use the correct denominator of 6. Contrast this with computing reduce_mean to get a scalar value on each replica and this function to average those means, which will weigh some values 1/8 and others 1/4.
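A sketch of these choices, assuming two replicas and a global batch of 8 (the replica function simply casts its shard):

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.range(8).batch(8)  # one global batch of 8
dist_dataset = strategy.experimental_distribute_dataset(dataset)
batch = next(iter(dist_dataset))  # each replica holds 4 examples

def replica_fn(x):
  return tf.cast(x, tf.float32)

per_replica = strategy.run(replica_fn, args=(batch,))
# axis=None: aggregate across replicas only -> shape (4,): [0+4, 1+5, 2+6, 3+7]
print(strategy.reduce("SUM", per_replica, axis=None))
# axis=0: also aggregate the batch dimension -> scalar 0+1+...+7 = 28
print(strategy.reduce("SUM", per_replica, axis=0))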
Args
reduce_op a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN".
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with OneDeviceStrategy or default strategy.
axis specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension).
Returns A Tensor.
run View source
run(
fn, args=(), kwargs=None, options=None
)
Invokes fn on each replica, with the given arguments. This method is the primary way to distribute your computation with a tf.distribute object. It invokes fn on each replica. If args or kwargs have tf.distribute.DistributedValues, such as those produced by a tf.distribute.DistributedDataset from tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function, when fn is executed on a particular replica, it will be executed with the component of tf.distribute.DistributedValues that corresponds to that replica.

fn is invoked under a replica context. fn may call tf.distribute.get_replica_context() to access members such as all_reduce. Please see the module-level docstring of tf.distribute for the concept of replica context.

All arguments in args or kwargs should either be Python values of a nested structure of tensors, e.g. a list of tensors, in which case args and kwargs will be passed to the fn invoked on each replica. Or args or kwargs can be tf.distribute.DistributedValues containing tensors or composite tensors, i.e. tf.compat.v1.TensorInfo.CompositeTensor, in which case each fn call will get the component of a tf.distribute.DistributedValues corresponding to its replica.

Key Point: Depending on the implementation of tf.distribute.Strategy and whether eager execution is enabled, fn may be called one or more times. If fn is annotated with tf.function or tf.distribute.Strategy.run is called inside a tf.function (eager execution is disabled inside a tf.function by default), fn is called once per replica to generate a TensorFlow graph, which will then be reused for execution with new inputs. Otherwise, if eager execution is enabled, fn will be called once per replica every step just like regular Python code.

Example usage:

Constant tensor input.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
tensor_input = tf.constant(3.0)
@tf.function
def replica_fn(input):
  return input*2.0
result = strategy.run(replica_fn, args=(tensor_input,))
result
PerReplica:{
0: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>,
1: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>
}
DistributedValues input.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
@tf.function
def run():
  def value_fn(value_context):
    return value_context.num_replicas_in_sync
  distributed_values = (
      strategy.experimental_distribute_values_from_function(
          value_fn))
  def replica_fn2(input):
    return input*2
  return strategy.run(replica_fn2, args=(distributed_values,))
result = run()
result
<tf.Tensor: shape=(), dtype=int32, numpy=4>
Use tf.distribute.ReplicaContext to allreduce values.
strategy = tf.distribute.MirroredStrategy(["gpu:0", "gpu:1"])
@tf.function
def run():
  def value_fn(value_context):
    return tf.constant(value_context.replica_id_in_sync_group)
  distributed_values = (
      strategy.experimental_distribute_values_from_function(
          value_fn))
  def replica_fn(input):
    return tf.distribute.get_replica_context().all_reduce("sum", input)
  return strategy.run(replica_fn, args=(distributed_values,))
result = run()
result
PerReplica:{
0: <tf.Tensor: shape=(), dtype=int32, numpy=1>,
1: <tf.Tensor: shape=(), dtype=int32, numpy=1>
}
Args
fn The function to run on each replica.
args Optional positional arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues.
kwargs Optional keyword arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues.
options An optional instance of tf.distribute.RunOptions specifying the options to run fn.
Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be tf.distribute.DistributedValues, Tensor objects, or Tensors (for example, if running on a single replica).
scope View source
scope()
Context manager to make the strategy current and distribute variables. This method returns a context manager, and is used as follows:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Variable created inside scope:
with strategy.scope():
  mirrored_variable = tf.Variable(1.)
mirrored_variable
MirroredVariable:{
0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>,
1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0>
}
# Variable created outside scope:
regular_variable = tf.Variable(1.)
regular_variable
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
What happens when Strategy.scope is entered?
strategy is installed in the global context as the "current" strategy. Inside this scope, tf.distribute.get_strategy() will now return this strategy. Outside this scope, it returns the default no-op strategy. Entering the scope also enters the "cross-replica context". See tf.distribute.StrategyExtended for an explanation on cross-replica and replica contexts.

Variable creation inside scope is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like MirroredStrategy, TPUStrategy and MultiWorkerMirroredStrategy create variables replicated on each replica, whereas ParameterServerStrategy creates variables on the parameter servers. This is done using a custom tf.variable_creator_scope.

In some strategies, a default device scope may also be entered: in MultiWorkerMirroredStrategy, a default device scope of "/CPU:0" is entered on each worker.
Note: Entering a scope does not automatically distribute a computation, except in the case of high level training framework like keras model.fit. If you're not using model.fit, you need to use strategy.run API to explicitly distribute that computation. See an example in the custom training loop tutorial.
What should be in scope and what should be outside? There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK).

Anything that creates variables that should be distributed variables must be called in strategy.scope. This can be either by directly putting it in scope, or relying on another API like strategy.run or model.fit to enter it for you. Any variable that is created outside scope will not be distributed and may have performance implications. Common things that create variables in TF are models, optimizers, and metrics; these should always be created inside the scope. Another source of variable creation can be a checkpoint restore, when variables are created lazily. Note that any variable created inside a strategy captures the strategy information, so reading and writing to these variables outside the strategy.scope can also work seamlessly, without the user having to enter the scope.

Some strategy APIs (such as strategy.run and strategy.reduce) which require being in a strategy's scope enter the scope for you automatically, which means when using those APIs you don't need to enter the scope yourself.

When a tf.keras.Model is created inside a strategy.scope, we capture this information. When high-level training framework methods such as model.compile, model.fit, etc. are then called on this model, we automatically enter the scope, as well as use this strategy to distribute the training. See a detailed example in the distributed keras tutorial. Note that simply calling model(..) is not impacted; only high-level training framework APIs are. model.compile, model.fit, model.evaluate, model.predict and model.save can all be called inside or outside the scope.

The following can be either inside or outside the scope, as shown in the sketch after this list:
- Creating the input datasets
- Defining tf.functions that represent your training step
- Saving APIs such as tf.saved_model.save. Loading creates variables, so that should go inside the scope if you want to train the model in a distributed way.
- Checkpoint saving. As mentioned above, checkpoint.restore may sometimes need to be inside scope if it creates variables.
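A minimal sketch of typical placement, assuming a Keras model and optimizer (variable-creating objects inside the scope; dataset creation outside):

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
with strategy.scope():
  # Models, optimizers and metrics create variables, so build them in scope.
  model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
  optimizer = tf.keras.optimizers.SGD()

# Dataset creation may live outside the scope.
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(8).batch(2)
dist_dataset = strategy.experimental_distribute_dataset(dataset)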
Returns A context manager.
tf.distribute.NcclAllReduce View source on GitHub NCCL all-reduce implementation of CrossDeviceOps. Inherits From: CrossDeviceOps View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.distribute.NcclAllReduce
tf.distribute.NcclAllReduce(
num_packs=1
)
It uses Nvidia NCCL for all-reduce. For the batch API, tensors will be repacked or aggregated for more efficient cross-device transportation. For reduces that are not all-reduce, it falls back to tf.distribute.ReductionToOneDevice. Here is how you can use NcclAllReduce in tf.distribute.MirroredStrategy:

strategy = tf.distribute.MirroredStrategy(
    cross_device_ops=tf.distribute.NcclAllReduce())
Args
num_packs a non-negative integer. The number of packs to split values into. If zero, no packing will be done.
Raises
ValueError if num_packs is negative.
Methods
batch_reduce View source
batch_reduce(
reduce_op, value_destination_pairs, options=None
)
Reduce values to destinations in batches. See tf.distribute.StrategyExtended.batch_reduce_to. This can only be called in the cross-replica context.
Args
reduce_op a tf.distribute.ReduceOp specifying how values should be combined.
value_destination_pairs a sequence of (value, destinations) pairs. See tf.distribute.CrossDeviceOps.reduce for descriptions.
options a tf.distribute.experimental.CommunicationOptions. See tf.distribute.experimental.CommunicationOptions for details.
Returns A list of tf.Tensor or tf.distribute.DistributedValues, one per pair in value_destination_pairs.
Raises
ValueError if value_destination_pairs is not an iterable of tuples of tf.distribute.DistributedValues and destinations.
broadcast View source
broadcast(
tensor, destinations
)
Broadcast tensor to destinations. This can only be called in the cross-replica context.
Args
tensor a tf.Tensor like object. The value to broadcast.
destinations a tf.distribute.DistributedValues, a tf.Variable, a tf.Tensor alike object, or a device string. It specifies the devices to broadcast to. Note that if it's a tf.Variable, the value is broadcasted to the devices of that variable, this method doesn't update the variable.
Returns A tf.Tensor or tf.distribute.DistributedValues.
reduce View source
reduce(
reduce_op, per_replica_value, destinations, options=None
)
Reduce per_replica_value to destinations. See tf.distribute.StrategyExtended.reduce_to. This can only be called in the cross-replica context.
Args
reduce_op a tf.distribute.ReduceOp specifying how values should be combined.
per_replica_value a tf.distribute.DistributedValues, or a tf.Tensor like object.
destinations a tf.distribute.DistributedValues, a tf.Variable, a tf.Tensor alike object, or a device string. It specifies the devices to reduce to. To perform an all-reduce, pass the same to value and destinations. Note that if it's a tf.Variable, the value is reduced to the devices of that variable, and this method doesn't update the variable.
options a tf.distribute.experimental.CommunicationOptions. See tf.distribute.experimental.CommunicationOptions for details.
Returns A tf.Tensor or tf.distribute.DistributedValues.
Raises
ValueError if per_replica_value can't be converted to a tf.distribute.DistributedValues or if destinations is not a string, tf.Variable or tf.distribute.DistributedValues.
tf.distribute.OneDeviceStrategy View source on GitHub A distribution strategy for running on a single device. Inherits From: Strategy
tf.distribute.OneDeviceStrategy(
device
)
Using this strategy will place any variables created in its scope on the specified device. Input distributed through this strategy will be prefetched to the specified device. Moreover, any functions called via strategy.run will be placed on the specified device as well. Typical usage of this strategy could be testing your code with the tf.distribute.Strategy API before switching to other strategies which actually distribute to multiple devices/machines. For example:

strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
with strategy.scope():
  v = tf.Variable(1.0)
  print(v.device)  # /job:localhost/replica:0/task:0/device:GPU:0

def step_fn(x):
  return x * 2

result = 0
for i in range(10):
  result += strategy.run(step_fn, args=(i,))
print(result)  # 90
Args
device Device string identifier for the device on which the variables should be placed. See class docs for more details on how the device is used. Examples: "/cpu:0", "/gpu:0", "/device:CPU:0", "/device:GPU:0"
Attributes
cluster_resolver Returns the cluster resolver associated with this strategy. In general, when using a multi-worker tf.distribute strategy such as tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy(), there is a tf.distribute.cluster_resolver.ClusterResolver associated with the strategy used, and such an instance is returned by this property. Strategies that intend to have an associated tf.distribute.cluster_resolver.ClusterResolver must set the relevant attribute, or override this property; otherwise, None is returned by default. Those strategies should also provide information regarding what is returned by this property. Single-worker strategies usually do not have a tf.distribute.cluster_resolver.ClusterResolver, and in those cases this property will return None. The tf.distribute.cluster_resolver.ClusterResolver may be useful when the user needs to access information such as the cluster spec, task type or task id. For example,
os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ["localhost:12345", "localhost:23456"],
        'ps': ["localhost:34567"]
    },
    'task': {'type': 'worker', 'index': 0}
})
# This implicitly uses TF_CONFIG for the cluster and current task info.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
...
if strategy.cluster_resolver.task_type == 'worker':
  # Perform something that's only applicable on workers. Since we set this
  # as a worker above, this block will run on this particular instance.
  ...
elif strategy.cluster_resolver.task_type == 'ps':
  # Perform something that's only applicable on parameter servers. Since we
  # set this as a worker above, this block will not run on this particular
  # instance.
  ...
For more information, please see tf.distribute.cluster_resolver.ClusterResolver's API docstring.
extended tf.distribute.StrategyExtended with additional methods.
num_replicas_in_sync Returns number of replicas over which gradients are aggregated.
Methods
distribute_datasets_from_function View source
distribute_datasets_from_function(
dataset_fn, options=None
)
Distributes tf.data.Dataset instances created by calls to dataset_fn. dataset_fn will be called once for each worker in the strategy. In this case, we only have one worker and one device so dataset_fn is called once. The dataset_fn should take a tf.distribute.InputContext instance where information about batching and input replication can be accessed:

def dataset_fn(input_context):
  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
  d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size)
  return d.shard(
      input_context.num_input_pipelines, input_context.input_pipeline_id)

inputs = strategy.distribute_datasets_from_function(dataset_fn)

for batch in inputs:
  replica_results = strategy.run(replica_fn, args=(batch,))
Key Point: The tf.data.Dataset returned by dataset_fn should have a per-replica batch size, unlike experimental_distribute_dataset, which uses the global batch size. This may be computed using input_context.get_per_replica_batch_size.
Args
dataset_fn A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A "distributed Dataset", which the caller can iterate over like regular datasets.
experimental_distribute_dataset View source
experimental_distribute_dataset(
dataset, options=None
)
Distributes a tf.data.Dataset instance provided via dataset. In this case, there is only one device, so this is only a thin wrapper around the input dataset. It will, however, prefetch the input data to the specified device. The returned distributed dataset can be iterated over similar to how regular datasets can.
Note: Currently, the user cannot add any more transformations to a distributed dataset.
Example:

strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")  # device is required
dataset = tf.data.Dataset.range(10).batch(2)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

for x in dist_dataset:
  print(x)  # [0, 1], [2, 3],...
Args
dataset tf.data.Dataset to be prefetched to device.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A "distributed Dataset" that the caller can iterate over.
experimental_distribute_values_from_function View source
experimental_distribute_values_from_function(
value_fn
)
Generates tf.distribute.DistributedValues from value_fn. This function is to generate tf.distribute.DistributedValues to pass into run, reduce, or other methods that take distributed values when not using datasets.
Args
value_fn The function to run to generate values. It is called for each replica with tf.distribute.ValueContext as the sole argument. It must return a Tensor or a type that can be converted to a Tensor.
Returns A tf.distribute.DistributedValues containing a value for each replica.
Example usage: Return constant value per replica:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
  return tf.constant(1.)
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(<tf.Tensor: shape=(), dtype=float32, numpy=1.0>,
<tf.Tensor: shape=(), dtype=float32, numpy=1.0>)
Distribute values in array based on replica_id:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
array_value = np.array([3., 2., 1.])
def value_fn(ctx):
  return array_value[ctx.replica_id_in_sync_group]
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(3.0, 2.0)
Specify values using num_replicas_in_sync:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
  return ctx.num_replicas_in_sync
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(2, 2)
Place values on devices and distribute:

strategy = tf.distribute.TPUStrategy()
worker_devices = strategy.extended.worker_devices
multiple_values = []
for i in range(strategy.num_replicas_in_sync):
  with tf.device(worker_devices[i]):
    multiple_values.append(tf.constant(1.0))

def value_fn(ctx):
  return multiple_values[ctx.replica_id_in_sync_group]
distributed_values = strategy.experimental_distribute_values_from_function(
    value_fn)
experimental_local_results View source
experimental_local_results(
value
)
Returns the list of all local per-replica values contained in value. In OneDeviceStrategy, the value is always expected to be a single value, so the result is just the value in a tuple.
Args
value A value returned by experimental_run(), run(), extended.call_for_each_replica(), or a variable created in scope.
Returns A tuple of values contained in value. If value represents a single value, this returns (value,).
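For instance, a sketch with a single CPU device (the device string is illustrative):

strategy = tf.distribute.OneDeviceStrategy("/cpu:0")
result = strategy.run(lambda: tf.constant(3.0))
strategy.experimental_local_results(result)  # (<tf.Tensor: ... numpy=3.0>,)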
gather View source
gather(
value, axis
)
Gather value across replicas along axis to the current device. Given a tf.distribute.DistributedValues or tf.Tensor-like object value, this API gathers and concatenates value across replicas along the axis-th dimension. The result is copied to the "current" device which would typically be the CPU of the worker on which the program is running. For tf.distribute.TPUStrategy, it is the first TPU host. For multi-client MultiWorkerMirroredStrategy, this is CPU of each worker. This API can only be called in the cross-replica context. For a counterpart in the replica context, see tf.distribute.ReplicaContext.all_gather.
Note: For all strategies except tf.distribute.TPUStrategy, the input value on different replicas must have the same rank, and their shapes must be the same in all dimensions except the axis-th dimension. In other words, their shapes cannot be different in a dimension d where d does not equal to the axis argument. For example, given a tf.distribute.DistributedValues with component tensors of shape (1, 2, 3) and (1, 3, 3) on two replicas, you can call gather(..., axis=1, ...) on it, but not gather(..., axis=0, ...) or gather(..., axis=2, ...). However, for tf.distribute.TPUStrategy.gather, all tensors must have exactly the same rank and same shape.
Note: Given a tf.distribute.DistributedValues value, its component tensors must have a non-zero rank. Otherwise, consider using tf.expand_dims before gathering them.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# A DistributedValues with component tensor of shape (2, 1) on each replica
distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(tf.constant([[1], [2]])))
@tf.function
def run():
  return strategy.gather(distributed_values, axis=0)
run()
<tf.Tensor: shape=(4, 1), dtype=int32, numpy=
array([[1],
[2],
[1],
[2]], dtype=int32)>
Consider the following example for more combinations:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1", "GPU:2", "GPU:3"])
single_tensor = tf.reshape(tf.range(6), shape=(1,2,3))
distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(single_tensor))
@tf.function
def run(axis):
  return strategy.gather(distributed_values, axis=axis)
axis=0
run(axis)
<tf.Tensor: shape=(4, 2, 3), dtype=int32, numpy=
array([[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]]], dtype=int32)>
axis=1
run(axis)
<tf.Tensor: shape=(1, 8, 3), dtype=int32, numpy=
array([[[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5]]], dtype=int32)>
axis=2
run(axis)
<tf.Tensor: shape=(1, 2, 12), dtype=int32, numpy=
array([[[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2],
[3, 4, 5, 3, 4, 5, 3, 4, 5, 3, 4, 5]]], dtype=int32)>
Args
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with tf.distribute.OneDeviceStrategy or the default strategy. The tensors that constitute the DistributedValues can only be dense tensors with non-zero rank, NOT a tf.IndexedSlices.
axis 0-D int32 Tensor. Dimension along which to gather. Must be in the range [0, rank(value)).
Returns A Tensor that's the concatenation of value across replicas along axis dimension.
reduce View source
reduce(
reduce_op, value, axis
)
Reduce value across replicas. In OneDeviceStrategy, there is only one replica, so if axis=None, value is simply returned. If axis is specified as something other than None, such as axis=0, value is reduced along that axis and returned.

Example:

t = tf.range(10)
result = strategy.reduce(tf.distribute.ReduceOp.SUM, t, axis=None).numpy()
# result: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
result = strategy.reduce(tf.distribute.ReduceOp.SUM, t, axis=0).numpy()
# result: 45
Args
reduce_op A tf.distribute.ReduceOp value specifying how values should be combined.
value A "per replica" value, e.g. returned by run to be combined into a single tensor.
axis Specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension).
Returns A Tensor.
run View source
run(
fn, args=(), kwargs=None, options=None
)
Run fn on each replica, with the given arguments. In OneDeviceStrategy, fn is simply called within a device scope for the given device, with the provided arguments.
Args
fn The function to run. The output must be a tf.nest of Tensors.
args (Optional) Positional arguments to fn.
kwargs (Optional) Keyword arguments to fn.
options (Optional) An instance of tf.distribute.RunOptions specifying the options to run fn.
Returns Return value from running fn.
scope View source
scope()
Returns a context manager selecting this Strategy as current. Inside a with strategy.scope(): code block, this thread will use a variable creator set by strategy, and will enter its "cross-replica context". In OneDeviceStrategy, all variables created inside strategy.scope() will be on device specified at strategy construction time. See example in the docs for this class.
Returns A context manager to use for creating variables with this strategy.
tf.distribute.ReduceOp View source on GitHub Indicates how a set of values should be reduced. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.distribute.ReduceOp
SUM: Add all the values.
MEAN: Take the arithmetic mean ("average") of the values.
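For example, either the enum member or its string name can be passed to reduction APIs such as Strategy.reduce (a sketch assuming strategy and per_replica_value already exist):

total = strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_value, axis=None)
mean = strategy.reduce("MEAN", per_replica_value, axis=None)  # string form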
Class Variables
MEAN tf.distribute.ReduceOp
SUM tf.distribute.ReduceOp
tf.distribute.ReductionToOneDevice View source on GitHub A CrossDeviceOps implementation that copies values to one device to reduce. Inherits From: CrossDeviceOps View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.distribute.ReductionToOneDevice
tf.distribute.ReductionToOneDevice(
reduce_to_device=None, accumulation_fn=None
)
This implementation always copies values to one device to reduce them, then broadcasts reduced values to the destinations. It doesn't support efficient batching. Here is how you can use ReductionToOneDevice in tf.distribute.MirroredStrategy:

strategy = tf.distribute.MirroredStrategy(
    cross_device_ops=tf.distribute.ReductionToOneDevice())
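A variant sketch that pins the intermediate reduction device explicitly (the device string is illustrative):

strategy = tf.distribute.MirroredStrategy(
    cross_device_ops=tf.distribute.ReductionToOneDevice(
        reduce_to_device="/cpu:0"))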
Args
reduce_to_device the intermediate device to reduce to. If None, reduce to the first device in destinations of the reduce method.
accumulation_fn a function that does accumulation. If None, tf.math.add_n is used.
Methods
batch_reduce View source
batch_reduce(
reduce_op, value_destination_pairs, options=None
)
Reduce values to destinations in batches. See tf.distribute.StrategyExtended.batch_reduce_to. This can only be called in the cross-replica context.
Args
reduce_op a tf.distribute.ReduceOp specifying how values should be combined.
value_destination_pairs a sequence of (value, destinations) pairs. See tf.distribute.CrossDeviceOps.reduce for descriptions.
options a tf.distribute.experimental.CommunicationOptions. See tf.distribute.experimental.CommunicationOptions for details.
Returns A list of tf.Tensor or tf.distribute.DistributedValues, one per pair in value_destination_pairs.
Raises
ValueError if value_destination_pairs is not an iterable of tuples of tf.distribute.DistributedValues and destinations.
broadcast View source
broadcast(
tensor, destinations
)
Broadcast tensor to destinations. This can only be called in the cross-replica context.
Args
tensor a tf.Tensor like object. The value to broadcast.
destinations a tf.distribute.DistributedValues, a tf.Variable, a tf.Tensor alike object, or a device string. It specifies the devices to broadcast to. Note that if it's a tf.Variable, the value is broadcasted to the devices of that variable, this method doesn't update the variable.
Returns A tf.Tensor or tf.distribute.DistributedValues.
reduce View source
reduce(
reduce_op, per_replica_value, destinations, options=None
)
Reduce per_replica_value to destinations. See tf.distribute.StrategyExtended.reduce_to. This can only be called in the cross-replica context.
Args
reduce_op a tf.distribute.ReduceOp specifying how values should be combined.
per_replica_value a tf.distribute.DistributedValues, or a tf.Tensor like object.
destinations a tf.distribute.DistributedValues, a tf.Variable, a tf.Tensor alike object, or a device string. It specifies the devices to reduce to. To perform an all-reduce, pass the same to value and destinations. Note that if it's a tf.Variable, the value is reduced to the devices of that variable, and this method doesn't update the variable.
options a tf.distribute.experimental.CommunicationOptions. See tf.distribute.experimental.CommunicationOptions for details.
Returns A tf.Tensor or tf.distribute.DistributedValues.
Raises
ValueError if per_replica_value can't be converted to a tf.distribute.DistributedValues or if destinations is not a string, tf.Variable or tf.distribute.DistributedValues.
tf.distribute.ReplicaContext View source on GitHub A class with a collection of APIs that can be called in a replica context.
tf.distribute.ReplicaContext(
strategy, replica_id_in_sync_group
)
You can use tf.distribute.get_replica_context to get an instance of ReplicaContext, which can only be called inside the function passed to tf.distribute.Strategy.run.
strategy = tf.distribute.MirroredStrategy(['GPU:0', 'GPU:1'])
def func():
  replica_context = tf.distribute.get_replica_context()
  return replica_context.replica_id_in_sync_group
strategy.run(func)
PerReplica:{
0: <tf.Tensor: shape=(), dtype=int32, numpy=0>,
1: <tf.Tensor: shape=(), dtype=int32, numpy=1>
}
Args
strategy A tf.distribute.Strategy.
replica_id_in_sync_group An integer, a Tensor or None. Prefer an integer whenever possible to avoid issues with nested tf.function. It accepts a Tensor only to be compatible with tpu.replicate.
Attributes
devices Returns the devices this replica is to be executed on, as a tuple of strings. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please avoid relying on devices property.
Note: For tf.distribute.MirroredStrategy and tf.distribute.experimental.MultiWorkerMirroredStrategy, this returns a nested list of device strings, e.g., [["GPU:0"]].
num_replicas_in_sync Returns number of replicas that are kept in sync.
replica_id_in_sync_group Returns the id of the replica. This identifies the replica among all replicas that are kept in sync. The value of the replica id can range from 0 to tf.distribute.ReplicaContext.num_replicas_in_sync - 1.
Note: This is not guaranteed to be the same ID as the XLA replica ID use for low-level operations such as collective_permute.
strategy The current tf.distribute.Strategy object. Methods all_gather View source
all_gather(
value, axis, options=None
)
All-gathers value across all replicas along axis.
Note: An all_gather method can only be called in replica context. For a cross-replica context counterpart, see tf.distribute.Strategy.gather. All replicas need to participate in the all-gather, otherwise this operation hangs. So if all_gather is called in any replica, it must be called in all replicas.
Note: If there are multiple all_gather calls, they need to be executed in the same order on all replicas. Dispatching all_gather based on conditions is usually error-prone.
For all strategies except tf.distribute.TPUStrategy, the input value on different replicas must have the same rank, and their shapes must be the same in all dimensions except the axis-th dimension. In other words, their shapes cannot be different in a dimension d where d does not equal to the axis argument. For example, given a tf.distribute.DistributedValues with component tensors of shape (1, 2, 3) and (1, 3, 3) on two replicas, you can call all_gather(..., axis=1, ...) on it, but not all_gather(..., axis=0, ...) or all_gather(..., axis=2, ...). However, with tf.distribute.TPUStrategy, all tensors must have exactly the same rank and same shape.
Note: The input value must have a non-zero rank. Otherwise, consider using tf.expand_dims before gathering them.
You can pass in a single tensor to all-gather:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
@tf.function
def gather_value():
  ctx = tf.distribute.get_replica_context()
  local_value = tf.constant([1, 2, 3])
  return ctx.all_gather(local_value, axis=0)
result = strategy.run(gather_value)
result
PerReplica:{
0: <tf.Tensor: shape=(6,), dtype=int32, numpy=array([1, 2, 3, 1, 2, 3], dtype=int32)>,
1: <tf.Tensor: shape=(6,), dtype=int32, numpy=array([1, 2, 3, 1, 2, 3], dtype=int32)>
}
strategy.experimental_local_results(result)
(<tf.Tensor: shape=(6,), dtype=int32, numpy=array([1, 2, 3, 1, 2, 3],
dtype=int32)>,
<tf.Tensor: shape=(6,), dtype=int32, numpy=array([1, 2, 3, 1, 2, 3],
dtype=int32)>)
You can also pass in a nested structure of tensors to all-gather, say, a list:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
@tf.function
def gather_nest():
ctx = tf.distribute.get_replica_context()
value_1 = tf.constant([1, 2, 3])
value_2 = tf.constant([[1, 2], [3, 4]])
# all_gather a nest of `tf.distribute.DistributedValues`
return ctx.all_gather([value_1, value_2], axis=0)
result = strategy.run(gather_nest)
result
[PerReplica:{
0: <tf.Tensor: shape=(6,), dtype=int32, numpy=array([1, 2, 3, 1, 2, 3], dtype=int32)>,
1: <tf.Tensor: shape=(6,), dtype=int32, numpy=array([1, 2, 3, 1, 2, 3], dtype=int32)>
}, PerReplica:{
0: <tf.Tensor: shape=(4, 2), dtype=int32, numpy=
array([[1, 2],
[3, 4],
[1, 2],
[3, 4]], dtype=int32)>,
1: <tf.Tensor: shape=(4, 2), dtype=int32, numpy=
array([[1, 2],
[3, 4],
[1, 2],
[3, 4]], dtype=int32)>
}]
strategy.experimental_local_results(result)
([PerReplica:{
0: <tf.Tensor: shape=(6,), dtype=int32, numpy=array([1, 2, 3, 1, 2, 3], dtype=int32)>,
1: <tf.Tensor: shape=(6,), dtype=int32, numpy=array([1, 2, 3, 1, 2, 3], dtype=int32)>
}, PerReplica:{
0: <tf.Tensor: shape=(4, 2), dtype=int32, numpy=
array([[1, 2],
[3, 4],
[1, 2],
[3, 4]], dtype=int32)>,
1: <tf.Tensor: shape=(4, 2), dtype=int32, numpy=
array([[1, 2],
[3, 4],
[1, 2],
[3, 4]], dtype=int32)>
}],)
What if you are all-gathering tensors with different shapes on different replicas? Consider the following example with two replicas, where you have value as a nested structure consisting of two items to all-gather, a and b. On Replica 0, value is {'a': [0], 'b': [[0, 1]]}. On Replica 1, value is {'a': [1], 'b': [[2, 3], [4, 5]]}. Result for all_gather with axis=0 (on each of the replicas): {'a': [0, 1], 'b': [[0, 1], [2, 3], [4, 5]]}
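As a minimal sketch of this behavior (two replicas and illustrative values assumed, not taken from the original docs), you can build per-replica tensors whose sizes differ only along the gather axis:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Shape (1, 2) on replica 0 and (2, 2) on replica 1: the shapes differ
# only in the axis-0 dimension, so gathering along axis=0 is allowed.
per_replica = strategy.experimental_distribute_values_from_function(
    lambda ctx: tf.ones((ctx.replica_id_in_sync_group + 1, 2)))
@tf.function
def step_fn(value):
  ctx = tf.distribute.get_replica_context()
  return ctx.all_gather(value, axis=0)
result = strategy.run(step_fn, args=(per_replica,))
# Each replica receives the concatenated tensor of shape (3, 2).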
Args
value a nested structure of tf.Tensor which tf.nest.flatten accepts, or a tf.distribute.DistributedValues instance. The structure of the tf.Tensor needs to be the same on all replicas. The underlying tensor constructs can only be dense tensors with non-zero rank, NOT tf.IndexedSlices.
axis 0-D int32 Tensor. Dimension along which to gather.
options a tf.distribute.experimental.CommunicationOptions. Options to perform collective operations. This overrides the default options if the tf.distribute.Strategy takes one in the constructor. See tf.distribute.experimental.CommunicationOptions for details of the options.
Returns A nested structure of tf.Tensor with the gathered values. The structure is the same as value.
all_reduce View source
all_reduce(
reduce_op, value, options=None
)
All-reduces value across all replicas.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
ctx = tf.distribute.get_replica_context()
value = tf.identity(1.)
return ctx.all_reduce(tf.distribute.ReduceOp.SUM, value)
strategy.experimental_local_results(strategy.run(step_fn))
(<tf.Tensor: shape=(), dtype=float32, numpy=2.0>,
<tf.Tensor: shape=(), dtype=float32, numpy=2.0>)
It supports batched operations. You can pass a list of values and it attempts to batch them when possible. You can also specify options to indicate the desired batching behavior, e.g. batch the values into multiple packs so that they can better overlap with computations.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
ctx = tf.distribute.get_replica_context()
value1 = tf.identity(1.)
value2 = tf.identity(2.)
return ctx.all_reduce(tf.distribute.ReduceOp.SUM, [value1, value2])
strategy.experimental_local_results(strategy.run(step_fn))
([PerReplica:{
0: <tf.Tensor: shape=(), dtype=float32, numpy=2.0>,
1: <tf.Tensor: shape=(), dtype=float32, numpy=2.0>
}, PerReplica:{
0: <tf.Tensor: shape=(), dtype=float32, numpy=4.0>,
1: <tf.Tensor: shape=(), dtype=float32, numpy=4.0>
}],)
Note that all replicas need to participate in the all-reduce, otherwise this operation hangs. Note that if there are multiple all-reduces, they need to execute in the same order on all replicas. Dispatching all-reduce based on conditions is usually error-prone. This API currently can only be called in the replica context. Other variants to reduce values across replicas are:
tf.distribute.StrategyExtended.reduce_to: the reduce and all-reduce API in the cross-replica context.
tf.distribute.StrategyExtended.batch_reduce_to: the batched reduce and all-reduce API in the cross-replica context.
tf.distribute.Strategy.reduce: a more convenient method to reduce to the host in cross-replica context.
Args
reduce_op a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN".
value a nested structure of tf.Tensor which tf.nest.flatten accepts. The structure and the shapes of the tf.Tensor need to be the same on all replicas.
options a tf.distribute.experimental.CommunicationOptions. Options to perform collective operations. This overrides the default options if the tf.distribute.Strategy takes one in the constructor. See tf.distribute.experimental.CommunicationOptions for details of the options.
Returns A nested structure of tf.Tensor with the reduced values. The structure is the same as value.
merge_call View source
merge_call(
merge_fn, args=(), kwargs=None
)
Merge args across replicas and run merge_fn in a cross-replica context. This allows communication and coordination when there are multiple calls to the step_fn triggered by a call to strategy.run(step_fn, ...). See tf.distribute.Strategy.run for an explanation. If not inside a distributed scope, this is equivalent to: strategy = tf.distribute.get_strategy()
with cross-replica-context(strategy):
return merge_fn(strategy, *args, **kwargs)
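As a minimal sketch (two replicas assumed; the step function below is illustrative), merge_call can be used to hop into cross-replica context and aggregate a per-replica value there:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
@tf.function
def step_fn():
  def merge_fn(strategy, per_replica_value):
    # Runs once, in cross-replica context; `per_replica_value` is a
    # PerReplica holding the argument from every replica.
    return strategy.reduce(
        tf.distribute.ReduceOp.SUM, per_replica_value, axis=None)
  ctx = tf.distribute.get_replica_context()
  return ctx.merge_call(merge_fn, args=(tf.identity(1.),))
strategy.run(step_fn)  # 2.0 on each replica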
Args
merge_fn Function that joins arguments from threads that are given as PerReplica. It accepts a tf.distribute.Strategy object as the first argument.
args List or tuple with positional per-thread arguments for merge_fn.
kwargs Dict with keyword per-thread arguments for merge_fn.
Returns The return value of merge_fn, except for PerReplica values which are unpacked. | tensorflow.distribute.replicacontext |
tf.distribute.RunOptions Run options for strategy.run. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.distribute.RunOptions
tf.distribute.RunOptions(
experimental_enable_dynamic_batch_size=True,
experimental_bucketizing_dynamic_shape=False
)
This can be used to hold some strategy-specific configs.
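For example, a minimal sketch (assuming a tf.distribute.TPUStrategy named strategy, a step function step_fn, and an iterator, as in the TPUStrategy example later on this page):
# All inputs here are assumed to have static shapes, so the dynamic
# padder can be disabled.
options = tf.distribute.RunOptions(
    experimental_enable_dynamic_batch_size=False)
strategy.run(step_fn, args=(next(iterator),), options=options)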
Attributes
experimental_enable_dynamic_batch_size Boolean. Only applies to TPUStrategy. Defaults to True. If True, TPUStrategy will enable the dynamic padder to support dynamic batch sizes for the inputs. Otherwise only static-shape inputs are allowed.
experimental_bucketizing_dynamic_shape Boolean. Only applies to TPUStrategy. Defaults to False. If True, TPUStrategy will automatically bucketize inputs passed into run if the input shape is dynamic. This is a performance optimization to reduce XLA recompilation and should not affect correctness. | tensorflow.distribute.runoptions |
tf.distribute.Server View source on GitHub An in-process TensorFlow server, for use in distributed training. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.distribute.Server, tf.compat.v1.train.Server
tf.distribute.Server(
server_or_cluster_def, job_name=None, task_index=None, protocol=None,
config=None, start=True
)
A tf.distribute.Server instance encapsulates a set of devices and a tf.compat.v1.Session target that can participate in distributed training. A server belongs to a cluster (specified by a tf.train.ClusterSpec), and corresponds to a particular task in a named job. The server can communicate with any other server in the same cluster.
Args
server_or_cluster_def A tf.train.ServerDef or tf.train.ClusterDef protocol buffer, or a tf.train.ClusterSpec object, describing the server to be created and/or the cluster of which it is a member.
job_name (Optional.) Specifies the name of the job of which the server is a member. Defaults to the value in server_or_cluster_def, if specified.
task_index (Optional.) Specifies the task index of the server in its job. Defaults to the value in server_or_cluster_def, if specified. Otherwise defaults to 0 if the server's job has only one task.
protocol (Optional.) Specifies the protocol to be used by the server. Acceptable values include "grpc", "grpc+verbs". Defaults to the value in server_or_cluster_def, if specified. Otherwise defaults to "grpc".
config (Optional.) A tf.compat.v1.ConfigProto that specifies default configuration options for all sessions that run on this server.
start (Optional.) Boolean, indicating whether to start the server after creating it. Defaults to True.
Raises
tf.errors.OpError Or one of its subclasses if an error occurs while creating the TensorFlow server.
Attributes
server_def Returns the tf.train.ServerDef for this server.
target Returns the target for a tf.compat.v1.Session to connect to this server. To create a tf.compat.v1.Session that connects to this server, use the following snippet: server = tf.distribute.Server(...)
with tf.compat.v1.Session(server.target):
# ...
Methods create_local_server View source
@staticmethod
create_local_server(
config=None, start=True
)
Creates a new single-process cluster running on the local host. This method is a convenience wrapper for creating a tf.distribute.Server with a tf.train.ServerDef that specifies a single-process cluster containing a single task in a job called "local".
Args
config (Optional.) A tf.compat.v1.ConfigProto that specifies default configuration options for all sessions that run on this server.
start (Optional.) Boolean, indicating whether to start the server after creating it. Defaults to True.
Returns A local tf.distribute.Server.
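A minimal usage sketch:
server = tf.distribute.Server.create_local_server()
# Connect a session to the in-process server via its target.
with tf.compat.v1.Session(server.target) as sess:
  print(sess.run(tf.constant(1)))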
join View source
join()
Blocks until the server has shut down. This method currently blocks forever.
Raises
tf.errors.OpError Or one of its subclasses if an error occurs while joining the TensorFlow server. start View source
start()
Starts this server.
Raises
tf.errors.OpError Or one of its subclasses if an error occurs while starting the TensorFlow server. | tensorflow.distribute.server |
tf.distribute.Strategy View source on GitHub A state & compute distribution policy on a list of devices.
tf.distribute.Strategy(
extended
)
See the guide for overview and examples. See tf.distribute.StrategyExtended and tf.distribute for a glossary of concepts mentioned on this page such as "per-replica", replica, and reduce. In short: To use it with Keras compile/fit, please read the Keras guide. You may pass a descendant of tf.distribute.Strategy to tf.estimator.RunConfig to specify how a tf.estimator.Estimator should distribute its computation. See guide. Otherwise, use tf.distribute.Strategy.scope to specify that a strategy should be used when building and executing your model. (This puts you in the "cross-replica context" for this strategy, which means the strategy is put in control of things like variable placement.)
If you are writing a custom training loop, you will need to call a few more methods, see the guide: Start by creating a tf.data.Dataset normally. Use tf.distribute.Strategy.experimental_distribute_dataset to convert a tf.data.Dataset to something that produces "per-replica" values. If you want to manually specify how the dataset should be partitioned across replicas, use tf.distribute.Strategy.distribute_datasets_from_function instead. Use tf.distribute.Strategy.run to run a function once per replica, taking values that may be "per-replica" (e.g. from a tf.distribute.DistributedDataset object) and returning "per-replica" values. This function is executed in "replica context", which means each operation is performed separately on each replica. Finally use a method (such as tf.distribute.Strategy.reduce) to convert the resulting "per-replica" values into ordinary Tensors.
A custom training loop can be as simple as: with my_strategy.scope():
@tf.function
def distribute_train_epoch(dataset):
def replica_fn(input):
# process input and return a per-replica result; as a placeholder:
return input * 2.
total_result = 0
for x in dataset:
per_replica_result = my_strategy.run(replica_fn, args=(x,))
total_result += my_strategy.reduce(tf.distribute.ReduceOp.SUM,
per_replica_result, axis=None)
return total_result
dist_dataset = my_strategy.experimental_distribute_dataset(dataset)
for _ in range(EPOCHS):
train_result = distribute_train_epoch(dist_dataset)
This takes an ordinary dataset and replica_fn and runs it in a distributed fashion using a particular tf.distribute.Strategy named my_strategy above. Any variables created in replica_fn are created using my_strategy's policy, and library functions called by replica_fn can use the get_replica_context() API to implement distributed-specific behavior. You can use the reduce API to aggregate results across replicas and use this as a return value from one iteration over a tf.distribute.DistributedDataset. Or you can use tf.keras.metrics (such as loss, accuracy, etc.) to accumulate metrics across steps in a given epoch. See the custom training loop tutorial for a more detailed example.
Note: tf.distribute.Strategy currently does not support TensorFlow's partitioned variables (where a single variable is split across multiple devices).
Attributes
cluster_resolver Returns the cluster resolver associated with this strategy. In general, when using a multi-worker tf.distribute strategy such as tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy(), there is a tf.distribute.cluster_resolver.ClusterResolver associated with the strategy used, and such an instance is returned by this property. Strategies that intend to have an associated tf.distribute.cluster_resolver.ClusterResolver must set the relevant attribute, or override this property; otherwise, None is returned by default. Those strategies should also provide information regarding what is returned by this property. Single-worker strategies usually do not have a tf.distribute.cluster_resolver.ClusterResolver, and in those cases this property will return None. The tf.distribute.cluster_resolver.ClusterResolver may be useful when the user needs to access information such as the cluster spec, task type or task id. For example,
os.environ['TF_CONFIG'] = json.dumps({
'cluster': {
'worker': ["localhost:12345", "localhost:23456"],
'ps': ["localhost:34567"]
},
'task': {'type': 'worker', 'index': 0}
})
# This implicitly uses TF_CONFIG for the cluster and current task info.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
...
if strategy.cluster_resolver.task_type == 'worker':
# Perform something that's only applicable on workers. Since we set this
# as a worker above, this block will run on this particular instance.
...
elif strategy.cluster_resolver.task_type == 'ps':
# Perform something that's only applicable on parameter servers. Since we
# set this as a worker above, this block will not run on this particular
# instance.
...
For more information, please see tf.distribute.cluster_resolver.ClusterResolver's API docstring.
extended tf.distribute.StrategyExtended with additional methods.
num_replicas_in_sync Returns number of replicas over which gradients are aggregated. Methods distribute_datasets_from_function View source
distribute_datasets_from_function(
dataset_fn, options=None
)
Distributes tf.data.Dataset instances created by calls to dataset_fn. The argument dataset_fn that users pass in is an input function that has a tf.distribute.InputContext argument and returns a tf.data.Dataset instance. It is expected that the returned dataset from dataset_fn is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. tf.distribute.Strategy.distribute_datasets_from_function does not batch or shard the tf.data.Dataset instance returned from the input function. dataset_fn will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the Dataset every step). This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, tf.distribute.experimental_distribute_dataset does batching and sharding for you.) For example, where experimental_distribute_dataset is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in experimental_distribute_dataset). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. The dataset_fn should take a tf.distribute.InputContext instance where information about batching and input replication can be accessed. You can use the element_spec property of the tf.distribute.DistributedDataset returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function. Follow tf.distribute.DistributedDataset.element_spec to see an example. Key Point: The tf.data.Dataset returned by dataset_fn should have a per-replica batch size, unlike experimental_distribute_dataset, which uses the global batch size. This may be computed using input_context.get_per_replica_batch_size.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
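A minimal sketch of a dataset_fn that batches by the per-replica batch size and shards by input pipeline (the global batch size of 16 and the strategy variable are assumptions for illustration):
global_batch_size = 16
def dataset_fn(input_context):
  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
  dataset = tf.data.Dataset.range(100)
  # Shard across input pipelines so each worker reads a disjoint subset.
  dataset = dataset.shard(input_context.num_input_pipelines,
                          input_context.input_pipeline_id)
  return dataset.batch(batch_size)
dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)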
Args
dataset_fn A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_distribute_dataset View source
experimental_distribute_dataset(
dataset, options=None
)
Creates tf.distribute.DistributedDataset from tf.data.Dataset. The returned tf.distribute.DistributedDataset can be iterated over similar to regular datasets. NOTE: The user cannot add any more transformations to a tf.distribute.DistributedDataset. You can only create an iterator or examine the tf.TypeSpec of the data generated by it. See API docs of tf.distribute.DistributedDataset to learn more. The following is an example:
global_batch_size = 2
# Passing the devices is optional.
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
# Create a dataset
dataset = tf.data.Dataset.range(4).batch(global_batch_size)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)
@tf.function
def replica_fn(input):
return input*2
result = []
# Iterate over the `tf.distribute.DistributedDataset`
for x in dist_dataset:
# process dataset elements
result.append(strategy.run(replica_fn, args=(x,)))
print(result)
[PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])>
}, PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])>
}]
Three key actions happening under the hood of this method are batching, sharding, and prefetching. In the code snippet above, dataset is batched by global_batch_size, and calling experimental_distribute_dataset on it rebatches dataset to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. x is a tf.distribute.DistributedValues containing data for all replicas, and each replica gets data of the new batch size. tf.distribute.Strategy.run will take care of feeding the right per-replica data in x to the right replica_fn executed on each replica. Sharding covers autosharding across multiple workers and within every worker. First, in multi-worker distributed training (i.e. when you use tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right tf.data.experimental.AutoShardPolicy is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using tf.data.experimental.DistributeOptions. Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding.
Note: for autosharding across multiple workers, the default mode is tf.data.experimental.AutoShardPolicy.AUTO. This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. tf.data.TFRecordDataset, tf.data.TextLineDataset, etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. However, if you have less than one input file per worker, we suggest that you disable dataset autosharding across workers by setting the tf.data.experimental.DistributeOptions.auto_shard_policy to be tf.data.experimental.AutoShardPolicy.OFF.
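A minimal sketch of disabling autosharding across workers (assuming an existing tf.data.Dataset named dataset):
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = (
    tf.data.experimental.AutoShardPolicy.OFF)
dataset = dataset.with_options(options)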
By default, this method adds a prefetch transformation at the end of the user-provided tf.data.Dataset instance. The buffer_size argument of the prefetch transformation is equal to the number of replicas in sync. If the above batch splitting and dataset sharding logic is undesirable, please use tf.distribute.Strategy.distribute_datasets_from_function instead, which does not do any automatic batching or sharding for you.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset tf.data.Dataset that will be sharded across all replicas using the rules stated above.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_distribute_values_from_function View source
experimental_distribute_values_from_function(
value_fn
)
Generates tf.distribute.DistributedValues from value_fn. This function is to generate tf.distribute.DistributedValues to pass into run, reduce, or other methods that take distributed values when not using datasets.
Args
value_fn The function to run to generate values. It is called for each replica with tf.distribute.ValueContext as the sole argument. It must return a Tensor or a type that can be converted to a Tensor.
Returns A tf.distribute.DistributedValues containing a value for each replica.
Example usage: Return constant value per replica:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
return tf.constant(1.)
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(<tf.Tensor: shape=(), dtype=float32, numpy=1.0>,
<tf.Tensor: shape=(), dtype=float32, numpy=1.0>)
Distribute values in array based on replica_id:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
array_value = np.array([3., 2., 1.])
def value_fn(ctx):
return array_value[ctx.replica_id_in_sync_group]
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(3.0, 2.0)
Specify values using num_replicas_in_sync:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
return ctx.num_replicas_in_sync
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(2, 2)
Place values on devices and distribute: strategy = tf.distribute.TPUStrategy()
worker_devices = strategy.extended.worker_devices
multiple_values = []
for i in range(strategy.num_replicas_in_sync):
with tf.device(worker_devices[i]):
multiple_values.append(tf.constant(1.0))
def value_fn(ctx):
return multiple_values[ctx.replica_id_in_sync_group]
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
experimental_local_results View source
experimental_local_results(
value
)
Returns the list of all local per-replica values contained in value.
Note: This only returns values on the worker initiated by this client. When using a tf.distribute.Strategy like tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker will be its own client, and this function will only return values computed on that worker.
Args
value A value returned by experimental_run(), run(), extended.call_for_each_replica(), or a variable created in scope.
Returns A tuple of values contained in value. If value represents a single value, this returns (value,).
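A minimal sketch (two local replicas assumed):
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
per_replica = strategy.run(
    lambda: tf.distribute.get_replica_context().replica_id_in_sync_group)
# Unwrap the PerReplica value into a tuple of per-device results.
strategy.experimental_local_results(per_replica)  # replica ids 0 and 1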
gather View source
gather(
value, axis
)
Gather value across replicas along axis to the current device. Given a tf.distribute.DistributedValues or tf.Tensor-like object value, this API gathers and concatenates value across replicas along the axis-th dimension. The result is copied to the "current" device, which would typically be the CPU of the worker on which the program is running. For tf.distribute.TPUStrategy, it is the first TPU host. For multi-client MultiWorkerMirroredStrategy, this is the CPU of each worker. This API can only be called in the cross-replica context. For a counterpart in the replica context, see tf.distribute.ReplicaContext.all_gather.
Note: For all strategies except tf.distribute.TPUStrategy, the input value on different replicas must have the same rank, and their shapes must be the same in all dimensions except the axis-th dimension. In other words, their shapes cannot differ in any dimension d where d is not equal to the axis argument. For example, given a tf.distribute.DistributedValues with component tensors of shape (1, 2, 3) and (1, 3, 3) on two replicas, you can call gather(..., axis=1, ...) on it, but not gather(..., axis=0, ...) or gather(..., axis=2, ...). However, for tf.distribute.TPUStrategy.gather, all tensors must have exactly the same rank and same shape.
Note: Given a tf.distribute.DistributedValues value, its component tensors must have a non-zero rank. Otherwise, consider using tf.expand_dims before gathering them.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# A DistributedValues with component tensor of shape (2, 1) on each replica
distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(tf.constant([[1], [2]])))
@tf.function
def run():
return strategy.gather(distributed_values, axis=0)
run()
<tf.Tensor: shape=(4, 1), dtype=int32, numpy=
array([[1],
[2],
[1],
[2]], dtype=int32)>
Consider the following example for more combinations:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1", "GPU:2", "GPU:3"])
single_tensor = tf.reshape(tf.range(6), shape=(1,2,3))
distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(single_tensor))
@tf.function
def run(axis):
return strategy.gather(distributed_values, axis=axis)
axis=0
run(axis)
<tf.Tensor: shape=(4, 2, 3), dtype=int32, numpy=
array([[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]]], dtype=int32)>
axis=1
run(axis)
<tf.Tensor: shape=(1, 8, 3), dtype=int32, numpy=
array([[[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5]]], dtype=int32)>
axis=2
run(axis)
<tf.Tensor: shape=(1, 2, 12), dtype=int32, numpy=
array([[[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2],
[3, 4, 5, 3, 4, 5, 3, 4, 5, 3, 4, 5]]], dtype=int32)>
Args
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with tf.distribute.OneDeviceStrategy or the default strategy. The tensors that constitute the DistributedValues can only be dense tensors with non-zero rank, NOT a tf.IndexedSlices.
axis 0-D int32 Tensor. Dimension along which to gather. Must be in the range [0, rank(value)).
Returns A Tensor that's the concatenation of value across replicas along the axis dimension.
reduce View source
reduce(
reduce_op, value, axis
)
Reduce value across replicas and return result on current device.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
total = strategy.reduce("SUM", per_replica_result, axis=None)
total
<tf.Tensor: shape=(), dtype=int32, numpy=1>
To see how this would look with multiple replicas, consider the same example with MirroredStrategy with 2 GPUs: strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
# Check devices on which per replica result is:
strategy.experimental_local_results(per_replica_result)[0].device
# /job:localhost/replica:0/task:0/device:GPU:0
strategy.experimental_local_results(per_replica_result)[1].device
# /job:localhost/replica:0/task:0/device:GPU:1
total = strategy.reduce("SUM", per_replica_result, axis=None)
# Check device on which reduced result is:
total.device
# /job:localhost/replica:0/task:0/device:CPU:0
This API is typically used for aggregating the results returned from different replicas, for reporting etc. For example, loss computed from different replicas can be averaged using this API before printing.
Note: The result is copied to the "current" device, which would typically be the CPU of the worker on which the program is running. For TPUStrategy, it is the first TPU host. For multi-client MultiWorkerMirroredStrategy, this is the CPU of each worker.
There are a number of different tf.distribute APIs for reducing values across replicas:
tf.distribute.ReplicaContext.all_reduce: This differs from Strategy.reduce in that it is for replica context and does not copy the results to the host device. all_reduce should be typically used for reductions inside the training step such as gradients.
tf.distribute.StrategyExtended.reduce_to and tf.distribute.StrategyExtended.batch_reduce_to: These APIs are more advanced versions of Strategy.reduce as they allow customizing the destination of the result. They are also called in cross-replica context. What should axis be? Given a per-replica value returned by run, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly. For example, if you have a global batch size of 8 and 2 replicas, values for examples [0, 1, 2, 3] will be on replica 0 and [4, 5, 6, 7] will be on replica 1. With axis=None, reduce will aggregate only across replicas, returning [0+4, 1+5, 2+6, 3+7]. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss). strategy.reduce("sum", per_replica_result, axis=None)
Sometimes, you will want to aggregate across both the global batch and all replicas. You can get this behavior by specifying the batch dimension as the axis, typically axis=0. In this case it would return a scalar 0+1+2+3+4+5+6+7. strategy.reduce("sum", per_replica_result, axis=0)
If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify axis=0. If you specify tf.distribute.ReduceOp.MEAN, using axis=0 will use the correct denominator of 6. Contrast this with first computing reduce_mean to get a scalar value on each replica and then using this function to average those means: that approach would weight some values at 1/8 and others at 1/4.
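A minimal sketch of these axis semantics (two replicas; the values are arranged to mimic the global batch of 8 above):
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
per_replica = strategy.experimental_distribute_values_from_function(
    lambda ctx: tf.constant([0., 1., 2., 3.])
    + 4. * ctx.replica_id_in_sync_group)
# Aggregate across replicas only: [0+4, 1+5, 2+6, 3+7].
strategy.reduce("SUM", per_replica, axis=None)
# Aggregate across replicas and the batch dimension: 28.0.
strategy.reduce("SUM", per_replica, axis=0)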
Args
reduce_op a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN".
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with OneDeviceStrategy or default strategy.
axis specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension).
Returns A Tensor.
run View source
run(
fn, args=(), kwargs=None, options=None
)
Invokes fn on each replica, with the given arguments. This method is the primary way to distribute your computation with a tf.distribute object. It invokes fn on each replica. If args or kwargs have tf.distribute.DistributedValues, such as those produced by a tf.distribute.DistributedDataset from tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function, when fn is executed on a particular replica, it will be executed with the component of tf.distribute.DistributedValues that correspond to that replica. fn is invoked under a replica context. fn may call tf.distribute.get_replica_context() to access members such as all_reduce. Please see the module-level docstring of tf.distribute for the concept of replica context. All arguments in args or kwargs should either be Python values of a nested structure of tensors, e.g. a list of tensors, in which case args and kwargs will be passed to the fn invoked on each replica. Or args or kwargs can be tf.distribute.DistributedValues containing tensors or composite tensors, i.e. tf.compat.v1.TensorInfo.CompositeTensor, in which case each fn call will get the component of a tf.distribute.DistributedValues corresponding to its replica. Key Point: Depending on the implementation of tf.distribute.Strategy and whether eager execution is enabled, fn may be called one or more times. If fn is annotated with tf.function or tf.distribute.Strategy.run is called inside a tf.function (eager execution is disabled inside a tf.function by default), fn is called once per replica to generate a Tensorflow graph, which will then be reused for execution with new inputs. Otherwise, if eager execution is enabled, fn will be called once per replica every step just like regular python code. Example usage: Constant tensor input.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
tensor_input = tf.constant(3.0)
@tf.function
def replica_fn(input):
return input*2.0
result = strategy.run(replica_fn, args=(tensor_input,))
result
PerReplica:{
0: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>,
1: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>
}
DistributedValues input.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
@tf.function
def run():
def value_fn(value_context):
return value_context.num_replicas_in_sync
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
def replica_fn2(input):
return input*2
return strategy.run(replica_fn2, args=(distributed_values,))
result = run()
result
<tf.Tensor: shape=(), dtype=int32, numpy=4>
Use tf.distribute.ReplicaContext to allreduce values.
strategy = tf.distribute.MirroredStrategy(["gpu:0", "gpu:1"])
@tf.function
def run():
def value_fn(value_context):
return tf.constant(value_context.replica_id_in_sync_group)
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
def replica_fn(input):
return tf.distribute.get_replica_context().all_reduce("sum", input)
return strategy.run(replica_fn, args=(distributed_values,))
result = run()
result
PerReplica:{
0: <tf.Tensor: shape=(), dtype=int32, numpy=1>,
1: <tf.Tensor: shape=(), dtype=int32, numpy=1>
}
Args
fn The function to run on each replica.
args Optional positional arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues.
kwargs Optional keyword arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues.
options An optional instance of tf.distribute.RunOptions specifying the options to run fn.
Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be tf.distribute.DistributedValues, Tensor objects, or Tensors (for example, if running on a single replica).
scope View source
scope()
Context manager to make the strategy current and distribute variables. This method returns a context manager, and is used as follows:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Variable created inside scope:
with strategy.scope():
mirrored_variable = tf.Variable(1.)
mirrored_variable
MirroredVariable:{
0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>,
1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0>
}
# Variable created outside scope:
regular_variable = tf.Variable(1.)
regular_variable
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
What happens when Strategy.scope is entered?
strategy is installed in the global context as the "current" strategy. Inside this scope, tf.distribute.get_strategy() will now return this strategy. Outside this scope, it returns the default no-op strategy. Entering the scope also enters the "cross-replica context". See tf.distribute.StrategyExtended for an explanation on cross-replica and replica contexts. Variable creation inside scope is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like MirroredStrategy, TPUStrategy and MultiWorkerMirroredStrategy create variables replicated on each replica, whereas ParameterServerStrategy creates variables on the parameter servers. This is done using a custom tf.variable_creator_scope. In some strategies, a default device scope may also be entered: in MultiWorkerMirroredStrategy, a default device scope of "/CPU:0" is entered on each worker.
Note: Entering a scope does not automatically distribute a computation, except in the case of high-level training frameworks like Keras model.fit. If you're not using model.fit, you need to use the strategy.run API to explicitly distribute that computation. See an example in the custom training loop tutorial.
What should be in scope and what should be outside? There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK).
Anything that creates variables that should be distributed variables must be in strategy.scope. This can be either by directly putting it in scope, or relying on another API like strategy.run or model.fit to enter it for you. Any variable that is created outside scope will not be distributed and may have performance implications. Common things that create variables in TF are models, optimizers, and metrics; these should always be created inside the scope. Another source of variable creation can be a checkpoint restore, when variables are created lazily. Note that any variable created inside a strategy captures the strategy information, so reading and writing to these variables outside the strategy.scope can also work seamlessly, without the user having to enter the scope.
Some strategy APIs (such as strategy.run and strategy.reduce) which require being in a strategy's scope enter the scope for you automatically, which means when using those APIs you don't need to enter the scope yourself.
When a tf.keras.Model is created inside a strategy.scope, we capture this information. When high-level training framework methods such as model.compile, model.fit, etc. are then called on this model, we automatically enter the scope, as well as use this strategy to distribute the training. See a detailed example in the distributed Keras tutorial. Note that simply calling model(..) is not impacted; only high-level training framework APIs are. model.compile, model.fit, model.evaluate, model.predict and model.save can all be called inside or outside the scope.
The following can be either inside or outside the scope: creating the input datasets; defining tf.functions that represent your training step; saving APIs such as tf.saved_model.save (loading creates variables, so that should go inside the scope if you want to train the model in a distributed way); checkpoint saving (as mentioned above, checkpoint.restore may sometimes need to be inside scope if it creates variables).
Returns A context manager. | tensorflow.distribute.strategy |
tf.distribute.StrategyExtended View source on GitHub Additional APIs for algorithms that need to be distribution-aware.
tf.distribute.StrategyExtended(
container_strategy
)
Note: For most usage of tf.distribute.Strategy, there should be no need to call these methods, since TensorFlow libraries (such as optimizers) already call these methods when needed on your behalf.
Some common use cases of functions on this page: Locality
tf.distribute.DistributedValues can have the same locality as a distributed variable, which leads to a mirrored value residing on the same devices as the variable (as opposed to the compute devices). Such values may be passed to a call to tf.distribute.StrategyExtended.update to update the value of a variable. You may use tf.distribute.StrategyExtended.colocate_vars_with to give a variable the same locality as another variable. You may convert a "PerReplica" value to a variable's locality by using tf.distribute.StrategyExtended.reduce_to or tf.distribute.StrategyExtended.batch_reduce_to.
How to update a distributed variable
A distributed variable is a variable created on multiple devices. As discussed in the glossary, mirrored variables and SyncOnRead variables are two examples. The standard pattern for updating distributed variables is to:
1. In your function passed to tf.distribute.Strategy.run, compute a list of (update, variable) pairs. For example, the update might be a gradient of the loss with respect to the variable.
2. Switch to cross-replica mode by calling tf.distribute.get_replica_context().merge_call() with the updates and variables as arguments.
3. Call tf.distribute.StrategyExtended.reduce_to(VariableAggregation.SUM, t, v) (for one variable) or tf.distribute.StrategyExtended.batch_reduce_to (for a list of variables) to sum the updates.
4. Call tf.distribute.StrategyExtended.update(v) for each variable to update its value.
Steps 2 through 4 are done automatically by class tf.keras.optimizers.Optimizer if you call its tf.keras.optimizers.Optimizer.apply_gradients method in a replica context. In fact, a higher-level solution to update a distributed variable is by calling assign on the variable as you would do to a regular tf.Variable. You can call the method in both replica context and cross-replica context, as shown in the sketch below. For a mirrored variable, calling assign in replica context requires you to specify the aggregation type in the variable constructor. In that case, the context switching and sync described in steps 2 through 4 are handled for you. If you call assign on a mirrored variable in cross-replica context, you can only assign a single value or assign values from another mirrored variable or a mirrored tf.distribute.DistributedValues. For a SyncOnRead variable, in replica context, you can simply call assign on it and no aggregation happens under the hood. In cross-replica context, you can only assign a single value to a SyncOnRead variable. One example case is restoring from a checkpoint: if the aggregation type of the variable is tf.VariableAggregation.SUM, it is assumed that replica values were added before checkpointing, so at the time of restoring, the value is divided by the number of replicas and then assigned to each replica; if the aggregation type is tf.VariableAggregation.MEAN, the value is assigned to each replica directly.
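A minimal sketch of the higher-level assign pattern described above (two replicas assumed; the SUM aggregation here is an illustrative choice):
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
with strategy.scope():
  v = tf.Variable(0., aggregation=tf.VariableAggregation.SUM)
@tf.function
def step_fn():
  # Called in replica context; each replica contributes 1. and the SUM
  # aggregation combines the updates across replicas, so v becomes 2.0.
  v.assign_add(1.)
strategy.run(step_fn)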
Attributes
experimental_require_static_shapes Returns True if static shape is required; False otherwise.
parameter_devices Returns the tuple of all devices used to place variables.
worker_devices Returns the tuple of all devices used for compute replica execution. Methods batch_reduce_to View source
batch_reduce_to(
reduce_op, value_destination_pairs, options=None
)
Combine multiple reduce_to calls into one for faster execution. Similar to reduce_to, but accepts a list of (value, destinations) pairs. It's more efficient than reducing each value separately. This API currently can only be called in cross-replica context. Other variants to reduce values across replicas are:
tf.distribute.StrategyExtended.reduce_to: the non-batch version of this API.
tf.distribute.ReplicaContext.all_reduce: the counterpart of this API in replica context. It supports both batched and non-batched all-reduce.
tf.distribute.Strategy.reduce: a more convenient method to reduce to the host in cross-replica context. See reduce_to for more information.
@tf.function
def step_fn(var):
def merge_fn(strategy, value, var):
# All-reduce the value. Note that `value` here is a
# `tf.distribute.DistributedValues`.
reduced = strategy.extended.batch_reduce_to(
tf.distribute.ReduceOp.SUM, [(value, var)])[0]
strategy.extended.update(var, lambda var, value: var.assign(value),
args=(reduced,))
value = tf.identity(1.)
tf.distribute.get_replica_context().merge_call(merge_fn,
args=(value, var))
def run(strategy):
with strategy.scope():
v = tf.Variable(0.)
strategy.run(step_fn, args=(v,))
return v
run(tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]))
MirroredVariable:{
0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=2.0>,
1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=2.0>
}
run(tf.distribute.experimental.CentralStorageStrategy(
compute_devices=["GPU:0", "GPU:1"], parameter_device="CPU:0"))
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=2.0>
run(tf.distribute.OneDeviceStrategy("GPU:0"))
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
Args
reduce_op a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN".
value_destination_pairs a sequence of (value, destinations) pairs. See tf.distribute.StrategyExtended.reduce_to for descriptions.
options a tf.distribute.experimental.CommunicationOptions. Options to perform collective operations. This overrides the default options if the tf.distribute.Strategy takes one in the constructor. See tf.distribute.experimental.CommunicationOptions for details of the options.
Returns A list of reduced values, one per pair in value_destination_pairs.
colocate_vars_with View source
colocate_vars_with(
colocate_with_variable
)
Scope that controls which devices variables will be created on. No operations should be added to the graph inside this scope; it should only be used when creating variables (some implementations work by changing variable creation, others work by using a tf.compat.v1.colocate_with() scope). This may only be used inside self.scope(). Example usage: with strategy.scope():
var1 = tf.Variable(...)
with strategy.extended.colocate_vars_with(var1):
# var2 and var3 will be created on the same device(s) as var1
var2 = tf.Variable(...)
var3 = tf.Variable(...)
def fn(v1, v2, v3):
# operates on v1 from var1, v2 from var2, and v3 from var3
# `fn` runs on every device `var1` is on, `var2` and `var3` will be there
# too.
strategy.extended.update(var1, fn, args=(var2, var3))
Args
colocate_with_variable A variable created in this strategy's scope(). Variables created while in the returned context manager will be on the same set of devices as colocate_with_variable.
Returns A context manager.
reduce_to View source
reduce_to(
reduce_op, value, destinations, options=None
)
Combine (via e.g. sum or mean) values across replicas. reduce_to aggregates tf.distribute.DistributedValues and distributed variables. It supports both dense values and tf.IndexedSlices. This API currently can only be called in cross-replica context. Other variants to reduce values across replicas are:
tf.distribute.StrategyExtended.batch_reduce_to: the batch version of this API.
tf.distribute.ReplicaContext.all_reduce: the counterpart of this API in replica context. It supports both batched and non-batched all-reduce.
tf.distribute.Strategy.reduce: a more convenient method to reduce to the host in cross-replica context. destinations specifies where to reduce the value to, e.g. "GPU:0". You can also pass in a Tensor, and the destinations will be the device of that tensor. For all-reduce, pass the same to value and destinations. It can be used in tf.distribute.ReplicaContext.merge_call to write code that works for all tf.distribute.Strategy.
@tf.function
def step_fn(var):
def merge_fn(strategy, value, var):
# All-reduce the value. Note that `value` here is a
# `tf.distribute.DistributedValues`.
reduced = strategy.extended.reduce_to(tf.distribute.ReduceOp.SUM,
value, destinations=var)
strategy.extended.update(var, lambda var, value: var.assign(value),
args=(reduced,))
value = tf.identity(1.)
tf.distribute.get_replica_context().merge_call(merge_fn,
args=(value, var))
def run(strategy):
with strategy.scope():
v = tf.Variable(0.)
strategy.run(step_fn, args=(v,))
return v
run(tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]))
MirroredVariable:{
0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=2.0>,
1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=2.0>
}
run(tf.distribute.experimental.CentralStorageStrategy(
compute_devices=["GPU:0", "GPU:1"], parameter_device="CPU:0"))
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=2.0>
run(tf.distribute.OneDeviceStrategy("GPU:0"))
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
Args
reduce_op a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN".
value a tf.distribute.DistributedValues, or a tf.Tensor like object.
destinations a tf.distribute.DistributedValues, a tf.Variable, a tf.Tensor alike object, or a device string. It specifies the devices to reduce to. To perform an all-reduce, pass the same to value and destinations. Note that if it's a tf.Variable, the value is reduced to the devices of that variable, and this method doesn't update the variable.
options a tf.distribute.experimental.CommunicationOptions. Options to perform collective operations. This overrides the default options if the tf.distribute.Strategy takes one in the constructor. See tf.distribute.experimental.CommunicationOptions for details of the options.
Returns A tensor or value reduced to destinations.
update View source
update(
var, fn, args=(), kwargs=None, group=True
)
Run fn to update var using inputs mirrored to the same devices. tf.distribute.StrategyExtended.update takes a distributed variable var to be updated, an update function fn, and args and kwargs for fn. It applies fn to each component variable of var and passes corresponding values from args and kwargs. Neither args nor kwargs may contain per-replica values. If they contain mirrored values, they will be unwrapped before calling fn. For example, fn can be assign_add and args can be a mirrored DistributedValues where each component contains the value to be added to this mirrored variable var. Calling update will call assign_add on each component variable of var with the corresponding tensor value on that device. Example usage: strategy = tf.distribute.MirroredStrategy(['GPU:0', 'GPU:1']) # With 2 devices
with strategy.scope():
v = tf.Variable(5.0, aggregation=tf.VariableAggregation.SUM)
def update_fn(v):
return v.assign(1.0)
result = strategy.extended.update(v, update_fn)
# result is
# Mirrored:{
# 0: tf.Tensor(1.0, shape=(), dtype=float32),
# 1: tf.Tensor(1.0, shape=(), dtype=float32)
# }
If var is mirrored across multiple devices, then this method implements the following logic: results = {}
for device, v in var:
with tf.device(device):
# args and kwargs will be unwrapped if they are mirrored.
results[device] = fn(v, *args, **kwargs)
return merged(results)
Otherwise, this method returns fn(var, *args, **kwargs) colocated with var.
Args
var Variable, possibly mirrored to multiple devices, to operate on.
fn Function to call. Should take the variable as the first argument.
args Tuple or list. Additional positional arguments to pass to fn().
kwargs Dict with keyword arguments to pass to fn().
group Boolean. Defaults to True. If False, the return value will be unwrapped.
Returns By default, the merged return value of fn across all replicas. The merged result has dependencies to make sure that if it is evaluated at all, the side effects (updates) will happen on every replica. If instead "group=False" is specified, this function will return a nest of lists where each list has an element per replica, and the caller is responsible for ensuring all elements are executed.
value_container View source
value_container(
value
)
Returns the container that this per-replica value belongs to.
Args
value A value returned by run() or a variable created in scope().
Returns A container that value belongs to. If value does not belong to any container (including the case of the container having been destroyed), returns the value itself. value in experimental_local_results(value_container(value)) will always be true.
variable_created_in_scope View source
variable_created_in_scope(
v
)
Tests whether v was created while this strategy scope was active. Variables created inside the strategy scope are "owned" by it:
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
v = tf.Variable(1.)
strategy.extended.variable_created_in_scope(v)
True
Variables created outside the strategy are not owned by it:
strategy = tf.distribute.MirroredStrategy()
v = tf.Variable(1.)
strategy.extended.variable_created_in_scope(v)
False
Args
v A tf.Variable instance.
Returns True if v was created inside the scope, False if not. | tensorflow.distribute.strategyextended |
tf.distribute.TPUStrategy Synchronous training on TPUs and TPU Pods. Inherits From: Strategy
tf.distribute.TPUStrategy(
tpu_cluster_resolver=None, experimental_device_assignment=None
)
To construct a TPUStrategy object, you need to run the initialization code as below:
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)
While using distribution strategies, the variables created within the strategy's scope will be replicated across all the replicas and can be kept in sync using all-reduce algorithms. To run TF2 programs on TPUs, you can either use .compile and .fit APIs in tf.keras with TPUStrategy, or write your own customized training loop by calling strategy.run directly. Note that TPUStrategy doesn't support pure eager execution, so please make sure the function passed into strategy.run is a tf.function or strategy.run is called inside a tf.function if eager behavior is enabled. See more details in https://www.tensorflow.org/guide/tpu. distribute_datasets_from_function and experimental_distribute_dataset APIs can be used to distribute the dataset across the TPU workers when writing your own training loop. If you are using fit and compile methods available in tf.keras.Model, then Keras will handle the distribution for you. An example of writing customized training loop on TPUs:
with strategy.scope():
model = tf.keras.Sequential([
tf.keras.layers.Dense(2, input_shape=(5,)),
])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
def dataset_fn(ctx):
x = np.random.random((2, 5)).astype(np.float32)
y = np.random.randint(2, size=(2, 1))
dataset = tf.data.Dataset.from_tensor_slices((x, y))
return dataset.repeat().batch(1, drop_remainder=True)
dist_dataset = strategy.distribute_datasets_from_function(
dataset_fn)
iterator = iter(dist_dataset)
@tf.function()
def train_step(iterator):
def step_fn(inputs):
features, labels = inputs
with tf.GradientTape() as tape:
logits = model(features, training=True)
loss = tf.keras.losses.sparse_categorical_crossentropy(
labels, logits)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
strategy.run(step_fn, args=(next(iterator),))
train_step(iterator)
For advanced use cases like model parallelism, you can set the experimental_device_assignment argument when creating TPUStrategy to specify the number of replicas and the number of logical devices. Below is an example of initializing the TPU system with 2 logical devices and 1 replica.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
topology = tf.tpu.experimental.initialize_tpu_system(resolver)
device_assignment = tf.tpu.experimental.DeviceAssignment.build(
topology,
computation_shape=[1, 1, 1, 2],
num_replicas=1)
strategy = tf.distribute.TPUStrategy(
resolver, experimental_device_assignment=device_assignment)
Then you can run a tf.add operation only on logical device 0.
@tf.function()
def step_fn(inputs):
features, _ = inputs
output = tf.add(features, features)
# Add operation will be executed on logical device 0.
output = strategy.experimental_assign_to_logical_device(output, 0)
return output
dist_dataset = strategy.distribute_datasets_from_function(
dataset_fn)
iterator = iter(dist_dataset)
strategy.run(step_fn, args=(next(iterator),))
Args
tpu_cluster_resolver A tf.distribute.cluster_resolver.TPUClusterResolver, which provides information about the TPU cluster. If None, it will assume running on a local TPU worker.
experimental_device_assignment Optional tf.tpu.experimental.DeviceAssignment to specify the placement of replicas on the TPU cluster.
Attributes
cluster_resolver Returns the cluster resolver associated with this strategy. In general, when using a multi-worker tf.distribute strategy such as tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy(), there is a tf.distribute.cluster_resolver.ClusterResolver associated with the strategy used, and such an instance is returned by this property. Strategies that intend to have an associated tf.distribute.cluster_resolver.ClusterResolver must set the relevant attribute, or override this property; otherwise, None is returned by default. Those strategies should also provide information regarding what is returned by this property. Single-worker strategies usually do not have a tf.distribute.cluster_resolver.ClusterResolver, and in those cases this property will return None. The tf.distribute.cluster_resolver.ClusterResolver may be useful when the user needs to access information such as the cluster spec, task type or task id. For example,
os.environ['TF_CONFIG'] = json.dumps({
'cluster': {
'worker': ["localhost:12345", "localhost:23456"],
'ps': ["localhost:34567"]
},
'task': {'type': 'worker', 'index': 0}
})
# This implicitly uses TF_CONFIG for the cluster and current task info.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
...
if strategy.cluster_resolver.task_type == 'worker':
  # Perform something that's only applicable on workers. Since we set this
  # as a worker above, this block will run on this particular instance.
  ...
elif strategy.cluster_resolver.task_type == 'ps':
  # Perform something that's only applicable on parameter servers. Since we
  # set this as a worker above, this block will not run on this particular
  # instance.
  ...
For more information, please see tf.distribute.cluster_resolver.ClusterResolver's API docstring.
extended tf.distribute.StrategyExtended with additional methods.
num_replicas_in_sync Returns number of replicas over which gradients are aggregated. Methods distribute_datasets_from_function View source
distribute_datasets_from_function(
dataset_fn, options=None
)
Distributes tf.data.Dataset instances created by calls to dataset_fn. The argument dataset_fn that users pass in is an input function that has a tf.distribute.InputContext argument and returns a tf.data.Dataset instance. It is expected that the returned dataset from dataset_fn is already batched by the per-replica batch size (i.e. the global batch size divided by the number of replicas in sync) and sharded. tf.distribute.Strategy.distribute_datasets_from_function does not batch or shard the tf.data.Dataset instance returned from the input function. dataset_fn will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the Dataset every step). This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, tf.distribute.Strategy.experimental_distribute_dataset does batching and sharding for you.) For example, where experimental_distribute_dataset is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in experimental_distribute_dataset). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. The dataset_fn should take a tf.distribute.InputContext instance where information about batching and input replication can be accessed. You can use the element_spec property of the tf.distribute.DistributedDataset returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function. Follow tf.distribute.DistributedDataset.element_spec to see an example. Key Point: The tf.data.Dataset returned by dataset_fn should have a per-replica batch size, unlike experimental_distribute_dataset, which uses the global batch size. This may be computed using input_context.get_per_replica_batch_size.
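For example, the following is a minimal sketch (the global_batch_size value and the toy range dataset are assumptions) of a dataset_fn that batches by the per-replica batch size and shards manually using the tf.distribute.InputContext it receives:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
global_batch_size = 4
def dataset_fn(input_context):
  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
  dataset = tf.data.Dataset.range(16)
  # Shard by input pipeline so each worker reads a disjoint subset.
  dataset = dataset.shard(input_context.num_input_pipelines,
                          input_context.input_pipeline_id)
  return dataset.batch(batch_size)
dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)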
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset_fn A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_assign_to_logical_device View source
experimental_assign_to_logical_device(
tensor, logical_device_id
)
Adds annotation that tensor will be assigned to a logical device. This adds an annotation to tensor specifying that operations on tensor will be invoked on logical core device id logical_device_id. When model parallelism is used, the default behavior is that all ops are placed on the zero-th logical device.
# Initializing TPU system with 2 logical devices and 4 replicas.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
topology = tf.tpu.experimental.initialize_tpu_system(resolver)
device_assignment = tf.tpu.experimental.DeviceAssignment.build(
topology,
computation_shape=[1, 1, 1, 2],
num_replicas=4)
strategy = tf.distribute.TPUStrategy(
resolver, experimental_device_assignment=device_assignment)
iterator = iter(inputs)
@tf.function()
def step_fn(inputs):
output = tf.add(inputs, inputs)
# Add operation will be executed on logical device 0.
output = strategy.experimental_assign_to_logical_device(output, 0)
return output
strategy.run(step_fn, args=(next(iterator),))
Args
tensor Input tensor to annotate.
logical_device_id Id of the logical core to which the tensor will be assigned.
Raises
ValueError The logical device id presented is not consistent with total number of partitions specified by the device assignment.
Returns Annotated tensor with identical value as tensor.
experimental_distribute_dataset View source
experimental_distribute_dataset(
dataset, options=None
)
Creates tf.distribute.DistributedDataset from tf.data.Dataset. The returned tf.distribute.DistributedDataset can be iterated over similarly to regular datasets. NOTE: The user cannot add any more transformations to a tf.distribute.DistributedDataset. You can only create an iterator or examine the tf.TypeSpec of the data generated by it. See API docs of tf.distribute.DistributedDataset to learn more. The following is an example:
global_batch_size = 2
# Passing the devices is optional.
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
# Create a dataset
dataset = tf.data.Dataset.range(4).batch(global_batch_size)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)
@tf.function
def replica_fn(input):
return input*2
result = []
# Iterate over the `tf.distribute.DistributedDataset`
for x in dist_dataset:
# process dataset elements
result.append(strategy.run(replica_fn, args=(x,)))
print(result)
[PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])>
}, PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])>
}]
Three key actions happening under the hood of this method are batching, sharding, and prefetching. In the code snippet above, dataset is batched by global_batch_size, and calling experimental_distribute_dataset on it rebatches dataset to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. x is a tf.distribute.DistributedValues containing data for all replicas, and each replica gets data of the new batch size. tf.distribute.Strategy.run will take care of feeding the right per-replica data in x to the right replica_fn executed on each replica. Sharding covers autosharding across multiple workers and sharding within every worker. First, in multi-worker distributed training (i.e. when you use tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right tf.data.experimental.AutoShardPolicy is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using tf.data.experimental.DistributeOptions. Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding.
Note: for autosharding across multiple workers, the default mode is tf.data.experimental.AutoShardPolicy.AUTO. This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. tf.data.TFRecordDataset, tf.data.TextLineDataset, etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. However, if you have less than one input file per worker, we suggest that you disable dataset autosharding across workers by setting the tf.data.experimental.DistributeOptions.auto_shard_policy to be tf.data.experimental.AutoShardPolicy.OFF.
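As a minimal sketch of disabling autosharding (assuming dataset and strategy already exist):
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = (
    tf.data.experimental.AutoShardPolicy.OFF)
dataset = dataset.with_options(options)
dist_dataset = strategy.experimental_distribute_dataset(dataset)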
By default, this method adds a prefetch transformation at the end of the user provided tf.data.Dataset instance. The argument to the prefetch transformation which is buffer_size is equal to the number of replicas in sync. If the above batch splitting and dataset sharding logic is undesirable, please use tf.distribute.Strategy.distribute_datasets_from_function instead, which does not do any automatic batching or sharding for you.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset tf.data.Dataset that will be sharded across all replicas using the rules stated above.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_distribute_values_from_function View source
experimental_distribute_values_from_function(
value_fn
)
Generates tf.distribute.DistributedValues from value_fn. Use this function to generate tf.distribute.DistributedValues to pass into run, reduce, or other methods that take distributed values when not using datasets.
Args
value_fn The function to run to generate values. It is called for each replica with tf.distribute.ValueContext as the sole argument. It must return a Tensor or a type that can be converted to a Tensor.
Returns A tf.distribute.DistributedValues containing a value for each replica.
Example usage: Return constant value per replica:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
return tf.constant(1.)
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(<tf.Tensor: shape=(), dtype=float32, numpy=1.0>,
<tf.Tensor: shape=(), dtype=float32, numpy=1.0>)
Distribute values in array based on replica_id:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
array_value = np.array([3., 2., 1.])
def value_fn(ctx):
return array_value[ctx.replica_id_in_sync_group]
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(3.0, 2.0)
Specify values using num_replicas_in_sync:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
return ctx.num_replicas_in_sync
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(2, 2)
Place values on devices and distribute:
strategy = tf.distribute.TPUStrategy()
worker_devices = strategy.extended.worker_devices
multiple_values = []
for i in range(strategy.num_replicas_in_sync):
with tf.device(worker_devices[i]):
multiple_values.append(tf.constant(1.0))
def value_fn(ctx):
return multiple_values[ctx.replica_id_in_sync_group]
distributed_values = (
    strategy.experimental_distribute_values_from_function(
        value_fn))
experimental_local_results View source
experimental_local_results(
value
)
Returns the list of all local per-replica values contained in value.
Note: This only returns values on the worker initiated by this client. When using a tf.distribute.Strategy like tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker will be its own client, and this function will only return values computed on that worker.
Args
value A value returned by experimental_run(), run(), extended.call_for_each_replica(), or a variable created in scope.
Returns A tuple of values contained in value. If value represents a single value, this returns (value,).
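For example, a minimal sketch assuming a MirroredStrategy with two GPUs:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
@tf.function
def replica_fn():
  # Each replica returns its own id: 0 on GPU:0, 1 on GPU:1.
  return tf.distribute.get_replica_context().replica_id_in_sync_group
per_replica = strategy.run(replica_fn)
strategy.experimental_local_results(per_replica)  # a tuple of two tensors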
experimental_replicate_to_logical_devices View source
experimental_replicate_to_logical_devices(
tensor
)
Adds annotation that tensor will be replicated to all logical devices. This adds an annotation to tensor specifying that operations on tensor will be invoked on all logical devices. # Initializing TPU system with 2 logical devices and 4 replicas.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
topology = tf.tpu.experimental.initialize_tpu_system(resolver)
device_assignment = tf.tpu.experimental.DeviceAssignment.build(
topology,
computation_shape=[1, 1, 1, 2],
num_replicas=4)
strategy = tf.distribute.TPUStrategy(
resolver, experimental_device_assignment=device_assignment)
iterator = iter(inputs)
@tf.function()
def step_fn(inputs):
images, labels = inputs
images = strategy.experimental_split_to_logical_devices(
    images, [1, 2, 4, 1])
# model() function will be executed on 8 logical devices with `images`
# split 2 * 4 ways.
output = model(images)
# For loss calculation, all logical devices share the same logits
# and labels.
labels = strategy.experimental_replicate_to_logical_devices(labels)
output = strategy.experimental_replicate_to_logical_devices(output)
loss = loss_fn(labels, output)
return loss
strategy.run(step_fn, args=(next(iterator),))
Args
tensor Input tensor to annotate.
Returns Annotated tensor with identical value as tensor.
experimental_split_to_logical_devices View source
experimental_split_to_logical_devices(
tensor, partition_dimensions
)
Adds annotation that tensor will be split across logical devices. This adds an annotation to tensor specifying that operations on tensor will be split among multiple logical devices. Tensor tensor will be split across dimensions specified by partition_dimensions. The dimensions of tensor must be divisible by the corresponding values in partition_dimensions. For example, for a system with 8 logical devices, if tensor is an image tensor with shape (batch_size, width, height, channel) and partition_dimensions is [1, 2, 4, 1], then tensor will be split 2 ways in the width dimension and 4 ways in the height dimension and the split tensor values will be fed into 8 logical devices. # Initializing TPU system with 8 logical devices and 1 replica.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
topology = tf.tpu.experimental.initialize_tpu_system(resolver)
device_assignment = tf.tpu.experimental.DeviceAssignment.build(
topology,
computation_shape=[1, 2, 2, 2],
num_replicas=1)
strategy = tf.distribute.TPUStrategy(
resolver, experimental_device_assignment=device_assignment)
iterator = iter(inputs)
@tf.function()
def step_fn(inputs):
inputs = strategy.experimental_split_to_logical_devices(
inputs, [1, 2, 4, 1])
# model() function will be executed on 8 logical devices with `inputs`
# split 2 * 4 ways.
output = model(inputs)
return output
strategy.run(step_fn, args=(next(iterator),))
Args
tensor Input tensor to annotate.
partition_dimensions An unnested list of integers with size equal to the rank of tensor, specifying how tensor will be partitioned. The product of all elements in partition_dimensions must equal the total number of logical devices per replica.
Raises
ValueError 1) If the size of partition_dimensions does not equal the rank of tensor, or 2) if the product of the elements of partition_dimensions does not match the number of logical devices per replica defined by the implementing DistributionStrategy's device specification, or 3) if a known size of tensor is not divisible by the corresponding value in partition_dimensions.
Returns Annotated tensor with identical value as tensor.
gather View source
gather(
value, axis
)
Gather value across replicas along axis to the current device. Given a tf.distribute.DistributedValues or tf.Tensor-like object value, this API gathers and concatenates value across replicas along the axis-th dimension. The result is copied to the "current" device, which would typically be the CPU of the worker on which the program is running. For tf.distribute.TPUStrategy, it is the first TPU host. For multi-client MultiWorkerMirroredStrategy, this is the CPU of each worker. This API can only be called in the cross-replica context. For a counterpart in the replica context, see tf.distribute.ReplicaContext.all_gather.
Note: For all strategies except tf.distribute.TPUStrategy, the input value on different replicas must have the same rank, and their shapes must be the same in all dimensions except the axis-th dimension. In other words, their shapes cannot be different in a dimension d where d does not equal the axis argument. For example, given a tf.distribute.DistributedValues with component tensors of shape (1, 2, 3) and (1, 3, 3) on two replicas, you can call gather(..., axis=1, ...) on it, but not gather(..., axis=0, ...) or gather(..., axis=2, ...). However, for tf.distribute.TPUStrategy.gather, all tensors must have exactly the same rank and same shape.
Note: Given a tf.distribute.DistributedValues value, its component tensors must have a non-zero rank. Otherwise, consider using tf.expand_dims before gathering them.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# A DistributedValues with component tensor of shape (2, 1) on each replica
distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(tf.constant([[1], [2]])))
@tf.function
def run():
return strategy.gather(distributed_values, axis=0)
run()
<tf.Tensor: shape=(4, 1), dtype=int32, numpy=
array([[1],
[2],
[1],
[2]], dtype=int32)>
Consider the following example for more combinations:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1", "GPU:2", "GPU:3"])
single_tensor = tf.reshape(tf.range(6), shape=(1,2,3))
distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(single_tensor))
@tf.function
def run(axis):
return strategy.gather(distributed_values, axis=axis)
axis=0
run(axis)
<tf.Tensor: shape=(4, 2, 3), dtype=int32, numpy=
array([[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]]], dtype=int32)>
axis=1
run(axis)
<tf.Tensor: shape=(1, 8, 3), dtype=int32, numpy=
array([[[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5]]], dtype=int32)>
axis=2
run(axis)
<tf.Tensor: shape=(1, 2, 12), dtype=int32, numpy=
array([[[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2],
[3, 4, 5, 3, 4, 5, 3, 4, 5, 3, 4, 5]]], dtype=int32)>
Args
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with tf.distribute.OneDeviceStrategy or the default strategy. The tensors that constitute the DistributedValues can only be dense tensors with non-zero rank, NOT a tf.IndexedSlices.
axis 0-D int32 Tensor. Dimension along which to gather. Must be in the range [0, rank(value)).
Returns A Tensor that's the concatenation of value across replicas along axis dimension.
reduce View source
reduce(
reduce_op, value, axis
)
Reduce value across replicas and return result on current device.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
total = strategy.reduce("SUM", per_replica_result, axis=None)
total
<tf.Tensor: shape=(), dtype=int32, numpy=1>
To see which devices the per-replica results and the reduced result are placed on, consider the same example with MirroredStrategy with 2 GPUs: strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
# Check devices on which per replica result is:
strategy.experimental_local_results(per_replica_result)[0].device
# /job:localhost/replica:0/task:0/device:GPU:0
strategy.experimental_local_results(per_replica_result)[1].device
# /job:localhost/replica:0/task:0/device:GPU:1
total = strategy.reduce("SUM", per_replica_result, axis=None)
# Check device on which reduced result is:
total.device
# /job:localhost/replica:0/task:0/device:CPU:0
This API is typically used for aggregating the results returned from different replicas, for reporting etc. For example, loss computed from different replicas can be averaged using this API before printing.
Note: The result is copied to the "current" device - which would typically be the CPU of the worker on which the program is running. For TPUStrategy, it is the first TPU host. For multi client MultiWorkerMirroredStrategy, this is CPU of each worker.
There are a number of different tf.distribute APIs for reducing values across replicas:
tf.distribute.ReplicaContext.all_reduce: This differs from Strategy.reduce in that it is for replica context and does not copy the results to the host device. all_reduce should be typically used for reductions inside the training step such as gradients.
tf.distribute.StrategyExtended.reduce_to and tf.distribute.StrategyExtended.batch_reduce_to: These APIs are more advanced versions of Strategy.reduce as they allow customizing the destination of the result. They are also called in cross replica context. What should axis be? Given a per-replica value returned by run, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly. For example, if you have a global batch size of 8 and 2 replicas, values for examples [0, 1, 2, 3] will be on replica 0 and [4, 5, 6, 7] will be on replica 1. With axis=None, reduce will aggregate only across replicas, returning [0+4, 1+5, 2+6, 3+7]. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss). strategy.reduce("sum", per_replica_result, axis=None)
Sometimes, you will want to aggregate across both the global batch and all replicas. You can get this behavior by specifying the batch dimension as the axis, typically axis=0. In this case it would return a scalar 0+1+2+3+4+5+6+7. strategy.reduce("sum", per_replica_result, axis=0)
If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify axis=0. If you specify tf.distribute.ReduceOp.MEAN, using axis=0 will use the correct denominator of 6. Contrast this with using reduce_mean to compute a scalar value on each replica and then using this function to average those means, which would weigh some values 1/8 and others 1/4.
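A minimal sketch contrasting axis=None and axis=0, assuming two replicas and a global batch of eight elements:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.range(8).batch(8)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
per_replica = next(iter(dist_dataset))  # [0, 1, 2, 3] and [4, 5, 6, 7]
strategy.reduce("SUM", per_replica, axis=None)  # [4, 6, 8, 10]
strategy.reduce("SUM", per_replica, axis=0)  # 28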
Args
reduce_op a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN".
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with OneDeviceStrategy or default strategy.
axis specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension).
Returns A Tensor.
run View source
run(
fn, args=(), kwargs=None, options=None
)
Run the computation defined by fn on each TPU replica. Executes ops specified by fn on each replica. If args or kwargs have tf.distribute.DistributedValues, such as those produced by a tf.distribute.DistributedDataset from tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function, when fn is executed on a particular replica, it will be executed with the component of tf.distribute.DistributedValues that corresponds to that replica. fn may call tf.distribute.get_replica_context() to access members such as all_reduce. All arguments in args or kwargs should be either a nest of tensors or tf.distribute.DistributedValues containing tensors or composite tensors. Example usage:
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)
@tf.function
def run():
def value_fn(value_context):
return value_context.num_replicas_in_sync
distributed_values = (
strategy.experimental_distribute_values_from_function(value_fn))
def replica_fn(input):
return input * 2
return strategy.run(replica_fn, args=(distributed_values,))
result = run()
Args
fn The function to run. The output must be a tf.nest of Tensors.
args (Optional) Positional arguments to fn.
kwargs (Optional) Keyword arguments to fn.
options (Optional) An instance of tf.distribute.RunOptions specifying the options to run fn.
Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be tf.distribute.DistributedValues, Tensor objects, or Tensors (for example, if running on a single replica).
scope View source
scope()
Context manager to make the strategy current and distribute variables. This method returns a context manager, and is used as follows:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Variable created inside scope:
with strategy.scope():
mirrored_variable = tf.Variable(1.)
mirrored_variable
MirroredVariable:{
0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>,
1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0>
}
# Variable created outside scope:
regular_variable = tf.Variable(1.)
regular_variable
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
What happens when Strategy.scope is entered?
strategy is installed in the global context as the "current" strategy. Inside this scope, tf.distribute.get_strategy() will now return this strategy. Outside this scope, it returns the default no-op strategy. Entering the scope also enters the "cross-replica context". See tf.distribute.StrategyExtended for an explanation on cross-replica and replica contexts. Variable creation inside scope is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like MirroredStrategy, TPUStrategy and MultiWorkerMirroredStrategy create variables replicated on each replica, whereas ParameterServerStrategy creates variables on the parameter servers. This is done using a custom tf.variable_creator_scope. In some strategies, a default device scope may also be entered: in MultiWorkerMirroredStrategy, a default device scope of "/CPU:0" is entered on each worker.
Note: Entering a scope does not automatically distribute a computation, except in the case of high level training framework like keras model.fit. If you're not using model.fit, you need to use strategy.run API to explicitly distribute that computation. See an example in the custom training loop tutorial.
What should be in scope and what should be outside? There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK).
Anything that creates variables that should be distributed variables must be in strategy.scope. This can be done either by directly putting it in scope, or by relying on another API like strategy.run or model.fit to enter it for you. Any variable that is created outside scope will not be distributed and may have performance implications. Common things that create variables in TF are models, optimizers, and metrics; these should always be created inside the scope. Another source of variable creation can be a checkpoint restore, when variables are created lazily. Note that any variable created inside a strategy captures the strategy information, so reading and writing to these variables outside the strategy.scope can also work seamlessly, without the user having to enter the scope.
Some strategy APIs (such as strategy.run and strategy.reduce) which require being in a strategy's scope enter the scope for you automatically, which means when using those APIs you don't need to enter the scope yourself.
When a tf.keras.Model is created inside a strategy.scope, we capture this information. When high-level training framework methods such as model.compile, model.fit, etc. are then called on this model, we automatically enter the scope, as well as use this strategy to distribute the training. See a detailed example in the distributed keras tutorial. Note that simply calling model(..) is not impacted; only high-level training framework APIs are. model.compile, model.fit, model.evaluate, model.predict and model.save can all be called inside or outside the scope.
The following can be either inside or outside the scope:
Creating the input datasets.
Defining tf.functions that represent your training step.
Saving APIs such as tf.saved_model.save. Loading creates variables, so that should go inside the scope if you want to train the model in a distributed way.
Checkpoint saving. As mentioned above, checkpoint.restore may sometimes need to be inside scope if it creates variables.
Returns A context manager. | tensorflow.distribute.tpustrategy |
Module: tf.dtypes Public API for tf.dtypes namespace. Classes class DType: Represents the type of the elements in a Tensor. Functions as_dtype(...): Converts the given type_value to a DType. cast(...): Casts a tensor to a new type. complex(...): Converts two real numbers to a complex number. saturate_cast(...): Performs a safe saturating cast of value to dtype.
Other Members
QUANTIZED_DTYPES
bfloat16 tf.dtypes.DType
bool tf.dtypes.DType
complex128 tf.dtypes.DType
complex64 tf.dtypes.DType
double tf.dtypes.DType
float16 tf.dtypes.DType
float32 tf.dtypes.DType
float64 tf.dtypes.DType
half tf.dtypes.DType
int16 tf.dtypes.DType
int32 tf.dtypes.DType
int64 tf.dtypes.DType
int8 tf.dtypes.DType
qint16 tf.dtypes.DType
qint32 tf.dtypes.DType
qint8 tf.dtypes.DType
quint16 tf.dtypes.DType
quint8 tf.dtypes.DType
resource tf.dtypes.DType
string tf.dtypes.DType
uint16 tf.dtypes.DType
uint32 tf.dtypes.DType
uint64 tf.dtypes.DType
uint8 tf.dtypes.DType
variant tf.dtypes.DType | tensorflow.dtypes |
tf.dtypes.as_dtype View source on GitHub Converts the given type_value to a DType. View aliases Main aliases
tf.as_dtype Compat aliases for migration See Migration guide for more details. tf.compat.v1.as_dtype, tf.compat.v1.dtypes.as_dtype
tf.dtypes.as_dtype(
type_value
)
Note: DType values are interned. When passed a new DType object, as_dtype always returns the interned value.
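For example, each of the following is a sketch of an accepted input, and each returns the interned tf.float32 (numpy import assumed):
import numpy as np
tf.dtypes.as_dtype(tf.float32)  # an existing DType object
tf.dtypes.as_dtype("float32")  # a string type name
tf.dtypes.as_dtype(np.float32)  # a numpy dtype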
Args
type_value A value that can be converted to a tf.DType object. This may currently be a tf.DType object, a DataType enum, a string type name, or a numpy.dtype.
Returns A DType corresponding to type_value.
Raises
TypeError If type_value cannot be converted to a DType. | tensorflow.dtypes.as_dtype |
tf.dtypes.complex View source on GitHub Converts two real numbers to a complex number. View aliases Main aliases
tf.complex Compat aliases for migration See Migration guide for more details. tf.compat.v1.complex, tf.compat.v1.dtypes.complex
tf.dtypes.complex(
real, imag, name=None
)
Given a tensor real representing the real part of a complex number, and a tensor imag representing the imaginary part of a complex number, this operation returns complex numbers elementwise of the form \(a + bj\), where a represents the real part and b represents the imag part. The input tensors real and imag must have the same shape. For example: real = tf.constant([2.25, 3.25])
imag = tf.constant([4.75, 5.75])
tf.complex(real, imag) # [[2.25 + 4.75j], [3.25 + 5.75j]]
Args
real A Tensor. Must be one of the following types: float32, float64.
imag A Tensor. Must have the same type as real.
name A name for the operation (optional).
Returns A Tensor of type complex64 or complex128.
Raises
TypeError Real and imag must be correct types | tensorflow.dtypes.complex |
tf.dtypes.DType View source on GitHub Represents the type of the elements in a Tensor. View aliases Main aliases
tf.DType Compat aliases for migration See Migration guide for more details. tf.compat.v1.DType, tf.compat.v1.dtypes.DType
tf.dtypes.DType()
The following DType objects are defined:
tf.float16: 16-bit half-precision floating-point.
tf.float32: 32-bit single-precision floating-point.
tf.float64: 64-bit double-precision floating-point.
tf.bfloat16: 16-bit truncated floating-point.
tf.complex64: 64-bit single-precision complex.
tf.complex128: 128-bit double-precision complex.
tf.int8: 8-bit signed integer.
tf.uint8: 8-bit unsigned integer.
tf.uint16: 16-bit unsigned integer.
tf.uint32: 32-bit unsigned integer.
tf.uint64: 64-bit unsigned integer.
tf.int16: 16-bit signed integer.
tf.int32: 32-bit signed integer.
tf.int64: 64-bit signed integer.
tf.bool: Boolean.
tf.string: String.
tf.qint8: Quantized 8-bit signed integer.
tf.quint8: Quantized 8-bit unsigned integer.
tf.qint16: Quantized 16-bit signed integer.
tf.quint16: Quantized 16-bit unsigned integer.
tf.qint32: Quantized 32-bit signed integer.
tf.resource: Handle to a mutable resource.
tf.variant: Values of arbitrary types. The tf.as_dtype() function converts numpy types and string type names to a DType object.
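A minimal sketch inspecting a few of the attributes listed below:
dt = tf.as_dtype("float32")
dt.is_floating  # True
dt.as_numpy_dtype  # <class 'numpy.float32'>
dt.min, dt.max  # (-3.4028235e+38, 3.4028235e+38)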
Attributes
as_datatype_enum Returns a types_pb2.DataType enum value based on this data type.
as_numpy_dtype Returns a Python type object based on this DType.
base_dtype Returns a non-reference DType based on this DType.
is_bool Returns whether this is a boolean data type.
is_complex Returns whether this is a complex floating point type.
is_floating Returns whether this is a (non-quantized, real) floating point type.
is_integer Returns whether this is a (non-quantized) integer type.
is_numpy_compatible Returns whether this data type has a compatible NumPy data type.
is_quantized Returns whether this is a quantized data type.
is_unsigned Returns whether this type is unsigned. Non-numeric, unordered, and quantized types are not considered unsigned, and this function returns False.
limits Return intensity limits, i.e. (min, max) tuple, of the dtype. Args: clip_negative (bool, optional): If True, clip the negative range (i.e. return 0 for min intensity) even if the image dtype allows negative values. Returns: a (min, max) tuple of lower and upper intensity limits.
max Returns the maximum representable value in this data type.
min Returns the minimum representable value in this data type.
name
real_dtype Returns the DType corresponding to this DType's real part.
size
Methods is_compatible_with View source
is_compatible_with(
other
)
Returns True if the other DType will be converted to this DType. The conversion rules are as follows: DType(T).is_compatible_with(DType(T)) == True
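For example:
tf.float32.is_compatible_with(tf.float32)  # True
tf.float32.is_compatible_with(tf.float64)  # False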
Args
other A DType (or object that may be converted to a DType).
Returns True if a Tensor of the other DType will be implicitly converted to this DType.
__eq__ View source
__eq__(
other
)
Returns True iff this DType refers to the same type as other. __ne__ View source
__ne__(
other
)
Returns True iff self != other. | tensorflow.dtypes.dtype |
tf.dtypes.saturate_cast View source on GitHub Performs a safe saturating cast of value to dtype. View aliases Main aliases
tf.saturate_cast Compat aliases for migration See Migration guide for more details. tf.compat.v1.dtypes.saturate_cast, tf.compat.v1.saturate_cast
tf.dtypes.saturate_cast(
value, dtype, name=None
)
This function casts the input to dtype without applying any scaling. If there is a danger that values would over or underflow in the cast, this op applies the appropriate clamping before the cast.
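For example, casting floats to uint8 clamps values to the representable range [0, 255] before the cast:
x = tf.constant([-10.0, 100.0, 500.0])
tf.dtypes.saturate_cast(x, tf.uint8)  # => [0, 100, 255]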
Args
value A Tensor.
dtype The desired output DType.
name A name for the operation (optional).
Returns value safely cast to dtype. | tensorflow.dtypes.saturate_cast |
tf.dynamic_partition Partitions data into num_partitions tensors using indices from partitions. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.dynamic_partition
tf.dynamic_partition(
data, partitions, num_partitions, name=None
)
For each index tuple js of size partitions.ndim, the slice data[js, ...] becomes part of outputs[partitions[js]]. The slices with partitions[js] = i are placed in outputs[i] in lexicographic order of js, and the first dimension of outputs[i] is the number of entries in partitions equal to i. In detail, outputs[i].shape = [sum(partitions == i)] + data.shape[partitions.ndim:]
outputs[i] = pack([data[js, ...] for js if partitions[js] == i])
data.shape must start with partitions.shape. For example: # Scalar partitions.
partitions = 1
num_partitions = 2
data = [10, 20]
outputs[0] = [] # Empty with shape [0, 2]
outputs[1] = [[10, 20]]
# Vector partitions.
partitions = [0, 0, 1, 1, 0]
num_partitions = 2
data = [10, 20, 30, 40, 50]
outputs[0] = [10, 20, 50]
outputs[1] = [30, 40]
See dynamic_stitch for an example of how to merge partitions back.
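A runnable sketch of the vector-partitions case above:
data = tf.constant([10, 20, 30, 40, 50])
partitions = tf.constant([0, 0, 1, 1, 0])
outputs = tf.dynamic_partition(data, partitions, num_partitions=2)
# outputs[0] => [10, 20, 50]; outputs[1] => [30, 40]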
Args
data A Tensor.
partitions A Tensor of type int32. Any shape. Indices in the range [0, num_partitions).
num_partitions An int that is >= 1. The number of partitions to output.
name A name for the operation (optional).
Returns A list of num_partitions Tensor objects with the same type as data. | tensorflow.dynamic_partition |
tf.dynamic_stitch Interleave the values from the data tensors into a single tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.dynamic_stitch
tf.dynamic_stitch(
indices, data, name=None
)
Builds a merged tensor such that merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...]
For example, if each indices[m] is scalar or vector, we have # Scalar indices:
merged[indices[m], ...] = data[m][...]
# Vector indices:
merged[indices[m][i], ...] = data[m][i, ...]
Each data[i].shape must start with the corresponding indices[i].shape, and the rest of data[i].shape must be constant w.r.t. i. That is, we must have data[i].shape = indices[i].shape + constant. In terms of this constant, the output shape is merged.shape = [max(indices)] + constant
Values are merged in order, so if an index appears in both indices[m][i] and indices[n][j] for (m,i) < (n,j) the slice data[n][j] will appear in the merged result. If you do not need this guarantee, ParallelDynamicStitch might perform better on some devices. For example: indices[0] = 6
indices[1] = [4, 1]
indices[2] = [[5, 2], [0, 3]]
data[0] = [61, 62]
data[1] = [[41, 42], [11, 12]]
data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]
merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42],
[51, 52], [61, 62]]
This method can be used to merge partitions created by dynamic_partition as illustrated in the following example: # Apply function (increments x_i) on elements for which a certain condition
# applies (x_i != -1 in this example).
x=tf.constant([0.1, -1., 5.2, 4.3, -1., 7.4])
condition_mask=tf.not_equal(x,tf.constant(-1.))
partitioned_data = tf.dynamic_partition(
x, tf.cast(condition_mask, tf.int32) , 2)
partitioned_data[1] = partitioned_data[1] + 1.0
condition_indices = tf.dynamic_partition(
tf.range(tf.shape(x)[0]), tf.cast(condition_mask, tf.int32) , 2)
x = tf.dynamic_stitch(condition_indices, partitioned_data)
# Here x=[1.1, -1., 6.2, 5.3, -1, 8.4], the -1. values remain
# unchanged.
Args
indices A list of at least 1 Tensor objects with type int32.
data A list with the same length as indices of Tensor objects with the same type.
name A name for the operation (optional).
Returns A Tensor. Has the same type as data. | tensorflow.dynamic_stitch |
tf.edit_distance View source on GitHub Computes the Levenshtein distance between sequences. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.edit_distance
tf.edit_distance(
hypothesis, truth, normalize=True, name='edit_distance'
)
This operation takes variable-length sequences (hypothesis and truth), each provided as a SparseTensor, and computes the Levenshtein distance. You can normalize the edit distance by the length of truth by setting normalize to True. For example: Given the following input,
hypothesis is a tf.SparseTensor of shape [2, 1, 1]
truth is a tf.SparseTensor of shape [2, 2, 2]
hypothesis = tf.SparseTensor(
[[0, 0, 0],
[1, 0, 0]],
["a", "b"],
(2, 1, 1))
truth = tf.SparseTensor(
[[0, 1, 0],
[1, 0, 0],
[1, 0, 1],
[1, 1, 0]],
["a", "b", "c", "a"],
(2, 2, 2))
tf.edit_distance(hypothesis, truth, normalize=True)
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[inf, 1. ],
[0.5, 1. ]], dtype=float32)>
The operation returns a dense Tensor of shape [2, 2] with edit distances normalized by truth lengths.
Note: It is possible to calculate edit distance between two sparse tensors with variable-length values. However, attempting to create them while eager execution is enabled will result in a ValueError.
For the following inputs, # 'hypothesis' is a tensor of shape `[2, 1]` with variable-length values:
# (0,0) = ["a"]
# (1,0) = ["b"]
hypothesis = tf.sparse.SparseTensor(
[[0, 0, 0],
[1, 0, 0]],
["a", "b"],
(2, 1, 1))
# 'truth' is a tensor of shape `[2, 2]` with variable-length values:
# (0,0) = []
# (0,1) = ["a"]
# (1,0) = ["b", "c"]
# (1,1) = ["a"]
truth = tf.sparse.SparseTensor(
[[0, 1, 0],
[1, 0, 0],
[1, 0, 1],
[1, 1, 0]],
["a", "b", "c", "a"],
(2, 2, 2))
normalize = True
# The output would be a dense Tensor of shape `(2,)`, with edit distances
# normalized by 'truth' lengths.
# output => array([0., 0.5], dtype=float32)
Args
hypothesis A SparseTensor containing hypothesis sequences.
truth A SparseTensor containing truth sequences.
normalize A bool. If True, normalizes the Levenshtein distance by length of truth.
name A name for the operation (optional).
Returns A dense Tensor with rank R - 1, where R is the rank of the SparseTensor inputs hypothesis and truth.
Raises
TypeError If either hypothesis or truth are not a SparseTensor. | tensorflow.edit_distance |
tf.einsum View source on GitHub Tensor contraction over specified indices and outer product. View aliases Main aliases
tf.linalg.einsum Compat aliases for migration See Migration guide for more details. tf.compat.v1.einsum, tf.compat.v1.linalg.einsum
tf.einsum(
equation, *inputs, **kwargs
)
Einsum allows defining Tensors by defining their element-wise computation. This computation is defined by equation, a shorthand form based on Einstein summation. As an example, consider multiplying two matrices A and B to form a matrix C. The elements of C are given by: $$ C_{i,k} = \sum_j A_{i,j} B_{j,k} $$ or C[i,k] = sum_j A[i,j] * B[j,k]
The corresponding einsum equation is: ij,jk->ik
In general, to convert the element-wise equation into the equation string, use the following procedure (intermediate strings for the matrix multiplication example provided in parentheses):
remove variable names, brackets, and commas, (ik = sum_j ij * jk)
replace "*" with ",", (ik = sum_j ij , jk)
drop summation signs, and (ik = ij, jk)
move the output to the right, while replacing "=" with "->". (ij,jk->ik)
Note: If the output indices are not specified repeated indices are summed. So ij,jk->ik can be simplified to ij,jk.
Many common operations can be expressed in this way. For example: Matrix multiplication
m0 = tf.random.normal(shape=[2, 3])
m1 = tf.random.normal(shape=[3, 5])
e = tf.einsum('ij,jk->ik', m0, m1)
# output[i,k] = sum_j m0[i,j] * m1[j, k]
print(e.shape)
(2, 5)
Repeated indices are summed if the output indices are not specified.
e = tf.einsum('ij,jk', m0, m1) # output[i,k] = sum_j m0[i,j] * m1[j, k]
print(e.shape)
(2, 5)
Dot product
u = tf.random.normal(shape=[5])
v = tf.random.normal(shape=[5])
e = tf.einsum('i,i->', u, v) # output = sum_i u[i]*v[i]
print(e.shape)
()
Outer product
u = tf.random.normal(shape=[3])
v = tf.random.normal(shape=[5])
e = tf.einsum('i,j->ij', u, v) # output[i,j] = u[i]*v[j]
print(e.shape)
(3, 5)
Transpose
m = tf.ones([2, 3])
e = tf.einsum('ij->ji', m) # output[j,i] = m[i,j]
print(e.shape)
(3, 2)
Diag
m = tf.reshape(tf.range(9), [3,3])
diag = tf.einsum('ii->i', m)
print(diag.shape)
(3,)
Trace
# Repeated indices are summed.
trace = tf.einsum('ii', m) # output = trace(m) = sum_i m[i, i]
assert trace == sum(diag)
print(trace.shape)
()
Batch matrix multiplication
s = tf.random.normal(shape=[7,5,3])
t = tf.random.normal(shape=[7,3,2])
e = tf.einsum('bij,bjk->bik', s, t)
# output[a,i,k] = sum_j s[a,i,j] * t[a, j, k]
print(e.shape)
(7, 5, 2)
This method does not support broadcasting on named axes. All axes with matching labels should have the same length. If you have length-1 axes, use tf.squeeze or tf.reshape to eliminate them. To write code that is agnostic to the number of indices in the input, use an ellipsis. The ellipsis is a placeholder for "whatever other indices fit here". For example, to perform a NumPy-style broadcasting-batch-matrix multiplication where the matrix multiply acts on the last two axes of the input, use:
s = tf.random.normal(shape=[11, 7, 5, 3])
t = tf.random.normal(shape=[11, 7, 3, 2])
e = tf.einsum('...ij,...jk->...ik', s, t)
print(e.shape)
(11, 7, 5, 2)
Einsum will broadcast over axes covered by the ellipsis.
s = tf.random.normal(shape=[11, 1, 5, 3])
t = tf.random.normal(shape=[1, 7, 3, 2])
e = tf.einsum('...ij,...jk->...ik', s, t)
print(e.shape)
(11, 7, 5, 2)
Args
equation a str describing the contraction, in the same format as numpy.einsum.
*inputs the inputs to contract (each one a Tensor), whose shapes should be consistent with equation.
**kwargs optimize: Optimization strategy to use to find contraction path using opt_einsum. Must be 'greedy', 'optimal', 'branch-2', 'branch-all' or 'auto'. (optional, default: 'greedy'). name: A name for the operation (optional).
Returns The contracted Tensor, with shape determined by equation.
Raises
ValueError If the format of equation is incorrect, number of inputs or their shapes are inconsistent with equation. | tensorflow.einsum |
tf.ensure_shape View source on GitHub Updates the shape of a tensor and checks at runtime that the shape holds. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.ensure_shape
tf.ensure_shape(
x, shape, name=None
)
For example:
@tf.function(input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)])
def f(tensor):
return tf.ensure_shape(tensor, [3, 3])
f(tf.zeros([3, 3])) # Passes
<tf.Tensor: shape=(3, 3), dtype=float32, numpy=
array([[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]], dtype=float32)>
f([1, 2, 3]) # fails
Traceback (most recent call last):
InvalidArgumentError: Shape of tensor x [3] is not compatible with expected shape [3,3].
The above example raises tf.errors.InvalidArgumentError, because the shape (3,) is not compatible with the shape (3, 3). With eager execution this is a shape assertion that returns the input:
x = tf.constant([1,2,3])
print(x.shape)
(3,)
x = tf.ensure_shape(x, [3])
x = tf.ensure_shape(x, [5])
Traceback (most recent call last):
tf.errors.InvalidArgumentError: Shape of tensor dummy_input [3] is not
compatible with expected shape [5]. [Op:EnsureShape]
Inside a tf.function or v1.Graph context it checks both the buildtime and runtime shapes. This is stricter than tf.Tensor.set_shape which only checks the buildtime shape.
Note: This differs from tf.Tensor.set_shape in that it sets the static shape of the resulting tensor and enforces it at runtime, raising an error if the tensor's runtime shape is incompatible with the specified shape. tf.Tensor.set_shape sets the static shape of the tensor without enforcing it at runtime, which may result in inconsistencies between the statically-known shape of tensors and the runtime value of tensors.
For example, of loading images of a known size:
@tf.function
def decode_image(png):
image = tf.image.decode_png(png, channels=3)
# the `print` executes during tracing.
print("Initial shape: ", image.shape)
image = tf.ensure_shape(image,[28, 28, 3])
print("Final shape: ", image.shape)
return image
When tracing a function, no ops are executed and shapes may be unknown. See the Concrete Functions Guide for details.
concrete_decode = decode_image.get_concrete_function(
tf.TensorSpec([], dtype=tf.string))
Initial shape: (None, None, 3)
Final shape: (28, 28, 3)
image = tf.random.uniform(maxval=255, shape=[28, 28, 3], dtype=tf.int32)
image = tf.cast(image,tf.uint8)
png = tf.image.encode_png(image)
image2 = concrete_decode(png)
print(image2.shape)
(28, 28, 3)
image = tf.concat([image,image], axis=0)
print(image.shape)
(56, 28, 3)
png = tf.image.encode_png(image)
image2 = concrete_decode(png)
Traceback (most recent call last):
tf.errors.InvalidArgumentError: Shape of tensor DecodePng [56,28,3] is not
compatible with expected shape [28,28,3].
Caution: if you don't use the result of tf.ensure_shape the check may not run.
@tf.function
def bad_decode_image(png):
image = tf.image.decode_png(png, channels=3)
# the `print` executes during tracing.
print("Initial shape: ", image.shape)
# BAD: forgot to use the returned tensor.
tf.ensure_shape(image,[28, 28, 3])
print("Final shape: ", image.shape)
return image
image = bad_decode_image(png)
Initial shape: (None, None, 3)
Final shape: (None, None, 3)
print(image.shape)
(56, 28, 3)
Args
x A Tensor.
shape A TensorShape representing the shape of this tensor, a TensorShapeProto, a list, a tuple, or None.
name A name for this operation (optional). Defaults to "EnsureShape".
Returns A Tensor. Has the same type and contents as x.
Raises
tf.errors.InvalidArgumentError If shape is incompatible with the shape of x. | tensorflow.ensure_shape |
Module: tf.errors Exception types for TensorFlow errors. Classes class AbortedError: The operation was aborted, typically due to a concurrent action. class AlreadyExistsError: Raised when an entity that we attempted to create already exists. class CancelledError: Raised when an operation or step is cancelled. class DataLossError: Raised when unrecoverable data loss or corruption is encountered. class DeadlineExceededError: Raised when a deadline expires before an operation could complete. class FailedPreconditionError: Operation was rejected because the system is not in a state to execute it. class InternalError: Raised when the system experiences an internal error. class InvalidArgumentError: Raised when an operation receives an invalid argument. class NotFoundError: Raised when a requested entity (e.g., a file or directory) was not found. class OpError: A generic error that is raised when TensorFlow execution fails. class OperatorNotAllowedInGraphError: An error is raised for unsupported operator in Graph execution. class OutOfRangeError: Raised when an operation iterates past the valid input range. class PermissionDeniedError: Raised when the caller does not have permission to run an operation. class ResourceExhaustedError: Some resource has been exhausted. class UnauthenticatedError: The request does not have valid authentication credentials. class UnavailableError: Raised when the runtime is currently unavailable. class UnimplementedError: Raised when an operation has not been implemented. class UnknownError: Unknown error.
Other Members
ABORTED 10
ALREADY_EXISTS 6
CANCELLED 1
DATA_LOSS 15
DEADLINE_EXCEEDED 4
FAILED_PRECONDITION 9
INTERNAL 13
INVALID_ARGUMENT 3
NOT_FOUND 5
OK 0
OUT_OF_RANGE 11
PERMISSION_DENIED 7
RESOURCE_EXHAUSTED 8
UNAUTHENTICATED 16
UNAVAILABLE 14
UNIMPLEMENTED 12
UNKNOWN 2 | tensorflow.errors |
tf.errors.AbortedError View source on GitHub The operation was aborted, typically due to a concurrent action. Inherits From: OpError View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.errors.AbortedError
tf.errors.AbortedError(
node_def, op, message
)
For example, running a tf.QueueBase.enqueue operation may raise AbortedError if a tf.QueueBase.close operation previously ran.
Attributes
error_code The integer error code that describes the error.
message The error message that describes the error.
node_def The NodeDef proto representing the op that failed.
op The operation that failed, if known.
Note: If the failed op was synthesized at runtime, e.g. a Send or Recv op, there will be no corresponding tf.Operation object. In that case, this will return None, and you should instead use the tf.errors.OpError.node_def to discover information about the op. | tensorflow.errors.abortederror |
tf.errors.AlreadyExistsError View source on GitHub Raised when an entity that we attempted to create already exists. Inherits From: OpError View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.errors.AlreadyExistsError
tf.errors.AlreadyExistsError(
node_def, op, message
)
For example, running an operation that saves a file (e.g. tf.train.Saver.save) could potentially raise this exception if an explicit filename for an existing file was passed.
Attributes
error_code The integer error code that describes the error.
message The error message that describes the error.
node_def The NodeDef proto representing the op that failed.
op The operation that failed, if known.
Note: If the failed op was synthesized at runtime, e.g. a Send or Recv op, there will be no corresponding tf.Operation object. In that case, this will return None, and you should instead use the tf.errors.OpError.node_def to discover information about the op. | tensorflow.errors.alreadyexistserror |
tf.errors.CancelledError View source on GitHub Raised when an operation or step is cancelled. Inherits From: OpError View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.errors.CancelledError
tf.errors.CancelledError(
node_def, op, message
)
For example, a long-running operation (e.g. tf.QueueBase.enqueue) may be cancelled by running another operation (e.g. tf.QueueBase.close), or by tf.Session.close. A step that is running such a long-running operation will fail by raising CancelledError.
Attributes
error_code The integer error code that describes the error.
message The error message that describes the error.
node_def The NodeDef proto representing the op that failed.
op The operation that failed, if known.
Note: If the failed op was synthesized at runtime, e.g. a Send or Recv op, there will be no corresponding tf.Operation object. In that case, this will return None, and you should instead use the tf.errors.OpError.node_def to discover information about the op. | tensorflow.errors.cancellederror |
tf.errors.DataLossError View source on GitHub Raised when unrecoverable data loss or corruption is encountered. Inherits From: OpError View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.errors.DataLossError
tf.errors.DataLossError(
node_def, op, message
)
For example, this may be raised by running a tf.WholeFileReader.read operation, if the file is truncated while it is being read.
Attributes
error_code The integer error code that describes the error.
message The error message that describes the error.
node_def The NodeDef proto representing the op that failed.
op The operation that failed, if known.
Note: If the failed op was synthesized at runtime, e.g. a Send or Recv op, there will be no corresponding tf.Operation object. In that case, this will return None, and you should instead use the tf.errors.OpError.node_def to discover information about the op. | tensorflow.errors.datalosserror |
tf.errors.DeadlineExceededError View source on GitHub Raised when a deadline expires before an operation could complete. Inherits From: OpError View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.errors.DeadlineExceededError
tf.errors.DeadlineExceededError(
node_def, op, message
)
This exception is not currently used.
Attributes
error_code The integer error code that describes the error.
message The error message that describes the error.
node_def The NodeDef proto representing the op that failed.
op The operation that failed, if known.
Note: If the failed op was synthesized at runtime, e.g. a Send or Recv op, there will be no corresponding tf.Operation object. In that case, this will return None, and you should instead use the tf.errors.OpError.node_def to discover information about the op. | tensorflow.errors.deadlineexceedederror |
tf.errors.FailedPreconditionError View source on GitHub Operation was rejected because the system is not in a state to execute it. Inherits From: OpError View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.errors.FailedPreconditionError
tf.errors.FailedPreconditionError(
node_def, op, message
)
This exception is most commonly raised when running an operation that reads a tf.Variable before it has been initialized.
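A minimal sketch of that case using the TF1 compat API (graph mode assumed; note that disabling eager execution affects the whole process):
import tensorflow as tf

tf.compat.v1.disable_eager_execution()
v = tf.compat.v1.Variable(1.0, name="v")
with tf.compat.v1.Session() as sess:
  try:
    sess.run(v)  # read before the initializer has run
  except tf.errors.FailedPreconditionError as e:
    print(e.message)  # e.g. "Attempting to use uninitialized value v"
  sess.run(v.initializer)
  print(sess.run(v))  # 1.0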
Attributes
error_code The integer error code that describes the error.
message The error message that describes the error.
node_def The NodeDef proto representing the op that failed.
op The operation that failed, if known.
Note: If the failed op was synthesized at runtime, e.g. a Send or Recv op, there will be no corresponding tf.Operation object. In that case, this will return None, and you should instead use the tf.errors.OpError.node_def to discover information about the op. | tensorflow.errors.failedpreconditionerror |
tf.errors.InternalError View source on GitHub Raised when the system experiences an internal error. Inherits From: OpError View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.errors.InternalError
tf.errors.InternalError(
node_def, op, message
)
This exception is raised when some invariant expected by the runtime has been broken. Catching this exception is not recommended.
Attributes
error_code The integer error code that describes the error.
message The error message that describes the error.
node_def The NodeDef proto representing the op that failed.
op The operation that failed, if known.
Note: If the failed op was synthesized at runtime, e.g. a Send or Recv op, there will be no corresponding tf.Operation object. In that case, this will return None, and you should instead use the tf.errors.OpError.node_def to discover information about the op. | tensorflow.errors.internalerror |
tf.errors.InvalidArgumentError View source on GitHub Raised when an operation receives an invalid argument. Inherits From: OpError View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.errors.InvalidArgumentError
tf.errors.InvalidArgumentError(
node_def, op, message
)
This may occur, for example, if an operation receives an input tensor that has an invalid value or shape. For example, the tf.matmul op will raise this error if it receives an input that is not a matrix, and the tf.reshape op will raise this error if the new shape does not match the number of elements in the input tensor.
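For instance, a minimal sketch of the tf.matmul case (eager mode assumed; rank-1 inputs are not matrices):
try:
  tf.matmul(tf.ones([3]), tf.ones([3]))  # 1-D inputs are not matrices
except tf.errors.InvalidArgumentError as e:
  print(e.message)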
Attributes
error_code The integer error code that describes the error.
message The error message that describes the error.
node_def The NodeDef proto representing the op that failed.
op The operation that failed, if known.
Note: If the failed op was synthesized at runtime, e.g. a Send or Recv op, there will be no corresponding tf.Operation object. In that case, this will return None, and you should instead use the tf.errors.OpError.node_def to discover information about the op. | tensorflow.errors.invalidargumenterror |
tf.errors.NotFoundError View source on GitHub Raised when a requested entity (e.g., a file or directory) was not found. Inherits From: OpError View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.errors.NotFoundError
tf.errors.NotFoundError(
node_def, op, message
)
For example, running the tf.WholeFileReader.read operation could raise NotFoundError if it receives the name of a file that does not exist.
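A minimal sketch (the path is illustrative):
try:
  tf.io.read_file("/path/that/does/not/exist")
except tf.errors.NotFoundError as e:
  print(e.message)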
Attributes
error_code The integer error code that describes the error.
message The error message that describes the error.
node_def The NodeDef proto representing the op that failed.
op The operation that failed, if known.
Note: If the failed op was synthesized at runtime, e.g. a Send or Recv op, there will be no corresponding tf.Operation object. In that case, this will return None, and you should instead use the tf.errors.OpError.node_def to discover information about the op. | tensorflow.errors.notfounderror |
tf.errors.OperatorNotAllowedInGraphError An error raised for an unsupported operator in Graph execution.
tf.errors.OperatorNotAllowedInGraphError(
*args, **kwargs
)
For example, using a tf.Tensor as a Python bool in Graph execution is not allowed. | tensorflow.errors.operatornotallowedingrapherror |
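A minimal sketch of the tf.Tensor-as-bool case above (the function f is hypothetical; calling bool() on a symbolic Tensor during tf.function tracing triggers the error):
@tf.function
def f(x):
  return bool(x)  # a symbolic Tensor has no concrete boolean value

try:
  f(tf.constant(True))
except tf.errors.OperatorNotAllowedInGraphError as e:
  print(e)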
tf.errors.OpError View source on GitHub A generic error that is raised when TensorFlow execution fails. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.OpError, tf.compat.v1.errors.OpError
tf.errors.OpError(
node_def, op, message, error_code
)
Whenever possible, the session will raise a more specific subclass of OpError from the tf.errors module.
Args
node_def The node_def_pb2.NodeDef proto representing the op that failed, if known; otherwise None.
op The ops.Operation that failed, if known; otherwise None.
message The message string describing the failure.
error_code The error_codes_pb2.Code describing the error.
Attributes
error_code The integer error code that describes the error.
message The error message that describes the error.
node_def The NodeDef proto representing the op that failed.
op The operation that failed, if known.
Note: If the failed op was synthesized at runtime, e.g. a Send or Recv op, there will be no corresponding tf.Operation object. In that case, this will return None, and you should instead use the tf.errors.OpError.node_def to discover information about the op. | tensorflow.errors.operror |
tf.errors.OutOfRangeError View source on GitHub Raised when an operation iterates past the valid input range. Inherits From: OpError View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.errors.OutOfRangeError
tf.errors.OutOfRangeError(
node_def, op, message
)
This exception is raised in "end-of-file" conditions, such as when a tf.QueueBase.dequeue operation is blocked on an empty queue, and a tf.QueueBase.close operation executes.
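A minimal sketch with a tf.data iterator (eager mode assumed; a plain Python for loop over the dataset handles this condition for you):
dataset = tf.data.Dataset.range(2)
it = iter(dataset)
print(it.get_next())  # 0
print(it.get_next())  # 1
try:
  it.get_next()
except tf.errors.OutOfRangeError:
  print("End of dataset")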
Attributes
error_code The integer error code that describes the error.
message The error message that describes the error.
node_def The NodeDef proto representing the op that failed.
op The operation that failed, if known.
Note: If the failed op was synthesized at runtime, e.g. a Send or Recv op, there will be no corresponding tf.Operation object. In that case, this will return None, and you should instead use the tf.errors.OpError.node_def to discover information about the op. | tensorflow.errors.outofrangeerror |
tf.errors.PermissionDeniedError View source on GitHub Raised when the caller does not have permission to run an operation. Inherits From: OpError View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.errors.PermissionDeniedError
tf.errors.PermissionDeniedError(
node_def, op, message
)
For example, running the tf.WholeFileReader.read operation could raise PermissionDeniedError if it receives the name of a file for which the user does not have read permission.
Attributes
error_code The integer error code that describes the error.
message The error message that describes the error.
node_def The NodeDef proto representing the op that failed.
op The operation that failed, if known.
Note: If the failed op was synthesized at runtime, e.g. a Send or Recv op, there will be no corresponding tf.Operation object. In that case, this will return None, and you should instead use the tf.errors.OpError.node_def to discover information about the op. | tensorflow.errors.permissiondeniederror |
tf.errors.ResourceExhaustedError View source on GitHub Some resource has been exhausted. Inherits From: OpError View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.errors.ResourceExhaustedError
tf.errors.ResourceExhaustedError(
node_def, op, message
)
For example, this error might be raised if a per-user quota is exhausted, or perhaps the entire file system is out of space.
Attributes
error_code The integer error code that describes the error.
message The error message that describes the error.
node_def The NodeDef proto representing the op that failed.
op The operation that failed, if known.
Note: If the failed op was synthesized at runtime, e.g. a Send or Recv op, there will be no corresponding tf.Operation object. In that case, this will return None, and you should instead use the tf.errors.OpError.node_def to discover information about the op. | tensorflow.errors.resourceexhaustederror |
tf.errors.UnauthenticatedError View source on GitHub The request does not have valid authentication credentials. Inherits From: OpError View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.errors.UnauthenticatedError
tf.errors.UnauthenticatedError(
node_def, op, message
)
This exception is not currently used.
Attributes
error_code The integer error code that describes the error.
message The error message that describes the error.
node_def The NodeDef proto representing the op that failed.
op The operation that failed, if known.
Note: If the failed op was synthesized at runtime, e.g. a Send or Recv op, there will be no corresponding tf.Operation object. In that case, this will return None, and you should instead use the tf.errors.OpError.node_def to discover information about the op. | tensorflow.errors.unauthenticatederror |
tf.errors.UnavailableError View source on GitHub Raised when the runtime is currently unavailable. Inherits From: OpError View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.errors.UnavailableError
tf.errors.UnavailableError(
node_def, op, message
)
This exception is not currently used.
Attributes
error_code The integer error code that describes the error.
message The error message that describes the error.
node_def The NodeDef proto representing the op that failed.
op The operation that failed, if known.
Note: If the failed op was synthesized at runtime, e.g. a Send or Recv op, there will be no corresponding tf.Operation object. In that case, this will return None, and you should instead use the tf.errors.OpError.node_def to discover information about the op. | tensorflow.errors.unavailableerror |
tf.errors.UnimplementedError View source on GitHub Raised when an operation has not been implemented. Inherits From: OpError View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.errors.UnimplementedError
tf.errors.UnimplementedError(
node_def, op, message
)
An operation may raise this error when passed otherwise-valid arguments that it does not currently support. For example, running the tf.nn.max_pool2d operation would raise this error if pooling was requested on the batch dimension, because this is not yet supported.
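A minimal sketch of the max-pool case (the exact error message wording may differ across versions):
x = tf.ones([2, 4, 4, 1])
try:
  # Request pooling over the batch dimension, which is not supported.
  tf.nn.max_pool2d(x, ksize=[2, 1, 1, 1], strides=[1, 1, 1, 1],
                   padding="VALID")
except tf.errors.UnimplementedError as e:
  print(e.message)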
Attributes
error_code The integer error code that describes the error.
message The error message that describes the error.
node_def The NodeDef proto representing the op that failed.
op The operation that failed, if known.
Note: If the failed op was synthesized at runtime, e.g. a Send or Recv op, there will be no corresponding tf.Operation object. In that case, this will return None, and you should instead use the tf.errors.OpError.node_def to discover information about the op. | tensorflow.errors.unimplementederror |
tf.errors.UnknownError View source on GitHub Unknown error. Inherits From: OpError View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.errors.UnknownError
tf.errors.UnknownError(
node_def, op, message, error_code=UNKNOWN
)
An example of where this error may be returned is if a Status value received from another address space belongs to an error-space that is not known to this address space. Also, errors raised by APIs that do not return enough error information may be converted to this error.
Attributes
error_code The integer error code that describes the error.
message The error message that describes the error.
node_def The NodeDef proto representing the op that failed.
op The operation that failed, if known.
Note: If the failed op was synthesized at runtime, e.g. a Send or Recv op, there will be no corresponding tf.Operation object. In that case, this will return None, and you should instead use the tf.errors.OpError.node_def to discover information about the op. | tensorflow.errors.unknownerror |
Module: tf.estimator Estimator: High level tools for working with models. Modules experimental module: Public API for tf.estimator.experimental namespace. export module: All public utility methods for exporting Estimator to SavedModel. Classes class BaselineClassifier: A classifier that can establish a simple baseline. class BaselineEstimator: An estimator that can establish a simple baseline. class BaselineRegressor: A regressor that can establish a simple baseline. class BestExporter: This class exports the serving graph and checkpoints of the best models. class BinaryClassHead: Creates a Head for single label binary classification. class BoostedTreesClassifier: A Classifier for Tensorflow Boosted Trees models. class BoostedTreesEstimator: An Estimator for Tensorflow Boosted Trees models. class BoostedTreesRegressor: A Regressor for Tensorflow Boosted Trees models. class CheckpointSaverHook: Saves checkpoints every N steps or seconds. class CheckpointSaverListener: Interface for listeners that take action before or after checkpoint save. class DNNClassifier: A classifier for TensorFlow DNN models. class DNNEstimator: An estimator for TensorFlow DNN models with user-specified head. class DNNLinearCombinedClassifier: An estimator for TensorFlow Linear and DNN joined classification models. class DNNLinearCombinedEstimator: An estimator for TensorFlow Linear and DNN joined models with custom head. class DNNLinearCombinedRegressor: An estimator for TensorFlow Linear and DNN joined models for regression. class DNNRegressor: A regressor for TensorFlow DNN models. class Estimator: Estimator class to train and evaluate TensorFlow models. class EstimatorSpec: Ops and objects returned from a model_fn and passed to an Estimator. class EvalSpec: Configuration for the "eval" part for the train_and_evaluate call. class Exporter: A class representing a type of model export. class FeedFnHook: Runs feed_fn and sets the feed_dict accordingly. class FinalExporter: This class exports the serving graph and checkpoints at the end. class FinalOpsHook: A hook which evaluates Tensors at the end of a session. class GlobalStepWaiterHook: Delays execution until global step reaches wait_until_step. class Head: Interface for the head/top of a model. class LatestExporter: This class regularly exports the serving graph and checkpoints. class LinearClassifier: Linear classifier model. class LinearEstimator: An estimator for TensorFlow linear models with user-specified head. class LinearRegressor: An estimator for TensorFlow Linear regression problems. class LoggingTensorHook: Prints the given tensors every N local steps, every N seconds, or at end. class LogisticRegressionHead: Creates a Head for logistic regression. class ModeKeys: Standard names for Estimator model modes. class MultiClassHead: Creates a Head for multi class classification. class MultiHead: Creates a Head for multi-objective learning. class MultiLabelHead: Creates a Head for multi-label classification. class NanLossDuringTrainingError: Unspecified run-time error. class NanTensorHook: Monitors the loss tensor and stops training if loss is NaN. class PoissonRegressionHead: Creates a Head for poisson regression using tf.nn.log_poisson_loss. class ProfilerHook: Captures CPU/GPU profiling information every N steps or seconds. class RegressionHead: Creates a Head for regression using the mean_squared_error loss. class RunConfig: This class specifies the configurations for an Estimator run. 
class SecondOrStepTimer: Timer that triggers at most once every N seconds or once every N steps. class SessionRunArgs: Represents arguments to be added to a Session.run() call. class SessionRunContext: Provides information about the session.run() call being made. class SessionRunHook: Hook to extend calls to MonitoredSession.run(). class SessionRunValues: Contains the results of Session.run(). class StepCounterHook: Hook that counts steps per second. class StopAtStepHook: Hook that requests stop at a specified step. class SummarySaverHook: Saves summaries every N steps. class TrainSpec: Configuration for the "train" part for the train_and_evaluate call. class VocabInfo: Vocabulary information for warm-starting. class WarmStartSettings: Settings for warm-starting in tf.estimator.Estimators. Functions add_metrics(...): Creates a new tf.estimator.Estimator which has given metrics. classifier_parse_example_spec(...): Generates parsing spec for tf.parse_example to be used with classifiers. regressor_parse_example_spec(...): Generates parsing spec for tf.parse_example to be used with regressors. train_and_evaluate(...): Train and evaluate the estimator. | tensorflow.estimator |
tf.estimator.add_metrics View source on GitHub Creates a new tf.estimator.Estimator which has given metrics. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.add_metrics
tf.estimator.add_metrics(
estimator, metric_fn
)
Example: def my_auc(labels, predictions):
auc_metric = tf.keras.metrics.AUC(name="my_auc")
auc_metric.update_state(y_true=labels, y_pred=predictions['logistic'])
return {'auc': auc_metric}
estimator = tf.estimator.DNNClassifier(...)
estimator = tf.estimator.add_metrics(estimator, my_auc)
estimator.train(...)
estimator.evaluate(...)
Example usage of custom metric which uses features: def my_auc(labels, predictions, features):
auc_metric = tf.keras.metrics.AUC(name="my_auc")
auc_metric.update_state(y_true=labels, y_pred=predictions['logistic'],
sample_weight=features['weight'])
return {'auc': auc_metric}
estimator = tf.estimator.DNNClassifier(...)
estimator = tf.estimator.add_metrics(estimator, my_auc)
estimator.train(...)
estimator.evaluate(...)
Args
estimator A tf.estimator.Estimator object.
metric_fn A function which should obey the following signature: Args: can only have the following four arguments, in any order: predictions: Predictions Tensor or dict of Tensor created by the given estimator. features: Input dict of Tensor objects created by input_fn which is given to estimator.evaluate as an argument. labels: Labels Tensor or dict of Tensor created by input_fn which is given to estimator.evaluate as an argument. config: config attribute of the estimator. Returns: Dict of metric results keyed by name. Final metrics are a union of this and the estimator's existing metrics. If there is a name conflict between this and the estimator's existing metrics, this will override the existing one. The values of the dict are the results of calling a metric function, namely a (metric_tensor, update_op) tuple.
Returns A new tf.estimator.Estimator which has a union of original metrics with given ones. | tensorflow.estimator.add_metrics |
tf.estimator.BaselineClassifier View source on GitHub A classifier that can establish a simple baseline. Inherits From: Estimator, Estimator
tf.estimator.BaselineClassifier(
model_dir=None, n_classes=2, weight_column=None, label_vocabulary=None,
optimizer='Ftrl', config=None,
loss_reduction=losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE
)
This classifier ignores feature values and will learn to predict the average value of each label. For single-label problems, this will predict the probability distribution of the classes as seen in the labels. For multi-label problems, this will predict the fraction of examples that are positive for each class. Example:
# Build BaselineClassifier
classifier = tf.estimator.BaselineClassifier(n_classes=3)
# Input builders
def input_fn_train():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
def input_fn_eval():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
# Fit model.
classifier.train(input_fn=input_fn_train)
# Evaluate cross entropy between the test and train labels.
loss = classifier.evaluate(input_fn=input_fn_eval)["loss"]
# predict outputs the probability distribution of the classes as seen in
# training.
predictions = classifier.predict(new_samples)
Input of train and evaluate should have the following features, otherwise there will be a KeyError: if weight_column is not None, a feature with key=weight_column whose value is a Tensor.
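As a runnable sketch of the input builders above (the toy features and labels are illustrative; the baseline ignores the features anyway):
import tensorflow as tf

def input_fn_train():
  features = {"x": tf.constant([[1.0], [2.0], [3.0], [4.0]])}
  labels = tf.constant([0, 1, 1, 2])
  return tf.data.Dataset.from_tensor_slices(
      (features, labels)).repeat().batch(2)

classifier = tf.estimator.BaselineClassifier(n_classes=3)
classifier.train(input_fn=input_fn_train, steps=5)
metrics = classifier.evaluate(input_fn=input_fn_train, steps=2)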
Args
model_dir Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.
n_classes number of label classes. Default is binary classification. It must be greater than 1. Note: Class labels are integers representing the class index (i.e. values from 0 to n_classes-1). For arbitrary label values (e.g. string labels), convert to class indices first.
weight_column A string or a NumericColumn created by tf.feature_column.numeric_column defining feature column representing weights. It will be multiplied by the loss of the example.
label_vocabulary Optional list of strings with size [n_classes] defining the label vocabulary. Only supported for n_classes > 2.
optimizer String, tf.keras.optimizers.* object, or callable that creates the optimizer to use for training. If not specified, will use Ftrl as the default optimizer.
config RunConfig object to configure the runtime settings.
loss_reduction One of tf.losses.Reduction except NONE. Describes how to reduce training loss over batch. Defaults to SUM_OVER_BATCH_SIZE.
Raises
ValueError If n_classes < 2. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes.
Attributes
config
export_savedmodel
model_dir
model_fn Returns the model_fn which is bound to self.params.
params
Methods eval_dir View source
eval_dir(
name=None
)
Shows the directory name where evaluation metrics are dumped.
Args
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A string which is the path of the directory containing evaluation metrics.
evaluate View source
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until:
steps batches are processed, or
input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError If steps <= 0. experimental_export_all_saved_models View source
experimental_export_all_saved_models(
export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False,
checkpoint_path=None
)
Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source
export_saved_model(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, experimental_mode=ModeKeys.PREDICT
)
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
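A hedged sketch of a serving_input_receiver_fn for a model with a single float feature named "x", continuing the classifier example above (the feature name, shape, and export path are assumptions):
def serving_input_receiver_fn():
  # The receiver tensors double as the features fed to the model_fn.
  inputs = {"x": tf.compat.v1.placeholder(
      dtype=tf.float32, shape=[None, 1], name="x")}
  return tf.estimator.export.ServingInputReceiver(inputs, inputs)

export_dir = classifier.export_saved_model(
    "/tmp/baseline_export", serving_input_receiver_fn)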
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source
get_variable_names()
Returns list of all variable names in this model.
Returns List of names.
Raises
ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source
get_variable_value(
name
)
Returns value of the variable given by name.
Args
name string or a list of string, name of the tensor.
Returns Numpy array - value of the tensor.
Raises
ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns The full path to the latest checkpoint or None if no checkpoint was found.
predict View source
predict(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None,
yield_single_examples=True
)
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
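For example, a sketch of consuming predictions one example at a time, continuing the classifier example above (input_fn_predict is illustrative; the 'class_ids' key is emitted by canned classifiers such as BaselineClassifier, but a custom model_fn may use different keys):
def input_fn_predict():
  # Features only; no labels at prediction time.
  return tf.data.Dataset.from_tensor_slices(
      {"x": tf.constant([[1.0], [2.0]])}).batch(1)

for pred in classifier.predict(input_fn=input_fn_predict):
  print(pred["class_ids"])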
Args
input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following:
tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields Evaluated values of predictions tensors.
Raises
ValueError If batch length of predictions is not the same and yield_single_examples is True.
ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
Trains a model given training data input_fn.
Args
input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps Number of steps for which to train the model. If None, train forever or train until input_fn raises tf.errors.OutOfRangeError or StopIteration. steps works incrementally: if you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set max_steps instead. If set, max_steps must be None.
max_steps Number of total steps for which to train the model. If None, train forever or train until input_fn raises tf.errors.OutOfRangeError or StopIteration. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iterations, since the first call did all 100 steps.
saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns self, for chaining.
Raises
ValueError If both steps and max_steps are not None.
ValueError If either steps or max_steps <= 0. | tensorflow.estimator.baselineclassifier |
tf.estimator.BaselineEstimator View source on GitHub An estimator that can establish a simple baseline. Inherits From: Estimator, Estimator
tf.estimator.BaselineEstimator(
head, model_dir=None, optimizer='Ftrl', config=None
)
The estimator uses a user-specified head. This estimator ignores feature values and will learn to predict the average value of each label. E.g. for single-label classification problems, this will predict the probability distribution of the classes as seen in the labels. For multi-label classification problems, it will predict the ratio of examples that contain each class. Example:
# Build baseline multi-label classifier.
estimator = tf.estimator.BaselineEstimator(
head=tf.estimator.MultiLabelHead(n_classes=3))
# Input builders
def input_fn_train():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
def input_fn_eval():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
# Fit model.
estimator.train(input_fn=input_fn_train)
# Evaluates cross entropy between the test and train labels.
loss = estimator.evaluate(input_fn=input_fn_eval)["loss"]
# For each class, predicts the ratio of training examples that contain the
# class.
predictions = estimator.predict(new_samples)
Input of train and evaluate should have the following features, otherwise there will be a KeyError: if weight_column is specified in the head constructor (and not None) for the head passed to BaselineEstimator's constructor, a feature with key=weight_column whose value is a Tensor.
Args
head A Head instance constructed with a method such as tf.estimator.MultiLabelHead.
model_dir Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.
optimizer String, tf.keras.optimizers.* object, or callable that creates the optimizer to use for training. If not specified, will use Ftrl as the default optimizer.
config RunConfig object to configure the runtime settings.
Attributes
config
export_savedmodel
model_dir
model_fn Returns the model_fn which is bound to self.params.
params
Methods eval_dir View source
eval_dir(
name=None
)
Shows the directory name where evaluation metrics are dumped.
Args
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A string which is the path of the directory containing evaluation metrics.
evaluate View source
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until:
steps batches are processed, or
input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError If steps <= 0. experimental_export_all_saved_models View source
experimental_export_all_saved_models(
export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False,
checkpoint_path=None
)
Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source
export_saved_model(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, experimental_mode=ModeKeys.PREDICT
)
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source
get_variable_names()
Returns list of all variable names in this model.
Returns List of names.
Raises
ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source
get_variable_value(
name
)
Returns value of the variable given by name.
Args
name string or a list of string, name of the tensor.
Returns Numpy array - value of the tensor.
Raises
ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns The full path to the latest checkpoint or None if no checkpoint was found.
predict View source
predict(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None,
yield_single_examples=True
)
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
Args
input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following:
tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields Evaluated values of predictions tensors.
Raises
ValueError If batch length of predictions is not the same and yield_single_examples is True.
ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
Trains a model given training data input_fn.
Args
input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps Number of steps for which to train the model. If None, train forever or train until input_fn raises tf.errors.OutOfRangeError or StopIteration. steps works incrementally: if you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set max_steps instead. If set, max_steps must be None.
max_steps Number of total steps for which to train the model. If None, train forever or train until input_fn raises tf.errors.OutOfRangeError or StopIteration. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iterations, since the first call did all 100 steps.
saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns self, for chaining.
Raises
ValueError If both steps and max_steps are not None.
ValueError If either steps or max_steps <= 0. | tensorflow.estimator.baselineestimator |
tf.estimator.BaselineRegressor View source on GitHub A regressor that can establish a simple baseline. Inherits From: Estimator, Estimator
tf.estimator.BaselineRegressor(
model_dir=None, label_dimension=1, weight_column=None,
optimizer='Ftrl', config=None,
loss_reduction=losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE
)
This regressor ignores feature values and will learn to predict the average value of each label. Example:
# Build BaselineRegressor
regressor = tf.estimator.BaselineRegressor()
# Input builders
def input_fn_train():
# Returns tf.data.Dataset of (x, y) tuple where y represents the label's
# value.
pass
def input_fn_eval():
# Returns tf.data.Dataset of (x, y) tuple where y represents the label's
# value.
pass
# Fit model.
regressor.train(input_fn=input_fn_train)
# Evaluate squared-loss between the test and train targets.
loss = regressor.evaluate(input_fn=input_fn_eval)["loss"]
# Predict outputs the mean value seen during training. predict() takes an
# input_fn; the first item of the (x, y) tuple is used as features.
predictions = regressor.predict(input_fn=input_fn_eval)
The input to train and evaluate should contain the following features, otherwise a KeyError is raised: if weight_column is not None, a feature with key=weight_column whose value is a Tensor.
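A minimal sketch of the weight_column requirement (the feature key 'w' and the data are illustrative):
import numpy as np
import tensorflow as tf

def input_fn_weighted():
  # The features dict carries the per-example weights under key 'w'.
  features = {'w': np.array([[1.0], [2.0]], dtype=np.float32)}
  labels = np.array([[0.3], [0.7]], dtype=np.float32)
  return tf.data.Dataset.from_tensor_slices((features, labels)).batch(2)

weighted_regressor = tf.estimator.BaselineRegressor(weight_column='w')
weighted_regressor.train(input_fn=input_fn_weighted, steps=5)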
Args
model_dir Directory in which to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.
label_dimension Number of regression targets per example. This is the size of the last dimension of the labels and logits Tensor objects (typically, these have shape [batch_size, label_dimension]).
weight_column A string or a _NumericColumn created by tf.feature_column.numeric_column defining feature column representing weights. It will be multiplied by the loss of the example.
optimizer String, tf.keras.optimizers.* object, or callable that creates the optimizer to use for training. If not specified, will use Ftrl as the default optimizer.
config RunConfig object to configure the runtime settings.
loss_reduction One of tf.losses.Reduction except NONE. Describes how to reduce training loss over batch. Defaults to SUM_OVER_BATCH_SIZE. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes.
Attributes
config
export_savedmodel
model_dir
model_fn Returns the model_fn which is bound to self.params.
params
Methods eval_dir View source
eval_dir(
name=None
)
Shows the directory name where evaluation metrics are dumped.
Args
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A string which is the path of the directory containing evaluation metrics.
evaluate View source
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until:
steps batches are processed, or
input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
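A short sketch, reusing regressor and input_fn_eval from the example above, of reading the returned metrics dict:
metrics = regressor.evaluate(input_fn=input_fn_eval, steps=10)
print(metrics['loss'], metrics['average_loss'], metrics['global_step'])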
Raises
ValueError If steps <= 0. experimental_export_all_saved_models View source
experimental_export_all_saved_models(
export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False,
checkpoint_path=None
)
Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
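A hedged sketch of a multi-mode export, reusing the regressor from the example above; it uses the raw receiver builders from tf.estimator.export, and the sample tensors below only fix each receiver's dtype and shape:
import tensorflow as tf

features = {'x': tf.zeros([1, 1], tf.float32)}
labels = tf.zeros([1, 1], tf.float32)
input_receiver_fn_map = {
    tf.estimator.ModeKeys.PREDICT:
        tf.estimator.export.build_raw_serving_input_receiver_fn(features),
    tf.estimator.ModeKeys.EVAL:
        tf.estimator.export.build_raw_supervised_input_receiver_fn(
            features, labels),
}
export_dir = regressor.experimental_export_all_saved_models(
    export_dir_base='/tmp/export',
    input_receiver_fn_map=input_receiver_fn_map)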
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source
export_saved_model(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, experimental_mode=ModeKeys.PREDICT
)
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
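A minimal sketch of a serving export, again reusing the regressor from the example above; the feature spec is illustrative, and build_parsing_serving_input_receiver_fn produces a receiver that parses serialized tf.Example protos:
import tensorflow as tf

feature_spec = {'x': tf.io.FixedLenFeature([1], tf.float32)}
serving_input_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
export_dir = regressor.export_saved_model(
    '/tmp/export', serving_input_receiver_fn)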
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source
get_variable_names()
Returns list of all variable names in this model.
Returns List of names.
Raises
ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source
get_variable_value(
name
)
Returns value of the variable given by name.
Args
name string or a list of strings, name of the tensor.
Returns Numpy array - value of the tensor.
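A small sketch combining get_variable_names and get_variable_value to inspect a trained model's checkpointed variables (assumes a checkpoint exists):
for name in regressor.get_variable_names():
  print(name, regressor.get_variable_value(name))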
Raises
ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns The full path to the latest checkpoint or None if no checkpoint was found.
predict View source
predict(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None,
yield_single_examples=True
)
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
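A minimal sketch, reusing the regressor from the example above: with the default yield_single_examples=True, predict yields one dict per example, and the canned regressor exposes its output under the 'predictions' key:
import numpy as np
import tensorflow as tf

def input_fn_predict():
  x = np.random.random((4, 1)).astype(np.float32)
  return tf.data.Dataset.from_tensor_slices({'x': x}).batch(2)

for pred in regressor.predict(input_fn=input_fn_predict):
  print(pred['predictions'])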
Args
input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following:
tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields Evaluated values of predictions tensors.
Raises
ValueError If the batch lengths of the prediction tensors differ and yield_single_examples is True.
ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
Trains a model given training data input_fn.
Args
input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps Number of steps for which to train the model. If None, train forever or until input_fn raises tf.errors.OutOfRangeError or StopIteration. steps works incrementally: if you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRangeError or StopIteration occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set max_steps instead. If set, max_steps must be None.
max_steps Number of total steps for which to train the model. If None, train forever or until input_fn raises tf.errors.OutOfRangeError or StopIteration. If set, steps must be None. If OutOfRangeError or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iteration since the first call did all 100 steps.
saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint saving.
Returns self, for chaining.
Raises
ValueError If both steps and max_steps are not None.
ValueError If either steps or max_steps <= 0.
tf.estimator.BestExporter View source on GitHub This class exports the serving graph and checkpoints of the best models. Inherits From: Exporter View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.BestExporter
tf.estimator.BestExporter(
name='best_exporter', serving_input_receiver_fn=None,
event_file_pattern='eval/*.tfevents.*', compare_fn=_loss_smaller,
assets_extra=None, as_text=False, exports_to_keep=5
)
This class performs a model export every time a new model is better than any existing model.
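A hedged sketch of the typical wiring through tf.estimator.train_and_evaluate; the estimator, input functions, and serving_input_receiver_fn are assumed to be defined elsewhere:
exporter = tf.estimator.BestExporter(
    serving_input_receiver_fn=serving_input_receiver_fn,
    exports_to_keep=3)
train_spec = tf.estimator.TrainSpec(input_fn=input_fn_train, max_steps=1000)
eval_spec = tf.estimator.EvalSpec(
    input_fn=input_fn_eval, steps=100, exporters=exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)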
Args
name unique name of this Exporter that is going to be used in the export path.
serving_input_receiver_fn a function that takes no arguments and returns a ServingInputReceiver.
event_file_pattern event file name pattern relative to model_dir. If None, the exporter will not be preemption-safe; to be preemption-safe, event_file_pattern must be specified.
compare_fn a function that compares two evaluation results and returns true if the current evaluation result is better (see also the sketch after this argument list). Follows the signature: Args:
best_eval_result: This is the evaluation result of the best model.
current_eval_result: This is the evaluation result of current candidate model. Returns: True if current evaluation result is better; otherwise, False.
assets_extra An optional dict specifying how to populate the assets.extra directory within the exported SavedModel. Each key should give the destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
as_text whether to write the SavedModel proto in text format. Defaults to False.
exports_to_keep Number of exports to keep. Older exports will be garbage-collected. Defaults to 5. Set to None to disable garbage collection.
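A sketch of a custom compare_fn that keeps the model with the higher accuracy (the 'accuracy' key is assumed to be present in the eval results):
def compare_accuracy(best_eval_result, current_eval_result):
  # Keep the candidate only if its accuracy beats the best so far.
  return current_eval_result['accuracy'] > best_eval_result['accuracy']

exporter = tf.estimator.BestExporter(
    serving_input_receiver_fn=serving_input_receiver_fn,
    compare_fn=compare_accuracy)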
Raises
ValueError if any argument is invalid.
Attributes
name Directory name. A directory name under the export base directory where exports of this type are written. Should not be None or empty.
Methods export View source
export(
estimator, export_path, checkpoint_path, eval_result, is_the_final_export
)
Exports the given Estimator to a specific format.
Args
estimator the Estimator to export.
export_path A string containing a directory where to write the export.
checkpoint_path The checkpoint path to export.
eval_result The output of Estimator.evaluate on this checkpoint.
is_the_final_export This boolean is True when this is an export at the end of training. It is False for intermediate exports during training. When passing Exporter to tf.estimator.train_and_evaluate, is_the_final_export is always False if TrainSpec.max_steps is None.
Returns The string path to the exported directory or None if export is skipped.
tf.estimator.BinaryClassHead View source on GitHub Creates a Head for single label binary classification. Inherits From: Head View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.BinaryClassHead
tf.estimator.BinaryClassHead(
weight_column=None, thresholds=None, label_vocabulary=None,
loss_reduction=losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE, loss_fn=None,
name=None
)
Uses sigmoid_cross_entropy_with_logits loss. The head expects logits with shape [D0, D1, ... DN, 1]. In many applications, the shape is [batch_size, 1]. labels must be a dense Tensor with shape matching logits, namely [D0, D1, ... DN, 1]. If label_vocabulary is given, labels must be a string Tensor with values from the vocabulary. If label_vocabulary is not given, labels must be a float Tensor with values in the interval [0, 1]. If weight_column is specified, weights must be of shape [D0, D1, ... DN], or [D0, D1, ... DN, 1]. The loss is the weighted sum over the input dimensions. Namely, if the input labels have shape [batch_size, 1], the loss is the weighted sum over batch_size. Also supports custom loss_fn. loss_fn takes (labels, logits) or (labels, logits, features, loss_reduction) as arguments and returns loss with shape [D0, D1, ... DN, 1]. loss_fn must support float labels with shape [D0, D1, ... DN, 1]. Namely, the head applies label_vocabulary to the input labels before passing them to loss_fn. Usage:
head = tf.estimator.BinaryClassHead()
logits = np.array(((45,), (-41,),), dtype=np.float32)
labels = np.array(((1,), (1,),), dtype=np.int32)
features = {'x': np.array(((42,),), dtype=np.float32)}
# expected_loss = sum(cross_entropy(labels, logits)) / batch_size
# = sum(0, 41) / 2 = 41 / 2 = 20.50
loss = head.loss(labels, logits, features=features)
print('{:.2f}'.format(loss.numpy()))
20.50
eval_metrics = head.metrics()
updated_metrics = head.update_metrics(
eval_metrics, features, logits, labels)
for k in sorted(updated_metrics):
print('{} : {:.2f}'.format(k, updated_metrics[k].result().numpy()))
accuracy : 0.50
accuracy_baseline : 1.00
auc : 0.00
auc_precision_recall : 1.00
average_loss : 20.50
label/mean : 1.00
precision : 1.00
prediction/mean : 0.50
recall : 0.50
preds = head.predictions(logits)
print(preds['logits'])
tf.Tensor(
[[ 45.]
[-41.]], shape=(2, 1), dtype=float32)
Usage with a canned estimator: my_head = tf.estimator.BinaryClassHead()
my_estimator = tf.estimator.DNNEstimator(
head=my_head,
hidden_units=...,
feature_columns=...)
It can also be used with a custom model_fn. Example: def _my_model_fn(features, labels, mode):
my_head = tf.estimator.BinaryClassHead()
logits = tf.keras.Model(...)(features)
return my_head.create_estimator_spec(
features=features,
mode=mode,
labels=labels,
    optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1),
logits=logits)
my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn)
Args
weight_column A string or a NumericColumn created by tf.feature_column.numeric_column defining feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example.
thresholds Iterable of floats in the range (0, 1). For binary classification metrics such as precision and recall, an eval metric is generated for each threshold value. This threshold is applied to the logistic values to determine the binary classification (i.e., above the threshold is true, below is false).
label_vocabulary A list or tuple of strings representing possible label values. If it is not given, that means labels are already encoded within [0, 1]. If given, labels must be string type and have any value in label_vocabulary. Note that errors will be raised if label_vocabulary is not provided but labels are strings.
loss_reduction One of tf.losses.Reduction except NONE. Decides how to reduce training loss over batch. Defaults to SUM_OVER_BATCH_SIZE, namely weighted sum of losses divided by batch size * label_dimension.
loss_fn Optional loss function.
name Name of the head. If provided, summary and metrics keys will be suffixed by "/" + name. Also used as name_scope when creating ops.
Attributes
logits_dimension See base_head.Head for details.
loss_reduction See base_head.Head for details.
name See base_head.Head for details. Methods create_estimator_spec View source
create_estimator_spec(
features, mode, logits, labels=None, optimizer=None, trainable_variables=None,
train_op_fn=None, update_ops=None, regularization_losses=None
)
Returns EstimatorSpec that a model_fn can return. It is recommended to pass all args by name.
Args
features Input dict mapping string feature names to Tensor or SparseTensor objects containing the values for that feature in a minibatch. Often used to fetch the example-weight tensor.
mode Estimator's ModeKeys.
logits Logits Tensor to be used by the head.
labels Labels Tensor, or dict mapping string label names to Tensor objects of the label values.
optimizer A tf.keras.optimizers.Optimizer instance to optimize the loss in TRAIN mode. Namely, sets train_op = optimizer.get_updates(loss, trainable_variables), which updates variables to minimize loss.
trainable_variables A list or tuple of Variable objects to update to minimize loss. In Tensorflow 1.x, by default these are the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. As Tensorflow 2.x doesn't have collections and GraphKeys, trainable_variables need to be passed explicitly here.
train_op_fn Function that takes a scalar loss Tensor and returns an op to optimize the model with the loss in TRAIN mode. Used if optimizer is None. Exactly one of train_op_fn and optimizer must be set in TRAIN mode. By default, it is None in other modes. If you want to optimize loss yourself, you can pass lambda _: tf.no_op() and then use EstimatorSpec.loss to compute and apply gradients.
update_ops A list or tuple of update ops to be run at training time. For example, layers such as BatchNormalization create mean and variance update ops that need to be run at training time. In Tensorflow 1.x, these are thrown into an UPDATE_OPS collection. As Tensorflow 2.x doesn't have collections, update_ops need to be passed explicitly here.
regularization_losses A list of additional scalar losses to be added to the training loss, such as regularization losses.
Returns EstimatorSpec.
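A hedged sketch of the TF2-style usage described above, where trainable_variables must be passed explicitly because there are no collections (the feature key 'x' is illustrative):
import tensorflow as tf

def model_fn(features, labels, mode):
  head = tf.estimator.BinaryClassHead()
  dense = tf.keras.layers.Dense(head.logits_dimension)
  logits = dense(features['x'])
  return head.create_estimator_spec(
      features=features,
      mode=mode,
      labels=labels,
      logits=logits,
      optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1),
      # No GraphKeys.TRAINABLE_VARIABLES in TF2: pass the layer's variables.
      trainable_variables=dense.trainable_variables)

estimator = tf.estimator.Estimator(model_fn=model_fn)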
loss View source
loss(
labels, logits, features=None, mode=None, regularization_losses=None
)
Returns regularized training loss. See base_head.Head for details. metrics View source
metrics(
regularization_losses=None
)
Creates metrics. See base_head.Head for details. predictions View source
predictions(
logits, keys=None
)
Return predictions based on keys. See base_head.Head for details.
Args
logits logits Tensor with shape [D0, D1, ... DN, logits_dimension]. For many applications, the shape is [batch_size, logits_dimension].
keys a list or tuple of prediction keys. Each key can be either the class variable of prediction_keys.PredictionKeys or its string value, such as: prediction_keys.PredictionKeys.CLASSES or 'classes'. If not specified, it will return the predictions for all valid keys.
Returns A dict of predictions.
update_metrics View source
update_metrics(
eval_metrics, features, logits, labels, regularization_losses=None
)
Updates eval metrics. See base_head.Head for details.
tf.estimator.BoostedTreesClassifier View source on GitHub A Classifier for Tensorflow Boosted Trees models. Inherits From: Estimator View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.BoostedTreesClassifier
tf.estimator.BoostedTreesClassifier(
feature_columns, n_batches_per_layer, model_dir=None, n_classes=2,
weight_column=None, label_vocabulary=None, n_trees=100, max_depth=6,
learning_rate=0.1, l1_regularization=0.0, l2_regularization=0.0,
tree_complexity=0.0, min_node_weight=0.0, config=None, center_bias=False,
pruning_mode='none', quantile_sketch_epsilon=0.01,
train_in_memory=False
)
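A hedged usage sketch (data, boundaries, and feature names are illustrative); the numeric feature is bucketized first, a form the canned boosted-trees estimators accept:
import tensorflow as tf

fc = tf.feature_column
bucketized_x = fc.bucketized_column(
    fc.numeric_column('x'), boundaries=[0.0, 0.5])

def input_fn():
  # One batch of four examples, repeated so training can run many steps.
  features = {'x': [[-0.3], [0.2], [0.8], [0.9]]}
  labels = [[0], [0], [1], [1]]
  return tf.data.Dataset.from_tensors((features, labels)).repeat()

classifier = tf.estimator.BoostedTreesClassifier(
    feature_columns=[bucketized_x], n_batches_per_layer=1, n_trees=10)
classifier.train(input_fn=input_fn, steps=50)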
Args
feature_columns An iterable containing all the feature columns used by the model. All items in the set should be instances of classes derived from FeatureColumn.
n_batches_per_layer the number of batches to collect statistics per layer. The total number of batches is the total number of data points divided by batch size.
model_dir Directory in which to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.
n_classes number of label classes. Default is binary classification.
weight_column A string or a NumericColumn created by tf.feature_column.numeric_column defining feature column representing weights. It is used to down-weight or boost examples during training. It will be multiplied by the loss of the example. If it is a string, it is used as a key to fetch the weight tensor from the features. If it is a NumericColumn, the raw tensor is fetched by key weight_column.key, then weight_column.normalizer_fn is applied on it to get the weight tensor.
label_vocabulary A list of strings representing possible label values. If given, labels must be string type and have any value in label_vocabulary. If it is not given, that means labels are already encoded as integer or float within [0, 1] for n_classes=2 and encoded as integer values in {0, 1,..., n_classes-1} for n_classes>2. Also, an error will be raised if the vocabulary is not provided and labels are strings.
n_trees number of trees to be created.
max_depth maximum depth of the tree to grow.
learning_rate shrinkage parameter to be used when a tree is added to the model.
l1_regularization regularization multiplier applied to the absolute weights of the tree leaves. This is a per instance value. A good default is 1./(n_batches_per_layer * batch_size).
l2_regularization regularization multiplier applied to the square weights of the tree leaves. This is a per instance value. A good default is 1./(n_batches_per_layer * batch_size).
tree_complexity regularization factor to penalize trees with more leaves. This is a per instance value. A good default is 1./(n_batches_per_layer * batch_size).
min_node_weight minimum hessian a node must have for a split to be considered. This is a per instance value. The value will be compared with sum(leaf_hessian)/(batch_size * n_batches_per_layer).
config RunConfig object to configure the runtime settings.
center_bias Whether bias centering needs to occur. Bias centering refers to the first node in the very first tree returning the prediction that is aligned with the original labels distribution. For example, for regression problems, the first node will return the mean of the labels. For binary classification problems, it will return a logit for a prior probability of label 1.
pruning_mode one of none, pre, post to indicate no pruning, pre-pruning (do not split a node if not enough gain is observed) and post-pruning (build the tree up to a max depth and then prune branches with negative gain). For pre- and post-pruning, you MUST provide tree_complexity > 0.
quantile_sketch_epsilon float between 0 and 1. Error bound for quantile computation. This is only used for float feature columns, and the number of buckets generated per float feature is 1/quantile_sketch_epsilon.
train_in_memory bool, when true, it assumes the dataset is in memory, i.e., input_fn should return the entire dataset as a single batch, n_batches_per_layer should be set as 1, num_worker_replicas should be 1, and num_ps_replicas should be 0 in tf.Estimator.RunConfig.
Raises
ValueError when wrong arguments are given or unsupported functionalities are requested. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes.
Attributes
config
model_dir
model_fn Returns the model_fn which is bound to self.params.
params
Methods eval_dir View source
eval_dir(
name=None
)
Shows the directory name where evaluation metrics are dumped.
Args
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A string which is the path of the directory containing evaluation metrics.
evaluate View source
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until:
steps batches are processed, or
input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError If steps <= 0. experimental_export_all_saved_models View source
experimental_export_all_saved_models(
export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False,
checkpoint_path=None
)
Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. experimental_feature_importances View source
experimental_feature_importances(
normalize=False
)
Computes gain-based feature importances. The higher the value, the more important the corresponding feature.
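A short sketch, assuming a trained classifier like the one in the example above:
importances = classifier.experimental_feature_importances(normalize=True)
for name, importance in importances.items():
  print(name, importance)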
Args
normalize If True, normalize the feature importances.
Returns
feature_importances an OrderedDict, where the keys are the feature column names and the values are importances. It is sorted by importance.
Raises
ValueError When attempting to normalize on an empty ensemble or an ensemble of trees which have no splits, or when attempting to normalize and feature importances have negative values. experimental_predict_with_explanations View source
experimental_predict_with_explanations(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None
)
Computes model explainability outputs per example along with predictions. Currently supports directional feature contributions (DFCs). For each instance, DFCs indicate the aggregate contribution of each feature. See https://arxiv.org/abs/1312.1121 and http://blog.datadive.net/interpreting-random-forests/ for more details.
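A short sketch, again assuming the trained classifier from above; each yielded dict carries the usual prediction keys plus 'bias' and 'dfc':
def input_fn_explain():
  # A single finite batch of features (values are illustrative).
  return tf.data.Dataset.from_tensors({'x': [[-0.3], [0.8]]})

for pred in classifier.experimental_predict_with_explanations(
    input_fn=input_fn_explain):
  print(pred['bias'], pred['dfc'])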
Args
input_fn A function that provides input data for predicting as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary, with the exception of 'bias' and 'dfc', which will always be in the dictionary. If None, returns all keys in prediction dict, as well as two new keys 'dfc' and 'bias'.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
Yields Evaluated values of predictions tensors. The predictions tensors will contain at least two keys 'dfc' and 'bias' for model explanations. The dfc value corresponds to the contribution of each feature to the overall prediction for this instance (positive indicating that the feature makes it more likely to select class 1 and negative less likely). The dfc is an OrderedDict, where the keys are the feature column names and the values are the contributions. It is sorted by the absolute value of the contribution (e.g. OrderedDict([('age', -0.54), ('gender', 0.4), ('fare', 0.21)])). The 'bias' value will be the same across all the instances, corresponding to the probability (classification) or prediction (regression) of the training data distribution.
Raises
ValueError when wrong arguments are given or unsupported functionalities are requested. export_saved_model View source
export_saved_model(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, experimental_mode=ModeKeys.PREDICT
)
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. export_savedmodel View source
export_savedmodel(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, strip_default_attrs=False
)
Exports inference graph as a SavedModel into the given dir. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
strip_default_attrs Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source
get_variable_names()
Returns list of all variable names in this model.
Returns List of names.
Raises
ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source
get_variable_value(
name
)
Returns value of the variable given by name.
Args
name string or a list of strings, name of the tensor.
Returns Numpy array - value of the tensor.
Raises
ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns The full path to the latest checkpoint or None if no checkpoint was found.
predict View source
predict(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None,
yield_single_examples=True
)
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
Args
input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following:
tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields Evaluated values of predictions tensors.
Raises
ValueError If the batch lengths of the prediction tensors differ and yield_single_examples is True.
ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
Trains a model given training data input_fn.
Args
input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps Number of steps for which to train the model. If None, train forever or until input_fn raises tf.errors.OutOfRangeError or StopIteration. steps works incrementally: if you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRangeError or StopIteration occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set max_steps instead. If set, max_steps must be None.
max_steps Number of total steps for which to train the model. If None, train forever or until input_fn raises tf.errors.OutOfRangeError or StopIteration. If set, steps must be None. If OutOfRangeError or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iteration since the first call did all 100 steps.
saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint saving.
Returns self, for chaining.
Raises
ValueError If both steps and max_steps are not None.
ValueError If either steps or max_steps <= 0.
tf.estimator.BoostedTreesEstimator View source on GitHub An Estimator for Tensorflow Boosted Trees models. Inherits From: Estimator View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.BoostedTreesEstimator
tf.estimator.BoostedTreesEstimator(
feature_columns, n_batches_per_layer, head, model_dir=None, weight_column=None,
n_trees=100, max_depth=6, learning_rate=0.1, l1_regularization=0.0,
l2_regularization=0.0, tree_complexity=0.0, min_node_weight=0.0, config=None,
center_bias=False, pruning_mode='none', quantile_sketch_epsilon=0.01
)
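A hedged sketch of pairing the estimator with an explicit head; it reuses the bucketized_x column from the classifier example above and assumes the canned tf.estimator.RegressionHead is an acceptable head here:
head = tf.estimator.RegressionHead(label_dimension=1)
estimator = tf.estimator.BoostedTreesEstimator(
    feature_columns=[bucketized_x], n_batches_per_layer=1,
    head=head, n_trees=10, max_depth=4)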
Args
feature_columns An iterable containing all the feature columns used by the model. All items in the set should be instances of classes derived from FeatureColumn.
n_batches_per_layer the number of batches to collect statistics per layer.
head the Head instance defined for Estimator.
model_dir Directory in which to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.
weight_column A string or a _NumericColumn created by tf.feature_column.numeric_column defining feature column representing weights. It is used to downweight or boost examples during training. It will be multiplied by the loss of the example. If it is a string, it is used as a key to fetch weight tensor from the features. If it is a _NumericColumn, raw tensor is fetched by key weight_column.key, then weight_column.normalizer_fn is applied on it to get weight tensor.
n_trees number of trees to be created.
max_depth maximum depth of the tree to grow.
learning_rate shrinkage parameter to be used when a tree is added to the model.
l1_regularization regularization multiplier applied to the absolute weights of the tree leaves.
l2_regularization regularization multiplier applied to the square weights of the tree leaves.
tree_complexity regularization factor to penalize trees with more leaves.
min_node_weight minimum hessian a node must have for a split to be considered. The value will be compared with sum(leaf_hessian)/(batch_size * n_batches_per_layer).
config RunConfig object to configure the runtime settings.
center_bias Whether bias centering needs to occur. Bias centering refers to the first node in the very first tree returning the prediction that is aligned with the original labels distribution. For example, for regression problems, the first node will return the mean of the labels. For binary classification problems, it will return a logit for a prior probability of label 1.
pruning_mode one of none, pre, post to indicate no pruning, pre-pruning (do not split a node if not enough gain is observed) and post-pruning (build the tree up to a max depth and then prune branches with negative gain). For pre- and post-pruning, you MUST provide tree_complexity > 0.
quantile_sketch_epsilon float between 0 and 1. Error bound for quantile computation. This is only used for float feature columns, and the number of buckets generated per float feature is 1/quantile_sketch_epsilon.
Raises
ValueError when wrong arguments are given or unsupported functionalities are requested.
Attributes
config
model_dir
model_fn Returns the model_fn which is bound to self.params.
params
Methods eval_dir View source
eval_dir(
name=None
)
Shows the directory name where evaluation metrics are dumped.
Args
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A string which is the path of the directory containing evaluation metrics.
evaluate View source
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until:
steps batches are processed, or
input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError If steps <= 0. experimental_export_all_saved_models View source
experimental_export_all_saved_models(
export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False,
checkpoint_path=None
)
Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. experimental_feature_importances View source
experimental_feature_importances(
normalize=False
)
Computes gain-based feature importances. The higher the value, the more important the corresponding feature.
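A minimal usage sketch, assuming est is a boosted-trees estimator that has been trained long enough for the ensemble to contain splits:
importances = est.experimental_feature_importances(normalize=True)
for name, value in importances.items():
  print(name, value)  # iterates from most to least important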
Args
normalize If True, normalize the feature importances.
Returns
feature_importances an OrderedDict, where the keys are the feature column names and the values are importances. It is sorted by importance.
Raises
ValueError When attempting to normalize on an empty ensemble or an ensemble of trees which have no splits. Or when attempting to normalize and feature importances have negative values. experimental_predict_with_explanations View source
experimental_predict_with_explanations(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None
)
Computes model explainability outputs per example along with predictions. Currently supports directional feature contributions (DFCs). For each instance, DFCs indicate the aggregate contribution of each feature. See https://arxiv.org/abs/1312.1121 and http://blog.datadive.net/interpreting-random-forests/ for more details.
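A minimal sketch of consuming the explanations, assuming est is a trained boosted-trees estimator and predict_input_fn returns a dataset of feature dicts:
for pred in est.experimental_predict_with_explanations(predict_input_fn):
  print(pred["bias"])  # baseline from the training data distribution
  for feature, contribution in pred["dfc"].items():
    print(feature, contribution)  # sorted by absolute contribution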
Args
input_fn A function that provides input data for predicting as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary, with the exception of 'bias' and 'dfc', which will always be in the dictionary. If None, returns all keys in prediction dict, as well as two new keys 'dfc' and 'bias'.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
Yields Evaluated values of predictions tensors. The predictions tensors will contain at least two keys 'dfc' and 'bias' for model explanations. The dfc value corresponds to the contribution of each feature to the overall prediction for this instance (positive indicating that the feature makes it more likely to select class 1 and negative less likely). The dfc is an OrderedDict, where the keys are the feature column names and the values are the contributions. It is sorted by the absolute value of the contribution (e.g. OrderedDict([('age', -0.54), ('gender', 0.4), ('fare', 0.21)])). The 'bias' value will be the same across all the instances, corresponding to the probability (classification) or prediction (regression) of the training data distribution.
Raises
ValueError when wrong arguments are given or unsupported functionalities are requested. export_saved_model View source
export_saved_model(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, experimental_mode=ModeKeys.PREDICT
)
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
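A minimal export sketch, assuming est is a trained estimator fed serialized tf.Examples with one float feature "x" (the export path is hypothetical):
import tensorflow as tf

feature_spec = {"x": tf.io.FixedLenFeature([1], tf.float32)}
serving_input_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
export_path = est.export_saved_model("/tmp/saved_model_export",
                                     serving_input_fn)
print(export_path)  # a bytes path such as b'/tmp/saved_model_export/1600000000'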
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. export_savedmodel View source
export_savedmodel(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, strip_default_attrs=False
)
Exports inference graph as a SavedModel into the given dir. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
strip_default_attrs Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source
get_variable_names()
Returns list of all variable names in this model.
Returns List of names.
Raises
ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source
get_variable_value(
name
)
Returns value of the variable given by name.
Args
name string or a list of string, name of the tensor.
Returns Numpy array - value of the tensor.
Raises
ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns The full path to the latest checkpoint or None if no checkpoint was found.
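Together with get_variable_names and get_variable_value, this allows quick checkpoint inspection; a sketch, assuming est has produced at least one checkpoint:
print(est.latest_checkpoint())  # full path to the newest checkpoint file
for name in est.get_variable_names():
  value = est.get_variable_value(name)  # a numpy array
  print(name, value.shape)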
predict View source
predict(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None,
yield_single_examples=True
)
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
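A minimal prediction sketch, assuming est is a trained estimator; the feature name and the prediction key below are illustrative and depend on the model's head:
import numpy as np
import tensorflow as tf

def predict_input_fn():
  # Features only; no labels are needed for prediction.
  x = {"x": np.random.random((8, 1)).astype(np.float32)}
  return tf.data.Dataset.from_tensor_slices(x).batch(4)

for pred in est.predict(input_fn=predict_input_fn):
  print(pred["predictions"])  # key depends on the canned estimator's head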
Args
input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following:
tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields Evaluated values of predictions tensors.
Raises
ValueError If batch length of predictions is not the same and yield_single_examples is True.
ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
Trains a model given training data input_fn.
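A sketch contrasting the incremental steps argument with the absolute max_steps, assuming est is freshly constructed and input_fn returns a repeating dataset:
est.train(input_fn, steps=10)      # global step reaches 10
est.train(input_fn, steps=10)      # incremental: global step reaches 20
est.train(input_fn, max_steps=20)  # no-op: global step is already at 20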
Args
input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally. If you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set max_steps instead. If set, max_steps must be None.
max_steps Number of total steps for which to train model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iteration since first call did all 100 steps.
saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns self, for chaining.
Raises
ValueError If both steps and max_steps are not None.
ValueError If either steps or max_steps <= 0.
tf.estimator.BoostedTreesRegressor View source on GitHub A Regressor for Tensorflow Boosted Trees models. Inherits From: Estimator View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.BoostedTreesRegressor
tf.estimator.BoostedTreesRegressor(
feature_columns, n_batches_per_layer, model_dir=None, label_dimension=1,
weight_column=None, n_trees=100, max_depth=6, learning_rate=0.1,
l1_regularization=0.0, l2_regularization=0.0, tree_complexity=0.0,
min_node_weight=0.0, config=None, center_bias=False,
pruning_mode='none', quantile_sketch_epsilon=0.01,
train_in_memory=False
)
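For example (the feature name, bucket boundaries, and synthetic data below are illustrative, not prescribed by this API):
import numpy as np
import tensorflow as tf

bucketized_age = tf.feature_column.bucketized_column(
    tf.feature_column.numeric_column('age'), boundaries=[20, 30, 40, 50, 60])
estimator = tf.estimator.BoostedTreesRegressor(
    feature_columns=[bucketized_age],
    n_batches_per_layer=1,  # dataset size / batch size
    n_trees=50, max_depth=4, learning_rate=0.05)

def input_fn_train():
  features = {'age': np.random.randint(18, 70, size=(256,)).astype(np.float32)}
  labels = np.random.random((256,)).astype(np.float32)
  return tf.data.Dataset.from_tensor_slices(
      (features, labels)).repeat().batch(256)

estimator.train(input_fn=input_fn_train, max_steps=20)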
Args
feature_columns An iterable containing all the feature columns used by the model. All items in the set should be instances of classes derived from FeatureColumn.
n_batches_per_layer the number of batches to collect statistics per layer. The total number of batches is the total number of data points divided by the batch size.
model_dir Directory to save model parameters, graph and etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.
label_dimension Number of regression targets per example.
weight_column A string or a NumericColumn created by tf.fc_old.numeric_column defining feature column representing weights. It is used to downweight or boost examples during training. It will be multiplied by the loss of the example. If it is a string, it is used as a key to fetch weight tensor from the features. If it is a NumericColumn, raw tensor is fetched by key weight_column.key, then weight_column.normalizer_fn is applied on it to get weight tensor.
n_trees number of trees to be created.
max_depth maximum depth of the tree to grow.
learning_rate shrinkage parameter to be used when a tree is added to the model.
l1_regularization regularization multiplier applied to the absolute weights of the tree leaves.
l2_regularization regularization multiplier applied to the squared weights of the tree leaves.
tree_complexity regularization factor to penalize trees with more leaves.
min_node_weight minimum hessian a node must have for a split to be considered. The value will be compared with sum(leaf_hessian)/(batch_size * n_batches_per_layer).
config RunConfig object to configure the runtime settings.
center_bias Whether bias centering needs to occur. Bias centering refers to the first node in the very first tree returning the prediction that is aligned with the original labels distribution. For example, for regression problems, the first node will return the mean of the labels. For binary classification problems, it will return a logit for a prior probability of label 1.
pruning_mode one of none, pre, post to indicate no pruning, pre-pruning (do not split a node if not enough gain is observed) and post-pruning (build the tree up to a max depth and then prune branches with negative gain). For pre and post pruning, you MUST provide tree_complexity > 0.
quantile_sketch_epsilon float between 0 and 1. Error bound for quantile computation. This is only used for float feature columns, and the number of buckets generated per float feature is 1/quantile_sketch_epsilon.
train_in_memory bool, when true, it assumes the dataset is in memory, i.e., input_fn should return the entire dataset as a single batch, n_batches_per_layer should be set as 1, num_worker_replicas should be 1, and num_ps_replicas should be 0 in tf.estimator.RunConfig.
Raises
ValueError when wrong arguments are given or unsupported functionalities are requested. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes.
Attributes
config
model_dir
model_fn Returns the model_fn which is bound to self.params.
params
Methods eval_dir View source
eval_dir(
name=None
)
Shows the directory name where evaluation metrics are dumped.
Args
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A string which is the path of the directory containing the evaluation metrics.
evaluate View source
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until:
steps batches are processed, or
input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError If steps <= 0. experimental_export_all_saved_models View source
experimental_export_all_saved_models(
export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False,
checkpoint_path=None
)
Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. experimental_feature_importances View source
experimental_feature_importances(
normalize=False
)
Computes gain-based feature importances. The higher the value, the more important the corresponding feature.
Args
normalize If True, normalize the feature importances.
Returns
feature_importances an OrderedDict, where the keys are the feature column names and the values are importances. It is sorted by importance.
Raises
ValueError When attempting to normalize on an empty ensemble or an ensemble of trees which have no splits. Or when attempting to normalize and feature importances have negative values. experimental_predict_with_explanations View source
experimental_predict_with_explanations(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None
)
Computes model explainability outputs per example along with predictions. Currently supports directional feature contributions (DFCs). For each instance, DFCs indicate the aggregate contribution of each feature. See https://arxiv.org/abs/1312.1121 and http://blog.datadive.net/interpreting-random-forests/ for more details.
Args
input_fn A function that provides input data for predicting as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary, with the exception of 'bias' and 'dfc', which will always be in the dictionary. If None, returns all keys in prediction dict, as well as two new keys 'dfc' and 'bias'.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
Yields Evaluated values of predictions tensors. The predictions tensors will contain at least two keys 'dfc' and 'bias' for model explanations. The dfc value corresponds to the contribution of each feature to the overall prediction for this instance (positive indicating that the feature makes it more likely to select class 1 and negative less likely). The dfc is an OrderedDict, where the keys are the feature column names and the values are the contributions. It is sorted by the absolute value of the contribution (e.g. OrderedDict([('age', -0.54), ('gender', 0.4), ('fare', 0.21)])). The 'bias' value will be the same across all the instances, corresponding to the probability (classification) or prediction (regression) of the training data distribution.
Raises
ValueError when wrong arguments are given or unsupported functionalities are requested. export_saved_model View source
export_saved_model(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, experimental_mode=ModeKeys.PREDICT
)
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. export_savedmodel View source
export_savedmodel(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, strip_default_attrs=False
)
Exports inference graph as a SavedModel into the given dir. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
strip_default_attrs Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source
get_variable_names()
Returns list of all variable names in this model.
Returns List of names.
Raises
ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source
get_variable_value(
name
)
Returns value of the variable given by name.
Args
name string or a list of string, name of the tensor.
Returns Numpy array - value of the tensor.
Raises
ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns The full path to the latest checkpoint or None if no checkpoint was found.
predict View source
predict(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None,
yield_single_examples=True
)
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
Args
input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following:
tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields Evaluated values of predictions tensors.
Raises
ValueError If batch length of predictions is not the same and yield_single_examples is True.
ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
Trains a model given training data input_fn.
Args
input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally. If you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set max_steps instead. If set, max_steps must be None.
max_steps Number of total steps for which to train model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iteration since first call did all 100 steps.
saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns self, for chaining.
Raises
ValueError If both steps and max_steps are not None.
ValueError If either steps or max_steps <= 0.
tf.estimator.CheckpointSaverHook Saves checkpoints every N steps or seconds. Inherits From: SessionRunHook View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.CheckpointSaverHook, tf.compat.v1.train.CheckpointSaverHook
tf.estimator.CheckpointSaverHook(
checkpoint_dir, save_secs=None, save_steps=None, saver=None,
checkpoint_basename='model.ckpt', scaffold=None, listeners=None,
save_graph_def=True
)
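A minimal sketch of driving the hook from a monitored session (the checkpoint directory and step counts are hypothetical; the global step is needed because the hook checkpoints against it):
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# A trivial graph: an op that increments the global step each run.
global_step = tf.compat.v1.train.get_or_create_global_step()
train_op = tf.compat.v1.assign_add(global_step, 1)

saver_hook = tf.estimator.CheckpointSaverHook(
    checkpoint_dir='/tmp/train_ckpts', save_steps=50)

with tf.compat.v1.train.MonitoredTrainingSession(
    chief_only_hooks=[saver_hook]) as sess:
  for _ in range(120):
    sess.run(train_op)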
Args
checkpoint_dir str, base directory for the checkpoint files.
save_secs int, save every N secs.
save_steps int, save every N steps.
saver Saver object, used for saving.
checkpoint_basename str, base name for the checkpoint files.
scaffold Scaffold, use to get saver object.
listeners List of CheckpointSaverListener subclass instances. Used for callbacks that run immediately before or after this hook saves the checkpoint.
save_graph_def Whether to save the GraphDef and MetaGraphDef to checkpoint_dir. The GraphDef is saved after the session is created as graph.pbtxt. MetaGraphDefs are saved out for every checkpoint as model.ckpt-*.meta.
Raises
ValueError One of save_steps or save_secs should be set.
ValueError At most one of saver or scaffold should be set. Methods after_create_session View source
after_create_session(
session, coord
)
Called when a new TensorFlow session is created. This is called to signal the hooks that a new session has been created. This has two essential differences from the situation in which begin is called: When this is called, the graph is finalized and ops can no longer be added to the graph. This method will also be called as a result of recovering a wrapped session, not only at the beginning of the overall session.
Args
session A TensorFlow Session that has been created.
coord A Coordinator object which keeps track of all threads. after_run View source
after_run(
run_context, run_values
)
Called after each call to run(). The run_values argument contains the results of the ops/tensors requested by before_run(). The run_context argument is the same one sent to the before_run call. run_context.request_stop() can be called to stop the iteration. If session.run() raises any exceptions then after_run() is not called.
Args
run_context A SessionRunContext object.
run_values A SessionRunValues object. before_run View source
before_run(
run_context
)
Called before each call to run(). You can return from this call a SessionRunArgs object indicating ops or tensors to add to the upcoming run() call. These ops/tensors will be run together with the ops/tensors originally passed to the original run() call. The run args you return can also contain feeds to be added to the run() call. The run_context argument is a SessionRunContext that provides information about the upcoming run() call: the originally requested op/tensors and the TensorFlow Session. At this point the graph is finalized and you cannot add ops.
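For example, a custom hook might fetch an extra tensor on every step; a sketch (the tensor name is hypothetical and must exist in the graph being run):
import tensorflow as tf

class LogTensorHook(tf.estimator.SessionRunHook):

  def __init__(self, tensor_name='loss:0'):  # hypothetical tensor name
    self._tensor_name = tensor_name

  def before_run(self, run_context):
    tensor = run_context.session.graph.get_tensor_by_name(self._tensor_name)
    # Ask run() to evaluate this tensor alongside the original fetches.
    return tf.estimator.SessionRunArgs(fetches={'value': tensor})

  def after_run(self, run_context, run_values):
    # run_values.results holds the fetches requested in before_run().
    print('fetched value:', run_values.results['value'])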
Args
run_context A SessionRunContext object.
Returns None or a SessionRunArgs object.
begin View source
begin()
Called once before using the session. When called, the default graph is the one that will be launched in the session. The hook can modify the graph by adding new operations to it. After the begin() call the graph will be finalized and the other callbacks can not modify the graph anymore. Second call of begin() on the same graph, should not change the graph. end View source
end(
session
)
Called at the end of session. The session argument can be used in case the hook wants to run final ops, such as saving a last checkpoint. If session.run() raises exception other than OutOfRangeError or StopIteration then end() is not called. Note the difference between end() and after_run() behavior when session.run() raises OutOfRangeError or StopIteration. In that case end() is called but after_run() is not called.
Args
session A TensorFlow Session that will soon be closed.
tf.estimator.CheckpointSaverListener Interface for listeners that take action before or after checkpoint save. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.CheckpointSaverListener, tf.compat.v1.train.CheckpointSaverListener CheckpointSaverListener triggers only in steps when CheckpointSaverHook is triggered, and provides callbacks at the following points:
before using the session
before each call to Saver.save()
after each call to Saver.save()
at the end of session
To use a listener, implement a class and pass the listener to a CheckpointSaverHook, as in this example:
class ExampleCheckpointSaverListener(CheckpointSaverListener):
def begin(self):
# You can add ops to the graph here.
print('Starting the session.')
self.your_tensor = ...
def before_save(self, session, global_step_value):
print('About to write a checkpoint')
def after_save(self, session, global_step_value):
print('Done writing checkpoint.')
if decided_to_stop_training():
return True
def end(self, session, global_step_value):
print('Done with the session.')
...
listener = ExampleCheckpointSaverListener()
saver_hook = tf.estimator.CheckpointSaverHook(
checkpoint_dir, listeners=[listener])
with tf.compat.v1.train.MonitoredTrainingSession(
    chief_only_hooks=[saver_hook]):
  ...
A CheckpointSaverListener may simply take some action after every checkpoint save. It is also possible for the listener to use its own schedule to act less frequently, e.g. based on global_step_value. In this case, implementors should implement the end() method to handle actions related to the last checkpoint save. But the listener should not act twice if after_save() already handled this last checkpoint save. A CheckpointSaverListener can request training to be stopped, by returning True in after_save. Please note that, in replicated distributed training setting, only chief should use this behavior. Otherwise each worker will do their own evaluation, which may be wasteful of resources. Methods after_save View source
after_save(
session, global_step_value
)
before_save View source
before_save(
session, global_step_value
)
begin View source
begin()
end View source
end(
session, global_step_value
)
tf.estimator.classifier_parse_example_spec View source on GitHub Generates parsing spec for tf.parse_example to be used with classifiers.
tf.estimator.classifier_parse_example_spec(
feature_columns, label_key, label_dtype=tf.dtypes.int64, label_default=None,
weight_column=None
)
If users keep data in tf.Example format, they need to call tf.parse_example with a proper feature spec. There are two main things that this utility helps with: Users need to combine the parsing spec of features with labels and weights (if any) since they are all parsed from the same tf.Example instance. This utility combines these specs. It is difficult to map the label expected by a classifier such as DNNClassifier to the corresponding tf.parse_example spec. This utility encodes it by getting related information from users (key, dtype). Example output of parsing spec: # Define features and transformations
feature_b = tf.feature_column.numeric_column(...)
feature_c_bucketized = tf.feature_column.bucketized_column(
tf.feature_column.numeric_column("feature_c"), ...)
feature_a_x_feature_c = tf.feature_column.crossed_column(
columns=["feature_a", feature_c_bucketized], ...)
feature_columns = [feature_b, feature_c_bucketized, feature_a_x_feature_c]
parsing_spec = tf.estimator.classifier_parse_example_spec(
feature_columns, label_key='my-label', label_dtype=tf.string)
# For the above example, classifier_parse_example_spec would return the dict:
assert parsing_spec == {
"feature_a": parsing_ops.VarLenFeature(tf.string),
"feature_b": parsing_ops.FixedLenFeature([1], dtype=tf.float32),
"feature_c": parsing_ops.FixedLenFeature([1], dtype=tf.float32)
"my-label" : parsing_ops.FixedLenFeature([1], dtype=tf.string)
}
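The returned spec plugs directly into tf.io.parse_example; a minimal dataset-based sketch using the parsing_spec built above (the TFRecord path is hypothetical):
def input_fn_train():
  dataset = tf.data.TFRecordDataset(['/tmp/train.tfrecord'])
  dataset = dataset.batch(32)

  def parse(serialized):
    parsed = tf.io.parse_example(serialized, parsing_spec)
    labels = parsed.pop('my-label')
    return parsed, labels

  return dataset.map(parse)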
Example usage with a classifier: feature_columns = # define features via tf.feature_column
estimator = DNNClassifier(
n_classes=1000,
feature_columns=feature_columns,
weight_column='example-weight',
label_vocabulary=['photos', 'keep', ...],
hidden_units=[256, 64, 16])
# This label configuration tells the classifier the following:
# * weights are retrieved with key 'example-weight'
# * label is string and can be one of the following ['photos', 'keep', ...]
# * integer id for label 'photos' is 0, 'keep' is 1, ...
# Input builders
def input_fn_train(): # Returns a tuple of features and labels.
features = tf.contrib.learn.read_keyed_batch_features(
file_pattern=train_files,
batch_size=batch_size,
# creates parsing configuration for tf.parse_example
features=tf.estimator.classifier_parse_example_spec(
feature_columns,
label_key='my-label',
label_dtype=tf.string,
weight_column='example-weight'),
reader=tf.RecordIOReader)
labels = features.pop('my-label')
return features, labels
estimator.train(input_fn=input_fn_train)
Args
feature_columns An iterable containing all feature columns. All items should be instances of classes derived from FeatureColumn.
label_key A string identifying the label. It means tf.Example stores labels with this key.
label_dtype A tf.dtype identifies the type of labels. By default it is tf.int64. If user defines a label_vocabulary, this should be set as tf.string. tf.float32 labels are only supported for binary classification.
label_default used as label if label_key does not exist in given tf.Example. An example usage: let's say label_key is 'clicked' and tf.Example contains clicked data only for positive examples in following format key:clicked, value:1. This means that if there is no data with key 'clicked' it should count as a negative example by setting label_default=0. Type of this value should be compatible with label_dtype.
weight_column A string or a NumericColumn created by tf.feature_column.numeric_column defining feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example. If it is a string, it is used as a key to fetch weight tensor from the features. If it is a NumericColumn, raw tensor is fetched by key weight_column.key, then weight_column.normalizer_fn is applied on it to get weight tensor.
Returns A dict mapping each feature key to a FixedLenFeature or VarLenFeature value.
Raises
ValueError If label is used in feature_columns.
ValueError If weight_column is used in feature_columns.
ValueError If any of the given feature_columns is not a _FeatureColumn instance.
ValueError If weight_column is not a NumericColumn instance.
ValueError if label_key is None.
tf.estimator.DNNClassifier View source on GitHub A classifier for TensorFlow DNN models. Inherits From: Estimator, Estimator
tf.estimator.DNNClassifier(
hidden_units, feature_columns, model_dir=None, n_classes=2, weight_column=None,
label_vocabulary=None, optimizer='Adagrad', activation_fn=tf.nn.relu,
dropout=None, config=None, warm_start_from=None,
loss_reduction=losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE, batch_norm=False
)
Example: categorical_feature_a = categorical_column_with_hash_bucket(...)
categorical_feature_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_emb = embedding_column(
categorical_column=categorical_feature_a, ...)
categorical_feature_b_emb = embedding_column(
categorical_column=categorical_feature_b, ...)
estimator = tf.estimator.DNNClassifier(
feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
hidden_units=[1024, 512, 256])
# Or estimator using the ProximalAdagradOptimizer optimizer with
# regularization.
estimator = tf.estimator.DNNClassifier(
feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
hidden_units=[1024, 512, 256],
optimizer=tf.compat.v1.train.ProximalAdagradOptimizer(
learning_rate=0.1,
l1_regularization_strength=0.001
))
# Or estimator using an optimizer with a learning rate decay.
estimator = tf.estimator.DNNClassifier(
feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
hidden_units=[1024, 512, 256],
optimizer=lambda: tf.keras.optimizers.Adam(
learning_rate=tf.compat.v1.train.exponential_decay(
learning_rate=0.1,
global_step=tf.compat.v1.train.get_global_step(),
decay_steps=10000,
decay_rate=0.96))
# Or estimator with warm-starting from a previous checkpoint.
estimator = tf.estimator.DNNClassifier(
feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
hidden_units=[1024, 512, 256],
warm_start_from="/path/to/checkpoint/dir")
# Input builders
def input_fn_train():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
def input_fn_eval():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
def input_fn_predict():
# Returns tf.data.Dataset of (x, None) tuple.
pass
estimator.train(input_fn=input_fn_train)
metrics = estimator.evaluate(input_fn=input_fn_eval)
predictions = estimator.predict(input_fn=input_fn_predict)
Input of train and evaluate should have the following features, otherwise there will be a KeyError:
if weight_column is not None, a feature with key=weight_column whose value is a Tensor.
for each column in feature_columns:
if column is a CategoricalColumn, a feature with key=column.name whose value is a SparseTensor.
if column is a WeightedCategoricalColumn, two features: the first with key the id column name, the second with key the weight column name. Both features' value must be a SparseTensor.
if column is a DenseColumn, a feature with key=column.name whose value is a Tensor.
Loss is calculated by using softmax cross entropy.
Args
hidden_units Iterable of number hidden units per layer. All layers are fully connected. Ex. [64, 32] means first layer has 64 nodes and second one has 32.
feature_columns An iterable containing all the feature columns used by the model. All items in the set should be instances of classes derived from _FeatureColumn.
model_dir Directory to save model parameters, graph and etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.
n_classes Number of label classes. Defaults to 2, namely binary classification. Must be > 1.
weight_column A string or a NumericColumn created by tf.feature_column.numeric_column defining feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example. If it is a string, it is used as a key to fetch weight tensor from the features. If it is a _NumericColumn, raw tensor is fetched by key weight_column.key, then weight_column.normalizer_fn is applied on it to get weight tensor.
label_vocabulary A list of strings represents possible label values. If given, labels must be string type and have any value in label_vocabulary. If it is not given, that means labels are already encoded as integer or float within [0, 1] for n_classes=2 and encoded as integer values in {0, 1,..., n_classes-1} for n_classes>2 . Also there will be errors if vocabulary is not provided and labels are string.
optimizer An instance of tf.keras.optimizers.* used to train the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to Adagrad optimizer.
activation_fn Activation function applied to each layer. If None, will use tf.nn.relu.
dropout When not None, the probability we will drop out a given coordinate.
config RunConfig object to configure the runtime settings.
warm_start_from A string filepath to a checkpoint to warm-start from, or a WarmStartSettings object to fully configure warm-starting. If the string filepath is provided instead of a WarmStartSettings, then all weights are warm-started, and it is assumed that vocabularies and Tensor names are unchanged.
loss_reduction One of tf.losses.Reduction except NONE. Describes how to reduce training loss over batch. Defaults to SUM_OVER_BATCH_SIZE.
batch_norm Whether to use batch normalization after each hidden layer. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes.
Attributes
config
export_savedmodel
model_dir
model_fn Returns the model_fn which is bound to self.params.
params
Methods eval_dir View source
eval_dir(
name=None
)
Shows the directory name where evaluation metrics are dumped.
Args
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A string which is the path of the directory containing the evaluation metrics.
evaluate View source
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until:
steps batches are processed, or
input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError If steps <= 0. experimental_export_all_saved_models View source
experimental_export_all_saved_models(
export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False,
checkpoint_path=None
)
Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source
export_saved_model(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, experimental_mode=ModeKeys.PREDICT
)
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
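A hedged sketch of a typical serving export with a tf.Example-parsing receiver, reusing the est classifier from above. The feature spec is an illustrative assumption; in practice tf.feature_column.make_parse_example_spec can derive it from the model's feature columns:
feature_spec = {'x': tf.io.FixedLenFeature([1], tf.float32)}
serving_input_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
export_path = est.export_saved_model('/tmp/exports', serving_input_receiver_fn)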
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source
get_variable_names()
Returns list of all variable names in this model.
Returns List of names.
Raises
ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source
get_variable_value(
name
)
Returns value of the variable given by name.
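A short sketch pairing this with get_variable_names to inspect checkpointed weights (assumes the estimator est from above has already produced a checkpoint):
for name in est.get_variable_names():
  print(name, est.get_variable_value(name).shape)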
Args
name string or a list of strings, name of the tensor.
Returns Numpy array - value of the tensor.
Raises
ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns The full path to the latest checkpoint or None if no checkpoint was found.
predict View source
predict(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None,
yield_single_examples=True
)
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
Args
input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following:
tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields Evaluated values of predictions tensors.
Raises
ValueError If the batch length of predictions is not constant and yield_single_examples is True.
ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
Trains a model given training data input_fn.
Args
input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps Number of steps for which to train the model. If None, train forever or until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally: if you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you do not want incremental behavior, set max_steps instead. If set, max_steps must be None.
max_steps Number of total steps for which to train the model. If None, train forever or until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) mean 200 training iterations; two calls to train(max_steps=100) mean that the second call will not do any iteration, since the first call did all 100 steps.
saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns self, for chaining.
Raises
ValueError If both steps and max_steps are not None.
ValueError If either steps or max_steps <= 0.
tf.estimator.DNNEstimator An estimator for TensorFlow DNN models with user-specified head. Inherits From: Estimator
tf.estimator.DNNEstimator(
head, hidden_units, feature_columns, model_dir=None,
optimizer='Adagrad', activation_fn=tf.nn.relu, dropout=None,
config=None, warm_start_from=None, batch_norm=False
)
Example: sparse_feature_a = categorical_column_with_hash_bucket(...)
sparse_feature_b = categorical_column_with_hash_bucket(...)
sparse_feature_a_emb = embedding_column(categorical_column=sparse_feature_a,
...)
sparse_feature_b_emb = embedding_column(categorical_column=sparse_feature_b,
...)
estimator = tf.estimator.DNNEstimator(
head=tf.estimator.MultiLabelHead(n_classes=3),
feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
hidden_units=[1024, 512, 256])
# Or estimator using the ProximalAdagradOptimizer optimizer with
# regularization.
estimator = tf.estimator.DNNEstimator(
head=tf.estimator.MultiLabelHead(n_classes=3),
feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
hidden_units=[1024, 512, 256],
optimizer=tf.compat.v1.train.ProximalAdagradOptimizer(
learning_rate=0.1,
l1_regularization_strength=0.001
))
# Or estimator using an optimizer with a learning rate decay.
estimator = tf.estimator.DNNEstimator(
head=tf.estimator.MultiLabelHead(n_classes=3),
feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
hidden_units=[1024, 512, 256],
optimizer=lambda: tf.keras.optimizers.Adam(
learning_rate=tf.compat.v1.train.exponential_decay(
learning_rate=0.1,
global_step=tf.compat.v1.train.get_global_step(),
decay_steps=10000,
decay_rate=0.96)))
# Or estimator with warm-starting from a previous checkpoint.
estimator = tf.estimator.DNNEstimator(
head=tf.estimator.MultiLabelHead(n_classes=3),
feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
hidden_units=[1024, 512, 256],
warm_start_from="/path/to/checkpoint/dir")
# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass
estimator.train(input_fn=input_fn_train)
metrics = estimator.evaluate(input_fn=input_fn_eval)
predictions = estimator.predict(input_fn=input_fn_predict)
Input of train and evaluate should have the following features, otherwise there will be a KeyError:
if weight_column is not None, a feature with key=weight_column whose value is a Tensor.
for each column in feature_columns:
if column is a CategoricalColumn, a feature with key=column.name whose value is a SparseTensor.
if column is a WeightedCategoricalColumn, two features: the first with key the id column name, the second with key the weight column name. Both features' value must be a SparseTensor.
if column is a DenseColumn, a feature with key=column.name whose value is a Tensor.
Loss and predicted output are determined by the specified head.
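The pseudocode above elides details; a self-contained sketch with synthetic data follows (feature name, layer sizes, and step counts are illustrative, not tuned):
import numpy as np
import tensorflow as tf

estimator = tf.estimator.DNNEstimator(
    head=tf.estimator.MultiLabelHead(n_classes=3),
    feature_columns=[tf.feature_column.numeric_column('x', shape=(4,))],
    hidden_units=[16, 8])

def input_fn():
  x = np.random.random((32, 4)).astype(np.float32)
  y = np.random.randint(2, size=(32, 3))  # multi-hot labels for 3 classes
  return tf.data.Dataset.from_tensor_slices(({'x': x}, y)).batch(8)

estimator.train(input_fn=input_fn, steps=10)
print(estimator.evaluate(input_fn=input_fn, steps=2))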
Args
head A Head instance constructed with a method such as tf.estimator.MultiLabelHead.
hidden_units Iterable of number hidden units per layer. All layers are fully connected. Ex. [64, 32] means first layer has 64 nodes and second one has 32.
feature_columns An iterable containing all the feature columns used by the model. All items in the set should be instances of classes derived from _FeatureColumn.
model_dir Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.
optimizer An instance of tf.keras.optimizers.* used to train the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to Adagrad optimizer.
activation_fn Activation function applied to each layer. If None, will use tf.nn.relu.
dropout When not None, the probability we will drop out a given coordinate.
config RunConfig object to configure the runtime settings.
warm_start_from A string filepath to a checkpoint to warm-start from, or a WarmStartSettings object to fully configure warm-starting. If the string filepath is provided instead of a WarmStartSettings, then all weights are warm-started, and it is assumed that vocabularies and Tensor names are unchanged.
batch_norm Whether to use batch normalization after each hidden layer. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes.
Attributes
config
export_savedmodel
model_dir
model_fn Returns the model_fn which is bound to self.params.
params
Methods eval_dir View source
eval_dir(
name=None
)
Shows the directory name where evaluation metrics are dumped.
Args
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A string which is the path of the directory containing evaluation metrics.
evaluate View source
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until:
steps batches are processed, or
input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError If steps <= 0. experimental_export_all_saved_models View source
experimental_export_all_saved_models(
export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False,
checkpoint_path=None
)
Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
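A hedged sketch of the map-based export, reusing the estimator from the sketch above. Only the PREDICT receiver is shown, since tf.estimator.export ships serving receivers out of the box; TRAIN and EVAL entries would need a supervised receiver that also supplies labels. The feature spec is an illustrative assumption:
feature_spec = {'x': tf.io.FixedLenFeature([4], tf.float32)}
serving_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
export_path = estimator.experimental_export_all_saved_models(
    '/tmp/exports',
    input_receiver_fn_map={tf.estimator.ModeKeys.PREDICT: serving_fn})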
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source
export_saved_model(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, experimental_mode=ModeKeys.PREDICT
)
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source
get_variable_names()
Returns list of all variable names in this model.
Returns List of names.
Raises
ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source
get_variable_value(
name
)
Returns value of the variable given by name.
Args
name string or a list of strings, name of the tensor.
Returns Numpy array - value of the tensor.
Raises
ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns The full path to the latest checkpoint or None if no checkpoint was found.
predict View source
predict(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None,
yield_single_examples=True
)
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
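A minimal sketch of iterating over per-example predictions, assuming numpy is imported as np and reusing the estimator from the sketch above; the keys of each yielded dict depend on the head:
def input_fn_predict():
  x = np.random.random((4, 4)).astype(np.float32)
  return tf.data.Dataset.from_tensor_slices({'x': x}).batch(2)

for pred in estimator.predict(input_fn=input_fn_predict):
  print(pred)  # one dict per example, e.g. with keys such as 'logits'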
Args
input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following:
tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields Evaluated values of predictions tensors.
Raises
ValueError If the batch length of predictions is not constant and yield_single_examples is True.
ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
Trains a model given training data input_fn.
Args
input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps Number of steps for which to train the model. If None, train forever or until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally: if you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you do not want incremental behavior, set max_steps instead. If set, max_steps must be None.
max_steps Number of total steps for which to train the model. If None, train forever or until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) mean 200 training iterations; two calls to train(max_steps=100) mean that the second call will not do any iteration, since the first call did all 100 steps.
saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns self, for chaining.
Raises
ValueError If both steps and max_steps are not None.
ValueError If either steps or max_steps <= 0.
tf.estimator.DNNLinearCombinedClassifier An estimator for TensorFlow Linear and DNN joined classification models. Inherits From: Estimator
tf.estimator.DNNLinearCombinedClassifier(
model_dir=None, linear_feature_columns=None, linear_optimizer='Ftrl',
dnn_feature_columns=None, dnn_optimizer='Adagrad',
dnn_hidden_units=None, dnn_activation_fn=tf.nn.relu, dnn_dropout=None,
n_classes=2, weight_column=None, label_vocabulary=None, config=None,
warm_start_from=None,
loss_reduction=losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE, batch_norm=False,
linear_sparse_combiner='sum'
)
Note: This estimator is also known as wide-n-deep.
Example: numeric_feature = numeric_column(...)
categorical_column_a = categorical_column_with_hash_bucket(...)
categorical_column_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_x_categorical_feature_b = crossed_column(...)
categorical_feature_a_emb = embedding_column(
categorical_column=categorical_column_a, ...)
categorical_feature_b_emb = embedding_column(
categorical_column=categorical_column_b, ...)
estimator = tf.estimator.DNNLinearCombinedClassifier(
# wide settings
linear_feature_columns=[categorical_feature_a_x_categorical_feature_b],
linear_optimizer=tf.keras.optimizers.Ftrl(...),
# deep settings
dnn_feature_columns=[
categorical_feature_a_emb, categorical_feature_b_emb,
numeric_feature],
dnn_hidden_units=[1000, 500, 100],
dnn_optimizer=tf.keras.optimizers.Adagrad(...),
# warm-start settings
warm_start_from="/path/to/checkpoint/dir")
# To apply L1 and L2 regularization, you can set dnn_optimizer to:
tf.compat.v1.train.ProximalAdagradOptimizer(
learning_rate=0.1,
l1_regularization_strength=0.001,
l2_regularization_strength=0.001)
# To apply learning rate decay, you can set dnn_optimizer to a callable:
lambda: tf.keras.optimizers.Adam(
learning_rate=tf.compat.v1.train.exponential_decay(
learning_rate=0.1,
global_step=tf.compat.v1.train.get_global_step(),
decay_steps=10000,
decay_rate=0.96))
# It is the same for linear_optimizer.
# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass
estimator.train(input_fn=input_fn_train, steps=100)
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
predictions = estimator.predict(input_fn=input_fn_predict)
Input of train and evaluate should have the following features, otherwise there will be a KeyError:
for each column in dnn_feature_columns + linear_feature_columns:
if column is a CategoricalColumn, a feature with key=column.name whose value is a SparseTensor.
if column is a WeightedCategoricalColumn, two features: the first with key the id column name, the second with key the weight column name. Both features' value must be a SparseTensor.
if column is a DenseColumn, a feature with key=column.name whose value is a Tensor.
Loss is calculated by using softmax cross entropy.
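The pseudocode above elides details; a self-contained sketch with synthetic data follows (feature names, hash-bucket size, and hidden units are illustrative assumptions):
import numpy as np
import tensorflow as tf

age = tf.feature_column.numeric_column('age')
city = tf.feature_column.categorical_column_with_hash_bucket('city', 100)
estimator = tf.estimator.DNNLinearCombinedClassifier(
    linear_feature_columns=[city],  # wide part
    dnn_feature_columns=[age, tf.feature_column.embedding_column(city, 8)],  # deep part
    dnn_hidden_units=[32, 16])

def input_fn():
  features = {'age': np.random.random((64,)).astype(np.float32),
              'city': np.array(['sf', 'nyc'] * 32)}
  labels = np.random.randint(2, size=(64,))
  return tf.data.Dataset.from_tensor_slices((features, labels)).batch(16)

estimator.train(input_fn=input_fn, steps=5)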
Args
model_dir Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.
linear_feature_columns An iterable containing all the feature columns used by linear part of the model. All items in the set must be instances of classes derived from FeatureColumn.
linear_optimizer An instance of tf.keras.optimizers.* used to apply gradients to the linear part of the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to FTRL optimizer.
dnn_feature_columns An iterable containing all the feature columns used by deep part of the model. All items in the set must be instances of classes derived from FeatureColumn.
dnn_optimizer An instance of tf.keras.optimizers.* used to apply gradients to the deep part of the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to Adagrad optimizer.
dnn_hidden_units List of hidden units per layer. All layers are fully connected.
dnn_activation_fn Activation function applied to each layer. If None, will use tf.nn.relu.
dnn_dropout When not None, the probability we will drop out a given coordinate.
n_classes Number of label classes. Defaults to 2, namely binary classification. Must be > 1.
weight_column A string or a _NumericColumn created by tf.feature_column.numeric_column defining feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example. If it is a string, it is used as a key to fetch weight tensor from the features. If it is a _NumericColumn, raw tensor is fetched by key weight_column.key, then weight_column.normalizer_fn is applied on it to get weight tensor.
label_vocabulary A list of strings representing possible label values. If given, labels must be of string type and take values in label_vocabulary. If it is not given, labels are assumed to be already encoded as integer or float within [0, 1] for n_classes=2, and as integer values in {0, 1,..., n_classes-1} for n_classes>2. An error will be raised if the vocabulary is not provided and labels are strings.
config RunConfig object to configure the runtime settings.
warm_start_from A string filepath to a checkpoint to warm-start from, or a WarmStartSettings object to fully configure warm-starting. If the string filepath is provided instead of a WarmStartSettings, then all weights are warm-started, and it is assumed that vocabularies and Tensor names are unchanged.
loss_reduction One of tf.losses.Reduction except NONE. Describes how to reduce training loss over batch. Defaults to SUM_OVER_BATCH_SIZE.
batch_norm Whether to use batch normalization after each hidden layer.
linear_sparse_combiner A string specifying how to reduce the linear model if a categorical column is multivalent. One of "mean", "sqrtn", and "sum" -- these are effectively different ways to do example-level normalization, which can be useful for bag-of-words features. For more details, see tf.feature_column.linear_model.
Raises
ValueError If both linear_feature_columns and dnn_features_columns are empty at the same time. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes.
Attributes
config
export_savedmodel
model_dir
model_fn Returns the model_fn which is bound to self.params.
params
Methods eval_dir View source
eval_dir(
name=None
)
Shows the directory name where evaluation metrics are dumped.
Args
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A string which is the path of the directory containing evaluation metrics.
evaluate View source
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until:
steps batches are processed, or
input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError If steps <= 0. experimental_export_all_saved_models View source
experimental_export_all_saved_models(
export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False,
checkpoint_path=None
)
Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source
export_saved_model(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, experimental_mode=ModeKeys.PREDICT
)
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source
get_variable_names()
Returns list of all variable names in this model.
Returns List of names.
Raises
ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source
get_variable_value(
name
)
Returns value of the variable given by name.
Args
name string or a list of strings, name of the tensor.
Returns Numpy array - value of the tensor.
Raises
ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns The full path to the latest checkpoint or None if no checkpoint was found.
predict View source
predict(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None,
yield_single_examples=True
)
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
Args
input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following:
tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields Evaluated values of predictions tensors.
Raises
ValueError If the batch length of predictions is not constant and yield_single_examples is True.
ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
Trains a model given training data input_fn.
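A short sketch contrasting the incremental steps argument with the absolute max_steps, reusing estimator and input_fn from the class-level sketch above (step counts are illustrative):
estimator.train(input_fn=input_fn, steps=100)      # runs 100 steps
estimator.train(input_fn=input_fn, steps=100)      # runs 100 more: 200 in total
estimator.train(input_fn=input_fn, max_steps=200)  # no-op: global step already at 200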
Args
input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps Number of steps for which to train the model. If None, train forever or until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally: if you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you do not want incremental behavior, set max_steps instead. If set, max_steps must be None.
max_steps Number of total steps for which to train the model. If None, train forever or until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) mean 200 training iterations; two calls to train(max_steps=100) mean that the second call will not do any iteration, since the first call did all 100 steps.
saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns self, for chaining.
Raises
ValueError If both steps and max_steps are not None.
ValueError If either steps or max_steps <= 0.
tf.estimator.DNNLinearCombinedEstimator An estimator for TensorFlow Linear and DNN joined models with custom head. Inherits From: Estimator
tf.estimator.DNNLinearCombinedEstimator(
head, model_dir=None, linear_feature_columns=None,
linear_optimizer='Ftrl', dnn_feature_columns=None,
dnn_optimizer='Adagrad', dnn_hidden_units=None,
dnn_activation_fn=tf.nn.relu, dnn_dropout=None, config=None,
linear_sparse_combiner='sum'
)
Note: This estimator is also known as wide-n-deep.
Example: numeric_feature = numeric_column(...)
categorical_column_a = categorical_column_with_hash_bucket(...)
categorical_column_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_x_categorical_feature_b = crossed_column(...)
categorical_feature_a_emb = embedding_column(
categorical_column=categorical_column_a, ...)
categorical_feature_b_emb = embedding_column(
categorical_column=categorical_column_b, ...)
estimator = tf.estimator.DNNLinearCombinedEstimator(
head=tf.estimator.MultiLabelHead(n_classes=3),
# wide settings
linear_feature_columns=[categorical_feature_a_x_categorical_feature_b],
linear_optimizer=tf.keras.optimizers.Ftrl(...),
# deep settings
dnn_feature_columns=[
categorical_feature_a_emb, categorical_feature_b_emb,
numeric_feature],
dnn_hidden_units=[1000, 500, 100],
dnn_optimizer=tf.keras.optimizers.Adagrad(...))
# To apply L1 and L2 regularization, you can set dnn_optimizer to:
tf.compat.v1.train.ProximalAdagradOptimizer(
learning_rate=0.1,
l1_regularization_strength=0.001,
l2_regularization_strength=0.001)
# To apply learning rate decay, you can set dnn_optimizer to a callable:
lambda: tf.keras.optimizers.Adam(
learning_rate=tf.compat.v1.train.exponential_decay(
learning_rate=0.1,
global_step=tf.compat.v1.train.get_global_step(),
decay_steps=10000,
decay_rate=0.96))
# It is the same for linear_optimizer.
# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass
estimator.train(input_fn=input_fn_train, steps=100)
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
predictions = estimator.predict(input_fn=input_fn_predict)
Input of train and evaluate should have the following features, otherwise there will be a KeyError:
for each column in dnn_feature_columns + linear_feature_columns:
if column is a CategoricalColumn, a feature with key=column.name whose value is a SparseTensor.
if column is a WeightedCategoricalColumn, two features: the first with key the id column name, the second with key the weight column name. Both features' value must be a SparseTensor.
if column is a DenseColumn, a feature with key=column.name whose value is a Tensor.
Loss and predicted output are determined by the specified head.
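For example, the same wide-n-deep topology can be paired with a regression head instead; a hedged sketch reusing the columns from the example above (layer sizes are illustrative):
estimator = tf.estimator.DNNLinearCombinedEstimator(
    head=tf.estimator.RegressionHead(),
    linear_feature_columns=[categorical_feature_a_x_categorical_feature_b],
    dnn_feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
    dnn_hidden_units=[32, 16])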
Args
head A Head instance constructed with a method such as tf.estimator.MultiLabelHead.
model_dir Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.
linear_feature_columns An iterable containing all the feature columns used by linear part of the model. All items in the set must be instances of classes derived from FeatureColumn.
linear_optimizer An instance of tf.keras.optimizers.* used to apply gradients to the linear part of the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to FTRL optimizer.
dnn_feature_columns An iterable containing all the feature columns used by deep part of the model. All items in the set must be instances of classes derived from FeatureColumn.
dnn_optimizer An instance of tf.keras.optimizers.* used to apply gradients to the deep part of the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to Adagrad optimizer.
dnn_hidden_units List of hidden units per layer. All layers are fully connected.
dnn_activation_fn Activation function applied to each layer. If None, will use tf.nn.relu.
dnn_dropout When not None, the probability we will drop out a given coordinate.
config RunConfig object to configure the runtime settings.
linear_sparse_combiner A string specifying how to reduce the linear model if a categorical column is multivalent. One of "mean", "sqrtn", and "sum" -- these are effectively different ways to do example-level normalization, which can be useful for bag-of-words features. For more details, see tf.feature_column.linear_model.
Raises
ValueError If both linear_feature_columns and dnn_features_columns are empty at the same time. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes.
Attributes
config
export_savedmodel
model_dir
model_fn Returns the model_fn which is bound to self.params.
params
Methods eval_dir View source
eval_dir(
name=None
)
Shows the directory name where evaluation metrics are dumped.
Args
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A string which is the path of the directory containing evaluation metrics.
evaluate View source
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until:
steps batches are processed, or
input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError If steps <= 0. experimental_export_all_saved_models View source
experimental_export_all_saved_models(
export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False,
checkpoint_path=None
)
Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
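As a hedged sketch of a typical call (estimator is assumed to be a trained Estimator and serving_input_receiver_fn an input receiver function defined elsewhere; the paths are placeholders):
import tensorflow as tf

export_path = estimator.experimental_export_all_saved_models(
    export_dir_base='/tmp/exports',
    input_receiver_fn_map={
        tf.estimator.ModeKeys.PREDICT: serving_input_receiver_fn,
    },
    assets_extra={'my_asset_file.txt': '/path/to/my_asset_file.txt'})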
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source
export_saved_model(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, experimental_mode=ModeKeys.PREDICT
)
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
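A minimal sketch, assuming a model whose serving input is a single float feature named 'x' (the feature name and shape are illustrative, not part of this API):
import tensorflow as tf

serving_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(
    {'x': tf.compat.v1.placeholder(dtype=tf.float32, shape=[None, 5])})
export_path = estimator.export_saved_model('/tmp/exports', serving_input_fn)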
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source
get_variable_names()
Returns list of all variable names in this model.
Returns List of names.
Raises
ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source
get_variable_value(
name
)
Returns value of the variable given by name.
Args
name string or a list of strings; the name of the tensor.
Returns Numpy array - value of the tensor.
Raises
ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns The full path to the latest checkpoint or None if no checkpoint was found.
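Together with get_variable_names and get_variable_value, this allows quick checkpoint inspection; a hedged sketch (estimator is any Estimator that has already saved a checkpoint):
ckpt_path = estimator.latest_checkpoint()  # None if nothing has been saved yet
if ckpt_path is not None:
  for name in estimator.get_variable_names():
    value = estimator.get_variable_value(name)  # numpy array
    print(name, value.shape)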
predict View source
predict(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None,
yield_single_examples=True
)
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
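predict returns a generator, so predictions are computed lazily as you iterate; a minimal sketch (input_fn_predict is a placeholder for a compatible input function):
for prediction in estimator.predict(input_fn=input_fn_predict):
  # With yield_single_examples=True (the default), each item corresponds to
  # one input example.
  print(prediction)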
Args
input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following:
tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields Evaluated values of predictions tensors.
Raises
ValueError If the batch length of predictions is not constant and yield_single_examples is True.
ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
Trains a model given training data input_fn.
Args
input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally: if you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want this incremental behavior, set max_steps instead. If set, max_steps must be None. (See the sketch after this argument list.)
max_steps Number of total steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iterations since the first call did all 100 steps.
saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
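A sketch of the incremental steps semantics versus the absolute max_steps semantics (input_fn is a placeholder for a compatible training input function):
estimator.train(input_fn=input_fn, steps=10)      # runs 10 steps
estimator.train(input_fn=input_fn, steps=10)      # runs 10 more steps (20 total)
estimator.train(input_fn=input_fn, max_steps=30)  # runs until global step 30
estimator.train(input_fn=input_fn, max_steps=30)  # no-op: global step is already 30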
Returns self, for chaining.
Raises
ValueError If both steps and max_steps are not None.
ValueError If either steps or max_steps <= 0.
tf.estimator.DNNLinearCombinedRegressor View source on GitHub An estimator for TensorFlow Linear and DNN joined models for regression. Inherits From: Estimator
tf.estimator.DNNLinearCombinedRegressor(
model_dir=None, linear_feature_columns=None, linear_optimizer='Ftrl',
dnn_feature_columns=None, dnn_optimizer='Adagrad',
dnn_hidden_units=None, dnn_activation_fn=tf.nn.relu, dnn_dropout=None,
label_dimension=1, weight_column=None, config=None, warm_start_from=None,
loss_reduction=losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE, batch_norm=False,
linear_sparse_combiner='sum'
)
Note: This estimator is also known as wide-n-deep.
Example: numeric_feature = numeric_column(...)
categorical_feature_a = categorical_column_with_hash_bucket(...)
categorical_feature_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_x_categorical_feature_b = crossed_column(...)
categorical_feature_a_emb = embedding_column(
categorical_column=categorical_feature_a, ...)
categorical_feature_b_emb = embedding_column(
categorical_column=categorical_feature_b, ...)
estimator = tf.estimator.DNNLinearCombinedRegressor(
# wide settings
linear_feature_columns=[categorical_feature_a_x_categorical_feature_b],
linear_optimizer=tf.keras.optimizers.Ftrl(...),
# deep settings
dnn_feature_columns=[
categorical_feature_a_emb, categorical_feature_b_emb,
numeric_feature],
dnn_hidden_units=[1000, 500, 100],
dnn_optimizer=tf.keras.optimizers.Adagrad(...),
# warm-start settings
warm_start_from="/path/to/checkpoint/dir")
# To apply L1 and L2 regularization, you can set dnn_optimizer to:
tf.compat.v1.train.ProximalAdagradOptimizer(
learning_rate=0.1,
l1_regularization_strength=0.001,
l2_regularization_strength=0.001)
# To apply learning rate decay, you can set dnn_optimizer to a callable:
lambda: tf.keras.optimizers.Adam(
learning_rate=tf.compat.v1.train.exponential_decay(
learning_rate=0.1,
global_step=tf.compat.v1.train.get_global_step(),
decay_steps=10000,
decay_rate=0.96))
# It is the same for linear_optimizer.
# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y is the regression label.
  pass
def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y is the regression label.
  pass
def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass
estimator.train(input_fn=input_fn_train, steps=100)
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
predictions = estimator.predict(input_fn=input_fn_predict)
Input of train and evaluate should have the following features, otherwise there will be a KeyError: for each column in dnn_feature_columns + linear_feature_columns: if column is a CategoricalColumn, a feature with key=column.name whose value is a SparseTensor. if column is a WeightedCategoricalColumn, two features: the first with key the id column name, the second with key the weight column name. Both features' value must be a SparseTensor. if column is a DenseColumn, a feature with key=column.name whose value is a Tensor.
Loss is calculated by using mean squared error.
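For example, a hedged sketch of an input_fn for a model whose only feature column is a numeric_column named 'numeric_feature' (names, shapes, and the synthetic data are illustrative):
import numpy as np
import tensorflow as tf

def input_fn_train():
  features = {'numeric_feature': np.random.random((8, 1)).astype(np.float32)}
  labels = np.random.random((8, 1)).astype(np.float32)
  return tf.data.Dataset.from_tensor_slices((features, labels)).batch(4)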
Args
model_dir Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.
linear_feature_columns An iterable containing all the feature columns used by linear part of the model. All items in the set must be instances of classes derived from FeatureColumn.
linear_optimizer An instance of tf.keras.optimizers.* used to apply gradients to the linear part of the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to FTRL optimizer.
dnn_feature_columns An iterable containing all the feature columns used by deep part of the model. All items in the set must be instances of classes derived from FeatureColumn.
dnn_optimizer An instance of tf.keras.optimizers.* used to apply gradients to the deep part of the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to Adagrad optimizer.
dnn_hidden_units List of hidden units per layer. All layers are fully connected.
dnn_activation_fn Activation function applied to each layer. If None, will use tf.nn.relu.
dnn_dropout When not None, the probability we will drop out a given coordinate.
label_dimension Number of regression targets per example. This is the size of the last dimension of the labels and logits Tensor objects (typically, these have shape [batch_size, label_dimension]).
weight_column A string or a NumericColumn created by tf.feature_column.numeric_column defining a feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example. If it is a string, it is used as a key to fetch the weight tensor from the features. If it is a NumericColumn, the raw tensor is fetched by key weight_column.key, then weight_column.normalizer_fn is applied on it to get the weight tensor.
config RunConfig object to configure the runtime settings.
warm_start_from A string filepath to a checkpoint to warm-start from, or a WarmStartSettings object to fully configure warm-starting. If the string filepath is provided instead of a WarmStartSettings, then all weights are warm-started, and it is assumed that vocabularies and Tensor names are unchanged.
loss_reduction One of tf.losses.Reduction except NONE. Describes how to reduce training loss over batch. Defaults to SUM_OVER_BATCH_SIZE.
batch_norm Whether to use batch normalization after each hidden layer.
linear_sparse_combiner A string specifying how to reduce the linear model if a categorical column is multivalent. One of "mean", "sqrtn", and "sum" -- these are effectively different ways to do example-level normalization, which can be useful for bag-of-words features. For more details, see tf.feature_column.linear_model.
Raises
ValueError If both linear_feature_columns and dnn_feature_columns are empty at the same time. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes.
Attributes
config
export_savedmodel
model_dir
model_fn Returns the model_fn which is bound to self.params.
params
Methods eval_dir View source
eval_dir(
name=None
)
Shows the directory name where evaluation metrics are dumped.
Args
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A string which is the path of the directory containing evaluation metrics.
evaluate View source
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until:
steps batches are processed, or
input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError If steps <= 0. experimental_export_all_saved_models View source
experimental_export_all_saved_models(
export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False,
checkpoint_path=None
)
Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source
export_saved_model(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, experimental_mode=ModeKeys.PREDICT
)
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source
get_variable_names()
Returns list of all variable names in this model.
Returns List of names.
Raises
ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source
get_variable_value(
name
)
Returns value of the variable given by name.
Args
name string or a list of strings; the name of the tensor.
Returns Numpy array - value of the tensor.
Raises
ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns The full path to the latest checkpoint or None if no checkpoint was found.
predict View source
predict(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None,
yield_single_examples=True
)
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
Args
input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following:
tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields Evaluated values of predictions tensors.
Raises
ValueError If the batch length of predictions is not constant and yield_single_examples is True.
ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
Trains a model given training data input_fn.
Args
input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally: if you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want this incremental behavior, set max_steps instead. If set, max_steps must be None.
max_steps Number of total steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iterations since the first call did all 100 steps.
saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns self, for chaining.
Raises
ValueError If both steps and max_steps are not None.
ValueError If either steps or max_steps <= 0.
tf.estimator.DNNRegressor View source on GitHub A regressor for TensorFlow DNN models. Inherits From: Estimator
tf.estimator.DNNRegressor(
hidden_units, feature_columns, model_dir=None, label_dimension=1,
weight_column=None, optimizer='Adagrad', activation_fn=tf.nn.relu,
dropout=None, config=None, warm_start_from=None,
loss_reduction=losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE, batch_norm=False
)
Example: categorical_feature_a = categorical_column_with_hash_bucket(...)
categorical_feature_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_emb = embedding_column(
categorical_column=categorical_feature_a, ...)
categorical_feature_b_emb = embedding_column(
categorical_column=categorical_feature_b, ...)
estimator = tf.estimator.DNNRegressor(
feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
hidden_units=[1024, 512, 256])
# Or estimator using the ProximalAdagradOptimizer optimizer with
# regularization.
estimator = tf.estimator.DNNRegressor(
feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
hidden_units=[1024, 512, 256],
optimizer=tf.compat.v1.train.ProximalAdagradOptimizer(
learning_rate=0.1,
l1_regularization_strength=0.001
))
# Or estimator using an optimizer with a learning rate decay.
estimator = tf.estimator.DNNRegressor(
feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
hidden_units=[1024, 512, 256],
optimizer=lambda: tf.keras.optimizers.Adam(
learning_rate=tf.compat.v1.train.exponential_decay(
learning_rate=0.1,
global_step=tf.compat.v1.train.get_global_step(),
decay_steps=10000,
decay_rate=0.96))
# Or estimator with warm-starting from a previous checkpoint.
estimator = tf.estimator.DNNRegressor(
feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
hidden_units=[1024, 512, 256],
warm_start_from="/path/to/checkpoint/dir")
# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y is the regression label.
  pass
def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y is the regression label.
  pass
def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass
estimator.train(input_fn=input_fn_train)
metrics = estimator.evaluate(input_fn=input_fn_eval)
predictions = estimator.predict(input_fn=input_fn_predict)
Input of train and evaluate should have the following features, otherwise there will be a KeyError: if weight_column is not None, a feature with key=weight_column whose value is a Tensor. for each column in feature_columns: if column is a CategoricalColumn, a feature with key=column.name whose value is a SparseTensor. if column is a WeightedCategoricalColumn, two features: the first with key the id column name, the second with key the weight column name. Both features' value must be a SparseTensor. if column is a DenseColumn, a feature with key=column.name whose value is a Tensor.
Loss is calculated by using mean squared error.
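As a self-contained sketch with synthetic data (all names, sizes, and hyperparameters here are illustrative):
import numpy as np
import tensorflow as tf

feature_columns = [tf.feature_column.numeric_column('x', shape=(5,))]
estimator = tf.estimator.DNNRegressor(
    feature_columns=feature_columns, hidden_units=[16, 8])

def input_fn():
  features = {'x': np.random.random((32, 5)).astype(np.float32)}
  labels = np.random.random((32, 1)).astype(np.float32)
  return tf.data.Dataset.from_tensor_slices((features, labels)).repeat().batch(8)

estimator.train(input_fn=input_fn, steps=10)
metrics = estimator.evaluate(input_fn=input_fn, steps=5)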
Args
hidden_units Iterable of number hidden units per layer. All layers are fully connected. Ex. [64, 32] means first layer has 64 nodes and second one has 32.
feature_columns An iterable containing all the feature columns used by the model. All items in the set should be instances of classes derived from FeatureColumn.
model_dir Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.
label_dimension Number of regression targets per example. This is the size of the last dimension of the labels and logits Tensor objects (typically, these have shape [batch_size, label_dimension]).
weight_column A string or a NumericColumn created by tf.feature_column.numeric_column defining feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example. If it is a string, it is used as a key to fetch weight tensor from the features. If it is a NumericColumn, raw tensor is fetched by key weight_column.key, then weight_column.normalizer_fn is applied on it to get weight tensor.
optimizer An instance of tf.keras.optimizers.* used to train the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to Adagrad optimizer.
activation_fn Activation function applied to each layer. If None, will use tf.nn.relu.
dropout When not None, the probability we will drop out a given coordinate.
config RunConfig object to configure the runtime settings.
warm_start_from A string filepath to a checkpoint to warm-start from, or a WarmStartSettings object to fully configure warm-starting. If the string filepath is provided instead of a WarmStartSettings, then all weights are warm-started, and it is assumed that vocabularies and Tensor names are unchanged.
loss_reduction One of tf.losses.Reduction except NONE. Describes how to reduce training loss over batch. Defaults to SUM_OVER_BATCH_SIZE.
batch_norm Whether to use batch normalization after each hidden layer. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes.
Attributes
config
export_savedmodel
model_dir
model_fn Returns the model_fn which is bound to self.params.
params
Methods eval_dir View source
eval_dir(
name=None
)
Shows the directory name where evaluation metrics are dumped.
Args
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A string which is the path of the directory containing evaluation metrics.
evaluate View source
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until:
steps batches are processed, or
input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError If steps <= 0. experimental_export_all_saved_models View source
experimental_export_all_saved_models(
export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False,
checkpoint_path=None
)
Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source
export_saved_model(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, experimental_mode=ModeKeys.PREDICT
)
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source
get_variable_names()
Returns list of all variable names in this model.
Returns List of names.
Raises
ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source
get_variable_value(
name
)
Returns value of the variable given by name.
Args
name string or a list of strings; the name of the tensor.
Returns Numpy array - value of the tensor.
Raises
ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns The full path to the latest checkpoint or None if no checkpoint was found.
predict View source
predict(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None,
yield_single_examples=True
)
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
Args
input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following:
tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields Evaluated values of predictions tensors.
Raises
ValueError If the batch length of predictions is not constant and yield_single_examples is True.
ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
Trains a model given training data input_fn.
Args
input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally: if you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want this incremental behavior, set max_steps instead. If set, max_steps must be None.
max_steps Number of total steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iterations since the first call did all 100 steps.
saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns self, for chaining.
Raises
ValueError If both steps and max_steps are not None.
ValueError If either steps or max_steps <= 0.
tf.estimator.Estimator View source on GitHub Estimator class to train and evaluate TensorFlow models. Inherits From: Estimator
tf.estimator.Estimator(
model_fn, model_dir=None, config=None, params=None, warm_start_from=None
)
The Estimator object wraps a model which is specified by a model_fn, which, given inputs and a number of other parameters, returns the ops necessary to perform training, evaluation, or predictions. All outputs (checkpoints, event files, etc.) are written to model_dir, or a subdirectory thereof. If model_dir is not set, a temporary directory is used. The config argument can be passed a tf.estimator.RunConfig object containing information about the execution environment. It is passed on to the model_fn if the model_fn has a parameter named "config" (and to the input functions in the same manner). If the config parameter is not passed, it is instantiated by the Estimator. Not passing config means that defaults useful for local execution are used. Estimator makes config available to the model (for instance, to allow specialization based on the number of workers available), and also uses some of its fields to control internals, especially regarding checkpointing. The params argument contains hyperparameters. It is passed to the model_fn, if the model_fn has a parameter named "params", and to the input functions in the same manner. Estimator only passes params along; it does not inspect it. The structure of params is therefore entirely up to the developer. None of Estimator's methods can be overridden in subclasses (its constructor enforces this). Subclasses should use model_fn to configure the base class, and may add methods implementing specialized functionality. See estimators for more information. To warm-start an Estimator: estimator = tf.estimator.DNNClassifier(
feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
hidden_units=[1024, 512, 256],
warm_start_from="/path/to/checkpoint/dir")
For more details on warm-start configuration, see tf.estimator.WarmStartSettings.
Args
model_fn Model function. Follows the signature below (a minimal model_fn sketch is shown after this section):
features -- This is the first item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same.
labels -- This is the second item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same (for multi-head models). If mode is tf.estimator.ModeKeys.PREDICT, labels=None will be passed. If the model_fn's signature does not accept mode, the model_fn must still be able to handle labels=None.
mode -- Optional. Specifies if this is training, evaluation or prediction. See tf.estimator.ModeKeys. params -- Optional dict of hyperparameters. Will receive what is passed to Estimator in the params parameter. This allows you to configure Estimators for hyperparameter tuning.
config -- Optional estimator.RunConfig object. Will receive what is passed to Estimator as its config parameter, or a default value. Allows setting up things in your model_fn based on configuration such as num_ps_replicas, or model_dir. Returns -- tf.estimator.EstimatorSpec
model_dir Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. If PathLike object, the path will be resolved. If None, the model_dir in config will be used if set. If both are set, they must be the same. If both are None, a temporary directory will be used.
config estimator.RunConfig configuration object.
params dict of hyper parameters that will be passed into model_fn. Keys are names of parameters, values are basic python types.
warm_start_from Optional string filepath to a checkpoint or SavedModel to warm-start from, or a tf.estimator.WarmStartSettings object to fully configure warm-starting. If None, only TRAINABLE variables are warm-started. If the string filepath is provided instead of a tf.estimator.WarmStartSettings, then all variables are warm-started, and it is assumed that vocabularies and tf.Tensor names are unchanged.
Raises
ValueError parameters of model_fn don't match params.
ValueError if this is called via a subclass and if that class overrides a member of Estimator. Eager Compatibility Calling methods of Estimator will work while eager execution is enabled. However, the model_fn and input_fn are not executed eagerly; Estimator will switch to graph mode before calling all user-provided functions (incl. hooks), so their code has to be compatible with graph mode execution. Note that input_fn code using tf.data generally works in both graph and eager modes.
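To make the model_fn contract concrete, a minimal sketch of a custom linear-regression model_fn (everything here is illustrative, not prescribed by the API; it assumes a single float feature named 'x'):
import tensorflow as tf

def my_model_fn(features, labels, mode):
  # Predict a single scalar from the input feature.
  logits = tf.compat.v1.layers.dense(features['x'], units=1)
  if mode == tf.estimator.ModeKeys.PREDICT:
    return tf.estimator.EstimatorSpec(mode, predictions={'y': logits})
  loss = tf.compat.v1.losses.mean_squared_error(labels, logits)
  if mode == tf.estimator.ModeKeys.EVAL:
    return tf.estimator.EstimatorSpec(mode, loss=loss)
  optimizer = tf.compat.v1.train.GradientDescentOptimizer(0.01)
  train_op = optimizer.minimize(
      loss, global_step=tf.compat.v1.train.get_global_step())
  return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

estimator = tf.estimator.Estimator(model_fn=my_model_fn)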
Attributes
config
export_savedmodel
model_dir
model_fn Returns the model_fn which is bound to self.params.
params
Methods eval_dir View source
eval_dir(
name=None
)
Shows the directory name where evaluation metrics are dumped.
Args
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A string which is the path of the directory containing evaluation metrics.
evaluate View source
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until:
steps batches are processed, or
input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError If steps <= 0. experimental_export_all_saved_models View source
experimental_export_all_saved_models(
export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False,
checkpoint_path=None
)
Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source
export_saved_model(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, experimental_mode=ModeKeys.PREDICT
)
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
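Example (a minimal sketch; assumes estimator is a trained Estimator whose model_fn expects a single float feature named 'x', an illustrative name, fed as serialized tf.Example protos):
import tensorflow as tf

feature_spec = {'x': tf.io.FixedLenFeature([1], tf.float32)}
serving_input_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
export_path = estimator.export_saved_model(
    '/tmp/saved_model_export', serving_input_receiver_fn)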
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source
get_variable_names()
Returns list of all variable names in this model.
Returns List of names.
Raises
ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source
get_variable_value(
name
)
Returns value of the variable given by name.
Args
name A string or a list of strings: the name of the tensor.
Returns Numpy array - value of the tensor.
Raises
ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns The full path to the latest checkpoint or None if no checkpoint was found.
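For example, the inspection methods above can be combined to examine a trained model (a sketch; assumes estimator has already produced a checkpoint):
ckpt_path = estimator.latest_checkpoint()  # None if no checkpoint exists
for var_name in estimator.get_variable_names():
  value = estimator.get_variable_value(var_name)  # numpy array
  print(var_name, value.shape)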
predict View source
predict(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None,
yield_single_examples=True
)
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
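Example (a minimal sketch of consuming the generator; the feature name 'x' and its shape are illustrative assumptions):
import numpy as np
import tensorflow as tf

def predict_input_fn():
  x = np.random.random((4, 5)).astype(np.float32)
  return tf.data.Dataset.from_tensor_slices({'x': x}).batch(2)

for prediction in estimator.predict(input_fn=predict_input_fn):
  print(prediction)  # one dict of numpy values per example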
Args
input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following:
tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields Evaluated values of predictions tensors.
Raises
ValueError If the batch lengths of predictions are not all the same and yield_single_examples is True.
ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
Trains a model given training data input_fn.
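Example (a minimal sketch; the feature name, shapes, and label encoding are illustrative assumptions):
import numpy as np
import tensorflow as tf

def train_input_fn():
  x = np.random.random((8, 5)).astype(np.float32)
  y = np.random.randint(2, size=(8, 1))
  dataset = tf.data.Dataset.from_tensor_slices(({'x': x}, y))
  return dataset.repeat().batch(4)

estimator.train(input_fn=train_input_fn, steps=100)
estimator.train(input_fn=train_input_fn, steps=100)  # continues to step 200
The second call above illustrates the incremental behavior of steps described below.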
Args
input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps Number of steps for which to train the model. If None, train forever or until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally: if you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set max_steps instead. If set, max_steps must be None.
max_steps Number of total steps for which to train the model. If None, train forever or until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) mean 200 training iterations. On the other hand, two calls to train(max_steps=100) mean that the second call will not do any iterations, since the first call did all 100 steps.
saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns self, for chaining.
Raises
ValueError If both steps and max_steps are not None.
ValueError If either steps or max_steps <= 0. | tensorflow.estimator.estimator |
tf.estimator.EstimatorSpec View source on GitHub Ops and objects returned from a model_fn and passed to an Estimator. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.EstimatorSpec
tf.estimator.EstimatorSpec(
mode, predictions=None, loss=None, train_op=None, eval_metric_ops=None,
export_outputs=None, training_chief_hooks=None, training_hooks=None,
scaffold=None, evaluation_hooks=None, prediction_hooks=None
)
EstimatorSpec fully defines the model to be run by an Estimator.
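Example (a minimal model_fn sketch returning an EstimatorSpec for all three modes; the feature key 'x', layer sizes, and optimizer choice are illustrative assumptions):
import tensorflow as tf

def model_fn(features, labels, mode):
  # labels are assumed to be integer class ids of shape [batch_size].
  logits = tf.compat.v1.layers.dense(features['x'], 2)
  predictions = {'logits': logits, 'classes': tf.argmax(logits, axis=1)}
  if mode == tf.estimator.ModeKeys.PREDICT:
    return tf.estimator.EstimatorSpec(mode, predictions=predictions)
  loss = tf.compat.v1.losses.sparse_softmax_cross_entropy(labels, logits)
  if mode == tf.estimator.ModeKeys.EVAL:
    # A (metric_tensor, update_op) tuple, as expected by eval_metric_ops.
    accuracy = tf.compat.v1.metrics.accuracy(labels, predictions['classes'])
    return tf.estimator.EstimatorSpec(
        mode, loss=loss, eval_metric_ops={'accuracy': accuracy})
  optimizer = tf.compat.v1.train.AdagradOptimizer(learning_rate=0.05)
  train_op = optimizer.minimize(
      loss, global_step=tf.compat.v1.train.get_global_step())
  return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

estimator = tf.estimator.Estimator(model_fn=model_fn)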
Args
mode A ModeKeys. Specifies if this is training, evaluation or prediction.
predictions Predictions Tensor or dict of Tensor.
loss Training loss Tensor. Must be either scalar, or with shape [1].
train_op Op for the training step.
eval_metric_ops Dict of metric results keyed by name. The values of the dict can be one of the following: (1) instance of Metric class. (2) Results of calling a metric function, namely a (metric_tensor, update_op) tuple. metric_tensor should be evaluated without any impact on state (typically is a pure computation results based on variables.). For example, it should not trigger the update_op or requires any input fetching.
export_outputs Describes the output signatures to be exported to SavedModel and used during serving. A dict {name: output} where: name: An arbitrary name for this output. output: an ExportOutput object such as ClassificationOutput, RegressionOutput, or PredictOutput. Single-headed models only need to specify one entry in this dictionary. Multi-headed models should specify one entry for each head, one of which must be named using tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY. If no entry is provided, a default PredictOutput mapping to predictions will be created.
training_chief_hooks Iterable of tf.train.SessionRunHook objects to run on the chief worker during training.
training_hooks Iterable of tf.train.SessionRunHook objects to run on all workers during training.
scaffold A tf.train.Scaffold object that can be used to set initialization, saver, and more to be used in training.
evaluation_hooks Iterable of tf.train.SessionRunHook objects to run during evaluation.
prediction_hooks Iterable of tf.train.SessionRunHook objects to run during predictions.
Raises
ValueError If validation fails.
TypeError If any of the arguments is not the expected type.
Attributes
mode
predictions
loss
train_op
eval_metric_ops
export_outputs
training_chief_hooks
training_hooks
scaffold
evaluation_hooks
prediction_hooks | tensorflow.estimator.estimatorspec |
tf.estimator.EvalSpec View source on GitHub Configuration for the "eval" part for the train_and_evaluate call. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.EvalSpec
tf.estimator.EvalSpec(
input_fn, steps=100, name=None, hooks=None, exporters=None,
start_delay_secs=120, throttle_secs=600
)
EvalSpec combines details of evaluation of the trained model as well as its export. Evaluation consists of computing metrics to judge the performance of the trained model. Export writes out the trained model on to external storage.
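Typical usage pairs an EvalSpec with a TrainSpec (a sketch; estimator, train_input_fn, and eval_input_fn are assumed to be defined as for Estimator.train/evaluate):
import tensorflow as tf

train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=1000)
eval_spec = tf.estimator.EvalSpec(
    input_fn=eval_input_fn,
    steps=100,             # evaluate on 100 batches per evaluation
    start_delay_secs=120,  # wait 2 minutes before the first evaluation
    throttle_secs=600)     # and at least 10 minutes between evaluations
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)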
Args
input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A 'tf.data.Dataset' object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor.
steps Int. Positive number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception. See Estimator.evaluate for details.
name String. Name of the evaluation if user needs to run multiple evaluations on different data sets. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
hooks Iterable of tf.train.SessionRunHook objects to run during evaluation.
exporters Iterable of Exporters, or a single one, or None. exporters will be invoked after each evaluation.
start_delay_secs Int. Start evaluating after waiting for this many seconds.
throttle_secs Int. Do not re-evaluate unless the last evaluation was started at least this many seconds ago. Note that evaluation does not occur if no new checkpoints are available; hence, this is a minimum interval.
Raises
ValueError If any of the input arguments is invalid.
TypeError If any of the arguments is not of the expected type.
Attributes
input_fn
steps
name
hooks
exporters
start_delay_secs
throttle_secs | tensorflow.estimator.evalspec |
Module: tf.estimator.experimental Public API for tf.estimator.experimental namespace. Classes class InMemoryEvaluatorHook: Hook to run evaluation in training without a checkpoint. class LinearSDCA: Stochastic Dual Coordinate Ascent helper for linear estimators. class RNNClassifier: A classifier for TensorFlow RNN models. class RNNEstimator: An Estimator for TensorFlow RNN models with user-specified head. Functions build_raw_supervised_input_receiver_fn(...): Build a supervised_input_receiver_fn for raw features and labels. call_logit_fn(...): Calls logit_fn (experimental). make_early_stopping_hook(...): Creates early-stopping hook. make_stop_at_checkpoint_step_hook(...): Creates a proper StopAtCheckpointStepHook based on chief status. stop_if_higher_hook(...): Creates hook to stop if the given metric is higher than the threshold. stop_if_lower_hook(...): Creates hook to stop if the given metric is lower than the threshold. stop_if_no_decrease_hook(...): Creates hook to stop if metric does not decrease within given max steps. stop_if_no_increase_hook(...): Creates hook to stop if metric does not increase within given max steps. | tensorflow.estimator.experimental |
tf.estimator.experimental.build_raw_supervised_input_receiver_fn Build a supervised_input_receiver_fn for raw features and labels. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.experimental.build_raw_supervised_input_receiver_fn
tf.estimator.experimental.build_raw_supervised_input_receiver_fn(
features, labels, default_batch_size=None
)
This function wraps tensor placeholders in a supervised_receiver_fn with the expectation that the features and labels appear precisely as the model_fn expects them. Features and labels can therefore be dicts of tensors, or raw tensors.
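Example (a minimal sketch using graph-mode placeholders; the feature name 'x' and all shapes are illustrative assumptions):
import tensorflow as tf

with tf.Graph().as_default():
  features = {'x': tf.compat.v1.placeholder(tf.float32, shape=[None, 5])}
  labels = tf.compat.v1.placeholder(tf.int64, shape=[None, 1])
  receiver_fn = tf.estimator.experimental.build_raw_supervised_input_receiver_fn(
      features, labels)
The resulting receiver_fn can then serve, for example, as a TRAIN or EVAL entry in the input_receiver_fn_map of experimental_export_all_saved_models.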
Args
features a dict of string to Tensor or Tensor.
labels a dict of string to Tensor or Tensor.
default_batch_size the number of query examples expected per batch. Leave unset for variable batch size (recommended).
Returns A supervised_input_receiver_fn.
Raises
ValueError if features and labels have overlapping keys. | tensorflow.estimator.experimental.build_raw_supervised_input_receiver_fn |
tf.estimator.experimental.call_logit_fn View source on GitHub Calls logit_fn (experimental). View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.experimental.call_logit_fn
tf.estimator.experimental.call_logit_fn(
logit_fn, features, mode, params, config
)
THIS FUNCTION IS EXPERIMENTAL. Keras layers/models are the recommended APIs for logit and model composition. A utility function that calls the provided logit_fn with the relevant subset of provided arguments. Similar to tf.estimator._call_model_fn().
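Example (a minimal sketch; the logit_fn below declares only features, and call_logit_fn passes just the arguments the function's signature names):
import tensorflow as tf

def my_logit_fn(features):
  # An illustrative single dense projection to 2 logits.
  return tf.compat.v1.layers.dense(features['x'], 2)

logits = tf.estimator.experimental.call_logit_fn(
    my_logit_fn,
    features={'x': tf.zeros([4, 5])},
    mode=tf.estimator.ModeKeys.PREDICT,
    params={},
    config=None)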
Args
logit_fn A logit_fn as defined above.
features The features dict.
mode TRAIN / EVAL / PREDICT ModeKeys.
params The hyperparameter dict.
config The configuration object.
Returns A logit Tensor, the output of logit_fn.
Raises
ValueError if logit_fn does not return a Tensor or a dictionary mapping strings to Tensors. | tensorflow.estimator.experimental.call_logit_fn |
tf.estimator.experimental.InMemoryEvaluatorHook View source on GitHub Hook to run evaluation in training without a checkpoint. Inherits From: SessionRunHook View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook
tf.estimator.experimental.InMemoryEvaluatorHook(
estimator, input_fn, steps=None, hooks=None, name=None, every_n_iter=100
)
Example: def train_input_fn():
...
return train_dataset
def eval_input_fn():
...
return eval_dataset
estimator = tf.estimator.DNNClassifier(...)
evaluator = tf.estimator.experimental.InMemoryEvaluatorHook(
estimator, eval_input_fn)
estimator.train(train_input_fn, hooks=[evaluator])
Current limitations of this approach are: It doesn't support multi-node distributed mode. It doesn't support saveable objects other than variables (such as boosted tree support). It doesn't support custom saver logic (such as ExponentialMovingAverage support).
Args
estimator A tf.estimator.Estimator instance to call evaluate.
input_fn Equivalent to the input_fn arg to estimator.evaluate. A function that constructs the input data for evaluation. See Creating input functions for more information. The function should construct and return one of the following: A 'tf.data.Dataset' object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Equivalent to the steps arg to estimator.evaluate. Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
hooks Equivalent to the hooks arg to estimator.evaluate. List of SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
name Equivalent to the name arg to estimator.evaluate. Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
every_n_iter int, runs the evaluator once every N training iterations.
Raises
ValueError if every_n_iter is non-positive or if this is not single-machine training. Methods after_create_session View source
after_create_session(
session, coord
)
Does first run which shows the eval metrics before training. after_run View source
after_run(
run_context, run_values
)
Runs evaluator. before_run View source
before_run(
run_context
)
Called before each call to run(). You can return from this call a SessionRunArgs object indicating ops or tensors to add to the upcoming run() call. These ops/tensors will be run together with the ops/tensors originally passed to the original run() call. The run args you return can also contain feeds to be added to the run() call. The run_context argument is a SessionRunContext that provides information about the upcoming run() call: the originally requested ops/tensors and the TensorFlow Session. At this point the graph is finalized and you cannot add ops.
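For instance, a custom hook might request an extra fetch in before_run and read it back in after_run (a sketch):
import tensorflow as tf

class GlobalStepLoggingHook(tf.estimator.SessionRunHook):
  def begin(self):
    # The graph is still modifiable here; capture the global step tensor.
    self._global_step = tf.compat.v1.train.get_global_step()

  def before_run(self, run_context):
    # Ask the upcoming run() call to also fetch the global step.
    return tf.estimator.SessionRunArgs(fetches={'step': self._global_step})

  def after_run(self, run_context, run_values):
    print('global step:', run_values.results['step'])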
Args
run_context A SessionRunContext object.
Returns None or a SessionRunArgs object.
begin View source
begin()
Build eval graph and restoring op. end View source
end(
session
)
Runs evaluator for final model. | tensorflow.estimator.experimental.inmemoryevaluatorhook |
tf.estimator.experimental.LinearSDCA View source on GitHub Stochastic Dual Coordinate Ascent helper for linear estimators. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.experimental.LinearSDCA
tf.estimator.experimental.LinearSDCA(
example_id_column, num_loss_partitions=1, num_table_shards=None,
symmetric_l1_regularization=0.0, symmetric_l2_regularization=1.0, adaptive=False
)
Objects of this class are intended to be provided as the optimizer argument (though LinearSDCA objects do not implement the tf.train.Optimizer interface) when creating tf.estimator.LinearClassifier or tf.estimator.LinearRegressor. SDCA can only be used with LinearClassifier and LinearRegressor under the following conditions: Feature columns are of type V2. Multivalent categorical columns are not normalized. In other words the sparse_combiner argument in the estimator constructor should be "sum". For classification: binary label. For regression: one-dimensional label. Example usage: real_feature_column = numeric_column(...)
sparse_feature_column = categorical_column_with_hash_bucket(...)
linear_sdca = tf.estimator.experimental.LinearSDCA(
example_id_column='example_id',
num_loss_partitions=1,
num_table_shards=1,
symmetric_l2_regularization=2.0)
classifier = tf.estimator.LinearClassifier(
feature_columns=[real_feature_column, sparse_feature_column],
weight_column=...,
optimizer=linear_sdca)
classifier.train(input_fn_train, steps=50)
classifier.evaluate(input_fn=input_fn_eval)
Here the expectation is that the input_fn_* functions passed to train and evaluate return a pair (dict, label_tensor) where dict has example_id_column as key whose value is a Tensor of shape [batch_size] and dtype string. num_loss_partitions defines sigma' in eq (11) of [3]. Convergence of (global) loss is guaranteed if num_loss_partitions is greater than or equal to the product (#concurrent train ops/per worker) x (#workers). Larger values for num_loss_partitions lead to slower convergence. The recommended value for num_loss_partitions in tf.estimator (where currently there is one process per worker) is the number of workers running the train steps. It defaults to 1 (single machine). num_table_shards defines the number of shards for the internal state table, typically set to match the number of parameter servers for large data sets. The SDCA algorithm was originally introduced in [1] and it was followed by the L1 proximal step [2], a distributed version [3] and adaptive sampling [4]. [1] www.jmlr.org/papers/volume14/shalev-shwartz13a/shalev-shwartz13a.pdf [2] https://arxiv.org/pdf/1309.2375.pdf [3] https://arxiv.org/pdf/1502.03508.pdf [4] https://arxiv.org/pdf/1502.08053.pdf Details specific to this implementation are provided in: https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/canned/linear_optimizer/doc/sdca.ipynb
Args
example_id_column The column name containing the example ids.
num_loss_partitions Number of workers.
num_table_shards Number of shards of the internal state table, typically set to match the number of parameter servers.
symmetric_l1_regularization A float value, must be greater than or equal to zero.
symmetric_l2_regularization A float value, must be greater than zero and should typically be greater than 1.
adaptive A boolean indicating whether to use adaptive sampling. Methods get_train_step View source
get_train_step(
state_manager, weight_column_name, loss_type, feature_columns, features,
targets, bias_var, global_step
)
Returns the training operation of an SdcaModel optimizer. | tensorflow.estimator.experimental.linearsdca |
tf.estimator.experimental.make_early_stopping_hook View source on GitHub Creates early-stopping hook. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.experimental.make_early_stopping_hook
tf.estimator.experimental.make_early_stopping_hook(
estimator, should_stop_fn, run_every_secs=60, run_every_steps=None
)
Returns a SessionRunHook that stops training when should_stop_fn returns True. Usage example: estimator = ...
hook = early_stopping.make_early_stopping_hook(
estimator, should_stop_fn=make_stop_fn(...))
train_spec = tf.estimator.TrainSpec(..., hooks=[hook])
tf.estimator.train_and_evaluate(estimator, train_spec, ...)
Caveat: The current implementation supports early-stopping of both training and evaluation in local mode. In distributed mode, training can be stopped, but evaluation (where it's a separate job) will indefinitely wait for new model checkpoints to evaluate, so you will need other means to detect and stop it. Early-stopping evaluation in distributed mode requires changes in the train_and_evaluate API and will be addressed in a future revision.
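For example, a should_stop_fn might watch for an external signal (a sketch; the flag-file path is an illustrative assumption):
import os
import tensorflow as tf

def should_stop_fn():
  return os.path.exists('/tmp/stop_training')  # touch this file to stop

hook = tf.estimator.experimental.make_early_stopping_hook(
    estimator, should_stop_fn=should_stop_fn, run_every_secs=60)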
Args
estimator A tf.estimator.Estimator instance.
should_stop_fn callable, function that takes no arguments and returns a bool. If the function returns True, stopping will be initiated by the chief.
run_every_secs If specified, calls should_stop_fn at an interval of run_every_secs seconds. Defaults to 60 seconds. Either this or run_every_steps must be set.
run_every_steps If specified, calls should_stop_fn every run_every_steps steps. Either this or run_every_secs must be set.
Returns A SessionRunHook that periodically executes should_stop_fn and initiates early stopping if the function returns True.
Raises
TypeError If estimator is not of type tf.estimator.Estimator.
ValueError If both run_every_secs and run_every_steps are set. | tensorflow.estimator.experimental.make_early_stopping_hook |
tf.estimator.experimental.make_stop_at_checkpoint_step_hook View source on GitHub Creates a proper StopAtCheckpointStepHook based on chief status. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.experimental.make_stop_at_checkpoint_step_hook
tf.estimator.experimental.make_stop_at_checkpoint_step_hook(
estimator, last_step, wait_after_file_check_secs=30
)
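A minimal usage sketch (assumes estimator is a tf.estimator.Estimator and train_input_fn is defined as for Estimator.train; the hook stops training once a checkpoint at or past last_step is observed):
hook = tf.estimator.experimental.make_stop_at_checkpoint_step_hook(
    estimator, last_step=1000)
estimator.train(input_fn=train_input_fn, hooks=[hook]) | tensorflow.estimator.experimental.make_stop_at_checkpoint_step_hook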
tf.estimator.experimental.RNNClassifier A classifier for TensorFlow RNN models. Inherits From: RNNEstimator, Estimator
tf.estimator.experimental.RNNClassifier(
sequence_feature_columns, context_feature_columns=None, units=None,
cell_type=USE_DEFAULT, rnn_cell_fn=None, return_sequences=False, model_dir=None,
n_classes=2, weight_column=None, label_vocabulary=None,
optimizer='Adagrad',
loss_reduction=losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE,
sequence_mask='sequence_mask', config=None
)
Trains a recurrent neural network model to classify instances into one of multiple classes. Example: token_sequence = sequence_categorical_column_with_hash_bucket(...)
token_emb = embedding_column(categorical_column=token_sequence, ...)
estimator = RNNClassifier(
sequence_feature_columns=[token_emb],
units=[32, 16], cell_type='lstm')
# Input builders
def input_fn_train(): # returns x, y
pass
estimator.train(input_fn=input_fn_train, steps=100)
def input_fn_eval(): # returns x, y
pass
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
def input_fn_predict(): # returns x, None
pass
predictions = estimator.predict(input_fn=input_fn_predict)
The input to train and evaluate should contain the following features; otherwise there will be a KeyError: if weight_column is not None, a feature with key=weight_column whose value is a Tensor. for each column in sequence_feature_columns: a feature with key=column.name whose value is a SparseTensor.
for each column in context_feature_columns: if column is a CategoricalColumn, a feature with key=column.name whose value is a SparseTensor. if column is a WeightedCategoricalColumn, two features: the first with key the id column name, the second with key the weight column name. Both features' value must be a SparseTensor. if column is a DenseColumn, a feature with key=column.name whose value is a Tensor.
Loss is calculated by using softmax cross entropy.
Args
sequence_feature_columns An iterable containing the FeatureColumns that represent sequential input. All items in the set should either be sequence columns (e.g. sequence_numeric_column) or constructed from one (e.g. embedding_column with sequence_categorical_column_* as input).
context_feature_columns An iterable containing the FeatureColumns for contextual input. The data represented by these columns will be replicated and given to the RNN at each timestep. These columns must be instances of classes derived from DenseColumn such as numeric_column, not the sequential variants.
units Iterable of integer number of hidden units per RNN layer. If set, cell_type must also be specified and rnn_cell_fn must be None.
cell_type A class producing a RNN cell or a string specifying the cell type. Supported strings are: 'simple_rnn', 'lstm', and 'gru'. If set, units must also be specified and rnn_cell_fn must be None.
rnn_cell_fn A function that returns a RNN cell instance that will be used to construct the RNN. If set, units and cell_type cannot be set. This is for advanced users who need additional customization beyond units and cell_type. Note that tf.keras.layers.StackedRNNCells is needed for stacked RNNs.
return_sequences A boolean indicating whether to return the last output in the output sequence, or the full sequence. Note that if True, weight_column must be None or a string.
model_dir Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.
n_classes Number of label classes. Defaults to 2, namely binary classification. Must be > 1.
weight_column A string or a NumericColumn created by tf.feature_column.numeric_column defining feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example. If it is a string, it is used as a key to fetch weight tensor from the features. If it is a NumericColumn, raw tensor is fetched by key weight_column.key, then weight_column.normalizer_fn is applied on it to get weight tensor.
label_vocabulary A list of strings representing possible label values. If given, labels must be of string type and take values in label_vocabulary. If it is not given, labels must already be encoded as integers or floats within [0, 1] for n_classes=2, or as integer values in {0, 1, ..., n_classes-1} for n_classes>2. Errors are raised if the vocabulary is not provided and the labels are strings.
optimizer An instance of tf.Optimizer or string specifying optimizer type. Defaults to Adagrad optimizer.
loss_reduction One of tf.losses.Reduction except NONE. Describes how to reduce training loss over batch. Defaults to SUM_OVER_BATCH_SIZE.
sequence_mask A string with the name of the sequence mask tensor. If sequence_mask is in the features dictionary, the provided tensor is used, otherwise the sequence mask is computed from the length of sequential features. The sequence mask is used in evaluation and training mode to aggregate loss and metrics computation while excluding padding steps. It is also added to the predictions dictionary in prediction mode to indicate which steps are padding.
config RunConfig object to configure the runtime settings.
Raises
ValueError If units, cell_type, and rnn_cell_fn are not compatible. Eager Compatibility Estimators are not compatible with eager execution.
Attributes
config
model_dir
model_fn Returns the model_fn which is bound to self.params.
params
Methods eval_dir View source
eval_dir(
name=None
)
Shows the directory name where evaluation metrics are dumped.
Args
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A string which is the path of the directory containing the evaluation metrics.
evaluate View source
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until:
steps batches are processed, or
input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError If steps <= 0. experimental_export_all_saved_models View source
experimental_export_all_saved_models(
export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False,
checkpoint_path=None
)
Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source
export_saved_model(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, experimental_mode=ModeKeys.PREDICT
)
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. export_savedmodel View source
export_savedmodel(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, strip_default_attrs=False
)
Exports inference graph as a SavedModel into the given dir. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
strip_default_attrs Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source
get_variable_names()
Returns list of all variable names in this model.
Returns List of names.
Raises
ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source
get_variable_value(
name
)
Returns value of the variable given by name.
Args
name A string or a list of strings: the name of the tensor.
Returns Numpy array - value of the tensor.
Raises
ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns The full path to the latest checkpoint or None if no checkpoint was found.
predict View source
predict(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None,
yield_single_examples=True
)
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
Args
input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following:
tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields Evaluated values of predictions tensors.
Raises
ValueError If the batch lengths of predictions are not all the same and yield_single_examples is True.
ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
Trains a model given training data input_fn.
Args
input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps Number of steps for which to train the model. If None, train forever or until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally: if you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set max_steps instead. If set, max_steps must be None.
max_steps Number of total steps for which to train the model. If None, train forever or until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) mean 200 training iterations. On the other hand, two calls to train(max_steps=100) mean that the second call will not do any iterations, since the first call did all 100 steps.
saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns self, for chaining.
Raises
ValueError If both steps and max_steps are not None.
ValueError If either steps or max_steps <= 0. | tensorflow.estimator.experimental.rnnclassifier |
tf.estimator.experimental.RNNEstimator An Estimator for TensorFlow RNN models with user-specified head. Inherits From: Estimator
tf.estimator.experimental.RNNEstimator(
head, sequence_feature_columns, context_feature_columns=None, units=None,
cell_type=USE_DEFAULT, rnn_cell_fn=None, return_sequences=False, model_dir=None,
optimizer='Adagrad', config=None
)
Example: token_sequence = sequence_categorical_column_with_hash_bucket(...)
token_emb = embedding_column(categorical_column=token_sequence, ...)
estimator = RNNEstimator(
head=tf.estimator.RegressionHead(),
sequence_feature_columns=[token_emb],
units=[32, 16], cell_type='lstm')
# Or with custom RNN cell:
def rnn_cell_fn(_):
cells = [ tf.keras.layers.LSTMCell(size) for size in [32, 16] ]
return tf.keras.layers.StackedRNNCells(cells)
estimator = RNNEstimator(
head=tf.estimator.RegressionHead(),
sequence_feature_columns=[token_emb],
rnn_cell_fn=rnn_cell_fn)
# Input builders
def input_fn_train(): # returns x, y
pass
estimator.train(input_fn=input_fn_train, steps=100)
def input_fn_eval(): # returns x, y
pass
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
def input_fn_predict(): # returns x, None
pass
predictions = estimator.predict(input_fn=input_fn_predict)
The input to train and evaluate should contain the following features; otherwise there will be a KeyError: if the head's weight_column is not None, a feature with key=weight_column whose value is a Tensor. for each column in sequence_feature_columns: a feature with key=column.name whose value is a SparseTensor.
for each column in context_feature_columns: if column is a CategoricalColumn, a feature with key=column.name whose value is a SparseTensor. if column is a WeightedCategoricalColumn, two features: the first with key the id column name, the second with key the weight column name. Both features' value must be a SparseTensor. if column is a DenseColumn, a feature with key=column.name whose value is a Tensor.
Loss and predicted output are determined by the specified head.
Args
head A Head instance. This specifies the model's output and loss function to be optimized.
sequence_feature_columns An iterable containing the FeatureColumns that represent sequential input. All items in the set should either be sequence columns (e.g. sequence_numeric_column) or constructed from one (e.g. embedding_column with sequence_categorical_column_* as input).
context_feature_columns An iterable containing the FeatureColumns for contextual input. The data represented by these columns will be replicated and given to the RNN at each timestep. These columns must be instances of classes derived from DenseColumn such as numeric_column, not the sequential variants.
units Iterable of integer number of hidden units per RNN layer. If set, cell_type must also be specified and rnn_cell_fn must be None.
cell_type A class producing a RNN cell or a string specifying the cell type. Supported strings are: 'simple_rnn', 'lstm', and 'gru'. If set, units must also be specified and rnn_cell_fn must be None.
rnn_cell_fn A function that returns a RNN cell instance that will be used to construct the RNN. If set, units and cell_type cannot be set. This is for advanced users who need additional customization beyond units and cell_type. Note that tf.keras.layers.StackedRNNCells is needed for stacked RNNs.
return_sequences A boolean indicating whether to return the last output in the output sequence, or the full sequence.
model_dir Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.
optimizer An instance of tf.Optimizer or string specifying optimizer type. Defaults to Adagrad optimizer.
config RunConfig object to configure the runtime settings.
Raises
ValueError If units, cell_type, and rnn_cell_fn are not compatible. Eager Compatibility Estimators are not compatible with eager execution.
Attributes
config
model_dir
model_fn Returns the model_fn which is bound to self.params.
params
Methods eval_dir View source
eval_dir(
name=None
)
Shows the directory name where evaluation metrics are dumped.
Args
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A string which is the path of the directory containing the evaluation metrics.
evaluate View source
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until:
steps batches are processed, or
input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError If steps <= 0. experimental_export_all_saved_models View source
experimental_export_all_saved_models(
export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False,
checkpoint_path=None
)
Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source
export_saved_model(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, experimental_mode=ModeKeys.PREDICT
)
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
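For example, a minimal sketch (assuming a trained estimator whose model consumes a single float feature named 'x'; the name and spec are hypothetical):
import tensorflow as tf

feature_spec = {'x': tf.io.FixedLenFeature([5], tf.float32)}  # hypothetical
serving_input_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))

# Returns the timestamped export directory as a bytes object.
export_path = estimator.export_saved_model(
    export_dir_base='/tmp/exported_model',
    serving_input_receiver_fn=serving_input_receiver_fn)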
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. export_savedmodel View source
export_savedmodel(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, strip_default_attrs=False
)
Exports inference graph as a SavedModel into the given dir. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
strip_default_attrs Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source
get_variable_names()
Returns list of all variable names in this model.
Returns List of names.
Raises
ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source
get_variable_value(
name
)
Returns value of the variable given by name.
Args
name string or a list of string, name of the tensor.
Returns Numpy array - value of the tensor.
Raises
ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns The full path to the latest checkpoint or None if no checkpoint was found.
predict View source
predict(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None,
yield_single_examples=True
)
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
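A minimal usage sketch (assuming a trained estimator and a hypothetical input_fn_predict):
# With the default yield_single_examples=True, each yielded value is a
# dict of numpy arrays for one example, keyed as in
# tf.estimator.EstimatorSpec.predictions.
for prediction in estimator.predict(input_fn=input_fn_predict):
    print(prediction)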
Args
input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following:
A tf.data.Dataset object: Outputs of the Dataset object must have the same constraints as below.
features: A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
A tuple, in which case the first item is extracted as features.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used, the rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields Evaluated values of predictions tensors.
Raises
ValueError If the batch length of predictions is not the same across outputs and yield_single_examples is True.
ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
Trains a model given training data input_fn.
Args
input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally: if you call train(steps=10) twice, training occurs for a total of 20 steps. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set max_steps instead. If set, max_steps must be None.
max_steps Number of total steps for which to train model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iteration since first call did all 100 steps.
saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns self, for chaining.
Raises
ValueError If both steps and max_steps are not None.
ValueError If either steps or max_steps <= 0.
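To illustrate the steps vs. max_steps semantics described above, a minimal sketch (estimator and input_fn_train are hypothetical):
# steps is incremental: these two calls train for 20 steps in total.
estimator.train(input_fn=input_fn_train, steps=10)
estimator.train(input_fn=input_fn_train, steps=10)

# max_steps is absolute: the first call trains until the global step
# reaches 100; the second call is then a no-op.
estimator.train(input_fn=input_fn_train, max_steps=100)
estimator.train(input_fn=input_fn_train, max_steps=100)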
tf.estimator.experimental.stop_if_higher_hook View source on GitHub Creates hook to stop if the given metric is higher than the threshold. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.experimental.stop_if_higher_hook
tf.estimator.experimental.stop_if_higher_hook(
estimator, metric_name, threshold, eval_dir=None, min_steps=0,
run_every_secs=60, run_every_steps=None
)
Usage example: estimator = ...
# Hook to stop training if accuracy becomes higher than 0.9.
hook = tf.estimator.experimental.stop_if_higher_hook(estimator, "accuracy", 0.9)
train_spec = tf.estimator.TrainSpec(..., hooks=[hook])
tf.estimator.train_and_evaluate(estimator, train_spec, ...)
Caveat: Current implementation supports early-stopping both training and evaluation in local mode. In distributed mode, training can be stopped but evaluation (where it's a separate job) will indefinitely wait for new model checkpoints to evaluate, so you will need other means to detect and stop it. Early-stopping evaluation in distributed mode requires changes in train_and_evaluate API and will be addressed in a future revision.
Args
estimator A tf.estimator.Estimator instance.
metric_name str, metric to track. "loss", "accuracy", etc.
threshold Numeric threshold for the given metric.
eval_dir If set, directory containing summary files with eval metrics. By default, estimator.eval_dir() will be used.
min_steps int, stop is never requested if global step is less than this value. Defaults to 0.
run_every_secs If specified, calls should_stop_fn at an interval of run_every_secs seconds. Defaults to 60 seconds. Either this or run_every_steps must be set.
run_every_steps If specified, calls should_stop_fn every run_every_steps steps. Either this or run_every_secs must be set.
Returns An early-stopping hook of type SessionRunHook that periodically checks if the given metric is higher than specified threshold and initiates early stopping if true.
tf.estimator.experimental.stop_if_lower_hook View source on GitHub Creates hook to stop if the given metric is lower than the threshold. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.experimental.stop_if_lower_hook
tf.estimator.experimental.stop_if_lower_hook(
estimator, metric_name, threshold, eval_dir=None, min_steps=0,
run_every_secs=60, run_every_steps=None
)
Usage example: estimator = ...
# Hook to stop training if loss becomes lower than 100.
hook = tf.estimator.experimental.stop_if_lower_hook(estimator, "loss", 100)
train_spec = tf.estimator.TrainSpec(..., hooks=[hook])
tf.estimator.train_and_evaluate(estimator, train_spec, ...)
Caveat: Current implementation supports early-stopping both training and evaluation in local mode. In distributed mode, training can be stopped but evaluation (where it's a separate job) will indefinitely wait for new model checkpoints to evaluate, so you will need other means to detect and stop it. Early-stopping evaluation in distributed mode requires changes in train_and_evaluate API and will be addressed in a future revision.
Args
estimator A tf.estimator.Estimator instance.
metric_name str, metric to track. "loss", "accuracy", etc.
threshold Numeric threshold for the given metric.
eval_dir If set, directory containing summary files with eval metrics. By default, estimator.eval_dir() will be used.
min_steps int, stop is never requested if global step is less than this value. Defaults to 0.
run_every_secs If specified, calls should_stop_fn at an interval of run_every_secs seconds. Defaults to 60 seconds. Either this or run_every_steps must be set.
run_every_steps If specified, calls should_stop_fn every run_every_steps steps. Either this or run_every_secs must be set.
Returns An early-stopping hook of type SessionRunHook that periodically checks if the given metric is lower than specified threshold and initiates early stopping if true.
tf.estimator.experimental.stop_if_no_decrease_hook View source on GitHub Creates hook to stop if metric does not decrease within given max steps. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.experimental.stop_if_no_decrease_hook
tf.estimator.experimental.stop_if_no_decrease_hook(
estimator, metric_name, max_steps_without_decrease, eval_dir=None, min_steps=0,
run_every_secs=60, run_every_steps=None
)
Usage example: estimator = ...
# Hook to stop training if loss does not decrease in over 100000 steps.
hook = tf.estimator.experimental.stop_if_no_decrease_hook(estimator, "loss", 100000)
train_spec = tf.estimator.TrainSpec(..., hooks=[hook])
tf.estimator.train_and_evaluate(estimator, train_spec, ...)
Caveat: Current implementation supports early-stopping both training and evaluation in local mode. In distributed mode, training can be stopped but evaluation (where it's a separate job) will indefinitely wait for new model checkpoints to evaluate, so you will need other means to detect and stop it. Early-stopping evaluation in distributed mode requires changes in train_and_evaluate API and will be addressed in a future revision.
Args
estimator A tf.estimator.Estimator instance.
metric_name str, metric to track. "loss", "accuracy", etc.
max_steps_without_decrease int, maximum number of training steps with no decrease in the given metric.
eval_dir If set, directory containing summary files with eval metrics. By default, estimator.eval_dir() will be used.
min_steps int, stop is never requested if global step is less than this value. Defaults to 0.
run_every_secs If specified, calls should_stop_fn at an interval of run_every_secs seconds. Defaults to 60 seconds. Either this or run_every_steps must be set.
run_every_steps If specified, calls should_stop_fn every run_every_steps steps. Either this or run_every_secs must be set.
Returns An early-stopping hook of type SessionRunHook that periodically checks if the given metric shows no decrease over given maximum number of training steps, and initiates early stopping if true.
tf.estimator.experimental.stop_if_no_increase_hook View source on GitHub Creates hook to stop if metric does not increase within given max steps. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.experimental.stop_if_no_increase_hook
tf.estimator.experimental.stop_if_no_increase_hook(
estimator, metric_name, max_steps_without_increase, eval_dir=None, min_steps=0,
run_every_secs=60, run_every_steps=None
)
Usage example: estimator = ...
# Hook to stop training if accuracy does not increase in over 100000 steps.
hook = tf.estimator.experimental.stop_if_no_increase_hook(estimator, "accuracy", 100000)
train_spec = tf.estimator.TrainSpec(..., hooks=[hook])
tf.estimator.train_and_evaluate(estimator, train_spec, ...)
Caveat: Current implementation supports early-stopping both training and evaluation in local mode. In distributed mode, training can be stopped but evaluation (where it's a separate job) will indefinitely wait for new model checkpoints to evaluate, so you will need other means to detect and stop it. Early-stopping evaluation in distributed mode requires changes in train_and_evaluate API and will be addressed in a future revision.
Args
estimator A tf.estimator.Estimator instance.
metric_name str, metric to track. "loss", "accuracy", etc.
max_steps_without_increase int, maximum number of training steps with no increase in the given metric.
eval_dir If set, directory containing summary files with eval metrics. By default, estimator.eval_dir() will be used.
min_steps int, stop is never requested if global step is less than this value. Defaults to 0.
run_every_secs If specified, calls should_stop_fn at an interval of run_every_secs seconds. Defaults to 60 seconds. Either this or run_every_steps must be set.
run_every_steps If specified, calls should_stop_fn every run_every_steps steps. Either this or run_every_secs must be set.
Returns An early-stopping hook of type SessionRunHook that periodically checks if the given metric shows no increase over given maximum number of training steps, and initiates early stopping if true.
Module: tf.estimator.export All public utility methods for exporting Estimator to SavedModel. This file includes functions and constants from core (model_utils) and export.py.
Classes
class ClassificationOutput: Represents the output of a classification head.
class ExportOutput: Represents an output of a model that can be served.
class PredictOutput: Represents the output of a generic prediction head.
class RegressionOutput: Represents the output of a regression head.
class ServingInputReceiver: A return type for a serving_input_receiver_fn.
class TensorServingInputReceiver: A return type for a serving_input_receiver_fn.
Functions
build_parsing_serving_input_receiver_fn(...): Build a serving_input_receiver_fn expecting fed tf.Examples.
build_raw_serving_input_receiver_fn(...): Build a serving_input_receiver_fn expecting feature Tensors.
tf.estimator.export.build_parsing_serving_input_receiver_fn View source on GitHub Build a serving_input_receiver_fn expecting fed tf.Examples. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.export.build_parsing_serving_input_receiver_fn
tf.estimator.export.build_parsing_serving_input_receiver_fn(
feature_spec, default_batch_size=None
)
Creates a serving_input_receiver_fn that expects a serialized tf.Example fed into a string placeholder. The function parses the tf.Example according to the provided feature_spec, and returns all parsed Tensors as features.
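A minimal sketch (feature names and specs are hypothetical):
import tensorflow as tf

feature_spec = {
    'age': tf.io.FixedLenFeature([1], tf.float32),
    'query': tf.io.VarLenFeature(tf.string),
}
serving_input_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
# The exported serving signature accepts a batch of serialized
# tf.Example protos and parses them according to feature_spec.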
Args
feature_spec a dict of string to VarLenFeature/FixedLenFeature.
default_batch_size the number of query examples expected per batch. Leave unset for variable batch size (recommended).
Returns A serving_input_receiver_fn suitable for use in serving.
tf.estimator.export.build_raw_serving_input_receiver_fn View source on GitHub Build a serving_input_receiver_fn expecting feature Tensors. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.export.build_raw_serving_input_receiver_fn
tf.estimator.export.build_raw_serving_input_receiver_fn(
features, default_batch_size=None
)
Creates a serving_input_receiver_fn that expects all features to be fed directly.
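A minimal sketch (the feature name and shape are hypothetical); because this is a graph-mode API, the placeholder comes from tf.compat.v1:
import tensorflow as tf

features = {
    'x': tf.compat.v1.placeholder(dtype=tf.float32, shape=[None, 5], name='x'),
}
serving_input_receiver_fn = (
    tf.estimator.export.build_raw_serving_input_receiver_fn(features))
# The exported signature is fed with the raw 'x' tensor directly.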
Args
features a dict of string to Tensor.
default_batch_size the number of query examples expected per batch. Leave unset for variable batch size (recommended).
Returns A serving_input_receiver_fn.
tf.estimator.export.ClassificationOutput View source on GitHub Represents the output of a classification head. Inherits From: ExportOutput View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.export.ClassificationOutput
tf.estimator.export.ClassificationOutput(
scores=None, classes=None
)
Either classes or scores or both must be set. The classes Tensor must provide string labels, not integer class IDs. If only classes is set, it is interpreted as providing top-k results in descending order. If only scores is set, it is interpreted as providing a score for every class in order of class ID. If both classes and scores are set, they are interpreted as zipped, so each score corresponds to the class at the same index. Clients should not depend on the order of the entries.
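For example, a minimal sketch of the zipped classes/scores form (the values are hypothetical):
import tensorflow as tf

# Each score corresponds to the class label at the same index.
scores = tf.constant([[0.9, 0.1]], dtype=tf.float32)
classes = tf.constant([['dog', 'cat']])  # string labels, not class IDs
output = tf.estimator.export.ClassificationOutput(
    scores=scores, classes=classes)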
Args
scores A float Tensor giving scores (sometimes but not always interpretable as probabilities) for each class. May be None, but only if classes is set. Interpretation varies; see the class doc.
classes A string Tensor giving predicted class labels. May be None, but only if scores is set. Interpretation varies; see the class doc.
Raises
ValueError if neither classes nor scores is set, or one of them is not a Tensor with the correct dtype.
Attributes
classes
scores
Methods as_signature_def View source
as_signature_def(
receiver_tensors
)
Generate a SignatureDef proto for inclusion in a MetaGraphDef. The SignatureDef will specify outputs as described in this ExportOutput, and will use the provided receiver_tensors as inputs.
Args
receiver_tensors a Tensor, or a dict of string to Tensor, specifying input nodes that will be fed.
tf.estimator.export.ExportOutput View source on GitHub Represents an output of a model that can be served. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.export.ExportOutput These typically correspond to model heads. Methods as_signature_def View source
@abc.abstractmethod
as_signature_def(
receiver_tensors
)
Generate a SignatureDef proto for inclusion in a MetaGraphDef. The SignatureDef will specify outputs as described in this ExportOutput, and will use the provided receiver_tensors as inputs.
Args
receiver_tensors a Tensor, or a dict of string to Tensor, specifying input nodes that will be fed.
tf.estimator.export.PredictOutput View source on GitHub Represents the output of a generic prediction head. Inherits From: ExportOutput View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.export.PredictOutput
tf.estimator.export.PredictOutput(
outputs
)
A generic prediction need not be either a classification or a regression. Named outputs must be provided as a dict from string to Tensor.
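A minimal sketch of wiring a PredictOutput into a model_fn's export_outputs (tensor and key names are hypothetical; only the PREDICT branch is shown):
import tensorflow as tf

def model_fn(features, labels, mode):
    # Hypothetical computation of probabilities from a feature named 'x'.
    probabilities = tf.nn.softmax(tf.keras.layers.Dense(2)(features['x']))
    predictions = {'probabilities': probabilities}
    export_outputs = {
        tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
            tf.estimator.export.PredictOutput(predictions),
    }
    # TRAIN/EVAL modes would additionally need loss, train_op, etc.
    return tf.estimator.EstimatorSpec(
        mode=mode, predictions=predictions, export_outputs=export_outputs)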
Args
outputs A Tensor or a dict of string to Tensor representing the predictions.
Raises
ValueError if the outputs is not dict, or any of its keys are not strings, or any of its values are not Tensors.
Attributes
outputs
Methods as_signature_def View source
as_signature_def(
receiver_tensors
)
Generate a SignatureDef proto for inclusion in a MetaGraphDef. The SignatureDef will specify outputs as described in this ExportOutput, and will use the provided receiver_tensors as inputs.
Args
receiver_tensors a Tensor, or a dict of string to Tensor, specifying input nodes that will be fed.
tf.estimator.export.RegressionOutput View source on GitHub Represents the output of a regression head. Inherits From: ExportOutput View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.export.RegressionOutput
tf.estimator.export.RegressionOutput(
value
)
Args
value a float Tensor giving the predicted values. Required.
Raises
ValueError if the value is not a Tensor with dtype tf.float32.
Attributes
value
Methods as_signature_def View source
as_signature_def(
receiver_tensors
)
Generate a SignatureDef proto for inclusion in a MetaGraphDef. The SignatureDef will specify outputs as described in this ExportOutput, and will use the provided receiver_tensors as inputs.
Args
receiver_tensors a Tensor, or a dict of string to Tensor, specifying input nodes that will be fed.
tf.estimator.export.ServingInputReceiver View source on GitHub A return type for a serving_input_receiver_fn. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.export.ServingInputReceiver
tf.estimator.export.ServingInputReceiver(
features, receiver_tensors, receiver_tensors_alternatives=None
)
Attributes
features A Tensor, SparseTensor, or dict of string or int to Tensor or SparseTensor, specifying the features to be passed to the model. Note: if features passed is not a dict, it will be wrapped in a dict with a single entry, using 'feature' as the key. Consequently, the model must accept a feature dict of the form {'feature': tensor}. You may use TensorServingInputReceiver if you want the tensor to be passed as is.
receiver_tensors A Tensor, SparseTensor, or dict of string to Tensor or SparseTensor, specifying input nodes where this receiver expects to be fed by default. Typically, this is a single placeholder expecting serialized tf.Example protos.
receiver_tensors_alternatives a dict of string to additional groups of receiver tensors, each of which may be a Tensor, SparseTensor, or dict of string to Tensor or SparseTensor. These named receiver tensor alternatives generate additional serving signatures, which may be used to feed inputs at different points within the input receiver subgraph. A typical usage is to allow feeding raw feature Tensors downstream of the tf.parse_example() op. Defaults to None.
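A minimal sketch of a hand-written serving_input_receiver_fn using the typical serialized-tf.Example receiver (the feature spec is hypothetical):
import tensorflow as tf

def serving_input_receiver_fn():
    serialized = tf.compat.v1.placeholder(
        dtype=tf.string, shape=[None], name='input_example_tensor')
    feature_spec = {'x': tf.io.FixedLenFeature([5], tf.float32)}
    features = tf.io.parse_example(serialized, feature_spec)
    return tf.estimator.export.ServingInputReceiver(
        features=features, receiver_tensors={'examples': serialized})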
tf.estimator.export.TensorServingInputReceiver View source on GitHub A return type for a serving_input_receiver_fn. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.export.TensorServingInputReceiver
tf.estimator.export.TensorServingInputReceiver(
features, receiver_tensors, receiver_tensors_alternatives=None
)
This is for use with models that expect a single Tensor or SparseTensor as an input feature, as opposed to a dict of features. The normal ServingInputReceiver always returns a feature dict, even if it contains only one entry, and so can be used only with models that accept such a dict. For models that accept only a single raw feature, the serving_input_receiver_fn provided to Estimator.export_saved_model() should return this TensorServingInputReceiver instead. See: https://github.com/tensorflow/tensorflow/issues/11674 Note that the receiver_tensors and receiver_tensors_alternatives arguments will be automatically converted to the dict representation in either case, because the SavedModel format requires each input Tensor to have a name (provided by the dict key).
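A minimal sketch for a model whose model_fn consumes a single raw Tensor (the name and shape are hypothetical):
import tensorflow as tf

def serving_input_receiver_fn():
    image = tf.compat.v1.placeholder(
        dtype=tf.float32, shape=[None, 28, 28], name='image')
    # features is passed to the model as-is, not wrapped in a dict.
    return tf.estimator.export.TensorServingInputReceiver(
        features=image, receiver_tensors=image)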
Attributes
features A single Tensor or SparseTensor, representing the feature to be passed to the model.
receiver_tensors A Tensor, SparseTensor, or dict of string to Tensor or SparseTensor, specifying input nodes where this receiver expects to be fed by default. Typically, this is a single placeholder expecting serialized tf.Example protos.
receiver_tensors_alternatives a dict of string to additional groups of receiver tensors, each of which may be a Tensor, SparseTensor, or dict of string to Tensor or SparseTensor. These named receiver tensor alternatives generate additional serving signatures, which may be used to feed inputs at different points within the input receiver subgraph. A typical usage is to allow feeding raw feature Tensors downstream of the tf.parse_example() op. Defaults to None.
tf.estimator.Exporter View source on GitHub A class representing a type of model export. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.Exporter
Attributes
name Directory name. A directory name under the export base directory where exports of this type are written. Should not be None or empty.
Methods export View source
@abc.abstractmethod
export(
estimator, export_path, checkpoint_path, eval_result, is_the_final_export
)
Exports the given Estimator to a specific format.
Args
estimator the Estimator to export.
export_path A string containing a directory where to write the export.
checkpoint_path The checkpoint path to export.
eval_result The output of Estimator.evaluate on this checkpoint.
is_the_final_export This boolean is True when this is an export in the end of training. It is False for the intermediate exports during the training. When passing Exporter to tf.estimator.train_and_evaluate is_the_final_export is always False if TrainSpec.max_steps is None.
Returns The string path to the exported directory or None if export is skipped.
tf.estimator.FeedFnHook Runs feed_fn and sets the feed_dict accordingly. Inherits From: SessionRunHook View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.FeedFnHook, tf.compat.v1.train.FeedFnHook
tf.estimator.FeedFnHook(
feed_fn
)
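A minimal sketch (input_placeholder and next_batch are hypothetical; feed_fn is re-invoked before every run() call):
import tensorflow as tf

def feed_fn():
    # Returns a feed_dict mapping graph tensors to fresh values.
    return {input_placeholder: next_batch()}

hook = tf.estimator.FeedFnHook(feed_fn)
# e.g. estimator.train(input_fn=input_fn, hooks=[hook])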
Args
feed_fn function that takes no arguments and returns dict of Tensor to feed. Methods after_create_session View source
after_create_session(
session, coord
)
Called when a new TensorFlow session is created. This is called to signal the hooks that a new session has been created. This has two essential differences from the situation in which begin is called: When this is called, the graph is finalized and ops can no longer be added to the graph. This method will also be called as a result of recovering a wrapped session, not only at the beginning of the overall session.
Args
session A TensorFlow Session that has been created.
coord A Coordinator object which keeps track of all threads. after_run View source
after_run(
run_context, run_values
)
Called after each call to run(). The run_values argument contains the results of the ops/tensors requested by before_run(). The run_context argument is the same one sent to the before_run call. run_context.request_stop() can be called to stop the iteration. If session.run() raises any exception then after_run() is not called.
Args
run_context A SessionRunContext object.
run_values A SessionRunValues object. before_run View source
before_run(
run_context
)
Called before each call to run(). You can return from this call a SessionRunArgs object indicating ops or tensors to add to the upcoming run() call. These ops/tensors will be run together with the ops/tensors originally passed to the run() call. The run args you return can also contain feeds to be added to the run() call. The run_context argument is a SessionRunContext that provides information about the upcoming run() call: the originally requested op/tensors and the TensorFlow Session. At this point the graph is finalized and you cannot add ops.
Args
run_context A SessionRunContext object.
Returns None or a SessionRunArgs object.
begin View source
begin()
Called once before using the session. When called, the default graph is the one that will be launched in the session. The hook can modify the graph by adding new operations to it. After the begin() call the graph will be finalized and the other callbacks cannot modify the graph anymore. A second call of begin() on the same graph should not change the graph. end View source
end(
session
)
Called at the end of the session. The session argument can be used in case the hook wants to run final ops, such as saving a last checkpoint. If session.run() raises an exception other than OutOfRangeError or StopIteration then end() is not called. Note the difference between end() and after_run() behavior when session.run() raises OutOfRangeError or StopIteration. In that case end() is called but after_run() is not called.
Args
session A TensorFlow Session that will soon be closed.
tf.estimator.FinalExporter View source on GitHub This class exports the serving graph and checkpoints at the end. Inherits From: Exporter View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.FinalExporter
tf.estimator.FinalExporter(
name, serving_input_receiver_fn, assets_extra=None, as_text=False
)
This class performs a single export at the end of training.
Args
name unique name of this Exporter that is going to be used in the export path.
serving_input_receiver_fn a function that takes no arguments and returns a ServingInputReceiver.
assets_extra An optional dict specifying how to populate the assets.extra directory within the exported SavedModel. Each key should give the destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
as_text whether to write the SavedModel proto in text format. Defaults to False.
Raises
ValueError if any argument is invalid.
Attributes
name Directory name. A directory name under the export base directory where exports of this type are written. Should not be None or empty.
Methods export View source
export(
estimator, export_path, checkpoint_path, eval_result, is_the_final_export
)
Exports the given Estimator to a specific format.
Args
estimator the Estimator to export.
export_path A string containing a directory where to write the export.
checkpoint_path The checkpoint path to export.
eval_result The output of Estimator.evaluate on this checkpoint.
is_the_final_export This boolean is True when this is an export in the end of training. It is False for the intermediate exports during the training. When passing Exporter to tf.estimator.train_and_evaluate is_the_final_export is always False if TrainSpec.max_steps is None.
Returns The string path to the exported directory or None if export is skipped.
tf.estimator.FinalOpsHook A hook which evaluates Tensors at the end of a session. Inherits From: SessionRunHook View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.FinalOpsHook, tf.compat.v1.train.FinalOpsHook
tf.estimator.FinalOpsHook(
final_ops, final_ops_feed_dict=None
)
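A minimal sketch (mean_loss is a hypothetical tensor in the graph):
import tensorflow as tf

hook = tf.estimator.FinalOpsHook(final_ops={'mean_loss': mean_loss})
# After a session using this hook ends (e.g. estimator.evaluate(...,
# hooks=[hook])), the evaluated values are available as:
# hook.final_ops_values['mean_loss']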
Args
final_ops A single Tensor, a list of Tensors or a dictionary of names to Tensors.
final_ops_feed_dict A feed dictionary to use when running final_ops.
Attributes
final_ops_values
Methods after_create_session View source
after_create_session(
session, coord
)
Called when a new TensorFlow session is created. This is called to signal the hooks that a new session has been created. This has two essential differences from the situation in which begin is called: When this is called, the graph is finalized and ops can no longer be added to the graph. This method will also be called as a result of recovering a wrapped session, not only at the beginning of the overall session.
Args
session A TensorFlow Session that has been created.
coord A Coordinator object which keeps track of all threads. after_run View source
after_run(
run_context, run_values
)
Called after each call to run(). The run_values argument contains the results of the ops/tensors requested by before_run(). The run_context argument is the same one sent to the before_run call. run_context.request_stop() can be called to stop the iteration. If session.run() raises any exception then after_run() is not called.
Args
run_context A SessionRunContext object.
run_values A SessionRunValues object. before_run View source
before_run(
run_context
)
Called before each call to run(). You can return from this call a SessionRunArgs object indicating ops or tensors to add to the upcoming run() call. These ops/tensors will be run together with the ops/tensors originally passed to the run() call. The run args you return can also contain feeds to be added to the run() call. The run_context argument is a SessionRunContext that provides information about the upcoming run() call: the originally requested op/tensors and the TensorFlow Session. At this point the graph is finalized and you cannot add ops.
Args
run_context A SessionRunContext object.
Returns None or a SessionRunArgs object.
begin View source
begin()
Called once before using the session. When called, the default graph is the one that will be launched in the session. The hook can modify the graph by adding new operations to it. After the begin() call the graph will be finalized and the other callbacks cannot modify the graph anymore. A second call of begin() on the same graph should not change the graph. end View source
end(
session
)
Called at the end of the session. The session argument can be used in case the hook wants to run final ops, such as saving a last checkpoint. If session.run() raises an exception other than OutOfRangeError or StopIteration then end() is not called. Note the difference between end() and after_run() behavior when session.run() raises OutOfRangeError or StopIteration. In that case end() is called but after_run() is not called.
Args
session A TensorFlow Session that will soon be closed.
tf.estimator.GlobalStepWaiterHook Delays execution until global step reaches wait_until_step. Inherits From: SessionRunHook View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.GlobalStepWaiterHook, tf.compat.v1.train.GlobalStepWaiterHook
tf.estimator.GlobalStepWaiterHook(
wait_until_step
)
This hook delays execution until the global step reaches wait_until_step. It is used to gradually start workers in distributed settings. One example usage would be setting wait_until_step=int(K*log(task_id+1)) assuming that task_id=0 is the chief.
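A minimal sketch of that staggered-start recipe (task_id and K are hypothetical; task_id would typically come from the cluster configuration):
import math
import tensorflow as tf

K = 100
hook = tf.estimator.GlobalStepWaiterHook(
    wait_until_step=int(K * math.log(task_id + 1)))
# task_id=0 (the chief) waits until step 0, i.e. starts immediately.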
Args
wait_until_step an int specifying the global step until which execution should be delayed. Methods after_create_session View source
after_create_session(
session, coord
)
Called when a new TensorFlow session is created. This is called to signal the hooks that a new session has been created. This has two essential differences from the situation in which begin is called: When this is called, the graph is finalized and ops can no longer be added to the graph. This method will also be called as a result of recovering a wrapped session, not only at the beginning of the overall session.
Args
session A TensorFlow Session that has been created.
coord A Coordinator object which keeps track of all threads. after_run View source
after_run(
run_context, run_values
)
Called after each call to run(). The run_values argument contains the results of the ops/tensors requested by before_run(). The run_context argument is the same one sent to the before_run call. run_context.request_stop() can be called to stop the iteration. If session.run() raises any exception then after_run() is not called.
Args
run_context A SessionRunContext object.
run_values A SessionRunValues object. before_run View source
before_run(
run_context
)
Called before each call to run(). You can return from this call a SessionRunArgs object indicating ops or tensors to add to the upcoming run() call. These ops/tensors will be run together with the ops/tensors originally passed to the run() call. The run args you return can also contain feeds to be added to the run() call. The run_context argument is a SessionRunContext that provides information about the upcoming run() call: the originally requested op/tensors and the TensorFlow Session. At this point the graph is finalized and you cannot add ops.
Args
run_context A SessionRunContext object.
Returns None or a SessionRunArgs object.
begin View source
begin()
Called once before using the session. When called, the default graph is the one that will be launched in the session. The hook can modify the graph by adding new operations to it. After the begin() call the graph will be finalized and the other callbacks cannot modify the graph anymore. A second call of begin() on the same graph should not change the graph. end View source
end(
session
)
Called at the end of the session. The session argument can be used in case the hook wants to run final ops, such as saving a last checkpoint. If session.run() raises an exception other than OutOfRangeError or StopIteration then end() is not called. Note the difference between end() and after_run() behavior when session.run() raises OutOfRangeError or StopIteration. In that case end() is called but after_run() is not called.
Args
session A TensorFlow Session that will soon be closed.
tf.estimator.Head View source on GitHub Interface for the head/top of a model. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.Head Head sits on top of the model network and handles computing the outputs of the network. Given logits (or output of a hidden layer), a Head knows how to compute predictions, loss, train_op, metrics and export outputs. It is meant to: Simplify writing model_fn and make model_fn more configurable for Estimator. Simplify creating loss and metrics for the train and test loop in Eager execution. Support a wide range of machine learning models. Since most heads can work with logits, they can support DNN, RNN, Wide, Wide&Deep, Global objectives, Gradient boosted trees and many other types of machine learning models. Common usage: Here is a simplified model_fn to build a DNN regression model. def _my_dnn_model_fn(features, labels, mode, params, config=None):
# Optionally your callers can pass head to model_fn as a param.
head = tf.estimator.RegressionHead(...)
feature_columns = tf.feature_column.numeric_column(...)
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
inputs = feature_layer(features)
# Compute logits with tf.keras.layers API
hidden_layer0 = tf.keras.layers.Dense(
units=1000, activation="relu")(inputs)
hidden_layer1 = tf.keras.layers.Dense(
units=500, activation="relu")(hidden_layer0)
logits = tf.keras.layers.Dense(
units=head.logits_dimension, activation=None)(hidden_layer1)
# Or use Keras model for logits computation
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(units=1000, activation="relu"))
model.add(tf.keras.layers.Dense(units=500, activation="relu"))
model.add(tf.keras.layers.Dense(
units=head.logits_dimension, activation=None))
logits = model(inputs)
return head.create_estimator_spec(
features=features,
labels=labels,
mode=mode,
logits=logits,
optimizer=optimizer)
Attributes
logits_dimension Size of the last dimension of the logits Tensor. Often is the number of classes, labels, or real values to be predicted. Typically, logits is of shape [batch_size, logits_dimension].
loss_reduction One of tf.losses.Reduction. Describes how to reduce training loss over batch, such as mean or sum.
name The name of this head. Methods create_estimator_spec View source
create_estimator_spec(
features, mode, logits, labels=None, optimizer=None, trainable_variables=None,
train_op_fn=None, update_ops=None, regularization_losses=None
)
Returns EstimatorSpec that a model_fn can return. It is recommended to pass all args via name.
Args
features Input dict mapping string feature names to Tensor or SparseTensor objects containing the values for that feature in a minibatch. Often to be used to fetch example-weight tensor.
mode Estimator's ModeKeys.
logits Logits Tensor to be used by the head.
labels Labels Tensor, or dict mapping string label names to Tensor objects of the label values.
optimizer A tf.keras.optimizers.Optimizer instance to optimize the loss in TRAIN mode. Namely, sets train_op = optimizer.get_updates(loss, trainable_variables), which updates variables to minimize loss.
trainable_variables A list or tuple of Variable objects to update to minimize loss. In Tensorflow 1.x, by default these are the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. As Tensorflow 2.x doesn't have collections and GraphKeys, trainable_variables need to be passed explicitly here.
train_op_fn Function that takes a scalar loss Tensor and returns an op to optimize the model with the loss in TRAIN mode. Used if optimizer is None. Exactly one of train_op_fn and optimizer must be set in TRAIN mode. By default, it is None in other modes. If you want to optimize loss yourself, you can pass lambda _: tf.no_op() and then use EstimatorSpec.loss to compute and apply gradients.
update_ops A list or tuple of update ops to be run at training time. For example, layers such as BatchNormalization create mean and variance update ops that need to be run at training time. In Tensorflow 1.x, these are thrown into an UPDATE_OPS collection. As Tensorflow 2.x doesn't have collections, update_ops need to be passed explicitly here.
regularization_losses A list of additional scalar losses to be added to the training loss, such as regularization losses.
Returns EstimatorSpec.
loss View source
@abc.abstractmethod
loss(
labels, logits, features=None, mode=None, regularization_losses=None
)
Returns a loss Tensor from provided arguments. Note that the features and mode args are most likely not used, but some Head implementations may require them.
Args
labels Labels Tensor, or dict mapping string label names to Tensor objects of the label values.
logits Logits Tensor to be used for loss construction.
features Input dict mapping string feature names to Tensor or SparseTensor objects containing the values for that feature in a minibatch. Often to be used to fetch example-weight tensor.
mode Estimator's ModeKeys. To be used in case loss calculation is different in Train and Eval mode.
regularization_losses A list of additional scalar losses to be added to the training loss, such as regularization losses.
Returns A scalar Tensor representing regularized training loss used in train and eval.
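For example, a minimal eager-mode sketch with a canned regression head (the values are hypothetical):
import tensorflow as tf

head = tf.estimator.RegressionHead()  # logits_dimension defaults to 1
logits = tf.constant([[1.0], [2.0]])
labels = tf.constant([[0.5], [2.5]])
# Returns the regularized training loss as a scalar Tensor.
loss = head.loss(labels=labels, logits=logits)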
metrics View source
@abc.abstractmethod
metrics(
regularization_losses=None
)
Returns a dict of metric objects.
Args
regularization_losses A list of additional scalar losses to be added to the training loss, such as regularization losses.
Returns A dict of metrics keyed by string name. The value is an instance of Metric class.
predictions View source
@abc.abstractmethod
predictions(
logits, keys=None
)
Returns a dict of predictions from provided logits.
Args
logits Logits Tensor to be used for prediction construction.
keys A list of string for prediction keys. Defaults to None, meaning if not specified, predictions will be created for all the pre-defined valid keys in the head.
Returns A dict of predicted Tensor keyed by prediction name.
update_metrics View source
@abc.abstractmethod
update_metrics(
eval_metrics, features, logits, labels, mode=None, regularization_losses=None
)
Updates metric objects and returns a dict of the updated metrics.
Args
eval_metrics A dict of metrics to be updated.
features Input dict mapping string feature names to Tensor or SparseTensor objects containing the values for that feature in a minibatch. Often to be used to fetch example-weight tensor.
logits logits Tensor to be used for metrics update.
labels Labels Tensor, or dict mapping string label names to Tensor objects of the label values.
mode Estimator's ModeKeys. In most cases, this arg is not used and can be removed in the method implementation.
regularization_losses A list of additional scalar losses to be added to the training and evaluation loss, such as regularization losses. Note that the mode arg is not used in the tf.estimator.*Head. If the update of the metrics doesn't rely on mode, it can be safely ignored in the method signature.
Returns A dict of updated metrics keyed by name. The value is an instance of Metric class.
tf.estimator.LatestExporter View source on GitHub This class regularly exports the serving graph and checkpoints. Inherits From: Exporter View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.LatestExporter
tf.estimator.LatestExporter(
name, serving_input_receiver_fn, assets_extra=None, as_text=False,
exports_to_keep=5
)
In addition to exporting, this class also garbage collects stale exports.
Args
name unique name of this Exporter that is going to be used in the export path.
serving_input_receiver_fn a function that takes no arguments and returns a ServingInputReceiver.
assets_extra An optional dict specifying how to populate the assets.extra directory within the exported SavedModel. Each key should give the destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
as_text whether to write the SavedModel proto in text format. Defaults to False.
exports_to_keep Number of exports to keep. Older exports will be garbage-collected. Defaults to 5. Set to None to disable garbage collection.
Raises
ValueError if any argument is invalid.
Attributes
name Directory name. A directory name under the export base directory where exports of this type are written. Should not be None or empty.
Methods export View source
export(
estimator, export_path, checkpoint_path, eval_result, is_the_final_export
)
Exports the given Estimator to a specific format.
Args
estimator the Estimator to export.
export_path A string containing a directory where to write the export.
checkpoint_path The checkpoint path to export.
eval_result The output of Estimator.evaluate on this checkpoint.
is_the_final_export This boolean is True when this is an export in the end of training. It is False for the intermediate exports during the training. When passing Exporter to tf.estimator.train_and_evaluate is_the_final_export is always False if TrainSpec.max_steps is None.
Returns The string path to the exported directory or None if export is skipped.
tf.estimator.LinearClassifier View source on GitHub Linear classifier model. Inherits From: Estimator
tf.estimator.LinearClassifier(
feature_columns, model_dir=None, n_classes=2, weight_column=None,
label_vocabulary=None, optimizer='Ftrl', config=None,
warm_start_from=None,
loss_reduction=losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE,
sparse_combiner='sum'
)
Train a linear model to classify instances into one of multiple possible classes. When the number of possible classes is 2, this is binary classification. Example: categorical_column_a = categorical_column_with_hash_bucket(...)
categorical_column_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_x_categorical_feature_b = crossed_column(...)
# Estimator using the default optimizer.
estimator = tf.estimator.LinearClassifier(
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b])
# Or estimator using the FTRL optimizer with regularization.
estimator = tf.estimator.LinearClassifier(
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b],
optimizer=tf.keras.optimizers.Ftrl(
learning_rate=0.1,
l1_regularization_strength=0.001
))
# Or estimator using an optimizer with a learning rate decay.
estimator = tf.estimator.LinearClassifier(
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b],
optimizer=lambda: tf.keras.optimizers.Ftrl(
learning_rate=tf.compat.v1.train.exponential_decay(
learning_rate=0.1,
global_step=tf.compat.v1.train.get_global_step(),
decay_steps=10000,
decay_rate=0.96))
# Or estimator with warm-starting from a previous checkpoint.
estimator = tf.estimator.LinearClassifier(
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b],
warm_start_from="/path/to/checkpoint/dir")
# Input builders
def input_fn_train():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
def input_fn_eval():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
def input_fn_predict():
# Returns tf.data.Dataset of (x, None) tuple.
pass
estimator.train(input_fn=input_fn_train)
metrics = estimator.evaluate(input_fn=input_fn_eval)
predictions = estimator.predict(input_fn=input_fn_predict)
Input of train and evaluate should have the following features, otherwise there will be a KeyError:
if weight_column is not None, a feature with key=weight_column whose value is a Tensor.
for each column in feature_columns:
if column is a SparseColumn, a feature with key=column.name whose value is a SparseTensor.
if column is a WeightedSparseColumn, two features: the first with key the id column name, the second with key the weight column name. Both features' value must be a SparseTensor.
if column is a RealValuedColumn, a feature with key=column.name whose value is a Tensor.
Loss is calculated by using softmax cross entropy.
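A minimal input_fn sketch satisfying these expectations with a weight column (feature and key names are hypothetical):
import tensorflow as tf

weight_column = tf.feature_column.numeric_column('example_weight')

def input_fn_train():
    features = {
        'age': tf.constant([[25.0], [40.0]]),
        'example_weight': tf.constant([[1.0], [0.5]]),  # weight_column key
    }
    labels = tf.constant([[1], [0]])
    return tf.data.Dataset.from_tensors((features, labels)).repeat(100)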
Args
feature_columns An iterable containing all the feature columns used by the model. All items in the set should be instances of classes derived from FeatureColumn.
model_dir Directory to save model parameters, graph and etc. This can also be used to load checkpoints from the directory into a estimator to continue training a previously saved model.
n_classes number of label classes. Default is binary classification. Note that class labels are integers representing the class index (i.e. values from 0 to n_classes-1). For arbitrary label values (e.g. string labels), convert to class indices first.
weight_column A string or a _NumericColumn created by tf.feature_column.numeric_column defining feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example. If it is a string, it is used as a key to fetch weight tensor from the features. If it is a _NumericColumn, raw tensor is fetched by key weight_column.key, then weight_column.normalizer_fn is applied on it to get weight tensor.
label_vocabulary A list of strings represents possible label values. If given, labels must be string type and have any value in label_vocabulary. If it is not given, that means labels are already encoded as integer or float within [0, 1] for n_classes=2 and encoded as integer values in {0, 1,..., n_classes-1} for n_classes>2 . Also there will be errors if vocabulary is not provided and labels are string.
optimizer An instance of tf.keras.optimizers.* or tf.estimator.experimental.LinearSDCA used to train the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to FTRL optimizer.
config RunConfig object to configure the runtime settings.
warm_start_from A string filepath to a checkpoint to warm-start from, or a WarmStartSettings object to fully configure warm-starting. If the string filepath is provided instead of a WarmStartSettings, then all weights and biases are warm-started, and it is assumed that vocabularies and Tensor names are unchanged.
loss_reduction One of tf.losses.Reduction except NONE. Describes how to reduce training loss over batch. Defaults to SUM_OVER_BATCH_SIZE.
sparse_combiner A string specifying how to reduce if a categorical column is multivalent. One of "mean", "sqrtn", and "sum" -- these are effectively different ways to do example-level normalization, which can be useful for bag-of-words features. For more details, see tf.feature_column.linear_model.
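As a brief illustration of label_vocabulary with string labels (the vocabulary below is a placeholder):
# Labels in the input data must be one of these strings; the estimator
# maps them to class indices 0..n_classes-1 internally.
estimator = tf.estimator.LinearClassifier(
    feature_columns=[categorical_column_a],
    n_classes=3,
    label_vocabulary=["low", "medium", "high"])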
Raises
ValueError if n_classes < 2. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes.
Attributes
config
export_savedmodel
model_dir
model_fn Returns the model_fn which is bound to self.params.
params
Methods eval_dir View source
eval_dir(
name=None
)
Shows the directory name where evaluation metrics are dumped.
Args
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A string which is the path of the directory containing evaluation metrics.
evaluate View source
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until:
steps batches are processed, or
input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError If steps <= 0. experimental_export_all_saved_models View source
experimental_export_all_saved_models(
export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False,
checkpoint_path=None
)
Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
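A sketch of a minimal call that exports only the predict graph plus one extra asset; the feature spec is a hypothetical stand-in for your model's features:
feature_spec = {"category": tf.io.FixedLenFeature([1], tf.string)}
serving_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
    feature_spec)
export_path = estimator.experimental_export_all_saved_models(
    export_dir_base="/tmp/exports",
    input_receiver_fn_map={tf.estimator.ModeKeys.PREDICT: serving_fn},
    assets_extra={'my_asset_file.txt': '/path/to/my_asset_file.txt'})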
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source
export_saved_model(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, experimental_mode=ModeKeys.PREDICT
)
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source
get_variable_names()
Returns list of all variable names in this model.
Returns List of names.
Raises
ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source
get_variable_value(
name
)
Returns value of the variable given by name.
Args
name string or a list of string, name of the tensor.
Returns Numpy array - value of the tensor.
Raises
ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns The full path to the latest checkpoint or None if no checkpoint was found.
predict View source
predict(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None,
yield_single_examples=True
)
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
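predict returns a generator, so nothing is computed until it is iterated; a sketch, assuming the prediction keys produced by canned classifiers:
for pred in estimator.predict(input_fn=input_fn_predict):
  # Canned classifiers yield one dict per example with keys such as
  # 'logits', 'probabilities' and 'class_ids'.
  print(pred['class_ids'])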
Args
input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following:
tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields Evaluated values of predictions tensors.
Raises
ValueError If the batch lengths of the prediction tensors are not all the same and yield_single_examples is True.
ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
Trains a model given training data input_fn.
Args
input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps Number of steps for which to train the model. If None, train forever or until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally: if you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set max_steps instead. If set, max_steps must be None.
max_steps Number of total steps for which to train model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iteration since first call did all 100 steps.
saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns self, for chaining.
Raises
ValueError If both steps and max_steps are not None.
ValueError If either steps or max_steps <= 0. | tensorflow.estimator.linearclassifier |
tf.estimator.LinearEstimator View source on GitHub An estimator for TensorFlow linear models with user-specified head. Inherits From: Estimator
tf.estimator.LinearEstimator(
head, feature_columns, model_dir=None, optimizer='Ftrl', config=None,
sparse_combiner='sum', warm_start_from=None
)
Example: categorical_column_a = categorical_column_with_hash_bucket(...)
categorical_column_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_x_categorical_feature_b = crossed_column(...)
# Estimator using the default optimizer.
estimator = tf.estimator.LinearEstimator(
head=tf.estimator.MultiLabelHead(n_classes=3),
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b])
# Or estimator using an optimizer with a learning rate decay.
estimator = tf.estimator.LinearEstimator(
head=tf.estimator.MultiLabelHead(n_classes=3),
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b],
optimizer=lambda: tf.keras.optimizers.Ftrl(
learning_rate=tf.compat.v1.train.exponential_decay(
learning_rate=0.1,
global_step=tf.compat.v1.train.get_global_step(),
decay_steps=10000,
decay_rate=0.96)))
# Or estimator using the FTRL optimizer with regularization.
estimator = tf.estimator.LinearEstimator(
    head=tf.estimator.MultiLabelHead(n_classes=3),
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b],
    optimizer=tf.keras.optimizers.Ftrl(
        learning_rate=0.1,
        l1_regularization_strength=0.001))
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass
estimator.train(input_fn=input_fn_train, steps=100)
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
predictions = estimator.predict(input_fn=input_fn_predict)
Input of train and evaluate should have the following features, otherwise there will be a KeyError:
- if weight_column is not None, a feature with key=weight_column whose value is a Tensor.
- for each column in feature_columns:
  - if column is a CategoricalColumn, a feature with key=column.name whose value is a SparseTensor.
  - if column is a WeightedCategoricalColumn, two features: the first with key the id column name, the second with key the weight column name. Both features' value must be a SparseTensor.
  - if column is a DenseColumn, a feature with key=column.name whose value is a Tensor.
Loss and predicted output are determined by the specified head.
Args
head A Head instance constructed with a method such as tf.estimator.MultiLabelHead.
feature_columns An iterable containing all the feature columns used by the model. All items in the set should be instances of classes derived from FeatureColumn.
model_dir Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.
optimizer An instance of tf.keras.optimizers.* used to train the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to FTRL optimizer.
config RunConfig object to configure the runtime settings.
sparse_combiner A string specifying how to reduce if a categorical column is multivalent. One of "mean", "sqrtn", and "sum" -- these are effectively different ways to do example-level normalization, which can be useful for bag-of-words features. For more details, see tf.feature_column.linear_model.
warm_start_from A string filepath to a checkpoint to warm-start from, or a WarmStartSettings object to fully configure warm-starting. If the string filepath is provided instead of a WarmStartSettings, then all weights and biases are warm-started, and it is assumed that vocabularies and Tensor names are unchanged. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes.
Attributes
config
export_savedmodel
model_dir
model_fn Returns the model_fn which is bound to self.params.
params
Methods eval_dir View source
eval_dir(
name=None
)
Shows the directory name where evaluation metrics are dumped.
Args
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A string which is the path of the directory containing evaluation metrics.
evaluate View source
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until:
steps batches are processed, or
input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError If steps <= 0. experimental_export_all_saved_models View source
experimental_export_all_saved_models(
export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False,
checkpoint_path=None
)
Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source
export_saved_model(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, experimental_mode=ModeKeys.PREDICT
)
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
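A sketch of a typical serving export; build_parsing_serving_input_receiver_fn creates a receiver that parses serialized tf.Example protos, and the feature spec below is a hypothetical example:
feature_spec = {"category": tf.io.FixedLenFeature([1], tf.string)}
serving_input_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
export_path = estimator.export_saved_model(
    export_dir_base="/tmp/exports",
    serving_input_receiver_fn=serving_input_receiver_fn)
The returned export_path points at the timestamped subdirectory and can be loaded back with tf.saved_model.load.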
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source
get_variable_names()
Returns list of all variable names in this model.
Returns List of names.
Raises
ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source
get_variable_value(
name
)
Returns value of the variable given by name.
Args
name string or a list of string, name of the tensor.
Returns Numpy array - value of the tensor.
Raises
ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns The full path to the latest checkpoint or None if no checkpoint was found.
predict View source
predict(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None,
yield_single_examples=True
)
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
Args
input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following:
tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields Evaluated values of predictions tensors.
Raises
ValueError If the batch lengths of the prediction tensors are not all the same and yield_single_examples is True.
ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
Trains a model given training data input_fn.
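To make the steps/max_steps semantics described below concrete, a small sketch:
# steps is incremental: together these two calls train for 20 steps.
estimator.train(input_fn=input_fn_train, steps=10)
estimator.train(input_fn=input_fn_train, steps=10)
# max_steps is absolute: the second call is a no-op because the global
# step already reached 100 during the first call.
estimator.train(input_fn=input_fn_train, max_steps=100)
estimator.train(input_fn=input_fn_train, max_steps=100)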
Args
input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps Number of steps for which to train the model. If None, train forever or until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally: if you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set max_steps instead. If set, max_steps must be None.
max_steps Number of total steps for which to train model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iteration since first call did all 100 steps.
saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns self, for chaining.
Raises
ValueError If both steps and max_steps are not None.
ValueError If either steps or max_steps <= 0. | tensorflow.estimator.linearestimator |
tf.estimator.LinearRegressor View source on GitHub An estimator for TensorFlow linear regression problems. Inherits From: Estimator
tf.estimator.LinearRegressor(
feature_columns, model_dir=None, label_dimension=1, weight_column=None,
optimizer='Ftrl', config=None, warm_start_from=None,
loss_reduction=losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE,
sparse_combiner='sum'
)
Train a linear regression model to predict label value given observation of feature values. Example: categorical_column_a = categorical_column_with_hash_bucket(...)
categorical_column_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_x_categorical_feature_b = crossed_column(...)
# Estimator using the default optimizer.
estimator = tf.estimator.LinearRegressor(
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b])
# Or estimator using the FTRL optimizer with regularization.
estimator = tf.estimator.LinearRegressor(
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b],
optimizer=tf.keras.optimizers.Ftrl(
learning_rate=0.1,
l1_regularization_strength=0.001
))
# Or estimator using an optimizer with a learning rate decay.
estimator = tf.estimator.LinearRegressor(
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b],
optimizer=lambda: tf.keras.optimizers.Ftrl(
learning_rate=tf.compat.v1.train.exponential_decay(
learning_rate=0.1,
global_step=tf.compat.v1.train.get_global_step(),
decay_steps=10000,
decay_rate=0.96)))
# Or estimator with warm-starting from a previous checkpoint.
estimator = tf.estimator.LinearRegressor(
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b],
warm_start_from="/path/to/checkpoint/dir")
# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass
estimator.train(input_fn=input_fn_train)
metrics = estimator.evaluate(input_fn=input_fn_eval)
predictions = estimator.predict(input_fn=input_fn_predict)
Input of train and evaluate should have the following features, otherwise there will be a KeyError:
- if weight_column is not None, a feature with key=weight_column whose value is a Tensor.
- for each column in feature_columns:
  - if column is a SparseColumn, a feature with key=column.name whose value is a SparseTensor.
  - if column is a WeightedSparseColumn, two features: the first with key the id column name, the second with key the weight column name. Both features' value must be a SparseTensor.
  - if column is a RealValuedColumn, a feature with key=column.name whose value is a Tensor.
Loss is calculated by using mean squared error.
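Since the loss is mean squared error, the metrics dict returned by evaluate includes average_loss (per-example MSE) alongside loss, label/mean, prediction/mean and global_step; for example:
metrics = estimator.evaluate(input_fn=input_fn_eval)
print(metrics['average_loss'])  # mean squared error per example
print(metrics['global_step'])   # training step at which evaluation ran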
Args
feature_columns An iterable containing all the feature columns used by the model. All items in the set should be instances of classes derived from FeatureColumn.
model_dir Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.
label_dimension Number of regression targets per example. This is the size of the last dimension of the labels and logits Tensor objects (typically, these have shape [batch_size, label_dimension]).
weight_column A string or a NumericColumn created by tf.feature_column.numeric_column defining feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example. If it is a string, it is used as a key to fetch weight tensor from the features. If it is a NumericColumn, raw tensor is fetched by key weight_column.key, then weight_column.normalizer_fn is applied on it to get weight tensor.
optimizer An instance of tf.keras.optimizers.* or tf.estimator.experimental.LinearSDCA used to train the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to FTRL optimizer.
config RunConfig object to configure the runtime settings.
warm_start_from A string filepath to a checkpoint to warm-start from, or a WarmStartSettings object to fully configure warm-starting. If the string filepath is provided instead of a WarmStartSettings, then all weights and biases are warm-started, and it is assumed that vocabularies and Tensor names are unchanged.
loss_reduction One of tf.losses.Reduction except NONE. Describes how to reduce training loss over batch. Defaults to SUM_OVER_BATCH_SIZE.
sparse_combiner A string specifying how to reduce if a categorical column is multivalent. One of "mean", "sqrtn", and "sum" -- these are effectively different ways to do example-level normalization, which can be useful for bag-of-words features. For more details, see tf.feature_column.linear_model. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes.
Attributes
config
export_savedmodel
model_dir
model_fn Returns the model_fn which is bound to self.params.
params
Methods eval_dir View source
eval_dir(
name=None
)
Shows the directory name where evaluation metrics are dumped.
Args
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A string which is the path of the directory containing evaluation metrics.
evaluate View source
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until:
steps batches are processed, or
input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError If steps <= 0. experimental_export_all_saved_models View source
experimental_export_all_saved_models(
export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False,
checkpoint_path=None
)
Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source
export_saved_model(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, experimental_mode=ModeKeys.PREDICT
)
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source
get_variable_names()
Returns list of all variable names in this model.
Returns List of names.
Raises
ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source
get_variable_value(
name
)
Returns value of the variable given by name.
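Variable names are checkpoint-dependent, so it is safer to list them with get_variable_names than to guess; a sketch:
# Inspect the learned parameters of a trained estimator.
for name in estimator.get_variable_names():
  value = estimator.get_variable_value(name)
  print(name, value.shape)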
Args
name string or a list of string, name of the tensor.
Returns Numpy array - value of the tensor.
Raises
ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns The full path to the latest checkpoint or None if no checkpoint was found.
predict View source
predict(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None,
yield_single_examples=True
)
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
Args
input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following:
tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields Evaluated values of predictions tensors.
Raises
ValueError If the batch lengths of the prediction tensors are not all the same and yield_single_examples is True.
ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
Trains a model given training data input_fn.
Args
input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps Number of steps for which to train the model. If None, train forever or until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally: if you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set max_steps instead. If set, max_steps must be None.
max_steps Number of total steps for which to train model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iteration since first call did all 100 steps.
saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns self, for chaining.
Raises
ValueError If both steps and max_steps are not None.
ValueError If either steps or max_steps <= 0. | tensorflow.estimator.linearregressor |
tf.estimator.LoggingTensorHook Prints the given tensors every N local steps, every N seconds, or at end. Inherits From: SessionRunHook View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.LoggingTensorHook, tf.compat.v1.train.LoggingTensorHook
tf.estimator.LoggingTensorHook(
tensors, every_n_iter=None, every_n_secs=None, at_end=False, formatter=None
)
The tensors will be printed to the log, with INFO severity. If you are not seeing the logs, you might want to add the following line after your imports: tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
Note that if at_end is True, tensors should not include any tensor whose evaluation produces a side effect such as consuming additional inputs.
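A sketch of wiring the hook into training; the tensor name 'loss/value' is hypothetical and must be replaced by a tensor that actually exists in your graph:
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
logging_hook = tf.estimator.LoggingTensorHook(
    tensors={'my_loss': 'loss/value'},  # hypothetical tensor name
    every_n_iter=100)
estimator.train(input_fn=input_fn_train, hooks=[logging_hook])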
Args
tensors dict that maps string-valued tags to tensors/tensor names, or iterable of tensors/tensor names.
every_n_iter int, print the values of tensors once every N local steps taken on the current worker.
every_n_secs int or float, print the values of tensors once every N seconds. Exactly one of every_n_iter and every_n_secs should be provided.
at_end bool specifying whether to print the values of tensors at the end of the run.
formatter function, takes dict of tag->Tensor and returns a string. If None uses default printing all tensors.
Raises
ValueError if every_n_iter is non-positive. Methods after_create_session View source
after_create_session(
session, coord
)
Called when a new TensorFlow session is created. This is called to signal the hooks that a new session has been created. This has two essential differences from the situation in which begin is called: When this is called, the graph is finalized and ops can no longer be added to it. This method will also be called as a result of recovering a wrapped session, not only at the beginning of the overall session.
Args
session A TensorFlow Session that has been created.
coord A Coordinator object which keeps track of all threads. after_run View source
after_run(
run_context, run_values
)
Called after each call to run(). The run_values argument contains results of requested ops/tensors by before_run(). The run_context argument is the same one send to before_run call. run_context.request_stop() can be called to stop the iteration. If session.run() raises any exceptions then after_run() is not called.
Args
run_context A SessionRunContext object.
run_values A SessionRunValues object. before_run View source
before_run(
run_context
)
Called before each call to run(). You can return from this call a SessionRunArgs object indicating ops or tensors to add to the upcoming run() call. These ops/tensors will be run together with the ops/tensors originally passed to the original run() call. The run args you return can also contain feeds to be added to the run() call. The run_context argument is a SessionRunContext that provides information about the upcoming run() call: the originally requested op/tensors, the TensorFlow Session. At this point graph is finalized and you can not add ops.
Args
run_context A SessionRunContext object.
Returns None or a SessionRunArgs object.
begin View source
begin()
Called once before using the session. When called, the default graph is the one that will be launched in the session. The hook can modify the graph by adding new operations to it. After the begin() call the graph will be finalized and the other callbacks can not modify the graph anymore. Second call of begin() on the same graph, should not change the graph. end View source
end(
session
)
Called at the end of session. The session argument can be used in case the hook wants to run final ops, such as saving a last checkpoint. If session.run() raises exception other than OutOfRangeError or StopIteration then end() is not called. Note the difference between end() and after_run() behavior when session.run() raises OutOfRangeError or StopIteration. In that case end() is called but after_run() is not called.
Args
session A TensorFlow Session that will be soon closed. | tensorflow.estimator.loggingtensorhook |
tf.estimator.LogisticRegressionHead View source on GitHub Creates a Head for logistic regression. Inherits From: RegressionHead, Head View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.LogisticRegressionHead
tf.estimator.LogisticRegressionHead(
weight_column=None, loss_reduction=losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE,
name=None
)
Uses sigmoid_cross_entropy_with_logits loss, which is the same as BinaryClassHead. The differences compared to BinaryClassHead are:
- Does not support label_vocabulary. Instead, labels must be float in the range [0, 1].
- Does not calculate some metrics that do not make sense, such as AUC.
- In PREDICT mode, only returns logits and predictions (=tf.sigmoid(logits)), whereas BinaryClassHead also returns probabilities, classes, and class_ids.
- Export output defaults to RegressionOutput, whereas BinaryClassHead defaults to PredictOutput.
The head expects logits with shape [D0, D1, ... DN, 1]. In many applications, the shape is [batch_size, 1]. The labels shape must match logits, namely [D0, D1, ... DN] or [D0, D1, ... DN, 1]. If weight_column is specified, weights must be of shape [D0, D1, ... DN] or [D0, D1, ... DN, 1]. This is implemented as a generalized linear model, see https://en.wikipedia.org/wiki/Generalized_linear_model The head can be used with a canned estimator. Example: my_head = tf.estimator.LogisticRegressionHead()
my_estimator = tf.estimator.DNNEstimator(
head=my_head,
hidden_units=...,
feature_columns=...)
It can also be used with a custom model_fn. Example: def _my_model_fn(features, labels, mode):
my_head = tf.estimator.LogisticRegressionHead()
logits = tf.keras.Model(...)(features)
return my_head.create_estimator_spec(
features=features,
mode=mode,
labels=labels,
optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1),
logits=logits)
my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn)
Args
weight_column A string or a NumericColumn created by tf.feature_column.numeric_column defining feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example.
loss_reduction One of tf.losses.Reduction except NONE. Decides how to reduce training loss over batch and label dimension. Defaults to SUM_OVER_BATCH_SIZE, namely weighted sum of losses divided by batch size * label_dimension.
name name of the head. If provided, summary and metrics keys will be suffixed by "/" + name. Also used as name_scope when creating ops.
Attributes
logits_dimension See base_head.Head for details.
loss_reduction See base_head.Head for details.
name See base_head.Head for details. Methods create_estimator_spec View source
create_estimator_spec(
features, mode, logits, labels=None, optimizer=None, trainable_variables=None,
train_op_fn=None, update_ops=None, regularization_losses=None
)
Returns an EstimatorSpec that a model_fn can return. It is recommended to pass all args by name.
Args
features Input dict mapping string feature names to Tensor or SparseTensor objects containing the values for that feature in a minibatch. Often used to fetch the example-weight tensor.
mode Estimator's ModeKeys.
logits Logits Tensor to be used by the head.
labels Labels Tensor, or dict mapping string label names to Tensor objects of the label values.
optimizer A tf.keras.optimizers.Optimizer instance to optimize the loss in TRAIN mode. Namely, it sets train_op = optimizer.get_updates(loss, trainable_variables), which updates variables to minimize loss.
trainable_variables A list or tuple of Variable objects to update to minimize loss. In Tensorflow 1.x, by default these are the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. As Tensorflow 2.x doesn't have collections and GraphKeys, trainable_variables need to be passed explicitly here.
train_op_fn Function that takes a scalar loss Tensor and returns an op to optimize the model with the loss in TRAIN mode. Used if optimizer is None. Exactly one of train_op_fn and optimizer must be set in TRAIN mode. By default, it is None in other modes. If you want to optimize loss yourself, you can pass lambda _: tf.no_op() and then use EstimatorSpec.loss to compute and apply gradients.
update_ops A list or tuple of update ops to be run at training time. For example, layers such as BatchNormalization create mean and variance update ops that need to be run at training time. In Tensorflow 1.x, these are thrown into an UPDATE_OPS collection. As Tensorflow 2.x doesn't have collections, update_ops need to be passed explicitly here.
regularization_losses A list of additional scalar losses to be added to the training loss, such as regularization losses.
Returns EstimatorSpec.
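For instance, a TRAIN-mode spec built with train_op_fn instead of an optimizer might look like the following sketch (it assumes features, labels, mode, and logits are in scope, and uses a TF1-style optimizer):
def _train_op_fn(loss):
  # Receives the scalar regularized training loss; returns the train op.
  opt = tf.compat.v1.train.AdagradOptimizer(learning_rate=0.1)
  return opt.minimize(
      loss, global_step=tf.compat.v1.train.get_global_step())

spec = my_head.create_estimator_spec(
    features=features, mode=mode, logits=logits, labels=labels,
    train_op_fn=_train_op_fn)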
loss View source
loss(
labels, logits, features=None, mode=None, regularization_losses=None
)
Returns regularized training loss. See base_head.Head for details. metrics View source
metrics(
regularization_losses=None
)
Creates metrics. See base_head.Head for details. predictions View source
predictions(
logits
)
Returns predictions. See base_head.Head for details.
Args
logits logits Tensor with shape [D0, D1, ... DN, logits_dimension]. For many applications, the shape is [batch_size, logits_dimension].
Returns A dict of predictions.
update_metrics View source
update_metrics(
eval_metrics, features, logits, labels, regularization_losses=None
)
Updates eval metrics. See base_head.Head for details. | tensorflow.estimator.logisticregressionhead |
tf.estimator.ModeKeys View source on GitHub Standard names for Estimator model modes. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.ModeKeys The following standard keys are defined:
TRAIN: training/fitting mode.
EVAL: testing/evaluation mode.
PREDICT: prediction/inference mode.
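A model_fn typically branches on these keys; a minimal sketch:
def model_fn(features, labels, mode):
  logits = ...  # build the model from features
  if mode == tf.estimator.ModeKeys.PREDICT:
    return tf.estimator.EstimatorSpec(mode, predictions={'logits': logits})
  loss = ...  # compute loss from labels and logits
  if mode == tf.estimator.ModeKeys.EVAL:
    return tf.estimator.EstimatorSpec(mode, loss=loss)
  train_op = ...  # mode is tf.estimator.ModeKeys.TRAIN
  return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)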
Class Variables
EVAL 'eval'
PREDICT 'infer'
TRAIN 'train' | tensorflow.estimator.modekeys |
tf.estimator.MultiClassHead View source on GitHub Creates a Head for multi class classification. Inherits From: Head View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.MultiClassHead
tf.estimator.MultiClassHead(
n_classes, weight_column=None, label_vocabulary=None,
loss_reduction=losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE, loss_fn=None,
name=None
)
Uses sparse_softmax_cross_entropy loss. The head expects logits with shape [D0, D1, ... DN, n_classes]. In many applications, the shape is [batch_size, n_classes]. labels must be a dense Tensor with shape matching logits, namely [D0, D1, ... DN, 1]. If label_vocabulary is given, labels must be a string Tensor with values from the vocabulary. If label_vocabulary is not given, labels must be an integer Tensor with values specifying the class index. If weight_column is specified, weights must be of shape [D0, D1, ... DN], or [D0, D1, ... DN, 1]. The loss is the weighted sum over the input dimensions. Namely, if the input labels have shape [batch_size, 1], the loss is the weighted sum over batch_size. Also supports custom loss_fn. loss_fn takes (labels, logits) or (labels, logits, features, loss_reduction) as arguments and returns unreduced loss with shape [D0, D1, ... DN, 1]. loss_fn must support integer labels with shape [D0, D1, ... DN, 1]. Namely, the head applies label_vocabulary to the input labels before passing them to loss_fn. Usage:
n_classes = 3
head = tf.estimator.MultiClassHead(n_classes)
logits = np.array(((10, 0, 0), (0, 10, 0),), dtype=np.float32)
labels = np.array(((1,), (1,)), dtype=np.int64)
features = {'x': np.array(((42,),), dtype=np.int32)}
# expected_loss = sum(cross_entropy(labels, logits)) / batch_size
# = sum(10, 0) / 2 = 5.
loss = head.loss(labels, logits, features=features)
print('{:.2f}'.format(loss.numpy()))
5.00
eval_metrics = head.metrics()
updated_metrics = head.update_metrics(
eval_metrics, features, logits, labels)
for k in sorted(updated_metrics):
print('{} : {:.2f}'.format(k, updated_metrics[k].result().numpy()))
accuracy : 0.50
average_loss : 5.00
preds = head.predictions(logits)
print(preds['logits'])
tf.Tensor(
[[10. 0. 0.]
[ 0. 10. 0.]], shape=(2, 3), dtype=float32)
Usage with a canned estimator: my_head = tf.estimator.MultiClassHead(n_classes=3)
my_estimator = tf.estimator.DNNEstimator(
head=my_head,
hidden_units=...,
feature_columns=...)
It can also be used with a custom model_fn. Example: def _my_model_fn(features, labels, mode):
my_head = tf.estimator.MultiClassHead(n_classes=3)
logits = tf.keras.Model(...)(features)
return my_head.create_estimator_spec(
features=features,
mode=mode,
labels=labels,
    optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1),
logits=logits)
my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn)
Args
n_classes Number of classes, must be greater than 2 (for 2 classes, use BinaryClassHead).
weight_column A string or a NumericColumn created by tf.feature_column.numeric_column defining a feature column representing weights. It is used to down-weight or boost examples during training. It will be multiplied by the loss of the example.
label_vocabulary A list or tuple of strings representing possible label values. If it is not given, that means labels are already encoded as an integer within [0, n_classes). If given, labels must be of string type and take values from label_vocabulary. Note that errors will be raised if label_vocabulary is not provided but labels are strings. If both n_classes and label_vocabulary are provided, label_vocabulary should contain exactly n_classes items.
loss_reduction One of tf.losses.Reduction except NONE. Decides how to reduce training loss over batch. Defaults to SUM_OVER_BATCH_SIZE, namely weighted sum of losses divided by batch size * label_dimension.
loss_fn Optional loss function.
name Name of the head. If provided, summary and metrics keys will be suffixed by "/" + name. Also used as name_scope when creating ops.
Attributes
logits_dimension See base_head.Head for details.
loss_reduction See base_head.Head for details.
name See base_head.Head for details. Methods create_estimator_spec View source
create_estimator_spec(
features, mode, logits, labels=None, optimizer=None, trainable_variables=None,
train_op_fn=None, update_ops=None, regularization_losses=None
)
Returns an EstimatorSpec that a model_fn can return. It is recommended to pass all args by name.
Args
features Input dict mapping string feature names to Tensor or SparseTensor objects containing the values for that feature in a minibatch. Often used to fetch the example-weight tensor.
mode Estimator's ModeKeys.
logits Logits Tensor to be used by the head.
labels Labels Tensor, or dict mapping string label names to Tensor objects of the label values.
optimizer A tf.keras.optimizers.Optimizer instance to optimize the loss in TRAIN mode. Namely, it sets train_op = optimizer.get_updates(loss, trainable_variables), which updates variables to minimize loss.
trainable_variables A list or tuple of Variable objects to update to minimize loss. In Tensorflow 1.x, by default these are the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. As Tensorflow 2.x doesn't have collections and GraphKeys, trainable_variables need to be passed explicitly here.
train_op_fn Function that takes a scalar loss Tensor and returns an op to optimize the model with the loss in TRAIN mode. Used if optimizer is None. Exactly one of train_op_fn and optimizer must be set in TRAIN mode. By default, it is None in other modes. If you want to optimize loss yourself, you can pass lambda _: tf.no_op() and then use EstimatorSpec.loss to compute and apply gradients.
update_ops A list or tuple of update ops to be run at training time. For example, layers such as BatchNormalization create mean and variance update ops that need to be run at training time. In Tensorflow 1.x, these are thrown into an UPDATE_OPS collection. As Tensorflow 2.x doesn't have collections, update_ops need to be passed explicitly here.
regularization_losses A list of additional scalar losses to be added to the training loss, such as regularization losses.
Returns EstimatorSpec.
loss View source
loss(
labels, logits, features=None, mode=None, regularization_losses=None
)
Returns regularized training loss. See base_head.Head for details. metrics View source
metrics(
regularization_losses=None
)
Creates metrics. See base_head.Head for details. predictions View source
predictions(
logits, keys=None
)
Return predictions based on keys. See base_head.Head for details.
Args
logits logits Tensor with shape [D0, D1, ... DN, logits_dimension]. For many applications, the shape is [batch_size, logits_dimension].
keys a list or tuple of prediction keys. Each key can be either the class variable of prediction_keys.PredictionKeys or its string value, such as: prediction_keys.PredictionKeys.CLASSES or 'classes'. If not specified, it will return the predictions for all valid keys.
Returns A dict of predictions.
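For example, continuing the Usage snippet above, the output can be restricted to specific keys:
preds = head.predictions(logits, keys=['probabilities'])
print(sorted(preds.keys()))
# expected: ['probabilities']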
update_metrics View source
update_metrics(
eval_metrics, features, logits, labels, regularization_losses=None
)
Updates eval metrics. See base_head.Head for details. | tensorflow.estimator.multiclasshead |
tf.estimator.MultiHead View source on GitHub Creates a Head for multi-objective learning. Inherits From: Head View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.MultiHead
tf.estimator.MultiHead(
heads, head_weights=None
)
This class merges the output of multiple Head objects. Specifically:
For training, it sums the losses of each head and calls train_op_fn with this final loss.
For eval, it merges metrics by adding a head.name suffix to the keys in eval metrics, such as precision/head1.name and precision/head2.name.
For prediction, it merges predictions and updates the keys in the prediction dict to a 2-tuple, (head.name, prediction_key).
It merges export_outputs such that by default the first head is served. Usage:
head1 = tf.estimator.MultiLabelHead(n_classes=2, name='head1')
head2 = tf.estimator.MultiLabelHead(n_classes=3, name='head2')
multi_head = tf.estimator.MultiHead([head1, head2])
logits = {
'head1': np.array([[-10., 10.], [-15., 10.]], dtype=np.float32),
'head2': np.array([[20., -20., 20.], [-30., 20., -20.]],
dtype=np.float32),}
labels = {
'head1': np.array([[1, 0], [1, 1]], dtype=np.int64),
'head2': np.array([[0, 1, 0], [1, 1, 0]], dtype=np.int64),}
features = {'x': np.array(((42,),), dtype=np.float32)}
# For large logits, sigmoid cross entropy loss is approximated as:
# loss = labels * (logits < 0) * (-logits) +
# (1 - labels) * (logits > 0) * logits =>
# head1: expected_unweighted_loss = [[10., 10.], [15., 0.]]
# loss1 = ((10 + 10) / 2 + (15 + 0) / 2) / 2 = 8.75
# head2: expected_unweighted_loss = [[20., 20., 20.], [30., 0., 0]]
# loss2 = ((20 + 20 + 20) / 3 + (30 + 0 + 0) / 3) / 2 = 15.00
# loss = loss1 + loss2 = 8.75 + 15.00 = 23.75
loss = multi_head.loss(labels, logits, features=features)
print('{:.2f}'.format(loss.numpy()))
23.75
eval_metrics = multi_head.metrics()
updated_metrics = multi_head.update_metrics(
eval_metrics, features, logits, labels)
for k in sorted(updated_metrics):
print('{} : {:.2f}'.format(k, updated_metrics[k].result().numpy()))
auc/head1 : 0.17
auc/head2 : 0.33
auc_precision_recall/head1 : 0.60
auc_precision_recall/head2 : 0.40
average_loss/head1 : 8.75
average_loss/head2 : 15.00
loss/head1 : 8.75
loss/head2 : 15.00
preds = multi_head.predictions(logits)
print(preds[('head1', 'logits')])
tf.Tensor(
[[-10. 10.]
[-15. 10.]], shape=(2, 2), dtype=float32)
Usage with a canned estimator: # In `input_fn`, specify labels as a dict keyed by head name:
def input_fn():
features = ...
labels1 = ...
labels2 = ...
return features, {'head1.name': labels1, 'head2.name': labels2}
# In `model_fn`, specify logits as a dict keyed by head name:
def model_fn(features, labels, mode):
# Create simple heads and specify head name.
head1 = tf.estimator.MultiClassHead(n_classes=3, name='head1')
head2 = tf.estimator.BinaryClassHead(name='head2')
# Create MultiHead from two simple heads.
head = tf.estimator.MultiHead([head1, head2])
# Create logits for each head, and combine them into a dict.
logits1, logits2 = logit_fn()
logits = {'head1.name': logits1, 'head2.name': logits2}
# Return the merged EstimatorSpec
return head.create_estimator_spec(..., logits=logits, ...)
# Create an estimator with this model_fn.
estimator = tf.estimator.Estimator(model_fn=model_fn)
estimator.train(input_fn=input_fn)
Also supports logits as a Tensor of shape [D0, D1, ... DN, logits_dimension]. It will split the Tensor along the last dimension and distribute it appropriately among the heads. E.g.: # Input logits.
logits = np.array([[-1., 1., 2., -2., 2.], [-1.5, 1., -3., 2., -2.]],
dtype=np.float32)
# Suppose head1 and head2 have the following logits dimension.
head1.logits_dimension = 2
head2.logits_dimension = 3
# After splitting, the result will be:
logits_dict = {'head1_name': [[-1., 1.], [-1.5, 1.]],
'head2_name': [[2., -2., 2.], [-3., 2., -2.]]}
Usage: def model_fn(features, labels, mode):
# Create simple heads and specify head name.
head1 = tf.estimator.MultiClassHead(n_classes=3, name='head1')
head2 = tf.estimator.BinaryClassHead(name='head2')
# Create multi-head from two simple heads.
head = tf.estimator.MultiHead([head1, head2])
# Create logits for the multihead. The result of logits is a `Tensor`.
logits = logit_fn(logits_dimension=head.logits_dimension)
# Return the merged EstimatorSpec
return head.create_estimator_spec(..., logits=logits, ...)
Args
heads List or tuple of Head instances. All heads must have name specified. The first head in the list is the default used at serving time.
head_weights Optional list of weights, same length as heads. Used when merging losses to calculate the weighted sum of losses from each head. If None, all losses are weighted equally.
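For example, with illustrative weights the merged training loss becomes a weighted sum of the per-head losses:
head1 = tf.estimator.MultiLabelHead(n_classes=2, name='head1')
head2 = tf.estimator.MultiLabelHead(n_classes=3, name='head2')
# Merged training loss = 1.0 * loss(head1) + 2.0 * loss(head2).
multi_head = tf.estimator.MultiHead([head1, head2], head_weights=[1., 2.])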
Attributes
logits_dimension See base_head.Head for details.
loss_reduction See base_head.Head for details.
name See base_head.Head for details. Methods create_estimator_spec View source
create_estimator_spec(
features, mode, logits, labels=None, optimizer=None, trainable_variables=None,
train_op_fn=None, update_ops=None, regularization_losses=None
)
Returns a model_fn.EstimatorSpec.
Args
features Input dict of Tensor or SparseTensor objects.
mode Estimator's ModeKeys.
logits Input dict keyed by head name, or logits Tensor with shape [D0, D1, ... DN, logits_dimension]. For many applications, the Tensor shape is [batch_size, logits_dimension]. If logits is a Tensor, it will split the Tensor along the last dimension and distribute it appropriately among the heads. Check MultiHead for examples.
labels Input dict keyed by head name. For each head, the label value can be an integer or string Tensor with shape matching its corresponding logits. labels is a required argument when mode equals TRAIN or EVAL.
optimizer A tf.keras.optimizers.Optimizer instance to optimize the loss in TRAIN mode. Namely, it sets train_op = optimizer.get_updates(loss, trainable_variables), which updates variables to minimize loss.
trainable_variables A list or tuple of Variable objects to update to minimize loss. In Tensorflow 1.x, by default these are the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. As Tensorflow 2.x doesn't have collections and GraphKeys, trainable_variables need to be passed explicitly here.
train_op_fn Function that takes a scalar loss Tensor and returns train_op. Used if optimizer is None.
update_ops A list or tuple of update ops to be run at training time. For example, layers such as BatchNormalization create mean and variance update ops that need to be run at training time. In Tensorflow 1.x, these are thrown into an UPDATE_OPS collection. As Tensorflow 2.x doesn't have collections, update_ops need to be passed explicitly here.
regularization_losses A list of additional scalar losses to be added to the training loss, such as regularization losses. These losses are usually expressed as a batch average, so for best results, in each head, users need to use the default loss_reduction=SUM_OVER_BATCH_SIZE to avoid scaling errors. Compared to the regularization losses for each head, this loss is to regularize the merged loss of all heads in multi head, and will be added to the overall training loss of multi head.
Returns A model_fn.EstimatorSpec instance.
Raises
ValueError If both train_op_fn and optimizer are None in TRAIN mode, or if both are set. If mode is not in Estimator's ModeKeys. loss View source
loss(
labels, logits, features=None, mode=None, regularization_losses=None
)
Returns regularized training loss. See base_head.Head for details. metrics View source
metrics(
regularization_losses=None
)
Creates metrics. See base_head.Head for details. predictions View source
predictions(
logits, keys=None
)
Create predictions. See base_head.Head for details. update_metrics View source
update_metrics(
eval_metrics, features, logits, labels, regularization_losses=None
)
Updates eval metrics. See base_head.Head for details. | tensorflow.estimator.multihead |
tf.estimator.MultiLabelHead View source on GitHub Creates a Head for multi-label classification. Inherits From: Head View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.estimator.MultiLabelHead
tf.estimator.MultiLabelHead(
n_classes, weight_column=None, thresholds=None, label_vocabulary=None,
loss_reduction=losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE, loss_fn=None,
classes_for_class_based_metrics=None, name=None
)
Multi-label classification handles the case where each example may have zero or more associated labels, from a discrete set. This is distinct from MultiClassHead, which has exactly one label per example. Uses sigmoid_cross_entropy loss averaged over classes and a weighted sum over the batch. Namely, if the input logits have shape [batch_size, n_classes], the loss is the average over n_classes and the weighted sum over batch_size. The head expects logits with shape [D0, D1, ... DN, n_classes]. In many applications, the shape is [batch_size, n_classes]. Labels can be:
A multi-hot tensor of shape [D0, D1, ... DN, n_classes].
An integer SparseTensor of class indices. The dense_shape must be [D0, D1, ... DN, ?] and the values within [0, n_classes).
If label_vocabulary is given, a string SparseTensor with dense_shape [D0, D1, ... DN, ?] and values within label_vocabulary, or a multi-hot tensor of shape [D0, D1, ... DN, n_classes].
If weight_column is specified, weights must be of shape [D0, D1, ... DN], or [D0, D1, ... DN, 1]. Also supports custom loss_fn. loss_fn takes (labels, logits) or (labels, logits, features) as arguments and returns unreduced loss with shape [D0, D1, ... DN, 1]. loss_fn must support indicator labels with shape [D0, D1, ... DN, n_classes]. Namely, the head applies label_vocabulary to the input labels before passing them to loss_fn. Usage:
n_classes = 2
head = tf.estimator.MultiLabelHead(n_classes)
logits = np.array([[-1., 1.], [-1.5, 1.5]], dtype=np.float32)
labels = np.array([[1, 0], [1, 1]], dtype=np.int64)
features = {'x': np.array([[41], [42]], dtype=np.int32)}
# expected_loss = sum(_sigmoid_cross_entropy(labels, logits)) / batch_size
# = sum(1.31326169, 0.9514133) / 2 = 1.13
loss = head.loss(labels, logits, features=features)
print('{:.2f}'.format(loss.numpy()))
1.13
eval_metrics = head.metrics()
updated_metrics = head.update_metrics(
eval_metrics, features, logits, labels)
for k in sorted(updated_metrics):
print('{} : {:.2f}'.format(k, updated_metrics[k].result().numpy()))
auc : 0.33
auc_precision_recall : 0.77
average_loss : 1.13
preds = head.predictions(logits)
print(preds['logits'])
tf.Tensor(
[[-1. 1. ]
[-1.5 1.5]], shape=(2, 2), dtype=float32)
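Labels may equivalently be passed as an integer SparseTensor of class indices, as described above; the following sketch continues the snippet above and encodes the same labels ([[1, 0], [1, 1]]) sparsely:
# Example 0 has class 0; example 1 has classes 0 and 1.
sparse_labels = tf.sparse.SparseTensor(
    indices=[[0, 0], [1, 0], [1, 1]],
    values=tf.constant([0, 0, 1], dtype=tf.int64),
    dense_shape=[2, 2])
loss = head.loss(sparse_labels, logits, features=features)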
Usage with a canned estimator: my_head = tf.estimator.MultiLabelHead(n_classes=3)
my_estimator = tf.estimator.DNNEstimator(
head=my_head,
hidden_units=...,
feature_columns=...)
It can also be used with a custom model_fn. Example: def _my_model_fn(features, labels, mode):
my_head = tf.estimator.MultiLabelHead(n_classes=3)
logits = tf.keras.Model(...)(features)
return my_head.create_estimator_spec(
features=features,
mode=mode,
labels=labels,
    optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1),
logits=logits)
my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn)
Args
n_classes Number of classes, must be greater than 1 (for 1 class, use BinaryClassHead).
weight_column A string or a NumericColumn created by tf.feature_column.numeric_column defining a feature column representing weights. It is used to down-weight or boost examples during training. It will be multiplied by the loss of the example. Per-class weighting is not supported.
thresholds Iterable of floats in the range (0, 1). Accuracy, precision and recall metrics are evaluated for each threshold value. The threshold is applied to the predicted probabilities, i.e. above the threshold is true, below is false.
label_vocabulary A list of strings representing possible label values. If it is not given, labels must already be encoded as integers within [0, n_classes) or as a multi-hot Tensor. If given, labels must be a string SparseTensor with values from label_vocabulary. Errors will be raised if label_vocabulary is not provided but labels are strings.
loss_reduction One of tf.losses.Reduction except NONE. Decides how to reduce training loss over batch. Defaults to SUM_OVER_BATCH_SIZE, namely weighted sum of losses divided by batch size.
loss_fn Optional loss function.
classes_for_class_based_metrics List of integer class IDs or string class names for which per-class metrics are evaluated. If integers, all must be in the range [0, n_classes - 1]. If strings, all must be in label_vocabulary.
name Name of the head. If provided, summary and metrics keys will be suffixed by "/" + name. Also used as name_scope when creating ops.
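For example, a head reporting thresholded and per-class metrics (illustrative values):
head = tf.estimator.MultiLabelHead(
    n_classes=3,
    thresholds=[0.5],                        # accuracy/precision/recall at 0.5
    classes_for_class_based_metrics=[0, 2])  # per-class metrics for classes 0 and 2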
Attributes
logits_dimension See base_head.Head for details.
loss_reduction See base_head.Head for details.
name See base_head.Head for details. Methods create_estimator_spec View source
create_estimator_spec(
features, mode, logits, labels=None, optimizer=None, trainable_variables=None,
train_op_fn=None, update_ops=None, regularization_losses=None
)
Returns an EstimatorSpec that a model_fn can return. It is recommended to pass all args by name.
Args
features Input dict mapping string feature names to Tensor or SparseTensor objects containing the values for that feature in a minibatch. Often used to fetch the example-weight tensor.
mode Estimator's ModeKeys.
logits Logits Tensor to be used by the head.
labels Labels Tensor, or dict mapping string label names to Tensor objects of the label values.
optimizer A tf.keras.optimizers.Optimizer instance to optimize the loss in TRAIN mode. Namely, it sets train_op = optimizer.get_updates(loss, trainable_variables), which updates variables to minimize loss.
trainable_variables A list or tuple of Variable objects to update to minimize loss. In Tensorflow 1.x, by default these are the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. As Tensorflow 2.x doesn't have collections and GraphKeys, trainable_variables need to be passed explicitly here.
train_op_fn Function that takes a scalar loss Tensor and returns an op to optimize the model with the loss in TRAIN mode. Used if optimizer is None. Exactly one of train_op_fn and optimizer must be set in TRAIN mode. By default, it is None in other modes. If you want to optimize loss yourself, you can pass lambda _: tf.no_op() and then use EstimatorSpec.loss to compute and apply gradients.
update_ops A list or tuple of update ops to be run at training time. For example, layers such as BatchNormalization create mean and variance update ops that need to be run at training time. In Tensorflow 1.x, these are thrown into an UPDATE_OPS collection. As Tensorflow 2.x doesn't have collections, update_ops need to be passed explicitly here.
regularization_losses A list of additional scalar losses to be added to the training loss, such as regularization losses.
Returns EstimatorSpec.
loss View source
loss(
labels, logits, features=None, mode=None, regularization_losses=None
)
Returns regularized training loss. See base_head.Head for details. metrics View source
metrics(
regularization_losses=None
)
Creates metrics. See base_head.Head for details. predictions View source
predictions(
logits, keys=None
)
Return predictions based on keys. See base_head.Head for details.
Args
logits logits Tensor with shape [D0, D1, ... DN, logits_dimension]. For many applications, the shape is [batch_size, logits_dimension].
keys a list of prediction keys. Each key can be either the class variable of prediction_keys.PredictionKeys or its string value, such as: prediction_keys.PredictionKeys.LOGITS or 'logits'.
Returns A dict of predictions.
update_metrics View source
update_metrics(
eval_metrics, features, logits, labels, regularization_losses=None
)
Updates eval metrics. See base_head.Head for details. | tensorflow.estimator.multilabelhead |