tf.data.experimental.service.DispatcherConfig Configuration class for tf.data service dispatchers.
Compat alias for migration: tf.compat.v1.data.experimental.service.DispatcherConfig
tf.data.experimental.service.DispatcherConfig(
port=0, protocol='grpc', work_dir=None, fault_tolerant_mode=False,
job_gc_check_interval_ms=None, job_gc_timeout_ms=None
)
Fields:
port: Specifies the port to bind to. A value of 0 indicates that the server may bind to any available port.
protocol: The protocol to use for communicating with the tf.data service. Defaults to "grpc".
work_dir: A directory to store dispatcher state in. This argument is required for the dispatcher to be able to recover from restarts.
fault_tolerant_mode: Whether the dispatcher should write its state to a journal so that it can recover from restarts. Dispatcher state, including registered datasets and created jobs, is synchronously written to the journal before responding to RPCs. If True, work_dir must also be specified.
job_gc_check_interval_ms: How often the dispatcher should scan through its jobs to delete old and unused ones, in milliseconds. If not set, the runtime will select a reasonable default. A higher value will reduce load on the dispatcher, while a lower value will reduce the time it takes for the dispatcher to garbage collect expired jobs.
job_gc_timeout_ms: How long a job needs to be unused before it becomes a candidate for garbage collection, in milliseconds. If not set, the runtime will select a reasonable default. A higher value will cause jobs to stay around longer with no consumers. This is useful if there is a large gap in time between when consumers read from the job. A lower value will reduce the time it takes to reclaim the resources from expired jobs.
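A minimal sketch combining the fields above into a fault-tolerant dispatcher; the port, directory, and garbage-collection values are illustrative, and tensorflow is assumed to be imported as tf:
config = tf.data.experimental.service.DispatcherConfig(
    port=5050,                         # bind to a fixed port instead of a random free one
    work_dir="/tmp/dispatcher_state",  # placeholder directory for the state journal
    fault_tolerant_mode=True,          # requires work_dir to be set
    job_gc_check_interval_ms=60000,    # scan for expired jobs once per minute
    job_gc_timeout_ms=300000)          # collect jobs that have been unused for five minutes
dispatcher = tf.data.experimental.service.DispatchServer(config)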
Attributes
port
protocol
work_dir
fault_tolerant_mode
job_gc_check_interval_ms
job_gc_timeout_ms
tf.data.experimental.service.DispatchServer An in-process tf.data service dispatch server.
tf.data.experimental.service.DispatchServer(
config=None, start=True
)
A tf.data.experimental.service.DispatchServer coordinates a cluster of tf.data.experimental.service.WorkerServers. When the workers start, they register themselves with the dispatcher.
dispatcher = tf.data.experimental.service.DispatchServer()
dispatcher_address = dispatcher.target.split("://")[1]
worker = tf.data.experimental.service.WorkerServer(
    tf.data.experimental.service.WorkerConfig(
        dispatcher_address=dispatcher_address))
dataset = tf.data.Dataset.range(10)
dataset = dataset.apply(tf.data.experimental.service.distribute(
processing_mode="parallel_epochs", service=dispatcher.target))
print(list(dataset.as_numpy_iterator()))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
When starting a dedicated tf.data dispatch process, use join() to block indefinitely after starting up the server. dispatcher = tf.data.experimental.service.DispatchServer(
tf.data.experimental.service.DispatcherConfig(port=5050))
dispatcher.join()
To start a DispatchServer in fault-tolerant mode, set work_dir and fault_tolerant_mode like below: dispatcher = tf.data.experimental.service.DispatchServer(
tf.data.experimental.service.DispatcherConfig(
port=5050,
work_dir="gs://my-bucket/dispatcher/work_dir",
fault_tolerant_mode=True))
Args
config (Optional.) A tf.data.experimental.service.DispatcherConfig configuration. If None, the dispatcher will use default configuration values.
start (Optional.) Boolean, indicating whether to start the server after creating it. Defaults to True.
Attributes
target Returns a target that can be used to connect to the server.
dispatcher = tf.data.experimental.service.DispatchServer()
dataset = tf.data.Dataset.range(10)
dataset = dataset.apply(tf.data.experimental.service.distribute(
processing_mode="parallel_epochs", service=dispatcher.target))
The returned string will be in the form protocol://address, e.g. "grpc://localhost:5050".
Methods join View source
join()
Blocks until the server has shut down. This is useful when starting a dedicated dispatch process. dispatcher = tf.data.experimental.service.DispatchServer(
tf.data.experimental.service.DispatcherConfig(port=5050))
dispatcher.join()
Raises
tf.errors.OpError Or one of its subclasses if an error occurs while joining the server. start View source
start()
Starts this server.
dispatcher = tf.data.experimental.service.DispatchServer(start=False)
dispatcher.start()
Raises
tf.errors.OpError Or one of its subclasses if an error occurs while starting the server.
tf.data.experimental.service.distribute A transformation that moves dataset processing to the tf.data service.
Compat alias for migration: tf.compat.v1.data.experimental.service.distribute
tf.data.experimental.service.distribute(
processing_mode, service, job_name=None, max_outstanding_requests=None
)
When you iterate over a dataset containing the distribute transformation, the tf.data service creates a "job" which produces data for the dataset iteration. The tf.data service uses a cluster of workers to prepare data for training your model. The processing_mode argument to tf.data.experimental.service.distribute describes how to leverage multiple workers to process the input dataset. Currently, there are two processing modes to choose from: "distributed_epoch" and "parallel_epochs".
"distributed_epoch" means that the dataset will be split across all tf.data service workers. The dispatcher produces "splits" for the dataset and sends them to workers for further processing. For example, if a dataset begins with a list of filenames, the dispatcher will iterate through the filenames and send the filenames to tf.data workers, which will perform the rest of the dataset transformations on those files. "distributed_epoch" is useful when your model needs to see each element of the dataset exactly once, or if it needs to see the data in a generally-sequential order. "distributed_epoch" only works for datasets with splittable sources, such as Dataset.from_tensor_slices, Dataset.list_files, or Dataset.range.
"parallel_epochs" means that the entire input dataset will be processed independently by each of the tf.data service workers. For this reason, it is important to shuffle data (e.g. filenames) non-deterministically, so that each worker will process the elements of the dataset in a different order. "parallel_epochs" can be used to distribute datasets that aren't splittable. With two workers, "parallel_epochs" will produce every element of the dataset twice:
dispatcher = tf.data.experimental.service.DispatchServer()
dispatcher_address = dispatcher.target.split("://")[1]
# Start two workers
workers = [
tf.data.experimental.service.WorkerServer(
tf.data.experimental.service.WorkerConfig(
dispatcher_address=dispatcher_address)) for _ in range(2)
]
dataset = tf.data.Dataset.range(10)
dataset = dataset.apply(tf.data.experimental.service.distribute(
processing_mode="parallel_epochs", service=dispatcher.target))
print(sorted(list(dataset.as_numpy_iterator())))
[0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9]
"distributed_epoch", on the other hand, will still produce each element once:
dispatcher = tf.data.experimental.service.DispatchServer()
dispatcher_address = dispatcher.target.split("://")[1]
workers = [
tf.data.experimental.service.WorkerServer(
tf.data.experimental.service.WorkerConfig(
dispatcher_address=dispatcher_address)) for _ in range(2)
]
dataset = tf.data.Dataset.range(10)
dataset = dataset.apply(tf.data.experimental.service.distribute(
processing_mode="distributed_epoch", service=dispatcher.target))
print(sorted(list(dataset.as_numpy_iterator())))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
When using apply(tf.data.experimental.service.distribute(...)), the dataset before the apply transformation executes within the tf.data service, while the operations after apply happen within the local process.
dispatcher = tf.data.experimental.service.DispatchServer()
dispatcher_address = dispatcher.target.split("://")[1]
workers = [
tf.data.experimental.service.WorkerServer(
tf.data.experimental.service.WorkerConfig(
dispatcher_address=dispatcher_address)) for _ in range(2)
]
dataset = tf.data.Dataset.range(5)
dataset = dataset.map(lambda x: x*x)
dataset = dataset.apply(
tf.data.experimental.service.distribute("parallel_epochs",
dispatcher.target))
dataset = dataset.map(lambda x: x+1)
print(sorted(list(dataset.as_numpy_iterator())))
[1, 1, 2, 2, 5, 5, 10, 10, 17, 17]
In the above example, the dataset operations (before applying the distribute function on the elements) will be executed on the tf.data workers, and the elements are provided over RPC. The remaining transformations (after the call to distribute) will be executed locally. The dispatcher and the workers will bind to unused free ports (which are chosen at random), in order to communicate with each other. However, to bind them to specific ports, the port parameter can be passed.
The job_name argument allows jobs to be shared across multiple datasets. Instead of each dataset creating its own job, all datasets with the same job_name will consume from the same job. A new job will be created for each iteration of the dataset (with each repetition of Dataset.repeat counting as a new iteration). Suppose the DispatchServer is serving on localhost:5000 and two training workers (in either a single client or multi-client setup) iterate over the below dataset, and there is a single tf.data worker:
range5_dataset = tf.data.Dataset.range(5)
dataset = range5_dataset.apply(tf.data.experimental.service.distribute(
"parallel_epochs", "grpc://localhost:5000", job_name="my_job_name"))
for iteration in range(3):
print(list(dataset))
The elements of each job will be split between the two processes, with elements being consumed by the processes on a first-come first-served basis. One possible result is that process 1 prints [0, 2, 4]
[0, 1, 3]
[1]
and process 2 prints [1, 3]
[2, 4]
[0, 2, 3, 4]
Job names must not be re-used across different training jobs within the lifetime of the tf.data service. In general, the tf.data service is expected to live for the duration of a single training job. To use the tf.data service with multiple training jobs, make sure to use different job names to avoid conflicts. For example, suppose a training job calls distribute with job_name="job" and reads until end of input. If another independent job connects to the same tf.data service and tries to read from job_name="job", it will immediately receive end of input, without getting any data.
Keras and Distribution Strategies
The dataset produced by the distribute transformation can be passed to Keras' Model.fit or Distribution Strategy's tf.distribute.Strategy.experimental_distribute_dataset like any other tf.data.Dataset. We recommend setting a job_name on the call to distribute so that if there are multiple workers, they read data from the same job, as sketched below. Note that the autosharding normally performed by experimental_distribute_dataset will be disabled when setting a job_name, since sharing the job already results in splitting data across the workers. When using a shared job, data will be dynamically balanced across workers, so that they reach end of input at about the same time. This results in better worker utilization than with autosharding, where each worker processes an independent set of files, and some workers may run out of data earlier than others.
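A minimal sketch of this pattern, assuming tensorflow is imported as tf and a dispatcher is already serving at the placeholder address below (the job name is likewise illustrative):
dataset = tf.data.Dataset.range(100)
dataset = dataset.apply(tf.data.experimental.service.distribute(
    processing_mode="parallel_epochs",
    service="grpc://localhost:5000",  # placeholder dispatcher address
    job_name="shared_job"))           # all consumers read from the same shared job
# The resulting dataset can be passed to Keras or a distribution strategy, e.g.
# model.fit(dataset) or strategy.experimental_distribute_dataset(dataset).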
Args
processing_mode A string specifying the policy for how data should be processed by tf.data workers. Can be either "parallel_epochs" to have each tf.data worker process a copy of the dataset, or "distributed_epoch" to split a single iteration of the dataset across all the workers.
service A string indicating how to connect to the tf.data service. The string should be in the format "protocol://address", e.g. "grpc://localhost:5000".
job_name (Optional.) The name of the job. This argument makes it possible for multiple datasets to share the same job. The default behavior is that the dataset creates anonymous, exclusively owned jobs.
max_outstanding_requests (Optional.) A limit on how many elements may be requested at the same time. You can use this option to control the amount of memory used, since distribute won't use more than element_size * max_outstanding_requests of memory.
Returns
Dataset A Dataset of the elements produced by the data service.
tf.data.experimental.service.from_dataset_id Creates a dataset which reads data from the tf.data service.
Compat alias for migration: tf.compat.v1.data.experimental.service.from_dataset_id
tf.data.experimental.service.from_dataset_id(
processing_mode, service, dataset_id, element_spec=None, job_name=None,
max_outstanding_requests=None
)
This is useful when the dataset is registered by one process, then used in another process. When the same process is both registering and reading from the dataset, it is simpler to use tf.data.experimental.service.distribute instead. Before using from_dataset_id, the dataset must have been registered with the tf.data service using tf.data.experimental.service.register_dataset. register_dataset returns a dataset id for the registered dataset. That is the dataset_id which should be passed to from_dataset_id. The element_spec argument indicates the tf.TypeSpecs for the elements produced by the dataset. Currently element_spec must be explicitly specified, and match the dataset registered under dataset_id. element_spec defaults to None so that in the future we can support automatically discovering the element_spec by querying the tf.data service. tf.data.experimental.service.distribute is a convenience method which combines register_dataset and from_dataset_id into a dataset transformation. See the documentation for tf.data.experimental.service.distribute for more detail about how from_dataset_id works.
dispatcher = tf.data.experimental.service.DispatchServer()
dispatcher_address = dispatcher.target.split("://")[1]
worker = tf.data.experimental.service.WorkerServer(
tf.data.experimental.service.WorkerConfig(
dispatcher_address=dispatcher_address))
dataset = tf.data.Dataset.range(10)
dataset_id = tf.data.experimental.service.register_dataset(
dispatcher.target, dataset)
dataset = tf.data.experimental.service.from_dataset_id(
processing_mode="parallel_epochs",
service=dispatcher.target,
dataset_id=dataset_id,
element_spec=dataset.element_spec)
print(list(dataset.as_numpy_iterator()))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Args
processing_mode A string specifying the policy for how data should be processed by tf.data workers. Can be either "parallel_epochs" to have each tf.data worker process a copy of the dataset, or "distributed_epoch" to split a single iteration of the dataset across all the workers.
service A string indicating how to connect to the tf.data service. The string should be in the format "protocol://address", e.g. "grpc://localhost:5000".
dataset_id The id of the dataset to read from. This id is returned by register_dataset when the dataset is registered with the tf.data service.
element_spec A nested structure of tf.TypeSpecs representing the type of elements produced by the dataset. Use tf.data.Dataset.element_spec to see the element spec for a given dataset.
job_name (Optional.) The name of the job. This argument makes it possible for multiple datasets to share the same job. The default behavior is that the dataset creates anonymous, exclusively owned jobs.
max_outstanding_requests (Optional.) A limit on how many elements may be requested at the same time. You can use this option to control the amount of memory used, since distribute won't use more than element_size * max_outstanding_requests of memory.
Returns A tf.data.Dataset which reads from the tf.data service.
tf.data.experimental.service.register_dataset Registers a dataset with the tf.data service.
Compat alias for migration: tf.compat.v1.data.experimental.service.register_dataset
tf.data.experimental.service.register_dataset(
service, dataset
)
register_dataset registers a dataset with the tf.data service so that datasets can be created later with tf.data.experimental.service.from_dataset_id. This is useful when the dataset is registered by one process, then used in another process. When the same process is both registering and reading from the dataset, it is simpler to use tf.data.experimental.service.distribute instead. If the dataset is already registered with the tf.data service, register_dataset returns the already-registered dataset's id.
dispatcher = tf.data.experimental.service.DispatchServer()
dispatcher_address = dispatcher.target.split("://")[1]
worker = tf.data.experimental.service.WorkerServer(
tf.data.experimental.service.WorkerConfig(
dispatcher_address=dispatcher_address))
dataset = tf.data.Dataset.range(10)
dataset_id = tf.data.experimental.service.register_dataset(
dispatcher.target, dataset)
dataset = tf.data.experimental.service.from_dataset_id(
processing_mode="parallel_epochs",
service=dispatcher.target,
dataset_id=dataset_id,
element_spec=dataset.element_spec)
print(list(dataset.as_numpy_iterator()))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Args
service A string indicating how to connect to the tf.data service. The string should be in the format "protocol://address", e.g. "grpc://localhost:5000".
dataset A tf.data.Dataset to register with the tf.data service.
Returns A scalar int64 tensor of the registered dataset's id.
tf.data.experimental.service.WorkerConfig Configuration class for tf.data service workers.
Compat alias for migration: tf.compat.v1.data.experimental.service.WorkerConfig
tf.data.experimental.service.WorkerConfig(
dispatcher_address, worker_address=None, port=0, protocol='grpc',
heartbeat_interval_ms=None, dispatcher_timeout_ms=None
)
Fields:
dispatcher_address: Specifies the address of the dispatcher.
worker_address: Specifies the address of the worker server. This address is passed to the dispatcher so that the dispatcher can tell clients how to connect to this worker.
port: Specifies the port to bind to. A value of 0 indicates that the worker can bind to any available port.
protocol: (Optional.) Specifies the protocol to be used by the server. Defaults to "grpc".
heartbeat_interval_ms: How often the worker should heartbeat to the dispatcher, in milliseconds. If not set, the runtime will select a reasonable default. A higher value will reduce the load on the dispatcher, while a lower value will reduce the time it takes to reclaim resources from finished jobs.
dispatcher_timeout_ms: How long, in milliseconds, to retry requests to the dispatcher before giving up and reporting an error. Defaults to 1 hour.
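A minimal sketch of a worker configured with these fields; the addresses, port, and heartbeat interval are illustrative, and tensorflow is assumed to be imported as tf:
config = tf.data.experimental.service.WorkerConfig(
    dispatcher_address="localhost:5050",  # placeholder dispatcher address (no protocol prefix)
    worker_address="localhost:5051",      # address clients should use to reach this worker
    port=5051,                            # bind the worker to a fixed port
    heartbeat_interval_ms=5000)           # heartbeat to the dispatcher every 5 seconds
worker = tf.data.experimental.service.WorkerServer(config)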
Attributes
dispatcher_address
worker_address
port
protocol
heartbeat_interval_ms
dispatcher_timeout_ms
tf.data.experimental.service.WorkerServer An in-process tf.data service worker server.
tf.data.experimental.service.WorkerServer(
config, start=True
)
A tf.data.experimental.service.WorkerServer performs tf.data.Dataset processing for user-defined datasets, and provides the resulting elements over RPC. A worker is associated with a single tf.data.experimental.service.DispatchServer.
dispatcher = tf.data.experimental.service.DispatchServer()
dispatcher_address = dispatcher.target.split("://")[1]
worker = tf.data.experimental.service.WorkerServer(
tf.data.experimental.service.WorkerConfig(
dispatcher_address=dispatcher_address))
dataset = tf.data.Dataset.range(10)
dataset = dataset.apply(tf.data.experimental.service.distribute(
processing_mode="parallel_epochs", service=dispatcher.target))
print(list(dataset.as_numpy_iterator()))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
When starting a dedicated tf.data worker process, use join() to block indefinitely after starting up the server. worker = tf.data.experimental.service.WorkerServer(
    tf.data.experimental.service.WorkerConfig(
        dispatcher_address="localhost:5050", port=5051))
worker.join()
Args
config A tf.data.experimental.service.WorkerConfig configuration.
start (Optional.) Boolean, indicating whether to start the server after creating it. Defaults to True. Methods join View source
join()
Blocks until the server has shut down. This is useful when starting a dedicated worker process. worker_server = tf.data.experimental.service.WorkerServer(
    tf.data.experimental.service.WorkerConfig(
        dispatcher_address="localhost:5050", port=5051))
worker_server.join()
This method currently blocks forever.
Raises
tf.errors.OpError Or one of its subclasses if an error occurs while joining the server. start View source
start()
Starts this server.
Raises
tf.errors.OpError Or one of its subclasses if an error occurs while starting the server.
tf.data.experimental.shuffle_and_repeat View source on GitHub Shuffles and repeats a Dataset, reshuffling with each repetition. (deprecated)
Compat alias for migration: tf.compat.v1.data.experimental.shuffle_and_repeat
tf.data.experimental.shuffle_and_repeat(
buffer_size, count=None, seed=None
)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.shuffle(buffer_size, seed) followed by tf.data.Dataset.repeat(count). Static tf.data optimizations will take care of using the fused implementation.
d = tf.data.Dataset.from_tensor_slices([1, 2, 3])
d = d.apply(tf.data.experimental.shuffle_and_repeat(2, count=2))
[elem.numpy() for elem in d] # doctest: +SKIP
[2, 3, 1, 1, 3, 2]
dataset.apply(
tf.data.experimental.shuffle_and_repeat(buffer_size, count, seed))
produces the same output as dataset.shuffle(
buffer_size, seed=seed, reshuffle_each_iteration=True).repeat(count)
In each repetition, this dataset fills a buffer with buffer_size elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, set the buffer size equal to the full size of the dataset. For instance, if your dataset contains 10,000 elements but buffer_size is set to 1,000, then shuffle will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer.
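A brief sketch of the replacement recommended in the deprecation notice above, assuming tensorflow is imported as tf; the buffer size, count, and seed are illustrative:
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.shuffle(buffer_size=2, seed=42).repeat(count=2)
# Equivalent to applying shuffle_and_repeat(2, count=2, seed=42); static tf.data
# optimizations fuse the shuffle and repeat automatically.
print(list(dataset.as_numpy_iterator()))  # six elements, e.g. [2, 1, 3, 3, 1, 2]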
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the maximum number of elements that will be buffered for shuffling.
count (Optional.) A tf.int64 scalar tf.Tensor, representing the number of times the dataset should be repeated. The default behavior (if count is None or -1) is for the dataset to be repeated indefinitely.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
Returns A Dataset transformation function, which can be passed to tf.data.Dataset.apply.
tf.data.experimental.snapshot API to persist the output of the input dataset.
Compat alias for migration: tf.compat.v1.data.experimental.snapshot
tf.data.experimental.snapshot(
path, compression='AUTO', reader_func=None, shard_func=None
)
The snapshot API allows users to transparently persist the output of their preprocessing pipeline to disk, and materialize the pre-processed data on a different training run. This API enables repeated preprocessing steps to be consolidated, and allows re-use of already processed data, trading off disk storage and network bandwidth for freeing up more valuable CPU resources and accelerator compute time. https://github.com/tensorflow/community/blob/master/rfcs/20200107-tf-data-snapshot.md has detailed design documentation of this feature.
Users can specify various options to control the behavior of snapshot, including how snapshots are read from and written to, by passing in user-defined functions to the reader_func and shard_func parameters.
shard_func is a user specified function that maps input elements to snapshot shards. Users may want to specify this function to control how snapshot files should be written to disk. Below is an example of how a potential shard_func could be written.
dataset = ...
dataset = dataset.enumerate()
dataset = dataset.apply(tf.data.experimental.snapshot("/path/to/snapshot/dir",
shard_func=lambda x, y: x % NUM_SHARDS, ...))
dataset = dataset.map(lambda x, y: y)
reader_func is a user specified function that accepts a single argument: (1) a Dataset of Datasets, each representing a "split" of elements of the original dataset. The cardinality of the input dataset matches the number of the shards specified in the shard_func (see above). The function should return a Dataset of elements of the original dataset. Users may want to specify this function to control how snapshot files should be read from disk, including the amount of shuffling and parallelism.
Here is an example of a standard reader function a user can define. This function enables both dataset shuffling and parallel reading of datasets:
def user_reader_func(datasets):
# shuffle the datasets splits
datasets = datasets.shuffle(NUM_CORES)
# read datasets in parallel and interleave their elements
return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE)
dataset = dataset.apply(tf.data.experimental.snapshot("/path/to/snapshot/dir",
reader_func=user_reader_func))
By default, snapshot parallelizes reads by the number of cores available on the system, but will not attempt to shuffle the data.
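A minimal sketch of the default usage, without custom reader or shard functions, assuming tensorflow is imported as tf; the snapshot path is a placeholder:
dataset = tf.data.Dataset.range(10).map(lambda x: x * 2)  # some preprocessing
dataset = dataset.apply(
    tf.data.experimental.snapshot("/tmp/snapshot_dir"))   # placeholder directory
# The first run writes the snapshot; later runs read the persisted elements.
print(list(dataset.as_numpy_iterator()))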
Args
path Required. A directory to use for storing / loading the snapshot to / from.
compression Optional. The type of compression to apply to the snapshot written to disk. Supported options are GZIP, SNAPPY, AUTO or None. Defaults to AUTO, which attempts to pick an appropriate compression algorithm for the dataset.
reader_func Optional. A function to control how to read data from snapshot shards.
shard_func Optional. A function to control how to shard data when writing a snapshot.
Returns A Dataset transformation function, which can be passed to tf.data.Dataset.apply.
tf.data.experimental.SqlDataset View source on GitHub A Dataset consisting of the results from a SQL query. Inherits From: Dataset
tf.data.experimental.SqlDataset(
driver_name, data_source_name, query, output_types
)
Args
driver_name A 0-D tf.string tensor containing the database type. Currently, the only supported value is 'sqlite'.
data_source_name A 0-D tf.string tensor containing a connection string to connect to the database.
query A 0-D tf.string tensor containing the SQL query to execute.
output_types A tuple of tf.DType objects representing the types of the columns returned by query.
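A brief, hedged sketch of constructing a SqlDataset, assuming tensorflow is imported as tf; the database path, table, and column names are hypothetical:
dataset = tf.data.experimental.SqlDataset(
    driver_name="sqlite",                        # currently the only supported driver
    data_source_name="/path/to/people.sqlite3",  # placeholder connection string
    query="SELECT name, age FROM people",        # hypothetical table and columns
    output_types=(tf.string, tf.int32))
for name, age in dataset.as_numpy_iterator():
    print(name, age)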
Attributes
element_spec The type specification of an element of this dataset.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
Methods apply View source
apply(
transformation_func
)
Applies a transformation function to this dataset. apply enables chaining of custom Dataset transformations, which are represented as functions that take one Dataset argument and return a transformed Dataset.
dataset = tf.data.Dataset.range(100)
def dataset_fn(ds):
return ds.filter(lambda x: x < 5)
dataset = dataset.apply(dataset_fn)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
Args
transformation_func A function that takes one Dataset argument and returns a Dataset.
Returns
Dataset The Dataset returned by applying transformation_func to this dataset. as_numpy_iterator View source
as_numpy_iterator()
Returns an iterator which converts all elements of the dataset to numpy. Use as_numpy_iterator to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using as_numpy_iterator.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)
This method requires that you are running in eager mode and the dataset's element_spec contains only TensorSpec components.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset.as_numpy_iterator():
print(element)
1
2
3
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(list(dataset.as_numpy_iterator()))
[1, 2, 3]
as_numpy_iterator() will preserve the nested structure of dataset elements.
dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]),
'b': [5, 6]})
list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5},
{'a': (2, 4), 'b': 6}]
True
Returns An iterable over the elements of the dataset, with their tensors converted to numpy arrays.
Raises
TypeError if an element contains a non-Tensor value.
RuntimeError if eager execution is not enabled. batch View source
batch(
batch_size, drop_remainder=False
)
Combines consecutive elements of this dataset into batches.
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])]
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3, drop_remainder=True)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5])]
The components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset. cache View source
cache(
filename=''
)
Caches the elements in this dataset. The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data.
Note: For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data.
dataset = tf.data.Dataset.range(5)
dataset = dataset.map(lambda x: x**2)
dataset = dataset.cache()
# The first time reading through the data will generate the data using
# `range` and `map`.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
# Subsequent iterations read from the cache.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to .cache() will have no effect until the cache file is removed or the filename is changed.
dataset = tf.data.Dataset.range(5)
dataset = dataset.cache("/path/to/file") # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
dataset = tf.data.Dataset.range(10)
dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
Note: cache will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call shuffle after calling cache.
Args
filename A tf.string scalar tf.Tensor, representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory.
Returns
Dataset A Dataset. cardinality View source
cardinality()
Returns the cardinality of the dataset, if known. cardinality may return tf.data.INFINITE_CARDINALITY if the dataset contains an infinite number of elements or tf.data.UNKNOWN_CARDINALITY if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file).
dataset = tf.data.Dataset.range(42)
print(dataset.cardinality().numpy())
42
dataset = dataset.repeat()
cardinality = dataset.cardinality()
print((cardinality == tf.data.INFINITE_CARDINALITY).numpy())
True
dataset = dataset.filter(lambda x: True)
cardinality = dataset.cardinality()
print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy())
True
Returns A scalar tf.int64 Tensor representing the cardinality of the dataset. If the cardinality is infinite or unknown, cardinality returns the named constants tf.data.INFINITE_CARDINALITY and tf.data.UNKNOWN_CARDINALITY respectively.
concatenate View source
concatenate(
dataset
)
Creates a Dataset by concatenating the given dataset with this dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ]
ds = a.concatenate(b)
list(ds.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7]
# The input dataset and dataset to be concatenated should have the same
# nested structures and output types.
c = tf.data.Dataset.zip((a, b))
a.concatenate(c)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and (tf.int64, tf.int64)
d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
a.concatenate(d)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and <dtype: 'string'>
Args
dataset Dataset to be concatenated.
Returns
Dataset A Dataset. enumerate View source
enumerate(
start=0
)
Enumerates the elements of this dataset. It is similar to python's enumerate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.enumerate(start=5)
for element in dataset.as_numpy_iterator():
print(element)
(5, 1)
(6, 2)
(7, 3)
# The nested structure of the input dataset determines the structure of
# elements in the resulting dataset.
dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)])
dataset = dataset.enumerate()
for element in dataset.as_numpy_iterator():
print(element)
(0, array([7, 8], dtype=int32))
(1, array([ 9, 10], dtype=int32))
Args
start A tf.int64 scalar tf.Tensor, representing the start value for enumeration.
Returns
Dataset A Dataset. filter View source
filter(
predicate
)
Filters this dataset according to predicate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.filter(lambda x: x < 3)
list(dataset.as_numpy_iterator())
[1, 2]
# `tf.math.equal(x, y)` is required for equality comparison
def filter_fn(x):
return tf.math.equal(x, 1)
dataset = dataset.filter(filter_fn)
list(dataset.as_numpy_iterator())
[1]
Args
predicate A function mapping a dataset element to a boolean.
Returns
Dataset The Dataset containing the elements of this dataset for which predicate is True. flat_map View source
flat_map(
map_func
)
Maps map_func across this dataset and flattens the result. Use flat_map if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements:
dataset = tf.data.Dataset.from_tensor_slices(
[[1, 2, 3], [4, 5, 6], [7, 8, 9]])
dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x))
list(dataset.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7, 8, 9]
tf.data.Dataset.interleave() is a generalization of flat_map, since flat_map produces the same output as tf.data.Dataset.interleave(cycle_length=1)
Args
map_func A function mapping a dataset element to a dataset.
Returns
Dataset A Dataset. from_generator View source
@staticmethod
from_generator(
generator, output_types=None, output_shapes=None, args=None,
output_signature=None
)
Creates a Dataset whose elements are generated by generator. (deprecated arguments) Warning: SOME ARGUMENTS ARE DEPRECATED: (output_shapes, output_types). They will be removed in a future version. Instructions for updating: Use output_signature instead. The generator argument must be a callable object that returns an object that supports the iter() protocol (e.g. a generator function). The elements generated by generator must be compatible with either the given output_signature argument or with the given output_types and (optionally) output_shapes arguments, whichever was specified. The recommended way to call from_generator is to use the output_signature argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by the tf.TypeSpec objects from the output_signature argument:
def gen():
ragged_tensor = tf.ragged.constant([[1, 2], [3]])
yield 42, ragged_tensor
dataset = tf.data.Dataset.from_generator(
gen,
output_signature=(
tf.TensorSpec(shape=(), dtype=tf.int32),
tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32)))
list(dataset.take(1))
[(<tf.Tensor: shape=(), dtype=int32, numpy=42>,
<tf.RaggedTensor [[1, 2], [3]]>)]
There is also a deprecated way to call from_generator, by passing either the output_types argument alone or together with the output_shapes argument. In this case the output of the function will be assumed to consist of tf.Tensor objects with the types defined by output_types and with shapes which are either unknown or defined by output_shapes.
Note: The current implementation of Dataset.from_generator() uses tf.numpy_function and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called Dataset.from_generator(). The body of generator will not be serialized in a GraphDef, and you should not use this method if you need to serialize your model and restore it in a different environment.
Note: If generator depends on mutable global variables or other external state, be aware that the runtime may invoke generator multiple times (in order to support repeating the Dataset) and at any time between the call to Dataset.from_generator() and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in generator before calling Dataset.from_generator().
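A brief sketch of the caching recommendation in the note above, assuming tensorflow is imported as tf; the list and generator names are illustrative:
external_values = [1, 2, 3]            # mutable external state
cached_values = list(external_values)  # cache a copy before building the dataset
def gen():
    for value in cached_values:        # read only from the cached copy
        yield value
dataset = tf.data.Dataset.from_generator(
    gen, output_signature=tf.TensorSpec(shape=(), dtype=tf.int32))
print(list(dataset.as_numpy_iterator()))  # [1, 2, 3]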
Args
generator A callable object that returns an object that supports the iter() protocol. If args is not specified, generator must take no arguments; otherwise it must take as many arguments as there are values in args.
output_types (Optional.) A nested structure of tf.DType objects corresponding to each component of an element yielded by generator.
output_shapes (Optional.) A nested structure of tf.TensorShape objects corresponding to each component of an element yielded by generator.
args (Optional.) A tuple of tf.Tensor objects that will be evaluated and passed to generator as NumPy-array arguments.
output_signature (Optional.) A nested structure of tf.TypeSpec objects corresponding to each component of an element yielded by generator.
Returns
Dataset A Dataset. from_tensor_slices View source
@staticmethod
from_tensor_slices(
tensors
)
Creates a Dataset whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions.
# Slicing a 1D tensor produces scalar tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
list(dataset.as_numpy_iterator())
[1, 2, 3]
# Slicing a 2D tensor produces 1D tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
list(dataset.as_numpy_iterator())
[array([1, 2], dtype=int32), array([3, 4], dtype=int32)]
# Slicing a tuple of 1D tensors produces tuple elements containing
# scalar tensors.
dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6]))
list(dataset.as_numpy_iterator())
[(1, 3, 5), (2, 4, 6)]
# Dictionary structure is also preserved.
dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]})
list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3},
{'a': 2, 'b': 4}]
True
# Two tensors can be combined into one Dataset object.
features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor
labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor
dataset = Dataset.from_tensor_slices((features, labels))
# Both the features and the labels tensors can be converted
# to a Dataset object separately and combined after.
features_dataset = Dataset.from_tensor_slices(features)
labels_dataset = Dataset.from_tensor_slices(labels)
dataset = Dataset.zip((features_dataset, labels_dataset))
# A batched feature and label set can be converted to a Dataset
# in similar fashion.
batched_features = tf.constant([[[1, 3], [2, 3]],
[[2, 1], [1, 2]],
[[3, 3], [3, 2]]], shape=(3, 2, 2))
batched_labels = tf.constant([['A', 'A'],
['B', 'B'],
['A', 'B']], shape=(3, 2, 1))
dataset = Dataset.from_tensor_slices((batched_features, batched_labels))
for element in dataset.as_numpy_iterator():
print(element)
(array([[1, 3],
[2, 3]], dtype=int32), array([[b'A'],
[b'A']], dtype=object))
(array([[2, 1],
[1, 2]], dtype=int32), array([[b'B'],
[b'B']], dtype=object))
(array([[3, 3],
[3, 2]], dtype=int32), array([[b'A'],
[b'B']], dtype=object))
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
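One common way to avoid embedding a large array as a constant (not necessarily the alternative the guide above refers to) is to feed it through a generator; a hedged sketch, assuming tensorflow is imported as tf and numpy as np:
big_array = np.random.rand(1000, 10)  # stands in for a large NumPy array
dataset = tf.data.Dataset.from_generator(
    lambda: iter(big_array),          # rows are produced lazily rather than embedded in the graph
    output_signature=tf.TensorSpec(shape=(10,), dtype=tf.float64))
print(dataset.element_spec)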
Args
tensors A dataset element, with each component having the same size in the first dimension.
Returns
Dataset A Dataset. from_tensors View source
@staticmethod
from_tensors(
tensors
)
Creates a Dataset with a single element, comprising the given tensors. from_tensors produces a dataset containing only a single element. To slice the input tensor into multiple elements, use from_tensor_slices instead.
dataset = tf.data.Dataset.from_tensors([1, 2, 3])
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32)]
dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A'))
list(dataset.as_numpy_iterator())
[(array([1, 2, 3], dtype=int32), b'A')]
# You can use `from_tensors` to produce a dataset which repeats
# the same example many times.
example = tf.constant([1,2,3])
dataset = tf.data.Dataset.from_tensors(example).repeat(2)
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)]
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
Args
tensors A dataset element.
Returns
Dataset A Dataset. interleave View source
interleave(
map_func, cycle_length=None, block_length=None, num_parallel_calls=None,
deterministic=None
)
Maps map_func across this dataset, and interleaves the results. For example, you can use Dataset.interleave() to process many input files concurrently:
# Preprocess 4 files concurrently, and interleave blocks of 16 records
# from each file.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
def parse_fn(filename):
return tf.data.Dataset.range(10)
dataset = dataset.interleave(lambda x:
tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
cycle_length=4, block_length=16)
The cycle_length and block_length arguments control the order in which elements are produced. cycle_length controls the number of input elements that are processed concurrently. If you set cycle_length to 1, this transformation will handle one input element at a time, and will produce identical results to tf.data.Dataset.flat_map. In general, this transformation will apply map_func to cycle_length input elements, open iterators on the returned Dataset objects, and cycle through them producing block_length consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator. For example:
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
# NOTE: New lines indicate "block" boundaries.
dataset = dataset.interleave(
lambda x: Dataset.from_tensors(x).repeat(6),
cycle_length=2, block_length=4)
list(dataset.as_numpy_iterator())
[1, 1, 1, 1,
2, 2, 2, 2,
1, 1,
2, 2,
3, 3, 3, 3,
4, 4, 4, 4,
3, 3,
4, 4,
5, 5, 5, 5,
5, 5]
Note: The order of elements yielded by this transformation is deterministic, as long as map_func is a pure function and deterministic=True. If map_func contains any stateful operations, the order in which that state is accessed is undefined.
Performance can often be improved by setting num_parallel_calls so that interleave will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set deterministic=False.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x),
cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to a dataset.
cycle_length (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If num_parallel_calls is set to tf.data.AUTOTUNE, the cycle_length argument identifies the maximum degree of parallelism.
block_length (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1.
num_parallel_calls (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. list_files View source
@staticmethod
list_files(
file_pattern, shuffle=None, seed=None
)
A dataset of all files matching one or more glob patterns. The file_pattern argument should be a small number of glob patterns. If your filenames have already been globbed, use Dataset.from_tensor_slices(filenames) instead, as re-globbing every filename with list_files may result in poor performance with remote storage systems.
Note: The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a seed or shuffle=False to get results in a deterministic order.
Example: If we had the following files on our filesystem:
/path/to/dir/a.txt
/path/to/dir/b.py
/path/to/dir/c.py
If we pass "/path/to/dir/*.py" as the file_pattern, the dataset would produce:
/path/to/dir/b.py
/path/to/dir/c.py
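A short sketch of the example above, assuming tensorflow is imported as tf and that those files exist (the paths are hypothetical):
dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
for f in dataset:
    print(f.numpy())  # prints the matched .py paths in a deterministic order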
Args
file_pattern A string, a list of strings, or a tf.Tensor of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched.
shuffle (Optional.) If True, the file names will be shuffled randomly. Defaults to True.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
Returns
Dataset A Dataset of strings corresponding to file names. map View source
map(
map_func, num_parallel_calls=None, deterministic=None
)
Maps map_func across the elements of this dataset. This transformation applies map_func to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. map_func can be used to change both the values and the structure of a dataset's elements. For example, adding 1 to each element, or projecting a subset of element components.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1)
list(dataset.as_numpy_iterator())
[2, 3, 4, 5, 6]
The input signature of map_func is determined by the structure of each element in this dataset.
dataset = Dataset.range(5)
# `map_func` takes a single argument of type `tf.Tensor` with the same
# shape and dtype.
result = dataset.map(lambda x: x + 1)
# Each element is a tuple containing two `tf.Tensor` objects.
elements = [(1, "foo"), (2, "bar"), (3, "baz")]
dataset = tf.data.Dataset.from_generator(
lambda: elements, (tf.int32, tf.string))
# `map_func` takes two arguments of type `tf.Tensor`. This function
# projects out just the first component.
result = dataset.map(lambda x_int, y_str: x_int)
list(result.as_numpy_iterator())
[1, 2, 3]
# Each element is a dictionary mapping strings to `tf.Tensor` objects.
elements = ([{"a": 1, "b": "foo"},
{"a": 2, "b": "bar"},
{"a": 3, "b": "baz"}])
dataset = tf.data.Dataset.from_generator(
lambda: elements, {"a": tf.int32, "b": tf.string})
# `map_func` takes a single argument of type `dict` with the same keys
# as the elements.
result = dataset.map(lambda d: str(d["a"]) + d["b"])
The value or values returned by map_func determine the structure of each element in the returned dataset.
dataset = tf.data.Dataset.range(3)
# `map_func` returns two `tf.Tensor` objects.
def g(x):
return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"])
result = dataset.map(g)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None))
# Python primitives, lists, and NumPy arrays are implicitly converted to
# `tf.Tensor`.
def h(x):
return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64)
result = dataset.map(h)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None))
# `map_func` can return nested structures.
def i(x):
return (37.0, [42, 16]), "foo"
result = dataset.map(i)
result.element_spec
((TensorSpec(shape=(), dtype=tf.float32, name=None),
TensorSpec(shape=(2,), dtype=tf.int32, name=None)),
TensorSpec(shape=(), dtype=tf.string, name=None))
map_func can accept as arguments and return any type of dataset element. Note that irrespective of the context in which map_func is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options: 1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code. 2) Use tf.py_function, which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
# transform a string tensor to upper case string using a Python function
def upper_case_fn(t: tf.Tensor):
return t.numpy().decode('utf-8').upper()
d = d.map(lambda x: tf.py_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
3) Use tf.numpy_function, which also allows you to write arbitrary Python code. Note that tf.py_function accepts tf.Tensor whereas tf.numpy_function accepts numpy arrays and returns only numpy arrays. For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
def upper_case_fn(t: np.ndarray):
return t.decode('utf-8').upper()
d = d.map(lambda x: tf.numpy_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
Note that the use of tf.numpy_function and tf.py_function in general precludes the possibility of executing user-defined transformations in parallel (because of Python GIL). Performance can often be improved by setting num_parallel_calls so that map will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set deterministic=False.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1,
num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to another dataset element.
num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. options View source
options()
Returns the options for this dataset and its inputs.
Returns A tf.data.Options object representing the dataset options.
padded_batch View source
padded_batch(
batch_size, padded_shapes=None, padding_values=None, drop_remainder=False
)
Combines consecutive elements of this dataset into padded batches. This transformation combines multiple consecutive elements of the input dataset into a single element. Like tf.data.Dataset.batch, the components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced. Unlike tf.data.Dataset.batch, the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in padded_shapes. The padded_shapes argument determines the resulting shape for each dimension of each component in an output element: If the dimension is a constant, the component will be padded out to that length in that dimension. If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension.
A = (tf.data.Dataset
.range(1, 5, output_type=tf.int32)
.map(lambda x: tf.fill([x], x)))
# Pad to the smallest per-batch size that fits all elements.
B = A.padded_batch(2)
for element in B.as_numpy_iterator():
print(element)
[[1 0]
[2 2]]
[[3 3 3 0]
[4 4 4 4]]
# Pad to a fixed size.
C = A.padded_batch(2, padded_shapes=5)
for element in C.as_numpy_iterator():
print(element)
[[1 0 0 0 0]
[2 2 0 0 0]]
[[3 3 3 0 0]
[4 4 4 4 0]]
# Pad with a custom value.
D = A.padded_batch(2, padded_shapes=5, padding_values=-1)
for element in D.as_numpy_iterator():
print(element)
[[ 1 -1 -1 -1 -1]
[ 2 2 -1 -1 -1]]
[[ 3 3 3 -1 -1]
[ 4 4 4 4 -1]]
# Components of nested elements can be padded independently.
elements = [([1, 2, 3], [10]),
([4, 5], [11, 12])]
dataset = tf.data.Dataset.from_generator(
lambda: iter(elements), (tf.int32, tf.int32))
# Pad the first component of the tuple to length 4, and the second
# component to the smallest size that fits.
dataset = dataset.padded_batch(2,
padded_shapes=([4], [None]),
padding_values=(-1, 100))
list(dataset.as_numpy_iterator())
[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32),
array([[ 10, 100], [ 11, 12]], dtype=int32))]
# Pad with a single value and multiple components.
E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1)
for element in E.as_numpy_iterator():
print(element)
(array([[ 1, -1],
[ 2, 2]], dtype=int32), array([[ 1, -1],
[ 2, 2]], dtype=int32))
(array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32))
See also tf.data.experimental.dense_to_sparse_batch, which combines elements that may have different shapes into a tf.sparse.SparseTensor.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
padded_shapes (Optional.) A nested structure of tf.TensorShape or tf.int64 vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. padded_shapes must be set if any component has an unknown rank.
padding_values (Optional.) A nested structure of scalar-shaped tf.Tensor, representing the padding values to use for the respective components. None represents that the nested structure should be padded with default values. Defaults are 0 for numeric types and the empty string for string types. The padding_values should have the same structure as the input dataset. If padding_values is a single element and the input dataset has multiple components, then the same padding_values will be used to pad every component of the dataset. If padding_values is a scalar, then its value will be broadcasted to match the shape of each component.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset.
Raises
ValueError If a component has an unknown rank, and the padded_shapes argument is not set. prefetch View source
prefetch(
buffer_size
)
Creates a Dataset that prefetches elements from this dataset. Most dataset input pipelines should end with a call to prefetch. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements.
Note: Like other Dataset methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. examples.prefetch(2) will prefetch two elements (2 examples), while examples.batch(20).prefetch(2) will prefetch 2 elements (2 batches, of 20 examples each).
dataset = tf.data.Dataset.range(3)
dataset = dataset.prefetch(2)
list(dataset.as_numpy_iterator())
[0, 1, 2]
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the maximum number of elements that will be buffered when prefetching.
Returns
Dataset A Dataset. range View source
@staticmethod
range(
*args, **kwargs
)
Creates a Dataset of a step-separated range of values.
list(Dataset.range(5).as_numpy_iterator())
[0, 1, 2, 3, 4]
list(Dataset.range(2, 5).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2).as_numpy_iterator())
[1, 3]
list(Dataset.range(1, 5, -2).as_numpy_iterator())
[]
list(Dataset.range(5, 1).as_numpy_iterator())
[]
list(Dataset.range(5, 1, -2).as_numpy_iterator())
[5, 3]
list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator())
[1.0, 3.0]
Args
*args follows the same semantics as python's xrange. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2].
**kwargs output_type: The dtype of the elements of the resulting dataset. (Optional, default: tf.int64.)
Returns
Dataset A RangeDataset.
Raises
ValueError if len(args) == 0. reduce View source
reduce(
initial_state, reduce_func
)
Reduces the input dataset to a single element. The transformation calls reduce_func successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The initial_state argument is used for the initial state and the final state is returned as the result.
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy()
5
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy()
10
Args
initial_state An element representing the initial state of the transformation.
reduce_func A function that maps (old_state, input_element) to new_state. It must take two arguments and return a new state. The structure of new_state must match the structure of initial_state.
Returns A dataset element corresponding to the final state of the transformation.
repeat View source
repeat(
count=None
)
Repeats this dataset so each original value is seen count times.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.repeat(3)
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 3, 1, 2, 3]
Note: If this dataset is a function of global state (e.g. a random number generator), then different repetitions may produce different elements.
Args
count (Optional.) A tf.int64 scalar tf.Tensor, representing the number of times the dataset should be repeated. The default behavior (if count is None or -1) is for the dataset to be repeated indefinitely.
Returns
Dataset A Dataset. shard View source
shard(
num_shards, index
)
Creates a Dataset that includes only 1/num_shards of this dataset. shard is deterministic. The Dataset produced by A.shard(n, i) will contain all elements of A whose index mod n = i.
A = tf.data.Dataset.range(10)
B = A.shard(num_shards=3, index=0)
list(B.as_numpy_iterator())
[0, 3, 6, 9]
C = A.shard(num_shards=3, index=1)
list(C.as_numpy_iterator())
[1, 4, 7]
D = A.shard(num_shards=3, index=2)
list(D.as_numpy_iterator())
[2, 5, 8]
This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset. When reading a single input file, you can shard elements as follows: d = tf.data.TFRecordDataset(input_file)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
Important caveats: Be sure to shard before you use any randomizing operator (such as shuffle). Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline: d = Dataset.list_files(pattern)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.interleave(tf.data.TFRecordDataset,
cycle_length=num_readers, block_length=1)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
Args
num_shards A tf.int64 scalar tf.Tensor, representing the number of shards operating in parallel.
index A tf.int64 scalar tf.Tensor, representing the worker index.
Returns
Dataset A Dataset.
Raises
InvalidArgumentError if num_shards or index are illegal values.
Note: error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. providing a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.)
shuffle View source
shuffle(
buffer_size, seed=None, reshuffle_each_iteration=None
)
Randomly shuffles the elements of this dataset. This dataset fills a buffer with buffer_size elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but buffer_size is set to 1,000, then shuffle will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer. reshuffle_each_iteration controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the repeat transformation:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 0, 2]
In TF 2.0, tf.data.Dataset objects are Python iterables which makes it possible to also create epochs through Python iteration:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the number of elements from this dataset from which the new dataset will sample.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
reshuffle_each_iteration (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to True.)
Returns
Dataset A Dataset. skip View source
skip(
count
)
Creates a Dataset that skips count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.skip(7)
list(dataset.as_numpy_iterator())
[7, 8, 9]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be skipped to form the new dataset. If count is greater than the size of this dataset, the new dataset will contain no elements. If count is -1, skips the entire dataset.
Returns
Dataset A Dataset. take View source
take(
count
)
Creates a Dataset with at most count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.take(3)
list(dataset.as_numpy_iterator())
[0, 1, 2]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be taken to form the new dataset. If count is -1, or if count is greater than the size of this dataset, the new dataset will contain all elements of this dataset.
Returns
Dataset A Dataset. unbatch View source
unbatch()
Splits elements of a dataset into multiple elements. For example, if elements of the dataset are shaped [B, a0, a1, ...], where B may vary for each input element, then for each element in the dataset, the unbatched dataset will contain B consecutive elements of shape [a0, a1, ...].
elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ]
dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64)
dataset = dataset.unbatch()
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 1, 2, 3, 4]
Note: unbatch requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of unbatch.
Returns A Dataset.
window View source
window(
size, shift=None, stride=1, drop_remainder=False
)
Combines (nests of) input elements into a dataset of (nests of) windows. A "window" is a finite dataset of flat elements of size size (or possibly fewer if there are not enough input elements to fill the window and drop_remainder evaluates to False). The shift argument determines the number of input elements by which the window moves on each iteration. If windows and elements are both numbered starting at 0, the first element in window k will be element k * shift of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset. The stride argument determines the stride between the input elements that make up a window (a stride of 2 takes every other input element). For example:
dataset = tf.data.Dataset.range(7).window(2)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1]
[2, 3]
[4, 5]
[6]
dataset = tf.data.Dataset.range(7).window(3, 2, 1, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1, 2]
[2, 3, 4]
[4, 5, 6]
dataset = tf.data.Dataset.range(7).window(3, 1, 2, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 2, 4]
[1, 3, 5]
[2, 4, 6]
Note that when the window transformation is applied to a dataset of nested elements, it produces a dataset of nested windows.
nested = ([1, 2, 3, 4], [5, 6, 7, 8])
dataset = tf.data.Dataset.from_tensor_slices(nested).window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print(tuple(to_numpy(component) for component in window))
([1, 2], [5, 6])
([3, 4], [7, 8])
dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]})
dataset = dataset.window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print({'a': to_numpy(window['a'])})
{'a': [1, 2]}
{'a': [3, 4]}
Args
size A tf.int64 scalar tf.Tensor, representing the number of elements of the input dataset to combine into a window. Must be positive.
shift (Optional.) A tf.int64 scalar tf.Tensor, representing the number of input elements by which the window moves in each iteration. Defaults to size. Must be positive.
stride (Optional.) A tf.int64 scalar tf.Tensor, representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element".
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last windows should be dropped if their size is smaller than size.
Returns
Dataset A Dataset of (nests of) windows -- finite datasets of flat elements created from the (nests of) input elements. with_options View source
with_options(
options
)
Returns a new tf.data.Dataset with the given options set. The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values.
ds = tf.data.Dataset.range(5)
ds = ds.interleave(lambda x: tf.data.Dataset.range(5),
cycle_length=3,
num_parallel_calls=3)
options = tf.data.Options()
# This will make the interleave order non-deterministic.
options.experimental_deterministic = False
ds = ds.with_options(options)
Args
options A tf.data.Options that identifies the options to use.
Returns
Dataset A Dataset with the given options.
Raises
ValueError when an option is set more than once to a non-default value zip View source
@staticmethod
zip(
datasets
)
Creates a Dataset by zipping together the given datasets. This method has similar semantics to the built-in zip() function in Python, with the main difference being that the datasets argument can be an arbitrary nested structure of Dataset objects.
# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ]
ds = tf.data.Dataset.zip((a, b))
list(ds.as_numpy_iterator())
[(1, 4), (2, 5), (3, 6)]
ds = tf.data.Dataset.zip((b, a))
list(ds.as_numpy_iterator())
[(4, 1), (5, 2), (6, 3)]
# The `datasets` argument may contain an arbitrary number of datasets.
c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8],
# [9, 10],
# [11, 12] ]
ds = tf.data.Dataset.zip((a, b, c))
for element in ds.as_numpy_iterator():
print(element)
(1, 4, array([7, 8]))
(2, 5, array([ 9, 10]))
(3, 6, array([11, 12]))
# The number of elements in the resulting dataset is the same as
# the size of the smallest dataset in `datasets`.
d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ]
ds = tf.data.Dataset.zip((a, d))
list(ds.as_numpy_iterator())
[(1, 13), (2, 14)]
Args
datasets A nested structure of datasets.
Returns
Dataset A Dataset. __bool__ View source
__bool__()
__iter__ View source
__iter__()
Creates an iterator for elements of this dataset. The returned iterator implements the Python Iterator protocol.
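For example, in eager mode the returned iterator can be driven manually with Python's built-in iter and next (a minimal sketch):
dataset = tf.data.Dataset.range(2)
iterator = iter(dataset)
print(next(iterator).numpy())
0
print(next(iterator).numpy())
1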
Returns A tf.data.Iterator for the elements of this dataset.
Raises
RuntimeError If not inside of tf.function and not executing eagerly. __len__ View source
__len__()
Returns the length of the dataset if it is known and finite. This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use tf.data.Dataset.cardinality instead.
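For example, assuming eager execution and a dataset with known, finite cardinality (a minimal sketch):
dataset = tf.data.Dataset.range(42)
len(dataset)
42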
Returns An integer representing the length of the dataset.
Raises
RuntimeError If the dataset length is unknown or infinite, or if eager execution is not enabled. __nonzero__ View source
__nonzero__() | tensorflow.data.experimental.sqldataset |
tf.data.experimental.StatsAggregator View source on GitHub A stateful resource that aggregates statistics from one or more iterators.
tf.data.experimental.StatsAggregator()
To record statistics, use one of the custom transformation functions defined in this module when defining your tf.data.Dataset. All statistics will be aggregated by the StatsAggregator that is associated with a particular iterator (see below). For example, to record the latency of producing each element by iterating over a dataset: dataset = ...
dataset = dataset.apply(tf.data.experimental.latency_stats("total_bytes"))
To associate a StatsAggregator with a tf.data.Dataset object, use the following pattern: aggregator = tf.data.experimental.StatsAggregator()
dataset = ...
# Apply `StatsOptions` to associate `dataset` with `aggregator`.
options = tf.data.Options()
options.experimental_stats.aggregator = aggregator
dataset = dataset.with_options(options)
Note: This interface is experimental and expected to change. In particular, we expect to add other implementations of StatsAggregator that provide different ways of exporting statistics, and add more types of statistics. | tensorflow.data.experimental.statsaggregator |
tf.data.experimental.StatsOptions View source on GitHub Represents options for collecting dataset stats using StatsAggregator. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.data.experimental.StatsOptions
tf.data.experimental.StatsOptions()
You can set the stats options of a dataset through the experimental_stats property of tf.data.Options; the property is an instance of tf.data.experimental.StatsOptions. For example, to collect latency stats on all dataset edges, use the following pattern: aggregator = tf.data.experimental.StatsAggregator()
options = tf.data.Options()
options.experimental_stats.aggregator = aggregator
options.experimental_stats.latency_all_edges = True
dataset = dataset.with_options(options)
Attributes
aggregator Associates the given statistics aggregator with the dataset pipeline.
counter_prefix Prefix for the statistics recorded as counter.
latency_all_edges Whether to add latency measurements on all edges. Defaults to False.
prefix Prefix to prepend all statistics recorded for the input dataset with. Methods __eq__ View source
__eq__(
other
)
Return self==value. __ne__ View source
__ne__(
other
)
Return self!=value. | tensorflow.data.experimental.statsoptions |
tf.data.experimental.take_while View source on GitHub A transformation that stops dataset iteration based on a predicate. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.data.experimental.take_while
tf.data.experimental.take_while(
predicate
)
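For illustration, a minimal sketch of applying this transformation, assuming eager execution:
dataset = tf.data.Dataset.range(10)
dataset = dataset.apply(tf.data.experimental.take_while(lambda x: x < 5))
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]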
Args
predicate A function that maps a nested structure of tensors (having shapes and types defined by self.output_shapes and self.output_types) to a scalar tf.bool tensor.
Returns A Dataset transformation function, which can be passed to tf.data.Dataset.apply. | tensorflow.data.experimental.take_while |
tf.data.experimental.TFRecordWriter View source on GitHub Writes a dataset to a TFRecord file. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.data.experimental.TFRecordWriter
tf.data.experimental.TFRecordWriter(
filename, compression_type=None
)
The elements of the dataset must be scalar strings. To serialize dataset elements as strings, you can use the tf.io.serialize_tensor function. dataset = tf.data.Dataset.range(3)
dataset = dataset.map(tf.io.serialize_tensor)
writer = tf.data.experimental.TFRecordWriter("/path/to/file.tfrecord")
writer.write(dataset)
To read back the elements, use TFRecordDataset. dataset = tf.data.TFRecordDataset("/path/to/file.tfrecord")
dataset = dataset.map(lambda x: tf.io.parse_tensor(x, tf.int64))
To shard a dataset across multiple TFRecord files: dataset = ... # dataset to be written
def reduce_func(key, dataset):
filename = tf.strings.join([PATH_PREFIX, tf.strings.as_string(key)])
writer = tf.data.experimental.TFRecordWriter(filename)
writer.write(dataset.map(lambda _, x: x))
return tf.data.Dataset.from_tensors(filename)
dataset = dataset.enumerate()
dataset = dataset.apply(tf.data.experimental.group_by_window(
lambda i, _: i % NUM_SHARDS, reduce_func, tf.int64.max
))
Args
filename a string path indicating where to write the TFRecord data.
compression_type (Optional.) a string indicating what type of compression to use when writing the file. See tf.io.TFRecordCompressionType for what types of compression are available. Defaults to None. Methods write View source
write(
dataset
)
Writes a dataset to a TFRecord file. An operation that writes the content of the specified dataset to the file specified in the constructor. If the file exists, it will be overwritten.
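For illustration, a minimal sketch of writing a small dataset of scalar strings (the output path is an assumption):
# Elements must already be scalar strings.
dataset = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
writer = tf.data.experimental.TFRecordWriter("/tmp/strings.tfrecord")
writer.write(dataset)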
Args
dataset a tf.data.Dataset whose elements are to be written to a file
Returns In graph mode, this returns an operation which when executed performs the write. In eager mode, the write is performed by the method itself and there is no return value.
Raises TypeError: if dataset is not a tf.data.Dataset. TypeError: if the elements produced by the dataset are not scalar strings. | tensorflow.data.experimental.tfrecordwriter |
tf.data.experimental.ThreadingOptions View source on GitHub Represents options for dataset threading. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.data.experimental.ThreadingOptions
tf.data.experimental.ThreadingOptions()
You can set the threading options of a dataset through the experimental_threading property of tf.data.Options; the property is an instance of tf.data.experimental.ThreadingOptions. options = tf.data.Options()
options.experimental_threading.private_threadpool_size = 10
dataset = dataset.with_options(options)
Attributes
max_intra_op_parallelism If set, it overrides the maximum degree of intra-op parallelism.
private_threadpool_size If set, the dataset will use a private threadpool of the given size. Methods __eq__ View source
__eq__(
other
)
Return self==value. __ne__ View source
__ne__(
other
)
Return self!=value. | tensorflow.data.experimental.threadingoptions |
tf.data.experimental.to_variant View source on GitHub Returns a variant representing the given dataset. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.data.experimental.to_variant
tf.data.experimental.to_variant(
dataset
)
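For illustration, a minimal sketch that round-trips a dataset through its variant representation, assuming tf.data.experimental.from_variant is used to rebuild the dataset from the variant and its element_spec:
dataset = tf.data.Dataset.range(3)
variant = tf.data.experimental.to_variant(dataset)
restored = tf.data.experimental.from_variant(variant, dataset.element_spec)
list(restored.as_numpy_iterator())
[0, 1, 2]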
Args
dataset A tf.data.Dataset.
Returns A scalar tf.variant tensor representing the given dataset. | tensorflow.data.experimental.to_variant |
tf.data.experimental.unbatch View source on GitHub Splits elements of a dataset into multiple elements on the batch dimension. (deprecated) View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.data.experimental.unbatch
tf.data.experimental.unbatch()
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.unbatch(). For example, if elements of the dataset are shaped [B, a0, a1, ...], where B may vary for each input element, then for each element in the dataset, the unbatched dataset will contain B consecutive elements of shape [a0, a1, ...]. # NOTE: The following example uses `{ ... }` to represent the contents
# of a dataset.
a = { ['a', 'b', 'c'], ['a', 'b'], ['a', 'b', 'c', 'd'] }
a.unbatch() == {
'a', 'b', 'c', 'a', 'b', 'a', 'b', 'c', 'd'}
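A runnable sketch of the same pattern (note the deprecation; tf.data.Dataset.unbatch is the preferred replacement):
dataset = tf.data.Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6]])
dataset = dataset.apply(tf.data.experimental.unbatch())
list(dataset.as_numpy_iterator())
[1, 2, 3, 4, 5, 6]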
Returns A Dataset transformation function, which can be passed to tf.data.Dataset.apply. | tensorflow.data.experimental.unbatch |
tf.data.experimental.unique View source on GitHub Creates a Dataset from another Dataset, discarding duplicates. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.data.experimental.unique
tf.data.experimental.unique()
Use this transformation to produce a dataset that contains one instance of each unique element in the input. For example: dataset = tf.data.Dataset.from_tensor_slices([1, 37, 2, 37, 2, 1])
# Using `unique()` will drop the duplicate elements.
dataset = dataset.apply(tf.data.experimental.unique()) # ==> { 1, 37, 2 }
Returns A Dataset transformation function, which can be passed to tf.data.Dataset.apply. | tensorflow.data.experimental.unique |
tf.data.FixedLengthRecordDataset View source on GitHub A Dataset of fixed-length records from one or more binary files. Inherits From: Dataset
tf.data.FixedLengthRecordDataset(
filenames, record_bytes, header_bytes=None, footer_bytes=None, buffer_size=None,
compression_type=None, num_parallel_reads=None
)
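For illustration, a minimal sketch of reading fixed-size binary records (the file path and record layout are assumptions):
# Each record is 4 bytes; decode each record as a single int32 value.
dataset = tf.data.FixedLengthRecordDataset(["/path/to/data.bin"], record_bytes=4)
dataset = dataset.map(lambda record: tf.io.decode_raw(record, tf.int32))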
Args
filenames A tf.string tensor or tf.data.Dataset containing one or more filenames.
record_bytes A tf.int64 scalar representing the number of bytes in each record.
header_bytes (Optional.) A tf.int64 scalar representing the number of bytes to skip at the start of a file.
footer_bytes (Optional.) A tf.int64 scalar representing the number of bytes to ignore at the end of a file.
buffer_size (Optional.) A tf.int64 scalar representing the number of bytes to buffer when reading.
compression_type (Optional.) A tf.string scalar evaluating to one of "" (no compression), "ZLIB", or "GZIP".
num_parallel_reads (Optional.) A tf.int64 scalar representing the number of files to read in parallel. If greater than one, the records of files read in parallel are outputted in an interleaved order. If your input pipeline is I/O bottlenecked, consider setting this parameter to a value greater than one to parallelize the I/O. If None, files will be read sequentially.
Attributes
element_spec The type specification of an element of this dataset.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
Methods apply View source
apply(
transformation_func
)
Applies a transformation function to this dataset. apply enables chaining of custom Dataset transformations, which are represented as functions that take one Dataset argument and return a transformed Dataset.
dataset = tf.data.Dataset.range(100)
def dataset_fn(ds):
return ds.filter(lambda x: x < 5)
dataset = dataset.apply(dataset_fn)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
Args
transformation_func A function that takes one Dataset argument and returns a Dataset.
Returns
Dataset The Dataset returned by applying transformation_func to this dataset. as_numpy_iterator View source
as_numpy_iterator()
Returns an iterator which converts all elements of the dataset to numpy. Use as_numpy_iterator to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using as_numpy_iterator.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)
This method requires that you are running in eager mode and the dataset's element_spec contains only TensorSpec components.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset.as_numpy_iterator():
print(element)
1
2
3
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(list(dataset.as_numpy_iterator()))
[1, 2, 3]
as_numpy_iterator() will preserve the nested structure of dataset elements.
dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]),
'b': [5, 6]})
list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5},
{'a': (2, 4), 'b': 6}]
True
Returns An iterable over the elements of the dataset, with their tensors converted to numpy arrays.
Raises
TypeError if an element contains a non-Tensor value.
RuntimeError if eager execution is not enabled. batch View source
batch(
batch_size, drop_remainder=False
)
Combines consecutive elements of this dataset into batches.
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])]
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3, drop_remainder=True)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5])]
The components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset. cache View source
cache(
filename=''
)
Caches the elements in this dataset. The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data.
Note: For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data.
dataset = tf.data.Dataset.range(5)
dataset = dataset.map(lambda x: x**2)
dataset = dataset.cache()
# The first time reading through the data will generate the data using
# `range` and `map`.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
# Subsequent iterations read from the cache.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to .cache() will have no effect until the cache file is removed or the filename is changed.
dataset = tf.data.Dataset.range(5)
dataset = dataset.cache("/path/to/file") # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
dataset = tf.data.Dataset.range(10)
dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
Note: cache will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call shuffle after calling cache.
Args
filename A tf.string scalar tf.Tensor, representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory.
Returns
Dataset A Dataset. cardinality View source
cardinality()
Returns the cardinality of the dataset, if known. cardinality may return tf.data.INFINITE_CARDINALITY if the dataset contains an infinite number of elements or tf.data.UNKNOWN_CARDINALITY if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file).
dataset = tf.data.Dataset.range(42)
print(dataset.cardinality().numpy())
42
dataset = dataset.repeat()
cardinality = dataset.cardinality()
print((cardinality == tf.data.INFINITE_CARDINALITY).numpy())
True
dataset = dataset.filter(lambda x: True)
cardinality = dataset.cardinality()
print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy())
True
Returns A scalar tf.int64 Tensor representing the cardinality of the dataset. If the cardinality is infinite or unknown, cardinality returns the named constants tf.data.INFINITE_CARDINALITY and tf.data.UNKNOWN_CARDINALITY respectively.
concatenate View source
concatenate(
dataset
)
Creates a Dataset by concatenating the given dataset with this dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ]
ds = a.concatenate(b)
list(ds.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7]
# The input dataset and dataset to be concatenated should have the same
# nested structures and output types.
c = tf.data.Dataset.zip((a, b))
a.concatenate(c)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and (tf.int64, tf.int64)
d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
a.concatenate(d)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and <dtype: 'string'>
Args
dataset Dataset to be concatenated.
Returns
Dataset A Dataset. enumerate View source
enumerate(
start=0
)
Enumerates the elements of this dataset. It is similar to python's enumerate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.enumerate(start=5)
for element in dataset.as_numpy_iterator():
print(element)
(5, 1)
(6, 2)
(7, 3)
# The nested structure of the input dataset determines the structure of
# elements in the resulting dataset.
dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)])
dataset = dataset.enumerate()
for element in dataset.as_numpy_iterator():
print(element)
(0, array([7, 8], dtype=int32))
(1, array([ 9, 10], dtype=int32))
Args
start A tf.int64 scalar tf.Tensor, representing the start value for enumeration.
Returns
Dataset A Dataset. filter View source
filter(
predicate
)
Filters this dataset according to predicate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.filter(lambda x: x < 3)
list(dataset.as_numpy_iterator())
[1, 2]
# `tf.math.equal(x, y)` is required for equality comparison
def filter_fn(x):
return tf.math.equal(x, 1)
dataset = dataset.filter(filter_fn)
list(dataset.as_numpy_iterator())
[1]
Args
predicate A function mapping a dataset element to a boolean.
Returns
Dataset The Dataset containing the elements of this dataset for which predicate is True. flat_map View source
flat_map(
map_func
)
Maps map_func across this dataset and flattens the result. Use flat_map if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements:
dataset = tf.data.Dataset.from_tensor_slices(
[[1, 2, 3], [4, 5, 6], [7, 8, 9]])
dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x))
list(dataset.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7, 8, 9]
tf.data.Dataset.interleave() is a generalization of flat_map, since flat_map produces the same output as tf.data.Dataset.interleave(cycle_length=1)
Args
map_func A function mapping a dataset element to a dataset.
Returns
Dataset A Dataset. from_generator View source
@staticmethod
from_generator(
generator, output_types=None, output_shapes=None, args=None,
output_signature=None
)
Creates a Dataset whose elements are generated by generator. (deprecated arguments) Warning: SOME ARGUMENTS ARE DEPRECATED: (output_shapes, output_types). They will be removed in a future version. Instructions for updating: Use output_signature instead. The generator argument must be a callable object that returns an object that supports the iter() protocol (e.g. a generator function). The elements generated by generator must be compatible with either the given output_signature argument or with the given output_types and (optionally) output_shapes arguments, whichever was specified. The recommended way to call from_generator is to use the output_signature argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by tf.TypeSpec objects from the output_signature argument:
def gen():
ragged_tensor = tf.ragged.constant([[1, 2], [3]])
yield 42, ragged_tensor
dataset = tf.data.Dataset.from_generator(
gen,
output_signature=(
tf.TensorSpec(shape=(), dtype=tf.int32),
tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32)))
list(dataset.take(1))
[(<tf.Tensor: shape=(), dtype=int32, numpy=42>,
<tf.RaggedTensor [[1, 2], [3]]>)]
There is also a deprecated way to call from_generator, with either the output_types argument alone or together with the output_shapes argument. In this case the output of the function will be assumed to consist of tf.Tensor objects with the types defined by output_types and with shapes which are either unknown or defined by output_shapes.
Note: The current implementation of Dataset.from_generator() uses tf.numpy_function and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called Dataset.from_generator(). The body of generator will not be serialized in a GraphDef, and you should not use this method if you need to serialize your model and restore it in a different environment.
Note: If generator depends on mutable global variables or other external state, be aware that the runtime may invoke generator multiple times (in order to support repeating the Dataset) and at any time between the call to Dataset.from_generator() and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in generator before calling Dataset.from_generator().
Args
generator A callable object that returns an object that supports the iter() protocol. If args is not specified, generator must take no arguments; otherwise it must take as many arguments as there are values in args.
output_types (Optional.) A nested structure of tf.DType objects corresponding to each component of an element yielded by generator.
output_shapes (Optional.) A nested structure of tf.TensorShape objects corresponding to each component of an element yielded by generator.
args (Optional.) A tuple of tf.Tensor objects that will be evaluated and passed to generator as NumPy-array arguments.
output_signature (Optional.) A nested structure of tf.TypeSpec objects corresponding to each component of an element yielded by generator.
Returns
Dataset A Dataset. from_tensor_slices View source
@staticmethod
from_tensor_slices(
tensors
)
Creates a Dataset whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions.
# Slicing a 1D tensor produces scalar tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
list(dataset.as_numpy_iterator())
[1, 2, 3]
# Slicing a 2D tensor produces 1D tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
list(dataset.as_numpy_iterator())
[array([1, 2], dtype=int32), array([3, 4], dtype=int32)]
# Slicing a tuple of 1D tensors produces tuple elements containing
# scalar tensors.
dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6]))
list(dataset.as_numpy_iterator())
[(1, 3, 5), (2, 4, 6)]
# Dictionary structure is also preserved.
dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]})
list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3},
{'a': 2, 'b': 4}]
True
# Two tensors can be combined into one Dataset object.
features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor
labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor
dataset = Dataset.from_tensor_slices((features, labels))
# Both the features and the labels tensors can be converted
# to a Dataset object separately and combined after.
features_dataset = Dataset.from_tensor_slices(features)
labels_dataset = Dataset.from_tensor_slices(labels)
dataset = Dataset.zip((features_dataset, labels_dataset))
# A batched feature and label set can be converted to a Dataset
# in similar fashion.
batched_features = tf.constant([[[1, 3], [2, 3]],
[[2, 1], [1, 2]],
[[3, 3], [3, 2]]], shape=(3, 2, 2))
batched_labels = tf.constant([['A', 'A'],
['B', 'B'],
['A', 'B']], shape=(3, 2, 1))
dataset = Dataset.from_tensor_slices((batched_features, batched_labels))
for element in dataset.as_numpy_iterator():
print(element)
(array([[1, 3],
[2, 3]], dtype=int32), array([[b'A'],
[b'A']], dtype=object))
(array([[2, 1],
[1, 2]], dtype=int32), array([[b'B'],
[b'B']], dtype=object))
(array([[3, 3],
[3, 2]], dtype=int32), array([[b'A'],
[b'B']], dtype=object))
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
Args
tensors A dataset element, with each component having the same size in the first dimension.
Returns
Dataset A Dataset. from_tensors View source
@staticmethod
from_tensors(
tensors
)
Creates a Dataset with a single element, comprising the given tensors. from_tensors produces a dataset containing only a single element. To slice the input tensor into multiple elements, use from_tensor_slices instead.
dataset = tf.data.Dataset.from_tensors([1, 2, 3])
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32)]
dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A'))
list(dataset.as_numpy_iterator())
[(array([1, 2, 3], dtype=int32), b'A')]
# You can use `from_tensors` to produce a dataset which repeats
# the same example many times.
example = tf.constant([1,2,3])
dataset = tf.data.Dataset.from_tensors(example).repeat(2)
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)]
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
Args
tensors A dataset element.
Returns
Dataset A Dataset. interleave View source
interleave(
map_func, cycle_length=None, block_length=None, num_parallel_calls=None,
deterministic=None
)
Maps map_func across this dataset, and interleaves the results. For example, you can use Dataset.interleave() to process many input files concurrently:
# Preprocess 4 files concurrently, and interleave blocks of 16 records
# from each file.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
def parse_fn(filename):
return tf.data.Dataset.range(10)
dataset = dataset.interleave(lambda x:
tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
cycle_length=4, block_length=16)
The cycle_length and block_length arguments control the order in which elements are produced. cycle_length controls the number of input elements that are processed concurrently. If you set cycle_length to 1, this transformation will handle one input element at a time, and will produce identical results to tf.data.Dataset.flat_map. In general, this transformation will apply map_func to cycle_length input elements, open iterators on the returned Dataset objects, and cycle through them producing block_length consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator. For example:
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
# NOTE: New lines indicate "block" boundaries.
dataset = dataset.interleave(
lambda x: Dataset.from_tensors(x).repeat(6),
cycle_length=2, block_length=4)
list(dataset.as_numpy_iterator())
[1, 1, 1, 1,
2, 2, 2, 2,
1, 1,
2, 2,
3, 3, 3, 3,
4, 4, 4, 4,
3, 3,
4, 4,
5, 5, 5, 5,
5, 5]
Note: The order of elements yielded by this transformation is deterministic, as long as map_func is a pure function and deterministic=True. If map_func contains any stateful operations, the order in which that state is accessed is undefined.
Performance can often be improved by setting num_parallel_calls so that interleave will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set deterministic=False.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x),
cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to a dataset.
cycle_length (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If num_parallel_calls is set to tf.data.AUTOTUNE, the cycle_length argument identifies the maximum degree of parallelism.
block_length (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1.
num_parallel_calls (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. list_files View source
@staticmethod
list_files(
file_pattern, shuffle=None, seed=None
)
A dataset of all files matching one or more glob patterns. The file_pattern argument should be a small number of glob patterns. If your filenames have already been globbed, use Dataset.from_tensor_slices(filenames) instead, as re-globbing every filename with list_files may result in poor performance with remote storage systems.
Note: The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a seed or shuffle=False to get results in a deterministic order.
Example: If we had the following files on our filesystem: /path/to/dir/a.txt, /path/to/dir/b.py, /path/to/dir/c.py. If we pass "/path/to/dir/*.py" as the file_pattern argument, the dataset would produce: /path/to/dir/b.py, /path/to/dir/c.py
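A minimal sketch of this example, with shuffle=False for a deterministic order; it would print the two matching .py paths:
dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
for f in dataset:
  print(f.numpy())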
Args
file_pattern A string, a list of strings, or a tf.Tensor of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched.
shuffle (Optional.) If True, the file names will be shuffled randomly. Defaults to True.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
Returns
Dataset A Dataset of strings corresponding to file names. map View source
map(
map_func, num_parallel_calls=None, deterministic=None
)
Maps map_func across the elements of this dataset. This transformation applies map_func to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. map_func can be used to change both the values and the structure of a dataset's elements. For example, adding 1 to each element, or projecting a subset of element components.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1)
list(dataset.as_numpy_iterator())
[2, 3, 4, 5, 6]
The input signature of map_func is determined by the structure of each element in this dataset.
dataset = Dataset.range(5)
# `map_func` takes a single argument of type `tf.Tensor` with the same
# shape and dtype.
result = dataset.map(lambda x: x + 1)
# Each element is a tuple containing two `tf.Tensor` objects.
elements = [(1, "foo"), (2, "bar"), (3, "baz")]
dataset = tf.data.Dataset.from_generator(
lambda: elements, (tf.int32, tf.string))
# `map_func` takes two arguments of type `tf.Tensor`. This function
# projects out just the first component.
result = dataset.map(lambda x_int, y_str: x_int)
list(result.as_numpy_iterator())
[1, 2, 3]
# Each element is a dictionary mapping strings to `tf.Tensor` objects.
elements = ([{"a": 1, "b": "foo"},
{"a": 2, "b": "bar"},
{"a": 3, "b": "baz"}])
dataset = tf.data.Dataset.from_generator(
lambda: elements, {"a": tf.int32, "b": tf.string})
# `map_func` takes a single argument of type `dict` with the same keys
# as the elements.
result = dataset.map(lambda d: str(d["a"]) + d["b"])
The value or values returned by map_func determine the structure of each element in the returned dataset.
dataset = tf.data.Dataset.range(3)
# `map_func` returns two `tf.Tensor` objects.
def g(x):
return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"])
result = dataset.map(g)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None))
# Python primitives, lists, and NumPy arrays are implicitly converted to
# `tf.Tensor`.
def h(x):
return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64)
result = dataset.map(h)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None))
# `map_func` can return nested structures.
def i(x):
return (37.0, [42, 16]), "foo"
result = dataset.map(i)
result.element_spec
((TensorSpec(shape=(), dtype=tf.float32, name=None),
TensorSpec(shape=(2,), dtype=tf.int32, name=None)),
TensorSpec(shape=(), dtype=tf.string, name=None))
map_func can accept as arguments and return any type of dataset element. Note that irrespective of the context in which map_func is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options: 1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code. 2) Use tf.py_function, which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
# transform a string tensor to upper case string using a Python function
def upper_case_fn(t: tf.Tensor):
return t.numpy().decode('utf-8').upper()
d = d.map(lambda x: tf.py_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
3) Use tf.numpy_function, which also allows you to write arbitrary Python code. Note that tf.py_function accepts tf.Tensor whereas tf.numpy_function accepts numpy arrays and returns only numpy arrays. For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
def upper_case_fn(t: np.ndarray):
return t.decode('utf-8').upper()
d = d.map(lambda x: tf.numpy_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
Note that the use of tf.numpy_function and tf.py_function in general precludes the possibility of executing user-defined transformations in parallel (because of Python GIL). Performance can often be improved by setting num_parallel_calls so that map will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set deterministic=False.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1,
num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to another dataset element.
num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. options View source
options()
Returns the options for this dataset and its inputs.
Returns A tf.data.Options object representing the dataset options.
padded_batch View source
padded_batch(
batch_size, padded_shapes=None, padding_values=None, drop_remainder=False
)
Combines consecutive elements of this dataset into padded batches. This transformation combines multiple consecutive elements of the input dataset into a single element. Like tf.data.Dataset.batch, the components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced. Unlike tf.data.Dataset.batch, the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in padded_shapes. The padded_shapes argument determines the resulting shape for each dimension of each component in an output element: If the dimension is a constant, the component will be padded out to that length in that dimension. If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension.
A = (tf.data.Dataset
.range(1, 5, output_type=tf.int32)
.map(lambda x: tf.fill([x], x)))
# Pad to the smallest per-batch size that fits all elements.
B = A.padded_batch(2)
for element in B.as_numpy_iterator():
print(element)
[[1 0]
[2 2]]
[[3 3 3 0]
[4 4 4 4]]
# Pad to a fixed size.
C = A.padded_batch(2, padded_shapes=5)
for element in C.as_numpy_iterator():
print(element)
[[1 0 0 0 0]
[2 2 0 0 0]]
[[3 3 3 0 0]
[4 4 4 4 0]]
# Pad with a custom value.
D = A.padded_batch(2, padded_shapes=5, padding_values=-1)
for element in D.as_numpy_iterator():
print(element)
[[ 1 -1 -1 -1 -1]
[ 2 2 -1 -1 -1]]
[[ 3 3 3 -1 -1]
[ 4 4 4 4 -1]]
# Components of nested elements can be padded independently.
elements = [([1, 2, 3], [10]),
([4, 5], [11, 12])]
dataset = tf.data.Dataset.from_generator(
lambda: iter(elements), (tf.int32, tf.int32))
# Pad the first component of the tuple to length 4, and the second
# component to the smallest size that fits.
dataset = dataset.padded_batch(2,
padded_shapes=([4], [None]),
padding_values=(-1, 100))
list(dataset.as_numpy_iterator())
[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32),
array([[ 10, 100], [ 11, 12]], dtype=int32))]
# Pad with a single value and multiple components.
E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1)
for element in E.as_numpy_iterator():
print(element)
(array([[ 1, -1],
[ 2, 2]], dtype=int32), array([[ 1, -1],
[ 2, 2]], dtype=int32))
(array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32))
See also tf.data.experimental.dense_to_sparse_batch, which combines elements that may have different shapes into a tf.sparse.SparseTensor.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
padded_shapes (Optional.) A nested structure of tf.TensorShape or tf.int64 vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. padded_shapes must be set if any component has an unknown rank.
padding_values (Optional.) A nested structure of scalar-shaped tf.Tensor, representing the padding values to use for the respective components. None represents that the nested structure should be padded with default values. Defaults are 0 for numeric types and the empty string for string types. The padding_values should have the same structure as the input dataset. If padding_values is a single element and the input dataset has multiple components, then the same padding_values will be used to pad every component of the dataset. If padding_values is a scalar, then its value will be broadcasted to match the shape of each component.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset.
Raises
ValueError If a component has an unknown rank, and the padded_shapes argument is not set. prefetch View source
prefetch(
buffer_size
)
Creates a Dataset that prefetches elements from this dataset. Most dataset input pipelines should end with a call to prefetch. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements.
Note: Like other Dataset methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. examples.prefetch(2) will prefetch two elements (2 examples), while examples.batch(20).prefetch(2) will prefetch 2 elements (2 batches, of 20 examples each).
dataset = tf.data.Dataset.range(3)
dataset = dataset.prefetch(2)
list(dataset.as_numpy_iterator())
[0, 1, 2]
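As a hedged illustration of the note above, the sketch below buffers two batches (40 examples in total); tf.data.AUTOTUNE can instead be passed to let the runtime tune the buffer size:
examples = tf.data.Dataset.range(100)
batches = examples.batch(20)   # elements are now batches of 20 examples
batches = batches.prefetch(2)  # buffers 2 elements, i.e. 2 batches (40 examples)
# batches = batches.prefetch(tf.data.AUTOTUNE)  # let tf.data choose the buffer size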
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the maximum number of elements that will be buffered when prefetching.
Returns
Dataset A Dataset. range View source
@staticmethod
range(
*args, **kwargs
)
Creates a Dataset of a step-separated range of values.
list(Dataset.range(5).as_numpy_iterator())
[0, 1, 2, 3, 4]
list(Dataset.range(2, 5).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2).as_numpy_iterator())
[1, 3]
list(Dataset.range(1, 5, -2).as_numpy_iterator())
[]
list(Dataset.range(5, 1).as_numpy_iterator())
[]
list(Dataset.range(5, 1, -2).as_numpy_iterator())
[5, 3]
list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator())
[1.0, 3.0]
Args
*args follows the same semantics as python's xrange. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2].
**kwargs output_type: The dtype of the elements of the resulting dataset. (Optional, default: tf.int64).
Returns
Dataset A RangeDataset.
Raises
ValueError if len(args) == 0. reduce View source
reduce(
initial_state, reduce_func
)
Reduces the input dataset to a single element. The transformation calls reduce_func successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The initial_state argument is used for the initial state and the final state is returned as the result.
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy()
5
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy()
10
Args
initial_state An element representing the initial state of the transformation.
reduce_func A function that maps (old_state, input_element) to new_state. It must take two arguments and return a new element. The structure of new_state must match the structure of initial_state; see the sketch below for a structured state.
Returns A dataset element corresponding to the final state of the transformation.
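The state may also be a nested structure. A minimal sketch, assuming a hypothetical (sum, count) tuple state used to compute a mean:
dataset = tf.data.Dataset.range(1, 6)  # ==> [ 1, 2, 3, 4, 5 ]
total, count = dataset.reduce(
    (np.int64(0), np.int64(0)),
    lambda state, x: (state[0] + x, state[1] + 1))
print(total.numpy() / count.numpy())
3.0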
repeat View source
repeat(
count=None
)
Repeats this dataset so each original value is seen count times.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.repeat(3)
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 3, 1, 2, 3]
Note: If this dataset is a function of global state (e.g. a random number generator), then different repetitions may produce different elements.
Args
count (Optional.) A tf.int64 scalar tf.Tensor, representing the number of times the dataset should be repeated. The default behavior (if count is None or -1) is for the dataset to be repeated indefinitely.
Returns
Dataset A Dataset. shard View source
shard(
num_shards, index
)
Creates a Dataset that includes only 1/num_shards of this dataset. shard is deterministic. The Dataset produced by A.shard(n, i) will contain all elements of A whose index mod n = i.
A = tf.data.Dataset.range(10)
B = A.shard(num_shards=3, index=0)
list(B.as_numpy_iterator())
[0, 3, 6, 9]
C = A.shard(num_shards=3, index=1)
list(C.as_numpy_iterator())
[1, 4, 7]
D = A.shard(num_shards=3, index=2)
list(D.as_numpy_iterator())
[2, 5, 8]
This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset. When reading a single input file, you can shard elements as follows:
d = tf.data.TFRecordDataset(input_file)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
Important caveats: Be sure to shard before you use any randomizing operator (such as shuffle). Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline:
d = Dataset.list_files(pattern)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.interleave(tf.data.TFRecordDataset,
cycle_length=num_readers, block_length=1)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
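As a hedged, runnable sketch of the strategy above, using an in-memory range in place of file-based input (any shuffling would come after the shard):
A = tf.data.Dataset.range(8)
A = A.shard(num_shards=2, index=0)  # this worker keeps elements 0, 2, 4, 6
A = A.map(lambda x: x * 2)          # stand-in for parser_fn
list(A.as_numpy_iterator())
[0, 4, 8, 12]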
Args
num_shards A tf.int64 scalar tf.Tensor, representing the number of shards operating in parallel.
index A tf.int64 scalar tf.Tensor, representing the worker index.
Returns
Dataset A Dataset.
Raises
InvalidArgumentError if num_shards or index are illegal values.
Note: error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. providing a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.)
shuffle View source
shuffle(
buffer_size, seed=None, reshuffle_each_iteration=None
)
Randomly shuffles the elements of this dataset. This dataset fills a buffer with buffer_size elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but buffer_size is set to 1,000, then shuffle will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer. reshuffle_each_iteration controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the repeat transformation:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 0, 2]
In TF 2.0, tf.data.Dataset objects are Python iterables which makes it possible to also create epochs through Python iteration:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the number of elements from this dataset from which the new dataset will sample.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
reshuffle_each_iteration (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to True.)
Returns
Dataset A Dataset. skip View source
skip(
count
)
Creates a Dataset that skips count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.skip(7)
list(dataset.as_numpy_iterator())
[7, 8, 9]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be skipped to form the new dataset. If count is greater than the size of this dataset, the new dataset will contain no elements. If count is -1, skips the entire dataset.
Returns
Dataset A Dataset. take View source
take(
count
)
Creates a Dataset with at most count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.take(3)
list(dataset.as_numpy_iterator())
[0, 1, 2]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be taken to form the new dataset. If count is -1, or if count is greater than the size of this dataset, the new dataset will contain all elements of this dataset.
Returns
Dataset A Dataset. unbatch View source
unbatch()
Splits elements of a dataset into multiple elements. For example, if elements of the dataset are shaped [B, a0, a1, ...], where B may vary for each input element, then for each element in the dataset, the unbatched dataset will contain B consecutive elements of shape [a0, a1, ...].
elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ]
dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64)
dataset = dataset.unbatch()
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 1, 2, 3, 4]
Note: unbatch requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of unbatch.
Returns A Dataset.
window View source
window(
size, shift=None, stride=1, drop_remainder=False
)
Combines (nests of) input elements into a dataset of (nests of) windows. A "window" is a finite dataset of flat elements of size size (or possibly fewer if there are not enough input elements to fill the window and drop_remainder evaluates to False). The shift argument determines the number of input elements by which the window moves on each iteration. If windows and elements are both numbered starting at 0, the first element in window k will be element k * shift of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset. The stride argument determines the stride of the input elements, and the shift argument determines the shift of the window. For example:
dataset = tf.data.Dataset.range(7).window(2)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1]
[2, 3]
[4, 5]
[6]
dataset = tf.data.Dataset.range(7).window(3, 2, 1, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1, 2]
[2, 3, 4]
[4, 5, 6]
dataset = tf.data.Dataset.range(7).window(3, 1, 2, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 2, 4]
[1, 3, 5]
[2, 4, 6]
Note that when the window transformation is applied to a dataset of nested elements, it produces a dataset of nested windows.
nested = ([1, 2, 3, 4], [5, 6, 7, 8])
dataset = tf.data.Dataset.from_tensor_slices(nested).window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print(tuple(to_numpy(component) for component in window))
([1, 2], [5, 6])
([3, 4], [7, 8])
dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]})
dataset = dataset.window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print({'a': to_numpy(window['a'])})
{'a': [1, 2]}
{'a': [3, 4]}
Args
size A tf.int64 scalar tf.Tensor, representing the number of elements of the input dataset to combine into a window. Must be positive.
shift (Optional.) A tf.int64 scalar tf.Tensor, representing the number of input elements by which the window moves in each iteration. Defaults to size. Must be positive.
stride (Optional.) A tf.int64 scalar tf.Tensor, representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element".
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last windows should be dropped if their size is smaller than size.
Returns
Dataset A Dataset of (nests of) windows -- finite datasets of flat elements created from the (nests of) input elements. with_options View source
with_options(
options
)
Returns a new tf.data.Dataset with the given options set. The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values.
ds = tf.data.Dataset.range(5)
ds = ds.interleave(lambda x: tf.data.Dataset.range(5),
cycle_length=3,
num_parallel_calls=3)
options = tf.data.Options()
# This will make the interleave order non-deterministic.
options.experimental_deterministic = False
ds = ds.with_options(options)
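A hedged sketch of setting options more than once; assuming the two option objects below do not conflict, they are merged rather than overwritten:
ds = tf.data.Dataset.range(5)
options1 = tf.data.Options()
options1.experimental_deterministic = False
options2 = tf.data.Options()
options2.experimental_slack = True
ds = ds.with_options(options1).with_options(options2)  # non-conflicting options are merged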
Args
options A tf.data.Options that identifies the options to use.
Returns
Dataset A Dataset with the given options.
Raises
ValueError when an option is set more than once to a non-default value. zip View source
@staticmethod
zip(
datasets
)
Creates a Dataset by zipping together the given datasets. This method has similar semantics to the built-in zip() function in Python, with the main difference being that the datasets argument can be an arbitrary nested structure of Dataset objects.
# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ]
ds = tf.data.Dataset.zip((a, b))
list(ds.as_numpy_iterator())
[(1, 4), (2, 5), (3, 6)]
ds = tf.data.Dataset.zip((b, a))
list(ds.as_numpy_iterator())
[(4, 1), (5, 2), (6, 3)]
# The `datasets` argument may contain an arbitrary number of datasets.
c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8],
# [9, 10],
# [11, 12] ]
ds = tf.data.Dataset.zip((a, b, c))
for element in ds.as_numpy_iterator():
print(element)
(1, 4, array([7, 8]))
(2, 5, array([ 9, 10]))
(3, 6, array([11, 12]))
# The number of elements in the resulting dataset is the same as
# the size of the smallest dataset in `datasets`.
d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ]
ds = tf.data.Dataset.zip((a, d))
list(ds.as_numpy_iterator())
[(1, 13), (2, 14)]
Args
datasets A nested structure of datasets.
Returns
Dataset A Dataset. __bool__ View source
__bool__()
__iter__ View source
__iter__()
Creates an iterator for elements of this dataset. The returned iterator implements the Python Iterator protocol.
Returns A tf.data.Iterator for the elements of this dataset.
Raises
RuntimeError If not inside of tf.function and not executing eagerly. __len__ View source
__len__()
Returns the length of the dataset if it is known and finite. This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use tf.data.Dataset.cardinality instead.
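For example, when executing eagerly:
dataset = tf.data.Dataset.range(4)
len(dataset)
4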
Returns An integer representing the length of the dataset.
Raises
RuntimeError If the dataset length is unknown or infinite, or if eager execution is not enabled. __nonzero__ View source
__nonzero__() | tensorflow.data.fixedlengthrecorddataset |
tf.data.Iterator View source on GitHub Represents an iterator of a tf.data.Dataset. tf.data.Iterator is the primary mechanism for enumerating elements of a tf.data.Dataset. It supports the Python Iterator protocol, which means it can be iterated over using a for-loop:
dataset = tf.data.Dataset.range(2)
for element in dataset:
print(element)
tf.Tensor(0, shape=(), dtype=int64)
tf.Tensor(1, shape=(), dtype=int64)
or by fetching individual elements explicitly via get_next():
dataset = tf.data.Dataset.range(2)
iterator = iter(dataset)
print(iterator.get_next())
tf.Tensor(0, shape=(), dtype=int64)
print(iterator.get_next())
tf.Tensor(1, shape=(), dtype=int64)
In addition, non-raising iteration is supported via get_next_as_optional(), which returns the next element (if available) wrapped in a tf.experimental.Optional.
dataset = tf.data.Dataset.from_tensors(42)
iterator = iter(dataset)
optional = iterator.get_next_as_optional()
print(optional.has_value())
tf.Tensor(True, shape=(), dtype=bool)
optional = iterator.get_next_as_optional()
print(optional.has_value())
tf.Tensor(False, shape=(), dtype=bool)
Attributes
element_spec The type specification of an element of this iterator.
dataset = tf.data.Dataset.from_tensors(42)
iterator = iter(dataset)
iterator.element_spec
tf.TensorSpec(shape=(), dtype=tf.int32, name=None)
Methods get_next View source
@abc.abstractmethod
get_next()
Returns a nested structure of tf.Tensors containing the next element.
dataset = tf.data.Dataset.from_tensors(42)
iterator = iter(dataset)
print(iterator.get_next())
tf.Tensor(42, shape=(), dtype=int32)
Returns A nested structure of tf.Tensor objects.
Raises tf.errors.OutOfRangeError: If the end of the iterator has been reached.
get_next_as_optional View source
@abc.abstractmethod
get_next_as_optional()
Returns a tf.experimental.Optional which contains the next element. If the iterator has reached the end of the sequence, the returned tf.experimental.Optional will have no value.
dataset = tf.data.Dataset.from_tensors(42)
iterator = iter(dataset)
optional = iterator.get_next_as_optional()
print(optional.has_value())
tf.Tensor(True, shape=(), dtype=bool)
print(optional.get_value())
tf.Tensor(42, shape=(), dtype=int32)
optional = iterator.get_next_as_optional()
print(optional.has_value())
tf.Tensor(False, shape=(), dtype=bool)
Returns A tf.experimental.Optional object representing the next element.
__iter__
__iter__() | tensorflow.data.iterator |
tf.data.IteratorSpec Type specification for tf.data.Iterator. Inherits From: TypeSpec
tf.data.IteratorSpec(
element_spec
)
For instance, tf.data.IteratorSpec can be used to define a tf.function that takes tf.data.Iterator as an input argument:
@tf.function(input_signature=[tf.data.IteratorSpec(
tf.TensorSpec(shape=(), dtype=tf.int32, name=None))])
def square(iterator):
x = iterator.get_next()
return x * x
dataset = tf.data.Dataset.from_tensors(5)
iterator = iter(dataset)
print(square(iterator))
tf.Tensor(25, shape=(), dtype=int32)
Attributes
element_spec A nested structure of TypeSpec objects that represents the type specification of the iterator elements.
value_type The Python type for values that are compatible with this TypeSpec. In particular, all values that are compatible with this TypeSpec must be an instance of this type.
Methods from_value View source
@staticmethod
from_value(
value
)
is_compatible_with View source
is_compatible_with(
spec_or_value
)
Returns true if spec_or_value is compatible with this TypeSpec. most_specific_compatible_type View source
most_specific_compatible_type(
other
)
Returns the most specific TypeSpec compatible with self and other.
Args
other A TypeSpec.
Raises
ValueError If there is no TypeSpec that is compatible with both self and other. __eq__ View source
__eq__(
other
)
Return self==value. __ne__ View source
__ne__(
other
)
Return self!=value. | tensorflow.data.iteratorspec |
tf.data.Options View source on GitHub Represents options for tf.data.Dataset. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.data.Options
tf.data.Options()
A tf.data.Options object can be, for instance, used to control which static optimizations to apply to the input pipeline graph or whether to use performance modeling to dynamically tune the parallelism of operations such as tf.data.Dataset.map or tf.data.Dataset.interleave. The options are set for the entire dataset and are carried over to datasets created through tf.data transformations. The options can be set either by mutating the object returned by tf.data.Dataset.options() or by constructing an Options object and using the tf.data.Dataset.with_options(options) transformation, which returns a dataset with the options set.
dataset = tf.data.Dataset.range(42)
dataset.options().experimental_deterministic = False
print(dataset.options().experimental_deterministic)
False
dataset = tf.data.Dataset.range(42)
options = tf.data.Options()
options.experimental_deterministic = False
dataset = dataset.with_options(options)
print(dataset.options().experimental_deterministic)
False
Note: A known limitation of the tf.data.Options implementation is that the options are not preserved across tf.function boundaries. In particular, to set options for a dataset that is iterated within a tf.function, the options need to be set within the same tf.function.
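A minimal sketch of this pattern, assuming a hypothetical train_step function that builds, configures, and iterates the dataset within the same tf.function:
@tf.function
def train_step():
  ds = tf.data.Dataset.range(5)
  options = tf.data.Options()
  options.experimental_deterministic = False
  ds = ds.with_options(options)  # options set inside the same tf.function
  total = tf.constant(0, dtype=tf.int64)
  for x in ds:
    total += x
  return total
print(train_step())
tf.Tensor(10, shape=(), dtype=int64)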
Attributes
experimental_deterministic Whether the outputs need to be produced in deterministic order. If None, defaults to True.
experimental_distribute The distribution strategy options associated with the dataset. See tf.data.experimental.DistributeOptions for more details.
experimental_external_state_policy This option can be used to override the default policy for how to handle external state when serializing a dataset or checkpointing its iterator. There are three settings available: IGNORE (completely ignore any state), WARN (warn the user that some state might be thrown away), and FAIL (fail if any state is being captured).
experimental_optimization The optimization options associated with the dataset. See tf.data.experimental.OptimizationOptions for more details.
experimental_slack Whether to introduce 'slack' in the last prefetch of the input pipeline, if it exists. This may reduce CPU contention with accelerator host-side activity at the start of a step. The slack frequency is determined by the number of devices attached to this input pipeline. If None, defaults to False.
experimental_stats The statistics options associated with the dataset. See tf.data.experimental.StatsOptions for more details.
experimental_threading The threading options associated with the dataset. See tf.data.experimental.ThreadingOptions for more details. Methods merge View source
merge(
options
)
Merges itself with the given tf.data.Options. If this object and the options to merge set an option differently, a warning is generated and this object's value is updated with the options object's value.
Args
options a tf.data.Options to merge with
Returns New tf.data.Options object which is the result of merging self with the input tf.data.Options.
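A hedged sketch of merging two non-conflicting option sets:
options1 = tf.data.Options()
options1.experimental_deterministic = False
options2 = tf.data.Options()
options2.experimental_slack = True
merged = options1.merge(options2)
print(merged.experimental_deterministic, merged.experimental_slack)
False True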
__eq__ View source
__eq__(
other
)
Return self==value. __ne__ View source
__ne__(
other
)
Return self!=value. | tensorflow.data.options |
tf.data.TextLineDataset View source on GitHub A Dataset comprising lines from one or more text files. Inherits From: Dataset
tf.data.TextLineDataset(
filenames, compression_type=None, buffer_size=None, num_parallel_reads=None
)
Args
filenames A tf.string tensor or tf.data.Dataset containing one or more filenames.
compression_type (Optional.) A tf.string scalar evaluating to one of "" (no compression), "ZLIB", or "GZIP".
buffer_size (Optional.) A tf.int64 scalar denoting the number of bytes to buffer. A value of 0 results in the default buffering values chosen based on the compression type.
num_parallel_reads (Optional.) A tf.int64 scalar representing the number of files to read in parallel. If greater than one, the records of files read in parallel are outputted in an interleaved order. If your input pipeline is I/O bottlenecked, consider setting this parameter to a value greater than one to parallelize the I/O. If None, files will be read sequentially.
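A hedged usage sketch for the constructor above; the file paths are hypothetical and the output depends on the files' contents:
dataset = tf.data.TextLineDataset(["/tmp/file1.txt", "/tmp/file2.txt"])
dataset = dataset.filter(lambda line: tf.strings.length(line) > 0)  # drop empty lines
for line in dataset.take(2):  # doctest: +SKIP
  print(line)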
Attributes
element_spec The type specification of an element of this dataset.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
Methods apply View source
apply(
transformation_func
)
Applies a transformation function to this dataset. apply enables chaining of custom Dataset transformations, which are represented as functions that take one Dataset argument and return a transformed Dataset.
dataset = tf.data.Dataset.range(100)
def dataset_fn(ds):
return ds.filter(lambda x: x < 5)
dataset = dataset.apply(dataset_fn)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
Args
transformation_func A function that takes one Dataset argument and returns a Dataset.
Returns
Dataset The Dataset returned by applying transformation_func to this dataset. as_numpy_iterator View source
as_numpy_iterator()
Returns an iterator which converts all elements of the dataset to numpy. Use as_numpy_iterator to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using as_numpy_iterator.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)
This method requires that you are running in eager mode and the dataset's element_spec contains only TensorSpec components.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset.as_numpy_iterator():
print(element)
1
2
3
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(list(dataset.as_numpy_iterator()))
[1, 2, 3]
as_numpy_iterator() will preserve the nested structure of dataset elements.
dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]),
'b': [5, 6]})
list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5},
{'a': (2, 4), 'b': 6}]
True
Returns An iterable over the elements of the dataset, with their tensors converted to numpy arrays.
Raises
TypeError if an element contains a non-Tensor value.
RuntimeError if eager execution is not enabled. batch View source
batch(
batch_size, drop_remainder=False
)
Combines consecutive elements of this dataset into batches.
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])]
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3, drop_remainder=True)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5])]
The components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset. cache View source
cache(
filename=''
)
Caches the elements in this dataset. The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data.
Note: For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data.
dataset = tf.data.Dataset.range(5)
dataset = dataset.map(lambda x: x**2)
dataset = dataset.cache()
# The first time reading through the data will generate the data using
# `range` and `map`.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
# Subsequent iterations read from the cache.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to .cache() will have no effect until the cache file is removed or the filename is changed.
dataset = tf.data.Dataset.range(5)
dataset = dataset.cache("/path/to/file") # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
dataset = tf.data.Dataset.range(10)
dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
Note: cache will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call shuffle after calling cache.
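A hedged sketch of the recommended ordering (expensive work and cache first, then shuffle), so each epoch reads cached data in a fresh order:
dataset = tf.data.Dataset.range(5)
dataset = dataset.map(lambda x: x + 10).cache()
dataset = dataset.shuffle(5, reshuffle_each_iteration=True)
list(dataset.as_numpy_iterator())  # doctest: +SKIP
# ==> some permutation of [10, 11, 12, 13, 14]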
Args
filename A tf.string scalar tf.Tensor, representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory.
Returns
Dataset A Dataset. cardinality View source
cardinality()
Returns the cardinality of the dataset, if known. cardinality may return tf.data.INFINITE_CARDINALITY if the dataset contains an infinite number of elements or tf.data.UNKNOWN_CARDINALITY if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file).
dataset = tf.data.Dataset.range(42)
print(dataset.cardinality().numpy())
42
dataset = dataset.repeat()
cardinality = dataset.cardinality()
print((cardinality == tf.data.INFINITE_CARDINALITY).numpy())
True
dataset = dataset.filter(lambda x: True)
cardinality = dataset.cardinality()
print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy())
True
Returns A scalar tf.int64 Tensor representing the cardinality of the dataset. If the cardinality is infinite or unknown, cardinality returns the named constants tf.data.INFINITE_CARDINALITY and tf.data.UNKNOWN_CARDINALITY respectively.
concatenate View source
concatenate(
dataset
)
Creates a Dataset by concatenating the given dataset with this dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ]
ds = a.concatenate(b)
list(ds.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7]
# The input dataset and dataset to be concatenated should have the same
# nested structures and output types.
c = tf.data.Dataset.zip((a, b))
a.concatenate(c)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and (tf.int64, tf.int64)
d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
a.concatenate(d)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and <dtype: 'string'>
Args
dataset Dataset to be concatenated.
Returns
Dataset A Dataset. enumerate View source
enumerate(
start=0
)
Enumerates the elements of this dataset. It is similar to python's enumerate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.enumerate(start=5)
for element in dataset.as_numpy_iterator():
print(element)
(5, 1)
(6, 2)
(7, 3)
# The nested structure of the input dataset determines the structure of
# elements in the resulting dataset.
dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)])
dataset = dataset.enumerate()
for element in dataset.as_numpy_iterator():
print(element)
(0, array([7, 8], dtype=int32))
(1, array([ 9, 10], dtype=int32))
Args
start A tf.int64 scalar tf.Tensor, representing the start value for enumeration.
Returns
Dataset A Dataset. filter View source
filter(
predicate
)
Filters this dataset according to predicate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.filter(lambda x: x < 3)
list(dataset.as_numpy_iterator())
[1, 2]
# `tf.math.equal(x, y)` is required for equality comparison
def filter_fn(x):
return tf.math.equal(x, 1)
dataset = dataset.filter(filter_fn)
list(dataset.as_numpy_iterator())
[1]
Args
predicate A function mapping a dataset element to a boolean.
Returns
Dataset The Dataset containing the elements of this dataset for which predicate is True. flat_map View source
flat_map(
map_func
)
Maps map_func across this dataset and flattens the result. Use flat_map if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements:
dataset = tf.data.Dataset.from_tensor_slices(
[[1, 2, 3], [4, 5, 6], [7, 8, 9]])
dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x))
list(dataset.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7, 8, 9]
tf.data.Dataset.interleave() is a generalization of flat_map, since flat_map produces the same output as tf.data.Dataset.interleave(cycle_length=1)
Args
map_func A function mapping a dataset element to a dataset.
Returns
Dataset A Dataset. from_generator View source
@staticmethod
from_generator(
generator, output_types=None, output_shapes=None, args=None,
output_signature=None
)
Creates a Dataset whose elements are generated by generator. (deprecated arguments) Warning: SOME ARGUMENTS ARE DEPRECATED: (output_shapes, output_types). They will be removed in a future version. Instructions for updating: Use output_signature instead. The generator argument must be a callable object that returns an object that supports the iter() protocol (e.g. a generator function). The elements generated by generator must be compatible with either the given output_signature argument or with the given output_types and (optionally) output_shapes arguments, whichever was specified. The recommended way to call from_generator is to use the output_signature argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by tf.TypeSpec objects from the output_signature argument:
def gen():
ragged_tensor = tf.ragged.constant([[1, 2], [3]])
yield 42, ragged_tensor
dataset = tf.data.Dataset.from_generator(
gen,
output_signature=(
tf.TensorSpec(shape=(), dtype=tf.int32),
tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32)))
list(dataset.take(1))
[(<tf.Tensor: shape=(), dtype=int32, numpy=42>,
<tf.RaggedTensor [[1, 2], [3]]>)]
There is also a deprecated way to call from_generator, using either the output_types argument alone or together with the output_shapes argument. In this case the output of the function will be assumed to consist of tf.Tensor objects with the types defined by output_types and with shapes which are either unknown or defined by output_shapes.
Note: The current implementation of Dataset.from_generator() uses tf.numpy_function and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called Dataset.from_generator(). The body of generator will not be serialized in a GraphDef, and you should not use this method if you need to serialize your model and restore it in a different environment.
Note: If generator depends on mutable global variables or other external state, be aware that the runtime may invoke generator multiple times (in order to support repeating the Dataset) and at any time between the call to Dataset.from_generator() and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in generator before calling Dataset.from_generator().
Args
generator A callable object that returns an object that supports the iter() protocol. If args is not specified, generator must take no arguments; otherwise it must take as many arguments as there are values in args.
output_types (Optional.) A nested structure of tf.DType objects corresponding to each component of an element yielded by generator.
output_shapes (Optional.) A nested structure of tf.TensorShape objects corresponding to each component of an element yielded by generator.
args (Optional.) A tuple of tf.Tensor objects that will be evaluated and passed to generator as NumPy-array arguments.
output_signature (Optional.) A nested structure of tf.TypeSpec objects corresponding to each component of an element yielded by generator.
Returns
Dataset A Dataset. from_tensor_slices View source
@staticmethod
from_tensor_slices(
tensors
)
Creates a Dataset whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions.
# Slicing a 1D tensor produces scalar tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
list(dataset.as_numpy_iterator())
[1, 2, 3]
# Slicing a 2D tensor produces 1D tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
list(dataset.as_numpy_iterator())
[array([1, 2], dtype=int32), array([3, 4], dtype=int32)]
# Slicing a tuple of 1D tensors produces tuple elements containing
# scalar tensors.
dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6]))
list(dataset.as_numpy_iterator())
[(1, 3, 5), (2, 4, 6)]
# Dictionary structure is also preserved.
dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]})
list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3},
{'a': 2, 'b': 4}]
True
# Two tensors can be combined into one Dataset object.
features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor
labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor
dataset = Dataset.from_tensor_slices((features, labels))
# Both the features and the labels tensors can be converted
# to a Dataset object separately and combined after.
features_dataset = Dataset.from_tensor_slices(features)
labels_dataset = Dataset.from_tensor_slices(labels)
dataset = Dataset.zip((features_dataset, labels_dataset))
# A batched feature and label set can be converted to a Dataset
# in similar fashion.
batched_features = tf.constant([[[1, 3], [2, 3]],
[[2, 1], [1, 2]],
[[3, 3], [3, 2]]], shape=(3, 2, 2))
batched_labels = tf.constant([['A', 'A'],
['B', 'B'],
['A', 'B']], shape=(3, 2, 1))
dataset = Dataset.from_tensor_slices((batched_features, batched_labels))
for element in dataset.as_numpy_iterator():
print(element)
(array([[1, 3],
[2, 3]], dtype=int32), array([[b'A'],
[b'A']], dtype=object))
(array([[2, 1],
[1, 2]], dtype=int32), array([[b'B'],
[b'B']], dtype=object))
(array([[3, 3],
[3, 2]], dtype=int32), array([[b'A'],
[b'B']], dtype=object))
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
Args
tensors A dataset element, with each component having the same size in the first dimension.
Returns
Dataset A Dataset. from_tensors View source
@staticmethod
from_tensors(
tensors
)
Creates a Dataset with a single element, comprising the given tensors. from_tensors produces a dataset containing only a single element. To slice the input tensor into multiple elements, use from_tensor_slices instead.
dataset = tf.data.Dataset.from_tensors([1, 2, 3])
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32)]
dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A'))
list(dataset.as_numpy_iterator())
[(array([1, 2, 3], dtype=int32), b'A')]
# You can use `from_tensors` to produce a dataset which repeats
# the same example many times.
example = tf.constant([1,2,3])
dataset = tf.data.Dataset.from_tensors(example).repeat(2)
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)]
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
Args
tensors A dataset element.
Returns
Dataset A Dataset. interleave View source
interleave(
map_func, cycle_length=None, block_length=None, num_parallel_calls=None,
deterministic=None
)
Maps map_func across this dataset, and interleaves the results. For example, you can use Dataset.interleave() to process many input files concurrently:
# Preprocess 4 files concurrently, and interleave blocks of 16 records
# from each file.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
def parse_fn(filename):
return tf.data.Dataset.range(10)
dataset = dataset.interleave(lambda x:
tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
cycle_length=4, block_length=16)
The cycle_length and block_length arguments control the order in which elements are produced. cycle_length controls the number of input elements that are processed concurrently. If you set cycle_length to 1, this transformation will handle one input element at a time, and will produce identical results to tf.data.Dataset.flat_map. In general, this transformation will apply map_func to cycle_length input elements, open iterators on the returned Dataset objects, and cycle through them producing block_length consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator. For example:
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
# NOTE: New lines indicate "block" boundaries.
dataset = dataset.interleave(
lambda x: Dataset.from_tensors(x).repeat(6),
cycle_length=2, block_length=4)
list(dataset.as_numpy_iterator())
[1, 1, 1, 1,
2, 2, 2, 2,
1, 1,
2, 2,
3, 3, 3, 3,
4, 4, 4, 4,
3, 3,
4, 4,
5, 5, 5, 5,
5, 5]
Note: The order of elements yielded by this transformation is deterministic, as long as map_func is a pure function and deterministic=True. If map_func contains any stateful operations, the order in which that state is accessed is undefined.
Performance can often be improved by setting num_parallel_calls so that interleave will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set deterministic=False.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x),
cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to a dataset.
cycle_length (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If num_parallel_calls is set to tf.data.AUTOTUNE, the cycle_length argument identifies the maximum degree of parallelism.
block_length (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1.
num_parallel_calls (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. list_files View source
@staticmethod
list_files(
file_pattern, shuffle=None, seed=None
)
A dataset of all files matching one or more glob patterns. The file_pattern argument should be a small number of glob patterns. If your filenames have already been globbed, use Dataset.from_tensor_slices(filenames) instead, as re-globbing every filename with list_files may result in poor performance with remote storage systems.
Note: The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a seed or shuffle=False to get results in a deterministic order.
Example: If we had the following files on our filesystem:
/path/to/dir/a.txt
/path/to/dir/b.py
/path/to/dir/c.py
If we pass "/path/to/dir/*.py" as the directory, the dataset would produce:
/path/to/dir/b.py
/path/to/dir/c.py
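Expressed as a hedged code sketch using the example paths above (shuffle=False gives a deterministic order):
dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)  # doctest: +SKIP
list(dataset.as_numpy_iterator())  # doctest: +SKIP
[b'/path/to/dir/b.py', b'/path/to/dir/c.py']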
Args
file_pattern A string, a list of strings, or a tf.Tensor of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched.
shuffle (Optional.) If True, the file names will be shuffled randomly. Defaults to True.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
Returns
Dataset A Dataset of strings corresponding to file names. map View source
map(
map_func, num_parallel_calls=None, deterministic=None
)
Maps map_func across the elements of this dataset. This transformation applies map_func to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. map_func can be used to change both the values and the structure of a dataset's elements. For example, adding 1 to each element, or projecting a subset of element components.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1)
list(dataset.as_numpy_iterator())
[2, 3, 4, 5, 6]
The input signature of map_func is determined by the structure of each element in this dataset.
dataset = Dataset.range(5)
# `map_func` takes a single argument of type `tf.Tensor` with the same
# shape and dtype.
result = dataset.map(lambda x: x + 1)
# Each element is a tuple containing two `tf.Tensor` objects.
elements = [(1, "foo"), (2, "bar"), (3, "baz")]
dataset = tf.data.Dataset.from_generator(
lambda: elements, (tf.int32, tf.string))
# `map_func` takes two arguments of type `tf.Tensor`. This function
# projects out just the first component.
result = dataset.map(lambda x_int, y_str: x_int)
list(result.as_numpy_iterator())
[1, 2, 3]
# Each element is a dictionary mapping strings to `tf.Tensor` objects.
elements = ([{"a": 1, "b": "foo"},
{"a": 2, "b": "bar"},
{"a": 3, "b": "baz"}])
dataset = tf.data.Dataset.from_generator(
lambda: elements, {"a": tf.int32, "b": tf.string})
# `map_func` takes a single argument of type `dict` with the same keys
# as the elements.
result = dataset.map(lambda d: str(d["a"]) + d["b"])
The value or values returned by map_func determine the structure of each element in the returned dataset.
dataset = tf.data.Dataset.range(3)
# `map_func` returns two `tf.Tensor` objects.
def g(x):
return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"])
result = dataset.map(g)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None))
# Python primitives, lists, and NumPy arrays are implicitly converted to
# `tf.Tensor`.
def h(x):
return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64)
result = dataset.map(h)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None))
# `map_func` can return nested structures.
def i(x):
return (37.0, [42, 16]), "foo"
result = dataset.map(i)
result.element_spec
((TensorSpec(shape=(), dtype=tf.float32, name=None),
TensorSpec(shape=(2,), dtype=tf.int32, name=None)),
TensorSpec(shape=(), dtype=tf.string, name=None))
map_func can accept as arguments and return any type of dataset element. Note that irrespective of the context in which map_func is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options: 1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code. 2) Use tf.py_function, which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
# transform a string tensor to upper case string using a Python function
def upper_case_fn(t: tf.Tensor):
return t.numpy().decode('utf-8').upper()
d = d.map(lambda x: tf.py_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
3) Use tf.numpy_function, which also allows you to write arbitrary Python code. Note that tf.py_function accepts tf.Tensor whereas tf.numpy_function accepts numpy arrays and returns only numpy arrays. For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
def upper_case_fn(t: np.ndarray):
return t.decode('utf-8').upper()
d = d.map(lambda x: tf.numpy_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
Note that the use of tf.numpy_function and tf.py_function in general precludes the possibility of executing user-defined transformations in parallel (because of Python GIL). Performance can often be improved by setting num_parallel_calls so that map will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set deterministic=False.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1,
num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to another dataset element.
num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. options View source
options()
Returns the options for this dataset and its inputs.
Returns A tf.data.Options object representing the dataset options.
padded_batch View source
padded_batch(
batch_size, padded_shapes=None, padding_values=None, drop_remainder=False
)
Combines consecutive elements of this dataset into padded batches. This transformation combines multiple consecutive elements of the input dataset into a single element. Like tf.data.Dataset.batch, the components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced. Unlike tf.data.Dataset.batch, the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in padded_shapes. The padded_shapes argument determines the resulting shape for each dimension of each component in an output element: If the dimension is a constant, the component will be padded out to that length in that dimension. If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension.
A = (tf.data.Dataset
.range(1, 5, output_type=tf.int32)
.map(lambda x: tf.fill([x], x)))
# Pad to the smallest per-batch size that fits all elements.
B = A.padded_batch(2)
for element in B.as_numpy_iterator():
print(element)
[[1 0]
[2 2]]
[[3 3 3 0]
[4 4 4 4]]
# Pad to a fixed size.
C = A.padded_batch(2, padded_shapes=5)
for element in C.as_numpy_iterator():
print(element)
[[1 0 0 0 0]
[2 2 0 0 0]]
[[3 3 3 0 0]
[4 4 4 4 0]]
# Pad with a custom value.
D = A.padded_batch(2, padded_shapes=5, padding_values=-1)
for element in D.as_numpy_iterator():
print(element)
[[ 1 -1 -1 -1 -1]
[ 2 2 -1 -1 -1]]
[[ 3 3 3 -1 -1]
[ 4 4 4 4 -1]]
# Components of nested elements can be padded independently.
elements = [([1, 2, 3], [10]),
([4, 5], [11, 12])]
dataset = tf.data.Dataset.from_generator(
lambda: iter(elements), (tf.int32, tf.int32))
# Pad the first component of the tuple to length 4, and the second
# component to the smallest size that fits.
dataset = dataset.padded_batch(2,
padded_shapes=([4], [None]),
padding_values=(-1, 100))
list(dataset.as_numpy_iterator())
[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32),
array([[ 10, 100], [ 11, 12]], dtype=int32))]
# Pad with a single value and multiple components.
E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1)
for element in E.as_numpy_iterator():
print(element)
(array([[ 1, -1],
[ 2, 2]], dtype=int32), array([[ 1, -1],
[ 2, 2]], dtype=int32))
(array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32))
See also tf.data.experimental.dense_to_sparse_batch, which combines elements that may have different shapes into a tf.sparse.SparseTensor.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
padded_shapes (Optional.) A nested structure of tf.TensorShape or tf.int64 vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. padded_shapes must be set if any component has an unknown rank.
padding_values (Optional.) A nested structure of scalar-shaped tf.Tensor, representing the padding values to use for the respective components. None represents that the nested structure should be padded with default values. Defaults are 0 for numeric types and the empty string for string types. The padding_values should have the same structure as the input dataset. If padding_values is a single element and the input dataset has multiple components, then the same padding_values will be used to pad every component of the dataset. If padding_values is a scalar, then its value will be broadcasted to match the shape of each component.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset.
Raises
ValueError If a component has an unknown rank, and the padded_shapes argument is not set. prefetch View source
prefetch(
buffer_size
)
Creates a Dataset that prefetches elements from this dataset. Most dataset input pipelines should end with a call to prefetch. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements.
Note: Like other Dataset methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. examples.prefetch(2) will prefetch two elements (2 examples), while examples.batch(20).prefetch(2) will prefetch 2 elements (2 batches, of 20 examples each).
dataset = tf.data.Dataset.range(3)
dataset = dataset.prefetch(2)
list(dataset.as_numpy_iterator())
[0, 1, 2]
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the maximum number of elements that will be buffered when prefetching.
Returns
Dataset A Dataset. range View source
@staticmethod
range(
*args, **kwargs
)
Creates a Dataset of a step-separated range of values.
list(Dataset.range(5).as_numpy_iterator())
[0, 1, 2, 3, 4]
list(Dataset.range(2, 5).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2).as_numpy_iterator())
[1, 3]
list(Dataset.range(1, 5, -2).as_numpy_iterator())
[]
list(Dataset.range(5, 1).as_numpy_iterator())
[]
list(Dataset.range(5, 1, -2).as_numpy_iterator())
[5, 3]
list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator())
[1.0, 3.0]
Args
*args follows the same semantics as Python's built-in range. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2].
**kwargs output_type: Its expected dtype. (Optional, default: tf.int64).
Returns
Dataset A RangeDataset.
Raises
ValueError if len(args) == 0. reduce View source
reduce(
initial_state, reduce_func
)
Reduces the input dataset to a single element. The transformation calls reduce_func successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The initial_state argument is used for the initial state and the final state is returned as the result.
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy()
5
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy()
10
Args
initial_state An element representing the initial state of the transformation.
reduce_func A function that maps (old_state, input_element) to new_state. It must take two arguments and return a new element. The structure of new_state must match the structure of initial_state.
Returns A dataset element corresponding to the final state of the transformation.
repeat View source
repeat(
count=None
)
Repeats this dataset so each original value is seen count times.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.repeat(3)
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 3, 1, 2, 3]
Note: If this dataset is a function of global state (e.g. a random number generator), then different repetitions may produce different elements.
Args
count (Optional.) A tf.int64 scalar tf.Tensor, representing the number of times the dataset should be repeated. The default behavior (if count is None or -1) is for the dataset to be repeated indefinitely.
Returns
Dataset A Dataset. shard View source
shard(
num_shards, index
)
Creates a Dataset that includes only 1/num_shards of this dataset. shard is deterministic. The Dataset produced by A.shard(n, i) will contain all elements of A whose index mod n = i.
A = tf.data.Dataset.range(10)
B = A.shard(num_shards=3, index=0)
list(B.as_numpy_iterator())
[0, 3, 6, 9]
C = A.shard(num_shards=3, index=1)
list(C.as_numpy_iterator())
[1, 4, 7]
D = A.shard(num_shards=3, index=2)
list(D.as_numpy_iterator())
[2, 5, 8]
This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset. When reading a single input file, you can shard elements as follows:
d = tf.data.TFRecordDataset(input_file)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
Important caveats: Be sure to shard before you use any randomizing operator (such as shuffle). Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline:
d = Dataset.list_files(pattern)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.interleave(tf.data.TFRecordDataset,
cycle_length=num_readers, block_length=1)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
Args
num_shards A tf.int64 scalar tf.Tensor, representing the number of shards operating in parallel.
index A tf.int64 scalar tf.Tensor, representing the worker index.
Returns
Dataset A Dataset.
Raises
InvalidArgumentError if num_shards or index are illegal values.
Note: error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. passing in a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.)
shuffle View source
shuffle(
buffer_size, seed=None, reshuffle_each_iteration=None
)
Randomly shuffles the elements of this dataset. This dataset fills a buffer with buffer_size elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but buffer_size is set to 1,000, then shuffle will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer. reshuffle_each_iteration controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the repeat transformation:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 0, 2]
In TF 2.0, tf.data.Dataset objects are Python iterables which makes it possible to also create epochs through Python iteration:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the number of elements from this dataset from which the new dataset will sample.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
reshuffle_each_iteration (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to True.)
Returns
Dataset A Dataset. skip View source
skip(
count
)
Creates a Dataset that skips count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.skip(7)
list(dataset.as_numpy_iterator())
[7, 8, 9]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be skipped to form the new dataset. If count is greater than the size of this dataset, the new dataset will contain no elements. If count is -1, skips the entire dataset.
Returns
Dataset A Dataset. take View source
take(
count
)
Creates a Dataset with at most count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.take(3)
list(dataset.as_numpy_iterator())
[0, 1, 2]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be taken to form the new dataset. If count is -1, or if count is greater than the size of this dataset, the new dataset will contain all elements of this dataset.
Returns
Dataset A Dataset. unbatch View source
unbatch()
Splits elements of a dataset into multiple elements. For example, if elements of the dataset are shaped [B, a0, a1, ...], where B may vary for each input element, then for each element in the dataset, the unbatched dataset will contain B consecutive elements of shape [a0, a1, ...].
elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ]
dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64)
dataset = dataset.unbatch()
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 1, 2, 3, 4]
Note: unbatch requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of unbatch.
Returns A Dataset.
window View source
window(
size, shift=None, stride=1, drop_remainder=False
)
Combines (nests of) input elements into a dataset of (nests of) windows. A "window" is a finite dataset of flat elements of size size (or possibly fewer if there are not enough input elements to fill the window and drop_remainder evaluates to False). The shift argument determines the number of input elements by which the window moves on each iteration. If windows and elements are both numbered starting at 0, the first element in window k will be element k * shift of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset. The stride argument determines the stride of the input elements, and the shift argument determines the shift of the window. For example:
dataset = tf.data.Dataset.range(7).window(2)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1]
[2, 3]
[4, 5]
[6]
dataset = tf.data.Dataset.range(7).window(3, 2, 1, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1, 2]
[2, 3, 4]
[4, 5, 6]
dataset = tf.data.Dataset.range(7).window(3, 1, 2, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 2, 4]
[1, 3, 5]
[2, 4, 6]
Note that when the window transformation is applied to a dataset of nested elements, it produces a dataset of nested windows.
nested = ([1, 2, 3, 4], [5, 6, 7, 8])
dataset = tf.data.Dataset.from_tensor_slices(nested).window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print(tuple(to_numpy(component) for component in window))
([1, 2], [5, 6])
([3, 4], [7, 8])
dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]})
dataset = dataset.window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print({'a': to_numpy(window['a'])})
{'a': [1, 2]}
{'a': [3, 4]}
Args
size A tf.int64 scalar tf.Tensor, representing the number of elements of the input dataset to combine into a window. Must be positive.
shift (Optional.) A tf.int64 scalar tf.Tensor, representing the number of input elements by which the window moves in each iteration. Defaults to size. Must be positive.
stride (Optional.) A tf.int64 scalar tf.Tensor, representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element".
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last windows should be dropped if their size is smaller than size.
Returns
Dataset A Dataset of (nests of) windows -- finite datasets of flat elements created from the (nests of) input elements. with_options View source
with_options(
options
)
Returns a new tf.data.Dataset with the given options set. The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values.
ds = tf.data.Dataset.range(5)
ds = ds.interleave(lambda x: tf.data.Dataset.range(5),
cycle_length=3,
num_parallel_calls=3)
options = tf.data.Options()
# This will make the interleave order non-deterministic.
options.experimental_deterministic = False
ds = ds.with_options(options)
Args
options A tf.data.Options that identifies the options to use.
Returns
Dataset A Dataset with the given options.
Raises
ValueError when an option is set more than once to a non-default value. zip View source
@staticmethod
zip(
datasets
)
Creates a Dataset by zipping together the given datasets. This method has similar semantics to the built-in zip() function in Python, with the main difference being that the datasets argument can be an arbitrary nested structure of Dataset objects.
# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ]
ds = tf.data.Dataset.zip((a, b))
list(ds.as_numpy_iterator())
[(1, 4), (2, 5), (3, 6)]
ds = tf.data.Dataset.zip((b, a))
list(ds.as_numpy_iterator())
[(4, 1), (5, 2), (6, 3)]
# The `datasets` argument may contain an arbitrary number of datasets.
c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8],
# [9, 10],
# [11, 12] ]
ds = tf.data.Dataset.zip((a, b, c))
for element in ds.as_numpy_iterator():
print(element)
(1, 4, array([7, 8]))
(2, 5, array([ 9, 10]))
(3, 6, array([11, 12]))
# The number of elements in the resulting dataset is the same as
# the size of the smallest dataset in `datasets`.
d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ]
ds = tf.data.Dataset.zip((a, d))
list(ds.as_numpy_iterator())
[(1, 13), (2, 14)]
Args
datasets A nested structure of datasets.
Returns
Dataset A Dataset. __bool__ View source
__bool__()
__iter__ View source
__iter__()
Creates an iterator for elements of this dataset. The returned iterator implements the Python Iterator protocol.
Returns A tf.data.Iterator for the elements of this dataset.
Raises
RuntimeError If not inside of tf.function and not executing eagerly. __len__ View source
__len__()
Returns the length of the dataset if it is known and finite. This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use tf.data.Dataset.cardinality instead.
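A minimal sketch, assuming eager execution (the default in TF 2.x):
import tensorflow as tf

dataset = tf.data.Dataset.range(10)
print(len(dataset))  # 10
# For repeated (infinite) or filtered (unknown-length) datasets, prefer
# cardinality(), which returns a named constant instead of raising.
print(dataset.repeat().cardinality() == tf.data.INFINITE_CARDINALITY)  # tf.Tensor(True, ...)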
Returns An integer representing the length of the dataset.
Raises
RuntimeError If the dataset length is unknown or infinite, or if eager execution is not enabled. __nonzero__ View source
__nonzero__()
tf.data.TFRecordDataset View source on GitHub A Dataset comprising records from one or more TFRecord files. Inherits From: Dataset
tf.data.TFRecordDataset(
filenames, compression_type=None, buffer_size=None, num_parallel_reads=None
)
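For illustration, a minimal usage sketch; the file names, compression setting, and tuning values below are assumptions, not part of this API:
import tensorflow as tf

# Hypothetical GZIP-compressed TFRecord shards; substitute your own files.
filenames = ["/tmp/data/shard-00000.tfrecord.gz",
             "/tmp/data/shard-00001.tfrecord.gz"]
dataset = tf.data.TFRecordDataset(
    filenames,
    compression_type="GZIP",       # must match how the files were written
    buffer_size=8 * 1024 * 1024,   # 8 MB read buffer; helps with remote storage
    num_parallel_reads=2)          # interleave records from two files at a time
# Each element is a scalar tf.string tensor holding one serialized record,
# typically decoded with tf.io.parse_single_example.
for raw_record in dataset.take(1):
  print(raw_record.dtype)  # tf.string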
Args
filenames A tf.string tensor or tf.data.Dataset containing one or more filenames.
compression_type (Optional.) A tf.string scalar evaluating to one of "" (no compression), "ZLIB", or "GZIP".
buffer_size (Optional.) A tf.int64 scalar representing the number of bytes in the read buffer. If your input pipeline is I/O bottlenecked, consider setting this parameter to a value of 1-100 MBs. If None, a sensible default for both local and remote file systems is used.
num_parallel_reads (Optional.) A tf.int64 scalar representing the number of files to read in parallel. If greater than one, the records of files read in parallel are outputted in an interleaved order. If your input pipeline is I/O bottlenecked, consider setting this parameter to a value greater than one to parallelize the I/O. If None, files will be read sequentially.
Raises
TypeError If any argument does not have the expected type.
ValueError If any argument does not have the expected shape.
Attributes
element_spec The type specification of an element of this dataset.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
Methods apply View source
apply(
transformation_func
)
Applies a transformation function to this dataset. apply enables chaining of custom Dataset transformations, which are represented as functions that take one Dataset argument and return a transformed Dataset.
dataset = tf.data.Dataset.range(100)
def dataset_fn(ds):
return ds.filter(lambda x: x < 5)
dataset = dataset.apply(dataset_fn)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
Args
transformation_func A function that takes one Dataset argument and returns a Dataset.
Returns
Dataset The Dataset returned by applying transformation_func to this dataset. as_numpy_iterator View source
as_numpy_iterator()
Returns an iterator which converts all elements of the dataset to numpy. Use as_numpy_iterator to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using as_numpy_iterator.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)
This method requires that you are running in eager mode and the dataset's element_spec contains only TensorSpec components.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset.as_numpy_iterator():
print(element)
1
2
3
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(list(dataset.as_numpy_iterator()))
[1, 2, 3]
as_numpy_iterator() will preserve the nested structure of dataset elements.
dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]),
'b': [5, 6]})
list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5},
{'a': (2, 4), 'b': 6}]
True
Returns An iterable over the elements of the dataset, with their tensors converted to numpy arrays.
Raises
TypeError if an element contains a non-Tensor value.
RuntimeError if eager execution is not enabled. batch View source
batch(
batch_size, drop_remainder=False
)
Combines consecutive elements of this dataset into batches.
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])]
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3, drop_remainder=True)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5])]
The components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset. cache View source
cache(
filename=''
)
Caches the elements in this dataset. The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data.
Note: For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data.
dataset = tf.data.Dataset.range(5)
dataset = dataset.map(lambda x: x**2)
dataset = dataset.cache()
# The first time reading through the data will generate the data using
# `range` and `map`.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
# Subsequent iterations read from the cache.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to .cache() will have no effect until the cache file is removed or the filename is changed.
dataset = tf.data.Dataset.range(5)
dataset = dataset.cache("/path/to/file") # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
dataset = tf.data.Dataset.range(10)
dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
Note: cache will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call shuffle after calling cache.
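A minimal sketch of the ordering recommendation above (the buffer size is illustrative):
import tensorflow as tf

dataset = tf.data.Dataset.range(10)
# cache() snapshots the elements; shuffle() applied afterwards re-randomizes
# the order on every iteration instead of replaying a fixed cached order.
dataset = dataset.cache().shuffle(buffer_size=10, reshuffle_each_iteration=True)
print(sorted(dataset.as_numpy_iterator()))  # [0, 1, ..., 9]; order varies per epoch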
Args
filename A tf.string scalar tf.Tensor, representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory.
Returns
Dataset A Dataset. cardinality View source
cardinality()
Returns the cardinality of the dataset, if known. cardinality may return tf.data.INFINITE_CARDINALITY if the dataset contains an infinite number of elements or tf.data.UNKNOWN_CARDINALITY if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file).
dataset = tf.data.Dataset.range(42)
print(dataset.cardinality().numpy())
42
dataset = dataset.repeat()
cardinality = dataset.cardinality()
print((cardinality == tf.data.INFINITE_CARDINALITY).numpy())
True
dataset = dataset.filter(lambda x: True)
cardinality = dataset.cardinality()
print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy())
True
Returns A scalar tf.int64 Tensor representing the cardinality of the dataset. If the cardinality is infinite or unknown, cardinality returns the named constants tf.data.INFINITE_CARDINALITY and tf.data.UNKNOWN_CARDINALITY respectively.
concatenate View source
concatenate(
dataset
)
Creates a Dataset by concatenating the given dataset with this dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ]
ds = a.concatenate(b)
list(ds.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7]
# The input dataset and dataset to be concatenated should have the same
# nested structures and output types.
c = tf.data.Dataset.zip((a, b))
a.concatenate(c)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and (tf.int64, tf.int64)
d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
a.concatenate(d)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and <dtype: 'string'>
Args
dataset Dataset to be concatenated.
Returns
Dataset A Dataset. enumerate View source
enumerate(
start=0
)
Enumerates the elements of this dataset. It is similar to python's enumerate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.enumerate(start=5)
for element in dataset.as_numpy_iterator():
print(element)
(5, 1)
(6, 2)
(7, 3)
# The nested structure of the input dataset determines the structure of
# elements in the resulting dataset.
dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)])
dataset = dataset.enumerate()
for element in dataset.as_numpy_iterator():
print(element)
(0, array([7, 8], dtype=int32))
(1, array([ 9, 10], dtype=int32))
Args
start A tf.int64 scalar tf.Tensor, representing the start value for enumeration.
Returns
Dataset A Dataset. filter View source
filter(
predicate
)
Filters this dataset according to predicate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.filter(lambda x: x < 3)
list(dataset.as_numpy_iterator())
[1, 2]
# `tf.math.equal(x, y)` is required for equality comparison
def filter_fn(x):
return tf.math.equal(x, 1)
dataset = dataset.filter(filter_fn)
list(dataset.as_numpy_iterator())
[1]
Args
predicate A function mapping a dataset element to a boolean.
Returns
Dataset The Dataset containing the elements of this dataset for which predicate is True. flat_map View source
flat_map(
map_func
)
Maps map_func across this dataset and flattens the result. Use flat_map if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements:
dataset = tf.data.Dataset.from_tensor_slices(
[[1, 2, 3], [4, 5, 6], [7, 8, 9]])
dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x))
list(dataset.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7, 8, 9]
tf.data.Dataset.interleave() is a generalization of flat_map, since flat_map produces the same output as tf.data.Dataset.interleave(cycle_length=1)
Args
map_func A function mapping a dataset element to a dataset.
Returns
Dataset A Dataset. from_generator View source
@staticmethod
from_generator(
generator, output_types=None, output_shapes=None, args=None,
output_signature=None
)
Creates a Dataset whose elements are generated by generator. (deprecated arguments) Warning: SOME ARGUMENTS ARE DEPRECATED: (output_shapes, output_types). They will be removed in a future version. Instructions for updating: Use output_signature instead. The generator argument must be a callable object that returns an object that supports the iter() protocol (e.g. a generator function). The elements generated by generator must be compatible with either the given output_signature argument or with the given output_types and (optionally) output_shapes arguments, whichever was specified. The recommended way to call from_generator is to use the output_signature argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by tf.TypeSpec objects from the output_signature argument:
def gen():
ragged_tensor = tf.ragged.constant([[1, 2], [3]])
yield 42, ragged_tensor
dataset = tf.data.Dataset.from_generator(
gen,
output_signature=(
tf.TensorSpec(shape=(), dtype=tf.int32),
tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32)))
list(dataset.take(1))
[(<tf.Tensor: shape=(), dtype=int32, numpy=42>,
<tf.RaggedTensor [[1, 2], [3]]>)]
There is also a deprecated way to call from_generator, using either the output_types argument alone or together with the output_shapes argument. In this case the output of the function will be assumed to consist of tf.Tensor objects with the types defined by output_types and with the shapes which are either unknown or defined by output_shapes.
Note: The current implementation of Dataset.from_generator() uses tf.numpy_function and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called Dataset.from_generator(). The body of generator will not be serialized in a GraphDef, and you should not use this method if you need to serialize your model and restore it in a different environment.
Note: If generator depends on mutable global variables or other external state, be aware that the runtime may invoke generator multiple times (in order to support repeating the Dataset) and at any time between the call to Dataset.from_generator() and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in generator before calling Dataset.from_generator().
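As a further illustration, the args parameter documented below forwards tensors to the generator as NumPy values; a minimal sketch with a hypothetical generator:
import tensorflow as tf

def gen(stop):
  # `stop` arrives as a NumPy scalar because `args` tensors are evaluated
  # and passed to the generator as NumPy-array arguments.
  for i in range(int(stop)):
    yield i

dataset = tf.data.Dataset.from_generator(
    gen,
    output_signature=tf.TensorSpec(shape=(), dtype=tf.int64),
    args=(5,))
print(list(dataset.as_numpy_iterator()))  # [0, 1, 2, 3, 4]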
Args
generator A callable object that returns an object that supports the iter() protocol. If args is not specified, generator must take no arguments; otherwise it must take as many arguments as there are values in args.
output_types (Optional.) A nested structure of tf.DType objects corresponding to each component of an element yielded by generator.
output_shapes (Optional.) A nested structure of tf.TensorShape objects corresponding to each component of an element yielded by generator.
args (Optional.) A tuple of tf.Tensor objects that will be evaluated and passed to generator as NumPy-array arguments.
output_signature (Optional.) A nested structure of tf.TypeSpec objects corresponding to each component of an element yielded by generator.
Returns
Dataset A Dataset. from_tensor_slices View source
@staticmethod
from_tensor_slices(
tensors
)
Creates a Dataset whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions.
# Slicing a 1D tensor produces scalar tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
list(dataset.as_numpy_iterator())
[1, 2, 3]
# Slicing a 2D tensor produces 1D tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
list(dataset.as_numpy_iterator())
[array([1, 2], dtype=int32), array([3, 4], dtype=int32)]
# Slicing a tuple of 1D tensors produces tuple elements containing
# scalar tensors.
dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6]))
list(dataset.as_numpy_iterator())
[(1, 3, 5), (2, 4, 6)]
# Dictionary structure is also preserved.
dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]})
list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3},
{'a': 2, 'b': 4}]
True
# Two tensors can be combined into one Dataset object.
features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor
labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor
dataset = Dataset.from_tensor_slices((features, labels))
# Both the features and the labels tensors can be converted
# to a Dataset object separately and combined after.
features_dataset = Dataset.from_tensor_slices(features)
labels_dataset = Dataset.from_tensor_slices(labels)
dataset = Dataset.zip((features_dataset, labels_dataset))
# A batched feature and label set can be converted to a Dataset
# in similar fashion.
batched_features = tf.constant([[[1, 3], [2, 3]],
[[2, 1], [1, 2]],
[[3, 3], [3, 2]]], shape=(3, 2, 2))
batched_labels = tf.constant([['A', 'A'],
['B', 'B'],
['A', 'B']], shape=(3, 2, 1))
dataset = Dataset.from_tensor_slices((batched_features, batched_labels))
for element in dataset.as_numpy_iterator():
print(element)
(array([[1, 3],
[2, 3]], dtype=int32), array([[b'A'],
[b'A']], dtype=object))
(array([[2, 1],
[1, 2]], dtype=int32), array([[b'B'],
[b'B']], dtype=object))
(array([[3, 3],
[3, 2]], dtype=int32), array([[b'A'],
[b'B']], dtype=object))
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
Args
tensors A dataset element, with each component having the same size in the first dimension.
Returns
Dataset A Dataset. from_tensors View source
@staticmethod
from_tensors(
tensors
)
Creates a Dataset with a single element, comprising the given tensors. from_tensors produces a dataset containing only a single element. To slice the input tensor into multiple elements, use from_tensor_slices instead.
dataset = tf.data.Dataset.from_tensors([1, 2, 3])
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32)]
dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A'))
list(dataset.as_numpy_iterator())
[(array([1, 2, 3], dtype=int32), b'A')]
# You can use `from_tensors` to produce a dataset which repeats
# the same example many times.
example = tf.constant([1,2,3])
dataset = tf.data.Dataset.from_tensors(example).repeat(2)
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)]
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
Args
tensors A dataset element.
Returns
Dataset A Dataset. interleave View source
interleave(
map_func, cycle_length=None, block_length=None, num_parallel_calls=None,
deterministic=None
)
Maps map_func across this dataset, and interleaves the results. For example, you can use Dataset.interleave() to process many input files concurrently:
# Preprocess 4 files concurrently, and interleave blocks of 16 records
# from each file.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
def parse_fn(filename):
return tf.data.Dataset.range(10)
dataset = dataset.interleave(lambda x:
tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
cycle_length=4, block_length=16)
The cycle_length and block_length arguments control the order in which elements are produced. cycle_length controls the number of input elements that are processed concurrently. If you set cycle_length to 1, this transformation will handle one input element at a time, and will produce identical results to tf.data.Dataset.flat_map. In general, this transformation will apply map_func to cycle_length input elements, open iterators on the returned Dataset objects, and cycle through them producing block_length consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator. For example:
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
# NOTE: New lines indicate "block" boundaries.
dataset = dataset.interleave(
lambda x: Dataset.from_tensors(x).repeat(6),
cycle_length=2, block_length=4)
list(dataset.as_numpy_iterator())
[1, 1, 1, 1,
2, 2, 2, 2,
1, 1,
2, 2,
3, 3, 3, 3,
4, 4, 4, 4,
3, 3,
4, 4,
5, 5, 5, 5,
5, 5]
Note: The order of elements yielded by this transformation is deterministic, as long as map_func is a pure function and deterministic=True. If map_func contains any stateful operations, the order in which that state is accessed is undefined.
Performance can often be improved by setting num_parallel_calls so that interleave will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set deterministic=False.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x),
cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to a dataset.
cycle_length (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If num_parallel_calls is set to tf.data.AUTOTUNE, the cycle_length argument identifies the maximum degree of parallelism.
block_length (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1.
num_parallel_calls (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. list_files View source
@staticmethod
list_files(
file_pattern, shuffle=None, seed=None
)
A dataset of all files matching one or more glob patterns. The file_pattern argument should be a small number of glob patterns. If your filenames have already been globbed, use Dataset.from_tensor_slices(filenames) instead, as re-globbing every filename with list_files may result in poor performance with remote storage systems.
Note: The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a seed or shuffle=False to get results in a deterministic order.
Example: If we had the following files on our filesystem:
/path/to/dir/a.txt
/path/to/dir/b.py
/path/to/dir/c.py
If we pass "/path/to/dir/*.py" as the directory, the dataset would produce:
/path/to/dir/b.py
/path/to/dir/c.py
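As a code sketch of this example (the directory and files are hypothetical):
import tensorflow as tf

# Matches the hypothetical files /path/to/dir/b.py and /path/to/dir/c.py.
dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
for filename in dataset:
  print(filename.numpy())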
Args
file_pattern A string, a list of strings, or a tf.Tensor of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched.
shuffle (Optional.) If True, the file names will be shuffled randomly. Defaults to True.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
Returns
Dataset A Dataset of strings corresponding to file names. map View source
map(
map_func, num_parallel_calls=None, deterministic=None
)
Maps map_func across the elements of this dataset. This transformation applies map_func to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. map_func can be used to change both the values and the structure of a dataset's elements. For example, adding 1 to each element, or projecting a subset of element components.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1)
list(dataset.as_numpy_iterator())
[2, 3, 4, 5, 6]
The input signature of map_func is determined by the structure of each element in this dataset.
dataset = Dataset.range(5)
# `map_func` takes a single argument of type `tf.Tensor` with the same
# shape and dtype.
result = dataset.map(lambda x: x + 1)
# Each element is a tuple containing two `tf.Tensor` objects.
elements = [(1, "foo"), (2, "bar"), (3, "baz")]
dataset = tf.data.Dataset.from_generator(
lambda: elements, (tf.int32, tf.string))
# `map_func` takes two arguments of type `tf.Tensor`. This function
# projects out just the first component.
result = dataset.map(lambda x_int, y_str: x_int)
list(result.as_numpy_iterator())
[1, 2, 3]
# Each element is a dictionary mapping strings to `tf.Tensor` objects.
elements = ([{"a": 1, "b": "foo"},
{"a": 2, "b": "bar"},
{"a": 3, "b": "baz"}])
dataset = tf.data.Dataset.from_generator(
lambda: elements, {"a": tf.int32, "b": tf.string})
# `map_func` takes a single argument of type `dict` with the same keys
# as the elements.
result = dataset.map(lambda d: str(d["a"]) + d["b"])
The value or values returned by map_func determine the structure of each element in the returned dataset.
dataset = tf.data.Dataset.range(3)
# `map_func` returns two `tf.Tensor` objects.
def g(x):
return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"])
result = dataset.map(g)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None))
# Python primitives, lists, and NumPy arrays are implicitly converted to
# `tf.Tensor`.
def h(x):
return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64)
result = dataset.map(h)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None))
# `map_func` can return nested structures.
def i(x):
return (37.0, [42, 16]), "foo"
result = dataset.map(i)
result.element_spec
((TensorSpec(shape=(), dtype=tf.float32, name=None),
TensorSpec(shape=(2,), dtype=tf.int32, name=None)),
TensorSpec(shape=(), dtype=tf.string, name=None))
map_func can accept as arguments and return any type of dataset element. Note that irrespective of the context in which map_func is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options: 1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code. 2) Use tf.py_function, which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
# transform a string tensor to upper case string using a Python function
def upper_case_fn(t: tf.Tensor):
return t.numpy().decode('utf-8').upper()
d = d.map(lambda x: tf.py_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
3) Use tf.numpy_function, which also allows you to write arbitrary Python code. Note that tf.py_function accepts tf.Tensor whereas tf.numpy_function accepts numpy arrays and returns only numpy arrays. For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
def upper_case_fn(t: np.ndarray):
return t.decode('utf-8').upper()
d = d.map(lambda x: tf.numpy_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
Note that the use of tf.numpy_function and tf.py_function in general precludes the possibility of executing user-defined transformations in parallel (because of Python GIL). Performance can often be improved by setting num_parallel_calls so that map will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set deterministic=False.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1,
num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to another dataset element.
num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. options View source
options()
Returns the options for this dataset and its inputs.
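A minimal sketch showing that options() reflects options applied earlier with with_options (the specific option is just an example):
import tensorflow as tf

opts = tf.data.Options()
opts.experimental_deterministic = False
ds = tf.data.Dataset.range(3).with_options(opts)
print(ds.options().experimental_deterministic)  # False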
Returns A tf.data.Options object representing the dataset options.
padded_batch View source
padded_batch(
batch_size, padded_shapes=None, padding_values=None, drop_remainder=False
)
Combines consecutive elements of this dataset into padded batches. This transformation combines multiple consecutive elements of the input dataset into a single element. Like tf.data.Dataset.batch, the components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced. Unlike tf.data.Dataset.batch, the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in padded_shapes. The padded_shapes argument determines the resulting shape for each dimension of each component in an output element: If the dimension is a constant, the component will be padded out to that length in that dimension. If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension.
A = (tf.data.Dataset
.range(1, 5, output_type=tf.int32)
.map(lambda x: tf.fill([x], x)))
# Pad to the smallest per-batch size that fits all elements.
B = A.padded_batch(2)
for element in B.as_numpy_iterator():
print(element)
[[1 0]
[2 2]]
[[3 3 3 0]
[4 4 4 4]]
# Pad to a fixed size.
C = A.padded_batch(2, padded_shapes=5)
for element in C.as_numpy_iterator():
print(element)
[[1 0 0 0 0]
[2 2 0 0 0]]
[[3 3 3 0 0]
[4 4 4 4 0]]
# Pad with a custom value.
D = A.padded_batch(2, padded_shapes=5, padding_values=-1)
for element in D.as_numpy_iterator():
print(element)
[[ 1 -1 -1 -1 -1]
[ 2 2 -1 -1 -1]]
[[ 3 3 3 -1 -1]
[ 4 4 4 4 -1]]
# Components of nested elements can be padded independently.
elements = [([1, 2, 3], [10]),
([4, 5], [11, 12])]
dataset = tf.data.Dataset.from_generator(
lambda: iter(elements), (tf.int32, tf.int32))
# Pad the first component of the tuple to length 4, and the second
# component to the smallest size that fits.
dataset = dataset.padded_batch(2,
padded_shapes=([4], [None]),
padding_values=(-1, 100))
list(dataset.as_numpy_iterator())
[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32),
array([[ 10, 100], [ 11, 12]], dtype=int32))]
# Pad with a single value and multiple components.
E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1)
for element in E.as_numpy_iterator():
print(element)
(array([[ 1, -1],
[ 2, 2]], dtype=int32), array([[ 1, -1],
[ 2, 2]], dtype=int32))
(array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32))
See also tf.data.experimental.dense_to_sparse_batch, which combines elements that may have different shapes into a tf.sparse.SparseTensor.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
padded_shapes (Optional.) A nested structure of tf.TensorShape or tf.int64 vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. padded_shapes must be set if any component has an unknown rank.
padding_values (Optional.) A nested structure of scalar-shaped tf.Tensor, representing the padding values to use for the respective components. None represents that the nested structure should be padded with default values. Defaults are 0 for numeric types and the empty string for string types. The padding_values should have the same structure as the input dataset. If padding_values is a single element and the input dataset has multiple components, then the same padding_values will be used to pad every component of the dataset. If padding_values is a scalar, then its value will be broadcasted to match the shape of each component.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset.
Raises
ValueError If a component has an unknown rank, and the padded_shapes argument is not set. prefetch View source
prefetch(
buffer_size
)
Creates a Dataset that prefetches elements from this dataset. Most dataset input pipelines should end with a call to prefetch. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements.
Note: Like other Dataset methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. examples.prefetch(2) will prefetch two elements (2 examples), while examples.batch(20).prefetch(2) will prefetch 2 elements (2 batches, of 20 examples each).
dataset = tf.data.Dataset.range(3)
dataset = dataset.prefetch(2)
list(dataset.as_numpy_iterator())
[0, 1, 2]
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the maximum number of elements that will be buffered when prefetching.
Returns
Dataset A Dataset. range View source
@staticmethod
range(
*args, **kwargs
)
Creates a Dataset of a step-separated range of values.
list(Dataset.range(5).as_numpy_iterator())
[0, 1, 2, 3, 4]
list(Dataset.range(2, 5).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2).as_numpy_iterator())
[1, 3]
list(Dataset.range(1, 5, -2).as_numpy_iterator())
[]
list(Dataset.range(5, 1).as_numpy_iterator())
[]
list(Dataset.range(5, 1, -2).as_numpy_iterator())
[5, 3]
list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator())
[1.0, 3.0]
Args
*args follows the same semantics as Python's built-in range. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2].
**kwargs output_type: Its expected dtype. (Optional, default: tf.int64).
Returns
Dataset A RangeDataset.
Raises
ValueError if len(args) == 0. reduce View source
reduce(
initial_state, reduce_func
)
Reduces the input dataset to a single element. The transformation calls reduce_func successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The initial_state argument is used for the initial state and the final state is returned as the result.
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy()
5
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy()
10
Args
initial_state An element representing the initial state of the transformation.
reduce_func A function that maps (old_state, input_element) to new_state. It must take two arguments and return a new element. The structure of new_state must match the structure of initial_state.
Returns A dataset element corresponding to the final state of the transformation.
repeat View source
repeat(
count=None
)
Repeats this dataset so each original value is seen count times.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.repeat(3)
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 3, 1, 2, 3]
Note: If this dataset is a function of global state (e.g. a random number generator), then different repetitions may produce different elements.
Args
count (Optional.) A tf.int64 scalar tf.Tensor, representing the number of times the dataset should be repeated. The default behavior (if count is None or -1) is for the dataset to be repeated indefinitely.
Returns
Dataset A Dataset. shard View source
shard(
num_shards, index
)
Creates a Dataset that includes only 1/num_shards of this dataset. shard is deterministic. The Dataset produced by A.shard(n, i) will contain all elements of A whose index mod n = i.
A = tf.data.Dataset.range(10)
B = A.shard(num_shards=3, index=0)
list(B.as_numpy_iterator())
[0, 3, 6, 9]
C = A.shard(num_shards=3, index=1)
list(C.as_numpy_iterator())
[1, 4, 7]
D = A.shard(num_shards=3, index=2)
list(D.as_numpy_iterator())
[2, 5, 8]
This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset. When reading a single input file, you can shard elements as follows:
d = tf.data.TFRecordDataset(input_file)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
Important caveats: Be sure to shard before you use any randomizing operator (such as shuffle). Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline:
d = Dataset.list_files(pattern)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.interleave(tf.data.TFRecordDataset,
cycle_length=num_readers, block_length=1)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
Args
num_shards A tf.int64 scalar tf.Tensor, representing the number of shards operating in parallel.
index A tf.int64 scalar tf.Tensor, representing the worker index.
Returns
Dataset A Dataset.
Raises
InvalidArgumentError if num_shards or index are illegal values.
Note: error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. passing in a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.)
shuffle View source
shuffle(
buffer_size, seed=None, reshuffle_each_iteration=None
)
Randomly shuffles the elements of this dataset. This dataset fills a buffer with buffer_size elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but buffer_size is set to 1,000, then shuffle will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer. reshuffle_each_iteration controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the repeat transformation:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 0, 2]
In TF 2.0, tf.data.Dataset objects are Python iterables which makes it possible to also create epochs through Python iteration:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the number of elements from this dataset from which the new dataset will sample.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
reshuffle_each_iteration (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to True.)
Returns
Dataset A Dataset. skip View source
skip(
count
)
Creates a Dataset that skips count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.skip(7)
list(dataset.as_numpy_iterator())
[7, 8, 9]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be skipped to form the new dataset. If count is greater than the size of this dataset, the new dataset will contain no elements. If count is -1, skips the entire dataset.
Returns
Dataset A Dataset. take View source
take(
count
)
Creates a Dataset with at most count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.take(3)
list(dataset.as_numpy_iterator())
[0, 1, 2]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be taken to form the new dataset. If count is -1, or if count is greater than the size of this dataset, the new dataset will contain all elements of this dataset.
Returns
Dataset A Dataset. unbatch View source
unbatch()
Splits elements of a dataset into multiple elements. For example, if elements of the dataset are shaped [B, a0, a1, ...], where B may vary for each input element, then for each element in the dataset, the unbatched dataset will contain B consecutive elements of shape [a0, a1, ...].
elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ]
dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64)
dataset = dataset.unbatch()
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 1, 2, 3, 4]
Note: unbatch requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of unbatch.
Returns A Dataset.
window View source
window(
size, shift=None, stride=1, drop_remainder=False
)
Combines (nests of) input elements into a dataset of (nests of) windows. A "window" is a finite dataset of flat elements of size size (or possibly fewer if there are not enough input elements to fill the window and drop_remainder evaluates to False). The shift argument determines the number of input elements by which the window moves on each iteration. If windows and elements are both numbered starting at 0, the first element in window k will be element k * shift of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset. The stride argument determines the stride of the input elements, and the shift argument determines the shift of the window. For example:
dataset = tf.data.Dataset.range(7).window(2)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1]
[2, 3]
[4, 5]
[6]
dataset = tf.data.Dataset.range(7).window(3, 2, 1, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1, 2]
[2, 3, 4]
[4, 5, 6]
dataset = tf.data.Dataset.range(7).window(3, 1, 2, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 2, 4]
[1, 3, 5]
[2, 4, 6]
Note that when the window transformation is applied to a dataset of nested elements, it produces a dataset of nested windows.
nested = ([1, 2, 3, 4], [5, 6, 7, 8])
dataset = tf.data.Dataset.from_tensor_slices(nested).window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print(tuple(to_numpy(component) for component in window))
([1, 2], [5, 6])
([3, 4], [7, 8])
dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]})
dataset = dataset.window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print({'a': to_numpy(window['a'])})
{'a': [1, 2]}
{'a': [3, 4]}
Args
size A tf.int64 scalar tf.Tensor, representing the number of elements of the input dataset to combine into a window. Must be positive.
shift (Optional.) A tf.int64 scalar tf.Tensor, representing the number of input elements by which the window moves in each iteration. Defaults to size. Must be positive.
stride (Optional.) A tf.int64 scalar tf.Tensor, representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element".
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last windows should be dropped if their size is smaller than size.
Returns
Dataset A Dataset of (nests of) windows -- a finite dataset of flat elements created from the (nests of) input elements. with_options View source
with_options(
options
)
Returns a new tf.data.Dataset with the given options set. The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as the same option is not set more than once to a non-default value.
ds = tf.data.Dataset.range(5)
ds = ds.interleave(lambda x: tf.data.Dataset.range(5),
cycle_length=3,
num_parallel_calls=3)
options = tf.data.Options()
# This will make the interleave order non-deterministic.
options.experimental_deterministic = False
ds = ds.with_options(options)
Args
options A tf.data.Options that identifies the options to use.
Returns
Dataset A Dataset with the given options.
Raises
ValueError when an option is set more than once to a non-default value zip View source
@staticmethod
zip(
datasets
)
Creates a Dataset by zipping together the given datasets. This method has similar semantics to the built-in zip() function in Python, with the main difference being that the datasets argument can be an arbitrary nested structure of Dataset objects.
# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ]
ds = tf.data.Dataset.zip((a, b))
list(ds.as_numpy_iterator())
[(1, 4), (2, 5), (3, 6)]
ds = tf.data.Dataset.zip((b, a))
list(ds.as_numpy_iterator())
[(4, 1), (5, 2), (6, 3)]
# The `datasets` argument may contain an arbitrary number of datasets.
c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8],
# [9, 10],
# [11, 12] ]
ds = tf.data.Dataset.zip((a, b, c))
for element in ds.as_numpy_iterator():
print(element)
(1, 4, array([7, 8]))
(2, 5, array([ 9, 10]))
(3, 6, array([11, 12]))
# The number of elements in the resulting dataset is the same as
# the size of the smallest dataset in `datasets`.
d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ]
ds = tf.data.Dataset.zip((a, d))
list(ds.as_numpy_iterator())
[(1, 13), (2, 14)]
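Because datasets may be any nested structure, a dict of datasets also works. The following sketch (not in the original examples) reuses the a and b datasets defined above and is purely illustrative:
# A dict of datasets yields dict elements.
ds = tf.data.Dataset.zip({'x': a, 'y': b})
list(ds.as_numpy_iterator())
[{'x': 1, 'y': 4}, {'x': 2, 'y': 5}, {'x': 3, 'y': 6}]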
Args
datasets A nested structure of datasets.
Returns
Dataset A Dataset. __bool__ View source
__bool__()
__iter__ View source
__iter__()
Creates an iterator for elements of this dataset. The returned iterator implements the Python Iterator protocol.
Returns A tf.data.Iterator for the elements of this dataset.
Raises
RuntimeError If not inside of tf.function and not executing eagerly. __len__ View source
__len__()
Returns the length of the dataset if it is known and finite. This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use tf.data.Dataset.cardinality instead.
Returns An integer representing the length of the dataset.
Raises
RuntimeError If the dataset length is unknown or infinite, or if eager execution is not enabled. __nonzero__ View source
__nonzero__() | tensorflow.data.tfrecorddataset |
Module: tf.debugging Public API for tf.debugging namespace. Modules experimental module: Public API for tf.debugging.experimental namespace. Functions Assert(...): Asserts that the given condition is true. assert_all_finite(...): Assert that the tensor does not contain any NaN's or Inf's. assert_equal(...): Assert the condition x == y holds element-wise. assert_greater(...): Assert the condition x > y holds element-wise. assert_greater_equal(...): Assert the condition x >= y holds element-wise. assert_integer(...): Assert that x is of integer dtype. assert_less(...): Assert the condition x < y holds element-wise. assert_less_equal(...): Assert the condition x <= y holds element-wise. assert_near(...): Assert the condition x and y are close element-wise. assert_negative(...): Assert the condition x < 0 holds element-wise. assert_non_negative(...): Assert the condition x >= 0 holds element-wise. assert_non_positive(...): Assert the condition x <= 0 holds element-wise. assert_none_equal(...): Assert the condition x != y holds for all elements. assert_positive(...): Assert the condition x > 0 holds element-wise. assert_proper_iterable(...): Static assert that values is a "proper" iterable. assert_rank(...): Assert that x has rank equal to rank. assert_rank_at_least(...): Assert that x has rank of at least rank. assert_rank_in(...): Assert that x has a rank in ranks. assert_same_float_dtype(...): Validate and return float type based on tensors and dtype. assert_scalar(...): Asserts that the given tensor is a scalar. assert_shapes(...): Assert tensor shapes and dimension size relationships between tensors. assert_type(...): Asserts that the given Tensor is of the specified type. check_numerics(...): Checks a tensor for NaN and Inf values. disable_check_numerics(...): Disable the eager/graph unified numerics checking mechanism. enable_check_numerics(...): Enable tensor numerics checking in an eager/graph unified fashion. get_log_device_placement(...): Get if device placements are logged. is_numeric_tensor(...): Returns True if the elements of tensor are numbers. set_log_device_placement(...): Set if device placements should be logged. | tensorflow.debugging |
tf.debugging.Assert View source on GitHub Asserts that the given condition is true. View aliases Main aliases
tf.Assert Compat aliases for migration See Migration guide for more details. tf.compat.v1.Assert, tf.compat.v1.debugging.Assert
tf.debugging.Assert(
condition, data, summarize=None, name=None
)
If condition evaluates to false, print the list of tensors in data. summarize determines how many entries of the tensors to print.
Args
condition The condition to evaluate.
data The tensors to print out when condition is false.
summarize Print this many entries of each tensor.
name A name for this operation (optional).
Returns
assert_op An Operation that, when executed, raises a tf.errors.InvalidArgumentError if condition is not true.
Raises
Note: The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method.
Tf1 Compatibility When in TF V1 mode (that is, outside tf.function) Assert needs a control dependency on the output to ensure the assertion executes:
# Ensure maximum element of x is smaller or equal to 1
assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
with tf.control_dependencies([assert_op]):
... code using x ...
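In TF2 eager execution, no control dependency is needed; the assertion runs immediately. A minimal illustrative sketch (x is a hypothetical tensor whose values must not exceed 1):
x = tf.constant([0.3, 0.7])
# Passes silently because the condition holds; a failing condition would raise
# tf.errors.InvalidArgumentError.
tf.debugging.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])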
Eager Compatibility returns None | tensorflow.debugging.assert |
tf.debugging.assert_all_finite View source on GitHub Assert that the tensor does not contain any NaN's or Inf's.
tf.debugging.assert_all_finite(
x, message, name=None
)
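For illustration, a minimal sketch with hypothetical values:
x = tf.constant([1.0, 2.0, 3.0])
# Returns x unchanged because every element is finite; a NaN or Inf would
# raise tf.errors.InvalidArgumentError.
x = tf.debugging.assert_all_finite(x, "x contains NaN or Inf")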
Args
x Tensor to check.
message Message to log on failure.
name A name for this operation (optional).
Returns Same tensor as x. | tensorflow.debugging.assert_all_finite |
tf.debugging.assert_equal View source on GitHub Assert the condition x == y holds element-wise. View aliases Main aliases
tf.assert_equal
tf.debugging.assert_equal(
x, y, message=None, summarize=None, name=None
)
This Op checks that x[i] == y[i] holds for every pair of (possibly broadcast) elements of x and y. If both x and y are empty, this is trivially satisfied. If x and y are not equal, message, as well as the first summarize entries of x and y are printed, and InvalidArgumentError is raised.
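For example, a small illustrative sketch in eager mode:
x = tf.constant([1, 2, 3])
y = tf.constant([1, 2, 3])
tf.debugging.assert_equal(x, y)  # passes and returns None in eager mode
# tf.debugging.assert_equal(x, tf.constant([1, 2, 4]))  # would raise InvalidArgumentError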
Args
x Numeric Tensor.
y Numeric Tensor, same dtype as and broadcastable to x.
message A string to prefix to the default message.
summarize Print this many entries of each tensor.
name A name for this operation (optional). Defaults to "assert_equal".
Returns Op that raises InvalidArgumentError if x == y is False. This can be used with tf.control_dependencies inside of tf.functions to block followup computation until the check has executed.
Raises
InvalidArgumentError if the check can be performed immediately and x == y is False. The check can be performed immediately during eager execution or if x and y are statically known. Eager Compatibility returns None | tensorflow.debugging.assert_equal |
tf.debugging.assert_greater View source on GitHub Assert the condition x > y holds element-wise. View aliases Main aliases
tf.assert_greater
tf.debugging.assert_greater(
x, y, message=None, summarize=None, name=None
)
This Op checks that x[i] > y[i] holds for every pair of (possibly broadcast) elements of x and y. If both x and y are empty, this is trivially satisfied. If x is not greater than y element-wise, message, as well as the first summarize entries of x and y are printed, and InvalidArgumentError is raised.
Args
x Numeric Tensor.
y Numeric Tensor, same dtype as and broadcastable to x.
message A string to prefix to the default message.
summarize Print this many entries of each tensor.
name A name for this operation (optional). Defaults to "assert_greater".
Returns Op that raises InvalidArgumentError if x > y is False. This can be used with tf.control_dependencies inside of tf.functions to block followup computation until the check has executed.
Raises
InvalidArgumentError if the check can be performed immediately and x > y is False. The check can be performed immediately during eager execution or if x and y are statically known. Eager Compatibility returns None | tensorflow.debugging.assert_greater |
tf.debugging.assert_greater_equal View source on GitHub Assert the condition x >= y holds element-wise.
tf.debugging.assert_greater_equal(
x, y, message=None, summarize=None, name=None
)
This Op checks that x[i] >= y[i] holds for every pair of (possibly broadcast) elements of x and y. If both x and y are empty, this is trivially satisfied. If x is not greater than or equal to y element-wise, message, as well as the first summarize entries of x and y are printed, and InvalidArgumentError is raised.
Args
x Numeric Tensor.
y Numeric Tensor, same dtype as and broadcastable to x.
message A string to prefix to the default message.
summarize Print this many entries of each tensor.
name A name for this operation (optional). Defaults to "assert_greater_equal".
Returns Op that raises InvalidArgumentError if x >= y is False. This can be used with tf.control_dependencies inside of tf.functions to block followup computation until the check has executed.
Raises
InvalidArgumentError if the check can be performed immediately and x >= y is False. The check can be performed immediately during eager execution or if x and y are statically known. Eager Compatibility returns None | tensorflow.debugging.assert_greater_equal |
tf.debugging.assert_integer View source on GitHub Assert that x is of integer dtype.
tf.debugging.assert_integer(
x, message=None, name=None
)
If x has a non-integer type, message, as well as the dtype of x are printed, and TypeError is raised. This can always be checked statically, so this method returns nothing.
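A minimal illustrative sketch:
tf.debugging.assert_integer(tf.constant([1, 2], dtype=tf.int32))  # passes
# tf.debugging.assert_integer(tf.constant([1.0]))  # would raise TypeError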
Args
x A Tensor.
message A string to prefix to the default message.
name A name for this operation (optional). Defaults to "assert_integer".
Raises
TypeError If x.dtype is not a non-quantized integer type. | tensorflow.debugging.assert_integer |
tf.debugging.assert_less View source on GitHub Assert the condition x < y holds element-wise. View aliases Main aliases
tf.assert_less
tf.debugging.assert_less(
x, y, message=None, summarize=None, name=None
)
This Op checks that x[i] < y[i] holds for every pair of (possibly broadcast) elements of x and y. If both x and y are empty, this is trivially satisfied. If x is not less than y element-wise, message, as well as the first summarize entries of x and y are printed, and InvalidArgumentError is raised.
Args
x Numeric Tensor.
y Numeric Tensor, same dtype as and broadcastable to x.
message A string to prefix to the default message.
summarize Print this many entries of each tensor.
name A name for this operation (optional). Defaults to "assert_less".
Returns Op that raises InvalidArgumentError if x < y is False. This can be used with tf.control_dependencies inside of tf.functions to block followup computation until the check has executed.
Raises
InvalidArgumentError if the check can be performed immediately and x < y is False. The check can be performed immediately during eager execution or if x and y are statically known. Eager Compatibility returns None | tensorflow.debugging.assert_less |
tf.debugging.assert_less_equal View source on GitHub Assert the condition x <= y holds element-wise.
tf.debugging.assert_less_equal(
x, y, message=None, summarize=None, name=None
)
This Op checks that x[i] <= y[i] holds for every pair of (possibly broadcast) elements of x and y. If both x and y are empty, this is trivially satisfied. If x is not less than or equal to y element-wise, message, as well as the first summarize entries of x and y are printed, and InvalidArgumentError is raised.
Args
x Numeric Tensor.
y Numeric Tensor, same dtype as and broadcastable to x.
message A string to prefix to the default message.
summarize Print this many entries of each tensor.
name A name for this operation (optional). Defaults to "assert_less_equal".
Returns Op that raises InvalidArgumentError if x <= y is False. This can be used with tf.control_dependencies inside of tf.functions to block followup computation until the check has executed.
Raises
InvalidArgumentError if the check can be performed immediately and x <= y is False. The check can be performed immediately during eager execution or if x and y are statically known. Eager Compatibility returns None | tensorflow.debugging.assert_less_equal |
tf.debugging.assert_near View source on GitHub Assert the condition x and y are close element-wise.
tf.debugging.assert_near(
x, y, rtol=None, atol=None, message=None, summarize=None, name=None
)
This Op checks that tf.abs(x[i] - y[i]) < atol + rtol * tf.abs(y[i]) holds for every pair of (possibly broadcast) elements of x and y. If both x and y are empty, this is trivially satisfied. If any elements of x and y are not close, message, as well as the first summarize entries of x and y are printed, and InvalidArgumentError is raised. The default atol and rtol are 10 * eps, where eps is the smallest representable positive number such that 1 + eps != 1. This is about 1.2e-6 in 32bit, 2.22e-15 in 64bit, and 0.00977 in 16bit. See numpy.finfo.
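A minimal illustrative sketch with hypothetical values:
x = tf.constant([1.0, 2.0])
y = x + 1e-7
tf.debugging.assert_near(x, y)  # passes: well within the default float32 tolerance
# tf.debugging.assert_near(x, y, rtol=0., atol=1e-9)  # would raise InvalidArgumentError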
Args
x Float or complex Tensor.
y Float or complex Tensor, same dtype as and broadcastable to x.
rtol Tensor. Same dtype as, and broadcastable to, x. The relative tolerance. Default is 10 * eps.
atol Tensor. Same dtype as, and broadcastable to, x. The absolute tolerance. Default is 10 * eps.
message A string to prefix to the default message.
summarize Print this many entries of each tensor.
name A name for this operation (optional). Defaults to "assert_near".
Returns Op that raises InvalidArgumentError if x and y are not close enough. This can be used with tf.control_dependencies inside of tf.functions to block followup computation until the check has executed.
Raises
InvalidArgumentError if the check can be performed immediately and any elements of x and y are not close enough. The check can be performed immediately during eager execution or if x and y are statically known. Eager Compatibility returns None Numpy Compatibility Similar to numpy.testing.assert_allclose, except tolerance depends on data type. This is due to the fact that TensorFlow is often used with 32bit, 64bit, and even 16bit data. | tensorflow.debugging.assert_near
tf.debugging.assert_negative View source on GitHub Assert the condition x < 0 holds element-wise.
tf.debugging.assert_negative(
x, message=None, summarize=None, name=None
)
This Op checks that x[i] < 0 holds for every element of x. If x is empty, this is trivially satisfied. If x is not negative everywhere, message, as well as the first summarize entries of x are printed, and InvalidArgumentError is raised.
Args
x Numeric Tensor.
message A string to prefix to the default message.
summarize Print this many entries of each tensor.
name A name for this operation (optional). Defaults to "assert_negative".
Returns Op raising InvalidArgumentError unless x is all negative. This can be used with tf.control_dependencies inside of tf.functions to block followup computation until the check has executed.
Raises
InvalidArgumentError if the check can be performed immediately and x[i] < 0 is False. The check can be performed immediately during eager execution or if x is statically known. Eager Compatibility returns None | tensorflow.debugging.assert_negative |
tf.debugging.assert_none_equal View source on GitHub Assert the condition x != y holds for all elements.
tf.debugging.assert_none_equal(
x, y, summarize=None, message=None, name=None
)
This Op checks that x[i] != y[i] holds for every pair of (possibly broadcast) elements of x and y. If both x and y are empty, this is trivially satisfied. If any elements of x and y are equal, message, as well as the first summarize entries of x and y are printed, and InvalidArgumentError is raised.
Args
x Numeric Tensor.
y Numeric Tensor, same dtype as and broadcastable to x.
summarize Print this many entries of each tensor.
message A string to prefix to the default message.
name A name for this operation (optional). Defaults to "assert_none_equal".
Returns Op that raises InvalidArgumentError if x != y is ever False. This can be used with tf.control_dependencies inside of tf.functions to block followup computation until the check has executed.
Raises
InvalidArgumentError if the check can be performed immediately and x != y is False for any pair of elements in x and y. The check can be performed immediately during eager execution or if x and y are statically known. Eager Compatibility returns None | tensorflow.debugging.assert_none_equal |
tf.debugging.assert_non_negative View source on GitHub Assert the condition x >= 0 holds element-wise.
tf.debugging.assert_non_negative(
x, message=None, summarize=None, name=None
)
This Op checks that x[i] >= 0 holds for every element of x. If x is empty, this is trivially satisfied. If x is not >= 0 everywhere, message, as well as the first summarize entries of x are printed, and InvalidArgumentError is raised.
Args
x Numeric Tensor.
message A string to prefix to the default message.
summarize Print this many entries of each tensor.
name A name for this operation (optional). Defaults to "assert_non_negative".
Returns Op raising InvalidArgumentError unless x is all non-negative. This can be used with tf.control_dependencies inside of tf.functions to block followup computation until the check has executed.
Raises
InvalidArgumentError if the check can be performed immediately and x[i] >= 0 is False. The check can be performed immediately during eager execution or if x is statically known. Eager Compatibility returns None | tensorflow.debugging.assert_non_negative |
tf.debugging.assert_non_positive View source on GitHub Assert the condition x <= 0 holds element-wise.
tf.debugging.assert_non_positive(
x, message=None, summarize=None, name=None
)
This Op checks that x[i] <= 0 holds for every element of x. If x is empty, this is trivially satisfied. If x is not <= 0 everywhere, message, as well as the first summarize entries of x are printed, and InvalidArgumentError is raised.
Args
x Numeric Tensor.
message A string to prefix to the default message.
summarize Print this many entries of each tensor.
name A name for this operation (optional). Defaults to "assert_non_positive".
Returns Op raising InvalidArgumentError unless x is all non-positive. This can be used with tf.control_dependencies inside of tf.functions to block followup computation until the check has executed.
Raises
InvalidArgumentError if the check can be performed immediately and x[i] <= 0 is False. The check can be performed immediately during eager execution or if x is statically known. Eager Compatibility returns None | tensorflow.debugging.assert_non_positive |
tf.debugging.assert_positive View source on GitHub Assert the condition x > 0 holds element-wise.
tf.debugging.assert_positive(
x, message=None, summarize=None, name=None
)
This Op checks that x[i] > 0 holds for every element of x. If x is empty, this is trivially satisfied. If x is not positive everywhere, message, as well as the first summarize entries of x are printed, and InvalidArgumentError is raised.
Args
x Numeric Tensor.
message A string to prefix to the default message.
summarize Print this many entries of each tensor.
name A name for this operation (optional). Defaults to "assert_positive".
Returns Op raising InvalidArgumentError unless x is all positive. This can be used with tf.control_dependencies inside of tf.functions to block followup computation until the check has executed.
Raises
InvalidArgumentError if the check can be performed immediately and x[i] > 0 is False. The check can be performed immediately during eager execution or if x is statically known. Eager Compatibility returns None | tensorflow.debugging.assert_positive |
tf.debugging.assert_proper_iterable View source on GitHub Static assert that values is a "proper" iterable. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.assert_proper_iterable, tf.compat.v1.debugging.assert_proper_iterable
tf.debugging.assert_proper_iterable(
values
)
Ops that expect iterables of Tensor can call this to validate input. Useful since Tensor, ndarray, and byte/text types are all iterables themselves.
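A minimal illustrative sketch:
tf.debugging.assert_proper_iterable([tf.constant(1), tf.constant(2)])  # passes: a list is a proper iterable
# tf.debugging.assert_proper_iterable(tf.constant([1, 2]))  # would raise TypeError: a Tensor is iterable but not accepted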
Args
values Object to be checked.
Raises
TypeError If values is not iterable or is one of Tensor, SparseTensor, np.array, tf.compat.bytes_or_text_types. | tensorflow.debugging.assert_proper_iterable |
tf.debugging.assert_rank View source on GitHub Assert that x has rank equal to rank. View aliases Main aliases
tf.assert_rank
tf.debugging.assert_rank(
x, rank, message=None, name=None
)
This Op checks that the rank of x is equal to rank. If x has a different rank, message, as well as the shape of x are printed, and InvalidArgumentError is raised.
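A minimal illustrative sketch:
x = tf.ones([2, 3])
tf.debugging.assert_rank(x, 2)  # passes
# tf.debugging.assert_rank(x, 3)  # would raise InvalidArgumentError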
Args
x Tensor.
rank Scalar integer Tensor.
message A string to prefix to the default message.
name A name for this operation (optional). Defaults to "assert_rank".
Returns Op raising InvalidArgumentError unless x has specified rank. If static checks determine x has correct rank, a no_op is returned. This can be used with tf.control_dependencies inside of tf.functions to block followup computation until the check has executed.
Raises
InvalidArgumentError if the check can be performed immediately and x does not have rank rank. The check can be performed immediately during eager execution or if the shape of x is statically known. Eager Compatibility returns None | tensorflow.debugging.assert_rank |
tf.debugging.assert_rank_at_least View source on GitHub Assert that x has rank of at least rank.
tf.debugging.assert_rank_at_least(
x, rank, message=None, name=None
)
This Op checks that the rank of x is greater than or equal to rank. If x has a rank lower than rank, message, as well as the shape of x are printed, and InvalidArgumentError is raised.
Args
x Tensor.
rank Scalar integer Tensor.
message A string to prefix to the default message.
name A name for this operation (optional). Defaults to "assert_rank_at_least".
Returns Op raising InvalidArgumentError unless x has specified rank or higher. If static checks determine x has correct rank, a no_op is returned. This can be used with tf.control_dependencies inside of tf.functions to block followup computation until the check has executed.
Raises
InvalidArgumentError If x does not have rank at least rank, but the rank cannot be statically determined.
ValueError If static checks determine x has mismatched rank. Eager Compatibility returns None | tensorflow.debugging.assert_rank_at_least |
tf.debugging.assert_rank_in View source on GitHub Assert that x has a rank in ranks.
tf.debugging.assert_rank_in(
x, ranks, message=None, name=None
)
This Op checks that the rank of x is in ranks. If x has a different rank, message, as well as the shape of x are printed, and InvalidArgumentError is raised.
Args
x Tensor.
ranks Iterable of scalar Tensor objects.
message A string to prefix to the default message.
name A name for this operation (optional). Defaults to "assert_rank_in".
Returns Op raising InvalidArgumentError unless rank of x is in ranks. If static checks determine x has matching rank, a no_op is returned. This can be used with tf.control_dependencies inside of tf.functions to block followup computation until the check has executed.
Raises
InvalidArgumentError If x does not have rank in ranks, but the rank cannot be statically determined.
ValueError If static checks determine x has mismatched rank. Eager Compatibility returns None | tensorflow.debugging.assert_rank_in |
tf.debugging.assert_same_float_dtype View source on GitHub Validate and return float type based on tensors and dtype. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.assert_same_float_dtype, tf.compat.v1.debugging.assert_same_float_dtype
tf.debugging.assert_same_float_dtype(
tensors=None, dtype=None
)
For ops such as matrix multiplication, inputs and weights must be of the same float type. This function validates that all tensors are the same type, validates that type is dtype (if supplied), and returns the type. Type must be a floating point type. If neither tensors nor dtype is supplied, the function will return dtypes.float32.
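A minimal illustrative sketch:
t1 = tf.constant([1.0], dtype=tf.float32)
t2 = tf.constant([2.0], dtype=tf.float32)
tf.debugging.assert_same_float_dtype([t1, t2])  # returns tf.float32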
Args
tensors Tensors of input values. Can include None elements, which will be ignored.
dtype Expected type.
Returns Validated type.
Raises
ValueError if neither tensors nor dtype is supplied, or result is not float, or the common type of the inputs is not a floating point type. | tensorflow.debugging.assert_same_float_dtype |
tf.debugging.assert_scalar View source on GitHub Asserts that the given tensor is a scalar.
tf.debugging.assert_scalar(
tensor, message=None, name=None
)
This function raises ValueError unless it can be certain that the given tensor is a scalar. ValueError is also raised if the shape of tensor is unknown. This is always checked statically, so this method returns nothing.
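A minimal illustrative sketch:
tf.debugging.assert_scalar(tf.constant(3.0))  # passes
# tf.debugging.assert_scalar(tf.constant([3.0]))  # would raise ValueError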
Args
tensor A Tensor.
message A string to prefix to the default message.
name A name for this operation. Defaults to "assert_scalar"
Raises
ValueError If the tensor is not scalar (rank 0), or if its shape is unknown. | tensorflow.debugging.assert_scalar |
tf.debugging.assert_shapes View source on GitHub Assert tensor shapes and dimension size relationships between tensors.
tf.debugging.assert_shapes(
shapes, data=None, summarize=None, message=None, name=None
)
This Op checks that a collection of tensors shape relationships satisfies given constraints. Example:
n = 10
q = 3
d = 7
x = tf.zeros([n,q])
y = tf.ones([n,d])
param = tf.Variable([1.0, 2.0, 3.0])
scalar = 1.0
tf.debugging.assert_shapes([
(x, ('N', 'Q')),
(y, ('N', 'D')),
(param, ('Q',)),
(scalar, ()),
])
tf.debugging.assert_shapes([
(x, ('N', 'D')),
(y, ('N', 'D'))
])
Traceback (most recent call last):
ValueError: ...
If x, y, param or scalar does not have a shape that satisfies all specified constraints, message, as well as the first summarize entries of the first encountered violating tensor are printed, and InvalidArgumentError is raised. Size entries in the specified shapes are checked against other entries by their hash, with two exceptions: a size entry is interpreted as an explicit size if it can be parsed as an integer primitive, and a size entry is interpreted as any size if it is None or '.'. If the first entry of a shape is ... (type Ellipsis) or '*', that indicates a variable number of outer dimensions of unspecified size, i.e. the constraint applies to the inner-most dimensions only. Scalar tensors and specified shapes of length zero (excluding the 'inner-most' prefix) are both treated as having a single dimension of size one.
Args
shapes dictionary with (Tensor to shape) items, or a list of (Tensor, shape) tuples. A shape must be an iterable.
data The tensors to print out if the condition is False. Defaults to error message and first few entries of the violating tensor.
summarize Print this many entries of the tensor.
message A string to prefix to the default message.
name A name for this operation (optional). Defaults to "assert_shapes".
Raises
ValueError If static checks determine any shape constraint is violated. | tensorflow.debugging.assert_shapes |
tf.debugging.assert_type View source on GitHub Asserts that the given Tensor is of the specified type.
tf.debugging.assert_type(
tensor, tf_type, message=None, name=None
)
This can always be checked statically, so this method returns nothing.
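A minimal illustrative sketch:
x = tf.constant([1.0, 2.0])
tf.debugging.assert_type(x, tf.float32)  # passes
# tf.debugging.assert_type(x, tf.int32)  # would raise TypeError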
Args
tensor A Tensor or SparseTensor.
tf_type A tensorflow type (dtypes.float32, tf.int64, dtypes.bool, etc).
message A string to prefix to the default message.
name A name for this operation. Defaults to "assert_type"
Raises
TypeError If the tensor's data type doesn't match tf_type. | tensorflow.debugging.assert_type |
tf.debugging.check_numerics Checks a tensor for NaN and Inf values. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.check_numerics, tf.compat.v1.debugging.check_numerics
tf.debugging.check_numerics(
tensor, message, name=None
)
When run, reports an InvalidArgument error if tensor has any values that are not a number (NaN) or infinity (Inf). Otherwise, passes tensor as-is.
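For illustration, a small sketch with hypothetical values:
x = tf.constant([1.0, 2.0, 3.0])
y = tf.debugging.check_numerics(x, "x contains bad values")  # passes x through unchanged
# tf.debugging.check_numerics(tf.math.log(tf.constant([-1.0])), "log produced NaN")  # would raise InvalidArgumentError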
Args
tensor A Tensor. Must be one of the following types: bfloat16, half, float32, float64.
message A string. Prefix of the error message.
name A name for the operation (optional).
Returns A Tensor. Has the same type as tensor. | tensorflow.debugging.check_numerics |
tf.debugging.disable_check_numerics Disable the eager/graph unified numerics checking mechanism. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.debugging.disable_check_numerics
tf.debugging.disable_check_numerics()
This method can be used after a call to tf.debugging.enable_check_numerics() to disable the numerics-checking mechanism that catches infinity and NaN values output by ops executed eagerly or in tf.function-compiled graphs. This method is idempotent. Calling it multiple times has the same effect as calling it once. This method takes effect only on the thread in which it is called. | tensorflow.debugging.disable_check_numerics |
tf.debugging.enable_check_numerics Enable tensor numerics checking in an eager/graph unified fashion. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.debugging.enable_check_numerics
tf.debugging.enable_check_numerics(
stack_height_limit=30, path_length_limit=50
)
The numerics checking mechanism will cause any TensorFlow eager execution or graph execution to error out as soon as an op's output tensor contains infinity or NaN. This method is idempotent. Calling it multiple times has the same effect as calling it once. This method takes effect only on the thread in which it is called. When an op's float-type output tensor contains any Infinity or NaN, a tf.errors.InvalidArgumentError will be thrown, with an error message that reveals the following information: The type of the op that generated the tensor with bad numerics. Data type (dtype) of the tensor. Shape of the tensor (to the extent known at the time of eager execution or graph construction). Name of the containing graph (if available). (Graph mode only): The stack trace of the intra-graph op's creation, with a stack-height limit and a path-length limit for visual clarity. The stack frames that belong to the user's code (as opposed to tensorflow's internal code) are highlighted with a text arrow ("->"). (Eager mode only): How many of the offending tensor's elements are Infinity and NaN, respectively. Once enabled, the check-numerics mechanism can be disabled by using tf.debugging.disable_check_numerics(). Example usage:
Catching infinity during the execution of a tf.function graph:
import tensorflow as tf
tf.debugging.enable_check_numerics()
@tf.function
def square_log_x_plus_1(x):
v = tf.math.log(x + 1)
return tf.math.square(v)
x = -1.0
# When the following line runs, a function graph will be compiled
# from the Python function `square_log_x_plus_1()`. Due to the
# `enable_check_numerics()` call above, the graph will contain
# numerics checking ops that will run during the function graph's
# execution. The function call generates an -infinity when the Log
# (logarithm) op operates on the output tensor of the Add op.
# The program errors out at this line, printing an error message.
y = square_log_x_plus_1(x)
z = -y
Catching NaN during eager execution:
import numpy as np
import tensorflow as tf
tf.debugging.enable_check_numerics()
x = np.array([[0.0, -1.0], [4.0, 3.0]])
# The following line executes the Sqrt op eagerly. Due to the negative
# element in the input array, a NaN is generated. Due to the
# `enable_check_numerics()` call above, the program errors immediately
# at this line, printing an error message.
y = tf.math.sqrt(x)
z = tf.matmul(y, y)
Note: If your code is running on TPUs, be sure to call tf.config.set_soft_device_placement(True) before calling tf.debugging.enable_check_numerics() as this API uses automatic outside compilation on TPUs. For example:
tf.config.set_soft_device_placement(True)
tf.debugging.enable_check_numerics()
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
strategy = tf.distribute.TPUStrategy(resolver)
with strategy.scope():
# ...
Args
stack_height_limit Limit to the height of the printed stack trace. Applicable only to ops in tf.functions (graphs).
path_length_limit Limit to the file path included in the printed stack trace. Applicable only to ops in tf.functions (graphs). | tensorflow.debugging.enable_check_numerics |
Module: tf.debugging.experimental Public API for tf.debugging.experimental namespace. Functions disable_dump_debug_info(...): Disable the currently-enabled debugging dumping. enable_dump_debug_info(...): Enable dumping debugging information from a TensorFlow program. | tensorflow.debugging.experimental |
tf.debugging.experimental.disable_dump_debug_info Disable the currently-enabled debugging dumping. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.debugging.experimental.disable_dump_debug_info
tf.debugging.experimental.disable_dump_debug_info()
If the enable_dump_debug_info() method under the same Python namespace has been invoked before, calling this method disables it. If no call to enable_dump_debug_info() has been made, calling this method is a no-op. Calling this method more than once is idempotent. | tensorflow.debugging.experimental.disable_dump_debug_info |
tf.debugging.experimental.enable_dump_debug_info Enable dumping debugging information from a TensorFlow program. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.debugging.experimental.enable_dump_debug_info
tf.debugging.experimental.enable_dump_debug_info(
dump_root, tensor_debug_mode=DEFAULT_TENSOR_DEBUG_MODE,
circular_buffer_size=1000, op_regex=None, tensor_dtypes=None
)
The debugging information is dumped to a directory on the file system specified as dump_root. The dumped debugging information can be ingested by debugger UIs. The files in the dump directory contain the following information: TensorFlow Function construction (e.g., compilation of Python functions decorated with @tf.function), the op types, names (if available), context, the input and output tensors, and the associated stack traces. Execution of TensorFlow operations (ops) and Functions and their stack traces, op types, names (if available) and contexts. In addition, depending on the value of the tensor_debug_mode argument (see Args section below), the value(s) of the output tensors or more concise summaries of the tensor values will be dumped. A snapshot of Python source files involved in the execution of the TensorFlow program. Once enabled, the dumping can be disabled with the corresponding disable_dump_debug_info() method under the same Python namespace. Calling this method more than once with the same dump_root is idempotent. Calling this method more than once with different tensor_debug_modes leads to a ValueError. Calling this method more than once with different circular_buffer_sizes leads to a ValueError. Calling this method with a different dump_root abolishes the previously-enabled dump_root. Usage example: tf.debugging.experimental.enable_dump_debug_info('/tmp/my-tfdbg-dumps')
# Code to build, train and run your TensorFlow model...
Note: If your code is running on TPUs, be sure to call tf.config.set_soft_device_placement(True) before calling tf.debugging.experimental.enable_dump_debug_info() as this API uses automatic outside compilation on TPUs. For example:
tf.config.set_soft_device_placement(True)
tf.debugging.experimental.enable_dump_debug_info(
logdir, tensor_debug_mode="FULL_HEALTH")
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
strategy = tf.distribute.TPUStrategy(resolver)
with strategy.scope():
# ...
Args
dump_root The directory path where the dumping information will be written.
tensor_debug_mode Debug mode for tensor values, as a string. The currently supported options are: "NO_TENSOR": (Default) Only traces the output tensors of all executed ops (including those executed eagerly at the Python level or as a part of a TensorFlow graph) and functions, while not extracting any information from the values of the tensors. "CURT_HEALTH": For each floating-dtype tensor (e.g., tensors of dtypes such as float32, float64 and bfloat16), extracts a binary bit indicating whether it contains any -infinity, +infinity or NaN. "CONCISE_HEALTH": For each floating-dtype tensor, extract total element count, and counts of -infinity, +infinity and NaN elements. "FULL_HEALTH": For each floating-dtype tensor, extracts the dtype, rank (number of dimensions), total element count, and counts of -infinity, +infinity and NaN elements. "SHAPE": For each tensor (regardless of dtype), extracts its dtype, rank, total element count and shape.
circular_buffer_size Size of the circular buffers for execution events. These circular buffers are designed to reduce the overhead of debugging dumping. They hold the most recent debug events concerning eager execution of ops and tf.functions and traces of tensor values computed inside tf.functions. They are written to the file system only when the proper flushing method is called (see description of return values below). Expected to be an integer. If <= 0, the circular-buffer behavior will be disabled, i.e., the execution debug events will be written to the file writers in the same way as non-execution events such as op creations and source-file snapshots.
op_regex Dump data from only the tensors from op types that match the regular expression (through Python's re.match()). "Op type" refers to the names of the TensorFlow operations (e.g., "MatMul", "LogSoftmax"), which may repeat in a TensorFlow function. It does not refer to the names of nodes (e.g., "dense/MatMul", "dense_1/MatMul_1") which are unique within a function. Example 1: Dump tensor data from only MatMul and Relu ops: op_regex="^(MatMul|Relu)$". Example 2: Dump tensors from all ops except Relu: op_regex="(?!^Relu$)". This filter operates in a logical AND relation with tensor_dtypes.
tensor_dtypes Dump data from only the tensors of the specified dtypes. This optional argument can be in either of the following formats: a list or tuple of DType objects or strings that can be converted to DType objects via tf.as_dtype(), for example tensor_dtypes=[tf.float32, tf.float64], tensor_dtypes=["float32", "float64"], tensor_dtypes=(tf.int32, tf.bool) or tensor_dtypes=("int32", "bool"); or a callable that takes a single DType argument and returns a Python boolean indicating whether the dtype is to be included in the data dumping, for example tensor_dtypes=lambda dtype: dtype.is_integer. This filter operates in a logical AND relation with op_regex.
Returns A DebugEventsWriter instance used by the dumping callback. The caller may use its flushing methods, including FlushNonExecutionFiles() and FlushExecutionFiles(). | tensorflow.debugging.experimental.enable_dump_debug_info |
tf.debugging.get_log_device_placement View source on GitHub Get if device placements are logged. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.debugging.get_log_device_placement
tf.debugging.get_log_device_placement()
Returns If device placements are logged. | tensorflow.debugging.get_log_device_placement |
tf.debugging.is_numeric_tensor View source on GitHub Returns True if the elements of tensor are numbers. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.debugging.is_numeric_tensor, tf.compat.v1.is_numeric_tensor
tf.debugging.is_numeric_tensor(
tensor
)
Specifically, returns True if the dtype of tensor is one of the following: tf.float32 tf.float64 tf.int8 tf.int16 tf.int32 tf.int64 tf.uint8 tf.qint8 tf.qint32 tf.quint8 tf.complex64 Returns False if tensor is of a non-numeric type or if tensor is not a tf.Tensor object. | tensorflow.debugging.is_numeric_tensor |
tf.debugging.set_log_device_placement View source on GitHub Set if device placements should be logged. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.debugging.set_log_device_placement
tf.debugging.set_log_device_placement(
enabled
)
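A minimal illustrative sketch, typically called at program startup:
tf.debugging.set_log_device_placement(True)
a = tf.constant([1.0, 2.0])  # the device placement of this op is now logged
tf.debugging.get_log_device_placement()  # True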
Args
enabled Whether to enable device placement logging. | tensorflow.debugging.set_log_device_placement
tf.device View source on GitHub Specifies the device for ops created/executed in this context.
tf.device(
device_name
)
This function specifies the device to be used for ops created/executed in a particular context. Nested contexts will inherit and also create/execute their ops on the specified device. If a specific device is not required, consider not using this function so that a device can be automatically assigned. In general the use of this function is optional. device_name can be fully specified, as in "/job:worker/task:1/device:cpu:0", or partially specified, containing only a subset of the "/"-separated fields. Any fields which are specified will override device annotations from outer scopes. For example:
with tf.device('/job:foo'):
# ops created here have devices with /job:foo
with tf.device('/job:bar/task:0/device:gpu:2'):
# ops created here have the fully specified device above
with tf.device('/device:gpu:1'):
# ops created here have the device '/job:foo/device:gpu:1'
Args
device_name The device name to use in the context.
Returns A context manager that specifies the default device to use for newly created ops.
Raises
RuntimeError If a function is passed in. | tensorflow.device |
tf.DeviceSpec View source on GitHub Represents a (possibly partial) specification for a TensorFlow device.
tf.DeviceSpec(
job=None, replica=None, task=None, device_type=None, device_index=None
)
DeviceSpecs are used throughout TensorFlow to describe where state is stored and computations occur. Using DeviceSpec allows you to parse device spec strings to verify their validity, merge them or compose them programmatically. Example:
# Place the operations on device "GPU:0" in the "ps" job.
device_spec = DeviceSpec(job="ps", device_type="GPU", device_index=0)
with tf.device(device_spec.to_string()):
# Both my_var and squared_var will be placed on /job:ps/device:GPU:0.
my_var = tf.Variable(..., name="my_variable")
squared_var = tf.square(my_var)
With eager execution disabled (by default in TensorFlow 1.x and by calling disable_eager_execution() in TensorFlow 2.x), the following syntax can be used:
tf.compat.v1.disable_eager_execution()
# Same as previous
device_spec = DeviceSpec(job="ps", device_type="GPU", device_index=0)
# No need of .to_string() method.
with tf.device(device_spec):
my_var = tf.Variable(..., name="my_variable")
squared_var = tf.square(my_var)
If a DeviceSpec is partially specified, it will be merged with other DeviceSpecs according to the scope in which it is defined. DeviceSpec components defined in inner scopes take precedence over those defined in outer scopes. For example:
gpu0_spec = DeviceSpec(job="ps", device_type="GPU", device_index=0)
with tf.device(DeviceSpec(job="train").to_string()):
with tf.device(gpu0_spec.to_string()):
# Nodes created here will be assigned to /job:ps/device:GPU:0.
with tf.device(DeviceSpec(device_type="GPU", device_index=1).to_string()):
# Nodes created here will be assigned to /job:train/device:GPU:1.
A DeviceSpec consists of 5 components -- each of which is optionally specified: Job: The job name. Replica: The replica index. Task: The task index. Device type: The device type string (e.g. "CPU" or "GPU"). Device index: The device index.
Args
job string. Optional job name.
replica int. Optional replica index.
task int. Optional task index.
device_type Optional device type string (e.g. "CPU" or "GPU")
device_index int. Optional device index. If left unspecified, device represents 'any' device_index.
Attributes
device_index
device_type
job
replica
task
Methods from_string View source
@classmethod
from_string(
spec
)
Construct a DeviceSpec from a string.
Args
spec a string of the form /job:<name>/replica:<id>/task:<id>/device:CPU:<id> or /job:<name>/replica:<id>/task:<id>/device:GPU:<id> as cpu and gpu are mutually exclusive. All entries are optional.
Returns A DeviceSpec.
make_merged_spec View source
make_merged_spec(
dev
)
Returns a new DeviceSpec which incorporates dev. When combining specs, dev will take precedence over the current spec. So for instance: first_spec = tf.DeviceSpec(job=0, device_type="CPU")
second_spec = tf.DeviceSpec(device_type="GPU")
combined_spec = first_spec.make_merged_spec(second_spec)
is equivalent to: combined_spec = tf.DeviceSpec(job=0, device_type="GPU")
Args
dev a DeviceSpec
Returns A new DeviceSpec which combines self and dev
parse_from_string View source
parse_from_string(
spec
)
Parse a DeviceSpec name into its components. 2.x behavior change: In TensorFlow 1.x, this function mutates its own state and returns itself. In 2.x, DeviceSpecs are immutable, and this function will return a DeviceSpec which contains the spec. Recommended:
# my_spec and my_updated_spec are unrelated.
my_spec = tf.DeviceSpec.from_string("/CPU:0")
my_updated_spec = tf.DeviceSpec.from_string("/GPU:0")
with tf.device(my_updated_spec):
...
Will work in 1.x and 2.x (though deprecated in 2.x):
my_spec = tf.DeviceSpec.from_string("/CPU:0")
my_updated_spec = my_spec.parse_from_string("/GPU:0")
with tf.device(my_updated_spec):
...
Will NOT work in 2.x:
my_spec = tf.DeviceSpec.from_string("/CPU:0")
my_spec.parse_from_string("/GPU:0") # <== Will not update my_spec
with tf.device(my_spec):
...
In general, DeviceSpec.from_string should completely replace DeviceSpec.parse_from_string, and DeviceSpec.replace should completely replace setting attributes directly.
Args
spec an optional string of the form /job:<name>/replica:<id>/task:<id>/device:CPU:<id> or /job:<name>/replica:<id>/task:<id>/device:GPU:<id> as cpu and gpu are mutually exclusive. All entries are optional.
Returns The DeviceSpec.
Raises
ValueError if the spec was not valid. replace View source
replace(
**kwargs
)
Convenience method for making a new DeviceSpec by overriding fields. For instance:
my_spec = DeviceSpec(job="my_job", device_type="CPU")
my_updated_spec = my_spec.replace(device_type="GPU")
my_other_spec = my_spec.replace(device_type=None)
Args
**kwargs This method takes the same args as the DeviceSpec constructor
Returns A DeviceSpec with the fields specified in kwargs overridden.
to_string View source
to_string()
Return a string representation of this DeviceSpec.
Returns a string of the form /job:<name>/replica:<id>/task:<id>/device:<device_type>:<id>.
__eq__ View source
__eq__(
other
)
Checks if the other DeviceSpec is the same as the current instance, e.g. has the same value for all the internal fields.
Args
other Another DeviceSpec
Returns Return True if other is also a DeviceSpec instance and has same value as the current instance. Return False otherwise. | tensorflow.devicespec |
Module: tf.distribute Library for running a computation across multiple devices. The intent of this library is that you can write an algorithm in a stylized way and it will be usable with a variety of different tf.distribute.Strategy implementations. Each descendant will implement a different strategy for distributing the algorithm across multiple devices/machines. Furthermore, these changes can be hidden inside the specific layers and other library classes that need special treatment to run in a distributed setting, so that most users' model definition code can run unchanged. The tf.distribute.Strategy API works the same way with eager and graph execution. Guides TensorFlow v2.x TensorFlow v1.x Tutorials
Distributed Training Tutorials The tutorials cover how to use tf.distribute.Strategy to do distributed training with native Keras APIs, custom training loops, and Estimator APIs. They also cover how to save/load models when using tf.distribute.Strategy.
Glossary
Data parallelism is where we run multiple copies of the model on different slices of the input data. This is in contrast to model parallelism where we divide up a single copy of a model across multiple devices. Note: we only support data parallelism for now, but hope to add support for model parallelism in the future. A device is a CPU or accelerator (e.g. GPUs, TPUs) on some machine that TensorFlow can run operations on (see e.g. tf.device). You may have multiple devices on a single machine, or be connected to devices on multiple machines. Devices used to run computations are called worker devices. Devices used to store variables are parameter devices. For some strategies, such as tf.distribute.MirroredStrategy, the worker and parameter devices will be the same (see mirrored variables below). For others they will be different. For example, tf.distribute.experimental.CentralStorageStrategy puts the variables on a single device (which may be a worker device or may be the CPU), and tf.distribute.experimental.ParameterServerStrategy puts the variables on separate machines called parameter servers (see below). A replica is one copy of the model, running on one slice of the input data. Right now each replica is executed on its own worker device, but once we add support for model parallelism a replica may span multiple worker devices. A host is the CPU device on a machine with worker devices, typically used for running input pipelines. A worker is defined to be the physical machine(s) containing the physical devices (e.g. GPUs, TPUs) on which the replicated computation is executed. A worker may contain one or more replicas, but contains at least one replica. Typically one worker will correspond to one machine, but in the case of very large models with model parallelism, one worker may span multiple machines. We typically run one input pipeline per worker, feeding all the replicas on that worker.
Synchronous, or more commonly sync, training is where the updates from each replica are aggregated together before updating the model variables. This is in contrast to asynchronous, or async training, where each replica updates the model variables independently. You may also have replicas partitioned into groups which are in sync within each group but async between groups. Parameter servers: These are machines that hold a single copy of parameters/variables, used by some strategies (right now just tf.distribute.experimental.ParameterServerStrategy). All replicas that want to operate on a variable retrieve it at the beginning of a step and send an update to be applied at the end of the step. These can in principle support either sync or async training, but right now we only have support for async training with parameter servers. Compare to tf.distribute.experimental.CentralStorageStrategy, which puts all variables on a single device on the same machine (and does sync training), and tf.distribute.MirroredStrategy, which mirrors variables to multiple devices (see below).
Replica context vs. Cross-replica context vs. Update context A replica context applies when you execute the computation function that was called with strategy.run. Conceptually, you're in replica context when executing the computation function that is being replicated. An update context is entered in a tf.distribute.StrategyExtended.update call. A cross-replica context is entered when you enter a strategy.scope. This is useful for calling tf.distribute.Strategy methods which operate across the replicas (like reduce_to()). By default you start in a replica context (the "default single replica context") and then some methods can switch you back and forth.
Distributed value: Distributed value is represented by the base class tf.distribute.DistributedValues. tf.distribute.DistributedValues is useful to represent values on multiple devices, and it contains a map from replica id to values. Two representative kinds of tf.distribute.DistributedValues are "PerReplica" and "Mirrored" values. "PerReplica" values exist on the worker devices, with a different value for each replica. They are produced by iterating through a distributed dataset returned by tf.distribute.Strategy.experimental_distribute_dataset and tf.distribute.Strategy.distribute_datasets_from_function. They are also the typical result returned by tf.distribute.Strategy.run. "Mirrored" values are like "PerReplica" values, except we know that the value on all replicas are the same. We can safely read a "Mirrored" value in a cross-replica context by using the value on any replica.
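For illustration only (not part of the glossary), a sketch of obtaining "PerReplica" values from a distributed dataset; experimental_local_results is used here just to inspect the per-replica components:
strategy = tf.distribute.MirroredStrategy()
dataset = tf.data.Dataset.range(8).batch(4)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
for per_replica_batch in dist_dataset:
  # per_replica_batch is a tf.distribute.DistributedValues holding one sub-batch per replica.
  print(strategy.experimental_local_results(per_replica_batch))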
Unwrapping and merging: Consider calling a function fn on multiple replicas, like strategy.run(fn, args=[w]) with an argument w that is a tf.distribute.DistributedValues. This means w will have a map taking replica id 0 to w0, replica id 1 to w1, etc. strategy.run() unwraps w before calling fn, so it calls fn(w0) on device d0, fn(w1) on device d1, etc. It then merges the return values from fn(), which leads to one common object if the returned values are the same object from every replica, or a DistributedValues object otherwise. Reductions and all-reduce: A reduction is a method of aggregating multiple values into one value, like "sum" or "mean". If a strategy is doing sync training, we will perform a reduction on the gradients to a parameter from all replicas before applying the update. All-reduce is an algorithm for performing a reduction on values from multiple devices and making the result available on all of those devices. Mirrored variables: These are variables that are created on multiple devices, where we keep the variables in sync by applying the same updates to every copy. Mirrored variables are created with tf.Variable(...synchronization=tf.VariableSynchronization.ON_WRITE...). Normally they are only used in synchronous training.
SyncOnRead variables SyncOnRead variables are created by tf.Variable(...synchronization=tf.VariableSynchronization.ON_READ...), and they are created on multiple devices. In replica context, each component variable on the local replica can perform reads and writes without synchronization with each other. When the SyncOnRead variable is read in cross-replica context, the values from component variables are aggregated and returned. SyncOnRead variables bring a lot of custom configuration difficulty to the underlying logic, so we do not encourage users to instantiate and use SyncOnRead variable on their own. We have mainly used SyncOnRead variables for use cases such as batch norm and metrics. For performance reasons, we often don't need to keep these statistics in sync every step and they can be accumulated on each replica independently. The only time we want to sync them is reporting or checkpointing, which typically happens in cross-replica context. SyncOnRead variables are also often used by advanced users who want to control when variable values are aggregated. For example, users sometimes want to maintain gradients independently on each replica for a couple of steps without aggregation.
Distribute-aware layers Layers are generally called in a replica context, except when defining a Keras functional model. tf.distribute.in_cross_replica_context will let you determine which case you are in. The tf.distribute.get_replica_context function returns the default replica context outside a strategy scope, None when inside a strategy scope but in cross-replica context, and a tf.distribute.ReplicaContext object when inside a strategy scope and within a tf.distribute.Strategy.run function. The ReplicaContext object has an all_reduce method for aggregating across all replicas.
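A hedged sketch of a distribute-aware computation along these lines (the function name and values are illustrative):
strategy = tf.distribute.MirroredStrategy()

def global_mean(v):
  # A distribute-aware computation: works both inside and outside strategy.run.
  ctx = tf.distribute.get_replica_context()
  if ctx is None:
    # Cross-replica context (e.g. while defining a Keras functional model):
    # fall back to a plain local reduction.
    return tf.reduce_mean(v)
  # Replica context: average the per-replica means across all replicas.
  return ctx.all_reduce(tf.distribute.ReduceOp.MEAN, tf.reduce_mean(v))

# Called via strategy.run, `global_mean` runs in replica context on each replica.
strategy.run(global_mean, args=(tf.constant([1., 2., 3., 4.]),))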
Note that we provide a default version of tf.distribute.Strategy that is used when no other strategy is in scope, that provides the same API with reasonable default behavior. Modules cluster_resolver module: Library imports for ClusterResolvers. experimental module: Public API for tf.distribute.experimental namespace. Classes class CrossDeviceOps: Base class for cross-device reduction and broadcasting algorithms. class DistributedDataset: Represents a dataset distributed among devices and machines. class DistributedIterator: An iterator over tf.distribute.DistributedDataset. class DistributedValues: Base class for representing distributed values. class HierarchicalCopyAllReduce: Hierarchical copy all-reduce implementation of CrossDeviceOps. class InputContext: A class wrapping information needed by an input function. class InputOptions: Run options for experimental_distribute_dataset(s_from_function). class InputReplicationMode: Replication mode for input function. class MirroredStrategy: Synchronous training across multiple replicas on one machine. class MultiWorkerMirroredStrategy: A distribution strategy for synchronous training on multiple workers. class NcclAllReduce: NCCL all-reduce implementation of CrossDeviceOps. class OneDeviceStrategy: A distribution strategy for running on a single device. class ReduceOp: Indicates how a set of values should be reduced. class ReductionToOneDevice: A CrossDeviceOps implementation that copies values to one device to reduce. class ReplicaContext: A class with a collection of APIs that can be called in a replica context. class RunOptions: Run options for strategy.run. class Server: An in-process TensorFlow server, for use in distributed training. class Strategy: A state & compute distribution policy on a list of devices. class StrategyExtended: Additional APIs for algorithms that need to be distribution-aware. class TPUStrategy: Synchronous training on TPUs and TPU Pods. Functions experimental_set_strategy(...): Set a tf.distribute.Strategy as current without with strategy.scope(). get_replica_context(...): Returns the current tf.distribute.ReplicaContext or None. get_strategy(...): Returns the current tf.distribute.Strategy object. has_strategy(...): Return if there is a current non-default tf.distribute.Strategy. in_cross_replica_context(...): Returns True if in a cross-replica context. | tensorflow.distribute |
Module: tf.distribute.cluster_resolver Library imports for ClusterResolvers. This library contains all implementations of ClusterResolvers. ClusterResolvers are a way of specifying cluster information for distributed execution. Built on top of existing ClusterSpec framework, ClusterResolvers are a way for TensorFlow to communicate with various cluster management systems (e.g. GCE, AWS, etc...). Classes class ClusterResolver: Abstract class for all implementations of ClusterResolvers. class GCEClusterResolver: ClusterResolver for Google Compute Engine. class KubernetesClusterResolver: ClusterResolver for Kubernetes. class SimpleClusterResolver: Simple implementation of ClusterResolver that accepts all attributes. class SlurmClusterResolver: ClusterResolver for system with Slurm workload manager. class TFConfigClusterResolver: Implementation of a ClusterResolver which reads the TF_CONFIG EnvVar. class TPUClusterResolver: Cluster Resolver for Google Cloud TPUs. class UnionResolver: Performs a union on underlying ClusterResolvers. | tensorflow.distribute.cluster_resolver |
tf.distribute.cluster_resolver.ClusterResolver View source on GitHub Abstract class for all implementations of ClusterResolvers. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.distribute.cluster_resolver.ClusterResolver This defines the skeleton for all implementations of ClusterResolvers. ClusterResolvers are a way for TensorFlow to communicate with various cluster management systems (e.g. GCE, AWS, etc...) and give TensorFlow the necessary information to set up distributed training. By letting TensorFlow communicate with these systems, we will be able to automatically discover and resolve IP addresses for various TensorFlow workers. This will eventually allow us to automatically recover from underlying machine failures and scale TensorFlow worker clusters up and down. Note to implementors of tf.distribute.cluster_resolver.ClusterResolver subclasses: In addition to these abstract methods, when task_type, task_id, and rpc_layer attributes are applicable, you should also implement them either as properties with getters or setters, or directly set the attributes self._task_type, self._task_id, or self._rpc_layer so the base class' getters and setters are used. See tf.distribute.cluster_resolver.SimpleClusterResolver.__init__ for an example. In general, multi-client tf.distribute strategies such as tf.distribute.experimental.MultiWorkerMirroredStrategy require task_type and task_id properties to be available in the ClusterResolver they are using. On the other hand, these concepts are not applicable in single-client strategies, such as tf.distribute.experimental.TPUStrategy, because the program is only expected to be run on one task, so there should not be a need to have code branches according to task type and task id. task_type is the name of the server's current named job (e.g. 'worker', 'ps' in a distributed parameterized training job). task_id is the ordinal index of the server within the task type. rpc_layer is the protocol used by TensorFlow to communicate with other TensorFlow servers in a distributed environment.
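As a hedged illustration of the note above, here is a minimal sketch of a resolver subclass; the class name and attribute values are hypothetical and not part of TensorFlow:
import tensorflow as tf

class StaticClusterResolver(tf.distribute.cluster_resolver.ClusterResolver):
  """Illustrative resolver that serves a fixed ClusterSpec."""

  def __init__(self, cluster_dict, task_type=None, task_id=None, rpc_layer="grpc"):
    self._cluster_spec = tf.train.ClusterSpec(cluster_dict)
    # Setting these private attributes lets the base class' task_type/task_id/
    # rpc_layer getters and setters work, as described in the note above.
    self._task_type = task_type
    self._task_id = task_id
    self._rpc_layer = rpc_layer

  def cluster_spec(self):
    return self._cluster_spec

  def master(self, task_type=None, task_id=None, rpc_layer=None):
    task_type = task_type if task_type is not None else self._task_type
    task_id = task_id if task_id is not None else self._task_id
    if task_type is None or task_id is None:
      return ""
    addr = self._cluster_spec.task_address(task_type, task_id)
    return "%s://%s" % (rpc_layer or self._rpc_layer, addr)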
Attributes
environment Returns the current environment which TensorFlow is running in. There are two possible return values, "google" (when TensorFlow is running in a Google-internal environment) or an empty string (when TensorFlow is running elsewhere). If you are implementing a ClusterResolver that works in both the Google environment and the open-source world (for instance, a TPU ClusterResolver or similar), you will have to return the appropriate string depending on the environment, which you will have to detect. Otherwise, if you are implementing a ClusterResolver that will only work in open-source TensorFlow, you do not need to implement this property.
task_id Returns the task id this ClusterResolver indicates. In TensorFlow distributed environment, each job may have an applicable task id, which is the index of the instance within its task type. This is useful when user needs to run specific code according to task index. For example, cluster_spec = tf.train.ClusterSpec({
"ps": ["localhost:2222", "localhost:2223"],
"worker": ["localhost:2224", "localhost:2225", "localhost:2226"]
})
# SimpleClusterResolver is used here for illustration; other cluster
# resolvers may be used for other source of task type/id.
simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker",
task_id=0)
...
if simple_resolver.task_type == 'worker' and simple_resolver.task_id == 0:
  # Perform something that's only applicable on 'worker' type, id 0. This
  # block will run on this particular instance since we've specified this
  # task to be a 'worker', id 0 in the above cluster resolver.
  pass
else:
  # Perform something that's only applicable on other ids. This block will
  # not run on this particular instance.
  pass
Returns None if such information is not available or is not applicable in the current distributed environment, such as training with tf.distribute.cluster_resolver.TPUClusterResolver. For more information, please see tf.distribute.cluster_resolver.ClusterResolver's class docstring.
task_type Returns the task type this ClusterResolver indicates. In TensorFlow distributed environment, each job may have an applicable task type. Valid task types in TensorFlow include 'chief': a worker that is designated with more responsibility, 'worker': a regular worker for training/evaluation, 'ps': a parameter server, or 'evaluator': an evaluator that evaluates the checkpoints for metrics. See Multi-worker configuration for more information about 'chief' and 'worker' task type, which are most commonly used. Having access to such information is useful when user needs to run specific code according to task types. For example, cluster_spec = tf.train.ClusterSpec({
"ps": ["localhost:2222", "localhost:2223"],
"worker": ["localhost:2224", "localhost:2225", "localhost:2226"]
})
# SimpleClusterResolver is used here for illustration; other cluster
# resolvers may be used for other source of task type/id.
simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker",
task_id=1)
...
if simple_resolver.task_type == 'worker':
  # Perform something that's only applicable on workers. This block
  # will run on this particular instance since we've specified this task to
  # be a worker in the above cluster resolver.
  pass
elif simple_resolver.task_type == 'ps':
  # Perform something that's only applicable on parameter servers. This
  # block will not run on this particular instance.
  pass
Returns None if such information is not available or is not applicable in the current distributed environment, such as training with tf.distribute.experimental.TPUStrategy. For more information, please see tf.distribute.cluster_resolver.ClusterResolver's class doc.
Methods cluster_spec View source
@abc.abstractmethod
cluster_spec()
Retrieve the current state of the cluster and return a tf.train.ClusterSpec.
Returns A tf.train.ClusterSpec representing the state of the cluster at the moment this function is called.
Implementors of this function must take care in ensuring that the ClusterSpec returned is up-to-date at the time of calling this function. This usually means retrieving the information from the underlying cluster management system every time this function is invoked and reconstructing a cluster_spec, rather than attempting to cache anything. master View source
@abc.abstractmethod
master(
task_type=None, task_id=None, rpc_layer=None
)
Retrieves the name or URL of the session master.
Note: this is only useful for TensorFlow 1.x.
Args
task_type (Optional) The type of the TensorFlow task of the master.
task_id (Optional) The index of the TensorFlow task of the master.
rpc_layer (Optional) The RPC protocol for the given cluster.
Returns The name or URL of the session master.
Implementors of this function must take care in ensuring that the master returned is up-to-date at the time of calling this function. This usually means retrieving the master every time this function is invoked. num_accelerators View source
num_accelerators(
task_type=None, task_id=None, config_proto=None
)
Returns the number of accelerator cores per worker. This returns the number of accelerator cores (such as GPUs and TPUs) available per worker. Optionally, callers may specify the task_type and task_id if they want to target a specific TensorFlow task to query the number of accelerators. This is to support heterogeneous environments, where the number of accelerator cores per host is different.
Args
task_type (Optional) The type of the TensorFlow task of the machine we want to query.
task_id (Optional) The index of the TensorFlow task of the machine we want to query.
config_proto (Optional) Configuration for starting a new session to query how many accelerator cores it has.
Returns A map of accelerator types to number of cores. | tensorflow.distribute.cluster_resolver.clusterresolver |
tf.distribute.cluster_resolver.GCEClusterResolver View source on GitHub ClusterResolver for Google Compute Engine. Inherits From: ClusterResolver View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.distribute.cluster_resolver.GCEClusterResolver
tf.distribute.cluster_resolver.GCEClusterResolver(
project, zone, instance_group, port, task_type='worker', task_id=0,
rpc_layer='grpc', credentials='default', service=None
)
This is an implementation of cluster resolvers for the Google Compute Engine instance group platform. By specifying a project, zone, and instance group, this will retrieve the IP address of all the instances within the instance group and return a ClusterResolver object suitable for use for distributed TensorFlow.
Note: this cluster resolver cannot retrieve task_type, task_id or rpc_layer. To use it with some distribution strategies like tf.distribute.experimental.MultiWorkerMirroredStrategy, you will need to specify task_type and task_id in the constructor.
Usage example with tf.distribute.Strategy: # On worker 0
cluster_resolver = GCEClusterResolver("my-project", "us-west1",
"my-instance-group",
task_type="worker", task_id=0)
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
cluster_resolver=cluster_resolver)
# On worker 1
cluster_resolver = GCEClusterResolver("my-project", "us-west1",
"my-instance-group",
task_type="worker", task_id=1)
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
cluster_resolver=cluster_resolver)
Args
project Name of the GCE project.
zone Zone of the GCE instance group.
instance_group Name of the GCE instance group.
port Port of the listening TensorFlow server (default: 8470)
task_type Name of the TensorFlow job this GCE instance group of VM instances belong to.
task_id The task index for this particular VM, within the GCE instance group. In particular, every single instance should be assigned a unique ordinal index within an instance group manually so that they can be distinguished from each other.
rpc_layer The RPC layer TensorFlow should use to communicate across instances.
credentials GCE Credentials. If nothing is specified, this defaults to GoogleCredentials.get_application_default().
service The GCE API object returned by the googleapiclient.discovery function. (Default: discovery.build('compute', 'v1')). If you specify a custom service object, then the credentials parameter will be ignored.
Raises
ImportError If the googleapiclient is not installed.
Attributes
environment Returns the current environment which TensorFlow is running in. There are two possible return values, "google" (when TensorFlow is running in a Google-internal environment) or an empty string (when TensorFlow is running elsewhere). If you are implementing a ClusterResolver that works in both the Google environment and the open-source world (for instance, a TPU ClusterResolver or similar), you will have to return the appropriate string depending on the environment, which you will have to detect. Otherwise, if you are implementing a ClusterResolver that will only work in open-source TensorFlow, you do not need to implement this property.
rpc_layer
task_id Returns the task id this ClusterResolver indicates. In TensorFlow distributed environment, each job may have an applicable task id, which is the index of the instance within its task type. This is useful when user needs to run specific code according to task index. For example, cluster_spec = tf.train.ClusterSpec({
"ps": ["localhost:2222", "localhost:2223"],
"worker": ["localhost:2224", "localhost:2225", "localhost:2226"]
})
# SimpleClusterResolver is used here for illustration; other cluster
# resolvers may be used for other source of task type/id.
simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker",
task_id=0)
...
if simple_resolver.task_type == 'worker' and simple_resolver.task_id == 0:
  # Perform something that's only applicable on 'worker' type, id 0. This
  # block will run on this particular instance since we've specified this
  # task to be a 'worker', id 0 in the above cluster resolver.
  pass
else:
  # Perform something that's only applicable on other ids. This block will
  # not run on this particular instance.
  pass
Returns None if such information is not available or is not applicable in the current distributed environment, such as training with tf.distribute.cluster_resolver.TPUClusterResolver. For more information, please see tf.distribute.cluster_resolver.ClusterResolver's class docstring.
task_type Returns the task type this ClusterResolver indicates. In TensorFlow distributed environment, each job may have an applicable task type. Valid task types in TensorFlow include 'chief': a worker that is designated with more responsibility, 'worker': a regular worker for training/evaluation, 'ps': a parameter server, or 'evaluator': an evaluator that evaluates the checkpoints for metrics. See Multi-worker configuration for more information about 'chief' and 'worker' task type, which are most commonly used. Having access to such information is useful when user needs to run specific code according to task types. For example, cluster_spec = tf.train.ClusterSpec({
"ps": ["localhost:2222", "localhost:2223"],
"worker": ["localhost:2224", "localhost:2225", "localhost:2226"]
})
# SimpleClusterResolver is used here for illustration; other cluster
# resolvers may be used for other source of task type/id.
simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker",
task_id=1)
...
if simple_resolver.task_type == 'worker':
  # Perform something that's only applicable on workers. This block
  # will run on this particular instance since we've specified this task to
  # be a worker in the above cluster resolver.
  pass
elif simple_resolver.task_type == 'ps':
  # Perform something that's only applicable on parameter servers. This
  # block will not run on this particular instance.
  pass
Returns None if such information is not available or is not applicable in the current distributed environment, such as training with tf.distribute.experimental.TPUStrategy. For more information, please see tf.distribute.cluster_resolver.ClusterResolver's class doc.
Methods cluster_spec View source
cluster_spec()
Returns a ClusterSpec object based on the latest instance group info. This returns a ClusterSpec object for use based on information from the specified instance group. We will retrieve the information from the GCE APIs every time this method is called.
Returns A ClusterSpec containing host information retrieved from GCE.
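For illustration, a hedged sketch (the project, zone, instance group names and addresses are placeholders) of inspecting the returned ClusterSpec as a dictionary:
cluster_resolver = GCEClusterResolver("my-project", "us-west1",
                                      "my-instance-group", port=8470,
                                      task_type="worker", task_id=0)
spec = cluster_resolver.cluster_spec()
spec.as_dict()  # e.g. {"worker": ["10.0.0.2:8470", "10.0.0.3:8470"]}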
master View source
master(
task_type=None, task_id=None, rpc_layer=None
)
Retrieves the name or URL of the session master.
Note: this is only useful for TensorFlow 1.x.
Args
task_type (Optional) The type of the TensorFlow task of the master.
task_id (Optional) The index of the TensorFlow task of the master.
rpc_layer (Optional) The RPC protocol for the given cluster.
Returns The name or URL of the session master.
Implementors of this function must take care in ensuring that the master returned is up-to-date at the time of calling this function. This usually means retrieving the master every time this function is invoked. num_accelerators View source
num_accelerators(
task_type=None, task_id=None, config_proto=None
)
Returns the number of accelerator cores per worker. This returns the number of accelerator cores (such as GPUs and TPUs) available per worker. Optionally, callers may specify the task_type and task_id if they want to target a specific TensorFlow task to query the number of accelerators. This is to support heterogeneous environments, where the number of accelerator cores per host is different.
Args
task_type (Optional) The type of the TensorFlow task of the machine we want to query.
task_id (Optional) The index of the TensorFlow task of the machine we want to query.
config_proto (Optional) Configuration for starting a new session to query how many accelerator cores it has.
Returns A map of accelerator types to number of cores. | tensorflow.distribute.cluster_resolver.gceclusterresolver |
tf.distribute.cluster_resolver.KubernetesClusterResolver View source on GitHub ClusterResolver for Kubernetes. Inherits From: ClusterResolver View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.distribute.cluster_resolver.KubernetesClusterResolver
tf.distribute.cluster_resolver.KubernetesClusterResolver(
job_to_label_mapping=None, tf_server_port=8470, rpc_layer='grpc',
override_client=None
)
This is an implementation of cluster resolvers for Kubernetes. When given the Kubernetes namespace and label selector for pods, we will retrieve the pod IP addresses of all running pods matching the selector, and return a ClusterSpec based on that information.
Note: it cannot retrieve task_type, task_id or rpc_layer. To use it with some distribution strategies like tf.distribute.experimental.MultiWorkerMirroredStrategy, you will need to specify task_type and task_id by setting these attributes.
Usage example with tf.distribute.Strategy: # On worker 0
cluster_resolver = KubernetesClusterResolver(
{"worker": ["job-name=worker-cluster-a", "job-name=worker-cluster-b"]})
cluster_resolver.task_type = "worker"
cluster_resolver.task_id = 0
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
cluster_resolver=cluster_resolver)
# On worker 1
cluster_resolver = KubernetesClusterResolver(
{"worker": ["job-name=worker-cluster-a", "job-name=worker-cluster-b"]})
cluster_resolver.task_type = "worker"
cluster_resolver.task_id = 1
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
cluster_resolver=cluster_resolver)
Args
job_to_label_mapping A mapping of TensorFlow jobs to label selectors. This allows users to specify many TensorFlow jobs in one Cluster Resolver, and each job can have pods belonging to different label selectors. For example, a sample mapping might be {'worker': ['job-name=worker-cluster-a', 'job-name=worker-cluster-b'],
'ps': ['job-name=ps-1', 'job-name=ps-2']}
tf_server_port The port the TensorFlow server is listening on.
rpc_layer (Optional) The RPC layer TensorFlow should use to communicate between tasks in Kubernetes. Defaults to 'grpc'.
override_client The Kubernetes client (usually automatically retrieved using from kubernetes import client as k8sclient). If you pass this in, you are responsible for setting Kubernetes credentials manually.
Raises
ImportError If the Kubernetes Python client is not installed and no override_client is passed in.
RuntimeError If autoresolve_task is not a boolean or a callable.
Attributes
environment Returns the current environment which TensorFlow is running in. There are two possible return values, "google" (when TensorFlow is running in a Google-internal environment) or an empty string (when TensorFlow is running elsewhere). If you are implementing a ClusterResolver that works in both the Google environment and the open-source world (for instance, a TPU ClusterResolver or similar), you will have to return the appropriate string depending on the environment, which you will have to detect. Otherwise, if you are implementing a ClusterResolver that will only work in open-source TensorFlow, you do not need to implement this property.
task_id Returns the task id this ClusterResolver indicates. In TensorFlow distributed environment, each job may have an applicable task id, which is the index of the instance within its task type. This is useful when user needs to run specific code according to task index. For example, cluster_spec = tf.train.ClusterSpec({
"ps": ["localhost:2222", "localhost:2223"],
"worker": ["localhost:2224", "localhost:2225", "localhost:2226"]
})
# SimpleClusterResolver is used here for illustration; other cluster
# resolvers may be used for other source of task type/id.
simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker",
task_id=0)
...
if simple_resolver.task_type == 'worker' and simple_resolver.task_id == 0:
  # Perform something that's only applicable on 'worker' type, id 0. This
  # block will run on this particular instance since we've specified this
  # task to be a 'worker', id 0 in the above cluster resolver.
  pass
else:
  # Perform something that's only applicable on other ids. This block will
  # not run on this particular instance.
  pass
Returns None if such information is not available or is not applicable in the current distributed environment, such as training with tf.distribute.cluster_resolver.TPUClusterResolver. For more information, please see tf.distribute.cluster_resolver.ClusterResolver's class docstring.
task_type Returns the task type this ClusterResolver indicates. In TensorFlow distributed environment, each job may have an applicable task type. Valid task types in TensorFlow include 'chief': a worker that is designated with more responsibility, 'worker': a regular worker for training/evaluation, 'ps': a parameter server, or 'evaluator': an evaluator that evaluates the checkpoints for metrics. See Multi-worker configuration for more information about 'chief' and 'worker' task type, which are most commonly used. Having access to such information is useful when user needs to run specific code according to task types. For example, cluster_spec = tf.train.ClusterSpec({
"ps": ["localhost:2222", "localhost:2223"],
"worker": ["localhost:2224", "localhost:2225", "localhost:2226"]
})
# SimpleClusterResolver is used here for illustration; other cluster
# resolvers may be used for other source of task type/id.
simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker",
task_id=1)
...
if simple_resolver.task_type == 'worker':
  # Perform something that's only applicable on workers. This block
  # will run on this particular instance since we've specified this task to
  # be a worker in the above cluster resolver.
  pass
elif simple_resolver.task_type == 'ps':
  # Perform something that's only applicable on parameter servers. This
  # block will not run on this particular instance.
  pass
Returns None if such information is not available or is not applicable in the current distributed environment, such as training with tf.distribute.experimental.TPUStrategy. For more information, please see tf.distribute.cluster_resolver.ClusterResolver's class doc.
Methods cluster_spec View source
cluster_spec()
Returns a ClusterSpec object based on the latest info from Kubernetes. We retrieve the information from the Kubernetes master every time this method is called.
Returns A ClusterSpec containing host information returned from Kubernetes.
Raises
RuntimeError If any of the pods returned by the master is not in the Running phase. master View source
master(
task_type=None, task_id=None, rpc_layer=None
)
Returns the master address to use when creating a session. You must have set the task_type and task_id object properties before calling this function, or pass in the task_type and task_id parameters when using this function. If you do both, the function parameters will override the object properties.
Note: this is only useful for TensorFlow 1.x.
Args
task_type (Optional) The type of the TensorFlow task of the master.
task_id (Optional) The index of the TensorFlow task of the master.
rpc_layer (Optional) The RPC protocol for the given cluster.
Returns The name or URL of the session master.
num_accelerators View source
num_accelerators(
task_type=None, task_id=None, config_proto=None
)
Returns the number of accelerator cores per worker. This returns the number of accelerator cores (such as GPUs and TPUs) available per worker. Optionally, callers may specify the task_type and task_id if they want to target a specific TensorFlow task to query the number of accelerators. This is to support heterogeneous environments, where the number of accelerator cores per host is different.
Args
task_type (Optional) The type of the TensorFlow task of the machine we want to query.
task_id (Optional) The index of the TensorFlow task of the machine we want to query.
config_proto (Optional) Configuration for starting a new session to query how many accelerator cores it has.
Returns A map of accelerator types to number of cores. | tensorflow.distribute.cluster_resolver.kubernetesclusterresolver |
tf.distribute.cluster_resolver.SimpleClusterResolver View source on GitHub Simple implementation of ClusterResolver that accepts all attributes. Inherits From: ClusterResolver View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.distribute.cluster_resolver.SimpleClusterResolver
tf.distribute.cluster_resolver.SimpleClusterResolver(
cluster_spec, master='', task_type=None, task_id=None,
environment='', num_accelerators=None, rpc_layer=None
)
Please see the base class for documentation of arguments of its constructor. It is useful if you want to specify some or all attributes. Usage example with tf.distribute.Strategy: cluster = tf.train.ClusterSpec({"worker": ["worker0.example.com:2222",
"worker1.example.com:2222"]})
# On worker 0
cluster_resolver = SimpleClusterResolver(cluster, task_type="worker",
task_id=0,
num_accelerators={"GPU": 8},
rpc_layer="grpc")
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
cluster_resolver=cluster_resolver)
# On worker 1
cluster_resolver = SimpleClusterResolver(cluster, task_type="worker",
task_id=1,
num_accelerators={"GPU": 8},
rpc_layer="grpc")
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
cluster_resolver=cluster_resolver)
Attributes
environment Returns the current environment which TensorFlow is running in. There are two possible return values, "google" (when TensorFlow is running in a Google-internal environment) or an empty string (when TensorFlow is running elsewhere). If you are implementing a ClusterResolver that works in both the Google environment and the open-source world (for instance, a TPU ClusterResolver or similar), you will have to return the appropriate string depending on the environment, which you will have to detect. Otherwise, if you are implementing a ClusterResolver that will only work in open-source TensorFlow, you do not need to implement this property.
rpc_layer
task_id Returns the task id this ClusterResolver indicates. In TensorFlow distributed environment, each job may have an applicable task id, which is the index of the instance within its task type. This is useful when user needs to run specific code according to task index. For example, cluster_spec = tf.train.ClusterSpec({
"ps": ["localhost:2222", "localhost:2223"],
"worker": ["localhost:2224", "localhost:2225", "localhost:2226"]
})
# SimpleClusterResolver is used here for illustration; other cluster
# resolvers may be used for other source of task type/id.
simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker",
task_id=0)
...
if simple_resolver.task_type == 'worker' and simple_resolver.task_id == 0:
  # Perform something that's only applicable on 'worker' type, id 0. This
  # block will run on this particular instance since we've specified this
  # task to be a 'worker', id 0 in the above cluster resolver.
  pass
else:
  # Perform something that's only applicable on other ids. This block will
  # not run on this particular instance.
  pass
Returns None if such information is not available or is not applicable in the current distributed environment, such as training with tf.distribute.cluster_resolver.TPUClusterResolver. For more information, please see tf.distribute.cluster_resolver.ClusterResolver's class docstring.
task_type Returns the task type this ClusterResolver indicates. In TensorFlow distributed environment, each job may have an applicable task type. Valid task types in TensorFlow include 'chief': a worker that is designated with more responsibility, 'worker': a regular worker for training/evaluation, 'ps': a parameter server, or 'evaluator': an evaluator that evaluates the checkpoints for metrics. See Multi-worker configuration for more information about 'chief' and 'worker' task type, which are most commonly used. Having access to such information is useful when user needs to run specific code according to task types. For example, cluster_spec = tf.train.ClusterSpec({
"ps": ["localhost:2222", "localhost:2223"],
"worker": ["localhost:2224", "localhost:2225", "localhost:2226"]
})
# SimpleClusterResolver is used here for illustration; other cluster
# resolvers may be used for other source of task type/id.
simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker",
task_id=1)
...
if simple_resolver.task_type == 'worker':
  # Perform something that's only applicable on workers. This block
  # will run on this particular instance since we've specified this task to
  # be a worker in the above cluster resolver.
  pass
elif simple_resolver.task_type == 'ps':
  # Perform something that's only applicable on parameter servers. This
  # block will not run on this particular instance.
  pass
Returns None if such information is not available or is not applicable in the current distributed environment, such as training with tf.distribute.experimental.TPUStrategy. For more information, please see tf.distribute.cluster_resolver.ClusterResolver's class doc.
Methods cluster_spec View source
cluster_spec()
Returns the ClusterSpec passed into the constructor. master View source
master(
task_type=None, task_id=None, rpc_layer=None
)
Returns the master address to use when creating a session.
Note: this is only useful for TensorFlow 1.x.
Args
task_type (Optional) The type of the TensorFlow task of the master.
task_id (Optional) The index of the TensorFlow task of the master.
rpc_layer (Optional) The RPC used by distributed TensorFlow.
Returns The name or URL of the session master.
If a task_type and task_id are given, this will override the master string passed into the initialization function. num_accelerators View source
num_accelerators(
task_type=None, task_id=None, config_proto=None
)
Returns the number of accelerator cores per worker. The SimpleClusterResolver does not do automatic detection of accelerators, and thus all arguments are unused and we simply return the value provided in the constructor.
Args
task_type Unused.
task_id Unused.
config_proto Unused. | tensorflow.distribute.cluster_resolver.simpleclusterresolver |
tf.distribute.cluster_resolver.SlurmClusterResolver View source on GitHub ClusterResolver for system with Slurm workload manager. Inherits From: ClusterResolver View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.distribute.cluster_resolver.SlurmClusterResolver
tf.distribute.cluster_resolver.SlurmClusterResolver(
jobs=None, port_base=8888, gpus_per_node=None, gpus_per_task=None,
tasks_per_node=None, auto_set_gpu=True, rpc_layer='grpc'
)
This is an implementation of ClusterResolver for Slurm clusters. This allows the specification of jobs and task counts, number of tasks per node, number of GPUs on each node and number of GPUs for each task. It retrieves system attributes from Slurm environment variables, resolves allocated computing node names, constructs a cluster and returns a ClusterResolver object which can be used for distributed TensorFlow.
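A hedged usage sketch (assuming the script is launched as a Slurm job, e.g. via srun, so the SLURM_* environment variables are set):
cluster_resolver = tf.distribute.cluster_resolver.SlurmClusterResolver()
cluster_resolver.cluster_spec()  # Resolves the cluster from the Slurm environment.
job_name, task_id = cluster_resolver.get_task_info()
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
    cluster_resolver=cluster_resolver)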
Args
jobs Dictionary with job names as key and number of tasks in the job as value. Defaults to as many 'worker's as there are (Slurm) tasks.
port_base The first port number to start with for processes on a node.
gpus_per_node Number of GPUs available on each node. Defaults to the number of GPUs reported by nvidia-smi
gpus_per_task Number of GPUs to be used for each task. Default is to evenly distribute the gpus_per_node to tasks_per_node.
tasks_per_node Number of tasks running on each node. Can be an integer if the number of tasks per node is constant or a dictionary mapping hostnames to number of tasks on that node. If not set the Slurm environment is queried for the correct mapping.
auto_set_gpu Set the visible CUDA devices automatically while resolving the cluster by setting CUDA_VISIBLE_DEVICES environment variable. Defaults to True.
rpc_layer The protocol TensorFlow used to communicate between nodes. Defaults to 'grpc'.
Raises
RuntimeError If more GPUs per node are requested than available, or more tasks are requested than assigned tasks, or resolving missing values from the environment failed.
Attributes
environment Returns the current environment which TensorFlow is running in. There are two possible return values, "google" (when TensorFlow is running in a Google-internal environment) or an empty string (when TensorFlow is running elsewhere). If you are implementing a ClusterResolver that works in both the Google environment and the open-source world (for instance, a TPU ClusterResolver or similar), you will have to return the appropriate string depending on the environment, which you will have to detect. Otherwise, if you are implementing a ClusterResolver that will only work in open-source TensorFlow, you do not need to implement this property.
task_id Returns the task id this ClusterResolver indicates. In TensorFlow distributed environment, each job may have an applicable task id, which is the index of the instance within its task type. This is useful when user needs to run specific code according to task index. For example, cluster_spec = tf.train.ClusterSpec({
"ps": ["localhost:2222", "localhost:2223"],
"worker": ["localhost:2224", "localhost:2225", "localhost:2226"]
})
# SimpleClusterResolver is used here for illustration; other cluster
# resolvers may be used for other source of task type/id.
simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker",
task_id=0)
...
if simple_resolver.task_type == 'worker' and simple_resolver.task_id == 0:
  # Perform something that's only applicable on 'worker' type, id 0. This
  # block will run on this particular instance since we've specified this
  # task to be a 'worker', id 0 in the above cluster resolver.
  pass
else:
  # Perform something that's only applicable on other ids. This block will
  # not run on this particular instance.
  pass
Returns None if such information is not available or is not applicable in the current distributed environment, such as training with tf.distribute.cluster_resolver.TPUClusterResolver. For more information, please see tf.distribute.cluster_resolver.ClusterResolver's class docstring.
task_type Returns the task type this ClusterResolver indicates. In TensorFlow distributed environment, each job may have an applicable task type. Valid task types in TensorFlow include 'chief': a worker that is designated with more responsibility, 'worker': a regular worker for training/evaluation, 'ps': a parameter server, or 'evaluator': an evaluator that evaluates the checkpoints for metrics. See Multi-worker configuration for more information about 'chief' and 'worker' task type, which are most commonly used. Having access to such information is useful when user needs to run specific code according to task types. For example, cluster_spec = tf.train.ClusterSpec({
"ps": ["localhost:2222", "localhost:2223"],
"worker": ["localhost:2224", "localhost:2225", "localhost:2226"]
})
# SimpleClusterResolver is used here for illustration; other cluster
# resolvers may be used for other source of task type/id.
simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker",
task_id=1)
...
if simple_resolver.task_type == 'worker':
  # Perform something that's only applicable on workers. This block
  # will run on this particular instance since we've specified this task to
  # be a worker in the above cluster resolver.
  pass
elif simple_resolver.task_type == 'ps':
  # Perform something that's only applicable on parameter servers. This
  # block will not run on this particular instance.
  pass
Returns None if such information is not available or is not applicable in the current distributed environment, such as training with tf.distribute.experimental.TPUStrategy. For more information, please see tf.distribute.cluster_resolver.ClusterResolver's class doc.
Methods cluster_spec View source
cluster_spec()
Returns a ClusterSpec object based on the latest Slurm environment info. This returns a ClusterSpec object for use based on information from the specified initialization parameters and Slurm environment variables. The cluster specification is resolved each time this function is called. The resolver extracts the hostnames of the nodes using scontrol and packs tasks in that order until a node has a number of tasks equal to the specification. GPUs on nodes are allocated to tasks by specification through setting the CUDA_VISIBLE_DEVICES environment variable.
Returns A ClusterSpec containing host information retrieved from Slurm's environment variables.
get_task_info View source
get_task_info()
Returns the job name and task_id for the process which calls this. This returns the job name and task index for the process which calls this function, according to its rank and cluster specification. The job name and task index are set after a cluster is constructed by cluster_spec; otherwise they default to None.
Returns A string specifying the job name the process belongs to and an integer specifying the task index of the process within that job.
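A small sketch (continuing the SlurmClusterResolver usage shown earlier on this page):
cluster_resolver.cluster_spec()  # Resolve the cluster first; until then the values default to None.
job_name, task_id = cluster_resolver.get_task_info()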
master View source
master(
task_type=None, task_id=None, rpc_layer=None
)
Returns the master string for connecting to a TensorFlow master.
Args
task_type (Optional) Overrides the default auto-selected task type.
task_id (Optional) Overrides the default auto-selected task index.
rpc_layer (Optional) Overrides the default RPC protocol TensorFlow uses to communicate across nodes.
Returns A connection string for connecting to a TensorFlow master.
num_accelerators View source
num_accelerators(
task_type=None, task_id=None, config_proto=None
)
Returns the number of accelerator cores per worker. This returns the number of accelerator cores (such as GPUs and TPUs) available per worker. Optionally, callers may specify the task_type and task_id if they want to target a specific TensorFlow task to query the number of accelerators. This is to support heterogeneous environments, where the number of accelerator cores per host is different.
Args
task_type (Optional) The type of the TensorFlow task of the machine we want to query.
task_id (Optional) The index of the TensorFlow task of the machine we want to query.
config_proto (Optional) Configuration for starting a new session to query how many accelerator cores it has.
Returns A map of accelerator types to number of cores. | tensorflow.distribute.cluster_resolver.slurmclusterresolver |
tf.distribute.cluster_resolver.TFConfigClusterResolver View source on GitHub Implementation of a ClusterResolver which reads the TF_CONFIG EnvVar. Inherits From: ClusterResolver View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.distribute.cluster_resolver.TFConfigClusterResolver
tf.distribute.cluster_resolver.TFConfigClusterResolver(
task_type=None, task_id=None, rpc_layer=None, environment=None
)
This is an implementation of cluster resolvers when using TF_CONFIG to set information about the cluster. The cluster spec returned will be initialized from the TF_CONFIG environment variable. An example to set TF_CONFIG is: import json, os
os.environ['TF_CONFIG'] = json.dumps({
'cluster': {
'worker': ["localhost:12345", "localhost:23456"]
},
'task': {'type': 'worker', 'index': 0}
})
However, sometimes the container orchestration framework will set TF_CONFIG for you. In this case, you can just create an instance without passing in any arguments. You can find an example here to let Kubernetes set TF_CONFIG for you: https://github.com/tensorflow/ecosystem/tree/master/kubernetes. Then you can use it with tf.distribute.Strategy as: # `TFConfigClusterResolver` is already the default one in the following
# strategy.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
cluster_resolver=TFConfigClusterResolver())
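With TF_CONFIG set as in the sketch above, the resolver's properties reflect it; a small illustration:
cluster_resolver = TFConfigClusterResolver()
cluster_resolver.task_type       # 'worker'
cluster_resolver.task_id         # 0
cluster_resolver.cluster_spec()  # A ClusterSpec with the 'worker' job above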
Args
task_type (String, optional) Overrides the task type specified in the TF_CONFIG environment variable.
task_id (Integer, optional) Overrides the task index specified in the TF_CONFIG environment variable.
rpc_layer (String, optional) Overrides the rpc layer TensorFlow uses.
environment (String, optional) Overrides the environment TensorFlow operates in.
Attributes
environment Returns the current environment which TensorFlow is running in. There are two possible return values, "google" (when TensorFlow is running in a Google-internal environment) or an empty string (when TensorFlow is running elsewhere). If you are implementing a ClusterResolver that works in both the Google environment and the open-source world (for instance, a TPU ClusterResolver or similar), you will have to return the appropriate string depending on the environment, which you will have to detect. Otherwise, if you are implementing a ClusterResolver that will only work in open-source TensorFlow, you do not need to implement this property.
rpc_layer
task_id Returns the task id this ClusterResolver indicates. In TensorFlow distributed environment, each job may have an applicable task id, which is the index of the instance within its task type. This is useful when user needs to run specific code according to task index. For example, cluster_spec = tf.train.ClusterSpec({
"ps": ["localhost:2222", "localhost:2223"],
"worker": ["localhost:2224", "localhost:2225", "localhost:2226"]
})
# SimpleClusterResolver is used here for illustration; other cluster
# resolvers may be used for other source of task type/id.
simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker",
task_id=0)
...
if simple_resolver.task_type == 'worker' and simple_resolver.task_id == 0:
  # Perform something that's only applicable on 'worker' type, id 0. This
  # block will run on this particular instance since we've specified this
  # task to be a 'worker', id 0 in the above cluster resolver.
  pass
else:
  # Perform something that's only applicable on other ids. This block will
  # not run on this particular instance.
  pass
Returns None if such information is not available or is not applicable in the current distributed environment, such as training with tf.distribute.cluster_resolver.TPUClusterResolver. For more information, please see tf.distribute.cluster_resolver.ClusterResolver's class docstring.
task_type Returns the task type this ClusterResolver indicates. In TensorFlow distributed environment, each job may have an applicable task type. Valid task types in TensorFlow include 'chief': a worker that is designated with more responsibility, 'worker': a regular worker for training/evaluation, 'ps': a parameter server, or 'evaluator': an evaluator that evaluates the checkpoints for metrics. See Multi-worker configuration for more information about 'chief' and 'worker' task type, which are most commonly used. Having access to such information is useful when user needs to run specific code according to task types. For example, cluster_spec = tf.train.ClusterSpec({
"ps": ["localhost:2222", "localhost:2223"],
"worker": ["localhost:2224", "localhost:2225", "localhost:2226"]
})
# SimpleClusterResolver is used here for illustration; other cluster
# resolvers may be used for other source of task type/id.
simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker",
task_id=1)
...
if simple_resolver.task_type == 'worker':
  # Perform something that's only applicable on workers. This block
  # will run on this particular instance since we've specified this task to
  # be a worker in the above cluster resolver.
  pass
elif simple_resolver.task_type == 'ps':
  # Perform something that's only applicable on parameter servers. This
  # block will not run on this particular instance.
  pass
Returns None if such information is not available or is not applicable in the current distributed environment, such as training with tf.distribute.experimental.TPUStrategy. For more information, please see tf.distribute.cluster_resolver.ClusterResolver's class doc.
Methods cluster_spec View source
cluster_spec()
Returns a ClusterSpec based on the TF_CONFIG environment variable.
Returns A ClusterSpec with information from the TF_CONFIG environment variable.
master View source
master(
task_type=None, task_id=None, rpc_layer=None
)
Returns the master address to use when creating a TensorFlow session.
Note: this is only useful for TensorFlow 1.x.
Args
task_type (String, optional) Overrides and sets the task_type of the master.
task_id (Integer, optional) Overrides and sets the task id of the master.
rpc_layer (String, optional) Overrides and sets the protocol over which TensorFlow nodes communicate with each other.
Returns The address of the master.
Raises
RuntimeError If the task_type or task_id is not specified and the TF_CONFIG environment variable does not contain a task section. num_accelerators View source
num_accelerators(
task_type=None, task_id=None, config_proto=None
)
Returns the number of accelerator cores per worker. This returns the number of accelerator cores (such as GPUs and TPUs) available per worker. Optionally, callers may specify the task_type and task_id if they want to target a specific TensorFlow task to query the number of accelerators. This is to support heterogeneous environments, where the number of accelerator cores per host is different.
Args
task_type (Optional) The type of the TensorFlow task of the machine we want to query.
task_id (Optional) The index of the TensorFlow task of the machine we want to query.
config_proto (Optional) Configuration for starting a new session to query how many accelerator cores it has.
Returns A map of accelerator types to number of cores. | tensorflow.distribute.cluster_resolver.tfconfigclusterresolver |
tf.distribute.cluster_resolver.TPUClusterResolver View source on GitHub Cluster Resolver for Google Cloud TPUs. Inherits From: ClusterResolver View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.distribute.cluster_resolver.TPUClusterResolver
tf.distribute.cluster_resolver.TPUClusterResolver(
tpu=None, zone=None, project=None, job_name='worker',
coordinator_name=None, coordinator_address=None,
credentials='default', service=None, discovery_url=None
)
This is an implementation of cluster resolvers for the Google Cloud TPU service. TPUClusterResolver supports the following distinct environments: Google Compute Engine Google Kubernetes Engine Google internal It can be passed into tf.distribute.TPUStrategy to support TF2 training on Cloud TPUs.
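A hedged end-to-end sketch of the usual TF2 setup (the TPU name is a placeholder; on a Cloud TPU VM, tpu='local' may be used instead):
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='my-tpu-name')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)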
Args
tpu A string corresponding to the TPU to use. It can be the TPU name or TPU worker gRPC address. If not set, it will try to automatically resolve the TPU address on Cloud TPUs. If set to "local", it will assume that the TPU is directly connected to the VM instead of over the network.
zone Zone where the TPUs are located. If omitted or empty, we will assume that the zone of the TPU is the same as the zone of the GCE VM, which we will try to discover from the GCE metadata service.
project Name of the GCP project containing Cloud TPUs. If omitted or empty, we will try to discover the project name of the GCE VM from the GCE metadata service.
job_name Name of the TensorFlow job the TPUs belong to.
coordinator_name The name to use for the coordinator. Set to None if the coordinator should not be included in the computed ClusterSpec.
coordinator_address The address of the coordinator (typically an ip:port pair). If set to None, a TF server will be started. If coordinator_name is None, a TF server will not be started even if coordinator_address is None.
credentials GCE Credentials. If None, then we use default credentials from the oauth2client
service The GCE API object returned by the googleapiclient.discovery function. If you specify a custom service object, then the credentials parameter will be ignored.
discovery_url A URL template that points to the location of the discovery service. It should have two parameters {api} and {apiVersion} that when filled in produce an absolute URL to the discovery document for that service. The environment variable 'TPU_API_DISCOVERY_URL' will override this.
Raises
ImportError If the googleapiclient is not installed.
ValueError If no TPUs are specified.
RuntimeError If an empty TPU name is specified and this is running in a Google Cloud environment.
Attributes
environment Returns the current environment which TensorFlow is running in.
task_id Returns the task id this ClusterResolver indicates. In TensorFlow distributed environment, each job may have an applicable task id, which is the index of the instance within its task type. This is useful when user needs to run specific code according to task index. For example, cluster_spec = tf.train.ClusterSpec({
"ps": ["localhost:2222", "localhost:2223"],
"worker": ["localhost:2224", "localhost:2225", "localhost:2226"]
})
# SimpleClusterResolver is used here for illustration; other cluster
# resolvers may be used for other source of task type/id.
simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker",
task_id=0)
...
if simple_resolver.task_type == 'worker' and simple_resolver.task_id == 0:
  # Perform something that's only applicable on 'worker' type, id 0. This
  # block will run on this particular instance since we've specified this
  # task to be a 'worker', id 0 in the above cluster resolver.
  pass
else:
  # Perform something that's only applicable on other ids. This block will
  # not run on this particular instance.
  pass
Returns None if such information is not available or is not applicable in the current distributed environment, such as training with tf.distribute.cluster_resolver.TPUClusterResolver. For more information, please see tf.distribute.cluster_resolver.ClusterResolver's class docstring.
task_type Returns the task type this ClusterResolver indicates. In TensorFlow distributed environment, each job may have an applicable task type. Valid task types in TensorFlow include 'chief': a worker that is designated with more responsibility, 'worker': a regular worker for training/evaluation, 'ps': a parameter server, or 'evaluator': an evaluator that evaluates the checkpoints for metrics. See Multi-worker configuration for more information about 'chief' and 'worker' task type, which are most commonly used. Having access to such information is useful when user needs to run specific code according to task types. For example, cluster_spec = tf.train.ClusterSpec({
"ps": ["localhost:2222", "localhost:2223"],
"worker": ["localhost:2224", "localhost:2225", "localhost:2226"]
})
# SimpleClusterResolver is used here for illustration; other cluster
# resolvers may be used for other source of task type/id.
simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker",
task_id=1)
...
if simple_resolver.task_type == 'worker':
  # Perform something that's only applicable on workers. This block
  # will run on this particular instance since we've specified this task to
  # be a worker in the above cluster resolver.
  pass
elif simple_resolver.task_type == 'ps':
  # Perform something that's only applicable on parameter servers. This
  # block will not run on this particular instance.
  pass
Returns None if such information is not available or is not applicable in the current distributed environment, such as training with tf.distribute.experimental.TPUStrategy. For more information, please see tf.distribute.cluster_resolver.ClusterResolver's class doc.
Methods cluster_spec View source
cluster_spec()
Returns a ClusterSpec object based on the latest TPU information. We retrieve the information from the GCE APIs every time this method is called.
Returns A ClusterSpec containing host information returned from Cloud TPUs, or None.
Raises
RuntimeError If the provided TPU is not healthy. connect View source
@staticmethod
connect(
tpu=None, zone=None, project=None
)
Initializes TPU and returns a TPUClusterResolver. This API will connect to the remote TPU cluster and initialize the TPU hardware. Example usage:
resolver = tf.distribute.cluster_resolver.TPUClusterResolver.connect(
tpu='')
It can be viewed as a convenient wrapper of the following code:
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
Args
tpu A string corresponding to the TPU to use. It can be the TPU name or TPU worker gRPC address. If not set, it will try to automatically resolve the TPU address on Cloud TPUs.
zone Zone where the TPUs are located. If omitted or empty, we will assume that the zone of the TPU is the same as the zone of the GCE VM, which we will try to discover from the GCE metadata service.
project Name of the GCP project containing Cloud TPUs. If omitted or empty, we will try to discover the project name of the GCE VM from the GCE metadata service.
Returns An instance of TPUClusterResolver object.
Raises
NotFoundError If no TPU devices found in eager mode. get_job_name View source
get_job_name()
get_master View source
get_master()
get_tpu_system_metadata View source
get_tpu_system_metadata()
Returns the metadata of the TPU system. Users can call this method to get some facts about the TPU system, like the total number of cores, the number of TPU workers and the devices. E.g.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tpu_system_metadata = resolver.get_tpu_system_metadata()
num_hosts = tpu_system_metadata.num_hosts
Returns A tf.tpu.experimental.TPUSystemMetadata object.
master View source
master(
task_type=None, task_id=None, rpc_layer=None
)
Get the master string to be used for the session. In the normal case, this returns the gRPC path (grpc://1.2.3.4:8470) of the first instance in the ClusterSpec returned by the cluster_spec function. If a non-TPU name is used when constructing a TPUClusterResolver, that will be returned instead (e.g. if the tpu argument's value when constructing this TPUClusterResolver was 'grpc://10.240.1.2:8470', 'grpc://10.240.1.2:8470' will be returned).
Args
task_type (Optional, string) The type of the TensorFlow task of the master.
task_id (Optional, integer) The index of the TensorFlow task of the master.
rpc_layer (Optional, string) The RPC protocol TensorFlow should use to communicate with TPUs.
Returns string, the connection string to use when creating a session.
Raises
ValueError If none of the TPUs specified exists. num_accelerators View source
num_accelerators(
task_type=None, task_id=None, config_proto=None
)
Returns the number of TPU cores per worker. Connects to the master, lists all the devices present on the master, and counts them up. Also verifies that the device counts per host in the cluster are the same before returning the number of TPU cores per host.
Args
task_type Unused.
task_id Unused.
config_proto Used to create a connection to a TPU master in order to retrieve the system metadata.
Raises
RuntimeError If we cannot talk to a TPU worker after retrying or if the number of TPU devices per host is different. __enter__ View source
__enter__()
__exit__ View source
__exit__(
type, value, traceback
) | tensorflow.distribute.cluster_resolver.tpuclusterresolver |
tf.distribute.cluster_resolver.UnionResolver View source on GitHub Performs a union on underlying ClusterResolvers. Inherits From: ClusterResolver View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.distribute.cluster_resolver.UnionResolver
tf.distribute.cluster_resolver.UnionResolver(
*args, **kwargs
)
This class performs a union given two or more existing ClusterResolvers. It merges the underlying ClusterResolvers and returns one unified ClusterSpec when cluster_spec is called. The details of the merge function are documented in the cluster_spec function. For additional ClusterResolver properties such as task type, task index, rpc layer, environment, etc., we will return the value from the first ClusterResolver in the union. An example of combining two cluster resolvers: cluster_0 = tf.train.ClusterSpec({"worker": ["worker0.example.com:2222",
                                             "worker1.example.com:2222"]})
cluster_resolver_0 = SimpleClusterResolver(cluster_0, task_type="worker",
                                           task_id=0,
                                           rpc_layer="grpc")
cluster_1 = tf.train.ClusterSpec({"ps": ["ps0.example.com:2222",
                                         "ps1.example.com:2222"]})
cluster_resolver_1 = SimpleClusterResolver(cluster_1, task_type="ps",
                                           task_id=0,
                                           rpc_layer="grpc")
# Its task type would be "worker".
cluster_resolver = UnionResolver(cluster_resolver_0,
                                 cluster_resolver_1)
An example to override the number of GPUs in a TFConfigClusterResolver instance: tf_config = TFConfigClusterResolver()
gpu_override = SimpleClusterResolver(tf_config.cluster_spec(),
num_accelerators={"GPU": 1})
cluster_resolver = UnionResolver(gpu_override, tf_config)
Args
*args ClusterResolver objects to be unionized.
**kwargs rpc_layer - (Optional) Override value for the RPC layer used by TensorFlow. task_type - (Optional) Override value for the current task type. task_id - (Optional) Override value for the current task index.
Raises
TypeError If any argument is not an instance of ClusterResolver.
ValueError If there are no arguments passed.
Attributes
environment Returns the current environment which TensorFlow is running in. There are two possible return values, "google" (when TensorFlow is running in a Google-internal environment) or an empty string (when TensorFlow is running elsewhere). If you are implementing a ClusterResolver that works in both the Google environment and the open-source world (for instance, a TPU ClusterResolver or similar), you will have to return the appropriate string depending on the environment, which you will have to detect. Otherwise, if you are implementing a ClusterResolver that will only work in open-source TensorFlow, you do not need to implement this property.
rpc_layer
task_id Returns the task id this ClusterResolver indicates. In a TensorFlow distributed environment, each job may have an applicable task id, which is the index of the instance within its task type. This is useful when the user needs to run specific code according to task index. For example, cluster_spec = tf.train.ClusterSpec({
"ps": ["localhost:2222", "localhost:2223"],
"worker": ["localhost:2224", "localhost:2225", "localhost:2226"]
})
# SimpleClusterResolver is used here for illustration; other cluster
# resolvers may be used for other source of task type/id.
simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker",
task_id=0)
...
if cluster_resolver.task_type == 'worker' and cluster_resolver.task_id == 0:
# Perform something that's only applicable on 'worker' type, id 0. This
# block will run on this particular instance since we've specified this
# task to be a 'worker', id 0 in above cluster resolver.
else:
# Perform something that's only applicable on other ids. This block will
# not run on this particular instance.
Returns None if such information is not available or is not applicable in the current distributed environment, such as training with tf.distribute.cluster_resolver.TPUClusterResolver. For more information, please see tf.distribute.cluster_resolver.ClusterResolver's class docstring.
task_type Returns the task type this ClusterResolver indicates. In a TensorFlow distributed environment, each job may have an applicable task type. Valid task types in TensorFlow include 'chief': a worker that is designated with more responsibility, 'worker': a regular worker for training/evaluation, 'ps': a parameter server, or 'evaluator': an evaluator that evaluates the checkpoints for metrics. See Multi-worker configuration for more information about the 'chief' and 'worker' task types, which are most commonly used. Having access to such information is useful when the user needs to run specific code according to task type. For example, cluster_spec = tf.train.ClusterSpec({
"ps": ["localhost:2222", "localhost:2223"],
"worker": ["localhost:2224", "localhost:2225", "localhost:2226"]
})
# SimpleClusterResolver is used here for illustration; other cluster
# resolvers may be used for other source of task type/id.
simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker",
task_id=1)
...
if cluster_resolver.task_type == 'worker':
# Perform something that's only applicable on workers. This block
# will run on this particular instance since we've specified this task to
# be a worker in above cluster resolver.
elif cluster_resolver.task_type == 'ps':
# Perform something that's only applicable on parameter servers. This
# block will not run on this particular instance.
Returns None if such information is not available or is not applicable in the current distributed environment, such as training with tf.distribute.experimental.TPUStrategy. For more information, please see tf.distribute.cluster_resolver.ClusterResolver's class doc.
Methods cluster_spec View source
cluster_spec()
Returns a union of all the ClusterSpecs from the ClusterResolvers.
Returns A ClusterSpec containing host information merged from all the underlying ClusterResolvers.
Raises
KeyError If there are conflicting keys detected when merging two or more dictionaries, this exception is raised.
Note: If there are multiple ClusterResolvers exposing ClusterSpecs with the same job name, we will merge the list/dict of workers.
If all underlying ClusterSpecs expose the set of workers as lists, we will concatenate the lists of workers, starting with the list of workers from the first ClusterResolver passed into the constructor. If any of the ClusterSpecs expose the set of workers as a dict, we will treat all the sets of workers as dicts (even if they are returned as lists) and will only merge them into a dict if there are no conflicting keys. If there is a conflicting key, we will raise a KeyError. master View source
master(
task_type=None, task_id=None, rpc_layer=None
)
Returns the master address to use when creating a session. This usually returns the master from the first ClusterResolver passed in, but you can override this by specifying the task_type and task_id.
Note: this is only useful for TensorFlow 1.x.
Args
task_type (Optional) The type of the TensorFlow task of the master.
task_id (Optional) The index of the TensorFlow task of the master.
rpc_layer (Optional) The RPC protocol for the given cluster.
Returns The name or URL of the session master.
num_accelerators View source
num_accelerators(
task_type=None, task_id=None, config_proto=None
)
Returns the number of accelerator cores per worker. This returns the number of accelerator cores (such as GPUs and TPUs) available per worker. Optionally, callers can specify the task_type and task_id if they want to target a specific TensorFlow task when querying the number of accelerators. This is to support heterogeneous environments, where the number of accelerator cores per host differs.
Args
task_type (Optional) The type of the TensorFlow task of the machine we want to query.
task_id (Optional) The index of the TensorFlow task of the machine we want to query.
config_proto (Optional) Configuration for starting a new session to query how many accelerator cores it has.
Returns A map of accelerator types to number of cores. | tensorflow.distribute.cluster_resolver.unionresolver |
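To make the merge behavior described under cluster_spec concrete, here is a minimal sketch, assuming two SimpleClusterResolvers with disjoint job names; the host names and the printed output are illustrative only.
import tensorflow as tf

spec_a = tf.train.ClusterSpec({"worker": ["worker0:2222", "worker1:2222"]})
spec_b = tf.train.ClusterSpec({"ps": ["ps0:2222"]})

resolver_a = tf.distribute.cluster_resolver.SimpleClusterResolver(
    spec_a, task_type="worker", task_id=0, rpc_layer="grpc")
resolver_b = tf.distribute.cluster_resolver.SimpleClusterResolver(
    spec_b, task_type="ps", task_id=0, rpc_layer="grpc")

union_resolver = tf.distribute.cluster_resolver.UnionResolver(
    resolver_a, resolver_b)

# The merged ClusterSpec contains both jobs; task type/id come from the
# first resolver in the union ("worker", 0).
print(union_resolver.cluster_spec().as_dict())
# {'worker': ['worker0:2222', 'worker1:2222'], 'ps': ['ps0:2222']}
print(union_resolver.task_type, union_resolver.task_id)  # worker 0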
tf.distribute.CrossDeviceOps View source on GitHub Base class for cross-device reduction and broadcasting algorithms. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.distribute.CrossDeviceOps
tf.distribute.CrossDeviceOps()
The main purpose of this class is to be passed to tf.distribute.MirroredStrategy in order to choose among different cross device communication implementations. Prefer using the methods of tf.distribute.Strategy instead of the ones of this class. Implementations: tf.distribute.ReductionToOneDevice tf.distribute.NcclAllReduce tf.distribute.HierarchicalCopyAllReduce Methods batch_reduce View source
batch_reduce(
reduce_op, value_destination_pairs, options=None
)
Reduce values to destinations in batches. See tf.distribute.StrategyExtended.batch_reduce_to. This can only be called in the cross-replica context.
Args
reduce_op a tf.distribute.ReduceOp specifying how values should be combined.
value_destination_pairs a sequence of (value, destinations) pairs. See tf.distribute.CrossDeviceOps.reduce for descriptions.
options a tf.distribute.experimental.CommunicationOptions. See tf.distribute.experimental.CommunicationOptions for details.
Returns A list of tf.Tensor or tf.distribute.DistributedValues, one per pair in value_destination_pairs.
Raises
ValueError if value_destination_pairs is not an iterable of tuples of tf.distribute.DistributedValues and destinations. batch_reduce_implementation View source
batch_reduce_implementation(
reduce_op, value_destination_pairs, options
)
Implementation of batch_reduce. Overriding this method is useful for subclass implementers.
Args
reduce_op a tf.distribute.ReduceOp specifying how values should be combined.
value_destination_pairs a sequence of (value, destinations) pairs. See reduce for descriptions.
options a tf.distribute.experimental.CommunicationOptions. See tf.distribute.experimental.CommunicationOptions for details.
Returns A list of tf.Tensor or tf.distribute.DistributedValues, one per pair in value_destination_pairs.
Raises
ValueError if value_destination_pairs is not an iterable of tuples of tf.distribute.DistributedValues and destinations. broadcast View source
broadcast(
tensor, destinations
)
Broadcast tensor to destinations. This can only be called in the cross-replica context.
Args
tensor a tf.Tensor like object. The value to broadcast.
destinations a tf.distribute.DistributedValues, a tf.Variable, a tf.Tensor alike object, or a device string. It specifies the devices to broadcast to. Note that if it's a tf.Variable, the value is broadcast to the devices of that variable; this method doesn't update the variable.
Returns A tf.Tensor or tf.distribute.DistributedValues.
broadcast_implementation View source
broadcast_implementation(
tensor, destinations
)
Implementation of broadcast.
Args
tensor a tf.Tensor like object. The value to broadcast.
destinations a tf.distribute.DistributedValues, a tf.Variable, a tf.Tensor alike object, or a device string. It specifies the devices to broadcast to. Note that if it's a tf.Variable, the value is broadcast to the devices of that variable; this method doesn't update the variable.
Returns A tf.Tensor or tf.distribute.DistributedValues.
reduce View source
reduce(
reduce_op, per_replica_value, destinations, options=None
)
Reduce per_replica_value to destinations. See tf.distribute.StrategyExtended.reduce_to. This can only be called in the cross-replica context.
Args
reduce_op a tf.distribute.ReduceOp specifying how values should be combined.
per_replica_value a tf.distribute.DistributedValues, or a tf.Tensor like object.
destinations a tf.distribute.DistributedValues, a tf.Variable, a tf.Tensor alike object, or a device string. It specifies the devices to reduce to. To perform an all-reduce, pass the same to value and destinations. Note that if it's a tf.Variable, the value is reduced to the devices of that variable, and this method doesn't update the variable.
options a tf.distribute.experimental.CommunicationOptions. See tf.distribute.experimental.CommunicationOptions for details.
Returns A tf.Tensor or tf.distribute.DistributedValues.
Raises
ValueError if per_replica_value can't be converted to a tf.distribute.DistributedValues or if destinations is not a string, tf.Variable or tf.distribute.DistributedValues. reduce_implementation View source
reduce_implementation(
reduce_op, per_replica_value, destinations, options
)
Implementation of reduce. Overriding this method is useful for subclass implementers.
Args
reduce_op a tf.distribute.ReduceOp specifying how values should be combined.
per_replica_value a tf.distribute.DistributedValues, or a tf.Tensor like object.
destinations a tf.distribute.DistributedValues, a tf.Variable, a tf.Tensor alike object, or a device string. It specifies the devices to reduce to. To perform an all-reduce, pass the same to value and destinations. Note that if it's a tf.Variable, the value is reduced to the devices of that variable; this method doesn't update the variable.
options a tf.distribute.experimental.CommunicationOptions. See tf.distribute.experimental.CommunicationOptions for details.
Returns A tf.Tensor or tf.distribute.DistributedValues.
Raises
ValueError if per_replica_value can't be converted to a tf.distribute.DistributedValues or if destinations is not a string, tf.Variable or tf.distribute.DistributedValues. | tensorflow.distribute.crossdeviceops |
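As noted in the class overview, a CrossDeviceOps implementation is usually chosen by passing it to tf.distribute.MirroredStrategy rather than by calling its methods directly. A minimal sketch under that assumption (device names are illustrative):
import tensorflow as tf

# Choose a cross-device reduction algorithm for MirroredStrategy.
strategy = tf.distribute.MirroredStrategy(
    devices=["GPU:0", "GPU:1"],
    cross_device_ops=tf.distribute.ReductionToOneDevice())

with strategy.scope():
  v = tf.Variable(1.0)

# Reductions issued through the strategy (e.g. strategy.reduce) are carried
# out by the configured CrossDeviceOps instance.
per_replica = strategy.run(lambda: tf.identity(1.0))
total = strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica, axis=None)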
tf.distribute.DistributedDataset Represents a dataset distributed among devices and machines. A tf.distribute.DistributedDataset could be thought of as a "distributed" dataset. When you use tf.distribute API to scale training to multiple devices or machines, you also need to distribute the input data, which leads to a tf.distribute.DistributedDataset instance, instead of a tf.data.Dataset instance in the non-distributed case. In TF 2.x, tf.distribute.DistributedDataset objects are Python iterables.
Note: tf.distribute.DistributedDataset instances are not of type tf.data.Dataset. They only support the two usages mentioned below: iteration and element_spec. We don't support any other APIs to transform or inspect the dataset.
There are two APIs to create a tf.distribute.DistributedDataset object: tf.distribute.Strategy.experimental_distribute_dataset(dataset) and tf.distribute.Strategy.distribute_datasets_from_function(dataset_fn). When to use which? When you have a tf.data.Dataset instance, and the regular batch splitting (i.e. re-batching the input tf.data.Dataset instance with a new batch size that is equal to the global batch size divided by the number of replicas in sync) and autosharding (i.e. the tf.data.experimental.AutoShardPolicy options) work for you, use the former API. Otherwise, if you are not using a canonical tf.data.Dataset instance, or you would like to customize the batch splitting or sharding, you can wrap this logic in a dataset_fn and use the latter API. Both APIs handle prefetching to devices for the user. For more details and examples, follow the links to the APIs. There are two main usages of a DistributedDataset object:
Iterate over it to generate the input for a single device or multiple devices, which is a tf.distribute.DistributedValues instance. To do this, you can: use a pythonic for-loop construct:
global_batch_size = 4
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.from_tensors(([1.],[1.])).repeat(4).batch(global_batch_size)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
@tf.function
def train_step(input):
features, labels = input
return labels - 0.3 * features
for x in dist_dataset:
# train_step trains the model using the dataset elements
loss = strategy.run(train_step, args=(x,))
print("Loss is", loss)
Loss is PerReplica:{
0: tf.Tensor(
[[0.7]
[0.7]], shape=(2, 1), dtype=float32),
1: tf.Tensor(
[[0.7]
[0.7]], shape=(2, 1), dtype=float32)
}
Placing the loop inside a tf.function will give a performance boost. However, break and return are currently not supported if the loop is placed inside a tf.function. We also don't support placing the loop inside a tf.function when using tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.experimental.TPUStrategy with multiple workers.
use __iter__ to create an explicit iterator, which is of type tf.distribute.DistributedIterator
global_batch_size = 4
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
train_dataset = tf.data.Dataset.from_tensors(([1.],[1.])).repeat(50).batch(global_batch_size)
train_dist_dataset = strategy.experimental_distribute_dataset(train_dataset)
@tf.function
def distributed_train_step(dataset_inputs):
def train_step(input):
loss = tf.constant(0.1)
return loss
per_replica_losses = strategy.run(train_step, args=(dataset_inputs,))
return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)
EPOCHS = 2
STEPS = 3
for epoch in range(EPOCHS):
total_loss = 0.0
num_batches = 0
dist_dataset_iterator = iter(train_dist_dataset)
for _ in range(STEPS):
total_loss += distributed_train_step(next(dist_dataset_iterator))
num_batches += 1
average_train_loss = total_loss / num_batches
template = ("Epoch {}, Loss: {:.4f}")
print (template.format(epoch+1, average_train_loss))
Epoch 1, Loss: 0.2000
Epoch 2, Loss: 0.2000
To achieve a performance improvement, you can also wrap the strategy.run call with a tf.range inside a tf.function. This runs multiple steps in a tf.function. AutoGraph will convert it to a tf.while_loop on the worker. However, it is less flexible compared with running a single step inside a tf.function. For example, you cannot run things eagerly or run arbitrary Python code within the steps.
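A minimal sketch of this multi-step pattern, assuming two GPUs and a trivial step function; the dataset and step count are illustrative:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.range(64).batch(8)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function
def train_multiple_steps(iterator, steps):
  def step_fn(x):
    return x + 1
  # AutoGraph converts this tf.range loop into a tf.while_loop on the worker.
  for _ in tf.range(steps):
    inputs = next(iterator)
    strategy.run(step_fn, args=(inputs,))

train_multiple_steps(iter(dist_dataset), tf.constant(4))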
Inspect the tf.TypeSpec of the data generated by DistributedDataset. tf.distribute.DistributedDataset generates tf.distribute.DistributedValues as input to the devices. If you pass the input to a tf.function and would like to specify the shape and type of each Tensor argument to the function, you can pass a tf.TypeSpec object to the input_signature argument of the tf.function. To get the tf.TypeSpec of the input, you can use the element_spec property of the tf.distribute.DistributedDataset or tf.distribute.DistributedIterator object. For example:
global_batch_size = 4
epochs = 1
steps_per_epoch = 1
mirrored_strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.from_tensors(([2.])).repeat(100).batch(global_batch_size)
dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)
@tf.function(input_signature=[dist_dataset.element_spec])
def train_step(per_replica_inputs):
def step_fn(inputs):
return tf.square(inputs)
return mirrored_strategy.run(step_fn, args=(per_replica_inputs,))
for _ in range(epochs):
iterator = iter(dist_dataset)
for _ in range(steps_per_epoch):
output = train_step(next(iterator))
print(output)
PerReplica:{
0: tf.Tensor(
[[4.]
[4.]], shape=(2, 1), dtype=float32),
1: tf.Tensor(
[[4.]
[4.]], shape=(2, 1), dtype=float32)
}
Visit the tutorial on distributed input for more examples and caveats.
Attributes
element_spec The type specification of an element of this tf.distribute.DistributedDataset.
global_batch_size = 16
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.from_tensors(([1.],[2])).repeat(100).batch(global_batch_size)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
dist_dataset.element_spec
(PerReplicaSpec(TensorSpec(shape=(None, 1), dtype=tf.float32, name=None),
TensorSpec(shape=(None, 1), dtype=tf.float32, name=None)),
PerReplicaSpec(TensorSpec(shape=(None, 1), dtype=tf.int32, name=None),
TensorSpec(shape=(None, 1), dtype=tf.int32, name=None)))
Methods __iter__ View source
__iter__()
Creates an iterator for the tf.distribute.DistributedDataset. The returned iterator implements the Python Iterator protocol. Example usage:
global_batch_size = 4
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4]).repeat().batch(global_batch_size)
distributed_iterator = iter(strategy.experimental_distribute_dataset(dataset))
print(next(distributed_iterator))
PerReplica:{
0: tf.Tensor([1 2], shape=(2,), dtype=int32),
1: tf.Tensor([3 4], shape=(2,), dtype=int32)
}
Returns An tf.distribute.DistributedIterator instance for the given tf.distribute.DistributedDataset object to enumerate over the distributed data. | tensorflow.distribute.distributeddataset |
tf.distribute.DistributedIterator An iterator over tf.distribute.DistributedDataset. tf.distribute.DistributedIterator is the primary mechanism for enumerating elements of a tf.distribute.DistributedDataset. It supports the Python Iterator protocol, which means it can be iterated over using a for-loop or by fetching individual elements explicitly via get_next(). You can create a tf.distribute.DistributedIterator by calling iter on a tf.distribute.DistributedDataset or creating a python loop over a tf.distribute.DistributedDataset. Visit the tutorial on distributed input for more examples and caveats.
Attributes
element_spec The type specification of an element of tf.distribute.DistributedIterator.
global_batch_size = 16
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.from_tensors(([1.],[2])).repeat(100).batch(global_batch_size)
distributed_iterator = iter(strategy.experimental_distribute_dataset(dataset))
distributed_iterator.element_spec
(PerReplicaSpec(TensorSpec(shape=(None, 1), dtype=tf.float32, name=None),
TensorSpec(shape=(None, 1), dtype=tf.float32, name=None)),
PerReplicaSpec(TensorSpec(shape=(None, 1), dtype=tf.int32, name=None),
TensorSpec(shape=(None, 1), dtype=tf.int32, name=None)))
Methods get_next View source
get_next()
Returns the next input from the iterator for all replicas. Example use:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.range(100).batch(2)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
dist_dataset_iterator = iter(dist_dataset)
@tf.function
def one_step(input):
return input
step_num = 5
for _ in range(step_num):
strategy.run(one_step, args=(dist_dataset_iterator.get_next(),))
strategy.experimental_local_results(dist_dataset_iterator.get_next())
(<tf.Tensor: shape=(1,), dtype=int64, numpy=array([10])>,
<tf.Tensor: shape=(1,), dtype=int64, numpy=array([11])>)
Returns A single tf.Tensor or a tf.distribute.DistributedValues which contains the next input for all replicas.
Raises tf.errors.OutOfRangeError: If the end of the iterator has been reached.
get_next_as_optional View source
get_next_as_optional()
Returns a tf.experimental.Optional that contains the next value for all replicas. If the tf.distribute.DistributedIterator has reached the end of the sequence, the returned tf.experimental.Optional will have no value. Example usage:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
global_batch_size = 2
steps_per_loop = 2
dataset = tf.data.Dataset.range(10).batch(global_batch_size)
distributed_iterator = iter(
strategy.experimental_distribute_dataset(dataset))
def step_fn(x):
# train the model with inputs
return x
@tf.function
def train_fn(distributed_iterator):
for _ in tf.range(steps_per_loop):
optional_data = distributed_iterator.get_next_as_optional()
if not optional_data.has_value():
break
per_replica_results = strategy.run(step_fn, args=(optional_data.get_value(),))
tf.print(strategy.experimental_local_results(per_replica_results))
train_fn(distributed_iterator)
# ([0 1], [2 3])
# ([4], [])
Returns An tf.experimental.Optional object representing the next value from the tf.distribute.DistributedIterator (if it has one) or no value.
__iter__
__iter__() | tensorflow.distribute.distributediterator |
tf.distribute.DistributedValues Base class for representing distributed values.
tf.distribute.DistributedValues(
values
)
A subclass instance of tf.distribute.DistributedValues is created when creating variables within a distribution strategy, iterating a tf.distribute.DistributedDataset, or through tf.distribute.Strategy.run. This base class should never be instantiated directly. tf.distribute.DistributedValues contains a value per replica. Depending on the subclass, the values could either be synced on update, synced on demand, or never synced. A tf.distribute.DistributedValues can be reduced to obtain a single value across replicas, passed as input to tf.distribute.Strategy.run, or its per-replica values can be inspected using tf.distribute.Strategy.experimental_local_results. Example usage: Created from a tf.distribute.DistributedDataset:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.from_tensor_slices([5., 6., 7., 8.]).batch(2)
dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset))
distributed_values = next(dataset_iterator)
Returned by run:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
@tf.function
def run():
ctx = tf.distribute.get_replica_context()
return ctx.replica_id_in_sync_group
distributed_values = strategy.run(run)
As input into run:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.from_tensor_slices([5., 6., 7., 8.]).batch(2)
dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset))
distributed_values = next(dataset_iterator)
@tf.function
def run(input):
return input + 1.0
updated_value = strategy.run(run, args=(distributed_values,))
Reduce value:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.from_tensor_slices([5., 6., 7., 8.]).batch(2)
dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset))
distributed_values = next(dataset_iterator)
reduced_value = strategy.reduce(tf.distribute.ReduceOp.SUM,
distributed_values,
axis = 0)
Inspect local replica values:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.from_tensor_slices([5., 6., 7., 8.]).batch(2)
dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset))
distributed_values = next(dataset_iterator)
per_replica_values = strategy.experimental_local_results(
    distributed_values)
per_replica_values
(<tf.Tensor: shape=(1,), dtype=float32, numpy=array([5.], dtype=float32)>,
<tf.Tensor: shape=(1,), dtype=float32, numpy=array([6.], dtype=float32)>) | tensorflow.distribute.distributedvalues |
Module: tf.distribute.experimental Public API for tf.distribute.experimental namespace. Modules coordinator module: Public API for tf.distribute.experimental.coordinator namespace. partitioners module: Public API for tf.distribute.experimental.partitioners namespace. Classes class CentralStorageStrategy: A one-machine strategy that puts all variables on a single device. class CollectiveCommunication: Cross device communication implementation. class CollectiveHints: Hints for collective operations like AllReduce. class CommunicationImplementation: Cross device communication implementation. class CommunicationOptions: Options for cross device communications like All-reduce. class MultiWorkerMirroredStrategy: A distribution strategy for synchronous training on multiple workers. class ParameterServerStrategy: An multi-worker tf.distribute strategy with parameter servers. class TPUStrategy: Synchronous training on TPUs and TPU Pods. class ValueContext: A class wrapping information needed by a distribute function. | tensorflow.distribute.experimental |
tf.distribute.experimental.CentralStorageStrategy View source on GitHub A one-machine strategy that puts all variables on a single device. Inherits From: Strategy
tf.distribute.experimental.CentralStorageStrategy(
compute_devices=None, parameter_device=None
)
Variables are assigned to local CPU or the only GPU. If there is more than one GPU, compute operations (other than variable update operations) will be replicated across all GPUs. For Example: strategy = tf.distribute.experimental.CentralStorageStrategy()
# Create a dataset
ds = tf.data.Dataset.range(5).batch(2)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(ds)
with strategy.scope():
@tf.function
def train_step(val):
return val + 1
# Iterate over the distributed dataset
for x in dist_dataset:
# process dataset elements
strategy.run(train_step, args=(x,))
Attributes
cluster_resolver Returns the cluster resolver associated with this strategy. In general, when using a multi-worker tf.distribute strategy such as tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy(), there is a tf.distribute.cluster_resolver.ClusterResolver associated with the strategy used, and such an instance is returned by this property. Strategies that intend to have an associated tf.distribute.cluster_resolver.ClusterResolver must set the relevant attribute, or override this property; otherwise, None is returned by default. Those strategies should also provide information regarding what is returned by this property. Single-worker strategies usually do not have a tf.distribute.cluster_resolver.ClusterResolver, and in those cases this property will return None. The tf.distribute.cluster_resolver.ClusterResolver may be useful when the user needs to access information such as the cluster spec, task type or task id. For example,
os.environ['TF_CONFIG'] = json.dumps({
'cluster': {
'worker': ["localhost:12345", "localhost:23456"],
'ps': ["localhost:34567"]
},
'task': {'type': 'worker', 'index': 0}
})
# This implicitly uses TF_CONFIG for the cluster and current task info.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
...
if strategy.cluster_resolver.task_type == 'worker':
# Perform something that's only applicable on workers. Since we set this
# as a worker above, this block will run on this particular instance.
elif strategy.cluster_resolver.task_type == 'ps':
# Perform something that's only applicable on parameter servers. Since we
# set this as a worker above, this block will not run on this particular
# instance.
For more information, please see tf.distribute.cluster_resolver.ClusterResolver's API docstring.
extended tf.distribute.StrategyExtended with additional methods.
num_replicas_in_sync Returns number of replicas over which gradients are aggregated. Methods distribute_datasets_from_function View source
distribute_datasets_from_function(
dataset_fn, options=None
)
Distributes tf.data.Dataset instances created by calls to dataset_fn. The argument dataset_fn that users pass in is an input function that has a tf.distribute.InputContext argument and returns a tf.data.Dataset instance. It is expected that the returned dataset from dataset_fn is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. tf.distribute.Strategy.distribute_datasets_from_function does not batch or shard the tf.data.Dataset instance returned from the input function. dataset_fn will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the Dataset every step). This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, tf.distribute.experimental_distribute_dataset does batching and sharding for you.) For example, where experimental_distribute_dataset is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in experimental_distribute_dataset). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. The dataset_fn should take an tf.distribute.InputContext instance where information about batching and input replication can be accessed. You can use element_spec property of the tf.distribute.DistributedDataset returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function. Follow tf.distribute.DistributedDataset.element_spec to see an example. Key Point: The tf.data.Dataset returned by dataset_fn should have a per-replica batch size, unlike experimental_distribute_dataset, which uses the global batch size. This may be computed using input_context.get_per_replica_batch_size.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset_fn A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
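A minimal sketch of a dataset_fn that shards and batches per replica using the tf.distribute.InputContext it receives; the dataset contents and batch size are illustrative:
strategy = tf.distribute.experimental.CentralStorageStrategy()
global_batch_size = 8

def dataset_fn(input_context):
  # Batch with the per-replica batch size and shard across input pipelines.
  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
  dataset = tf.data.Dataset.range(64).shard(
      input_context.num_input_pipelines, input_context.input_pipeline_id)
  return dataset.batch(batch_size)

dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)
for batch in dist_dataset:
  strategy.run(tf.square, args=(batch,))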
experimental_distribute_dataset View source
experimental_distribute_dataset(
dataset, options=None
)
Distributes a tf.data.Dataset instance provided via dataset. The returned dataset is a wrapped strategy dataset which creates a multidevice iterator under the hood. It prefetches the input data to the specified devices on the worker. The returned distributed dataset can be iterated over similar to how regular datasets can.
Note: Currently, the user cannot add any more transformations to a distributed dataset.
For example: strategy = tf.distribute.experimental.CentralStorageStrategy() # with 1 CPU and 1 GPU
dataset = tf.data.Dataset.range(10).batch(2)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
for x in dist_dataset:
print(x) # Prints PerReplica values [0, 1], [2, 3],...
Args
dataset tf.data.Dataset to be prefetched to device.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A "distributed Dataset" that the caller can iterate over.
experimental_distribute_values_from_function View source
experimental_distribute_values_from_function(
value_fn
)
Generates tf.distribute.DistributedValues from value_fn. This function is to generate tf.distribute.DistributedValues to pass into run, reduce, or other methods that take distributed values when not using datasets.
Args
value_fn The function to run to generate values. It is called for each replica with tf.distribute.ValueContext as the sole argument. It must return a Tensor or a type that can be converted to a Tensor.
Returns A tf.distribute.DistributedValues containing a value for each replica.
Example usage: Return constant value per replica:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
return tf.constant(1.)
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(<tf.Tensor: shape=(), dtype=float32, numpy=1.0>,
<tf.Tensor: shape=(), dtype=float32, numpy=1.0>)
Distribute values in array based on replica_id:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
array_value = np.array([3., 2., 1.])
def value_fn(ctx):
return array_value[ctx.replica_id_in_sync_group]
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(3.0, 2.0)
Specify values using num_replicas_in_sync:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
return ctx.num_replicas_in_sync
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(2, 2)
Place values on devices and distribute: strategy = tf.distribute.TPUStrategy()
worker_devices = strategy.extended.worker_devices
multiple_values = []
for i in range(strategy.num_replicas_in_sync):
with tf.device(worker_devices[i]):
multiple_values.append(tf.constant(1.0))
def value_fn(ctx):
return multiple_values[ctx.replica_id_in_sync_group]
distributed_values = (
    strategy.experimental_distribute_values_from_function(
        value_fn))
experimental_local_results View source
experimental_local_results(
value
)
Returns the list of all local per-replica values contained in value. In CentralStorageStrategy there is a single worker so the value returned will be all the values on that worker.
Args
value A value returned by run(), extended.call_for_each_replica(), or a variable created in scope.
Returns A tuple of values contained in value. If value represents a single value, this returns (value,).
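A minimal sketch of unwrapping a per-replica result on a single-machine CentralStorageStrategy; the length of the returned tuple depends on how many compute devices are available:
strategy = tf.distribute.experimental.CentralStorageStrategy()

@tf.function
def step_fn():
  return tf.constant(1.0)

per_replica_result = strategy.run(step_fn)
# Unwraps the result into a plain tuple of per-replica values on this worker.
local_values = strategy.experimental_local_results(per_replica_result)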
gather View source
gather(
value, axis
)
Gather value across replicas along axis to the current device. Given a tf.distribute.DistributedValues or tf.Tensor-like object value, this API gathers and concatenates value across replicas along the axis-th dimension. The result is copied to the "current" device, which would typically be the CPU of the worker on which the program is running. For tf.distribute.TPUStrategy, it is the first TPU host. For multi-client MultiWorkerMirroredStrategy, this is the CPU of each worker. This API can only be called in the cross-replica context. For a counterpart in the replica context, see tf.distribute.ReplicaContext.all_gather.
Note: For all strategies except tf.distribute.TPUStrategy, the input value on different replicas must have the same rank, and their shapes must be the same in all dimensions except the axis-th dimension. In other words, their shapes cannot be different in a dimension d where d does not equal to the axis argument. For example, given a tf.distribute.DistributedValues with component tensors of shape (1, 2, 3) and (1, 3, 3) on two replicas, you can call gather(..., axis=1, ...) on it, but not gather(..., axis=0, ...) or gather(..., axis=2, ...). However, for tf.distribute.TPUStrategy.gather, all tensors must have exactly the same rank and same shape.
Note: Given a tf.distribute.DistributedValues value, its component tensors must have a non-zero rank. Otherwise, consider using tf.expand_dims before gathering them.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# A DistributedValues with component tensor of shape (2, 1) on each replica
distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(tf.constant([[1], [2]])))
@tf.function
def run():
return strategy.gather(distributed_values, axis=0)
run()
<tf.Tensor: shape=(4, 1), dtype=int32, numpy=
array([[1],
[2],
[1],
[2]], dtype=int32)>
Consider the following example for more combinations:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1", "GPU:2", "GPU:3"])
single_tensor = tf.reshape(tf.range(6), shape=(1,2,3))
distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(single_tensor))
@tf.function
def run(axis):
return strategy.gather(distributed_values, axis=axis)
axis=0
run(axis)
<tf.Tensor: shape=(4, 2, 3), dtype=int32, numpy=
array([[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]]], dtype=int32)>
axis=1
run(axis)
<tf.Tensor: shape=(1, 8, 3), dtype=int32, numpy=
array([[[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5]]], dtype=int32)>
axis=2
run(axis)
<tf.Tensor: shape=(1, 2, 12), dtype=int32, numpy=
array([[[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2],
[3, 4, 5, 3, 4, 5, 3, 4, 5, 3, 4, 5]]], dtype=int32)>
Args
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with tf.distribute.OneDeviceStrategy or the default strategy. The tensors that constitute the DistributedValues can only be dense tensors with non-zero rank, NOT a tf.IndexedSlices.
axis 0-D int32 Tensor. Dimension along which to gather. Must be in the range [0, rank(value)).
Returns A Tensor that's the concatenation of value across replicas along axis dimension.
reduce View source
reduce(
reduce_op, value, axis
)
Reduce value across replicas. Given a per-replica value returned by run, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements. For example, if you have a global batch size of 8 and 2 replicas, values for examples [0, 1, 2, 3] will be on replica 0 and [4, 5, 6, 7] will be on replica 1. By default, reduce will just aggregate across replicas, returning [0+4, 1+5, 2+6, 3+7]. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient). More often you will want to aggregate across the global batch, which you can get by specifying the batch dimension as the axis, typically axis=0. In this case it would return a scalar 0+1+2+3+4+5+6+7. If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify axis=0. If you specify tf.distribute.ReduceOp.MEAN, using axis=0 will use the correct denominator of 6. Contrast this with computing reduce_mean to get a scalar value on each replica and this function to average those means, which will weigh some values 1/8 and others 1/4. For Example: strategy = tf.distribute.experimental.CentralStorageStrategy(
compute_devices=['CPU:0', 'GPU:0'], parameter_device='CPU:0')
ds = tf.data.Dataset.range(10)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(ds)
with strategy.scope():
@tf.function
def train_step(val):
# pass through
return val
# Iterate over the distributed dataset
for x in dist_dataset:
result = strategy.run(train_step, args=(x,))
result = strategy.reduce(tf.distribute.ReduceOp.SUM, result,
axis=None).numpy()
# result: array([ 4, 6, 8, 10])
result = strategy.reduce(tf.distribute.ReduceOp.SUM, result, axis=0).numpy()
# result: 28
Args
reduce_op A tf.distribute.ReduceOp value specifying how values should be combined.
value A "per replica" value, e.g. returned by run to be combined into a single tensor.
axis Specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension).
Returns A Tensor.
run View source
run(
fn, args=(), kwargs=None, options=None
)
Run fn on each replica, with the given arguments. In CentralStorageStrategy, fn is called on each of the compute replicas, with the provided "per replica" arguments specific to that device.
Args
fn The function to run. The output must be a tf.nest of Tensors.
args (Optional) Positional arguments to fn.
kwargs (Optional) Keyword arguments to fn.
options (Optional) An instance of tf.distribute.RunOptions specifying the options to run fn.
Returns Return value from running fn.
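A minimal sketch of passing per-replica input from a distributed dataset into run; the dataset and step function are illustrative:
strategy = tf.distribute.experimental.CentralStorageStrategy()
dataset = tf.data.Dataset.range(8).batch(4)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function
def train_step(batch):
  return tf.reduce_sum(batch)

for per_replica_batch in dist_dataset:
  # Each compute replica receives its own slice of the global batch.
  per_replica_sums = strategy.run(train_step, args=(per_replica_batch,))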
scope View source
scope()
Context manager to make the strategy current and distribute variables. This method returns a context manager, and is used as follows:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Variable created inside scope:
with strategy.scope():
mirrored_variable = tf.Variable(1.)
mirrored_variable
MirroredVariable:{
0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>,
1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0>
}
# Variable created outside scope:
regular_variable = tf.Variable(1.)
regular_variable
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
What happens when Strategy.scope is entered?
strategy is installed in the global context as the "current" strategy. Inside this scope, tf.distribute.get_strategy() will now return this strategy. Outside this scope, it returns the default no-op strategy. Entering the scope also enters the "cross-replica context". See tf.distribute.StrategyExtended for an explanation on cross-replica and replica contexts. Variable creation inside scope is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like MirroredStrategy, TPUStrategy and MultiWorkerMirroredStrategy create variables replicated on each replica, whereas ParameterServerStrategy creates variables on the parameter servers. This is done using a custom tf.variable_creator_scope. In some strategies, a default device scope may also be entered: in MultiWorkerMirroredStrategy, a default device scope of "/CPU:0" is entered on each worker.
Note: Entering a scope does not automatically distribute a computation, except in the case of high level training framework like keras model.fit. If you're not using model.fit, you need to use strategy.run API to explicitly distribute that computation. See an example in the custom training loop tutorial.
What should be in scope and what should be outside? There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK). Anything that creates variables that should be distributed variables must be in strategy.scope. This can be either by directly putting it in scope, or relying on another API like strategy.run or model.fit to enter it for you. Any variable that is created outside scope will not be distributed and may have performance implications. Common things that create variables in TF: models, optimizers, metrics. These should always be created inside the scope. Another source of variable creation can be a checkpoint restore - when variables are created lazily. Note that any variable created inside a strategy captures the strategy information. So reading and writing to these variables outside the strategy.scope can also work seamlessly, without the user having to enter the scope. Some strategy APIs (such as strategy.run and strategy.reduce) which require to be in a strategy's scope, enter the scope for you automatically, which means when using those APIs you don't need to enter the scope yourself. When a tf.keras.Model is created inside a strategy.scope, we capture this information. When high level training frameworks methods such as model.compile, model.fit etc are then called on this model, we automatically enter the scope, as well as use this strategy to distribute the training etc. See detailed example in distributed keras tutorial. Note that simply calling the model(..) is not impacted - only high level training framework APIs are. model.compile, model.fit, model.evaluate, model.predict and model.save can all be called inside or outside the scope. The following can be either inside or outside the scope: Creating the input datasets Defining tf.functions that represent your training step Saving APIs such as tf.saved_model.save. Loading creates variables, so that should go inside the scope if you want to train the model in a distributed way. Checkpoint saving. As mentioned above - checkpoint.restore may sometimes need to be inside scope if it creates variables.
Returns A context manager. | tensorflow.distribute.experimental.centralstoragestrategy |
tf.distribute.experimental.CollectiveHints Hints for collective operations like AllReduce. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.distribute.experimental.CollectiveHints
tf.distribute.experimental.CollectiveHints(
bytes_per_pack=0, timeout_seconds=None
)
This can be passed to methods like tf.distribute.get_replica_context().all_reduce() to optimize collective operation performance. Note that these are only hints, which may or may not change the actual behavior. Some options only apply to certain strategies and are ignored by others. One common optimization is to break the gradient all-reduce into multiple packs so that weight updates can overlap with the gradient all-reduce. Examples: bytes_per_pack hints = tf.distribute.experimental.CollectiveHints(
bytes_per_pack=50 * 1024 * 1024)
grads = tf.distribute.get_replica_context().all_reduce(
'sum', grads, experimental_hints=hints)
optimizer.apply_gradients(zip(grads, vars),
experimental_aggregate_gradients=False)
timeout_seconds strategy = tf.distribute.MirroredStrategy()
hints = tf.distribute.experimental.CollectiveHints(
timeout_seconds=120)
try:
strategy.reduce("sum", v, axis=None, experimental_hints=hints)
except tf.errors.DeadlineExceededError:
do_something()
Args
bytes_per_pack a non-negative integer. Breaks collective operations into packs of certain size. If it's zero, the value is determined automatically. This only applies to all-reduce with MultiWorkerMirroredStrategy currently.
timeout_seconds a float or None, timeout in seconds. If not None, the collective raises tf.errors.DeadlineExceededError if it takes longer than this timeout. This can be useful when debugging hanging issues. This should only be used for debugging since it creates a new thread for each collective, i.e. an overhead of timeout_seconds * num_collectives_per_second more threads. This only works for tf.distribute.experimental.MultiWorkerMirroredStrategy.
Raises
ValueError When arguments have invalid value. | tensorflow.distribute.experimental.collectivehints |
tf.distribute.experimental.CommunicationImplementation Cross device communication implementation. View aliases Main aliases
tf.distribute.experimental.CollectiveCommunication Compat aliases for migration See Migration guide for more details. tf.compat.v1.distribute.experimental.CollectiveCommunication, tf.compat.v1.distribute.experimental.CommunicationImplementation Warning: The alias tf.distribute.experimental.CollectiveCommunication is deprecated and will be removed in a future version. Use tf.distribute.experimental.CommunicationImplementation instead.
AUTO: Automatically chosen by TensorFlow.
RING: TensorFlow's ring algorithms for all-reduce and all-gather.
NCCL: NVIDIA®'s NCCL library. This is now only used for all-reduce on GPUs; all-reduce on CPU, all-gather, and broadcast fall back to RING.
Class Variables
AUTO tf.distribute.experimental.CommunicationImplementation
NCCL tf.distribute.experimental.CommunicationImplementation
RING tf.distribute.experimental.CommunicationImplementation | tensorflow.distribute.experimental.communicationimplementation |
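A minimal sketch of selecting an implementation, assuming TF 2.4+ where tf.distribute.MultiWorkerMirroredStrategy accepts a communication_options argument:
# NCCL is generally the fastest choice for GPU all-reduce; see the notes above.
options = tf.distribute.experimental.CommunicationOptions(
    implementation=tf.distribute.experimental.CommunicationImplementation.NCCL)
strategy = tf.distribute.MultiWorkerMirroredStrategy(
    communication_options=options)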
tf.distribute.experimental.CommunicationOptions Options for cross device communications like All-reduce. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.distribute.experimental.CommunicationOptions
tf.distribute.experimental.CommunicationOptions(
bytes_per_pack=0, timeout_seconds=None,
implementation=tf.distribute.experimental.CollectiveCommunication.AUTO
)
This can be passed to methods like tf.distribute.get_replica_context().all_reduce() to optimize collective operation performance. Note that these are only hints, which may or may not change the actual behavior. Some options only apply to certain strategies and are ignored by others. One common optimization is to break the gradient all-reduce into multiple packs so that weight updates can overlap with the gradient all-reduce. Examples: options = tf.distribute.experimental.CommunicationOptions(
bytes_per_pack=50 * 1024 * 1024,
timeout_seconds=120,
implementation=tf.distribute.experimental.CommunicationImplementation.NCCL
)
grads = tf.distribute.get_replica_context().all_reduce(
'sum', grads, options=options)
optimizer.apply_gradients(zip(grads, vars),
experimental_aggregate_gradients=False)
Args
bytes_per_pack a non-negative integer. Breaks collective operations into packs of certain size. If it's zero, the value is determined automatically. This only applies to all-reduce with MultiWorkerMirroredStrategy currently.
timeout_seconds a float or None, timeout in seconds. If not None, the collective raises tf.errors.DeadlineExceededError if it takes longer than this timeout. Zero disables timeout. This can be useful when debugging hanging issues. This should only be used for debugging since it creates a new thread for each collective, i.e. an overhead of timeout_seconds * num_collectives_per_second more threads. This only works for tf.distribute.experimental.MultiWorkerMirroredStrategy.
implementation a tf.distribute.experimental.CommunicationImplementation. This is a hint on the preferred communication implementation. Possible values include AUTO, RING, and NCCL. NCCL is generally more performant for GPU, but doesn't work for CPU. This only works for tf.distribute.experimental.MultiWorkerMirroredStrategy.
Raises
ValueError When arguments have invalid value. | tensorflow.distribute.experimental.communicationoptions |
Module: tf.distribute.experimental.coordinator Public API for tf.distribute.experimental.coordinator namespace. Classes class ClusterCoordinator: An object to schedule and coordinate remote function execution. class PerWorkerValues: A container that holds a list of values, one value per worker. class RemoteValue: An asynchronously available value of a scheduled function. | tensorflow.distribute.experimental.coordinator |
tf.distribute.experimental.coordinator.ClusterCoordinator An object to schedule and coordinate remote function execution.
tf.distribute.experimental.coordinator.ClusterCoordinator(
strategy
)
This class is used to create fault-tolerant resources and dispatch functions to remote TensorFlow servers. Currently, this class is not supported to be used in a standalone manner. It should be used in conjunction with a tf.distribute strategy that is designed to work with it. The ClusterCoordinator class currently only works with tf.distribute.experimental.ParameterServerStrategy.
The schedule/join APIs
The most important APIs provided by this class are the schedule/join pair. The schedule API is non-blocking in that it queues a tf.function and returns a RemoteValue immediately. The queued functions will be dispatched to remote workers in background threads and their RemoteValues will be filled asynchronously. Since schedule doesn’t require worker assignment, the tf.function passed in can be executed on any available worker. If the worker it is executed on becomes unavailable before its completion, it will be migrated to another worker. Because of this fact, and because function execution is not atomic, a function may be executed more than once.
Handling Task Failure
This class, when used with tf.distribute.experimental.ParameterServerStrategy, comes with built-in fault tolerance for worker failures. That is, when some workers cannot be reached from the coordinator for any reason, training continues with the remaining workers. Upon recovery of a failed worker, it will be added back for function execution after datasets created by create_per_worker_dataset are re-built on it. When a parameter server fails, a tf.errors.UnavailableError is raised by schedule, join or done. In this case, in addition to bringing back the failed parameter server, users should restart the coordinator so that it reconnects to the parameter server, re-creates the variables and loads checkpoints. If the coordinator fails, users need to bring it back as well. The program will automatically connect to the parameter servers and workers, and continue the progress from a checkpoint. It is thus essential that the user's program periodically saves a checkpoint file and restores it at the start of the program. If a tf.keras.optimizers.Optimizer is checkpointed, after restoring from a checkpoint, its iterations property roughly indicates the number of steps that have been made. This can be used to decide how many epochs and steps are needed before training completion. See the tf.distribute.experimental.ParameterServerStrategy docstring for an example usage of this API. This is currently under development, and the API as well as the implementation are subject to change.
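A minimal sketch of the schedule/join workflow; cluster_resolver below is a placeholder for however the ParameterServerStrategy cluster is configured in your program:
strategy = tf.distribute.experimental.ParameterServerStrategy(
    cluster_resolver=cluster_resolver)  # cluster_resolver: placeholder
coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(
    strategy)

@tf.function
def worker_fn():
  return tf.constant(1.0)

# schedule returns immediately with a RemoteValue; the function runs on
# whichever worker becomes available.
remote_value = coordinator.schedule(worker_fn)
# join blocks until every scheduled function has finished (or raises).
coordinator.join()
print(remote_value.fetch())  # 1.0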
Args
strategy a supported tf.distribute.Strategy object. Currently, only tf.distribute.experimental.ParameterServerStrategy is supported.
Raises
ValueError if the strategy being used is not supported.
Attributes
strategy Returns the Strategy associated with the ClusterCoordinator. Methods create_per_worker_dataset View source
create_per_worker_dataset(
dataset_fn
)
Create dataset on workers by calling dataset_fn on worker devices. This creates the given dataset generated by dataset_fn on workers and returns an object that represents the collection of those individual datasets. Calling iter on such collection of datasets returns a tf.distribute.experimental.coordinator.PerWorkerValues, which is a collection of iterators, where the iterators have been placed on respective workers. Calling next on a PerWorkerValues of iterator is unsupported. The iterator is meant to be passed as an argument into tf.distribute.experimental.coordinator.ClusterCoordinator.schedule. When the scheduled function is about to be executed by a worker, the function will receive the individual iterator that corresponds to the worker. The next method can be called on an iterator inside a scheduled function when the iterator is an input of the function. Currently the schedule method assumes workers are all the same and thus assumes the datasets on different workers are the same, except they may be shuffled differently if they contain a dataset.shuffle operation and a random seed is not set. Because of this, we also recommend the datasets to be repeated indefinitely and schedule a finite number of steps instead of relying on the OutOfRangeError from a dataset. Example: strategy = tf.distribute.experimental.ParameterServerStrategy(
cluster_resolver=...)
coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(
strategy=strategy)
@tf.function
def worker_fn(iterator):
return next(iterator)
def per_worker_dataset_fn():
return strategy.distribute_datasets_from_function(
lambda x: tf.data.Dataset.from_tensor_slices([3] * 3))
per_worker_dataset = coordinator.create_per_worker_dataset(
per_worker_dataset_fn)
per_worker_iter = iter(per_worker_dataset)
remote_value = coordinator.schedule(worker_fn, args=(per_worker_iter,))
assert remote_value.fetch() == 3
Args
dataset_fn The dataset function that returns a dataset. This is to be executed on the workers.
Returns An object that represents the collection of those individual datasets. iter is expected to be called on this object that returns a tf.distribute.experimental.coordinator.PerWorkerValues of the iterators (that are on the workers).
done View source
done()
Returns whether all the scheduled functions have finished execution. If any previously scheduled function raises an error, done will fail by raising any one of those errors. When done returns True or raises, it guarantees that there is no function that is still being executed.
Returns Whether all the scheduled functions have finished execution.
Raises
Exception one of the exceptions caught by the coordinator by any previously scheduled function since the last time an error was thrown or since the beginning of the program. fetch View source
fetch(
val
)
Blocking call to fetch results from the remote values. This is a wrapper around tf.distribute.experimental.coordinator.RemoteValue.fetch for a RemoteValue structure; it returns the execution results of RemoteValues. If not ready, wait for them while blocking the caller. Example: strategy = ...
coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(
strategy)
def dataset_fn():
return tf.data.Dataset.from_tensor_slices([1, 1, 1])
with strategy.scope():
v = tf.Variable(initial_value=0)
@tf.function
def worker_fn(iterator):
def replica_fn(x):
v.assign_add(x)
return v.read_value()
return strategy.run(replica_fn, args=(next(iterator),))
distributed_dataset = coordinator.create_per_worker_dataset(dataset_fn)
distributed_iterator = iter(distributed_dataset)
result = coordinator.schedule(worker_fn, args=(distributed_iterator,))
assert coordinator.fetch(result) == 1
Args
val The value to fetch the results from. If this is structure of tf.distribute.experimental.coordinator.RemoteValue, fetch() will be called on the individual tf.distribute.experimental.coordinator.RemoteValue to get the result.
Returns If val is a tf.distribute.experimental.coordinator.RemoteValue or a structure of tf.distribute.experimental.coordinator.RemoteValues, return the fetched tf.distribute.experimental.coordinator.RemoteValue values immediately if they are available, or block the call until they are available, and return the fetched tf.distribute.experimental.coordinator.RemoteValue values with the same structure. If val is other types, return it as-is.
join View source
join()
Blocks until all the scheduled functions have finished execution. If any previously scheduled function raises an error, join will fail by raising any one of those errors, and clear the errors collected so far. If this happens, some of the previously scheduled functions may not have been executed. Users can call fetch on the returned tf.distribute.experimental.coordinator.RemoteValue to inspect if they have executed, failed, or been cancelled. If some that have been cancelled need to be rescheduled, users should call schedule with the function again. When join returns or raises, it guarantees that there is no function that is still being executed.
Raises
Exception one of the exceptions caught by the coordinator by any previously scheduled function since the last time an error was thrown or since the beginning of the program. schedule View source
schedule(
fn, args=None, kwargs=None
)
Schedules fn to be dispatched to a worker for asynchronous execution. This method is non-blocking in that it queues the fn which will be executed later and returns a tf.distribute.experimental.coordinator.RemoteValue object immediately. fetch can be called on it to wait for the function execution to finish and retrieve its output from a remote worker. On the other hand, call tf.distribute.experimental.coordinator.ClusterCoordinator.join to wait for all scheduled functions to finish. schedule guarantees that fn will be executed on a worker at least once; it could be more than once if its corresponding worker fails in the middle of its execution. Note that since a worker can fail at any point when executing the function, it is possible that the function is partially executed, but tf.distribute.experimental.coordinator.ClusterCoordinator guarantees that in those events, the function will eventually be executed on any worker that is available. If any previously scheduled function raises an error, schedule will raise any one of those errors, and clear the errors collected so far. If this happens, some of the previously scheduled functions may not have been executed. Users can call fetch on the returned tf.distribute.experimental.coordinator.RemoteValue to inspect if they have executed, failed, or been cancelled, and reschedule the corresponding function if needed. When schedule raises, it guarantees that there is no function that is still being executed. At this time, there is no support for worker assignment for function execution, or for prioritizing certain workers. args and kwargs are the arguments passed into fn when fn is executed on a worker. They can be tf.distribute.experimental.coordinator.PerWorkerValues and in this case, the argument will be substituted with the corresponding component on the target worker. Arguments that are not tf.distribute.experimental.coordinator.PerWorkerValues will be passed into fn as-is. Currently, tf.distribute.experimental.coordinator.RemoteValue is not supported as input args or kwargs.
Args
fn A tf.function; the function to be dispatched to a worker for execution asynchronously.
args Positional arguments for fn.
kwargs Keyword arguments for fn.
Returns A tf.distribute.experimental.coordinator.RemoteValue object that represents the output of the function scheduled.
Raises
Exception one of the exceptions caught by the coordinator from any previously scheduled function, since the last time an error was thrown or since the beginning of the program. | tensorflow.distribute.experimental.coordinator.clustercoordinator |
tf.distribute.experimental.coordinator.PerWorkerValues A container that holds a list of values, one value per worker.
tf.distribute.experimental.coordinator.PerWorkerValues(
values
)
tf.distribute.experimental.coordinator.PerWorkerValues contains a collection of values, where each value is located on a corresponding worker, and upon being used as one of the args or kwargs of tf.distribute.experimental.coordinator.ClusterCoordinator.schedule(), the value specific to a worker will be passed into the function being executed at that particular worker. Currently, the only supported path to create an object of tf.distribute.experimental.coordinator.PerWorkerValues is through calling iter on a ClusterCoordinator.create_per_worker_dataset-returned distributed dataset instance. The mechanism to create a custom tf.distribute.experimental.coordinator.PerWorkerValues is not yet supported. | tensorflow.distribute.experimental.coordinator.perworkervalues
tf.distribute.experimental.coordinator.RemoteValue An asynchronously available value of a scheduled function. This class is used as the return value of tf.distribute.experimental.coordinator.ClusterCoordinator.schedule where the underlying value becomes available at a later time once the function has been executed. Using tf.distribute.experimental.coordinator.RemoteValue as an input to a subsequent function scheduled with tf.distribute.experimental.coordinator.ClusterCoordinator.schedule is currently not supported. Example: strategy = tf.distribute.experimental.ParameterServerStrategy(
cluster_resolver=...)
coordinator = (
tf.distribute.experimental.coordinator.ClusterCoordinator(strategy))
with strategy.scope():
v1 = tf.Variable(initial_value=0.0)
v2 = tf.Variable(initial_value=1.0)
@tf.function
def worker_fn():
v1.assign_add(0.1)
v2.assign_sub(0.2)
return v1.read_value() / v2.read_value()
result = coordinator.schedule(worker_fn)
# Note that `fetch()` gives the actual result instead of a `tf.Tensor`.
assert result.fetch() == 0.125
for _ in range(10):
# `worker_fn` will be run on arbitrary workers that are available. The
# `result` value will be available later.
result = coordinator.schedule(worker_fn)
Methods fetch View source
fetch()
Wait for the result of RemoteValue to be ready and return the result. This makes the value concrete by copying the remote value to local.
Returns The actual output of the tf.function associated with this RemoteValue, as previously returned by a tf.distribute.experimental.coordinator.ClusterCoordinator.schedule call. This can be a single value, or a structure of values, depending on the output of the tf.function.
Raises
tf.errors.CancelledError If the function that produces this RemoteValue is aborted or cancelled due to failure. | tensorflow.distribute.experimental.coordinator.remotevalue |
tf.distribute.experimental.MultiWorkerMirroredStrategy View source on GitHub A distribution strategy for synchronous training on multiple workers. Inherits From: MultiWorkerMirroredStrategy, Strategy
tf.distribute.experimental.MultiWorkerMirroredStrategy(
communication=tf.distribute.experimental.CollectiveCommunication.AUTO,
cluster_resolver=None
)
This strategy implements synchronous distributed training across multiple workers, each with potentially multiple GPUs. Similar to tf.distribute.MirroredStrategy, it replicates all variables and computations to each local device. The difference is that it uses a distributed collective implementation (e.g. all-reduce), so that multiple workers can work together. You need to launch your program on each worker and configure cluster_resolver correctly. For example, if you are using tf.distribute.cluster_resolver.TFConfigClusterResolver, each worker needs to have its corresponding task_type and task_id set in the TF_CONFIG environment variable. An example TF_CONFIG on worker-0 of a two worker cluster is: TF_CONFIG = '{"cluster": {"worker": ["localhost:12345", "localhost:23456"]}, "task": {"type": "worker", "index": 0} }'
Your program runs on each worker as-is. Note that collectives require each worker to participate. All tf.distribute and non tf.distribute API may use collectives internally, e.g. checkpointing and saving since reading a tf.Variable with tf.VariableSynchronization.ON_READ all-reduces the value. Therefore it's recommended to run exactly the same program on each worker. Dispatching based on task_type or task_id of the worker is error-prone. cluster_resolver.num_accelerators() determines the number of GPUs the strategy uses. If it's zero, the strategy uses the CPU. All workers need to use the same number of devices, otherwise the behavior is undefined. This strategy is not intended for TPU. Use tf.distribute.TPUStrategy instead. After setting up TF_CONFIG, using this strategy is similar to using tf.distribute.MirroredStrategy and tf.distribute.TPUStrategy. strategy = tf.distribute.MultiWorkerMirroredStrategy()
with strategy.scope():
model = tf.keras.Sequential([
tf.keras.layers.Dense(2, input_shape=(5,)),
])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
def dataset_fn(ctx):
x = np.random.random((2, 5)).astype(np.float32)
y = np.random.randint(2, size=(2, 1))
dataset = tf.data.Dataset.from_tensor_slices((x, y))
return dataset.repeat().batch(1, drop_remainder=True)
dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)
model.compile()
model.fit(dist_dataset)
You can also write your own training loop: @tf.function
def train_step(iterator):
def step_fn(inputs):
features, labels = inputs
with tf.GradientTape() as tape:
logits = model(features, training=True)
loss = tf.keras.losses.sparse_categorical_crossentropy(
labels, logits)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
strategy.run(step_fn, args=(next(iterator),))
for _ in range(NUM_STEP):
train_step(iterator)
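In the loop above, NUM_STEP and iterator are assumed to be defined beforehand, for example:
NUM_STEP = 100  # hypothetical number of training steps
iterator = iter(dist_dataset)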
See Multi-worker training with Keras for a detailed tutorial. Saving You need to save and checkpoint on all workers instead of just one. This is because variables with synchronization=ON_READ trigger aggregation during saving. It's recommended to save to a different path on each worker to avoid race conditions; each worker saves the same thing. See the Multi-worker training with Keras tutorial for examples, and the short sketch after the known issues below. Known Issues
tf.distribute.cluster_resolver.TFConfigClusterResolver does not return the correct number of accelerators. The strategy uses all available GPUs if cluster_resolver is tf.distribute.cluster_resolver.TFConfigClusterResolver or None. In eager mode, the strategy needs to be created before calling any other Tensorflow API.
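Returning to the Saving section above, a minimal sketch of saving to a per-worker path, assuming the model from the earlier example and a TF_CONFIG-based cluster (the path is a placeholder):
resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver()
# Every worker must call `save`, since saving may trigger collectives.
# Distinct paths per worker avoid race conditions; each path holds the same model.
model.save("/tmp/my_model/worker_%d" % resolver.task_id)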
Args
communication optional tf.distribute.experimental.CommunicationImplementation. This is a hint on the preferred collective communication implementation. Possible values include AUTO, RING, and NCCL.
cluster_resolver optional tf.distribute.cluster_resolver.ClusterResolver. If None, tf.distribute.cluster_resolver.TFConfigClusterResolver is used.
Attributes
cluster_resolver Returns the cluster resolver associated with this strategy. As a multi-worker strategy, tf.distribute.experimental.MultiWorkerMirroredStrategy provides the associated tf.distribute.cluster_resolver.ClusterResolver. If the user provides one in __init__, that instance is returned; if the user does not, a default TFConfigClusterResolver is provided.
extended tf.distribute.StrategyExtended with additional methods.
num_replicas_in_sync Returns number of replicas over which gradients are aggregated. Methods distribute_datasets_from_function View source
distribute_datasets_from_function(
dataset_fn, options=None
)
Distributes tf.data.Dataset instances created by calls to dataset_fn. The argument dataset_fn that users pass in is an input function that has a tf.distribute.InputContext argument and returns a tf.data.Dataset instance. It is expected that the returned dataset from dataset_fn is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. tf.distribute.Strategy.distribute_datasets_from_function does not batch or shard the tf.data.Dataset instance returned from the input function. dataset_fn will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the Dataset every step). This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, tf.distribute.experimental_distribute_dataset does batching and sharding for you.) For example, where experimental_distribute_dataset is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in experimental_distribute_dataset). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. The dataset_fn should take an tf.distribute.InputContext instance where information about batching and input replication can be accessed. You can use element_spec property of the tf.distribute.DistributedDataset returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function. Follow tf.distribute.DistributedDataset.element_spec to see an example. Key Point: The tf.data.Dataset returned by dataset_fn should have a per-replica batch size, unlike experimental_distribute_dataset, which uses the global batch size. This may be computed using input_context.get_per_replica_batch_size.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
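A minimal sketch of such a dataset_fn, assuming a global batch size of 64 (the constant and the synthetic dataset are placeholders for illustration):
GLOBAL_BATCH_SIZE = 64

def dataset_fn(input_context):
  per_replica_batch = input_context.get_per_replica_batch_size(GLOBAL_BATCH_SIZE)
  dataset = tf.data.Dataset.range(1000)
  # Shard manually by input pipeline so each worker reads a distinct slice.
  dataset = dataset.shard(input_context.num_input_pipelines,
                          input_context.input_pipeline_id)
  return dataset.batch(per_replica_batch).prefetch(2)

dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)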
Args
dataset_fn A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_distribute_dataset View source
experimental_distribute_dataset(
dataset, options=None
)
Creates tf.distribute.DistributedDataset from tf.data.Dataset. The returned tf.distribute.DistributedDataset can be iterated over similar to regular datasets. NOTE: The user cannot add any more transformations to a tf.distribute.DistributedDataset. You can only create an iterator or examine the tf.TypeSpec of the data generated by it. See API docs of tf.distribute.DistributedDataset to learn more. The following is an example:
global_batch_size = 2
# Passing the devices is optional.
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
# Create a dataset
dataset = tf.data.Dataset.range(4).batch(global_batch_size)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)
@tf.function
def replica_fn(input):
return input*2
result = []
# Iterate over the `tf.distribute.DistributedDataset`
for x in dist_dataset:
# process dataset elements
result.append(strategy.run(replica_fn, args=(x,)))
print(result)
[PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])>
}, PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])>
}]
Three key actions happening under the hood of this method are batching, sharding, and prefetching. In the code snippet above, dataset is batched by global_batch_size, and calling experimental_distribute_dataset on it rebatches dataset to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. x is a tf.distribute.DistributedValues containing data for all replicas, and each replica gets data of the new batch size. tf.distribute.Strategy.run will take care of feeding the right per-replica data in x to the right replica_fn executed on each replica. Sharding contains autosharding across multiple workers and within every worker. First, in multi-worker distributed training (i.e. when you use tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right tf.data.experimental.AutoShardPolicy is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using tf.data.experimental.DistributeOptions. Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding.
Note: for autosharding across multiple workers, the default mode is tf.data.experimental.AutoShardPolicy.AUTO. This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. tf.data.TFRecordDataset, tf.data.TextLineDataset, etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. However, if you have less than one input file per worker, we suggest that you disable dataset autosharding across workers by setting the tf.data.experimental.DistributeOptions.auto_shard_policy to be tf.data.experimental.AutoShardPolicy.OFF.
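A minimal sketch of turning autosharding off through dataset options, assuming an existing dataset named dataset:
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = (
    tf.data.experimental.AutoShardPolicy.OFF)
dataset = dataset.with_options(options)
dist_dataset = strategy.experimental_distribute_dataset(dataset)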
By default, this method adds a prefetch transformation at the end of the user provided tf.data.Dataset instance. The argument to the prefetch transformation which is buffer_size is equal to the number of replicas in sync. If the above batch splitting and dataset sharding logic is undesirable, please use tf.distribute.Strategy.distribute_datasets_from_function instead, which does not do any automatic batching or sharding for you.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset tf.data.Dataset that will be sharded across all replicas using the rules stated above.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_distribute_values_from_function View source
experimental_distribute_values_from_function(
value_fn
)
Generates tf.distribute.DistributedValues from value_fn. This function is to generate tf.distribute.DistributedValues to pass into run, reduce, or other methods that take distributed values when not using datasets.
Args
value_fn The function to run to generate values. It is called for each replica with tf.distribute.ValueContext as the sole argument. It must return a Tensor or a type that can be converted to a Tensor.
Returns A tf.distribute.DistributedValues containing a value for each replica.
Example usage: Return constant value per replica:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
return tf.constant(1.)
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(<tf.Tensor: shape=(), dtype=float32, numpy=1.0>,
<tf.Tensor: shape=(), dtype=float32, numpy=1.0>)
Distribute values in array based on replica_id:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
array_value = np.array([3., 2., 1.])
def value_fn(ctx):
return array_value[ctx.replica_id_in_sync_group]
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(3.0, 2.0)
Specify values using num_replicas_in_sync:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
return ctx.num_replicas_in_sync
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(2, 2)
Place values on devices and distribute: strategy = tf.distribute.TPUStrategy()
worker_devices = strategy.extended.worker_devices
multiple_values = []
for i in range(strategy.num_replicas_in_sync):
with tf.device(worker_devices[i]):
multiple_values.append(tf.constant(1.0))
def value_fn(ctx):
return multiple_values[ctx.replica_id_in_sync_group]
distributed_values = strategy.experimental_distribute_values_from_function(value_fn)
experimental_local_results View source
experimental_local_results(
value
)
Returns the list of all local per-replica values contained in value.
Note: This only returns values on the worker initiated by this client. When using a tf.distribute.Strategy like tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker will be its own client, and this function will only return values computed on that worker.
Args
value A value returned by experimental_run(), run(), extended.call_for_each_replica(), or a variable created in scope.
Returns A tuple of values contained in value. If value represents a single value, this returns (value,).
gather View source
gather(
value, axis
)
Gather value across replicas along axis to the current device. Given a tf.distribute.DistributedValues or tf.Tensor-like object value, this API gathers and concatenates value across replicas along the axis-th dimension. The result is copied to the "current" device which would typically be the CPU of the worker on which the program is running. For tf.distribute.TPUStrategy, it is the first TPU host. For multi-client MultiWorkerMirroredStrategy, this is CPU of each worker. This API can only be called in the cross-replica context. For a counterpart in the replica context, see tf.distribute.ReplicaContext.all_gather.
Note: For all strategies except tf.distribute.TPUStrategy, the input value on different replicas must have the same rank, and their shapes must be the same in all dimensions except the axis-th dimension. In other words, their shapes cannot be different in a dimension d where d does not equal the axis argument. For example, given a tf.distribute.DistributedValues with component tensors of shape (1, 2, 3) and (1, 3, 3) on two replicas, you can call gather(..., axis=1, ...) on it, but not gather(..., axis=0, ...) or gather(..., axis=2, ...). However, for tf.distribute.TPUStrategy.gather, all tensors must have exactly the same rank and same shape.
Note: Given a tf.distribute.DistributedValues value, its component tensors must have a non-zero rank. Otherwise, consider using tf.expand_dims before gathering them.
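A minimal sketch of the tf.expand_dims workaround for per-replica scalars (illustrative only; the gather examples below cover the general case):
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Each replica produces a scalar; expand to rank 1 so it can be gathered.
per_replica = strategy.experimental_distribute_values_from_function(
    lambda ctx: tf.expand_dims(tf.constant(ctx.replica_id_in_sync_group), axis=0))
gathered = strategy.gather(per_replica, axis=0)  # tf.Tensor([0, 1])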
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# A DistributedValues with component tensor of shape (2, 1) on each replica
distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(tf.constant([[1], [2]])))
@tf.function
def run():
return strategy.gather(distributed_values, axis=0)
run()
<tf.Tensor: shape=(4, 1), dtype=int32, numpy=
array([[1],
[2],
[1],
[2]], dtype=int32)>
Consider the following example for more combinations:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1", "GPU:2", "GPU:3"])
single_tensor = tf.reshape(tf.range(6), shape=(1,2,3))
distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(single_tensor))
@tf.function
def run(axis):
return strategy.gather(distributed_values, axis=axis)
axis=0
run(axis)
<tf.Tensor: shape=(4, 2, 3), dtype=int32, numpy=
array([[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]]], dtype=int32)>
axis=1
run(axis)
<tf.Tensor: shape=(1, 8, 3), dtype=int32, numpy=
array([[[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5]]], dtype=int32)>
axis=2
run(axis)
<tf.Tensor: shape=(1, 2, 12), dtype=int32, numpy=
array([[[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2],
[3, 4, 5, 3, 4, 5, 3, 4, 5, 3, 4, 5]]], dtype=int32)>
Args
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with tf.distribute.OneDeviceStrategy or the default strategy. The tensors that constitute the DistributedValues can only be dense tensors with non-zero rank, NOT a tf.IndexedSlices.
axis 0-D int32 Tensor. Dimension along which to gather. Must be in the range [0, rank(value)).
Returns A Tensor that's the concatenation of value across replicas along axis dimension.
reduce View source
reduce(
reduce_op, value, axis
)
Reduce value across replicas and return result on current device.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
total = strategy.reduce("SUM", per_replica_result, axis=None)
total
<tf.Tensor: shape=(), dtype=int32, numpy=1>
To see how this would look with multiple replicas, consider the same example with MirroredStrategy with 2 GPUs: strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
# Check devices on which per replica result is:
strategy.experimental_local_results(per_replica_result)[0].device
# /job:localhost/replica:0/task:0/device:GPU:0
strategy.experimental_local_results(per_replica_result)[1].device
# /job:localhost/replica:0/task:0/device:GPU:1
total = strategy.reduce("SUM", per_replica_result, axis=None)
# Check device on which reduced result is:
total.device
# /job:localhost/replica:0/task:0/device:CPU:0
This API is typically used for aggregating the results returned from different replicas, for reporting etc. For example, loss computed from different replicas can be averaged using this API before printing.
Note: The result is copied to the "current" device - which would typically be the CPU of the worker on which the program is running. For TPUStrategy, it is the first TPU host. For multi client MultiWorkerMirroredStrategy, this is CPU of each worker.
There are a number of different tf.distribute APIs for reducing values across replicas:
tf.distribute.ReplicaContext.all_reduce: This differs from Strategy.reduce in that it is for replica context and does not copy the results to the host device. all_reduce should be typically used for reductions inside the training step such as gradients.
tf.distribute.StrategyExtended.reduce_to and tf.distribute.StrategyExtended.batch_reduce_to: These APIs are more advanced versions of Strategy.reduce as they allow customizing the destination of the result. They are also called in cross replica context. What should axis be? Given a per-replica value returned by run, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly. For example, if you have a global batch size of 8 and 2 replicas, values for examples [0, 1, 2, 3] will be on replica 0 and [4, 5, 6, 7] will be on replica 1. With axis=None, reduce will aggregate only across replicas, returning [0+4, 1+5, 2+6, 3+7]. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss). strategy.reduce("sum", per_replica_result, axis=None)
Sometimes, you will want to aggregate across both the global batch and all replicas. You can get this behavior by specifying the batch dimension as the axis, typically axis=0. In this case it would return a scalar 0+1+2+3+4+5+6+7. strategy.reduce("sum", per_replica_result, axis=0)
If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify axis=0. If you specify tf.distribute.ReduceOp.MEAN, using axis=0 will use the correct denominator of 6. Contrast this with computing reduce_mean to get a scalar value on each replica and this function to average those means, which will weigh some values 1/8 and others 1/4.
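A minimal sketch of the partial-batch case described above, assuming two replicas and a global batch size of 8 over 6 elements (the comments reflect the behavior described in this section):
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.range(6).batch(8)  # a single, partial global batch
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function
def step_fn(x):
  return tf.cast(x, tf.float32)

for x in dist_dataset:
  per_replica = strategy.run(step_fn, args=(x,))
  # With axis=0, MEAN uses the correct denominator of 6 even though the
  # replicas hold 4 and 2 elements respectively.
  mean = strategy.reduce(tf.distribute.ReduceOp.MEAN, per_replica, axis=0)
  # mean == (0 + 1 + 2 + 3 + 4 + 5) / 6 == 2.5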
Args
reduce_op a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN".
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with OneDeviceStrategy or default strategy.
axis specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension).
Returns A Tensor.
run View source
run(
fn, args=(), kwargs=None, options=None
)
Invokes fn on each replica, with the given arguments. This method is the primary way to distribute your computation with a tf.distribute object. It invokes fn on each replica. If args or kwargs have tf.distribute.DistributedValues, such as those produced by a tf.distribute.DistributedDataset from tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function, when fn is executed on a particular replica, it will be executed with the component of tf.distribute.DistributedValues that correspond to that replica. fn is invoked under a replica context. fn may call tf.distribute.get_replica_context() to access members such as all_reduce. Please see the module-level docstring of tf.distribute for the concept of replica context. All arguments in args or kwargs should either be Python values of a nested structure of tensors, e.g. a list of tensors, in which case args and kwargs will be passed to the fn invoked on each replica. Or args or kwargs can be tf.distribute.DistributedValues containing tensors or composite tensors, i.e. tf.compat.v1.TensorInfo.CompositeTensor, in which case each fn call will get the component of a tf.distribute.DistributedValues corresponding to its replica. Key Point: Depending on the implementation of tf.distribute.Strategy and whether eager execution is enabled, fn may be called one or more times. If fn is annotated with tf.function or tf.distribute.Strategy.run is called inside a tf.function (eager execution is disabled inside a tf.function by default), fn is called once per replica to generate a Tensorflow graph, which will then be reused for execution with new inputs. Otherwise, if eager execution is enabled, fn will be called once per replica every step just like regular python code. Example usage: Constant tensor input.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
tensor_input = tf.constant(3.0)
@tf.function
def replica_fn(input):
return input*2.0
result = strategy.run(replica_fn, args=(tensor_input,))
result
PerReplica:{
0: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>,
1: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>
}
DistributedValues input.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
@tf.function
def run():
def value_fn(value_context):
return value_context.num_replicas_in_sync
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
def replica_fn2(input):
return input*2
return strategy.run(replica_fn2, args=(distributed_values,))
result = run()
result
<tf.Tensor: shape=(), dtype=int32, numpy=4>
Use tf.distribute.ReplicaContext to allreduce values.
strategy = tf.distribute.MirroredStrategy(["gpu:0", "gpu:1"])
@tf.function
def run():
def value_fn(value_context):
return tf.constant(value_context.replica_id_in_sync_group)
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
def replica_fn(input):
return tf.distribute.get_replica_context().all_reduce("sum", input)
return strategy.run(replica_fn, args=(distributed_values,))
result = run()
result
PerReplica:{
0: <tf.Tensor: shape=(), dtype=int32, numpy=1>,
1: <tf.Tensor: shape=(), dtype=int32, numpy=1>
}
Args
fn The function to run on each replica.
args Optional positional arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues.
kwargs Optional keyword arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues.
options An optional instance of tf.distribute.RunOptions specifying the options to run fn.
Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be tf.distribute.DistributedValues, Tensor objects, or Tensors (for example, if running on a single replica).
scope View source
scope()
Context manager to make the strategy current and distribute variables. This method returns a context manager, and is used as follows:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Variable created inside scope:
with strategy.scope():
mirrored_variable = tf.Variable(1.)
mirrored_variable
MirroredVariable:{
0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>,
1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0>
}
# Variable created outside scope:
regular_variable = tf.Variable(1.)
regular_variable
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
What happens when Strategy.scope is entered?
strategy is installed in the global context as the "current" strategy. Inside this scope, tf.distribute.get_strategy() will now return this strategy. Outside this scope, it returns the default no-op strategy. Entering the scope also enters the "cross-replica context". See tf.distribute.StrategyExtended for an explanation on cross-replica and replica contexts. Variable creation inside scope is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like MirroredStrategy, TPUStrategy and MultiWorkerMirroredStrategy create variables replicated on each replica, whereas ParameterServerStrategy creates variables on the parameter servers. This is done using a custom tf.variable_creator_scope. In some strategies, a default device scope may also be entered: in MultiWorkerMirroredStrategy, a default device scope of "/CPU:0" is entered on each worker.
Note: Entering a scope does not automatically distribute a computation, except in the case of high-level training frameworks like Keras model.fit. If you're not using model.fit, you need to use the strategy.run API to explicitly distribute that computation. See an example in the custom training loop tutorial.
What should be in scope and what should be outside? There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK). Anything that creates variables that should be distributed variables must be in strategy.scope. This can be either by directly putting it in scope, or relying on another API like strategy.run or model.fit to enter it for you. Any variable that is created outside scope will not be distributed and may have performance implications. Common things that create variables in TF: models, optimizers, metrics. These should always be created inside the scope. Another source of variable creation can be a checkpoint restore - when variables are created lazily. Note that any variable created inside a strategy captures the strategy information. So reading and writing to these variables outside the strategy.scope can also work seamlessly, without the user having to enter the scope. Some strategy APIs (such as strategy.run and strategy.reduce), which need to be called in a strategy's scope, enter the scope for you automatically, which means when using those APIs you don't need to enter the scope yourself. When a tf.keras.Model is created inside a strategy.scope, we capture this information. When high-level training framework methods such as model.compile, model.fit etc. are then called on this model, we automatically enter the scope, as well as use this strategy to distribute the training etc. See a detailed example in the distributed keras tutorial. Note that simply calling model(..) is not impacted - only high-level training framework APIs are. model.compile, model.fit, model.evaluate, model.predict and model.save can all be called inside or outside the scope. The following can be either inside or outside the scope: Creating the input datasets Defining tf.functions that represent your training step Saving APIs such as tf.saved_model.save. Loading creates variables, so that should go inside the scope if you want to train the model in a distributed way. Checkpoint saving. As mentioned above - checkpoint.restore may sometimes need to be inside scope if it creates variables.
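A minimal sketch of these placement rules with a hypothetical Keras model (variable-creating objects inside the scope, dataset creation outside):
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Dataset creation may live outside the scope.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([8, 5]), tf.zeros([8, 1]))).batch(4)

with strategy.scope():
  # Model, optimizer and metrics create variables, so they belong inside.
  model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(5,))])
  optimizer = tf.keras.optimizers.SGD(0.1)
  metric = tf.keras.metrics.MeanAbsoluteError()

# `model.compile`/`model.fit` enter the scope for you and distribute training.
model.compile(optimizer=optimizer, loss="mse", metrics=[metric])
model.fit(dataset, epochs=1)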
Returns A context manager. | tensorflow.distribute.experimental.multiworkermirroredstrategy |
tf.distribute.experimental.ParameterServerStrategy View source on GitHub A multi-worker tf.distribute strategy with parameter servers. Inherits From: Strategy
tf.distribute.experimental.ParameterServerStrategy(
cluster_resolver, variable_partitioner=None
)
Parameter server training is a common data-parallel method to scale up a machine learning model on multiple machines. A parameter server training cluster consists of workers and parameter servers. Variables are created on parameter servers and they are read and updated by workers in each step. By default, workers read and update these variables independently without synchronizing with each other. Under this configuration, it is known as asynchronous training. In TensorFlow 2, we recommend a central coordination-based architecture for parameter server training, where workers and parameter servers run a tf.distribute.Server and there is another task that creates resources on workers and parameter servers, dispatches functions, and coordinates the training. We refer to this task as "coordinator". The coordinator uses a tf.distribute.experimental.coordinator.ClusterCoordinator to coordinate the cluster, and a tf.distribute.experimental.ParameterServerStrategy to define variables on parameter servers and computation on workers. For the training to work, the coordinator dispatches tf.functions to be executed on remote workers. Upon receiving requests from the coordinator, a worker executes the tf.function by reading the variables from parameter servers, executing the ops, and updating the variables on the parameter servers. Each worker only processes the requests from the coordinator, and communicates with parameter servers, without direct interactions with other workers in the cluster. As a result, failures of some workers do not prevent the cluster from continuing the work, and this allows the cluster to train with instances that can be occasionally unavailable (e.g. preemptible or spot instances). The coordinator and parameter servers though, must be available at all times for the cluster to make progress. Note that the coordinator is not one of the training workers. Instead, it creates resources such as variables and datasets, dispatches tf.functions, saves checkpoints, and so on. In addition to workers, parameter servers and the coordinator, an optional evaluator can be run on the side that periodically reads the checkpoints saved by the coordinator and runs evaluations against each checkpoint. tf.distribute.experimental.ParameterServerStrategy has to work in conjunction with a tf.distribute.experimental.coordinator.ClusterCoordinator object. Standalone usage of tf.distribute.experimental.ParameterServerStrategy without central coordination is not supported at this time. Example code for coordinator Here's an example usage of the API, with a custom training loop to train a model. This code snippet is intended to be run on (the only) one task that is designated as the coordinator. Note that cluster_resolver, variable_partitioner, and dataset_fn arguments are explained in the following "Cluster setup", "Variable partitioning", and "Dataset preparation" sections. # Set the environment variable to allow reporting worker and ps failure to the
# coordinator. This is a short-term workaround.
os.environ["GRPC_FAIL_FAST"] = "use_caller"
# Prepare a strategy to use with the cluster and variable partitioning info.
strategy = tf.distribute.experimental.ParameterServerStrategy(
cluster_resolver=...,
variable_partitioner=...)
coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(
strategy=strategy)
# Prepare a distribute dataset that will place datasets on the workers.
distributed_dataset = coordinator.create_per_worker_dataset(dataset_fn=...)
with strategy.scope():
model = ...
optimizer, metrics = ... # Keras optimizer/metrics are great choices
checkpoint = tf.train.Checkpoint(model=model, optimizer=optimizer)
checkpoint_manager = tf.train.CheckpointManager(
checkpoint, checkpoint_dir, max_to_keep=2)
# `load_checkpoint` infers initial epoch from `optimizer.iterations`.
initial_epoch = load_checkpoint(checkpoint_manager) or 0
@tf.function
def worker_fn(iterator):
def replica_fn(inputs):
batch_data, labels = inputs
# calculate gradient, applying gradient, metrics update etc.
strategy.run(replica_fn, args=(next(iterator),))
for epoch in range(initial_epoch, num_epoch):
distributed_iterator = iter(distributed_dataset) # Reset iterator state.
for step in range(steps_per_epoch):
# Asynchronously schedule the `worker_fn` to be executed on an arbitrary
# worker. This call returns immediately.
coordinator.schedule(worker_fn, args=(distributed_iterator,))
# `join` blocks until all scheduled `worker_fn`s finish execution. Once it
# returns, we can read the metrics and save checkpoints as needed.
coordinator.join()
logging.info('Metric result: %r', metrics.result())
metrics.reset_states()
checkpoint_manager.save()
Example code for worker and parameter servers In addition to the coordinator, there should be tasks designated as "worker" or "ps". They should run the following code to start a TensorFlow server, waiting for coordinator's requests: # Set the environment variable to allow reporting worker and ps failure to the
# coordinator.
os.environ["GRPC_FAIL_FAST"] = "use_caller"
# Provide a `tf.distribute.cluster_resolver.ClusterResolver` that serves
# the cluster information. See below "Cluster setup" section.
cluster_resolver = ...
server = tf.distribute.Server(
cluster_resolver.cluster_spec(),
job_name=cluster_resolver.task_type,
task_index=cluster_resolver.task_id,
protocol="grpc")
# Blocking the process that starts a server from exiting.
server.join()
Cluster setup In order for the tasks in the cluster to know other tasks' addresses, a tf.distribute.cluster_resolver.ClusterResolver is required to be used in the coordinator, workers, and ps. The tf.distribute.cluster_resolver.ClusterResolver is responsible for providing the cluster information, as well as the task type and id of the current task. See tf.distribute.cluster_resolver.ClusterResolver for more information. If the TF_CONFIG environment variable is set, a tf.distribute.cluster_resolver.TFConfigClusterResolver should be used as well. Note that for legacy reasons, on some platforms, "chief" is used as the task type for the coordinator, as the following example demonstrates. Here we set TF_CONFIG for the task designated as a parameter server (task type "ps") and index 1 (the second task), in a cluster with 1 chief, 2 parameter servers, and 3 workers. Note that it needs to be set before the use of tf.distribute.cluster_resolver.TFConfigClusterResolver. Example code for cluster setup: os.environ['TF_CONFIG'] = '''
{
"cluster": {
"chief": ["chief.example.com:2222"],
"ps": ["ps0.example.com:2222", "ps1.example.com:2222"],
"worker": ["worker0.example.com:2222", "worker1.example.com:2222",
"worker2.example.com:2222"]
},
"task": {
"type": "ps",
"index": 1
}
}
'''
If you prefer to run the same binary for all tasks, you will need to let the binary branch into different roles at the beginning of the program: os.environ["GRPC_FAIL_FAST"] = "use_caller"
cluster_resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver()
# If coordinator, create a strategy and start the training program.
if cluster_resolver.task_type == 'chief':
strategy = tf.distribute.experimental.ParameterServerStrategy(
cluster_resolver)
...
# If worker/ps, create a server
elif cluster_resolver.task_type in ("worker", "ps"):
server = tf.distribute.Server(...)
...
Alternatively, you can also start a bunch of TensorFlow servers in advance and connect to them later. The coordinator can be in the same cluster or on any machine that has connectivity to workers and parameter server. This is covered in our guide and tutorial. Variable creation with strategy.scope() tf.distribute.experimental.ParameterServerStrategy follows the tf.distribute API contract where variable creation is expected to be inside the context manager returned by strategy.scope(), in order to be correctly placed on parameter servers in a round-robin manner: # In this example, we're assuming having 3 ps.
strategy = tf.distribute.experimental.ParameterServerStrategy(
cluster_resolver=...)
coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(
strategy=strategy)
# Variables should be created inside scope to be placed on parameter servers.
# If created outside scope such as `v1` here, it would be placed on the
# coordinator.
v1 = tf.Variable(initial_value=0.0)
with strategy.scope():
v2 = tf.Variable(initial_value=1.0)
v3 = tf.Variable(initial_value=2.0)
v4 = tf.Variable(initial_value=3.0)
v5 = tf.Variable(initial_value=4.0)
# v2 through v5 are created in scope and are distributed on parameter servers.
# Default placement is round-robin but the order should not be relied on.
assert v2.device == "/job:ps/replica:0/task:0/device:CPU:0"
assert v3.device == "/job:ps/replica:0/task:1/device:CPU:0"
assert v4.device == "/job:ps/replica:0/task:2/device:CPU:0"
assert v5.device == "/job:ps/replica:0/task:0/device:CPU:0"
See distribute.Strategy.scope for more information. Variable partitioning Having dedicated servers to store variables means being able to divide up, or "shard", the variables across the ps. Partitioning a large variable among ps is a commonly used technique to boost training throughput and mitigate memory constraints. It enables parallel computations and updates on different shards of a variable, and often yields better load balancing across parameter servers. Without sharding, models with large variables (e.g., embeddings) that can't fit into one machine's memory would otherwise be unable to train. With tf.distribute.experimental.ParameterServerStrategy, if a variable_partitioner is provided to __init__ and certain conditions are satisfied, the resulting variables created in scope are sharded across the parameter servers, in a round-robin fashion. The variable reference returned from tf.Variable becomes a type that serves as the container of the sharded variables. One can access the variables attribute of this container for the actual variable components. If building a model with tf.Module or Keras, the variable components are collected in the variables-like attributes. class Dense(tf.Module):
def __init__(self, name=None):
super().__init__(name=name)
self.w = tf.Variable(tf.random.normal([100, 10]), name='w')
def __call__(self, x):
return x * self.w
# Partition the dense layer into 2 shards.
variable_partitioner = (
tf.distribute.experimental.partitioners.FixedShardsPartitioner(
num_shards = 2))
strategy = tf.distribute.experimental.ParameterServerStrategy(cluster_resolver=...,
variable_partitioner = variable_partitioner)
with strategy.scope():
dense = Dense()
assert len(dense.variables) == 2
assert isinstance(dense.variables[0], tf.Variable)
assert isinstance(dense.variables[1], tf.Variable)
assert dense.variables[0].name == "w/part_0"
assert dense.variables[1].name == "w/part_1"
The sharded variable container can be converted to a Tensor via tf.convert_to_tensor. This means the container can be directly used in most Python Ops where such Tensor conversion automatically happens. For example, in the above code snippet, x * self.w would implicitly apply the said tensor conversion. Note that such conversion can be expensive, as the variable components need to be transferred from multiple parameter servers to where the value is used. tf.nn.embedding_lookup on the other hand doesn't apply the tensor conversion, and performs parallel lookups on the variable components instead. This is crucial to scale up embedding lookups when the embedding table variable is large. When a partitioned variable is saved to SavedModel, it will be saved as if it is one single variable. This improves serving efficiency by eliminating a number of Ops that handle the partition aspects. Known limitations of variable partitioning: The number of partitions must not change across Checkpoint save/load. After saving partitioned variables to a SavedModel, the SavedModel can't be loaded via tf.saved_model.load. Partitioned variables don't directly work with tf.GradientTape; please use the variables attributes to get the actual variable components and use them in gradient APIs instead. Dataset preparation With tf.distribute.experimental.ParameterServerStrategy, a dataset is created in each of the workers to be used for training. This is done by creating a dataset_fn that takes no argument and returns a tf.data.Dataset, and passing the dataset_fn into tf.distribute.experimental.coordinator.ClusterCoordinator.create_per_worker_dataset. We recommend the dataset to be shuffled and repeated to have the examples run through the training as evenly as possible. def dataset_fn():
filenames = ...
dataset = tf.data.Dataset.from_tensor_slices(filenames)
# Dataset is recommended to be shuffled, and repeated.
return dataset.shuffle(buffer_size=...).repeat().batch(batch_size=...)
coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(strategy=...)
distributed_dataset = coordinator.create_per_worker_dataset(dataset_fn)
Limitations tf.distribute.experimental.ParameterServerStrategy in TF2 is experimental, and the API is subject to further changes. tf.distribute.experimental.ParameterServerStrategy does not yet support training with GPU(s). This is a feature request being developed. tf.distribute.experimental.ParameterServerStrategy only supports custom training loop API currently in TF2. Usage of it with Keras compile/fit API is being developed. tf.distribute.experimental.ParameterServerStrategy must be used with tf.distribute.experimental.coordinator.ClusterCoordinator.
Args
cluster_resolver a tf.distribute.cluster_resolver.ClusterResolver object.
variable_partitioner a distribute.experimental.partitioners.Partitioner that specifies how to partition variables. If None, variables will not be partitioned. Predefined partitioners in tf.distribute.experimental.partitioners can be used for this argument. A commonly used partitioner is MinSizePartitioner(min_shard_bytes = 256 << 10, max_shards = num_ps), which allocates at least 256K per shard, and each ps gets at most one shard. variable_partitioner will be called for each variable created under strategy scope to instruct how the variable should be partitioned. Variables that have only one partition along the partitioning axis (i.e., no need for partition) will be created as normal tf.Variable. Only the first / outermost axis partitioning is supported. Div partition strategy is used to partition variables. Assuming we assign consecutive integer ids along the first axis of a variable, then ids are assigned to shards in a contiguous manner, while attempting to keep each shard size identical. If the ids do not evenly divide the number of shards, each of the first several shards will be assigned one more id. For instance, a variable whose first dimension is 13 has 13 ids, and they are split across 5 shards as: [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]. Variables created under strategy.extended.colocate_vars_with will not be partitioned.
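To make the div partition scheme above concrete, here is a small standalone sketch (not part of the tf.distribute API) that reproduces the assignment of 13 ids to 5 shards described in the variable_partitioner argument.
def div_partition(num_ids, num_shards):
  # Assign consecutive ids to shards contiguously; the first `remainder`
  # shards each receive one extra id.
  base, remainder = divmod(num_ids, num_shards)
  shards, start = [], 0
  for shard_index in range(num_shards):
    size = base + (1 if shard_index < remainder else 0)
    shards.append(list(range(start, start + size)))
    start += size
  return shards

print(div_partition(13, 5))
# [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]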
Attributes
cluster_resolver Returns the cluster resolver associated with this strategy. In general, when using a multi-worker tf.distribute strategy such as tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy(), there is a tf.distribute.cluster_resolver.ClusterResolver associated with the strategy used, and such an instance is returned by this property. Strategies that intend to have an associated tf.distribute.cluster_resolver.ClusterResolver must set the relevant attribute, or override this property; otherwise, None is returned by default. Those strategies should also provide information regarding what is returned by this property. Single-worker strategies usually do not have a tf.distribute.cluster_resolver.ClusterResolver, and in those cases this property will return None. The tf.distribute.cluster_resolver.ClusterResolver may be useful when the user needs to access information such as the cluster spec, task type or task id. For example,
os.environ['TF_CONFIG'] = json.dumps({
'cluster': {
'worker': ["localhost:12345", "localhost:23456"],
'ps': ["localhost:34567"]
},
'task': {'type': 'worker', 'index': 0}
})
# This implicitly uses TF_CONFIG for the cluster and current task info.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
...
if strategy.cluster_resolver.task_type == 'worker':
# Perform something that's only applicable on workers. Since we set this
# as a worker above, this block will run on this particular instance.
elif strategy.cluster_resolver.task_type == 'ps':
# Perform something that's only applicable on parameter servers. Since we
# set this as a worker above, this block will not run on this particular
# instance.
For more information, please see tf.distribute.cluster_resolver.ClusterResolver's API docstring.
extended tf.distribute.StrategyExtended with additional methods.
num_replicas_in_sync Returns number of replicas over which gradients are aggregated. Methods distribute_datasets_from_function View source
distribute_datasets_from_function(
dataset_fn, options=None
)
Distributes tf.data.Dataset instances created by calls to dataset_fn. The argument dataset_fn that users pass in is an input function that has a tf.distribute.InputContext argument and returns a tf.data.Dataset instance. It is expected that the returned dataset from dataset_fn is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. tf.distribute.Strategy.distribute_datasets_from_function does not batch or shard the tf.data.Dataset instance returned from the input function. dataset_fn will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the Dataset every step). This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, tf.distribute.experimental_distribute_dataset does batching and sharding for you.) For example, where experimental_distribute_dataset is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in experimental_distribute_dataset). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. The dataset_fn should take a tf.distribute.InputContext instance where information about batching and input replication can be accessed. You can use the element_spec property of the tf.distribute.DistributedDataset returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function. Follow tf.distribute.DistributedDataset.element_spec to see an example. Key Point: The tf.data.Dataset returned by dataset_fn should have a per-replica batch size, unlike experimental_distribute_dataset, which uses the global batch size. This may be computed using input_context.get_per_replica_batch_size.
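For illustration, a hedged sketch of a dataset_fn that derives the per-replica batch size from the tf.distribute.InputContext and shards the data manually; the global batch size and the synthetic range dataset are assumptions made for the example, and `strategy` is assumed to be an existing strategy instance.
global_batch_size = 64  # illustrative value

def dataset_fn(input_context):
  # Per-replica batch size = global batch size / number of replicas in sync.
  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
  dataset = tf.data.Dataset.range(1000)
  # Give each input pipeline a disjoint slice of the data.
  dataset = dataset.shard(input_context.num_input_pipelines,
                          input_context.input_pipeline_id)
  return dataset.batch(batch_size).prefetch(2)

dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)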
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset_fn A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_distribute_dataset View source
experimental_distribute_dataset(
dataset, options=None
)
Creates tf.distribute.DistributedDataset from tf.data.Dataset. The returned tf.distribute.DistributedDataset can be iterated over similar to regular datasets. NOTE: The user cannot add any more transformations to a tf.distribute.DistributedDataset. You can only create an iterator or examine the tf.TypeSpec of the data generated by it. See API docs of tf.distribute.DistributedDataset to learn more. The following is an example:
global_batch_size = 2
# Passing the devices is optional.
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
# Create a dataset
dataset = tf.data.Dataset.range(4).batch(global_batch_size)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)
@tf.function
def replica_fn(input):
return input*2
result = []
# Iterate over the `tf.distribute.DistributedDataset`
for x in dist_dataset:
# process dataset elements
result.append(strategy.run(replica_fn, args=(x,)))
print(result)
[PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])>
}, PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])>
}]
Three key actions happening under the hood of this method are batching, sharding, and prefetching. In the code snippet above, dataset is batched by global_batch_size, and calling experimental_distribute_dataset on it rebatches dataset to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. x is a tf.distribute.DistributedValues containing data for all replicas, and each replica gets data of the new batch size. tf.distribute.Strategy.run will take care of feeding the right per-replica data in x to the right replica_fn executed on each replica. Sharding contains autosharding across multiple workers and within every worker. First, in multi-worker distributed training (i.e. when you use tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right tf.data.experimental.AutoShardPolicy is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using tf.data.experimental.DistributeOptions. Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding.
Note: for autosharding across multiple workers, the default mode is tf.data.experimental.AutoShardPolicy.AUTO. This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. tf.data.TFRecordDataset, tf.data.TextLineDataset, etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. However, if you have less than one input file per worker, we suggest that you disable dataset autosharding across workers by setting the tf.data.experimental.DistributeOptions.auto_shard_policy to be tf.data.experimental.AutoShardPolicy.OFF.
By default, this method adds a prefetch transformation at the end of the user-provided tf.data.Dataset instance. The buffer_size argument to the prefetch transformation is equal to the number of replicas in sync. If the above batch splitting and dataset sharding logic is undesirable, please use tf.distribute.Strategy.distribute_datasets_from_function instead, which does not do any automatic batching or sharding for you.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset tf.data.Dataset that will be sharded across all replicas using the rules stated above.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_distribute_values_from_function View source
experimental_distribute_values_from_function(
value_fn
)
Generates tf.distribute.DistributedValues from value_fn. This function is to generate tf.distribute.DistributedValues to pass into run, reduce, or other methods that take distributed values when not using datasets.
Args
value_fn The function to run to generate values. It is called for each replica with tf.distribute.ValueContext as the sole argument. It must return a Tensor or a type that can be converted to a Tensor.
Returns A tf.distribute.DistributedValues containing a value for each replica.
Example usage: Return constant value per replica:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
return tf.constant(1.)
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(<tf.Tensor: shape=(), dtype=float32, numpy=1.0>,
<tf.Tensor: shape=(), dtype=float32, numpy=1.0>)
Distribute values in array based on replica_id:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
array_value = np.array([3., 2., 1.])
def value_fn(ctx):
return array_value[ctx.replica_id_in_sync_group]
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(3.0, 2.0)
Specify values using num_replicas_in_sync:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
return ctx.num_replicas_in_sync
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(2, 2)
Place values on devices and distribute: strategy = tf.distribute.TPUStrategy()
worker_devices = strategy.extended.worker_devices
multiple_values = []
for i in range(strategy.num_replicas_in_sync):
with tf.device(worker_devices[i]):
multiple_values.append(tf.constant(1.0))
def value_fn(ctx):
return multiple_values[ctx.replica_id_in_sync_group]
distributed_values = strategy.experimental_distribute_values_from_function(
    value_fn)
experimental_local_results View source
experimental_local_results(
value
)
Returns the list of all local per-replica values contained in value.
Note: This only returns values on the worker initiated by this client. When using a tf.distribute.Strategy like tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker will be its own client, and this function will only return values computed on that worker.
Args
value A value returned by experimental_run(), run(), extended.call_for_each_replica(), or a variable created in scope.
Returns A tuple of values contained in value. If value represents a single value, this returns (value,).
gather View source
gather(
value, axis
)
Gather value across replicas along axis to the current device. Given a tf.distribute.DistributedValues or tf.Tensor-like object value, this API gathers and concatenates value across replicas along the axis-th dimension. The result is copied to the "current" device which would typically be the CPU of the worker on which the program is running. For tf.distribute.TPUStrategy, it is the first TPU host. For multi-client MultiWorkerMirroredStrategy, this is CPU of each worker. This API can only be called in the cross-replica context. For a counterpart in the replica context, see tf.distribute.ReplicaContext.all_gather.
Note: For all strategies except tf.distribute.TPUStrategy, the input value on different replicas must have the same rank, and their shapes must be the same in all dimensions except the axis-th dimension. In other words, their shapes cannot be different in a dimension d where d does not equal to the axis argument. For example, given a tf.distribute.DistributedValues with component tensors of shape (1, 2, 3) and (1, 3, 3) on two replicas, you can call gather(..., axis=1, ...) on it, but not gather(..., axis=0, ...) or gather(..., axis=2, ...). However, for tf.distribute.TPUStrategy.gather, all tensors must have exactly the same rank and same shape.
Note: Given a tf.distribute.DistributedValues value, its component tensors must have a non-zero rank. Otherwise, consider using tf.expand_dims before gathering them.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# A DistributedValues with component tensor of shape (2, 1) on each replica
distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(tf.constant([[1], [2]])))
@tf.function
def run():
return strategy.gather(distributed_values, axis=0)
run()
<tf.Tensor: shape=(4, 1), dtype=int32, numpy=
array([[1],
[2],
[1],
[2]], dtype=int32)>
Consider the following example for more combinations:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1", "GPU:2", "GPU:3"])
single_tensor = tf.reshape(tf.range(6), shape=(1,2,3))
distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(single_tensor))
@tf.function
def run(axis):
return strategy.gather(distributed_values, axis=axis)
axis=0
run(axis)
<tf.Tensor: shape=(4, 2, 3), dtype=int32, numpy=
array([[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]]], dtype=int32)>
axis=1
run(axis)
<tf.Tensor: shape=(1, 8, 3), dtype=int32, numpy=
array([[[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5]]], dtype=int32)>
axis=2
run(axis)
<tf.Tensor: shape=(1, 2, 12), dtype=int32, numpy=
array([[[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2],
[3, 4, 5, 3, 4, 5, 3, 4, 5, 3, 4, 5]]], dtype=int32)>
Args
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with tf.distribute.OneDeviceStrategy or the default strategy. The tensors that constitute the DistributedValues can only be dense tensors with non-zero rank, NOT a tf.IndexedSlices.
axis 0-D int32 Tensor. Dimension along which to gather. Must be in the range [0, rank(value)).
Returns A Tensor that's the concatenation of value across replicas along axis dimension.
reduce View source
reduce(
reduce_op, value, axis
)
Reduce value across replicas and return result on current device.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
total = strategy.reduce("SUM", per_replica_result, axis=None)
total
<tf.Tensor: shape=(), dtype=int32, numpy=1>
To see how this would look with multiple replicas, consider the same example with MirroredStrategy with 2 GPUs: strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
# Check devices on which per replica result is:
strategy.experimental_local_results(per_replica_result)[0].device
# /job:localhost/replica:0/task:0/device:GPU:0
strategy.experimental_local_results(per_replica_result)[1].device
# /job:localhost/replica:0/task:0/device:GPU:1
total = strategy.reduce("SUM", per_replica_result, axis=None)
# Check device on which reduced result is:
total.device
# /job:localhost/replica:0/task:0/device:CPU:0
This API is typically used for aggregating the results returned from different replicas, for reporting etc. For example, loss computed from different replicas can be averaged using this API before printing.
Note: The result is copied to the "current" device - which would typically be the CPU of the worker on which the program is running. For TPUStrategy, it is the first TPU host. For multi-client MultiWorkerMirroredStrategy, this is the CPU of each worker.
There are a number of different tf.distribute APIs for reducing values across replicas:
tf.distribute.ReplicaContext.all_reduce: This differs from Strategy.reduce in that it is for replica context and does not copy the results to the host device. all_reduce should be typically used for reductions inside the training step such as gradients.
tf.distribute.StrategyExtended.reduce_to and tf.distribute.StrategyExtended.batch_reduce_to: These APIs are more advanced versions of Strategy.reduce as they allow customizing the destination of the result. They are also called in cross replica context. What should axis be? Given a per-replica value returned by run, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly. For example, if you have a global batch size of 8 and 2 replicas, values for examples [0, 1, 2, 3] will be on replica 0 and [4, 5, 6, 7] will be on replica 1. With axis=None, reduce will aggregate only across replicas, returning [0+4, 1+5, 2+6, 3+7]. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss). strategy.reduce("sum", per_replica_result, axis=None)
Sometimes, you will want to aggregate across both the global batch and all replicas. You can get this behavior by specifying the batch dimension as the axis, typically axis=0. In this case it would return a scalar 0+1+2+3+4+5+6+7. strategy.reduce("sum", per_replica_result, axis=0)
If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify axis=0. If you specify tf.distribute.ReduceOp.MEAN, using axis=0 will use the correct denominator of 6. Contrast this with computing reduce_mean to get a scalar value on each replica and this function to average those means, which will weigh some values 1/8 and others 1/4.
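A minimal sketch of the two reductions discussed above, assuming two replicas where each replica's step returns four per-example values (a global batch of 8):
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])

def step_fn():
  # Replica 0 produces [0, 1, 2, 3]; replica 1 produces [4, 5, 6, 7].
  replica_id = tf.distribute.get_replica_context().replica_id_in_sync_group
  return tf.cast(tf.range(4) + replica_id * 4, tf.float32)

per_replica_result = strategy.run(step_fn)
# Aggregate across replicas only: element-wise sum, result keeps shape [4].
print(strategy.reduce("SUM", per_replica_result, axis=None))  # [4. 6. 8. 10.]
# Aggregate across replicas and the batch dimension: a scalar.
print(strategy.reduce("SUM", per_replica_result, axis=0))     # 28.0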
Args
reduce_op a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN".
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with OneDeviceStrategy or default strategy.
axis specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension).
Returns A Tensor.
run View source
run(
fn, args=(), kwargs=None, options=None
)
Invokes fn on each replica, with the given arguments. This method is the primary way to distribute your computation with a tf.distribute object. It invokes fn on each replica. If args or kwargs have tf.distribute.DistributedValues, such as those produced by a tf.distribute.DistributedDataset from tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function, when fn is executed on a particular replica, it will be executed with the component of tf.distribute.DistributedValues that correspond to that replica. fn is invoked under a replica context. fn may call tf.distribute.get_replica_context() to access members such as all_reduce. Please see the module-level docstring of tf.distribute for the concept of replica context. All arguments in args or kwargs should either be Python values of a nested structure of tensors, e.g. a list of tensors, in which case args and kwargs will be passed to the fn invoked on each replica. Or args or kwargs can be tf.distribute.DistributedValues containing tensors or composite tensors, i.e. tf.compat.v1.TensorInfo.CompositeTensor, in which case each fn call will get the component of a tf.distribute.DistributedValues corresponding to its replica. Key Point: Depending on the implementation of tf.distribute.Strategy and whether eager execution is enabled, fn may be called one or more times. If fn is annotated with tf.function or tf.distribute.Strategy.run is called inside a tf.function (eager execution is disabled inside a tf.function by default), fn is called once per replica to generate a Tensorflow graph, which will then be reused for execution with new inputs. Otherwise, if eager execution is enabled, fn will be called once per replica every step just like regular python code. Example usage: Constant tensor input.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
tensor_input = tf.constant(3.0)
@tf.function
def replica_fn(input):
return input*2.0
result = strategy.run(replica_fn, args=(tensor_input,))
result
PerReplica:{
0: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>,
1: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>
}
DistributedValues input.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
@tf.function
def run():
def value_fn(value_context):
return value_context.num_replicas_in_sync
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
def replica_fn2(input):
return input*2
return strategy.run(replica_fn2, args=(distributed_values,))
result = run()
result
<tf.Tensor: shape=(), dtype=int32, numpy=4>
Use tf.distribute.ReplicaContext to allreduce values.
strategy = tf.distribute.MirroredStrategy(["gpu:0", "gpu:1"])
@tf.function
def run():
def value_fn(value_context):
return tf.constant(value_context.replica_id_in_sync_group)
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
def replica_fn(input):
return tf.distribute.get_replica_context().all_reduce("sum", input)
return strategy.run(replica_fn, args=(distributed_values,))
result = run()
result
PerReplica:{
0: <tf.Tensor: shape=(), dtype=int32, numpy=1>,
1: <tf.Tensor: shape=(), dtype=int32, numpy=1>
}
Args
fn The function to run on each replica.
args Optional positional arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues.
kwargs Optional keyword arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues.
options An optional instance of tf.distribute.RunOptions specifying the options to run fn.
Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be tf.distribute.DistributedValues, Tensor objects, or Tensors (for example, if running on a single replica).
scope View source
scope()
Context manager to make the strategy current and distribute variables. This method returns a context manager, and is used as follows:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Variable created inside scope:
with strategy.scope():
mirrored_variable = tf.Variable(1.)
mirrored_variable
MirroredVariable:{
0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>,
1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0>
}
# Variable created outside scope:
regular_variable = tf.Variable(1.)
regular_variable
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
What happens when Strategy.scope is entered?
strategy is installed in the global context as the "current" strategy. Inside this scope, tf.distribute.get_strategy() will now return this strategy. Outside this scope, it returns the default no-op strategy. Entering the scope also enters the "cross-replica context". See tf.distribute.StrategyExtended for an explanation on cross-replica and replica contexts. Variable creation inside scope is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like MirroredStrategy, TPUStrategy and MultiWorkerMirroredStrategy create variables replicated on each replica, whereas ParameterServerStrategy creates variables on the parameter servers. This is done using a custom tf.variable_creator_scope. In some strategies, a default device scope may also be entered: in MultiWorkerMirroredStrategy, a default device scope of "/CPU:0" is entered on each worker.
Note: Entering a scope does not automatically distribute a computation, except in the case of high-level training frameworks like Keras model.fit. If you're not using model.fit, you need to use the strategy.run API to explicitly distribute that computation. See an example in the custom training loop tutorial.
What should be in scope and what should be outside? There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK). Anything that creates variables that should be distributed variables must be in strategy.scope. This can be done either by directly putting it in scope, or by relying on another API like strategy.run or model.fit to enter it for you. Any variable that is created outside scope will not be distributed and may have performance implications. Common things that create variables in TF: models, optimizers, metrics. These should always be created inside the scope. Another source of variable creation can be a checkpoint restore - when variables are created lazily. Note that any variable created inside a strategy captures the strategy information. So reading and writing to these variables outside the strategy.scope can also work seamlessly, without the user having to enter the scope. Some strategy APIs (such as strategy.run and strategy.reduce), which need to be in a strategy's scope, enter the scope for you automatically, which means when using those APIs you don't need to enter the scope yourself. When a tf.keras.Model is created inside a strategy.scope, we capture this information. When high-level training framework methods such as model.compile, model.fit, etc. are then called on this model, we automatically enter the scope, as well as use this strategy to distribute the training etc. See the detailed example in the distributed keras tutorial. Note that simply calling the model(..) is not impacted - only high-level training framework APIs are. model.compile, model.fit, model.evaluate, model.predict and model.save can all be called inside or outside the scope. The following can be either inside or outside the scope: Creating the input datasets Defining tf.functions that represent your training step Saving APIs such as tf.saved_model.save. Loading creates variables, so that should go inside the scope if you want to train the model in a distributed way. Checkpoint saving. As mentioned above - checkpoint.restore may sometimes need to be inside scope if it creates variables.
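A small sketch of the "current strategy" behavior described above; inside the scope tf.distribute.get_strategy() returns the installed strategy, while outside it returns the default strategy:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
with strategy.scope():
  # The installed strategy is current inside the scope.
  assert tf.distribute.get_strategy() is strategy
  v = tf.Variable(1.0)  # created as a distributed (mirrored) variable
# Outside the scope, the default no-op strategy is current again.
assert tf.distribute.get_strategy() is not strategy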
Returns A context manager. | tensorflow.distribute.experimental.parameterserverstrategy |
Module: tf.distribute.experimental.partitioners Public API for tf.distribute.experimental.partitioners namespace. Classes class FixedShardsPartitioner: Partitioner that allocates a fixed number of shards. class MaxSizePartitioner: Partitioner that keeps shards below max_shard_bytes. class MinSizePartitioner: Partitioner that allocates a minimum size per shard. class Partitioner: Partitioner base class: all partitioners inherit from this class. | tensorflow.distribute.experimental.partitioners
tf.distribute.experimental.partitioners.FixedShardsPartitioner Partitioner that allocates a fixed number of shards. Inherits From: Partitioner
tf.distribute.experimental.partitioners.FixedShardsPartitioner(
num_shards
)
Examples:
# standalone usage:
partitioner = FixedShardsPartitioner(num_shards=2)
partitions = partitioner(tf.TensorShape([10, 3]), tf.float32)
[2, 1]
# use in ParameterServerStrategy
# strategy = tf.distribute.experimental.ParameterServerStrategy(
# cluster_resolver=cluster_resolver, variable_partitioner=partitioner)
Args
num_shards int, number of shards to partition. Methods __call__ View source
__call__(
shape, dtype, axis=0
)
Partitions the given shape and returns the partition results. Examples of a partitioner that allocates a fixed number of shards: partitioner = FixedShardsPartitioner(num_shards=2)
partitions = partitioner(tf.TensorShape([10, 3]), tf.float32, axis=0)
print(partitions) # [2, 1]
Args
shape a tf.TensorShape, the shape to partition.
dtype a tf.dtypes.Dtype indicating the type of the partition value.
axis The axis to partition along. Default: outermost axis.
Returns A list of integers representing the number of partitions on each axis, where the i-th value corresponds to the i-th axis. | tensorflow.distribute.experimental.partitioners.fixedshardspartitioner
tf.distribute.experimental.partitioners.MaxSizePartitioner Partitioner that keeps shards below max_shard_bytes. Inherits From: Partitioner
tf.distribute.experimental.partitioners.MaxSizePartitioner(
max_shard_bytes, max_shards=None, bytes_per_string=16
)
This partitioner ensures each shard has at most max_shard_bytes, and tries to allocate as few shards as possible, i.e., keeping shard size as large as possible. If the partitioner hits the max_shards limit, then each shard may end up larger than max_shard_bytes. By default max_shards equals None and no limit on the number of shards is enforced. Examples:
partitioner = MaxSizePartitioner(max_shard_bytes=4)
partitions = partitioner(tf.TensorShape([6, 1]), tf.float32)
[6, 1]
partitioner = MaxSizePartitioner(max_shard_bytes=4, max_shards=2)
partitions = partitioner(tf.TensorShape([6, 1]), tf.float32)
[2, 1]
partitioner = MaxSizePartitioner(max_shard_bytes=1024)
partitions = partitioner(tf.TensorShape([6, 1]), tf.float32)
[1, 1]
# use in ParameterServerStrategy
# strategy = tf.distribute.experimental.ParameterServerStrategy(
# cluster_resolver=cluster_resolver, variable_partitioner=partitioner)
Args
max_shard_bytes The maximum size any given shard is allowed to be.
max_shards The maximum number of shards (an int) to create; takes precedence over max_shard_bytes.
bytes_per_string If the partition value is of type string, this provides an estimate of how large each string is. Methods __call__ View source
__call__(
shape, dtype, axis=0
)
Partitions the given shape and returns the partition results. Examples of a partitioner that allocates a fixed number of shards: partitioner = FixedShardsPartitioner(num_shards=2)
partitions = partitioner(tf.TensorShape([10, 3]), tf.float32, axis=0)
print(partitions) # [2, 1]
Args
shape a tf.TensorShape, the shape to partition.
dtype a tf.dtypes.Dtype indicating the type of the partition value.
axis The axis to partition along. Default: outermost axis.
Returns A list of integers representing the number of partitions on each axis, where the i-th value corresponds to the i-th axis. | tensorflow.distribute.experimental.partitioners.maxsizepartitioner
tf.distribute.experimental.partitioners.MinSizePartitioner Partitioner that allocates a minimum size per shard. Inherits From: Partitioner
tf.distribute.experimental.partitioners.MinSizePartitioner(
min_shard_bytes=(256 << 10), max_shards=1, bytes_per_string=16
)
This partitioner ensures each shard has at least min_shard_bytes, and tries to allocate as many shards as possible, i.e., keeping shard size as small as possible. The maximum number of such shards (upper bound) is given by max_shards. Examples:
partitioner = MinSizePartitioner(min_shard_bytes=4, max_shards=2)
partitions = partitioner(tf.TensorShape([6, 1]), tf.float32)
[2, 1]
partitioner = MinSizePartitioner(min_shard_bytes=4, max_shards=10)
partitions = partitioner(tf.TensorShape([6, 1]), tf.float32)
[6, 1]
# use in ParameterServerStrategy
# strategy = tf.distribute.experimental.ParameterServerStrategy(
# cluster_resolver=cluster_resolver, variable_partitioner=partitioner)
Args
min_shard_bytes Minimum bytes of each shard. Defaults to 256K.
max_shards Upper bound on the number of shards. Defaults to 1.
bytes_per_string If the partition value is of type string, this provides an estimate of how large each string is. Methods __call__ View source
__call__(
shape, dtype, axis=0
)
Partitions the given shape and returns the partition results. Examples of a partitioner that allocates a fixed number of shards: partitioner = FixedShardsPartitioner(num_shards=2)
partitions = partitioner(tf.TensorShape([10, 3]), tf.float32, axis=0)
print(partitions) # [2, 1]
Args
shape a tf.TensorShape, the shape to partition.
dtype a tf.dtypes.Dtype indicating the type of the partition value.
axis The axis to partition along. Default: outermost axis.
Returns A list of integers representing the number of partitions on each axis, where the i-th value corresponds to the i-th axis. | tensorflow.distribute.experimental.partitioners.minsizepartitioner
tf.distribute.experimental.partitioners.Partitioner Partitioner base class: all partitioners inherit from this class. Partitioners should implement a __call__ method with the following signature: def __call__(self, shape, dtype, axis=0):
# Partitions the given `shape` and returns the partition results.
# See docstring of `__call__` method for the format of partition results.
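A hedged sketch of a custom partitioner implementing this signature; the class name and the rows-per-shard heuristic are hypothetical, and the behavior roughly mirrors FixedShardsPartitioner along the chosen axis.
class RowsPerShardPartitioner(tf.distribute.experimental.partitioners.Partitioner):
  """Hypothetical partitioner: aims for roughly `rows_per_shard` rows per shard."""

  def __init__(self, rows_per_shard):
    self._rows_per_shard = rows_per_shard

  def __call__(self, shape, dtype, axis=0):
    del dtype  # Not used by this sketch.
    result = [1] * len(shape)
    result[axis] = max(1, int(shape[axis]) // self._rows_per_shard)
    return result

partitioner = RowsPerShardPartitioner(rows_per_shard=5)
print(partitioner(tf.TensorShape([10, 3]), tf.float32))  # [2, 1]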
Methods __call__ View source
__call__(
shape, dtype, axis=0
)
Partitions the given shape and returns the partition results. Examples of a partitioner that allocates a fixed number of shards: partitioner = FixedShardsPartitioner(num_shards=2)
partitions = partitioner(tf.TensorShape([10, 3]), tf.float32, axis=0)
print(partitions) # [2, 1]
Args
shape a tf.TensorShape, the shape to partition.
dtype a tf.dtypes.Dtype indicating the type of the partition value.
axis The axis to partition along. Default: outermost axis.
Returns A list of integers representing the number of partitions on each axis, where the i-th value corresponds to the i-th axis. | tensorflow.distribute.experimental.partitioners.partitioner
tf.distribute.experimental.TPUStrategy View source on GitHub Synchronous training on TPUs and TPU Pods. Inherits From: Strategy
tf.distribute.experimental.TPUStrategy(
tpu_cluster_resolver=None, device_assignment=None
)
To construct a TPUStrategy object, you need to run the initialization code as below:
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
While using distribution strategies, the variables created within the strategy's scope will be replicated across all the replicas and can be kept in sync using all-reduce algorithms. To run TF2 programs on TPUs, you can either use .compile and .fit APIs in tf.keras with TPUStrategy, or write your own customized training loop by calling strategy.run directly. Note that TPUStrategy doesn't support pure eager execution, so please make sure the function passed into strategy.run is a tf.function or strategy.run is called inside a tf.function if eager behavior is enabled.
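A hedged sketch of the custom training loop pattern mentioned above: the per-replica step runs inside a tf.function and is dispatched with strategy.run; the model, loss and optimizer are placeholders chosen for the example.
# Sketch: assumes `strategy` is the TPUStrategy constructed above.
with strategy.scope():
  model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
  optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

@tf.function
def train_step(dist_inputs):
  def replica_fn(inputs):
    features, labels = inputs
    with tf.GradientTape() as tape:
      predictions = model(features, training=True)
      loss = tf.reduce_mean(tf.square(predictions - labels))
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss
  # `strategy.run` must be called from inside a tf.function when using TPUs.
  per_replica_loss = strategy.run(replica_fn, args=(dist_inputs,))
  return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_loss, axis=None)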
Args
tpu_cluster_resolver A tf.distribute.cluster_resolver.TPUClusterResolver, which provides information about the TPU cluster.
device_assignment Optional tf.tpu.experimental.DeviceAssignment to specify the placement of replicas on the TPU cluster.
Attributes
cluster_resolver Returns the cluster resolver associated with this strategy. tf.distribute.experimental.TPUStrategy provides the associated tf.distribute.cluster_resolver.ClusterResolver. If the user provides one in __init__, that instance is returned; if the user does not, a default tf.distribute.cluster_resolver.TPUClusterResolver is provided.
extended tf.distribute.StrategyExtended with additional methods.
num_replicas_in_sync Returns number of replicas over which gradients are aggregated. Methods distribute_datasets_from_function View source
distribute_datasets_from_function(
dataset_fn, options=None
)
Distributes tf.data.Dataset instances created by calls to dataset_fn. The argument dataset_fn that users pass in is an input function that has a tf.distribute.InputContext argument and returns a tf.data.Dataset instance. It is expected that the returned dataset from dataset_fn is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. tf.distribute.Strategy.distribute_datasets_from_function does not batch or shard the tf.data.Dataset instance returned from the input function. dataset_fn will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the Dataset every step). This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, tf.distribute.experimental_distribute_dataset does batching and sharding for you.) For example, where experimental_distribute_dataset is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in experimental_distribute_dataset). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. The dataset_fn should take a tf.distribute.InputContext instance where information about batching and input replication can be accessed. You can use the element_spec property of the tf.distribute.DistributedDataset returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function. Follow tf.distribute.DistributedDataset.element_spec to see an example. Key Point: The tf.data.Dataset returned by dataset_fn should have a per-replica batch size, unlike experimental_distribute_dataset, which uses the global batch size. This may be computed using input_context.get_per_replica_batch_size.
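As a variation on the point above about infinite datasets, a hedged sketch where each input pipeline differs only in its shuffle seed; the global batch size and the synthetic data are assumptions made for the example, and `strategy` is assumed to be an existing strategy instance.
global_batch_size = 128  # illustrative value

def dataset_fn(input_context):
  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
  dataset = tf.data.Dataset.range(1000)
  # For an effectively infinite dataset, differentiate the input pipelines
  # by giving each one its own shuffle seed.
  dataset = dataset.shuffle(1000, seed=input_context.input_pipeline_id).repeat()
  return dataset.batch(batch_size).prefetch(2)

dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)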
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset_fn A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_distribute_dataset View source
experimental_distribute_dataset(
dataset, options=None
)
Creates tf.distribute.DistributedDataset from tf.data.Dataset. The returned tf.distribute.DistributedDataset can be iterated over similar to regular datasets. NOTE: The user cannot add any more transformations to a tf.distribute.DistributedDataset. You can only create an iterator or examine the tf.TypeSpec of the data generated by it. See API docs of tf.distribute.DistributedDataset to learn more. The following is an example:
global_batch_size = 2
# Passing the devices is optional.
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
# Create a dataset
dataset = tf.data.Dataset.range(4).batch(global_batch_size)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)
@tf.function
def replica_fn(input):
return input*2
result = []
# Iterate over the `tf.distribute.DistributedDataset`
for x in dist_dataset:
# process dataset elements
result.append(strategy.run(replica_fn, args=(x,)))
print(result)
[PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])>
}, PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])>
}]
Three key actions happening under the hood of this method are batching, sharding, and prefetching. In the code snippet above, dataset is batched by global_batch_size, and calling experimental_distribute_dataset on it rebatches dataset to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. x is a tf.distribute.DistributedValues containing data for all replicas, and each replica gets data of the new batch size. tf.distribute.Strategy.run will take care of feeding the right per-replica data in x to the right replica_fn executed on each replica. Sharding contains autosharding across multiple workers and within every worker. First, in multi-worker distributed training (i.e. when you use tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right tf.data.experimental.AutoShardPolicy is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using tf.data.experimental.DistributeOptions. Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding.
Note: for autosharding across multiple workers, the default mode is tf.data.experimental.AutoShardPolicy.AUTO. This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. tf.data.TFRecordDataset, tf.data.TextLineDataset, etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. However, if you have less than one input file per worker, we suggest that you disable dataset autosharding across workers by setting the tf.data.experimental.DistributeOptions.auto_shard_policy to be tf.data.experimental.AutoShardPolicy.OFF.
By default, this method adds a prefetch transformation at the end of the user-provided tf.data.Dataset instance. The buffer_size argument to the prefetch transformation is equal to the number of replicas in sync. If the above batch splitting and dataset sharding logic is undesirable, please use tf.distribute.Strategy.distribute_datasets_from_function instead, which does not do any automatic batching or sharding for you.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset tf.data.Dataset that will be sharded across all replicas using the rules stated above.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_distribute_values_from_function View source
experimental_distribute_values_from_function(
value_fn
)
Generates tf.distribute.DistributedValues from value_fn. This function is to generate tf.distribute.DistributedValues to pass into run, reduce, or other methods that take distributed values when not using datasets.
Args
value_fn The function to run to generate values. It is called for each replica with tf.distribute.ValueContext as the sole argument. It must return a Tensor or a type that can be converted to a Tensor.
Returns A tf.distribute.DistributedValues containing a value for each replica.
Example usage: Return constant value per replica:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
return tf.constant(1.)
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(<tf.Tensor: shape=(), dtype=float32, numpy=1.0>,
<tf.Tensor: shape=(), dtype=float32, numpy=1.0>)
Distribute values in array based on replica_id:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
array_value = np.array([3., 2., 1.])
def value_fn(ctx):
return array_value[ctx.replica_id_in_sync_group]
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(3.0, 2.0)
Specify values using num_replicas_in_sync:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
return ctx.num_replicas_in_sync
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(2, 2)
Place values on devices and distribute: strategy = tf.distribute.TPUStrategy()
worker_devices = strategy.extended.worker_devices
multiple_values = []
for i in range(strategy.num_replicas_in_sync):
with tf.device(worker_devices[i]):
multiple_values.append(tf.constant(1.0))
def value_fn(ctx):
return multiple_values[ctx.replica_id_in_sync_group]
distributed_values = strategy.experimental_distribute_values_from_function(
    value_fn)
experimental_local_results View source
experimental_local_results(
value
)
Returns the list of all local per-replica values contained in value.
Note: This only returns values on the worker initiated by this client. When using a tf.distribute.Strategy like tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker will be its own client, and this function will only return values computed on that worker.
Args
value A value returned by experimental_run(), run(), extended.call_for_each_replica(), or a variable created in scope.
Returns A tuple of values contained in value. If value represents a single value, this returns (value,).
gather View source
gather(
value, axis
)
Gather value across replicas along axis to the current device. Given a tf.distribute.DistributedValues or tf.Tensor-like object value, this API gathers and concatenates value across replicas along the axis-th dimension. The result is copied to the "current" device which would typically be the CPU of the worker on which the program is running. For tf.distribute.TPUStrategy, it is the first TPU host. For multi-client MultiWorkerMirroredStrategy, this is CPU of each worker. This API can only be called in the cross-replica context. For a counterpart in the replica context, see tf.distribute.ReplicaContext.all_gather.
Note: For all strategies except tf.distribute.TPUStrategy, the input value on different replicas must have the same rank, and their shapes must be the same in all dimensions except the axis-th dimension. In other words, their shapes cannot be different in a dimension d where d does not equal to the axis argument. For example, given a tf.distribute.DistributedValues with component tensors of shape (1, 2, 3) and (1, 3, 3) on two replicas, you can call gather(..., axis=1, ...) on it, but not gather(..., axis=0, ...) or gather(..., axis=2, ...). However, for tf.distribute.TPUStrategy.gather, all tensors must have exactly the same rank and same shape.
Note: Given a tf.distribute.DistributedValues value, its component tensors must have a non-zero rank. Otherwise, consider using tf.expand_dims before gathering them.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# A DistributedValues with component tensor of shape (2, 1) on each replica
distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(tf.constant([[1], [2]])))
@tf.function
def run():
return strategy.gather(distributed_values, axis=0)
run()
<tf.Tensor: shape=(4, 1), dtype=int32, numpy=
array([[1],
[2],
[1],
[2]], dtype=int32)>
Consider the following example for more combinations:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1", "GPU:2", "GPU:3"])
single_tensor = tf.reshape(tf.range(6), shape=(1,2,3))
distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(single_tensor))
@tf.function
def run(axis):
return strategy.gather(distributed_values, axis=axis)
axis=0
run(axis)
<tf.Tensor: shape=(4, 2, 3), dtype=int32, numpy=
array([[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]]], dtype=int32)>
axis=1
run(axis)
<tf.Tensor: shape=(1, 8, 3), dtype=int32, numpy=
array([[[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5]]], dtype=int32)>
axis=2
run(axis)
<tf.Tensor: shape=(1, 2, 12), dtype=int32, numpy=
array([[[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2],
[3, 4, 5, 3, 4, 5, 3, 4, 5, 3, 4, 5]]], dtype=int32)>
Args
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with tf.distribute.OneDeviceStrategy or the default strategy. The tensors that constitute the DistributedValues can only be dense tensors with non-zero rank, NOT a tf.IndexedSlices.
axis 0-D int32 Tensor. Dimension along which to gather. Must be in the range [0, rank(value)).
Returns A Tensor that's the concatenation of value across replicas along axis dimension.
reduce View source
reduce(
reduce_op, value, axis
)
Reduce value across replicas and return result on current device.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
total = strategy.reduce("SUM", per_replica_result, axis=None)
total
<tf.Tensor: shape=(), dtype=int32, numpy=1>
To see how this would look with multiple replicas, consider the same example with MirroredStrategy with 2 GPUs: strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
# Check devices on which per replica result is:
strategy.experimental_local_results(per_replica_result)[0].device
# /job:localhost/replica:0/task:0/device:GPU:0
strategy.experimental_local_results(per_replica_result)[1].device
# /job:localhost/replica:0/task:0/device:GPU:1
total = strategy.reduce("SUM", per_replica_result, axis=None)
# Check device on which reduced result is:
total.device
# /job:localhost/replica:0/task:0/device:CPU:0
This API is typically used for aggregating the results returned from different replicas, for reporting etc. For example, loss computed from different replicas can be averaged using this API before printing.
Note: The result is copied to the "current" device - which would typically be the CPU of the worker on which the program is running. For TPUStrategy, it is the first TPU host. For multi-client MultiWorkerMirroredStrategy, this is the CPU of each worker.
There are a number of different tf.distribute APIs for reducing values across replicas:
tf.distribute.ReplicaContext.all_reduce: This differs from Strategy.reduce in that it is for replica context and does not copy the results to the host device. all_reduce should be typically used for reductions inside the training step such as gradients.
tf.distribute.StrategyExtended.reduce_to and tf.distribute.StrategyExtended.batch_reduce_to: These APIs are more advanced versions of Strategy.reduce as they allow customizing the destination of the result. They are also called in cross replica context. What should axis be? Given a per-replica value returned by run, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly. For example, if you have a global batch size of 8 and 2 replicas, values for examples [0, 1, 2, 3] will be on replica 0 and [4, 5, 6, 7] will be on replica 1. With axis=None, reduce will aggregate only across replicas, returning [0+4, 1+5, 2+6, 3+7]. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss). strategy.reduce("sum", per_replica_result, axis=None)
Sometimes, you will want to aggregate across both the global batch and all replicas. You can get this behavior by specifying the batch dimension as the axis, typically axis=0. In this case it would return a scalar 0+1+2+3+4+5+6+7. strategy.reduce("sum", per_replica_result, axis=0)
If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify axis=0. If you specify tf.distribute.ReduceOp.MEAN, using axis=0 will use the correct denominator of 6. Contrast this with computing reduce_mean to get a scalar value on each replica and this function to average those means, which will weigh some values 1/8 and others 1/4.
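The following minimal sketch illustrates both choices of axis, assuming a 2-GPU MirroredStrategy and a single global batch of 8 examples (the devices and dataset here are illustrative assumptions):
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.range(8).batch(8)  # one global batch: [0, ..., 7]
dist_dataset = strategy.experimental_distribute_dataset(dataset)
per_replica = next(iter(dist_dataset))  # replica 0: [0..3], replica 1: [4..7]
# Aggregate across replicas only: element-wise sum -> [0+4, 1+5, 2+6, 3+7]
print(strategy.reduce("SUM", per_replica, axis=None))
# Aggregate across replicas and the batch dimension -> scalar 0+1+...+7 = 28
print(strategy.reduce("SUM", per_replica, axis=0))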
Args
reduce_op a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN".
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with OneDeviceStrategy or default strategy.
axis specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension).
Returns A Tensor.
run View source
run(
fn, args=(), kwargs=None, options=None
)
See base class. scope View source
scope()
Context manager to make the strategy current and distribute variables. This method returns a context manager, and is used as follows:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Variable created inside scope:
with strategy.scope():
mirrored_variable = tf.Variable(1.)
mirrored_variable
MirroredVariable:{
0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>,
1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0>
}
# Variable created outside scope:
regular_variable = tf.Variable(1.)
regular_variable
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
What happens when Strategy.scope is entered?
strategy is installed in the global context as the "current" strategy. Inside this scope, tf.distribute.get_strategy() will now return this strategy. Outside this scope, it returns the default no-op strategy. Entering the scope also enters the "cross-replica context". See tf.distribute.StrategyExtended for an explanation on cross-replica and replica contexts. Variable creation inside scope is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like MirroredStrategy, TPUStrategy and MultiWorkerMirroredStrategy create variables replicated on each replica, whereas ParameterServerStrategy creates variables on the parameter servers. This is done using a custom tf.variable_creator_scope. In some strategies, a default device scope may also be entered: in MultiWorkerMirroredStrategy, a default device scope of "/CPU:0" is entered on each worker.
Note: Entering a scope does not automatically distribute a computation, except in the case of high level training framework like keras model.fit. If you're not using model.fit, you need to use strategy.run API to explicitly distribute that computation. See an example in the custom training loop tutorial.
What should be in scope and what should be outside? There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK). Anything that creates variables that should be distributed variables must be in strategy.scope. This can be either by directly putting it in scope, or relying on another API like strategy.run or model.fit to enter it for you. Any variable that is created outside scope will not be distributed and may have performance implications. Common things that create variables in TF: models, optimizers, metrics. These should always be created inside the scope. Another source of variable creation can be a checkpoint restore - when variables are created lazily. Note that any variable created inside a strategy captures the strategy information. So reading and writing to these variables outside the strategy.scope can also work seamlessly, without the user having to enter the scope. Some strategy APIs (such as strategy.run and strategy.reduce) which require to be in a strategy's scope, enter the scope for you automatically, which means when using those APIs you don't need to enter the scope yourself. When a tf.keras.Model is created inside a strategy.scope, we capture this information. When high level training frameworks methods such as model.compile, model.fit etc are then called on this model, we automatically enter the scope, as well as use this strategy to distribute the training etc. See detailed example in distributed keras tutorial. Note that simply calling the model(..) is not impacted - only high level training framework APIs are. model.compile, model.fit, model.evaluate, model.predict and model.save can all be called inside or outside the scope. The following can be either inside or outside the scope: Creating the input datasets Defining tf.functions that represent your training step Saving APIs such as tf.saved_model.save. Loading creates variables, so that should go inside the scope if you want to train the model in a distributed way. Checkpoint saving. As mentioned above - checkpoint.restore may sometimes need to be inside scope if it creates variables.
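A minimal sketch of these guidelines, assuming a 2-GPU MirroredStrategy (the model, optimizer, metric and data below are arbitrary placeholders): variable-creating objects are built inside the scope, while dataset creation may happen outside it.
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
with strategy.scope():
  # Models, optimizers and metrics create variables, so build them in scope.
  model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
  optimizer = tf.keras.optimizers.SGD()
  metric = tf.keras.metrics.Mean()
# Creating the input dataset can happen outside the scope.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([8, 4]), tf.random.normal([8, 1]))).batch(4)
dist_dataset = strategy.experimental_distribute_dataset(dataset)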
Returns A context manager. | tensorflow.distribute.experimental.tpustrategy |
tf.distribute.experimental.ValueContext A class wrapping information needed by a distribute function.
tf.distribute.experimental.ValueContext(
replica_id_in_sync_group=0, num_replicas_in_sync=1
)
This is a context class that is passed to the value_fn in strategy.experimental_distribute_values_from_function and contains information about the compute replicas. The num_replicas_in_sync and replica_id can be used to customize the value on each replica. Example usage: Directly constructed.
def value_fn(context):
return context.replica_id_in_sync_group/context.num_replicas_in_sync
context = tf.distribute.experimental.ValueContext(
replica_id_in_sync_group=2, num_replicas_in_sync=4)
per_replica_value = value_fn(context)
per_replica_value
0.5
Passed in by experimental_distribute_values_from_function.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(value_context):
return value_context.num_replicas_in_sync
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(2, 2)
Args
replica_id_in_sync_group the current replica_id, should be an int in [0,num_replicas_in_sync).
num_replicas_in_sync the number of replicas that are in sync.
Attributes
num_replicas_in_sync Returns the number of compute replicas in sync.
replica_id_in_sync_group Returns the replica ID. | tensorflow.distribute.experimental.valuecontext |
tf.distribute.experimental_set_strategy View source on GitHub Set a tf.distribute.Strategy as current without with strategy.scope(). View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.distribute.experimental_set_strategy
tf.distribute.experimental_set_strategy(
strategy
)
tf.distribute.experimental_set_strategy(strategy1)
f()
tf.distribute.experimental_set_strategy(strategy2)
g()
tf.distribute.experimental_set_strategy(None)
h()
is equivalent to: with strategy1.scope():
f()
with strategy2.scope():
g()
h()
In general, you should use the with strategy.scope(): API, but this alternative may be convenient in notebooks where you would have to put each cell in a with strategy.scope(): block.
Note: This should only be called outside of any TensorFlow scope to avoid improper nesting.
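A minimal, concrete sketch of the pattern above (the strategies and variables here are illustrative assumptions):
import tensorflow as tf

strategy1 = tf.distribute.OneDeviceStrategy("/GPU:0")
strategy2 = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])

tf.distribute.experimental_set_strategy(strategy1)
v1 = tf.Variable(1.)  # created under strategy1

tf.distribute.experimental_set_strategy(strategy2)
v2 = tf.Variable(1.)  # created under strategy2, so it is a MirroredVariable

tf.distribute.experimental_set_strategy(None)  # restore the default strategy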
Args
strategy A tf.distribute.Strategy object or None.
Raises
RuntimeError If called inside a with strategy.scope():. | tensorflow.distribute.experimental_set_strategy |
tf.distribute.get_replica_context View source on GitHub Returns the current tf.distribute.ReplicaContext or None. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.distribute.get_replica_context
tf.distribute.get_replica_context()
Returns None if in a cross-replica context. Note that execution: starts in the default (single-replica) replica context (this function will return the default ReplicaContext object); switches to cross-replica context (in which case this will return None) when entering a with tf.distribute.Strategy.scope(): block; switches to a (non-default) replica context inside strategy.run(fn, ...); if fn calls get_replica_context().merge_call(merge_fn, ...), then inside merge_fn you are back in the cross-replica context (and again this function will return None). Most tf.distribute.Strategy methods may only be executed in a cross-replica context; in a replica context, you should use the API of the tf.distribute.ReplicaContext object returned by this method instead. assert tf.distribute.get_replica_context() is not None # default
with strategy.scope():
assert tf.distribute.get_replica_context() is None
def f():
replica_context = tf.distribute.get_replica_context() # for strategy
assert replica_context is not None
tf.print("Replica id: ", replica_context.replica_id_in_sync_group,
" of ", replica_context.num_replicas_in_sync)
strategy.run(f)
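A minimal sketch of the merge_call round trip described above, assuming a 2-GPU MirroredStrategy (the function names are illustrative, and the cross-replica reduction inside merge_fn is one possible use):
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])

def merge_fn(strategy, per_replica_value):
  # Back in the cross-replica context: get_replica_context() returns None here.
  return strategy.reduce("SUM", per_replica_value, axis=None)

def replica_fn():
  ctx = tf.distribute.get_replica_context()  # non-None in the replica context
  return ctx.merge_call(merge_fn, args=(ctx.replica_id_in_sync_group,))

print(strategy.run(replica_fn))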
Returns The current tf.distribute.ReplicaContext object when in a replica context scope, else None. Within a particular block, exactly one of these two things will be true:
get_replica_context() returns non-None, or
tf.distribute.is_cross_replica_context() returns True. | tensorflow.distribute.get_replica_context |
tf.distribute.get_strategy View source on GitHub Returns the current tf.distribute.Strategy object. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.distribute.get_strategy
tf.distribute.get_strategy()
Typically only used in a cross-replica context: if tf.distribute.in_cross_replica_context():
strategy = tf.distribute.get_strategy()
...
Returns A tf.distribute.Strategy object. Inside a with strategy.scope() block, it returns strategy, otherwise it returns the default (single-replica) tf.distribute.Strategy object. | tensorflow.distribute.get_strategy |
tf.distribute.has_strategy View source on GitHub Return if there is a current non-default tf.distribute.Strategy. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.distribute.has_strategy
tf.distribute.has_strategy()
assert not tf.distribute.has_strategy()
with strategy.scope():
assert tf.distribute.has_strategy()
Returns True if inside a with strategy.scope():. | tensorflow.distribute.has_strategy |
tf.distribute.HierarchicalCopyAllReduce View source on GitHub Hierarchical copy all-reduce implementation of CrossDeviceOps. Inherits From: CrossDeviceOps View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.distribute.HierarchicalCopyAllReduce
tf.distribute.HierarchicalCopyAllReduce(
num_packs=1
)
It reduces to one GPU along edges in some hierarchy and broadcasts back to each GPU along the same path. For the batch API, tensors will be repacked or aggregated for more efficient cross-device transportation. This is a reduction created for Nvidia DGX-1, which assumes GPUs are connected like those on a DGX-1 machine. If you have different GPU interconnections, it is likely to be slower than tf.distribute.ReductionToOneDevice. For reductions that are not all-reduce, it falls back to tf.distribute.ReductionToOneDevice. Here is how you can use HierarchicalCopyAllReduce in tf.distribute.MirroredStrategy: strategy = tf.distribute.MirroredStrategy(
cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())
Args
num_packs a non-negative integer. The number of packs to split values into. If zero, no packing will be done.
Raises ValueError if num_packs is negative.
Methods batch_reduce View source
batch_reduce(
reduce_op, value_destination_pairs, options=None
)
Reduce values to destinations in batches. See tf.distribute.StrategyExtended.batch_reduce_to. This can only be called in the cross-replica context.
Args
reduce_op a tf.distribute.ReduceOp specifying how values should be combined.
value_destination_pairs a sequence of (value, destinations) pairs. See tf.distribute.CrossDeviceOps.reduce for descriptions.
options a tf.distribute.experimental.CommunicationOptions. See tf.distribute.experimental.CommunicationOptions for details.
Returns A list of tf.Tensor or tf.distribute.DistributedValues, one per pair in value_destination_pairs.
Raises
ValueError if value_destination_pairs is not an iterable of tuples of tf.distribute.DistributedValues and destinations. broadcast View source
broadcast(
tensor, destinations
)
Broadcast tensor to destinations. This can only be called in the cross-replica context.
Args
tensor a tf.Tensor like object. The value to broadcast.
destinations a tf.distribute.DistributedValues, a tf.Variable, a tf.Tensor alike object, or a device string. It specifies the devices to broadcast to. Note that if it's a tf.Variable, the value is broadcasted to the devices of that variable, this method doesn't update the variable.
Returns A tf.Tensor or tf.distribute.DistributedValues.
reduce View source
reduce(
reduce_op, per_replica_value, destinations, options=None
)
Reduce per_replica_value to destinations. See tf.distribute.StrategyExtended.reduce_to. This can only be called in the cross-replica context.
Args
reduce_op a tf.distribute.ReduceOp specifying how values should be combined.
per_replica_value a tf.distribute.DistributedValues, or a tf.Tensor like object.
destinations a tf.distribute.DistributedValues, a tf.Variable, a tf.Tensor alike object, or a device string. It specifies the devices to reduce to. To perform an all-reduce, pass the same to value and destinations. Note that if it's a tf.Variable, the value is reduced to the devices of that variable, and this method doesn't update the variable.
options a tf.distribute.experimental.CommunicationOptions. See tf.distribute.experimental.CommunicationOptions for details.
Returns A tf.Tensor or tf.distribute.DistributedValues.
Raises
ValueError if per_replica_value can't be converted to a tf.distribute.DistributedValues or if destinations is not a string, tf.Variable or tf.distribute.DistributedValues. | tensorflow.distribute.hierarchicalcopyallreduce |
tf.distribute.InputContext View source on GitHub A class wrapping information needed by an input function. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.distribute.InputContext
tf.distribute.InputContext(
num_input_pipelines=1, input_pipeline_id=0, num_replicas_in_sync=1
)
This is a context class that is passed to the user's input function and contains information about the compute replicas and input pipelines. The number of compute replicas (in sync training) helps compute the local batch size from the desired global batch size for each replica. The input pipeline information can be used to return a different subset of the input in each replica (e.g. shard the input pipeline, use a different input source, etc.).
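For example, a minimal sketch of an input function that uses this context to derive the per-replica batch size and shard the pipeline (the strategy, global batch size of 8 and dataset are illustrative assumptions):
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
GLOBAL_BATCH_SIZE = 8  # assumed for illustration

def dataset_fn(input_context):
  per_replica_batch = input_context.get_per_replica_batch_size(GLOBAL_BATCH_SIZE)
  dataset = tf.data.Dataset.range(64)
  # Shard by input pipeline so each worker reads a disjoint slice of the data.
  dataset = dataset.shard(input_context.num_input_pipelines,
                          input_context.input_pipeline_id)
  return dataset.batch(per_replica_batch)

dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)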
Args
num_input_pipelines the number of input pipelines in a cluster.
input_pipeline_id the current input pipeline id, should be an int in [0,num_input_pipelines).
num_replicas_in_sync the number of replicas that are in sync.
Attributes
input_pipeline_id Returns the input pipeline ID.
num_input_pipelines Returns the number of input pipelines.
num_replicas_in_sync Returns the number of compute replicas in sync. Methods get_per_replica_batch_size View source
get_per_replica_batch_size(
global_batch_size
)
Returns the per-replica batch size.
Args
global_batch_size the global batch size which should be divisible by num_replicas_in_sync.
Returns the per-replica batch size.
Raises
ValueError if global_batch_size not divisible by num_replicas_in_sync. | tensorflow.distribute.inputcontext |
tf.distribute.InputOptions Run options for experimental_distribute_dataset(s_from_function).
tf.distribute.InputOptions(
experimental_prefetch_to_device=True,
experimental_replication_mode=tf.distribute.InputReplicationMode.PER_WORKER,
experimental_place_dataset_on_device=False
)
This can be used to hold some strategy specific configs. # Setup TPUStrategy
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)
dataset = tf.data.Dataset.range(16)
distributed_dataset_on_host = (
strategy.experimental_distribute_dataset(
dataset,
tf.distribute.InputOptions(
experimental_replication_mode=
tf.distribute.InputReplicationMode.PER_WORKER,
experimental_place_dataset_on_device=False)))
Attributes
experimental_prefetch_to_device Boolean. Defaults to True. If True, dataset elements will be prefetched to accelerator device memory. When False, dataset elements are prefetched to host device memory. Must be False when using TPUEmbedding API. experimental_prefetch_to_device can only be used with experimental_replication_mode=PER_WORKER
experimental_replication_mode Replication mode for the input function. Currently, InputReplicationMode.PER_REPLICA is only supported with tf.distribute.MirroredStrategy's experimental_distribute_datasets_from_function. The default value is InputReplicationMode.PER_WORKER.
experimental_place_dataset_on_device Boolean. Default to False. When True, dataset will be placed on the device, otherwise it will remain on the host. experimental_place_dataset_on_device=True can only be used with experimental_replication_mode=PER_REPLICA | tensorflow.distribute.inputoptions |
tf.distribute.InputReplicationMode View source on GitHub Replication mode for input function. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.distribute.InputReplicationMode
PER_WORKER: The input function will be called on each worker independently, creating as many input pipelines as there are workers. Replicas will dequeue from the local Dataset on their worker. tf.distribute.Strategy doesn't manage any state sharing between such separate input pipelines.
PER_REPLICA: The input function will be called on each replica separately. tf.distribute.Strategy doesn't manage any state sharing between such separate input pipelines.
Class Variables
PER_REPLICA tf.distribute.InputReplicationMode
PER_WORKER tf.distribute.InputReplicationMode | tensorflow.distribute.inputreplicationmode |
tf.distribute.in_cross_replica_context View source on GitHub Returns True if in a cross-replica context. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.distribute.in_cross_replica_context
tf.distribute.in_cross_replica_context()
See tf.distribute.get_replica_context for details. assert not tf.distribute.in_cross_replica_context()
with strategy.scope():
assert tf.distribute.in_cross_replica_context()
def f():
assert not tf.distribute.in_cross_replica_context()
strategy.run(f)
Returns True if in a cross-replica context (get_replica_context() returns None), or False if in a replica context (get_replica_context() returns non-None). | tensorflow.distribute.in_cross_replica_context |
tf.distribute.MirroredStrategy View source on GitHub Synchronous training across multiple replicas on one machine. Inherits From: Strategy
tf.distribute.MirroredStrategy(
devices=None, cross_device_ops=None
)
This strategy is typically used for training on one machine with multiple GPUs. For TPUs, use tf.distribute.TPUStrategy. To use MirroredStrategy with multiple workers, please refer to tf.distribute.experimental.MultiWorkerMirroredStrategy. For example, a variable created under a MirroredStrategy is a MirroredVariable. If no devices are specified in the constructor argument of the strategy then it will use all the available GPUs. If no GPUs are found, it will use the available CPUs. Note that TensorFlow treats all CPUs on a machine as a single device, and uses threads internally for parallelism.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
with strategy.scope():
x = tf.Variable(1.)
x
MirroredVariable:{
0: <tf.Variable ... shape=() dtype=float32, numpy=1.0>,
1: <tf.Variable ... shape=() dtype=float32, numpy=1.0>
}
While using distribution strategies, all the variable creation should be done within the strategy's scope. This will replicate the variables across all the replicas and keep them in sync using an all-reduce algorithm. Variables created inside a MirroredStrategy which is wrapped with a tf.function are still MirroredVariables.
x = []
@tf.function # Wrap the function with tf.function.
def create_variable():
if not x:
x.append(tf.Variable(1.))
return x[0]
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
with strategy.scope():
_ = create_variable()
print(x[0])
MirroredVariable:{
0: <tf.Variable ... shape=() dtype=float32, numpy=1.0>,
1: <tf.Variable ... shape=() dtype=float32, numpy=1.0>
}
experimental_distribute_dataset can be used to distribute the dataset across the replicas when writing your own training loop. If you are using .fit and .compile methods available in tf.keras, then tf.keras will handle the distribution for you. For example: my_strategy = tf.distribute.MirroredStrategy()
with my_strategy.scope():
@tf.function
def distribute_train_epoch(dataset):
def replica_fn(input):
# process input and return result
return result
total_result = 0
for x in dataset:
per_replica_result = my_strategy.run(replica_fn, args=(x,))
total_result += my_strategy.reduce(tf.distribute.ReduceOp.SUM,
per_replica_result, axis=None)
return total_result
dist_dataset = my_strategy.experimental_distribute_dataset(dataset)
for _ in range(EPOCHS):
train_result = distribute_train_epoch(dist_dataset)
Args
devices a list of device strings such as ['/gpu:0', '/gpu:1']. If None, all available GPUs are used. If no GPUs are found, CPU is used.
cross_device_ops optional, a descendant of CrossDeviceOps. If this is not set, NcclAllReduce() will be used by default. One would customize this if NCCL isn't available or if a special implementation that exploits the particular hardware is available.
Attributes
cluster_resolver Returns the cluster resolver associated with this strategy. In general, when using a multi-worker tf.distribute strategy such as tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy(), there is a tf.distribute.cluster_resolver.ClusterResolver associated with the strategy used, and such an instance is returned by this property. Strategies that intend to have an associated tf.distribute.cluster_resolver.ClusterResolver must set the relevant attribute, or override this property; otherwise, None is returned by default. Those strategies should also provide information regarding what is returned by this property. Single-worker strategies usually do not have a tf.distribute.cluster_resolver.ClusterResolver, and in those cases this property will return None. The tf.distribute.cluster_resolver.ClusterResolver may be useful when the user needs to access information such as the cluster spec, task type or task id. For example,
os.environ['TF_CONFIG'] = json.dumps({
'cluster': {
'worker': ["localhost:12345", "localhost:23456"],
'ps': ["localhost:34567"]
},
'task': {'type': 'worker', 'index': 0}
})
# This implicitly uses TF_CONFIG for the cluster and current task info.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
...
if strategy.cluster_resolver.task_type == 'worker':
# Perform something that's only applicable on workers. Since we set this
# as a worker above, this block will run on this particular instance.
elif strategy.cluster_resolver.task_type == 'ps':
# Perform something that's only applicable on parameter servers. Since we
# set this as a worker above, this block will not run on this particular
# instance.
For more information, please see tf.distribute.cluster_resolver.ClusterResolver's API docstring.
extended tf.distribute.StrategyExtended with additional methods.
num_replicas_in_sync Returns number of replicas over which gradients are aggregated. Methods distribute_datasets_from_function View source
distribute_datasets_from_function(
dataset_fn, options=None
)
Distributes tf.data.Dataset instances created by calls to dataset_fn. The argument dataset_fn that users pass in is an input function that has a tf.distribute.InputContext argument and returns a tf.data.Dataset instance. It is expected that the returned dataset from dataset_fn is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. tf.distribute.Strategy.distribute_datasets_from_function does not batch or shard the tf.data.Dataset instance returned from the input function. dataset_fn will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the Dataset every step). This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, tf.distribute.experimental_distribute_dataset does batching and sharding for you.) For example, where experimental_distribute_dataset is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in experimental_distribute_dataset). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. The dataset_fn should take a tf.distribute.InputContext instance where information about batching and input replication can be accessed. You can use the element_spec property of the tf.distribute.DistributedDataset returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function. Follow tf.distribute.DistributedDataset.element_spec to see an example. Key Point: The tf.data.Dataset returned by dataset_fn should have a per-replica batch size, unlike experimental_distribute_dataset, which uses the global batch size. This may be computed using input_context.get_per_replica_batch_size.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
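A minimal sketch of the points above, assuming a 2-GPU MirroredStrategy and a per-replica batch size of 4; it also shows element_spec being used as a tf.function input signature (the dataset and function names are placeholders):
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])

def dataset_fn(input_context):
  # The returned dataset is batched by the per-replica batch size (4 here).
  return tf.data.Dataset.range(16).batch(4)

dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)

@tf.function(input_signature=[dist_dataset.element_spec])
def train_step(per_replica_batch):
  return strategy.run(lambda x: tf.reduce_sum(x), args=(per_replica_batch,))

for batch in dist_dataset:
  print(train_step(batch))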
Args
dataset_fn A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_distribute_dataset View source
experimental_distribute_dataset(
dataset, options=None
)
Creates tf.distribute.DistributedDataset from tf.data.Dataset. The returned tf.distribute.DistributedDataset can be iterated over similar to regular datasets. NOTE: The user cannot add any more transformations to a tf.distribute.DistributedDataset. You can only create an iterator or examine the tf.TypeSpec of the data generated by it. See API docs of tf.distribute.DistributedDataset to learn more. The following is an example:
global_batch_size = 2
# Passing the devices is optional.
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
# Create a dataset
dataset = tf.data.Dataset.range(4).batch(global_batch_size)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)
@tf.function
def replica_fn(input):
return input*2
result = []
# Iterate over the `tf.distribute.DistributedDataset`
for x in dist_dataset:
# process dataset elements
result.append(strategy.run(replica_fn, args=(x,)))
print(result)
[PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])>
}, PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])>
}]
Three key actions happening under the hood of this method are batching, sharding, and prefetching. In the code snippet above, dataset is batched by global_batch_size, and calling experimental_distribute_dataset on it rebatches dataset to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. x is a tf.distribute.DistributedValues containing data for all replicas, and each replica gets data of the new batch size. tf.distribute.Strategy.run will take care of feeding the right per-replica data in x to the right replica_fn executed on each replica. Sharding involves autosharding across multiple workers and within every worker. First, in multi-worker distributed training (i.e. when you use tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right tf.data.experimental.AutoShardPolicy is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using tf.data.experimental.DistributeOptions. Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding.
Note: for autosharding across multiple workers, the default mode is tf.data.experimental.AutoShardPolicy.AUTO. This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. tf.data.TFRecordDataset, tf.data.TextLineDataset, etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. However, if you have less than one input file per worker, we suggest that you disable dataset autosharding across workers by setting the tf.data.experimental.DistributeOptions.auto_shard_policy to be tf.data.experimental.AutoShardPolicy.OFF.
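A minimal sketch of disabling autosharding as suggested in the note above; a single-machine MirroredStrategy and a toy dataset are used here only so the snippet is self-contained, since the setting mainly matters in multi-worker training:
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.range(16).batch(4)
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = (
    tf.data.experimental.AutoShardPolicy.OFF)
dist_dataset = strategy.experimental_distribute_dataset(
    dataset.with_options(options))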
By default, this method adds a prefetch transformation at the end of the user provided tf.data.Dataset instance. The argument to the prefetch transformation which is buffer_size is equal to the number of replicas in sync. If the above batch splitting and dataset sharding logic is undesirable, please use tf.distribute.Strategy.distribute_datasets_from_function instead, which does not do any automatic batching or sharding for you.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset tf.data.Dataset that will be sharded across all replicas using the rules stated above.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_distribute_values_from_function View source
experimental_distribute_values_from_function(
value_fn
)
Generates tf.distribute.DistributedValues from value_fn. This function is to generate tf.distribute.DistributedValues to pass into run, reduce, or other methods that take distributed values when not using datasets.
Args
value_fn The function to run to generate values. It is called for each replica with tf.distribute.ValueContext as the sole argument. It must return a Tensor or a type that can be converted to a Tensor.
Returns A tf.distribute.DistributedValues containing a value for each replica.
Example usage: Return constant value per replica:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
return tf.constant(1.)
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(<tf.Tensor: shape=(), dtype=float32, numpy=1.0>,
<tf.Tensor: shape=(), dtype=float32, numpy=1.0>)
Distribute values in array based on replica_id:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
array_value = np.array([3., 2., 1.])
def value_fn(ctx):
return array_value[ctx.replica_id_in_sync_group]
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(3.0, 2.0)
Specify values using num_replicas_in_sync:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
return ctx.num_replicas_in_sync
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(2, 2)
Place values on devices and distribute: strategy = tf.distribute.TPUStrategy()
worker_devices = strategy.extended.worker_devices
multiple_values = []
for i in range(strategy.num_replicas_in_sync):
with tf.device(worker_devices[i]):
multiple_values.append(tf.constant(1.0))
def value_fn(ctx):
return multiple_values[ctx.replica_id_in_sync_group]
distributed_values = strategy.experimental_distribute_values_from_function(
    value_fn)
experimental_local_results View source
experimental_local_results(
value
)
Returns the list of all local per-replica values contained in value.
Note: This only returns values on the worker initiated by this client. When using a tf.distribute.Strategy like tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker will be its own client, and this function will only return values computed on that worker.
Args
value A value returned by experimental_run(), run(), extended.call_for_each_replica(), or a variable created in scope.
Returns A tuple of values contained in value. If value represents a single value, this returns (value,).
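A minimal sketch, assuming a 2-GPU MirroredStrategy, of unpacking the per-replica components of a strategy.run result on this worker:
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
per_replica = strategy.run(
    lambda: tf.distribute.get_replica_context().replica_id_in_sync_group)
# A tuple with one tensor per local replica, e.g. the replica ids 0 and 1.
print(strategy.experimental_local_results(per_replica))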
gather View source
gather(
value, axis
)
Gather value across replicas along axis to the current device. Given a tf.distribute.DistributedValues or tf.Tensor-like object value, this API gathers and concatenates value across replicas along the axis-th dimension. The result is copied to the "current" device which would typically be the CPU of the worker on which the program is running. For tf.distribute.TPUStrategy, it is the first TPU host. For multi-client MultiWorkerMirroredStrategy, this is the CPU of each worker. This API can only be called in the cross-replica context. For a counterpart in the replica context, see tf.distribute.ReplicaContext.all_gather.
Note: For all strategies except tf.distribute.TPUStrategy, the input value on different replicas must have the same rank, and their shapes must be the same in all dimensions except the axis-th dimension. In other words, their shapes cannot be different in a dimension d where d does not equal to the axis argument. For example, given a tf.distribute.DistributedValues with component tensors of shape (1, 2, 3) and (1, 3, 3) on two replicas, you can call gather(..., axis=1, ...) on it, but not gather(..., axis=0, ...) or gather(..., axis=2, ...). However, for tf.distribute.TPUStrategy.gather, all tensors must have exactly the same rank and same shape.
Note: Given a tf.distribute.DistributedValues value, its component tensors must have a non-zero rank. Otherwise, consider using tf.expand_dims before gathering them.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# A DistributedValues with component tensor of shape (2, 1) on each replica
distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(tf.constant([[1], [2]])))
@tf.function
def run():
return strategy.gather(distributed_values, axis=0)
run()
<tf.Tensor: shape=(4, 1), dtype=int32, numpy=
array([[1],
[2],
[1],
[2]], dtype=int32)>
Consider the following example for more combinations:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1", "GPU:2", "GPU:3"])
single_tensor = tf.reshape(tf.range(6), shape=(1,2,3))
distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(single_tensor))
@tf.function
def run(axis):
return strategy.gather(distributed_values, axis=axis)
axis=0
run(axis)
<tf.Tensor: shape=(4, 2, 3), dtype=int32, numpy=
array([[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]]], dtype=int32)>
axis=1
run(axis)
<tf.Tensor: shape=(1, 8, 3), dtype=int32, numpy=
array([[[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5]]], dtype=int32)>
axis=2
run(axis)
<tf.Tensor: shape=(1, 2, 12), dtype=int32, numpy=
array([[[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2],
[3, 4, 5, 3, 4, 5, 3, 4, 5, 3, 4, 5]]], dtype=int32)>
Args
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with tf.distribute.OneDeviceStrategy or the default strategy. The tensors that constitute the DistributedValues can only be dense tensors with non-zero rank, NOT a tf.IndexedSlices.
axis 0-D int32 Tensor. Dimension along which to gather. Must be in the range [0, rank(value)).
Returns A Tensor that's the concatenation of value across replicas along axis dimension.
reduce View source
reduce(
reduce_op, value, axis
)
Reduce value across replicas and return result on current device.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
total = strategy.reduce("SUM", per_replica_result, axis=None)
total
<tf.Tensor: shape=(), dtype=int32, numpy=1>
To see how this would look with multiple replicas, consider the same example with MirroredStrategy with 2 GPUs: strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
# Check devices on which per replica result is:
strategy.experimental_local_results(per_replica_result)[0].device
# /job:localhost/replica:0/task:0/device:GPU:0
strategy.experimental_local_results(per_replica_result)[1].device
# /job:localhost/replica:0/task:0/device:GPU:1
total = strategy.reduce("SUM", per_replica_result, axis=None)
# Check device on which reduced result is:
total.device
# /job:localhost/replica:0/task:0/device:CPU:0
This API is typically used for aggregating the results returned from different replicas, for reporting etc. For example, loss computed from different replicas can be averaged using this API before printing.
Note: The result is copied to the "current" device - which would typically be the CPU of the worker on which the program is running. For TPUStrategy, it is the first TPU host. For multi-client MultiWorkerMirroredStrategy, this is the CPU of each worker.
There are a number of different tf.distribute APIs for reducing values across replicas:
tf.distribute.ReplicaContext.all_reduce: This differs from Strategy.reduce in that it is for replica context and does not copy the results to the host device. all_reduce should be typically used for reductions inside the training step such as gradients.
tf.distribute.StrategyExtended.reduce_to and tf.distribute.StrategyExtended.batch_reduce_to: These APIs are more advanced versions of Strategy.reduce as they allow customizing the destination of the result. They are also called in cross replica context. What should axis be? Given a per-replica value returned by run, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly. For example, if you have a global batch size of 8 and 2 replicas, values for examples [0, 1, 2, 3] will be on replica 0 and [4, 5, 6, 7] will be on replica 1. With axis=None, reduce will aggregate only across replicas, returning [0+4, 1+5, 2+6, 3+7]. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss). strategy.reduce("sum", per_replica_result, axis=None)
Sometimes, you will want to aggregate across both the global batch and all replicas. You can get this behavior by specifying the batch dimension as the axis, typically axis=0. In this case it would return a scalar 0+1+2+3+4+5+6+7. strategy.reduce("sum", per_replica_result, axis=0)
If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify axis=0. If you specify tf.distribute.ReduceOp.MEAN, using axis=0 will use the correct denominator of 6. Contrast this with computing reduce_mean to get a scalar value on each replica and this function to average those means, which will weigh some values 1/8 and others 1/4.
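A minimal sketch of the MEAN case, assuming a 2-GPU MirroredStrategy and a single partial global batch of 6 float examples (split evenly here, 3 per replica, for simplicity):
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.range(6).map(
    lambda x: tf.cast(x, tf.float32)).batch(6)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
per_replica = next(iter(dist_dataset))
# axis=0 divides by the true number of examples (6): (0+1+...+5) / 6 = 2.5
print(strategy.reduce(tf.distribute.ReduceOp.MEAN, per_replica, axis=0))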
Args
reduce_op a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN".
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with OneDeviceStrategy or default strategy.
axis specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension).
Returns A Tensor.
run View source
run(
fn, args=(), kwargs=None, options=None
)
Invokes fn on each replica, with the given arguments. This method is the primary way to distribute your computation with a tf.distribute object. It invokes fn on each replica. If args or kwargs have tf.distribute.DistributedValues, such as those produced by a tf.distribute.DistributedDataset from tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function, when fn is executed on a particular replica, it will be executed with the component of tf.distribute.DistributedValues that correspond to that replica. fn is invoked under a replica context. fn may call tf.distribute.get_replica_context() to access members such as all_reduce. Please see the module-level docstring of tf.distribute for the concept of replica context. All arguments in args or kwargs should either be Python values of a nested structure of tensors, e.g. a list of tensors, in which case args and kwargs will be passed to the fn invoked on each replica. Or args or kwargs can be tf.distribute.DistributedValues containing tensors or composite tensors, i.e. tf.compat.v1.TensorInfo.CompositeTensor, in which case each fn call will get the component of a tf.distribute.DistributedValues corresponding to its replica. Key Point: Depending on the implementation of tf.distribute.Strategy and whether eager execution is enabled, fn may be called one or more times. If fn is annotated with tf.function or tf.distribute.Strategy.run is called inside a tf.function (eager execution is disabled inside a tf.function by default), fn is called once per replica to generate a Tensorflow graph, which will then be reused for execution with new inputs. Otherwise, if eager execution is enabled, fn will be called once per replica every step just like regular python code. Example usage: Constant tensor input.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
tensor_input = tf.constant(3.0)
@tf.function
def replica_fn(input):
return input*2.0
result = strategy.run(replica_fn, args=(tensor_input,))
result
PerReplica:{
0: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>,
1: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>
}
DistributedValues input.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
@tf.function
def run():
def value_fn(value_context):
return value_context.num_replicas_in_sync
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
def replica_fn2(input):
return input*2
return strategy.run(replica_fn2, args=(distributed_values,))
result = run()
result
<tf.Tensor: shape=(), dtype=int32, numpy=4>
Use tf.distribute.ReplicaContext to allreduce values.
strategy = tf.distribute.MirroredStrategy(["gpu:0", "gpu:1"])
@tf.function
def run():
def value_fn(value_context):
return tf.constant(value_context.replica_id_in_sync_group)
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
def replica_fn(input):
return tf.distribute.get_replica_context().all_reduce("sum", input)
return strategy.run(replica_fn, args=(distributed_values,))
result = run()
result
PerReplica:{
0: <tf.Tensor: shape=(), dtype=int32, numpy=1>,
1: <tf.Tensor: shape=(), dtype=int32, numpy=1>
}
Args
fn The function to run on each replica.
args Optional positional arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues.
kwargs Optional keyword arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues.
options An optional instance of tf.distribute.RunOptions specifying the options to run fn.
Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be tf.distribute.DistributedValues, Tensor objects, or Tensors (for example, if running on a single replica).
scope View source
scope()
Context manager to make the strategy current and distribute variables. This method returns a context manager, and is used as follows:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Variable created inside scope:
with strategy.scope():
mirrored_variable = tf.Variable(1.)
mirrored_variable
MirroredVariable:{
0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>,
1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0>
}
# Variable created outside scope:
regular_variable = tf.Variable(1.)
regular_variable
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
What happens when Strategy.scope is entered?
strategy is installed in the global context as the "current" strategy. Inside this scope, tf.distribute.get_strategy() will now return this strategy. Outside this scope, it returns the default no-op strategy. Entering the scope also enters the "cross-replica context". See tf.distribute.StrategyExtended for an explanation on cross-replica and replica contexts. Variable creation inside scope is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like MirroredStrategy, TPUStrategy and MultiWorkerMirroredStrategy create variables replicated on each replica, whereas ParameterServerStrategy creates variables on the parameter servers. This is done using a custom tf.variable_creator_scope. In some strategies, a default device scope may also be entered: in MultiWorkerMirroredStrategy, a default device scope of "/CPU:0" is entered on each worker.
Note: Entering a scope does not automatically distribute a computation, except in the case of high level training framework like keras model.fit. If you're not using model.fit, you need to use strategy.run API to explicitly distribute that computation. See an example in the custom training loop tutorial.
What should be in scope and what should be outside? There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK). Anything that creates variables that should be distributed variables must be in strategy.scope. This can be either by directly putting it in scope, or relying on another API like strategy.run or model.fit to enter it for you. Any variable that is created outside scope will not be distributed and may have performance implications. Common things that create variables in TF: models, optimizers, metrics. These should always be created inside the scope. Another source of variable creation can be a checkpoint restore - when variables are created lazily. Note that any variable created inside a strategy captures the strategy information. So reading and writing to these variables outside the strategy.scope can also work seamlessly, without the user having to enter the scope. Some strategy APIs (such as strategy.run and strategy.reduce) which require to be in a strategy's scope, enter the scope for you automatically, which means when using those APIs you don't need to enter the scope yourself. When a tf.keras.Model is created inside a strategy.scope, we capture this information. When high level training frameworks methods such as model.compile, model.fit etc are then called on this model, we automatically enter the scope, as well as use this strategy to distribute the training etc. See detailed example in distributed keras tutorial. Note that simply calling the model(..) is not impacted - only high level training framework APIs are. model.compile, model.fit, model.evaluate, model.predict and model.save can all be called inside or outside the scope. The following can be either inside or outside the scope: Creating the input datasets Defining tf.functions that represent your training step Saving APIs such as tf.saved_model.save. Loading creates variables, so that should go inside the scope if you want to train the model in a distributed way. Checkpoint saving. As mentioned above - checkpoint.restore may sometimes need to be inside scope if it creates variables.
Returns A context manager. | tensorflow.distribute.mirroredstrategy |