tf.keras.mixed_precision.global_policy Returns the global dtype policy. View aliases Main aliases
tf.keras.mixed_precision.experimental.global_policy
tf.keras.mixed_precision.global_policy()
The global policy is the default tf.keras.mixed_precision.Policy used for layers, if no policy is passed to the layer constructor. If no policy has been set with keras.mixed_precision.set_global_policy, this will return a policy constructed from tf.keras.backend.floatx() (floatx defaults to float32).
tf.keras.mixed_precision.global_policy()
<Policy "float32">
tf.keras.layers.Dense(10).dtype_policy # Defaults to the global policy
<Policy "float32">
If TensorFlow 2 behavior has been disabled with tf.compat.v1.disable_v2_behavior(), this will instead return a special "_infer" policy which infers the dtype from the dtype of the first input the first time the layer is called. This behavior matches the behavior that existed in TensorFlow 1. See tf.keras.mixed_precision.Policy for more information on policies.
Returns The global Policy.
tf.keras.mixed_precision.LossScaleOptimizer An optimizer that applies loss scaling to prevent numeric underflow. Inherits From: Optimizer View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.mixed_precision.LossScaleOptimizer
tf.keras.mixed_precision.LossScaleOptimizer(
inner_optimizer, dynamic=True, initial_scale=None, dynamic_growth_steps=None
)
Loss scaling is a technique to prevent numeric underflow in intermediate gradients when float16 is used. To prevent underflow, the loss is multiplied (or "scaled") by a certain factor called the "loss scale", which causes intermediate gradients to be scaled by the loss scale as well. The final gradients are divided (or "unscaled") by the loss scale to bring them back to their original value. LossScaleOptimizer wraps another optimizer and applies loss scaling to it. By default, the loss scale is dynamically updated over time so you do not have to choose the loss scale. The minimize method automatically scales the loss, unscales the gradients, and updates the loss scale so all you have to do is wrap your optimizer with a LossScaleOptimizer if you use minimize. For example:
opt = tf.keras.optimizers.SGD(0.25)
opt = tf.keras.mixed_precision.LossScaleOptimizer(opt)
var = tf.Variable(1.)
loss_fn = lambda: var ** 2
# 'minimize' applies loss scaling and updates the loss scale.
opt.minimize(loss_fn, var_list=var)
var.numpy()
0.5
If a tf.GradientTape is used to compute gradients instead of minimize, you must scale the loss and gradients manually. This can be done with the LossScaleOptimizer.get_scaled_loss and LossScaleOptimizer.get_unscaled_gradients methods. For example:
with tf.GradientTape() as tape:
loss = loss_fn()
scaled_loss = opt.get_scaled_loss(loss)
scaled_grad = tape.gradient(scaled_loss, var)
(grad,) = opt.get_unscaled_gradients([scaled_grad])
opt.apply_gradients([(grad, var)]) # Loss scale is updated here
var.numpy()
0.25
Warning: If you forget to call get_scaled_loss or get_unscaled_gradients (or both) when using a tf.GradientTape, the model will likely converge to a worse quality solution. Make sure you call each function exactly once. When mixed precision with float16 is used, there is typically no risk of underflow affecting model quality as long as loss scaling is used properly. See the mixed precision guide for more information on how to use mixed precision.
Args
inner_optimizer The tf.keras.optimizers.Optimizer instance to wrap.
dynamic Bool indicating whether dynamic loss scaling is used. Defaults to True. If True, the loss scale will be dynamically updated over time using an algorithm that keeps the loss scale at approximately its optimal value. If False, a single fixed loss scale is used and initial_scale must be specified, which is used as the loss scale. Recommended to keep as True, as choosing a fixed loss scale can be tricky. Currently, there is a small performance overhead to dynamic loss scaling compared to fixed loss scaling.
initial_scale The initial loss scale. If dynamic is True, this defaults to 2 ** 15. If dynamic is False, this must be specified and acts as the sole loss scale, as the loss scale does not change over time. When dynamic loss scaling is used, it is better for this to be a very high number, because a loss scale that is too high gets lowered far more quickly than a loss scale that is too low gets raised.
dynamic_growth_steps With dynamic loss scaling, every dynamic_growth_steps steps with finite gradients, the loss scale is doubled. Defaults to 2000. If a nonfinite gradient is encountered, the count is reset back to zero, gradients are skipped that step, and the loss scale is halved. The count can be queried with LossScaleOptimizer.dynamic_counter. This argument can only be specified if dynamic is True.
LossScaleOptimizer will occasionally skip applying gradients to the variables, in which case the trainable variables will not change that step. This is done because the dynamic loss scale will sometimes be raised too high, causing overflow in the gradients. Typically, the first 2 to 15 steps of the model are skipped as the initial loss scale is very high, but afterwards steps will only be skipped on average 0.05% of the time (the fraction of steps skipped is 1 / dynamic_growth_steps).
LossScaleOptimizer delegates all public Optimizer methods to the inner optimizer. Additionally, in the methods minimize and get_gradients, it scales the loss and unscales the gradients. In the methods minimize and apply_gradients, it additionally updates the loss scale and skips applying gradients if any gradient has a nonfinite value.
Hyperparameters Hyperparameters can be accessed and set on the LossScaleOptimizer, which will be delegated to the wrapped optimizer.
opt = tf.keras.optimizers.Adam(beta_1=0.8, epsilon=1e-5)
opt = tf.keras.mixed_precision.LossScaleOptimizer(opt)
opt.beta_1 # Equivalent to `opt.inner_optimizer.beta_1`
0.8
opt.beta_1 = 0.7 # Equivalent to `opt.inner_optimizer.beta_1 = 0.7`
opt.beta_1
0.7
opt.inner_optimizer.beta_1
0.7
However, accessing or setting non-hyperparameters is not delegated to the LossScaleOptimizer. In an Adam optimizer, beta_1 is a hyperparameter but epsilon is not, as the Adam optimizer only calls Optimizer._set_hyper on beta_1.
opt.inner_optimizer.epsilon
1e-5
opt.epsilon
Traceback (most recent call last):
AttributeError: 'LossScaleOptimizer' object has no attribute 'epsilon'
opt.epsilon = 1e-4 # This does NOT set epsilon on `opt.inner_optimizer`
opt.inner_optimizer.epsilon
1e-5
In the above example, despite epsilon being set on the LossScaleOptimizer, the old epsilon value will still be used when training as epsilon was not set on the inner optimizer.
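To change a non-hyperparameter such as epsilon, set it on the inner optimizer directly. A minimal sketch (the value 1e-4 is purely illustrative):
opt.inner_optimizer.epsilon = 1e-4  # the inner optimizer will now use 1e-4
opt.inner_optimizer.epsilon
1e-4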
Raises
ValueError in case of any invalid argument.
Attributes
dynamic Bool indicating whether dynamic loss scaling is used.
dynamic_counter The number of steps since the loss scale was last increased or decreased. This is None if LossScaleOptimizer.dynamic is False. The counter is incremented every step. Once it reaches LossScaleOptimizer.dynamic_growth_steps, the loss scale will be doubled and the counter will be reset back to zero. If nonfinite gradients are encountered, the loss scale will be halved and the counter will be reset back to zero.
dynamic_growth_steps The number of steps it takes to increase the loss scale. This is None if LossScaleOptimizer.dynamic is False. Every dynamic_growth_steps consecutive steps with finite gradients, the loss scale is increased.
initial_scale The initial loss scale. If LossScaleOptimizer.dynamic is False, this is the same number as LossScaleOptimizer.loss_scale, as the loss scale never changes.
inner_optimizer The optimizer that this LossScaleOptimizer is wrapping.
loss_scale The current loss scale as a float32 scalar tensor. Methods get_scaled_loss View source
get_scaled_loss(
loss
)
Scales the loss by the loss scale. This method is only needed if you compute gradients manually, e.g. with tf.GradientTape. In that case, call this method to scale the loss before passing the loss to tf.GradientTape. If you use LossScaleOptimizer.minimize or LossScaleOptimizer.get_gradients, loss scaling is automatically applied and this method is unneeded. If this method is called, get_unscaled_gradients should also be called. See the tf.keras.mixed_precision.LossScaleOptimizer doc for an example.
Args
loss The loss, which will be multiplied by the loss scale. Can either be a tensor or a callable returning a tensor.
Returns loss multiplied by LossScaleOptimizer.loss_scale.
get_unscaled_gradients View source
get_unscaled_gradients(
grads
)
Unscales the gradients by the loss scale. This method is only needed if you compute gradients manually, e.g. with tf.GradientTape. In that case, call this method to unscale the gradients after computing them with tf.GradientTape. If you use LossScaleOptimizer.minimize or LossScaleOptimizer.get_gradients, loss scaling is automatically applied and this method is unneeded. If this method is called, get_scaled_loss should also be called. See the tf.keras.mixed_precision.LossScaleOptimizer doc for an example.
Args
grads A list of tensors, each which will be divided by the loss scale. Can have None values, which are ignored.
Returns A new list the same size as grads, where every non-None value in grads is divided by LossScaleOptimizer.loss_scale.
tf.keras.mixed_precision.Policy A dtype policy for a Keras layer.
tf.keras.mixed_precision.Policy(
name
)
A dtype policy determines a layer's computation and variable dtypes. Each layer has a policy. Policies can be passed to the dtype argument of layer constructors, or a global policy can be set with tf.keras.mixed_precision.set_global_policy.
Args
name The policy name, which determines the compute and variable dtypes. Can be any dtype name, such as 'float32' or 'float64', which causes both the compute and variable dtypes to be that dtype. Can also be the string 'mixed_float16' or 'mixed_bfloat16', which causes the compute dtype to be float16 or bfloat16 and the variable dtype to be float32. Typically you only need to interact with dtype policies when using mixed precision, which is the use of float16 or bfloat16 for computations and float32 for variables. This is why the term mixed_precision appears in the API name. Mixed precision can be enabled by passing 'mixed_float16' or 'mixed_bfloat16' to tf.keras.mixed_precision.set_global_policy. See the mixed precision guide for more information on how to use mixed precision.
tf.keras.mixed_precision.set_global_policy('mixed_float16')
layer1 = tf.keras.layers.Dense(10)
layer1.dtype_policy # `layer1` will automatically use mixed precision
<Policy "mixed_float16">
# Can optionally override layer to use float32 instead of mixed precision.
layer2 = tf.keras.layers.Dense(10, dtype='float32')
layer2.dtype_policy
<Policy "float32">
# Set policy back to initial float32 for future examples.
tf.keras.mixed_precision.set_global_policy('float32')
In the example above, passing dtype='float32' to the layer is equivalent to passing dtype=tf.keras.mixed_precision.Policy('float32'). In general, passing a dtype to a layer is equivalent to passing the corresponding policy, so it is never necessary to explicitly construct a Policy object.
Note: Model.compile will automatically wrap an optimizer with a tf.keras.mixed_precision.LossScaleOptimizer if you use the 'mixed_float16' policy. If you use a custom training loop instead of calling Model.compile, you should explicitly use a tf.keras.mixed_precision.LossScaleOptimizer to avoid numeric underflow with float16.
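A minimal sketch of such a custom training loop step is shown below; the model, loss function, and data names are placeholders, not part of the API:
import tensorflow as tf

tf.keras.mixed_precision.set_global_policy('mixed_float16')
# Keep the final activations in float32, as recommended by the mixed precision guide.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10),
    tf.keras.layers.Activation('linear', dtype='float32'),
])
optimizer = tf.keras.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.SGD())
loss_fn = tf.keras.losses.MeanSquaredError()

@tf.function
def train_step(x, y):
  with tf.GradientTape() as tape:
    y_pred = model(x, training=True)
    loss = loss_fn(y, y_pred)
    # Scale the loss so float16 gradients do not underflow.
    scaled_loss = optimizer.get_scaled_loss(loss)
  scaled_grads = tape.gradient(scaled_loss, model.trainable_variables)
  # Unscale before applying; apply_gradients also updates the dynamic loss scale.
  grads = optimizer.get_unscaled_gradients(scaled_grads)
  optimizer.apply_gradients(zip(grads, model.trainable_variables))
  return loss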
How a layer uses its policy's compute dtype A layer casts its inputs to its compute dtype. This causes the layer's computations and output to also be in the compute dtype. For example:
x = tf.ones((4, 4, 4, 4), dtype='float64')
# `layer`'s policy defaults to float32.
layer = tf.keras.layers.Conv2D(filters=4, kernel_size=2)
layer.compute_dtype # Equivalent to layer.dtype_policy.compute_dtype
'float32'
# `layer` casts its inputs to its compute dtype and does computations in
# that dtype.
y = layer(x)
y.dtype
tf.float32
Note that the base tf.keras.layers.Layer class inserts the casts. If subclassing your own layer, you do not have to insert any casts. Currently, only tensors in the first argument to the layer's call method are cast (although this will likely be changed in a future minor release). For example:
class MyLayer(tf.keras.layers.Layer):
# Bug! `b` will not be cast.
def call(self, a, b):
return a + 1., b + 1.
a = tf.constant(1., dtype="float32")
b = tf.constant(1., dtype="float32")
layer = MyLayer(dtype="float64")
x, y = layer(a, b)
x.dtype
tf.float64
y.dtype
tf.float32
If writing your own layer with multiple inputs, you should either explicitly cast other tensors to self.compute_dtype in call or accept all tensors in the first argument as a list. The casting only occurs in TensorFlow 2. If tf.compat.v1.disable_v2_behavior() has been called, you can enable the casting behavior with tf.compat.v1.keras.layers.enable_v2_dtype_behavior(). How a layer uses its policy's variable dtype The default dtype of variables created by tf.keras.layers.Layer.add_weight is the layer's policy's variable dtype. If a layer's compute and variable dtypes differ, add_weight will wrap floating-point variables with a special wrapper called an AutoCastVariable. AutoCastVariable is identical to the original variable except it casts itself to the layer's compute dtype when used within Layer.call. This means if you are writing a layer, you do not have to explicitly cast the variables to the layer's compute dtype. For example:
class SimpleDense(tf.keras.layers.Layer):
def build(self, input_shape):
# With mixed precision, self.kernel is a float32 AutoCastVariable
self.kernel = self.add_weight('kernel', (input_shape[-1], 10))
def call(self, inputs):
# With mixed precision, self.kernel will be cast to float16
return tf.linalg.matmul(inputs, self.kernel)
dtype_policy = tf.keras.mixed_precision.Policy('mixed_float16')
layer = SimpleDense(dtype=dtype_policy)
y = layer(tf.ones((10, 10)))
y.dtype
tf.float16
layer.kernel.dtype
tf.float32
A layer author can prevent a variable from being wrapped with an AutoCastVariable by passing experimental_autocast=False to add_weight, which is useful if the float32 value of the variable must be accessed within the layer. How to write a layer that supports mixed precision and float64. For the most part, layers will automatically support mixed precision and float64 without any additional work, due to the fact the base layer automatically casts inputs, creates variables of the correct type, and in the case of mixed precision, wraps variables with AutoCastVariables. The primary case where you need extra work to support mixed precision or float64 is when you create a new tensor, such as with tf.ones or tf.random.normal. In such cases, you must create the tensor of the correct dtype. For example, if you call tf.random.normal, you must pass the compute dtype, which is the dtype the inputs have been cast to:
class AddRandom(tf.keras.layers.Layer):
def call(self, inputs):
# We must pass `dtype=inputs.dtype`, otherwise a TypeError may
# occur when adding `inputs` to `rand`.
rand = tf.random.normal(shape=inputs.shape, dtype=inputs.dtype)
return inputs + rand
dtype_policy = tf.keras.mixed_precision.Policy('mixed_float16')
layer = AddRandom(dtype=dtype_policy)
y = layer(x)
y.dtype
tf.float16
If you did not pass dtype=inputs.dtype to tf.random.normal, a TypeError would have occurred. This is because tf.random.normal's dtype defaults to "float32", but the input dtype is float16. You cannot add a float32 tensor to a float16 tensor.
Attributes
compute_dtype The compute dtype of this policy. This is the dtype layers will do their computations in. Typically layers output tensors with the compute dtype as well. Note that even if the compute dtype is float16 or bfloat16, hardware devices may not do individual adds, multiplies, and other fundamental operations in float16 or bfloat16, but instead may do some of them in float32 for numeric stability. The compute dtype is the dtype of the inputs and outputs of the TensorFlow ops that the layer executes. Internally, many TensorFlow ops will do certain internal calculations in float32 or some other device-internal intermediate format with higher precision than float16/bfloat16, to increase numeric stability. For example, a tf.keras.layers.Dense layer, when run on a GPU with a float16 compute dtype, will pass float16 inputs to tf.linalg.matmul. But tf.linalg.matmul will use float32 intermediate math. The performance benefit of float16 is still apparent, due to increased memory bandwidth and the fact modern GPUs have specialized hardware for computing matmuls on float16 inputs while still keeping intermediate computations in float32.
name Returns the name of this policy.
variable_dtype The variable dtype of this policy. This is the dtype layers will create their variables in, unless a layer explicitly chooses a different dtype. If this is different than Policy.compute_dtype, Layers will cast variables to the compute dtype to avoid type errors. Variable regularizers are run in the variable dtype, not the compute dtype.
Methods from_config View source
@classmethod
from_config(
config, custom_objects=None
)
get_config View source
get_config()
tf.keras.mixed_precision.set_global_policy Sets the global dtype policy. View aliases Main aliases
tf.keras.mixed_precision.experimental.set_policy
tf.keras.mixed_precision.set_global_policy(
policy
)
The global policy is the default tf.keras.mixed_precision.Policy used for layers, if no policy is passed to the layer constructor.
tf.keras.mixed_precision.set_global_policy('mixed_float16')
tf.keras.mixed_precision.global_policy()
<Policy "mixed_float16">
tf.keras.layers.Dense(10).dtype_policy
<Policy "mixed_float16">
# Global policy is not used if a policy is directly passed to constructor
tf.keras.layers.Dense(10, dtype='float64').dtype_policy
<Policy "float64">
tf.keras.mixed_precision.set_global_policy('float32')
If no global policy is set, layers will instead default to a Policy constructed from tf.keras.backend.floatx(). To use mixed precision, the global policy should be set to 'mixed_float16' or 'mixed_bfloat16', so that every layer uses a 16-bit compute dtype and float32 variable dtype by default. Only floating point policies can be set as the global policy, such as 'float32' and 'mixed_float16'. Non-floating point policies such as 'int32' and 'complex64' cannot be set as the global policy because most layers do not support such policies. See tf.keras.mixed_precision.Policy for more information.
Args
policy A Policy, or a string that will be converted to a Policy. Can also be None, in which case the global policy will be constructed from tf.keras.backend.floatx().
tf.keras.Model View source on GitHub Model groups layers into an object with training and inference features. Inherits From: Layer, Module View aliases Main aliases
tf.keras.models.Model Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.Model, tf.compat.v1.keras.models.Model
tf.keras.Model(
*args, **kwargs
)
Arguments
inputs The input(s) of the model: a keras.Input object or list of keras.Input objects.
outputs The output(s) of the model. See Functional API example below.
name String, the name of the model.
There are two ways to instantiate a Model:
1 - With the "Functional API", where you start from Input, you chain layer calls to specify the model's forward pass, and finally you create your model from inputs and outputs:
import tensorflow as tf
inputs = tf.keras.Input(shape=(3,))
x = tf.keras.layers.Dense(4, activation=tf.nn.relu)(inputs)
outputs = tf.keras.layers.Dense(5, activation=tf.nn.softmax)(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
2 - By subclassing the Model class: in that case, you should define your layers in __init__ and you should implement the model's forward pass in call.
import tensorflow as tf
class MyModel(tf.keras.Model):
def __init__(self):
super(MyModel, self).__init__()
self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)
self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)
def call(self, inputs):
x = self.dense1(inputs)
return self.dense2(x)
model = MyModel()
If you subclass Model, you can optionally have a training argument (boolean) in call, which you can use to specify a different behavior in training and inference:
import tensorflow as tf
class MyModel(tf.keras.Model):
def __init__(self):
super(MyModel, self).__init__()
self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)
self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)
self.dropout = tf.keras.layers.Dropout(0.5)
def call(self, inputs, training=False):
x = self.dense1(inputs)
if training:
x = self.dropout(x, training=training)
return self.dense2(x)
model = MyModel()
Once the model is created, you can configure the model with losses and metrics with model.compile(), train the model with model.fit(), or use the model to do prediction with model.predict().
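A minimal sketch of that workflow with the subclassed MyModel above (the data shapes here are arbitrary and purely illustrative):
import numpy as np
model.compile(optimizer='sgd', loss='mse')
x = np.random.random((32, 16)).astype('float32')
y = np.random.random((32, 5)).astype('float32')
model.fit(x, y, epochs=1, verbose=0)
predictions = model.predict(x)  # shape (32, 5)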
Attributes
distribute_strategy The tf.distribute.Strategy this model was created under.
layers
metrics_names Returns the model's display labels for all outputs.
Note: metrics_names are available only after a keras.Model has been trained/evaluated on actual data.
inputs = tf.keras.layers.Input(shape=(3,))
outputs = tf.keras.layers.Dense(2)(inputs)
model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
model.metrics_names
[]
x = np.random.random((2, 3))
y = np.random.randint(0, 2, (2, 2))
model.fit(x, y)
model.metrics_names
['loss', 'mae']
inputs = tf.keras.layers.Input(shape=(3,))
d = tf.keras.layers.Dense(2, name='out')
output_1 = d(inputs)
output_2 = d(inputs)
model = tf.keras.models.Model(
inputs=inputs, outputs=[output_1, output_2])
model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
model.fit(x, (y, y))
model.metrics_names
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc']
run_eagerly Settable attribute indicating whether the model should run eagerly. Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls. By default, we will attempt to compile your model to a static graph to deliver the best execution performance.
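For example, eager execution can be enabled either at compile time or afterwards by setting the attribute (a minimal sketch for debugging):
model.compile(optimizer='sgd', loss='mse', run_eagerly=True)
# or equivalently, after compiling:
model.run_eagerly = True
model.run_eagerly
True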
Methods compile View source
compile(
optimizer='rmsprop', loss=None, metrics=None, loss_weights=None,
weighted_metrics=None, run_eagerly=None, steps_per_execution=None, **kwargs
)
Configures the model for training.
Arguments
optimizer String (name of optimizer) or optimizer instance. See tf.keras.optimizers.
loss String (name of objective function), objective function or tf.keras.losses.Loss instance. See tf.keras.losses. An objective function is any callable with the signature loss = fn(y_true, y_pred), where y_true = ground truth values with shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1]. y_pred = predicted values with shape = [batch_size, d0, .. dN]. It returns a weighted loss float tensor. If a custom Loss instance is used and reduction is set to NONE, return value has the shape [batch_size, d0, .. dN-1] i.e. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses.
metrics List of metrics to be evaluated by the model during training and testing. Each of this can be a string (name of a built-in function), function or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=['accuracy']. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={'output_a': 'accuracy', 'output_b': ['accuracy', 'mse']}. You can also pass a list (len = len(outputs)) of lists of metrics such as metrics=[['accuracy'], ['accuracy', 'mse']] or metrics=['accuracy', ['accuracy', 'mse']]. When you pass the strings 'accuracy' or 'acc', we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the loss function used and the model output shape. We do a similar conversion for the strings 'crossentropy' and 'ce' as well.
loss_weights Optional list or dictionary specifying scalar coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients. If a list, it is expected to have a 1:1 mapping to the model's outputs. If a dict, it is expected to map output names (strings) to scalar coefficients.
weighted_metrics List of metrics to be evaluated and weighted by sample_weight or class_weight during training and testing.
run_eagerly Bool. Defaults to False. If True, this Model's logic will not be wrapped in a tf.function. Recommended to leave this as None unless your Model cannot be run inside a tf.function.
steps_per_execution Int. Defaults to 1. The number of batches to run during each tf.function call. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs or small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if steps_per_execution is set to N, Callback.on_batch_begin and Callback.on_batch_end methods will only be called every N batches (i.e. before/after each tf.function execution).
**kwargs Arguments supported for backwards compatibility only.
Raises
ValueError In case of invalid arguments for optimizer, loss or metrics. evaluate View source
evaluate(
x=None, y=None, batch_size=None, verbose=1, sample_weight=None, steps=None,
callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False,
return_dict=False
)
Returns the loss value & metrics values for the model in test mode. Computation is done in batches (see the batch_size arg.)
Arguments
x Input data. It could be: A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). A dict mapping input names to the corresponding array/tensors, if the model has named inputs. A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights). A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights). A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.
y Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from the iterator/dataset).
batch_size Integer or None. Number of samples per batch of computation. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches).
verbose 0 or 1. Verbosity mode. 0 = silent, 1 = progress bar.
sample_weight Optional Numpy array of weights for the test samples, used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, instead pass sample weights as the third element of x.
steps Integer or None. Total number of steps (batches of samples) before declaring the evaluation round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, 'evaluate' will run until the dataset is exhausted. This argument is not supported with array inputs.
callbacks List of keras.callbacks.Callback instances. List of callbacks to apply during evaluation. See callbacks.
max_queue_size Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.
workers Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1. If 0, will execute the generator on the main thread.
use_multiprocessing Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can't be passed easily to children processes.
return_dict If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list. See the discussion of Unpacking behavior for iterator-like inputs for Model.fit.
Returns Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.
Raises
RuntimeError If model.evaluate is wrapped in tf.function.
ValueError in case of invalid arguments. evaluate_generator View source
evaluate_generator(
generator, steps=None, callbacks=None, max_queue_size=10, workers=1,
use_multiprocessing=False, verbose=0
)
Evaluates the model on a data generator. DEPRECATED: Model.evaluate now supports generators, so there is no longer any need to use this endpoint. fit View source
fit(
x=None, y=None, batch_size=None, epochs=1, verbose=1, callbacks=None,
validation_split=0.0, validation_data=None, shuffle=True, class_weight=None,
sample_weight=None, initial_epoch=0, steps_per_epoch=None,
validation_steps=None, validation_batch_size=None, validation_freq=1,
max_queue_size=10, workers=1, use_multiprocessing=False
)
Trains the model for a fixed number of epochs (iterations on a dataset).
Arguments
x Input data. It could be: A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). A dict mapping input names to the corresponding array/tensors, if the model has named inputs. A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights). A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights). A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below.
y Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).
batch_size Integer or None. Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).
epochs Integer. Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided. Note that in conjunction with initial_epoch, epochs is to be understood as "final epoch". The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.
verbose 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (eg, in a production environment).
callbacks List of keras.callbacks.Callback instances. List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on verbose argument to model.fit.
validation_split Float between 0 and 1. Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance.
validation_data Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be: a tuple (x_val, y_val) of Numpy arrays or tensors, a tuple (x_val, y_val, val_sample_weights) of Numpy arrays, or a dataset. For the first two cases, batch_size must be provided. For the last case, validation_steps could be provided. Note that validation_data does not support all the data types that are supported in x, e.g., dict, generator or keras.utils.Sequence.
shuffle Boolean (whether to shuffle the training data before each epoch) or str (for 'batch'). This argument is ignored when x is a generator. 'batch' is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.
class_weight Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to "pay more attention" to samples from an under-represented class.
sample_weight Optional Numpy array of weights for the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance, instead provide the sample_weights as the third element of x.
initial_epoch Integer. Epoch at which to start training (useful for resuming a previous training run).
steps_per_epoch Integer or None. Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and 'steps_per_epoch' is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. This argument is not supported with array inputs.
validation_steps Only relevant if validation_data is provided and is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If 'validation_steps' is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If 'validation_steps' is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.
validation_batch_size Integer or None. Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).
validation_freq Only relevant if validation data is provided. Integer or collections_abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.
max_queue_size Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.
workers Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1. If 0, will execute the generator on the main thread.
use_multiprocessing Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can't be passed easily to children processes. Unpacking behavior for iterator-like inputs: A common pattern is to pass a tf.data.Dataset, generator, or tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length one tuple, effectively treating everything as 'x'. When yielding dicts, they should still adhere to the top-level tuple structure. e.g. ({"x0": x0, "x1": x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict. A notable unsupported data type is the namedtuple. The reason is that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form: namedtuple("example_tuple", ["y", "x"]) it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form: namedtuple("other_tuple", ["x", "y", "z"]) where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple. (Along with instructions to remedy the issue.)
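For example, a tf.data.Dataset yielding (inputs, targets) tuples can be passed directly as x, in which case y must be omitted. A minimal sketch with random, purely illustrative data:
import numpy as np
import tensorflow as tf
features = np.random.random((8, 3)).astype('float32')
targets = np.random.random((8, 2)).astype('float32')
dataset = tf.data.Dataset.from_tensor_slices((features, targets)).batch(4)
model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
model.compile(optimizer='sgd', loss='mse')
model.fit(dataset, epochs=1, verbose=0)  # targets are taken from the dataset itself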
Returns A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).
Raises
RuntimeError If the model was never compiled or, If model.fit is wrapped in tf.function.
ValueError In case of mismatch between the provided input data and what the model expects or when the input data is empty. fit_generator View source
fit_generator(
generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None,
validation_data=None, validation_steps=None, validation_freq=1,
class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False,
shuffle=True, initial_epoch=0
)
Fits the model on data yielded batch-by-batch by a Python generator. DEPRECATED: Model.fit now supports generators, so there is no longer any need to use this endpoint. get_layer View source
get_layer(
name=None, index=None
)
Retrieves a layer based on either its name (unique) or index. If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottom-up).
Arguments
name String, name of layer.
index Integer, index of layer.
Returns A layer instance.
Raises
ValueError In case of invalid layer name or index. load_weights View source
load_weights(
filepath, by_name=False, skip_mismatch=False, options=None
)
Loads all layer weights, either from a TensorFlow or an HDF5 weight file. If by_name is False weights are loaded based on the network's topology. This means the architecture should be the same as when the weights were saved. Note that layers that don't have weights are not taken into account in the topological ordering, so adding or removing layers is fine as long as they don't have weights. If by_name is True, weights are loaded into layers only if they share the same name. This is useful for fine-tuning or transfer-learning models where some of the layers have changed. Only topological loading (by_name=False) is supported when loading weights from the TensorFlow format. Note that topological loading differs slightly between TensorFlow and HDF5 formats for user-defined classes inheriting from tf.keras.Model: HDF5 loads based on a flattened list of weights, while the TensorFlow format loads based on the object-local names of attributes to which layers are assigned in the Model's constructor.
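A minimal sketch of the two loading modes; the file names are placeholders:
# TensorFlow format: topological loading only; `filepath` is a checkpoint prefix.
model.save_weights('./ckpt')
model.load_weights('./ckpt')
# HDF5 format: loading by layer name is also supported, e.g. for transfer learning.
model.save_weights('weights.h5')
model.load_weights('weights.h5', by_name=True, skip_mismatch=True)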
Arguments
filepath String, path to the weights file to load. For weight files in TensorFlow format, this is the file prefix (the same as was passed to save_weights).
by_name Boolean, whether to load weights by name or by topological order. Only topological loading is supported for weight files in TensorFlow format.
skip_mismatch Boolean, whether to skip loading of layers where there is a mismatch in the number of weights, or a mismatch in the shape of the weight (only valid when by_name=True).
options Optional tf.train.CheckpointOptions object that specifies options for loading weights.
Returns When loading a weight file in TensorFlow format, returns the same status object as tf.train.Checkpoint.restore. When graph building, restore ops are run automatically as soon as the network is built (on first call for user-defined classes inheriting from Model, immediately if it is already built). When loading weights in HDF5 format, returns None.
Raises
ImportError If h5py is not available and the weight file is in HDF5 format.
ValueError If skip_mismatch is set to True when by_name is False. make_predict_function View source
make_predict_function()
Creates a function that executes one step of inference. This method can be overridden to support custom inference logic. This method is called by Model.predict and Model.predict_on_batch. Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.predict_step. This function is cached the first time Model.predict or Model.predict_on_batch is called. The cache is cleared whenever Model.compile is called.
Returns Function. The function created by this method should accept a tf.data.Iterator, and return the outputs of the Model.
make_test_function View source
make_test_function()
Creates a function that executes one step of evaluation. This method can be overridden to support custom evaluation logic. This method is called by Model.evaluate and Model.test_on_batch. Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.test_step. This function is cached the first time Model.evaluate or Model.test_on_batch is called. The cache is cleared whenever Model.compile is called.
Returns Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_test_batch_end.
make_train_function View source
make_train_function()
Creates a function that executes one step of training. This method can be overridden to support custom training logic. This method is called by Model.fit and Model.train_on_batch. Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual training logic to Model.train_step. This function is cached the first time Model.fit or Model.train_on_batch is called. The cache is cleared whenever Model.compile is called.
Returns Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_train_batch_end, such as {'loss': 0.2, 'accuracy': 0.7}.
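In most cases you override Model.train_step rather than make_train_function itself. A minimal sketch of a custom train_step, loosely following the default implementation (the unpacking assumes the data yields (x, y) pairs):
class CustomModel(tf.keras.Model):
  def train_step(self, data):
    x, y = data  # assumes (x, y) pairs; the default also handles sample weights
    with tf.GradientTape() as tape:
      y_pred = self(x, training=True)
      loss = self.compiled_loss(y, y_pred)
    grads = tape.gradient(loss, self.trainable_variables)
    self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
    self.compiled_metrics.update_state(y, y_pred)
    return {m.name: m.result() for m in self.metrics}

inputs = tf.keras.Input(shape=(3,))
outputs = tf.keras.layers.Dense(2)(inputs)
model = CustomModel(inputs, outputs)
model.compile(optimizer='sgd', loss='mse', metrics=['mae'])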
predict View source
predict(
x, batch_size=None, verbose=0, steps=None, callbacks=None, max_queue_size=10,
workers=1, use_multiprocessing=False
)
Generates output predictions for the input samples. Computation is done in batches. This method is designed for performance with large-scale inputs. For small numbers of inputs that fit in one batch, directly using __call__ is recommended for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. Also, note the fact that test loss is not affected by regularization layers like noise and dropout.
Arguments
x Input samples. It could be: A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). A tf.data dataset. A generator or keras.utils.Sequence instance. A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.
batch_size Integer or None. Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of dataset, generators, or keras.utils.Sequence instances (since they generate batches).
verbose Verbosity mode, 0 or 1.
steps Total number of steps (batches of samples) before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict will run until the input dataset is exhausted.
callbacks List of keras.callbacks.Callback instances. List of callbacks to apply during prediction. See callbacks.
max_queue_size Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.
workers Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1. If 0, will execute the generator on the main thread.
use_multiprocessing Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can't be passed easily to children processes. See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.
Returns Numpy array(s) of predictions.
Raises
RuntimeError If model.predict is wrapped in tf.function.
ValueError In case of mismatch between the provided input data and the model's expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size. predict_generator View source
predict_generator(
generator, steps=None, callbacks=None, max_queue_size=10, workers=1,
use_multiprocessing=False, verbose=0
)
Generates predictions for the input samples from a data generator. DEPRECATED: Model.predict now supports generators, so there is no longer any need to use this endpoint. predict_on_batch View source
predict_on_batch(
x
)
Returns predictions for a single batch of samples.
Arguments
x Input data. It could be: - A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). - A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
Returns Numpy array(s) of predictions.
Raises
RuntimeError If model.predict_on_batch is wrapped in tf.function.
ValueError In case of mismatch between given number of inputs and expectations of the model. predict_step View source
predict_step(
data
)
The logic for one inference step. This method can be overridden to support custom inference logic. This method is called by Model.make_predict_function. This method should contain the mathematical logic for one step of inference. This typically includes the forward pass. Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_predict_function, which can also be overridden.
Arguments
data A nested structure of Tensors.
Returns The result of one inference step, typically the output of calling the Model on data.
reset_metrics View source
reset_metrics()
Resets the state of all the metrics in the model. Examples:
inputs = tf.keras.layers.Input(shape=(3,))
outputs = tf.keras.layers.Dense(2)(inputs)
model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
x = np.random.random((2, 3))
y = np.random.randint(0, 2, (2, 2))
_ = model.fit(x, y, verbose=0)
assert all(float(m.result()) for m in model.metrics)
model.reset_metrics()
assert all(float(m.result()) == 0 for m in model.metrics)
reset_states View source
reset_states()
save View source
save(
filepath, overwrite=True, include_optimizer=True, save_format=None,
signatures=None, options=None, save_traces=True
)
Saves the model to Tensorflow SavedModel or a single HDF5 file. Please see tf.keras.models.save_model or the Serialization and Saving guide for details.
Arguments
filepath String, PathLike, path to SavedModel or H5 file to save the model.
overwrite Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt.
include_optimizer If True, save optimizer's state together.
save_format Either 'tf' or 'h5', indicating whether to save the model to Tensorflow SavedModel or HDF5. Defaults to 'tf' in TF 2.X, and 'h5' in TF 1.X.
signatures Signatures to save with the SavedModel. Applicable to the 'tf' format only. Please see the signatures argument in tf.saved_model.save for details.
options (only applies to SavedModel format) tf.saved_model.SaveOptions object that specifies options for saving to SavedModel.
save_traces (only applies to SavedModel format) When enabled, the SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method. Example:
from keras.models import load_model
model.save('my_model.h5')  # creates an HDF5 file 'my_model.h5'
del model # deletes the existing model
# returns a compiled model
# identical to the previous one
model = load_model('my_model.h5')
save_weights View source
save_weights(
filepath, overwrite=True, save_format=None, options=None
)
Saves all layer weights. Either saves in HDF5 or in TensorFlow format based on the save_format argument. When saving in HDF5 format, the weight file has:
layer_names (attribute), a list of strings (ordered names of model layers).
For every layer, a group named layer.name.
For every such layer group, a group attribute weight_names, a list of strings (ordered names of the layer's weight tensors).
For every weight in the layer, a dataset storing the weight value, named after the weight tensor.
When saving in TensorFlow format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any Layer instances or Optimizer instances assigned to object attributes. For networks constructed from inputs and outputs using tf.keras.Model(inputs, outputs), Layer instances used by the network are tracked/saved automatically. For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details. While the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded using Model.load_weights. Checkpoints saved using tf.train.Checkpoint.save should be restored using the corresponding tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over save_weights for training checkpoints. The TensorFlow format matches objects and variables by starting at a root object, self for save_weights, and greedily matching attribute names. For Model.save this is the Model, and for Checkpoint.save this is the Checkpoint even if the Checkpoint has a model attached. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model's variables. See the guide to training checkpoints for details on the TensorFlow format.
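A short sketch of choosing the format; the paths are placeholders:
model.save_weights('./checkpoints/ckpt')  # TensorFlow format: a checkpoint prefix, several files
model.save_weights('weights.h5')          # '.h5' suffix selects the HDF5 format
model.save_weights('weights', save_format='h5')  # or select HDF5 explicitly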
Arguments
filepath String or PathLike, path to the file to save the weights to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the '.h5' suffix causes weights to be saved in HDF5 format.
overwrite Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt.
save_format Either 'tf' or 'h5'. A filepath ending in '.h5' or '.keras' will default to HDF5 if save_format is None. Otherwise None defaults to 'tf'.
options Optional tf.train.CheckpointOptions object that specifies options for saving weights.
Raises
ImportError If h5py is not available when attempting to save in HDF5 format.
ValueError For invalid/unknown format arguments. summary View source
summary(
line_length=None, positions=None, print_fn=None
)
Prints a string summary of the network.
Arguments
line_length Total length of printed lines (e.g. set this to adapt the display to different terminal window sizes).
positions Relative or absolute positions of log elements in each line. If not provided, defaults to [.33, .55, .67, 1.].
print_fn Print function to use. Defaults to print. It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary.
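For example, the summary can be captured into a string rather than printed (a minimal sketch):
lines = []
model.summary(print_fn=lines.append)
summary_text = '\n'.join(lines)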
Raises
ValueError if summary() is called before the model is built. test_on_batch View source
test_on_batch(
x, y=None, sample_weight=None, reset_metrics=True, return_dict=False
)
Test the model on a single batch of samples.
Arguments
x Input data. It could be: - A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). - A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). A dict mapping input names to the corresponding array/tensors, if the model has named inputs.
y Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely).
sample_weight Optional array of the same length as x, containing weights to apply to the model's loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.
reset_metrics If True, the metrics returned will be only for this batch. If False, the metrics will be statefully accumulated across batches.
return_dict If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list.
Returns Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.
Raises
RuntimeError If model.test_on_batch is wrapped in tf.function.
ValueError In case of invalid user-provided arguments. test_step View source
test_step(
data
)
The logic for one evaluation step. This method can be overridden to support custom evaluation logic. This method is called by Model.make_test_function. This function should contain the mathematical logic for one step of evaluation. This typically includes the forward pass, loss calculation, and metrics updates. Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_test_function, which can also be overridden.
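As a hedged illustration, the sketch below mirrors the default evaluation step for a model whose data is an (inputs, targets) tuple; compiled_loss and compiled_metrics are the objects set up by compile:
import tensorflow as tf
class CustomModel(tf.keras.Model):
  def test_step(self, data):
    x, y = data  # assumes `data` is an (inputs, targets) tuple
    y_pred = self(x, training=False)
    # Update the loss and metric trackers configured in `compile`.
    self.compiled_loss(y, y_pred, regularization_losses=self.losses)
    self.compiled_metrics.update_state(y, y_pred)
    return {m.name: m.result() for m in self.metrics}
A model built as CustomModel(inputs, outputs) with the functional API can then be evaluated with evaluate as usual.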
Arguments
data A nested structure of Tensors.
Returns A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_test_batch_end. Typically, the values of the Model's metrics are returned.
to_json View source
to_json(
**kwargs
)
Returns a JSON string containing the network configuration. To load a network from a JSON save file, use keras.models.model_from_json(json_string, custom_objects={}).
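For example (a small sketch with illustrative layer sizes):
import tensorflow as tf
model = tf.keras.Sequential([
    tf.keras.layers.Dense(5, input_shape=(3,)),
    tf.keras.layers.Softmax()])
json_config = model.to_json()
# Rebuild an architecture-only copy; weights are not included in the JSON.
fresh_model = tf.keras.models.model_from_json(json_config)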
Arguments
**kwargs Additional keyword arguments to be passed to json.dumps().
Returns A JSON string.
to_yaml View source
to_yaml(
**kwargs
)
Returns a yaml string containing the network configuration. To load a network from a yaml save file, use keras.models.model_from_yaml(yaml_string, custom_objects={}). custom_objects should be a dictionary mapping the names of custom losses / layers / etc to the corresponding functions / classes.
Arguments
**kwargs Additional keyword arguments to be passed to yaml.dump().
Returns A YAML string.
Raises
ImportError if yaml module is not found. train_on_batch View source
train_on_batch(
x, y=None, sample_weight=None, class_weight=None, reset_metrics=True,
return_dict=False
)
Runs a single gradient update on a single batch of data.
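A minimal sketch (shapes and compile arguments are illustrative):
import numpy as np
import tensorflow as tf
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer='sgd', loss='mse', metrics=['mae'])
x = np.random.random((8, 3))
y = np.random.random((8, 1))
# One gradient update computed from this single batch.
metrics = model.train_on_batch(x, y, return_dict=True)  # e.g. {'loss': ..., 'mae': ...}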
Arguments
x Input data. It could be: A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). A dict mapping input names to the corresponding array/tensors, if the model has named inputs.
y Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely).
sample_weight Optional array of the same length as x, containing weights to apply to the model's loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.
class_weight Optional dictionary mapping class indices (integers) to a weight (float) to apply to the model's loss for the samples from this class during training. This can be useful to tell the model to "pay more attention" to samples from an under-represented class.
reset_metrics If True, the metrics returned will be only for this batch. If False, the metrics will be statefully accumulated across batches.
return_dict If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list.
Returns Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.
Raises
RuntimeError If model.train_on_batch is wrapped in tf.function.
ValueError In case of invalid user-provided arguments. train_step View source
train_step(
data
)
The logic for one training step. This method can be overridden to support custom training logic. This method is called by Model.make_train_function. This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates. Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.
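As a hedged illustration, the sketch below mirrors the default training step for a model whose data is an (inputs, targets) tuple; compiled_loss and compiled_metrics are the objects set up by compile:
import tensorflow as tf
class CustomModel(tf.keras.Model):
  def train_step(self, data):
    x, y = data  # assumes `data` is an (inputs, targets) tuple
    with tf.GradientTape() as tape:
      y_pred = self(x, training=True)
      loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
    # Backpropagate and apply one optimizer update.
    grads = tape.gradient(loss, self.trainable_variables)
    self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
    self.compiled_metrics.update_state(y, y_pred)
    return {m.name: m.result() for m in self.metrics}
A model built as CustomModel(inputs, outputs) with the functional API is then trained with fit as usual.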
Arguments
data A nested structure of Tensors.
Returns A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}. | tensorflow.keras.model |
Module: tf.keras.models Code for model cloning, plus model-related API entries. Classes class Model: Model groups layers into an object with training and inference features. class Sequential: Sequential groups a linear stack of layers into a tf.keras.Model. Functions clone_model(...): Clone any Model instance. load_model(...): Loads a model saved via model.save(). model_from_config(...): Instantiates a Keras model from its config. model_from_json(...): Parses a JSON model configuration string and returns a model instance. model_from_yaml(...): Parses a yaml model configuration file and returns a model instance. save_model(...): Saves a model as a TensorFlow SavedModel or HDF5 file. | tensorflow.keras.models |
tf.keras.models.clone_model View source on GitHub Clone any Model instance. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.models.clone_model
tf.keras.models.clone_model(
model, input_tensors=None, clone_function=None
)
Model cloning is similar to calling a model on new inputs, except that it creates new layers (and thus new weights) instead of sharing the weights of the existing layers.
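For example, a hedged sketch (the toy model and the renaming clone_function are purely illustrative):
import tensorflow as tf
model = tf.keras.Sequential([
    tf.keras.layers.Dense(5, input_shape=(3,), activation='relu'),
    tf.keras.layers.Dense(1)])
# A straight clone: same architecture, freshly initialized weights.
clone = tf.keras.models.clone_model(model)
# A clone_function that edits each layer's config while cloning.
def rename_layer(layer):
  config = layer.get_config()
  config['name'] = 'cloned_' + config['name']
  return layer.__class__.from_config(config)
renamed_clone = tf.keras.models.clone_model(model, clone_function=rename_layer)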
Arguments
model Instance of Model (could be a functional model or a Sequential model).
input_tensors optional list of input tensors or InputLayer objects to build the model upon. If not provided, placeholders will be created.
clone_function Callable to be used to clone each layer in the target model (except InputLayer instances). It takes as argument the layer instance to be cloned, and returns the corresponding layer instance to be used in the model copy. If unspecified, this callable defaults to the following serialization/deserialization function: lambda layer: layer.__class__.from_config(layer.get_config()). By passing a custom callable, you can customize your copy of the model, e.g. by wrapping certain layers of interest (you might want to replace all LSTM instances with equivalent Bidirectional(LSTM(...)) instances, for example).
Returns An instance of Model reproducing the behavior of the original model, on top of new input tensors, using newly instantiated weights. The cloned model might behave differently from the original model if a custom clone_function modifies the layer.
Raises
ValueError in case of invalid model argument value. | tensorflow.keras.models.clone_model |
tf.keras.models.load_model View source on GitHub Loads a model saved via model.save(). View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.models.load_model
tf.keras.models.load_model(
filepath, custom_objects=None, compile=True, options=None
)
Usage:
model = tf.keras.Sequential([
tf.keras.layers.Dense(5, input_shape=(3,)),
tf.keras.layers.Softmax()])
model.save('/tmp/model')
loaded_model = tf.keras.models.load_model('/tmp/model')
x = tf.random.uniform((10, 3))
assert np.allclose(model.predict(x), loaded_model.predict(x))
Note that the model weights may have different scoped names after being loaded. Scoped names include the model/layer names, such as "dense_1/kernel:0". It is recommended that you use the layer properties to access specific variables, e.g. model.get_layer("dense_1").kernel.
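When the saved model uses custom objects, pass them back at load time via custom_objects. A minimal sketch (the custom loss below is purely illustrative):
import tensorflow as tf
def scaled_mse(y_true, y_pred):
  # Illustrative custom loss; custom layers/metrics are handled the same way.
  return 2.0 * tf.reduce_mean(tf.square(y_true - y_pred))
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer='sgd', loss=scaled_mse)
model.save('/tmp/model_with_custom_loss')
loaded = tf.keras.models.load_model(
    '/tmp/model_with_custom_loss',
    custom_objects={'scaled_mse': scaled_mse})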
Arguments
filepath One of the following: String or pathlib.Path object, path to the saved model
h5py.File object from which to load the model
custom_objects Optional dictionary mapping names (strings) to custom classes or functions to be considered during deserialization.
compile Boolean, whether to compile the model after loading.
options Optional tf.saved_model.LoadOptions object that specifies options for loading from SavedModel.
Returns A Keras model instance. If the original model was compiled, and saved with the optimizer, then the returned model will be compiled. Otherwise, the model will be left uncompiled. In the case that an uncompiled model is returned, a warning is displayed if the compile argument is set to True.
Raises
ImportError if loading from an hdf5 file and h5py is not available.
IOError In case of an invalid savefile. | tensorflow.keras.models.load_model |
tf.keras.models.model_from_config View source on GitHub Instantiates a Keras model from its config. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.models.model_from_config
tf.keras.models.model_from_config(
config, custom_objects=None
)
Usage: # for a Functional API model
tf.keras.Model().from_config(model.get_config())
# for a Sequential model
tf.keras.Sequential().from_config(model.get_config())
Arguments
config Configuration dictionary.
custom_objects Optional dictionary mapping names (strings) to custom classes or functions to be considered during deserialization.
Returns A Keras model instance (uncompiled).
Raises
TypeError if config is not a dictionary. | tensorflow.keras.models.model_from_config |
tf.keras.models.model_from_json View source on GitHub Parses a JSON model configuration string and returns a model instance. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.models.model_from_json
tf.keras.models.model_from_json(
json_string, custom_objects=None
)
Usage:
model = tf.keras.Sequential([
tf.keras.layers.Dense(5, input_shape=(3,)),
tf.keras.layers.Softmax()])
config = model.to_json()
loaded_model = tf.keras.models.model_from_json(config)
Arguments
json_string JSON string encoding a model configuration.
custom_objects Optional dictionary mapping names (strings) to custom classes or functions to be considered during deserialization.
Returns A Keras model instance (uncompiled). | tensorflow.keras.models.model_from_json |
tf.keras.models.model_from_yaml View source on GitHub Parses a yaml model configuration file and returns a model instance. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.models.model_from_yaml
tf.keras.models.model_from_yaml(
yaml_string, custom_objects=None
)
Usage:
model = tf.keras.Sequential([
tf.keras.layers.Dense(5, input_shape=(3,)),
tf.keras.layers.Softmax()])
try:
import yaml
config = model.to_yaml()
loaded_model = tf.keras.models.model_from_yaml(config)
except ImportError:
pass
Arguments
yaml_string YAML string or open file encoding a model configuration.
custom_objects Optional dictionary mapping names (strings) to custom classes or functions to be considered during deserialization.
Returns A Keras model instance (uncompiled).
Raises
ImportError if yaml module is not found. | tensorflow.keras.models.model_from_yaml |
tf.keras.models.save_model View source on GitHub Saves a model as a TensorFlow SavedModel or HDF5 file. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.models.save_model
tf.keras.models.save_model(
model, filepath, overwrite=True, include_optimizer=True, save_format=None,
signatures=None, options=None, save_traces=True
)
See the Serialization and Saving guide for details. Usage:
model = tf.keras.Sequential([
tf.keras.layers.Dense(5, input_shape=(3,)),
tf.keras.layers.Softmax()])
model.save('/tmp/model')
loaded_model = tf.keras.models.load_model('/tmp/model')
x = tf.random.uniform((10, 3))
assert np.allclose(model.predict(x), loaded_model.predict(x))
The SavedModel and HDF5 file contain: the model's configuration (topology), the model's weights, and the model's optimizer's state (if any). Thus models can be reinstantiated in the exact same state, without any of the code used for model definition or training. Note that the model weights may have different scoped names after being loaded. Scoped names include the model/layer names, such as "dense_1/kernel:0". It is recommended that you use the layer properties to access specific variables, e.g. model.get_layer("dense_1").kernel. SavedModel serialization format Keras SavedModel uses tf.saved_model.save to save the model and all trackable objects attached to the model (e.g. layers and variables). The model config, weights, and optimizer are saved in the SavedModel. Additionally, for every Keras layer attached to the model, the SavedModel stores: the config and metadata (e.g. name, dtype, trainable status), and the traced call and loss functions, which are stored as TensorFlow subgraphs. The traced functions allow the SavedModel format to save and load custom layers without the original class definition. You can choose not to save the traced functions by disabling the save_traces option. This will decrease the time it takes to save the model and the amount of disk space occupied by the output SavedModel. If you disable this option, then you must provide all custom class definitions when loading the model. See the custom_objects argument in tf.keras.models.load_model.
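As a hedged sketch of this trade-off (the SmallBlock layer is purely illustrative), saving without traces requires the custom class, with a get_config method, to be supplied again at load time:
import tensorflow as tf
class SmallBlock(tf.keras.layers.Layer):
  def __init__(self, units=4, **kwargs):
    super().__init__(**kwargs)
    self.units = units
    self.dense = tf.keras.layers.Dense(units)
  def call(self, inputs):
    return self.dense(inputs)
  def get_config(self):
    config = super().get_config()
    config.update({'units': self.units})
    return config
model = tf.keras.Sequential([tf.keras.Input(shape=(3,)), SmallBlock(4)])
# Skip the traced functions: smaller and faster save, but the custom class
# definition must be provided when loading.
tf.keras.models.save_model(model, '/tmp/no_traces', save_traces=False)
loaded = tf.keras.models.load_model(
    '/tmp/no_traces', custom_objects={'SmallBlock': SmallBlock})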
Arguments
model Keras model instance to be saved.
filepath One of the following: String or pathlib.Path object, path where to save the model
h5py.File object where to save the model
overwrite Whether we should overwrite any existing model at the target location, or instead ask the user with a manual prompt.
include_optimizer If True, save optimizer's state together.
save_format Either 'tf' or 'h5', indicating whether to save the model to TensorFlow SavedModel or HDF5. Defaults to 'tf' in TF 2.X, and 'h5' in TF 1.X.
signatures Signatures to save with the SavedModel. Applicable to the 'tf' format only. Please see the signatures argument in tf.saved_model.save for details.
options (only applies to SavedModel format) tf.saved_model.SaveOptions object that specifies options for saving to SavedModel.
save_traces (only applies to SavedModel format) When enabled, the SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.
Raises
ImportError If save format is hdf5, and h5py is not available. | tensorflow.keras.models.save_model |
Module: tf.keras.optimizers Built-in optimizer classes. View aliases Main aliases
tf.optimizers For more examples see the base class tf.keras.optimizers.Optimizer. Modules schedules module: Public API for tf.keras.optimizers.schedules namespace. Classes class Adadelta: Optimizer that implements the Adadelta algorithm. class Adagrad: Optimizer that implements the Adagrad algorithm. class Adam: Optimizer that implements the Adam algorithm. class Adamax: Optimizer that implements the Adamax algorithm. class Ftrl: Optimizer that implements the FTRL algorithm. class Nadam: Optimizer that implements the NAdam algorithm. class Optimizer: Base class for Keras optimizers. class RMSprop: Optimizer that implements the RMSprop algorithm. class SGD: Gradient descent (with momentum) optimizer. Functions deserialize(...): Inverse of the serialize function. get(...): Retrieves a Keras Optimizer instance. serialize(...) | tensorflow.keras.optimizers |
tf.keras.optimizers.Adadelta View source on GitHub Optimizer that implements the Adadelta algorithm. Inherits From: Optimizer View aliases Main aliases
tf.optimizers.Adadelta Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.optimizers.Adadelta
tf.keras.optimizers.Adadelta(
learning_rate=0.001, rho=0.95, epsilon=1e-07, name='Adadelta',
**kwargs
)
Adadelta optimization is a stochastic gradient descent method based on an adaptive learning rate per dimension, addressing two drawbacks: the continual decay of learning rates throughout training, and the need for a manually selected global learning rate. Adadelta is a more robust extension of Adagrad that adapts learning rates based on a moving window of gradient updates, instead of accumulating all past gradients. This way, Adadelta continues learning even when many updates have been done. Compared to Adagrad, in the original version of Adadelta you don't have to set an initial learning rate. In this version, the initial learning rate can be set, as in most other Keras optimizers. According to section 4.3 ("Effective Learning rates") of the paper, near the end of training step sizes converge to 1, which is effectively a high learning rate that would cause divergence. This occurs only near the end of training, as gradients and step sizes are small and the epsilon constant in the numerator and denominator dominates past gradients and parameter updates, which converges the learning rate to 1. According to section 4.4 ("Speech Data"), where a large neural network with 4 hidden layers was trained on a corpus of US English data, Adadelta was used with 100 network replicas. The epsilon used was 1e-6 with rho=0.95, which converged faster than Adagrad, with the following construction: def __init__(self, lr=1.0, rho=0.95, epsilon=1e-6, decay=0., **kwargs):
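A minimal usage sketch (the toy variable and loss are illustrative):
import tensorflow as tf
opt = tf.keras.optimizers.Adadelta(learning_rate=1.0, rho=0.95)
var = tf.Variable(10.0)
loss = lambda: (var ** 2) / 2.0   # d(loss)/d(var) = var
opt.minimize(loss, var_list=[var])  # one Adadelta update on `var`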
Args
learning_rate A Tensor, floating point value, or a schedule that is a tf.keras.optimizers.schedules.LearningRateSchedule. The learning rate. To match the exact form in the original paper use 1.0.
rho A Tensor or a floating point value. The decay rate.
epsilon A Tensor or a floating point value. A constant epsilon used to better conditioning the grad update.
name Optional name prefix for the operations created when applying gradients. Defaults to "Adadelta".
**kwargs Keyword arguments. Allowed to be one of "clipnorm" or "clipvalue". "clipnorm" (float) clips gradients by norm; "clipvalue" (float) clips gradients by value. Reference: Zeiler, 2012
Raises
ValueError in case of any invalid argument. | tensorflow.keras.optimizers.adadelta |
tf.keras.optimizers.Adagrad View source on GitHub Optimizer that implements the Adagrad algorithm. Inherits From: Optimizer View aliases Main aliases
tf.optimizers.Adagrad Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.optimizers.Adagrad
tf.keras.optimizers.Adagrad(
learning_rate=0.001, initial_accumulator_value=0.1, epsilon=1e-07,
name='Adagrad', **kwargs
)
Adagrad is an optimizer with parameter-specific learning rates, which are adapted relative to how frequently a parameter gets updated during training. The more updates a parameter receives, the smaller the updates.
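A minimal usage sketch (the toy variable and loss are illustrative):
import tensorflow as tf
opt = tf.keras.optimizers.Adagrad(learning_rate=0.1)
var = tf.Variable(10.0)
loss = lambda: (var ** 2) / 2.0   # d(loss)/d(var) = var
opt.minimize(loss, var_list=[var])  # the accumulator shrinks later updates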
Args
learning_rate A Tensor, floating point value, or a schedule that is a tf.keras.optimizers.schedules.LearningRateSchedule. The learning rate.
initial_accumulator_value A floating point value. Starting value for the accumulators, must be non-negative.
epsilon A small floating point value to avoid zero denominator.
name Optional name prefix for the operations created when applying gradients. Defaults to "Adagrad".
**kwargs Keyword arguments. Allowed to be one of "clipnorm" or "clipvalue". "clipnorm" (float) clips gradients by norm; "clipvalue" (float) clips gradients by value. Reference:
Duchi et al., 2011.
Raises
ValueError in case of any invalid argument. | tensorflow.keras.optimizers.adagrad |
tf.keras.optimizers.Adam View source on GitHub Optimizer that implements the Adam algorithm. Inherits From: Optimizer View aliases Main aliases
tf.optimizers.Adam Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.optimizers.Adam
tf.keras.optimizers.Adam(
learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=False,
name='Adam', **kwargs
)
Adam optimization is a stochastic gradient descent method that is based on adaptive estimation of first-order and second-order moments. According to Kingma et al., 2014, the method is "computationally efficient, has little memory requirement, invariant to diagonal rescaling of gradients, and is well suited for problems that are large in terms of data/parameters".
Args
learning_rate A Tensor, floating point value, or a schedule that is a tf.keras.optimizers.schedules.LearningRateSchedule, or a callable that takes no arguments and returns the actual value to use, The learning rate. Defaults to 0.001.
beta_1 A float value or a constant float tensor, or a callable that takes no arguments and returns the actual value to use. The exponential decay rate for the 1st moment estimates. Defaults to 0.9.
beta_2 A float value or a constant float tensor, or a callable that takes no arguments and returns the actual value to use, The exponential decay rate for the 2nd moment estimates. Defaults to 0.999.
epsilon A small constant for numerical stability. This epsilon is "epsilon hat" in the Kingma and Ba paper (in the formula just before Section 2.1), not the epsilon in Algorithm 1 of the paper. Defaults to 1e-7.
amsgrad Boolean. Whether to apply AMSGrad variant of this algorithm from the paper "On the Convergence of Adam and beyond". Defaults to False.
name Optional name for the operations created when applying gradients. Defaults to "Adam".
**kwargs Keyword arguments. Allowed to be one of "clipnorm" or "clipvalue". "clipnorm" (float) clips gradients by norm; "clipvalue" (float) clips gradients by value. Usage:
opt = tf.keras.optimizers.Adam(learning_rate=0.1)
var1 = tf.Variable(10.0)
loss = lambda: (var1 ** 2)/2.0 # d(loss)/d(var1) == var1
step_count = opt.minimize(loss, [var1]).numpy()
# The first step is `-learning_rate*sign(grad)`
var1.numpy()
9.9
Reference: Kingma et al., 2014
Reddi et al., 2018 for amsgrad. Notes: The default value of 1e-7 for epsilon might not be a good default in general. For example, when training an Inception network on ImageNet a current good choice is 1.0 or 0.1. Note that since Adam uses the formulation just before Section 2.1 of the Kingma and Ba paper rather than the formulation in Algorithm 1, the "epsilon" referred to here is "epsilon hat" in the paper. The sparse implementation of this algorithm (used when the gradient is an IndexedSlices object, typically because of tf.gather or an embedding lookup in the forward pass) does apply momentum to variable slices even if they were not used in the forward pass (meaning they have a gradient equal to zero). Momentum decay (beta1) is also applied to the entire momentum accumulator. This means that the sparse behavior is equivalent to the dense behavior (in contrast to some momentum implementations which ignore momentum unless a variable slice was actually used).
Raises
ValueError in case of any invalid argument. | tensorflow.keras.optimizers.adam |
tf.keras.optimizers.Adamax View source on GitHub Optimizer that implements the Adamax algorithm. Inherits From: Optimizer View aliases Main aliases
tf.optimizers.Adamax Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.optimizers.Adamax
tf.keras.optimizers.Adamax(
learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07,
name='Adamax', **kwargs
)
It is a variant of Adam based on the infinity norm. Default parameters follow those provided in the paper. Adamax is sometimes superior to Adam, especially in models with embeddings. Initialization: m = 0 # Initialize the 1st moment vector
v = 0 # Initialize the exponentially weighted infinity norm
t = 0 # Initialize timestep
The update rule for parameter w with gradient g is described at the end of section 7.1 of the paper: t += 1
m = beta1 * m + (1 - beta1) * g
v = max(beta2 * v, abs(g))
current_lr = learning_rate / (1 - beta1 ** t)
w = w - current_lr * m / (v + epsilon)
Similarly to Adam, the epsilon is added for numerical stability (especially to get rid of division by zero when v_t == 0). In contrast to Adam, the sparse implementation of this algorithm (used when the gradient is an IndexedSlices object, typically because of tf.gather or an embedding lookup in the forward pass) only updates variable slices and corresponding m_t, v_t terms when that part of the variable was used in the forward pass. This means that the sparse behavior is in contrast to the dense behavior (similar to some momentum implementations which ignore momentum unless a variable slice was actually used).
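A minimal usage sketch (the toy model and compile settings are illustrative):
import tensorflow as tf
opt = tf.keras.optimizers.Adamax(learning_rate=0.001)
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer=opt, loss='mse')  # Adamax then drives model.fit(...)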
Args
learning_rate A Tensor, floating point value, or a schedule that is a tf.keras.optimizers.schedules.LearningRateSchedule. The learning rate.
beta_1 A float value or a constant float tensor. The exponential decay rate for the 1st moment estimates.
beta_2 A float value or a constant float tensor. The exponential decay rate for the exponentially weighted infinity norm.
epsilon A small constant for numerical stability.
name Optional name for the operations created when applying gradients. Defaults to "Adamax".
**kwargs Keyword arguments. Allowed to be one of "clipnorm" or "clipvalue". "clipnorm" (float) clips gradients by norm; "clipvalue" (float) clips gradients by value. Reference: Kingma et al., 2014
Raises
ValueError in case of any invalid argument. | tensorflow.keras.optimizers.adamax |
tf.keras.optimizers.deserialize View source on GitHub Inverse of the serialize function. View aliases Main aliases
tf.optimizers.deserialize Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.optimizers.deserialize
tf.keras.optimizers.deserialize(
config, custom_objects=None
)
Arguments
config Optimizer configuration dictionary.
custom_objects Optional dictionary mapping names (strings) to custom objects (classes and functions) to be considered during deserialization.
Returns A Keras Optimizer instance. | tensorflow.keras.optimizers.deserialize |
tf.keras.optimizers.Ftrl View source on GitHub Optimizer that implements the FTRL algorithm. Inherits From: Optimizer View aliases Main aliases
tf.optimizers.Ftrl Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.optimizers.Ftrl
tf.keras.optimizers.Ftrl(
learning_rate=0.001, learning_rate_power=-0.5, initial_accumulator_value=0.1,
l1_regularization_strength=0.0, l2_regularization_strength=0.0,
name='Ftrl', l2_shrinkage_regularization_strength=0.0, beta=0.0,
**kwargs
)
See Algorithm 1 of this paper. This version has support for both online L2 (the L2 penalty given in the paper above) and shrinkage-type L2 (which is the addition of an L2 penalty to the loss function). Initialization: $$t = 0$$ $$n_{0} = 0$$ $$\sigma_{0} = 0$$ $$z_{0} = 0$$ Update ($$i$$ is the variable index, $$\alpha$$ is the learning rate): $$t = t + 1$$ $$n_{t,i} = n_{t-1,i} + g_{t,i}^{2}$$ $$\sigma_{t,i} = (\sqrt{n_{t,i}} - \sqrt{n_{t-1,i}}) / \alpha$$ $$z_{t,i} = z_{t-1,i} + g_{t,i} - \sigma_{t,i} w_{t,i}$$ $$w_{t,i} = -\left((\beta + \sqrt{n_{t,i}}) / \alpha + 2 \lambda_{2}\right)^{-1} (z_{t,i} - \mathrm{sgn}(z_{t,i}) \lambda_{1}) \text{ if } |z_{t,i}| > \lambda_{1}, \text{ else } 0$$ Check the documentation for the l2_shrinkage_regularization_strength parameter for more details when shrinkage is enabled, in which case gradient is replaced with gradient_with_shrinkage.
Args
learning_rate A Tensor, floating point value, or a schedule that is a tf.keras.optimizers.schedules.LearningRateSchedule. The learning rate.
learning_rate_power A float value, must be less or equal to zero. Controls how the learning rate decreases during training. Use zero for a fixed learning rate.
initial_accumulator_value The starting value for accumulators. Only zero or positive values are allowed.
l1_regularization_strength A float value, must be greater than or equal to zero.
l2_regularization_strength A float value, must be greater than or equal to zero.
name Optional name prefix for the operations created when applying gradients. Defaults to "Ftrl".
l2_shrinkage_regularization_strength A float value, must be greater than or equal to zero. This differs from L2 above in that the L2 above is a stabilization penalty, whereas this L2 shrinkage is a magnitude penalty. When input is sparse shrinkage will only happen on the active weights.
beta A float value, representing the beta value from the paper.
**kwargs Keyword arguments. Allowed to be one of "clipnorm" or "clipvalue". "clipnorm" (float) clips gradients by norm; "clipvalue" (float) clips gradients by value. Reference: paper
Raises
ValueError in case of any invalid argument. | tensorflow.keras.optimizers.ftrl |
tf.keras.optimizers.get View source on GitHub Retrieves a Keras Optimizer instance. View aliases Main aliases
tf.optimizers.get Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.optimizers.get
tf.keras.optimizers.get(
identifier
)
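For example (a hedged sketch of the accepted identifier forms):
import tensorflow as tf
opt1 = tf.keras.optimizers.get('adam')  # from a string name
opt2 = tf.keras.optimizers.get(
    {'class_name': 'SGD', 'config': {'learning_rate': 0.1}})  # from a config dict
opt3 = tf.keras.optimizers.get(tf.keras.optimizers.RMSprop())  # returned unchanged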
Arguments
identifier Optimizer identifier, one of: a String (the name of an optimizer); a Dictionary (a configuration dictionary); a Keras Optimizer instance (it will be returned unchanged); or a TensorFlow Optimizer instance (it will be wrapped as a Keras Optimizer).
Returns A Keras Optimizer instance.
Raises
ValueError If identifier cannot be interpreted. | tensorflow.keras.optimizers.get |
tf.keras.optimizers.Nadam View source on GitHub Optimizer that implements the NAdam algorithm. Inherits From: Optimizer View aliases Main aliases
tf.optimizers.Nadam Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.optimizers.Nadam
tf.keras.optimizers.Nadam(
learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07,
name='Nadam', **kwargs
)
Much like Adam is essentially RMSprop with momentum, Nadam is Adam with Nesterov momentum.
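A minimal usage sketch (the toy variable and loss are illustrative):
import tensorflow as tf
opt = tf.keras.optimizers.Nadam(learning_rate=0.001)
var = tf.Variable(10.0)
loss = lambda: (var ** 2) / 2.0
opt.minimize(loss, var_list=[var])  # one Nadam update on `var`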
Args
learning_rate A Tensor or a floating point value. The learning rate.
beta_1 A float value or a constant float tensor. The exponential decay rate for the 1st moment estimates.
beta_2 A float value or a constant float tensor. The exponential decay rate for the 2nd moment estimates.
epsilon A small constant for numerical stability.
name Optional name for the operations created when applying gradients. Defaults to "Nadam".
**kwargs Keyword arguments. Allowed to be one of "clipnorm" or "clipvalue". "clipnorm" (float) clips gradients by norm; "clipvalue" (float) clips gradients by value. Reference:
Dozat, 2015.
Raises
ValueError in case of any invalid argument. | tensorflow.keras.optimizers.nadam |
tf.keras.optimizers.Optimizer View source on GitHub Base class for Keras optimizers. View aliases Main aliases
tf.optimizers.Optimizer Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.optimizers.Optimizer
tf.keras.optimizers.Optimizer(
name, gradient_aggregator=None, gradient_transformers=None, **kwargs
)
You should not use this class directly, but instead instantiate one of its subclasses such as tf.keras.optimizers.SGD, tf.keras.optimizers.Adam, etc. Usage # Create an optimizer with the desired parameters.
opt = tf.keras.optimizers.SGD(learning_rate=0.1)
# `loss` is a callable that takes no argument and returns the value
# to minimize.
loss = lambda: 3 * var1 * var1 + 2 * var2 * var2
# In graph mode, returns op that minimizes the loss by updating the listed
# variables.
opt_op = opt.minimize(loss, var_list=[var1, var2])
opt_op.run()
# In eager mode, simply call minimize to update the list of variables.
opt.minimize(loss, var_list=[var1, var2])
Usage in custom training loops In Keras models, sometimes variables are created when the model is first called, instead of construction time. Examples include 1) sequential models without input shape pre-defined, or 2) subclassed models. Pass var_list as callable in these cases. Example: opt = tf.keras.optimizers.SGD(learning_rate=0.1)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(num_hidden, activation='relu'))
model.add(tf.keras.layers.Dense(num_classes, activation='sigmoid'))
loss_fn = lambda: tf.keras.losses.mse(model(input), output)
var_list_fn = lambda: model.trainable_weights
for input, output in data:
opt.minimize(loss_fn, var_list_fn)
Processing gradients before applying them Calling minimize() takes care of both computing the gradients and applying them to the variables. If you want to process the gradients before applying them you can instead use the optimizer in three steps: Compute the gradients with tf.GradientTape. Process the gradients as you wish. Apply the processed gradients with apply_gradients(). Example: # Create an optimizer.
opt = tf.keras.optimizers.SGD(learning_rate=0.1)
# Compute the gradients for a list of variables.
with tf.GradientTape() as tape:
loss = <call_loss_function>
vars = <list_of_variables>
grads = tape.gradient(loss, vars)
# Process the gradients, for example cap them, etc.
# capped_grads = [MyCapper(g) for g in grads]
processed_grads = [process_gradient(g) for g in grads]
# Ask the optimizer to apply the processed gradients.
opt.apply_gradients(zip(processed_grads, vars))
Use with tf.distribute.Strategy
This optimizer class is tf.distribute.Strategy aware, which means it automatically sums gradients across all replicas. To average gradients, you divide your loss by the global batch size, which is done automatically if you use tf.keras built-in training or evaluation loops. See the reduction argument of your loss, which should be set to tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE for averaging or tf.keras.losses.Reduction.SUM for summing without averaging. To aggregate gradients yourself, call apply_gradients with experimental_aggregate_gradients set to False. This is useful if you need to process aggregated gradients. If you are not using these and you want to average gradients, you should use tf.math.reduce_sum to add up your per-example losses and then divide by the global batch size. Note that when using tf.distribute.Strategy, the first component of a tensor's shape is the replica-local batch size, which is off by a factor equal to the number of replicas being used to compute a single step. As a result, using tf.math.reduce_mean will give the wrong answer, resulting in gradients that can be many times too big. Variable Constraints All Keras optimizers respect variable constraints. If a constraint function is passed to any variable, the constraint will be applied to the variable after the gradient has been applied to the variable. Important: If the gradient is a sparse tensor, variable constraints are not supported. Thread Compatibility The entire optimizer is currently thread compatible, not thread-safe. The user needs to perform synchronization if necessary. Slots Many optimizer subclasses, such as Adam and Adagrad, allocate and manage additional variables associated with the variables to train. These are called Slots. Slots have names and you can ask the optimizer for the names of the slots that it uses. Once you have a slot name you can ask the optimizer for the variable it created to hold the slot value. This can be useful if you want to log or debug a training algorithm, report stats about the slots, etc. Hyperparameters These are arguments passed to the optimizer subclass constructor (the __init__ method), and then passed to self._set_hyper(). They can be either regular Python values (like 1.0), tensors, or callables. If they are callable, the callable will be called during apply_gradients() to get the value for the hyperparameter. Hyperparameters can be overwritten through user code: Example: # Create an optimizer with the desired parameters.
opt = tf.keras.optimizers.SGD(learning_rate=0.1)
# `loss` is a callable that takes no argument and returns the value
# to minimize.
loss = lambda: 3 * var1 + 2 * var2
# In eager mode, simply call minimize to update the list of variables.
opt.minimize(loss, var_list=[var1, var2])
# update learning rate
opt.learning_rate = 0.05
opt.minimize(loss, var_list=[var1, var2])
Callable learning rate Optimizer accepts a callable learning rate in two ways. The first way is through built-in or customized tf.keras.optimizers.schedules.LearningRateSchedule. The schedule will be called on each iteration with schedule(iteration), a tf.Variable owned by the optimizer. Example:
var = tf.Variable(np.random.random(size=(1,)))
learning_rate = tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate=.01, decay_steps=20, decay_rate=.1)
opt = tf.keras.optimizers.SGD(learning_rate=learning_rate)
loss = lambda: 3 * var
opt.minimize(loss, var_list=[var])
<tf.Variable...
The second way is through a callable function that does not accept any arguments. Example:
var = tf.Variable(np.random.random(size=(1,)))
def lr_callable():
return .1
opt = tf.keras.optimizers.SGD(learning_rate=lr_callable)
loss = lambda: 3 * var
opt.minimize(loss, var_list=[var])
<tf.Variable...
Creating a custom optimizer If you intend to create your own optimization algorithm, simply inherit from this class and override the following methods:
_resource_apply_dense (update variable given gradient tensor is dense)
_resource_apply_sparse (update variable given gradient tensor is sparse)
_create_slots (if your optimizer algorithm requires additional variables)
get_config (serialization of the optimizer, include all hyper parameters)
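A hedged sketch of a bare-bones SGD-style optimizer built on these hooks (the class name is illustrative, and apply_state handling is simplified to the hyperparameter lookup shown):
import tensorflow as tf
class SimpleSGD(tf.keras.optimizers.Optimizer):
  def __init__(self, learning_rate=0.01, name='SimpleSGD', **kwargs):
    super().__init__(name, **kwargs)
    self._set_hyper('learning_rate', learning_rate)
  def _create_slots(self, var_list):
    pass  # plain SGD needs no extra per-variable state
  def _resource_apply_dense(self, grad, var, apply_state=None):
    lr = tf.cast(self._get_hyper('learning_rate'), var.dtype)
    return var.assign_sub(lr * grad)
  def _resource_apply_sparse(self, grad, var, indices, apply_state=None):
    lr = tf.cast(self._get_hyper('learning_rate'), var.dtype)
    return var.scatter_sub(tf.IndexedSlices(lr * grad, indices))
  def get_config(self):
    config = super().get_config()
    config.update(
        {'learning_rate': self._serialize_hyperparameter('learning_rate')})
    return config
An instance can then be used like any built-in optimizer, e.g. SimpleSGD(0.1).minimize(loss_fn, var_list=[var]).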
Args
name String. The name to use for momentum accumulator weights created by the optimizer.
gradient_aggregator The function to use to aggregate gradients across devices (when using tf.distribute.Strategy). If None, defaults to summing the gradients across devices. The function should accept and return a list of (gradient, variable) tuples.
gradient_transformers Optional. List of functions to use to transform gradients before applying updates to Variables. The functions are applied after gradient_aggregator. The functions should accept and return a list of (gradient, variable) tuples.
**kwargs keyword arguments. Allowed arguments are clipvalue, clipnorm, global_clipnorm. If clipvalue (float) is set, the gradient of each weight is clipped to be no higher than this value. If clipnorm (float) is set, the gradient of each weight is individually clipped so that its norm is no higher than this value. If global_clipnorm (float) is set the gradient of all weights is clipped so that their global norm is no higher than this value.
Raises
ValueError in case of any invalid argument.
Attributes
clipnorm float or None. If set, clips gradients to a maximum norm.
clipvalue float or None. If set, clips gradients to a maximum value.
global_clipnorm float or None. If set, clips gradients to a maximum norm.
iterations Variable. The number of training steps this Optimizer has run.
weights Returns variables of this Optimizer based on the order created. Methods add_slot View source
add_slot(
var, slot_name, initializer='zeros'
)
Add a new slot variable for var. add_weight View source
add_weight(
name, shape, dtype=None, initializer='zeros', trainable=None,
synchronization=tf.VariableSynchronization.AUTO,
aggregation=tf.compat.v1.VariableAggregation.NONE
)
apply_gradients View source
apply_gradients(
grads_and_vars, name=None, experimental_aggregate_gradients=True
)
Apply gradients to variables. This is the second part of minimize(). It returns an Operation that applies gradients. The method sums gradients from all replicas in the presence of tf.distribute.Strategy by default. You can aggregate gradients yourself by passing experimental_aggregate_gradients=False. Example: grads = tape.gradient(loss, vars)
grads = tf.distribute.get_replica_context().all_reduce('sum', grads)
# Processing aggregated gradients.
optimizer.apply_gradients(zip(grads, vars),
experimental_aggregate_gradients=False)
Args
grads_and_vars List of (gradient, variable) pairs.
name Optional name for the returned operation. Defaults to the name passed to the Optimizer constructor.
experimental_aggregate_gradients Whether to sum gradients from different replicas in the presence of tf.distribute.Strategy. If False, it is the user's responsibility to aggregate the gradients. Defaults to True.
Returns An Operation that applies the specified gradients. The iterations will be automatically increased by 1.
Raises
TypeError If grads_and_vars is malformed.
ValueError If none of the variables have gradients.
RuntimeError If called in a cross-replica context. from_config View source
@classmethod
from_config(
config, custom_objects=None
)
Creates an optimizer from its config. This method is the reverse of get_config, capable of instantiating the same optimizer from the config dictionary.
Arguments
config A Python dictionary, typically the output of get_config.
custom_objects A Python dictionary mapping names to additional Python objects used to create this optimizer, such as a function used for a hyperparameter.
Returns An optimizer instance.
get_config View source
@abc.abstractmethod
get_config()
Returns the config of the optimizer. An optimizer config is a Python dictionary (serializable) containing the configuration of an optimizer. The same optimizer can be reinstantiated later (without any saved state) from this configuration.
Returns Python dictionary.
get_gradients View source
get_gradients(
loss, params
)
Returns gradients of loss with respect to params. Should be used only in legacy v1 graph mode.
Arguments
loss Loss tensor.
params List of variables.
Returns List of gradient tensors.
Raises
ValueError In case any gradient cannot be computed (e.g. if gradient function not implemented). get_slot View source
get_slot(
var, slot_name
)
get_slot_names View source
get_slot_names()
A list of names for this optimizer's slots. get_updates View source
get_updates(
loss, params
)
get_weights View source
get_weights()
Returns the current weights of the optimizer. The weights of an optimizer are its state (i.e., variables). This function returns the weight values associated with this optimizer as a list of Numpy arrays. The first value is always the iterations count of the optimizer, followed by the optimizer's state variables in the order they were created. The returned list can in turn be used to load state into similarly parameterized optimizers. For example, the RMSprop optimizer for this simple model returns a list of three values: the iteration count, followed by the root-mean-square value of the kernel and bias of the single Dense layer:
opt = tf.keras.optimizers.RMSprop()
m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])
m.compile(opt, loss='mse')
data = np.arange(100).reshape(5, 20)
labels = np.zeros(5)
print('Training'); results = m.fit(data, labels)
Training ...
len(opt.get_weights())
3
Returns Weights values as a list of numpy arrays.
minimize View source
minimize(
loss, var_list, grad_loss=None, name=None, tape=None
)
Minimize loss by updating var_list. This method simply computes gradient using tf.GradientTape and calls apply_gradients(). If you want to process the gradient before applying then call tf.GradientTape and apply_gradients() explicitly instead of using this function.
Args
loss Tensor or callable. If a callable, loss should take no arguments and return the value to minimize. If a Tensor, the tape argument must be passed.
var_list list or tuple of Variable objects to update to minimize loss, or a callable returning the list or tuple of Variable objects. Use callable when the variable list would otherwise be incomplete before minimize since the variables are created at the first time loss is called.
grad_loss (Optional). A Tensor holding the gradient computed for loss.
name (Optional) str. Name for the returned operation.
tape (Optional) tf.GradientTape. If loss is provided as a Tensor, the tape that computed the loss must be provided.
Returns An Operation that updates the variables in var_list. The iterations will be automatically increased by 1.
Raises
ValueError If some of the variables are not Variable objects. set_weights View source
set_weights(
weights
)
Set the weights of the optimizer. The weights of an optimizer are its state (i.e., variables). This function takes the weight values associated with this optimizer as a list of Numpy arrays. The first value is always the iterations count of the optimizer, followed by the optimizer's state variables in the order they are created. The passed values are used to set the new state of the optimizer. For example, the RMSprop optimizer for this simple model takes a list of three values: the iteration count, followed by the root-mean-square value of the kernel and bias of the single Dense layer:
opt = tf.keras.optimizers.RMSprop()
m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])
m.compile(opt, loss='mse')
data = np.arange(100).reshape(5, 20)
labels = np.zeros(5)
print('Training'); results = m.fit(data, labels)
Training ...
new_weights = [np.array(10), np.ones([20, 10]), np.zeros([10])]
opt.set_weights(new_weights)
opt.iterations
<tf.Variable 'RMSprop/iter:0' shape=() dtype=int64, numpy=10>
Arguments
weights weight values as a list of numpy arrays. variables View source
variables()
Returns variables of this Optimizer based on the order created. | tensorflow.keras.optimizers.optimizer |
tf.keras.optimizers.RMSprop View source on GitHub Optimizer that implements the RMSprop algorithm. Inherits From: Optimizer View aliases Main aliases
tf.optimizers.RMSprop Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.optimizers.RMSprop
tf.keras.optimizers.RMSprop(
learning_rate=0.001, rho=0.9, momentum=0.0, epsilon=1e-07, centered=False,
name='RMSprop', **kwargs
)
The gist of RMSprop is to: maintain a moving (discounted) average of the square of gradients, and divide the gradient by the root of this average. This implementation of RMSprop uses plain momentum, not Nesterov momentum. The centered version additionally maintains a moving average of the gradients, and uses that average to estimate the variance.
Args
learning_rate A Tensor, floating point value, or a schedule that is a tf.keras.optimizers.schedules.LearningRateSchedule, or a callable that takes no arguments and returns the actual value to use. The learning rate. Defaults to 0.001.
rho Discounting factor for the history/coming gradient. Defaults to 0.9.
momentum A scalar or a scalar Tensor. Defaults to 0.0.
epsilon A small constant for numerical stability. This epsilon is "epsilon hat" in the Kingma and Ba paper (in the formula just before Section 2.1), not the epsilon in Algorithm 1 of the paper. Defaults to 1e-7.
centered Boolean. If True, gradients are normalized by the estimated variance of the gradient; if False, by the uncentered second moment. Setting this to True may help with training, but is slightly more expensive in terms of computation and memory. Defaults to False.
name Optional name prefix for the operations created when applying gradients. Defaults to "RMSprop".
**kwargs Keyword arguments. Allowed to be one of "clipnorm" or "clipvalue". "clipnorm" (float) clips gradients by norm; "clipvalue" (float) clips gradients by value. Note that in the dense implementation of this algorithm, variables and their corresponding accumulators (momentum, gradient moving average, square gradient moving average) will be updated even if the gradient is zero (i.e. accumulators will decay, momentum will be applied). The sparse implementation (used when the gradient is an IndexedSlices object, typically because of tf.gather or an embedding lookup in the forward pass) will not update variable slices or their accumulators unless those slices were used in the forward pass (nor is there an "eventual" correction to account for these omitted updates). This leads to more efficient updates for large embedding lookup tables (where most of the slices are not accessed in a particular graph execution), but differs from the published algorithm. Usage:
opt = tf.keras.optimizers.RMSprop(learning_rate=0.1)
var1 = tf.Variable(10.0)
loss = lambda: (var1 ** 2) / 2.0 # d(loss) / d(var1) = var1
step_count = opt.minimize(loss, [var1]).numpy()
var1.numpy()
9.683772
Reference: Hinton, 2012 | tensorflow.keras.optimizers.rmsprop |
Module: tf.keras.optimizers.schedules Public API for tf.keras.optimizers.schedules namespace. View aliases Main aliases
tf.optimizers.schedules Classes class ExponentialDecay: A LearningRateSchedule that uses an exponential decay schedule. class InverseTimeDecay: A LearningRateSchedule that uses an inverse time decay schedule. class LearningRateSchedule: A serializable learning rate decay schedule. class PiecewiseConstantDecay: A LearningRateSchedule that uses a piecewise constant decay schedule. class PolynomialDecay: A LearningRateSchedule that uses a polynomial decay schedule. Functions deserialize(...) serialize(...) | tensorflow.keras.optimizers.schedules |
tf.keras.optimizers.schedules.deserialize View source on GitHub View aliases Main aliases
tf.optimizers.schedules.deserialize Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.optimizers.schedules.deserialize
tf.keras.optimizers.schedules.deserialize(
config, custom_objects=None
) | tensorflow.keras.optimizers.schedules.deserialize |
tf.keras.optimizers.schedules.ExponentialDecay View source on GitHub A LearningRateSchedule that uses an exponential decay schedule. Inherits From: LearningRateSchedule View aliases Main aliases
tf.optimizers.schedules.ExponentialDecay Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.optimizers.schedules.ExponentialDecay
tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate, decay_steps, decay_rate, staircase=False, name=None
)
When training a model, it is often recommended to lower the learning rate as the training progresses. This schedule applies an exponential decay function to an optimizer step, given a provided initial learning rate. The schedule is a 1-arg callable that produces a decayed learning rate when passed the current optimizer step. This can be useful for changing the learning rate value across different invocations of optimizer functions. It is computed as: def decayed_learning_rate(step):
return initial_learning_rate * decay_rate ^ (step / decay_steps)
If the argument staircase is True, then step / decay_steps is an integer division and the decayed learning rate follows a staircase function. You can pass this schedule directly into a tf.keras.optimizers.Optimizer as the learning rate. Example: When fitting a Keras model, decay every 100000 steps with a base of 0.96: initial_learning_rate = 0.1
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate,
decay_steps=100000,
decay_rate=0.96,
staircase=True)
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=lr_schedule),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, epochs=5)
The learning rate schedule is also serializable and deserializable using tf.keras.optimizers.schedules.serialize and tf.keras.optimizers.schedules.deserialize.
Returns A 1-arg callable learning rate schedule that takes the current optimizer step and outputs the decayed learning rate, a scalar Tensor of the same type as initial_learning_rate.
Args
initial_learning_rate A scalar float32 or float64 Tensor or a Python number. The initial learning rate.
decay_steps A scalar int32 or int64 Tensor or a Python number. Must be positive. See the decay computation above.
decay_rate A scalar float32 or float64 Tensor or a Python number. The decay rate.
staircase Boolean. If True, decay the learning rate at discrete intervals.
name String. Optional name of the operation. Defaults to 'ExponentialDecay'. Methods from_config View source
@classmethod
from_config(
config
)
Instantiates a LearningRateSchedule from its config.
Args
config Output of get_config().
Returns A LearningRateSchedule instance.
get_config View source
get_config()
__call__ View source
__call__(
step
)
Call self as a function. | tensorflow.keras.optimizers.schedules.exponentialdecay |
tf.keras.optimizers.schedules.InverseTimeDecay View source on GitHub A LearningRateSchedule that uses an inverse time decay schedule. Inherits From: LearningRateSchedule View aliases Main aliases
tf.optimizers.schedules.InverseTimeDecay Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.optimizers.schedules.InverseTimeDecay
tf.keras.optimizers.schedules.InverseTimeDecay(
initial_learning_rate, decay_steps, decay_rate, staircase=False, name=None
)
When training a model, it is often recommended to lower the learning rate as the training progresses. This schedule applies the inverse decay function to an optimizer step, given a provided initial learning rate. It requires a step value to compute the decayed learning rate. You can just pass a TensorFlow variable that you increment at each training step. The schedule is a 1-arg callable that produces a decayed learning rate when passed the current optimizer step. This can be useful for changing the learning rate value across different invocations of optimizer functions. It is computed as: def decayed_learning_rate(step):
return initial_learning_rate / (1 + decay_rate * step / decay_steps)
or, if staircase is True, as: def decayed_learning_rate(step):
return initial_learning_rate / (1 + decay_rate * floor(step / decay_steps))
You can pass this schedule directly into a tf.keras.optimizers.Optimizer as the learning rate. Example: Fit a Keras model when decaying 1/t with a rate of 0.5: ...
initial_learning_rate = 0.1
decay_steps = 1.0
decay_rate = 0.5
learning_rate_fn = keras.optimizers.schedules.InverseTimeDecay(
initial_learning_rate, decay_steps, decay_rate)
model.compile(optimizer=tf.keras.optimizers.SGD(
learning_rate=learning_rate_fn),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, epochs=5)
Returns A 1-arg callable learning rate schedule that takes the current optimizer step and outputs the decayed learning rate, a scalar Tensor of the same type as initial_learning_rate.
Args
initial_learning_rate A scalar float32 or float64 Tensor or a Python number. The initial learning rate.
decay_steps How often to apply decay.
decay_rate A Python number. The decay rate.
staircase Whether to apply decay in a discrete staircase, as opposed to continuous, fashion.
name String. Optional name of the operation. Defaults to 'InverseTimeDecay'. Methods from_config View source
@classmethod
from_config(
config
)
Instantiates a LearningRateSchedule from its config.
Args
config Output of get_config().
Returns A LearningRateSchedule instance.
get_config View source
get_config()
__call__ View source
__call__(
step
)
Call self as a function. | tensorflow.keras.optimizers.schedules.inversetimedecay |
tf.keras.optimizers.schedules.LearningRateSchedule View source on GitHub A serializable learning rate decay schedule. View aliases Main aliases
tf.optimizers.schedules.LearningRateSchedule Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.optimizers.schedules.LearningRateSchedule LearningRateSchedules can be passed in as the learning rate of optimizers in tf.keras.optimizers. They can be serialized and deserialized using tf.keras.optimizers.schedules.serialize and tf.keras.optimizers.schedules.deserialize. Methods from_config View source
@classmethod
from_config(
config
)
Instantiates a LearningRateSchedule from its config.
Args
config Output of get_config().
Returns A LearningRateSchedule instance.
get_config View source
@abc.abstractmethod
get_config()
__call__ View source
@abc.abstractmethod
__call__(
step
)
Call self as a function. | tensorflow.keras.optimizers.schedules.learningrateschedule |
tf.keras.optimizers.schedules.PiecewiseConstantDecay View source on GitHub A LearningRateSchedule that uses a piecewise constant decay schedule. Inherits From: LearningRateSchedule View aliases Main aliases
tf.optimizers.schedules.PiecewiseConstantDecay Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.optimizers.schedules.PiecewiseConstantDecay
tf.keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries, values, name=None
)
The function returns a 1-arg callable to compute the piecewise constant when passed the current optimizer step. This can be useful for changing the learning rate value across different invocations of optimizer functions. Example: use a learning rate that's 1.0 for the first 100001 steps, 0.5 for the next 10000 steps, and 0.1 for any additional steps. step = tf.Variable(0, trainable=False)
boundaries = [100000, 110000]
values = [1.0, 0.5, 0.1]
learning_rate_fn = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries, values)
# Later, whenever we perform an optimization step, we pass in the step.
learning_rate = learning_rate_fn(step)
You can pass this schedule directly into a tf.keras.optimizers.Optimizer as the learning rate. The learning rate schedule is also serializable and deserializable using tf.keras.optimizers.schedules.serialize and tf.keras.optimizers.schedules.deserialize.
Returns A 1-arg callable learning rate schedule that takes the current optimizer step and outputs the decayed learning rate, a scalar Tensor of the same type as the boundary tensors. The output of the 1-arg function that takes the step is values[0] when step <= boundaries[0], values[1] when step > boundaries[0] and step <= boundaries[1], ..., and values[-1] when step > boundaries[-1].
Args
boundaries A list of Tensors or ints or floats with strictly increasing entries, and with all elements having the same type as the optimizer step.
values A list of Tensors or floats or ints that specifies the values for the intervals defined by boundaries. It should have one more element than boundaries, and all elements should have the same type.
name A string. Optional name of the operation. Defaults to 'PiecewiseConstant'.
Raises
ValueError if the number of elements in the lists do not match. Methods from_config View source
@classmethod
from_config(
config
)
Instantiates a LearningRateSchedule from its config.
Args
config Output of get_config().
Returns A LearningRateSchedule instance.
get_config View source
get_config()
__call__ View source
__call__(
step
)
Call self as a function. | tensorflow.keras.optimizers.schedules.piecewiseconstantdecay |
tf.keras.optimizers.schedules.PolynomialDecay View source on GitHub A LearningRateSchedule that uses a polynomial decay schedule. Inherits From: LearningRateSchedule View aliases Main aliases
tf.optimizers.schedules.PolynomialDecay Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.optimizers.schedules.PolynomialDecay
tf.keras.optimizers.schedules.PolynomialDecay(
initial_learning_rate, decay_steps, end_learning_rate=0.0001, power=1.0,
cycle=False, name=None
)
It is commonly observed that a monotonically decreasing learning rate, whose degree of change is carefully chosen, results in a better performing model. This schedule applies a polynomial decay function to an optimizer step, given a provided initial_learning_rate, to reach an end_learning_rate in the given decay_steps. It requires a step value to compute the decayed learning rate. You can just pass a TensorFlow variable that you increment at each training step. The schedule is a 1-arg callable that produces a decayed learning rate when passed the current optimizer step. This can be useful for changing the learning rate value across different invocations of optimizer functions. It is computed as: def decayed_learning_rate(step):
step = min(step, decay_steps)
return ((initial_learning_rate - end_learning_rate) *
(1 - step / decay_steps) ^ (power)
) + end_learning_rate
If cycle is True then a multiple of decay_steps is used, the first one that is bigger than step. def decayed_learning_rate(step):
decay_steps = decay_steps * ceil(step / decay_steps)
return ((initial_learning_rate - end_learning_rate) *
(1 - step / decay_steps) ^ (power)
) + end_learning_rate
You can pass this schedule directly into a tf.keras.optimizers.Optimizer as the learning rate. Example: Fit a model while decaying from 0.1 to 0.01 in 10000 steps using sqrt (i.e. power=0.5): ...
starter_learning_rate = 0.1
end_learning_rate = 0.01
decay_steps = 10000
learning_rate_fn = tf.keras.optimizers.schedules.PolynomialDecay(
starter_learning_rate,
decay_steps,
end_learning_rate,
power=0.5)
model.compile(optimizer=tf.keras.optimizers.SGD(
learning_rate=learning_rate_fn),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, epochs=5)
The learning rate schedule is also serializable and deserializable using tf.keras.optimizers.schedules.serialize and tf.keras.optimizers.schedules.deserialize.
Returns A 1-arg callable learning rate schedule that takes the current optimizer step and outputs the decayed learning rate, a scalar Tensor of the same type as initial_learning_rate.
Args
initial_learning_rate A scalar float32 or float64 Tensor or a Python number. The initial learning rate.
decay_steps A scalar int32 or int64 Tensor or a Python number. Must be positive. See the decay computation above.
end_learning_rate A scalar float32 or float64 Tensor or a Python number. The minimal end learning rate.
power A scalar float32 or float64 Tensor or a Python number. The power of the polynomial. Defaults to linear, 1.0.
cycle A boolean, whether or not it should cycle beyond decay_steps.
name String. Optional name of the operation. Defaults to 'PolynomialDecay'. Methods from_config View source
@classmethod
from_config(
config
)
Instantiates a LearningRateSchedule from its config.
Args
config Output of get_config().
Returns A LearningRateSchedule instance.
get_config View source
get_config()
__call__ View source
__call__(
step
)
Call self as a function. | tensorflow.keras.optimizers.schedules.polynomialdecay |
tf.keras.optimizers.schedules.serialize View source on GitHub View aliases Main aliases
tf.optimizers.schedules.serialize Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.optimizers.schedules.serialize
tf.keras.optimizers.schedules.serialize(
learning_rate_schedule
) | tensorflow.keras.optimizers.schedules.serialize |
tf.keras.optimizers.serialize View source on GitHub View aliases Main aliases
tf.optimizers.serialize Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.optimizers.serialize
tf.keras.optimizers.serialize(
optimizer
) | tensorflow.keras.optimizers.serialize |
tf.keras.optimizers.SGD View source on GitHub Gradient descent (with momentum) optimizer. Inherits From: Optimizer View aliases Main aliases
tf.optimizers.SGD Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.optimizers.SGD
tf.keras.optimizers.SGD(
learning_rate=0.01, momentum=0.0, nesterov=False, name='SGD', **kwargs
)
Update rule for parameter w with gradient g when momentum is 0: w = w - learning_rate * g
Update rule when momentum is larger than 0: velocity = momentum * velocity - learning_rate * g
w = w + velocity
When nesterov=True, this rule becomes: velocity = momentum * velocity - learning_rate * g
w = w + momentum * velocity - learning_rate * g
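The following is a minimal NumPy sketch of a single momentum update, included only to make the rule above concrete (it is an illustration, not the optimizer's implementation):
import numpy as np
learning_rate, momentum = 0.1, 0.9
w, velocity = 1.0, 0.0
g = w  # gradient of the loss (w ** 2) / 2 with respect to w
velocity = momentum * velocity - learning_rate * g
w = w + velocity  # the first step changes w by -learning_rate * g, i.e. by -0.1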
Args
learning_rate A Tensor, floating point value, or a schedule that is a tf.keras.optimizers.schedules.LearningRateSchedule, or a callable that takes no arguments and returns the actual value to use. The learning rate. Defaults to 0.01.
momentum float hyperparameter >= 0 that accelerates gradient descent in the relevant direction and dampens oscillations. Defaults to 0, i.e., vanilla gradient descent.
nesterov boolean. Whether to apply Nesterov momentum. Defaults to False.
name Optional name prefix for the operations created when applying gradients. Defaults to "SGD".
**kwargs Keyword arguments. Allowed to be one of "clipnorm" or "clipvalue". "clipnorm" (float) clips gradients by norm; "clipvalue" (float) clips gradients by value. Usage:
opt = tf.keras.optimizers.SGD(learning_rate=0.1)
var = tf.Variable(1.0)
loss = lambda: (var ** 2)/2.0 # d(loss)/d(var1) = var1
step_count = opt.minimize(loss, [var]).numpy()
# Step is `- learning_rate * grad`
var.numpy()
0.9
opt = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9)
var = tf.Variable(1.0)
val0 = var.value()
loss = lambda: (var ** 2)/2.0 # d(loss)/d(var1) = var1
# First step is `- learning_rate * grad`
step_count = opt.minimize(loss, [var]).numpy()
val1 = var.value()
(val0 - val1).numpy()
0.1
# On later steps, step-size increases because of momentum
step_count = opt.minimize(loss, [var]).numpy()
val2 = var.value()
(val1 - val2).numpy()
0.18
Reference: For nesterov=True, see Sutskever et al., 2013.
Raises
ValueError in case of any invalid argument. | tensorflow.keras.optimizers.sgd |
Module: tf.keras.preprocessing Keras data preprocessing utils. Modules image module: Set of tools for real-time data augmentation on image data. sequence module: Utilities for preprocessing sequence data. text module: Utilities for text input preprocessing. Functions image_dataset_from_directory(...): Generates a tf.data.Dataset from image files in a directory. text_dataset_from_directory(...): Generates a tf.data.Dataset from text files in a directory. timeseries_dataset_from_array(...): Creates a dataset of sliding windows over a timeseries provided as array. | tensorflow.keras.preprocessing |
Module: tf.keras.preprocessing.image Set of tools for real-time data augmentation on image data. Classes class DirectoryIterator: Iterator capable of reading images from a directory on disk. class ImageDataGenerator: Generate batches of tensor image data with real-time data augmentation. class Iterator: Base class for image data iterators. class NumpyArrayIterator: Iterator yielding data from a Numpy array. Functions apply_affine_transform(...): Applies an affine transformation specified by the parameters given. apply_brightness_shift(...): Performs a brightness shift. apply_channel_shift(...): Performs a channel shift. array_to_img(...): Converts a 3D Numpy array to a PIL Image instance. img_to_array(...): Converts a PIL Image instance to a Numpy array. load_img(...): Loads an image into PIL format. random_brightness(...): Performs a random brightness shift. random_channel_shift(...): Performs a random channel shift. random_rotation(...): Performs a random rotation of a Numpy image tensor. random_shear(...): Performs a random spatial shear of a Numpy image tensor. random_shift(...): Performs a random spatial shift of a Numpy image tensor. random_zoom(...): Performs a random spatial zoom of a Numpy image tensor. save_img(...): Saves an image stored as a Numpy array to a path or file object. smart_resize(...): Resize images to a target size without aspect ratio distortion. | tensorflow.keras.preprocessing.image |
tf.keras.preprocessing.image.apply_affine_transform View source on GitHub Applies an affine transformation specified by the parameters given. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.image.apply_affine_transform
tf.keras.preprocessing.image.apply_affine_transform(
x, theta=0, tx=0, ty=0, shear=0, zx=1, zy=1, row_axis=0, col_axis=1,
channel_axis=2, fill_mode='nearest', cval=0.0, order=1
)
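Usage (a minimal sketch; the random array stands in for a real image, and SciPy is assumed to be installed for the underlying transform):
import numpy as np
img = np.random.random(size=(100, 100, 3))  # (rows, cols, channels), matching the default axis arguments
rotated = tf.keras.preprocessing.image.apply_affine_transform(img, theta=30)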
Arguments
x 2D numpy array, single image.
theta Rotation angle in degrees.
tx Width shift.
ty Height shift.
shear Shear angle in degrees.
zx Zoom in x direction.
zy Zoom in y direction.
row_axis Index of axis for rows in the input image.
col_axis Index of axis for columns in the input image.
channel_axis Index of axis for channels in the input image.
fill_mode Points outside the boundaries of the input are filled according to the given mode (one of {'constant', 'nearest', 'reflect', 'wrap'}).
cval Value used for points outside the boundaries of the input if mode='constant'.
order int, order of interpolation
Returns The transformed version of the input. | tensorflow.keras.preprocessing.image.apply_affine_transform |
tf.keras.preprocessing.image.apply_brightness_shift View source on GitHub Performs a brightness shift. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.image.apply_brightness_shift
tf.keras.preprocessing.image.apply_brightness_shift(
x, brightness
)
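Usage (a minimal sketch; PIL is assumed to be installed, and the random array stands in for a real image):
import numpy as np
img = np.random.random(size=(100, 100, 3)) * 255
shifted = tf.keras.preprocessing.image.apply_brightness_shift(img, brightness=0.8)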
Arguments
x Input tensor. Must be 3D.
brightness Float. The new brightness value.
Returns Numpy image tensor.
Raises ValueError if brightness_range isn't a tuple. | tensorflow.keras.preprocessing.image.apply_brightness_shift |
tf.keras.preprocessing.image.apply_channel_shift View source on GitHub Performs a channel shift. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.image.apply_channel_shift
tf.keras.preprocessing.image.apply_channel_shift(
x, intensity, channel_axis=0
)
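Usage (a minimal sketch; the channels-first random array is illustrative):
import numpy as np
img = np.random.random(size=(3, 100, 100))  # channels along axis 0, matching the default channel_axis
shifted = tf.keras.preprocessing.image.apply_channel_shift(img, intensity=0.05, channel_axis=0)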
Arguments
x Input tensor. Must be 3D.
intensity Transformation intensity.
channel_axis Index of axis for channels in the input tensor.
Returns Numpy image tensor. | tensorflow.keras.preprocessing.image.apply_channel_shift |
tf.keras.preprocessing.image.array_to_img View source on GitHub Converts a 3D Numpy array to a PIL Image instance. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.image.array_to_img
tf.keras.preprocessing.image.array_to_img(
x, data_format=None, scale=True, dtype=None
)
Usage: from PIL import Image
img = np.random.random(size=(100, 100, 3))
pil_img = tf.keras.preprocessing.image.array_to_img(img)
Arguments
x Input Numpy array.
data_format Image data format, can be either "channels_first" or "channels_last". Defaults to None, in which case the global setting tf.keras.backend.image_data_format() is used (unless you changed it, it defaults to "channels_last").
scale Whether to rescale image values to be within [0, 255]. Defaults to True.
dtype Dtype to use. Defaults to None, in which case the global setting tf.keras.backend.floatx() is used (unless you changed it, it defaults to "float32").
Returns A PIL Image instance.
Raises
ImportError if PIL is not available.
ValueError if invalid x or data_format is passed. | tensorflow.keras.preprocessing.image.array_to_img |
tf.keras.preprocessing.image.DirectoryIterator View source on GitHub Iterator capable of reading images from a directory on disk. Inherits From: Iterator, Sequence View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.image.DirectoryIterator
tf.keras.preprocessing.image.DirectoryIterator(
directory, image_data_generator, target_size=(256, 256),
color_mode='rgb', classes=None, class_mode='categorical',
batch_size=32, shuffle=True, seed=None, data_format=None, save_to_dir=None,
save_prefix='', save_format='png', follow_links=False,
subset=None, interpolation='nearest', dtype=None
)
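A DirectoryIterator is usually obtained from ImageDataGenerator.flow_from_directory rather than constructed directly. A minimal sketch (the 'data/train' path is a placeholder for a directory with one subdirectory per class):
datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)
it = datagen.flow_from_directory('data/train', target_size=(256, 256), batch_size=32)
x_batch, y_batch = next(it)  # x_batch: (32, 256, 256, 3); y_batch: one-hot labels by default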
Arguments
directory Path to the directory to read images from. Each subdirectory in this directory will be considered to contain images from one class, or alternatively you could specify class subdirectories via the classes argument.
image_data_generator Instance of ImageDataGenerator to use for random transformations and normalization.
target_size tuple of integers, dimensions to resize input images to.
color_mode One of "rgb", "rgba", "grayscale". Color mode to read images.
classes Optional list of strings, names of subdirectories containing images from each class (e.g. ["dogs", "cats"]). It will be computed automatically if not set.
class_mode Mode for yielding the targets: "binary": binary targets (if there are only two classes), "categorical": categorical targets, "sparse": integer targets, "input": targets are images identical to input images (mainly used to work with autoencoders), None: no targets get yielded (only input images are yielded).
batch_size Integer, size of a batch.
shuffle Boolean, whether to shuffle the data between epochs.
seed Random seed for data shuffling.
data_format String, one of channels_first, channels_last.
save_to_dir Optional directory where to save the pictures being yielded, in a viewable format. This is useful for visualizing the random transformations being applied, for debugging purposes.
save_prefix String prefix to use for saving sample images (if save_to_dir is set).
save_format Format to use for saving sample images (if save_to_dir is set).
subset Subset of data ("training" or "validation") if validation_split is set in ImageDataGenerator.
interpolation Interpolation method used to resample the image if the target size is different from that of the loaded image. Supported methods are "nearest", "bilinear", and "bicubic". If PIL version 1.1.3 or newer is installed, "lanczos" is also supported. If PIL version 3.4.0 or newer is installed, "box" and "hamming" are also supported. By default, "nearest" is used.
dtype Dtype to use for generated arrays.
Attributes
filepaths List of absolute paths to image files
labels Class labels of every observation
sample_weight
Methods next View source
next()
For python 2.x.
Returns The next batch.
on_epoch_end View source
on_epoch_end()
reset View source
reset()
set_processing_attrs View source
set_processing_attrs(
image_data_generator, target_size, color_mode, data_format, save_to_dir,
save_prefix, save_format, subset, interpolation
)
Sets attributes to use later for processing files into a batch.
Arguments
image_data_generator Instance of ImageDataGenerator to use for random transformations and normalization.
target_size tuple of integers, dimensions to resize input images to.
color_mode One of "rgb", "rgba", "grayscale". Color mode to read images.
data_format String, one of channels_first, channels_last.
save_to_dir Optional directory where to save the pictures being yielded, in a viewable format. This is useful for visualizing the random transformations being applied, for debugging purposes.
save_prefix String prefix to use for saving sample images (if save_to_dir is set).
save_format Format to use for saving sample images (if save_to_dir is set).
subset Subset of data ("training" or "validation") if validation_split is set in ImageDataGenerator.
interpolation Interpolation method used to resample the image if the target size is different from that of the loaded image. Supported methods are "nearest", "bilinear", and "bicubic". If PIL version 1.1.3 or newer is installed, "lanczos" is also supported. If PIL version 3.4.0 or newer is installed, "box" and "hamming" are also supported. By default, "nearest" is used. __getitem__ View source
__getitem__(
idx
)
__iter__ View source
__iter__()
__len__ View source
__len__()
Class Variables
allowed_class_modes
white_list_formats | tensorflow.keras.preprocessing.image.directoryiterator |
tf.keras.preprocessing.image.ImageDataGenerator View source on GitHub Generate batches of tensor image data with real-time data augmentation. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.image.ImageDataGenerator
tf.keras.preprocessing.image.ImageDataGenerator(
featurewise_center=False, samplewise_center=False,
featurewise_std_normalization=False, samplewise_std_normalization=False,
zca_whitening=False, zca_epsilon=1e-06, rotation_range=0, width_shift_range=0.0,
height_shift_range=0.0, brightness_range=None, shear_range=0.0, zoom_range=0.0,
channel_shift_range=0.0, fill_mode='nearest', cval=0.0,
horizontal_flip=False, vertical_flip=False, rescale=None,
preprocessing_function=None, data_format=None, validation_split=0.0, dtype=None
)
The data will be looped over (in batches).
Arguments
featurewise_center Boolean. Set input mean to 0 over the dataset, feature-wise.
samplewise_center Boolean. Set each sample mean to 0.
featurewise_std_normalization Boolean. Divide inputs by std of the dataset, feature-wise.
samplewise_std_normalization Boolean. Divide each input by its std.
zca_epsilon epsilon for ZCA whitening. Default is 1e-6.
zca_whitening Boolean. Apply ZCA whitening.
rotation_range Int. Degree range for random rotations.
width_shift_range Float, 1-D array-like or int float: fraction of total width, if < 1, or pixels if >= 1. 1-D array-like: random elements from the array. int: integer number of pixels from interval (-width_shift_range, +width_shift_range)
With width_shift_range=2 possible values are integers [-1, 0, +1], same as with width_shift_range=[-1, 0, +1], while with width_shift_range=1.0 possible values are floats in the interval [-1.0, +1.0).
height_shift_range Float, 1-D array-like or int float: fraction of total height, if < 1, or pixels if >= 1. 1-D array-like: random elements from the array. int: integer number of pixels from interval (-height_shift_range, +height_shift_range)
With height_shift_range=2 possible values are integers [-1, 0, +1], same as with height_shift_range=[-1, 0, +1], while with height_shift_range=1.0 possible values are floats in the interval [-1.0, +1.0).
brightness_range Tuple or list of two floats. Range for picking a brightness shift value from.
shear_range Float. Shear Intensity (Shear angle in counter-clockwise direction in degrees)
zoom_range Float or [lower, upper]. Range for random zoom. If a float, [lower, upper] = [1-zoom_range, 1+zoom_range].
channel_shift_range Float. Range for random channel shifts.
fill_mode One of {"constant", "nearest", "reflect" or "wrap"}. Default is 'nearest'. Points outside the boundaries of the input are filled according to the given mode: 'constant': kkkkkkkk|abcd|kkkkkkkk (cval=k) 'nearest': aaaaaaaa|abcd|dddddddd 'reflect': abcddcba|abcd|dcbaabcd 'wrap': abcdabcd|abcd|abcdabcd
cval Float or Int. Value used for points outside the boundaries when fill_mode = "constant".
horizontal_flip Boolean. Randomly flip inputs horizontally.
vertical_flip Boolean. Randomly flip inputs vertically.
rescale rescaling factor. Defaults to None. If None or 0, no rescaling is applied, otherwise we multiply the data by the value provided (after applying all other transformations).
preprocessing_function function that will be applied on each input. The function will run after the image is resized and augmented. The function should take one argument: one image (Numpy tensor with rank 3), and should output a Numpy tensor with the same shape.
data_format Image data format, either "channels_first" or "channels_last". "channels_last" mode means that the images should have shape (samples, height, width, channels), "channels_first" mode means that the images should have shape (samples, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last".
validation_split Float. Fraction of images reserved for validation (strictly between 0 and 1).
dtype Dtype to use for the generated arrays. Examples: Example of using .flow(x, y): (x_train, y_train), (x_test, y_test) = cifar10.load_data()
y_train = np_utils.to_categorical(y_train, num_classes)
y_test = np_utils.to_categorical(y_test, num_classes)
datagen = ImageDataGenerator(
featurewise_center=True,
featurewise_std_normalization=True,
rotation_range=20,
width_shift_range=0.2,
height_shift_range=0.2,
horizontal_flip=True)
# compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied)
datagen.fit(x_train)
# fits the model on batches with real-time data augmentation:
model.fit(datagen.flow(x_train, y_train, batch_size=32),
steps_per_epoch=len(x_train) / 32, epochs=epochs)
# here's a more "manual" example
for e in range(epochs):
print('Epoch', e)
batches = 0
for x_batch, y_batch in datagen.flow(x_train, y_train, batch_size=32):
model.fit(x_batch, y_batch)
batches += 1
if batches >= len(x_train) / 32:
# we need to break the loop by hand because
# the generator loops indefinitely
break
Example of using .flow_from_directory(directory): train_datagen = ImageDataGenerator(
rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
'data/train',
target_size=(150, 150),
batch_size=32,
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
'data/validation',
target_size=(150, 150),
batch_size=32,
class_mode='binary')
model.fit(
train_generator,
steps_per_epoch=2000,
epochs=50,
validation_data=validation_generator,
validation_steps=800)
Example of transforming images and masks together. # we create two instances with the same arguments
data_gen_args = dict(featurewise_center=True,
featurewise_std_normalization=True,
rotation_range=90,
width_shift_range=0.1,
height_shift_range=0.1,
zoom_range=0.2)
image_datagen = ImageDataGenerator(**data_gen_args)
mask_datagen = ImageDataGenerator(**data_gen_args)
# Provide the same seed and keyword arguments to the fit and flow methods
seed = 1
image_datagen.fit(images, augment=True, seed=seed)
mask_datagen.fit(masks, augment=True, seed=seed)
image_generator = image_datagen.flow_from_directory(
'data/images',
class_mode=None,
seed=seed)
mask_generator = mask_datagen.flow_from_directory(
'data/masks',
class_mode=None,
seed=seed)
# combine generators into one which yields image and masks
train_generator = zip(image_generator, mask_generator)
model.fit(
train_generator,
steps_per_epoch=2000,
epochs=50)
Methods apply_transform View source
apply_transform(
x, transform_parameters
)
Applies a transformation to an image according to given parameters.
Arguments
x 3D tensor, single image.
transform_parameters Dictionary with string - parameter pairs describing the transformation. Currently, the following parameters from the dictionary are used:
'theta': Float. Rotation angle in degrees.
'tx': Float. Shift in the x direction.
'ty': Float. Shift in the y direction.
'shear': Float. Shear angle in degrees.
'zx': Float. Zoom in the x direction.
'zy': Float. Zoom in the y direction.
'flip_horizontal': Boolean. Horizontal flip.
'flip_vertical': Boolean. Vertical flip.
'channel_shift_intensity': Float. Channel shift intensity.
'brightness': Float. Brightness shift intensity.
Returns A transformed version of the input (same shape).
fit View source
fit(
x, augment=False, rounds=1, seed=None
)
Fits the data generator to some sample data. This computes the internal data stats related to the data-dependent transformations, based on an array of sample data. Only required if featurewise_center or featurewise_std_normalization or zca_whitening are set to True. When rescale is set to a value, rescaling is applied to sample data before computing the internal data stats.
Arguments
x Sample data. Should have rank 4. In case of grayscale data, the channels axis should have value 1, in case of RGB data, it should have value 3, and in case of RGBA data, it should have value 4.
augment Boolean (default: False). Whether to fit on randomly augmented samples.
rounds Int (default: 1). If using data augmentation (augment=True), this is how many augmentation passes over the data to use.
seed Int (default: None). Random seed. flow View source
flow(
x, y=None, batch_size=32, shuffle=True, sample_weight=None, seed=None,
save_to_dir=None, save_prefix='', save_format='png',
subset=None
)
Takes data & label arrays, generates batches of augmented data.
Arguments
x Input data. Numpy array of rank 4 or a tuple. If tuple, the first element should contain the images and the second element another numpy array or a list of numpy arrays that gets passed to the output without any modifications. Can be used to feed the model miscellaneous data along with the images. In case of grayscale data, the channels axis of the image array should have value 1, in case of RGB data, it should have value 3, and in case of RGBA data, it should have value 4.
y Labels.
batch_size Int (default: 32).
shuffle Boolean (default: True).
sample_weight Sample weights.
seed Int (default: None).
save_to_dir None or str (default: None). This allows you to optionally specify a directory to which to save the augmented pictures being generated (useful for visualizing what you are doing).
save_prefix Str (default: ''). Prefix to use for filenames of saved pictures (only relevant if save_to_dir is set).
save_format one of "png", "jpeg" (only relevant if save_to_dir is set). Default: "png".
subset Subset of data ("training" or "validation") if validation_split is set in ImageDataGenerator.
Returns An Iterator yielding tuples of (x, y) where x is a numpy array of image data (in the case of a single image input) or a list of numpy arrays (in the case with additional inputs) and y is a numpy array of corresponding labels. If 'sample_weight' is not None, the yielded tuples are of the form (x, y, sample_weight). If y is None, only the numpy array x is returned.
flow_from_dataframe View source
flow_from_dataframe(
dataframe, directory=None, x_col='filename', y_col='class',
weight_col=None, target_size=(256, 256), color_mode='rgb',
classes=None, class_mode='categorical', batch_size=32, shuffle=True,
seed=None, save_to_dir=None, save_prefix='',
save_format='png', subset=None, interpolation='nearest',
validate_filenames=True, **kwargs
)
Takes the dataframe and the path to a directory + generates batches. The generated batches contain augmented/normalized data. A simple tutorial can be found here.
Arguments
dataframe Pandas dataframe containing the filepaths relative to directory (or absolute paths if directory is None) of the images in a string column. It should include other column/s depending on the class_mode: - if class_mode is "categorical" (default value) it must include the y_col column with the class/es of each image. Values in column can be string/list/tuple if a single class or list/tuple if multiple classes. - if class_mode is "binary" or "sparse" it must include the given y_col column with class values as strings. - if class_mode is "raw" or "multi_output" it should contain the columns specified in y_col. - if class_mode is "input" or None no extra column is needed.
directory string, path to the directory to read images from. If None, data in x_col column should be absolute paths.
x_col string, column in dataframe that contains the filenames (or absolute paths if directory is None).
y_col string or list, column/s in dataframe that has the target data.
weight_col string, column in dataframe that contains the sample weights. Default: None.
target_size tuple of integers (height, width), default: (256, 256). The dimensions to which all images found will be resized.
color_mode one of "grayscale", "rgb", "rgba". Default: "rgb". Whether the images will be converted to have 1, 3, or 4 color channels.
classes optional list of classes (e.g. ['dogs', 'cats']). Default is None. If not provided, the list of classes will be automatically inferred from y_col (and the order of the classes, which will map to the label indices, will be alphanumeric). The dictionary containing the mapping from class names to class indices can be obtained via the attribute class_indices.
class_mode one of "binary", "categorical", "input", "multi_output", "raw", "sparse" or None. Default: "categorical". Mode for yielding the targets:
"binary": 1D numpy array of binary labels,
"categorical": 2D numpy array of one-hot encoded labels. Supports multi-label output.
"input": images identical to input images (mainly used to work with autoencoders),
"multi_output": list with the values of the different columns,
"raw": numpy array of values in y_col column(s),
"sparse": 1D numpy array of integer labels, - None, no targets are returned (the generator will only yield batches of image data, which is useful to use in model.predict()).
batch_size size of the batches of data (default: 32).
shuffle whether to shuffle the data (default: True)
seed optional random seed for shuffling and transformations.
save_to_dir None or str (default: None). This allows you to optionally specify a directory to which to save the augmented pictures being generated (useful for visualizing what you are doing).
save_prefix str. Prefix to use for filenames of saved pictures (only relevant if save_to_dir is set).
save_format one of "png", "jpeg" (only relevant if save_to_dir is set). Default: "png".
subset Subset of data ("training" or "validation") if validation_split is set in ImageDataGenerator.
interpolation Interpolation method used to resample the image if the target size is different from that of the loaded image. Supported methods are "nearest", "bilinear", and "bicubic". If PIL version 1.1.3 or newer is installed, "lanczos" is also supported. If PIL version 3.4.0 or newer is installed, "box" and "hamming" are also supported. By default, "nearest" is used.
validate_filenames Boolean, whether to validate image filenames in x_col. If True, invalid images will be ignored. Disabling this option can lead to speed-up in the execution of this function. Defaults to True.
**kwargs legacy arguments for raising deprecation warnings.
Returns A DataFrameIterator yielding tuples of (x, y) where x is a numpy array containing a batch of images with shape (batch_size, *target_size, channels) and y is a numpy array of corresponding labels.
flow_from_directory View source
flow_from_directory(
directory, target_size=(256, 256), color_mode='rgb', classes=None,
class_mode='categorical', batch_size=32, shuffle=True, seed=None,
save_to_dir=None, save_prefix='', save_format='png',
follow_links=False, subset=None, interpolation='nearest'
)
Takes the path to a directory & generates batches of augmented data.
Arguments
directory string, path to the target directory. It should contain one subdirectory per class. Any PNG, JPG, BMP, PPM or TIF images inside each of the subdirectories' directory trees will be included in the generator. See this script for more details.
target_size Tuple of integers (height, width), defaults to (256, 256). The dimensions to which all images found will be resized.
color_mode One of "grayscale", "rgb", "rgba". Default: "rgb". Whether the images will be converted to have 1, 3, or 4 channels.
classes Optional list of class subdirectories (e.g. ['dogs', 'cats']). Default: None. If not provided, the list of classes will be automatically inferred from the subdirectory names/structure under directory, where each subdirectory will be treated as a different class (and the order of the classes, which will map to the label indices, will be alphanumeric). The dictionary containing the mapping from class names to class indices can be obtained via the attribute class_indices.
class_mode One of "categorical", "binary", "sparse", "input", or None. Default: "categorical". Determines the type of label arrays that are returned: - "categorical" will be 2D one-hot encoded labels, - "binary" will be 1D binary labels, "sparse" will be 1D integer labels, - "input" will be images identical to input images (mainly used to work with autoencoders). - If None, no labels are returned (the generator will only yield batches of image data, which is useful to use with model.predict()). Please note that in case of class_mode None, the data still needs to reside in a subdirectory of directory for it to work correctly.
batch_size Size of the batches of data (default: 32).
shuffle Whether to shuffle the data (default: True) If set to False, sorts the data in alphanumeric order.
seed Optional random seed for shuffling and transformations.
save_to_dir None or str (default: None). This allows you to optionally specify a directory to which to save the augmented pictures being generated (useful for visualizing what you are doing).
save_prefix Str. Prefix to use for filenames of saved pictures (only relevant if save_to_dir is set).
save_format One of "png", "jpeg" (only relevant if save_to_dir is set). Default: "png".
follow_links Whether to follow symlinks inside class subdirectories (default: False).
subset Subset of data ("training" or "validation") if validation_split is set in ImageDataGenerator.
interpolation Interpolation method used to resample the image if the target size is different from that of the loaded image. Supported methods are "nearest", "bilinear", and "bicubic". If PIL version 1.1.3 or newer is installed, "lanczos" is also supported. If PIL version 3.4.0 or newer is installed, "box" and "hamming" are also supported. By default, "nearest" is used.
Returns A DirectoryIterator yielding tuples of (x, y) where x is a numpy array containing a batch of images with shape (batch_size, *target_size, channels) and y is a numpy array of corresponding labels.
get_random_transform View source
get_random_transform(
img_shape, seed=None
)
Generates random parameters for a transformation.
Arguments
seed Random seed.
img_shape Tuple of integers. Shape of the image that is transformed.
Returns A dictionary containing randomly chosen parameters describing the transformation.
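A minimal sketch combining get_random_transform with apply_transform (the random image and generator settings are illustrative; SciPy is assumed to be installed for the rotation):
import numpy as np
datagen = tf.keras.preprocessing.image.ImageDataGenerator(rotation_range=20, horizontal_flip=True)
img = np.random.random((100, 100, 3))
params = datagen.get_random_transform(img.shape, seed=1)
augmented = datagen.apply_transform(img, params)  # same shape as img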
random_transform View source
random_transform(
x, seed=None
)
Applies a random transformation to an image.
Arguments
x 3D tensor, single image.
seed Random seed.
Returns A randomly transformed version of the input (same shape).
standardize View source
standardize(
x
)
Applies the normalization configuration in-place to a batch of inputs. x is changed in-place since the function is mainly used internally to standardize images and feed them to your network. If a copy of x would be created instead it would have a significant performance cost. If you want to apply this method without changing the input in-place you can call the method creating a copy before: standardize(np.copy(x))
Arguments
x Batch of inputs to be normalized.
Returns The inputs, normalized. | tensorflow.keras.preprocessing.image.imagedatagenerator |
tf.keras.preprocessing.image.img_to_array View source on GitHub Converts a PIL Image instance to a Numpy array. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.image.img_to_array
tf.keras.preprocessing.image.img_to_array(
img, data_format=None, dtype=None
)
Usage: from PIL import Image
img_data = np.random.random(size=(100, 100, 3))
img = tf.keras.preprocessing.image.array_to_img(img_data)
array = tf.keras.preprocessing.image.img_to_array(img)
Arguments
img Input PIL Image instance.
data_format Image data format, can be either "channels_first" or "channels_last". Defaults to None, in which case the global setting tf.keras.backend.image_data_format() is used (unless you changed it, it defaults to "channels_last").
dtype Dtype to use. Defaults to None, in which case the global setting tf.keras.backend.floatx() is used (unless you changed it, it defaults to "float32").
Returns A 3D Numpy array.
Raises
ValueError if invalid img or data_format is passed. | tensorflow.keras.preprocessing.image.img_to_array |
tf.keras.preprocessing.image.Iterator View source on GitHub Base class for image data iterators. Inherits From: Sequence View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.image.Iterator
tf.keras.preprocessing.image.Iterator(
n, batch_size, shuffle, seed
)
Every Iterator must implement the _get_batches_of_transformed_samples method.
Arguments
n Integer, total number of samples in the dataset to loop over.
batch_size Integer, size of a batch.
shuffle Boolean, whether to shuffle the data between epochs.
seed Random seeding for data shuffling. Methods next View source
next()
For python 2.x.
Returns The next batch.
on_epoch_end View source
on_epoch_end()
reset View source
reset()
__getitem__ View source
__getitem__(
idx
)
__iter__ View source
__iter__()
__len__ View source
__len__()
Class Variables
white_list_formats | tensorflow.keras.preprocessing.image.iterator |
tf.keras.preprocessing.image.load_img View source on GitHub Loads an image into PIL format. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.image.load_img
tf.keras.preprocessing.image.load_img(
path, grayscale=False, color_mode='rgb', target_size=None,
interpolation='nearest'
)
Usage: image = tf.keras.preprocessing.image.load_img(image_path)
input_arr = keras.preprocessing.image.img_to_array(image)
input_arr = np.array([input_arr]) # Convert single image to a batch.
predictions = model.predict(input_arr)
Arguments
path Path to image file.
grayscale DEPRECATED use color_mode="grayscale".
color_mode One of "grayscale", "rgb", "rgba". Default: "rgb". The desired image format.
target_size Either None (default to original size) or tuple of ints (img_height, img_width).
interpolation Interpolation method used to resample the image if the target size is different from that of the loaded image. Supported methods are "nearest", "bilinear", and "bicubic". If PIL version 1.1.3 or newer is installed, "lanczos" is also supported. If PIL version 3.4.0 or newer is installed, "box" and "hamming" are also supported. By default, "nearest" is used.
Returns A PIL Image instance.
Raises
ImportError if PIL is not available.
ValueError if interpolation method is not supported. | tensorflow.keras.preprocessing.image.load_img |
tf.keras.preprocessing.image.NumpyArrayIterator View source on GitHub Iterator yielding data from a Numpy array. Inherits From: Iterator, Sequence View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.image.NumpyArrayIterator
tf.keras.preprocessing.image.NumpyArrayIterator(
x, y, image_data_generator, batch_size=32, shuffle=False, sample_weight=None,
seed=None, data_format=None, save_to_dir=None, save_prefix='',
save_format='png', subset=None, dtype=None
)
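A NumpyArrayIterator is usually obtained from ImageDataGenerator.flow rather than constructed directly. A minimal sketch with random data for illustration:
import numpy as np
x = np.random.random((64, 32, 32, 3))
y = np.random.randint(0, 10, size=(64,))
datagen = tf.keras.preprocessing.image.ImageDataGenerator(horizontal_flip=True)
it = datagen.flow(x, y, batch_size=16)
x_batch, y_batch = next(it)  # x_batch shape: (16, 32, 32, 3)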
Arguments
x Numpy array of input data or tuple. If tuple, the second element is either another numpy array or a list of numpy arrays, each of which gets passed through as an output without any modifications.
y Numpy array of targets data.
image_data_generator Instance of ImageDataGenerator to use for random transformations and normalization.
batch_size Integer, size of a batch.
shuffle Boolean, whether to shuffle the data between epochs.
sample_weight Numpy array of sample weights.
seed Random seed for data shuffling.
data_format String, one of channels_first, channels_last.
save_to_dir Optional directory where to save the pictures being yielded, in a viewable format. This is useful for visualizing the random transformations being applied, for debugging purposes.
save_prefix String prefix to use for saving sample images (if save_to_dir is set).
save_format Format to use for saving sample images (if save_to_dir is set).
subset Subset of data ("training" or "validation") if validation_split is set in ImageDataGenerator.
dtype Dtype to use for the generated arrays. Methods next View source
next()
For python 2.x.
Returns The next batch.
on_epoch_end View source
on_epoch_end()
reset View source
reset()
__getitem__ View source
__getitem__(
idx
)
__iter__ View source
__iter__()
__len__ View source
__len__()
Class Variables
white_list_formats | tensorflow.keras.preprocessing.image.numpyarrayiterator |
tf.keras.preprocessing.image.random_brightness View source on GitHub Performs a random brightness shift. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.image.random_brightness
tf.keras.preprocessing.image.random_brightness(
x, brightness_range
)
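Usage (a minimal sketch; PIL is assumed to be installed, and the random array stands in for a real image):
import numpy as np
img = np.random.random(size=(100, 100, 3)) * 255
out = tf.keras.preprocessing.image.random_brightness(img, brightness_range=(0.5, 1.5))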
Arguments
x Input tensor. Must be 3D.
brightness_range Tuple of floats; brightness range.
Returns Numpy image tensor.
Raises ValueError if brightness_range isn't a tuple. | tensorflow.keras.preprocessing.image.random_brightness |
tf.keras.preprocessing.image.random_channel_shift View source on GitHub Performs a random channel shift. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.image.random_channel_shift
tf.keras.preprocessing.image.random_channel_shift(
x, intensity_range, channel_axis=0
)
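Usage (a minimal sketch; the channels-first random array is illustrative):
import numpy as np
img = np.random.random(size=(3, 100, 100))  # channels along axis 0
out = tf.keras.preprocessing.image.random_channel_shift(img, intensity_range=0.05, channel_axis=0)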
Arguments
x Input tensor. Must be 3D.
intensity_range Transformation intensity.
channel_axis Index of axis for channels in the input tensor.
Returns Numpy image tensor. | tensorflow.keras.preprocessing.image.random_channel_shift |
tf.keras.preprocessing.image.random_rotation View source on GitHub Performs a random rotation of a Numpy image tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.image.random_rotation
tf.keras.preprocessing.image.random_rotation(
x, rg, row_axis=1, col_axis=2, channel_axis=0, fill_mode='nearest',
cval=0.0, interpolation_order=1
)
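Usage (a minimal sketch; the axis arguments are passed explicitly for a (height, width, channels) image, and SciPy is assumed to be installed for the underlying transform):
import numpy as np
img = np.random.random(size=(100, 100, 3))
out = tf.keras.preprocessing.image.random_rotation(
    img, rg=40, row_axis=0, col_axis=1, channel_axis=2)  # rotate by up to 40 degrees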
Arguments
x Input tensor. Must be 3D.
rg Rotation range, in degrees.
row_axis Index of axis for rows in the input tensor.
col_axis Index of axis for columns in the input tensor.
channel_axis Index of axis for channels in the input tensor.
fill_mode Points outside the boundaries of the input are filled according to the given mode (one of {'constant', 'nearest', 'reflect', 'wrap'}).
cval Value used for points outside the boundaries of the input if mode='constant'.
interpolation_order int, order of spline interpolation. see ndimage.interpolation.affine_transform
Returns Rotated Numpy image tensor. | tensorflow.keras.preprocessing.image.random_rotation |
tf.keras.preprocessing.image.random_shear View source on GitHub Performs a random spatial shear of a Numpy image tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.image.random_shear
tf.keras.preprocessing.image.random_shear(
x, intensity, row_axis=1, col_axis=2, channel_axis=0,
fill_mode='nearest', cval=0.0, interpolation_order=1
)
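Usage (a minimal sketch, with the same axis conventions as the random_rotation example above):
import numpy as np
img = np.random.random(size=(100, 100, 3))
out = tf.keras.preprocessing.image.random_shear(
    img, intensity=20, row_axis=0, col_axis=1, channel_axis=2)  # shear angle of up to 20 degrees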
Arguments
x Input tensor. Must be 3D.
intensity Transformation intensity in degrees.
row_axis Index of axis for rows in the input tensor.
col_axis Index of axis for columns in the input tensor.
channel_axis Index of axis for channels in the input tensor.
fill_mode Points outside the boundaries of the input are filled according to the given mode (one of {'constant', 'nearest', 'reflect', 'wrap'}).
cval Value used for points outside the boundaries of the input if mode='constant'.
interpolation_order int, order of spline interpolation. see ndimage.interpolation.affine_transform
Returns Sheared Numpy image tensor. | tensorflow.keras.preprocessing.image.random_shear |
tf.keras.preprocessing.image.random_shift View source on GitHub Performs a random spatial shift of a Numpy image tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.image.random_shift
tf.keras.preprocessing.image.random_shift(
x, wrg, hrg, row_axis=1, col_axis=2, channel_axis=0,
fill_mode='nearest', cval=0.0, interpolation_order=1
)
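Usage (a minimal sketch; wrg and hrg are fractions of the width and height):
import numpy as np
img = np.random.random(size=(100, 100, 3))
out = tf.keras.preprocessing.image.random_shift(
    img, wrg=0.1, hrg=0.1, row_axis=0, col_axis=1, channel_axis=2)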
Arguments
x Input tensor. Must be 3D.
wrg Width shift range, as a float fraction of the width.
hrg Height shift range, as a float fraction of the height.
row_axis Index of axis for rows in the input tensor.
col_axis Index of axis for columns in the input tensor.
channel_axis Index of axis for channels in the input tensor.
fill_mode Points outside the boundaries of the input are filled according to the given mode (one of {'constant', 'nearest', 'reflect', 'wrap'}).
cval Value used for points outside the boundaries of the input if mode='constant'.
interpolation_order int, order of spline interpolation. see ndimage.interpolation.affine_transform
Returns Shifted Numpy image tensor. | tensorflow.keras.preprocessing.image.random_shift |
tf.keras.preprocessing.image.random_zoom View source on GitHub Performs a random spatial zoom of a Numpy image tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.image.random_zoom
tf.keras.preprocessing.image.random_zoom(
x, zoom_range, row_axis=1, col_axis=2, channel_axis=0,
fill_mode='nearest', cval=0.0, interpolation_order=1
)
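Usage (a minimal sketch; zoom_range must be a two-element tuple):
import numpy as np
img = np.random.random(size=(100, 100, 3))
out = tf.keras.preprocessing.image.random_zoom(
    img, zoom_range=(0.8, 1.2), row_axis=0, col_axis=1, channel_axis=2)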
Arguments
x Input tensor. Must be 3D.
zoom_range Tuple of floats; zoom range for width and height.
row_axis Index of axis for rows in the input tensor.
col_axis Index of axis for columns in the input tensor.
channel_axis Index of axis for channels in the input tensor.
fill_mode Points outside the boundaries of the input are filled according to the given mode (one of {'constant', 'nearest', 'reflect', 'wrap'}).
cval Value used for points outside the boundaries of the input if mode='constant'.
interpolation_order int, order of spline interpolation. see ndimage.interpolation.affine_transform
Returns Zoomed Numpy image tensor.
Raises
ValueError if zoom_range isn't a tuple. | tensorflow.keras.preprocessing.image.random_zoom |
tf.keras.preprocessing.image.save_img View source on GitHub Saves an image stored as a Numpy array to a path or file object. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.image.save_img
tf.keras.preprocessing.image.save_img(
path, x, data_format=None, file_format=None, scale=True, **kwargs
)
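Usage (a minimal sketch; PIL is assumed to be installed and 'sample.png' is a placeholder path):
import numpy as np
img = np.random.random(size=(100, 100, 3))
tf.keras.preprocessing.image.save_img('sample.png', img)  # file format inferred from the extension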
Arguments
path Path or file object.
x Numpy array.
data_format Image data format, either "channels_first" or "channels_last".
file_format Optional file format override. If omitted, the format to use is determined from the filename extension. If a file object was used instead of a filename, this parameter should always be used.
scale Whether to rescale image values to be within [0, 255].
**kwargs Additional keyword arguments passed to PIL.Image.save(). | tensorflow.keras.preprocessing.image.save_img |
tf.keras.preprocessing.image.smart_resize Resize images to a target size without aspect ratio distortion.
tf.keras.preprocessing.image.smart_resize(
x, size, interpolation='bilinear'
)
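Usage (a minimal sketch, using the (340, 500) input size discussed below):
img = tf.random.uniform((340, 500, 3))
out = tf.keras.preprocessing.image.smart_resize(img, size=(200, 200))
# out has shape (200, 200, 3); a TF tensor in gives a TF tensor out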
TensorFlow image datasets typically yield images that have each a different size. However, these images need to be batched before they can be processed by Keras layers. To be batched, images need to share the same height and width. You could simply do: size = (200, 200)
ds = ds.map(lambda img: tf.image.resize(img, size))
However, if you do this, you distort the aspect ratio of your images, since in general they do not all have the same aspect ratio as size. This is fine in many cases, but not always (e.g. for GANs this can be a problem). Note that passing the argument preserve_aspect_ratio=True to resize will preserve the aspect ratio, but at the cost of no longer respecting the provided target size. Because tf.image.resize doesn't crop images, your output images will still have different sizes. This calls for: size = (200, 200)
ds = ds.map(lambda img: smart_resize(img, size))
Your output images will actually be (200, 200), and will not be distorted. Instead, the parts of the image that do not fit within the target size get cropped out. The resizing process is:
1. Take the largest centered crop of the image that has the same aspect ratio as the target size. For instance, if size=(200, 200) and the input image has size (340, 500), we take a crop of (340, 340) centered along the width.
2. Resize the cropped image to the target size. In the example above, we resize the (340, 340) crop to (200, 200).
Arguments
x Input image (as a tensor or NumPy array). Must be in format (height, width, channels).
size Tuple of (height, width) integer. Target size.
interpolation String, interpolation to use for resizing. Defaults to 'bilinear'. Supports bilinear, nearest, bicubic, area, lanczos3, lanczos5, gaussian, mitchellcubic.
Returns Array with shape (size[0], size[1], channels). If the input image was a NumPy array, the output is a NumPy array, and if it was a TF tensor, the output is a TF tensor. | tensorflow.keras.preprocessing.image.smart_resize
tf.keras.preprocessing.image_dataset_from_directory Generates a tf.data.Dataset from image files in a directory.
tf.keras.preprocessing.image_dataset_from_directory(
directory, labels='inferred', label_mode='int',
class_names=None, color_mode='rgb', batch_size=32, image_size=(256,
256), shuffle=True, seed=None, validation_split=None, subset=None,
interpolation='bilinear', follow_links=False
)
If your directory structure is: main_directory/
...class_a/
......a_image_1.jpg
......a_image_2.jpg
...class_b/
......b_image_1.jpg
......b_image_2.jpg
Then calling image_dataset_from_directory(main_directory, labels='inferred') will return a tf.data.Dataset that yields batches of images from the subdirectories class_a and class_b, together with labels 0 and 1 (0 corresponding to class_a and 1 corresponding to class_b). Supported image formats: jpeg, png, bmp, gif. Animated gifs are truncated to the first frame.
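A minimal sketch, assuming the main_directory layout shown above:
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    'main_directory', labels='inferred', image_size=(256, 256), batch_size=32)
for images, labels in train_ds.take(1):
    print(images.shape)  # (32, 256, 256, 3)
    print(labels.shape)  # (32,) with the default label_mode='int'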
Arguments
directory Directory where the data is located. If labels is "inferred", it should contain subdirectories, each containing images for a class. Otherwise, the directory structure is ignored.
labels Either "inferred" (labels are generated from the directory structure), or a list/tuple of integer labels of the same size as the number of image files found in the directory. Labels should be sorted according to the alphanumeric order of the image file paths (obtained via os.walk(directory) in Python).
label_mode 'int': means that the labels are encoded as integers (e.g. for sparse_categorical_crossentropy loss). 'categorical' means that the labels are encoded as a categorical vector (e.g. for categorical_crossentropy loss). 'binary' means that the labels (there can be only 2) are encoded as float32 scalars with values 0 or 1 (e.g. for binary_crossentropy). None (no labels).
class_names Only valid if "labels" is "inferred". This is the explicit list of class names (must match names of subdirectories). Used to control the order of the classes (otherwise alphanumerical order is used).
color_mode One of "grayscale", "rgb", "rgba". Default: "rgb". Whether the images will be converted to have 1, 3, or 4 channels.
batch_size Size of the batches of data. Default: 32.
image_size Size to resize images to after they are read from disk. Defaults to (256, 256). Since the pipeline processes batches of images that must all have the same size, this must be provided.
shuffle Whether to shuffle the data. Default: True. If set to False, sorts the data in alphanumeric order.
seed Optional random seed for shuffling and transformations.
validation_split Optional float between 0 and 1, fraction of data to reserve for validation.
subset One of "training" or "validation". Only used if validation_split is set.
interpolation String, the interpolation method used when resizing images. Defaults to bilinear. Supports bilinear, nearest, bicubic, area, lanczos3, lanczos5, gaussian, mitchellcubic.
follow_links Whether to visits subdirectories pointed to by symlinks. Defaults to False.
Returns A tf.data.Dataset object. If label_mode is None, it yields float32 tensors of shape (batch_size, image_size[0], image_size[1], num_channels), encoding images (see below for rules regarding num_channels). Otherwise, it yields a tuple (images, labels), where images has shape (batch_size, image_size[0], image_size[1], num_channels), and labels follows the format described below.
Rules regarding labels format: if label_mode is int, the labels are an int32 tensor of shape (batch_size,). if label_mode is binary, the labels are a float32 tensor of 1s and 0s of shape (batch_size, 1). if label_mode is categorical, the labels are a float32 tensor of shape (batch_size, num_classes), representing a one-hot encoding of the class index. Rules regarding number of channels in the yielded images: if color_mode is grayscale, there's 1 channel in the image tensors. if color_mode is rgb, there are 3 channels in the image tensors. if color_mode is rgba, there are 4 channels in the image tensors. | tensorflow.keras.preprocessing.image_dataset_from_directory
Module: tf.keras.preprocessing.sequence Utilities for preprocessing sequence data. Classes class TimeseriesGenerator: Utility class for generating batches of temporal data. Functions make_sampling_table(...): Generates a word rank-based probabilistic sampling table. pad_sequences(...): Pads sequences to the same length. skipgrams(...): Generates skipgram word pairs. | tensorflow.keras.preprocessing.sequence |
tf.keras.preprocessing.sequence.make_sampling_table View source on GitHub Generates a word rank-based probabilistic sampling table. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.sequence.make_sampling_table
tf.keras.preprocessing.sequence.make_sampling_table(
size, sampling_factor=1e-05
)
Used for generating the sampling_table argument for skipgrams. sampling_table[i] is the probability of sampling the word i-th most common word in a dataset (more common words should be sampled less frequently, for balance). The sampling probabilities are generated according to the sampling distribution used in word2vec: p(word) = (min(1, sqrt(word_frequency / sampling_factor) /
(word_frequency / sampling_factor)))
We assume that the word frequencies follow Zipf's law (s=1) to derive a numerical approximation of frequency(rank): frequency(rank) ~ 1/(rank * (log(rank) + gamma) + 1/2 - 1/(12*rank)) where gamma is the Euler-Mascheroni constant.
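A minimal sketch (the vocabulary size is illustrative); the resulting table is typically passed to skipgrams via its sampling_table argument:
vocab_size = 10000
sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(vocab_size)
sampling_table.shape  # (10000,); entry i is the sampling probability of the i-th most common word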
Arguments
size Int, number of possible words to sample.
sampling_factor The sampling factor in the word2vec formula.
Returns A 1D Numpy array of length size where the ith entry is the probability that a word of rank i should be sampled. | tensorflow.keras.preprocessing.sequence.make_sampling_table |
tf.keras.preprocessing.sequence.pad_sequences View source on GitHub Pads sequences to the same length. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.sequence.pad_sequences
tf.keras.preprocessing.sequence.pad_sequences(
sequences, maxlen=None, dtype='int32', padding='pre',
truncating='pre', value=0.0
)
This function transforms a list (of length num_samples) of sequences (lists of integers) into a 2D Numpy array of shape (num_samples, num_timesteps). num_timesteps is either the maxlen argument if provided, or the length of the longest sequence in the list. Sequences that are shorter than num_timesteps are padded with value until they are num_timesteps long. Sequences longer than num_timesteps are truncated so that they fit the desired length. The position where padding or truncation happens is determined by the arguments padding and truncating, respectively. Pre-padding or removing values from the beginning of the sequence is the default.
sequence = [[1], [2, 3], [4, 5, 6]]
tf.keras.preprocessing.sequence.pad_sequences(sequence)
array([[0, 0, 1],
[0, 2, 3],
[4, 5, 6]], dtype=int32)
tf.keras.preprocessing.sequence.pad_sequences(sequence, value=-1)
array([[-1, -1, 1],
[-1, 2, 3],
[ 4, 5, 6]], dtype=int32)
tf.keras.preprocessing.sequence.pad_sequences(sequence, padding='post')
array([[1, 0, 0],
[2, 3, 0],
[4, 5, 6]], dtype=int32)
tf.keras.preprocessing.sequence.pad_sequences(sequence, maxlen=2)
array([[0, 1],
[2, 3],
[5, 6]], dtype=int32)
Arguments
sequences List of sequences (each sequence is a list of integers).
maxlen Optional Int, maximum length of all sequences. If not provided, sequences will be padded to the length of the longest individual sequence.
dtype (Optional, defaults to int32). Type of the output sequences. To pad sequences with variable length strings, you can use object.
padding String, 'pre' or 'post' (optional, defaults to 'pre'): pad either before or after each sequence.
truncating String, 'pre' or 'post' (optional, defaults to 'pre'): remove values from sequences larger than maxlen, either at the beginning or at the end of the sequences.
value Float or String, padding value. (Optional, defaults to 0.)
Returns Numpy array with shape (len(sequences), maxlen)
Raises
ValueError In case of invalid values for truncating or padding, or in case of invalid shape for a sequences entry. | tensorflow.keras.preprocessing.sequence.pad_sequences |
tf.keras.preprocessing.sequence.skipgrams View source on GitHub Generates skipgram word pairs. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.sequence.skipgrams
tf.keras.preprocessing.sequence.skipgrams(
sequence, vocabulary_size, window_size=4, negative_samples=1.0, shuffle=True,
categorical=False, sampling_table=None, seed=None
)
This function transforms a sequence of word indexes (list of integers) into tuples of words of the form: (word, word in the same window), with label 1 (positive samples); (word, random word from the vocabulary), with label 0 (negative samples). Read more about skipgrams in this paper by Mikolov et al.: Efficient Estimation of Word Representations in Vector Space
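For example (a minimal sketch; with shuffle=True the order of the returned couples is randomized):
sentence = [1, 2, 3, 4, 5]
couples, labels = tf.keras.preprocessing.sequence.skipgrams(
    sentence, vocabulary_size=6, window_size=1, negative_samples=0.5)
# Each couple is a [word, context] pair; the label is 1 for a true context word
# and 0 for a randomly drawn negative sample.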
Arguments
sequence A word sequence (sentence), encoded as a list of word indices (integers). If using a sampling_table, word indices are expected to match the rank of the words in a reference dataset (e.g. 10 would encode the 10-th most frequently occurring token). Note that index 0 is expected to be a non-word and will be skipped.
vocabulary_size Int, maximum possible word index + 1
window_size Int, size of sampling windows (technically half-window). The window of a word w_i will be [i - window_size, i + window_size+1].
negative_samples Float >= 0. 0 for no negative (i.e. random) samples. 1 for same number as positive samples.
shuffle Whether to shuffle the word couples before returning them.
categorical bool. If False, labels will be integers (e.g. [0, 1, 1, ...]); if True, labels will be categorical, e.g. [[1,0],[0,1],[0,1], ...].
sampling_table 1D array of size vocabulary_size where the entry i encodes the probability to sample a word of rank i.
seed Random seed.
Returns couples, labels: where couples are int pairs and labels are either 0 or 1.
Note: By convention, index 0 in the vocabulary is a non-word and will be skipped. | tensorflow.keras.preprocessing.sequence.skipgrams |
tf.keras.preprocessing.sequence.TimeseriesGenerator View source on GitHub Utility class for generating batches of temporal data. Inherits From: Sequence View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator
tf.keras.preprocessing.sequence.TimeseriesGenerator(
data, targets, length, sampling_rate=1, stride=1, start_index=0, end_index=None,
shuffle=False, reverse=False, batch_size=128
)
This class takes in a sequence of data points gathered at equal intervals, along with time series parameters such as stride, length of history, etc., to produce batches for training/validation.
Arguments:
data: Indexable generator (such as a list or Numpy array) containing consecutive data points (timesteps). The data should be at least 2D, and axis 0 is expected to be the time dimension.
targets: Targets corresponding to timesteps in data. It should have the same length as data.
length: Length of the output sequences (in number of timesteps).
sampling_rate: Period between successive individual timesteps within sequences. For rate r, timesteps data[i], data[i-r], ... data[i - length] are used to create a sample sequence.
stride: Period between successive output sequences. For stride s, consecutive output samples would be centered around data[i], data[i+s], data[i+2*s], etc.
start_index: Data points earlier than start_index will not be used in the output sequences. This is useful to reserve part of the data for test or validation.
end_index: Data points later than end_index will not be used in the output sequences. This is useful to reserve part of the data for test or validation.
shuffle: Whether to shuffle output samples, or instead draw them in chronological order.
reverse: Boolean: if True, timesteps in each output sample will be in reverse chronological order.
batch_size: Number of timeseries samples in each batch (except maybe the last one).
Returns: A Sequence instance.
Examples:
from keras.preprocessing.sequence import TimeseriesGenerator
import numpy as np
data = np.array([[i] for i in range(50)])
targets = np.array([[i] for i in range(50)])
data_gen = TimeseriesGenerator(data, targets,
length=10, sampling_rate=2,
batch_size=2)
assert len(data_gen) == 20
batch_0 = data_gen[0]
x, y = batch_0
assert np.array_equal(x,
np.array([[[0], [2], [4], [6], [8]],
[[1], [3], [5], [7], [9]]]))
assert np.array_equal(y,
np.array([[10], [11]]))
Methods get_config View source
get_config()
Returns the TimeseriesGenerator configuration as a Python dictionary.
Returns A Python dictionary with the TimeseriesGenerator configuration.
on_epoch_end View source
on_epoch_end()
Method called at the end of every epoch. to_json View source
to_json(
**kwargs
)
Returns a JSON string containing the timeseries generator configuration. To load a generator from a JSON string, use keras.preprocessing.sequence.timeseries_generator_from_json(json_string).
Arguments
**kwargs Additional keyword arguments to be passed to json.dumps().
Returns A JSON string containing the timeseries generator configuration.
__getitem__ View source
__getitem__(
index
)
__iter__ View source
__iter__()
Creates a generator that iterates over the Sequence. __len__ View source
__len__() | tensorflow.keras.preprocessing.sequence.timeseriesgenerator |
Module: tf.keras.preprocessing.text Utilities for text input preprocessing. Classes class Tokenizer: Text tokenization utility class. Functions hashing_trick(...): Converts a text to a sequence of indexes in a fixed-size hashing space. one_hot(...): One-hot encodes a text into a list of word indexes of size n. text_to_word_sequence(...): Converts a text to a sequence of words (or tokens). tokenizer_from_json(...): Parses a JSON tokenizer configuration file and returns a tokenizer instance. | tensorflow.keras.preprocessing.text
tf.keras.preprocessing.text.hashing_trick View source on GitHub Converts a text to a sequence of indexes in a fixed-size hashing space. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.text.hashing_trick
tf.keras.preprocessing.text.hashing_trick(
text, n, hash_function=None,
filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n',
lower=True, split=' '
)
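A quick usage sketch (the exact indices depend on the hash function and the chosen dimension n):
text = 'The quick brown fox jumped over the lazy dog.'
tf.keras.preprocessing.text.hashing_trick(text, n=100, hash_function='md5')
# Returns one integer index per word; with lower=True, both occurrences of
# "the" map to the same index.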
Arguments
text Input text (string).
n Dimension of the hashing space.
hash_function Defaults to the Python hash function; can be 'md5' or any function that takes a string as input and returns an int. Note that 'hash' is not a stable hashing function, so it is not consistent across different runs, while 'md5' is a stable hashing function.
filters list (or concatenation) of characters to filter out, such as punctuation. Default: !"#$%&()*+,-./:;<=>?@[\]^_`{|}~\t\n, includes basic punctuation, tabs, and newlines.
lower boolean. Whether to set the text to lowercase.
split str. Separator for word splitting.
Returns A list of integer word indices (uniqueness not guaranteed).
0 is a reserved index that won't be assigned to any word. Two or more words may be assigned to the same index, due to possible collisions by the hashing function. The probability of a collision depends on the dimension of the hashing space and the number of distinct objects. | tensorflow.keras.preprocessing.text.hashing_trick
tf.keras.preprocessing.text.one_hot View source on GitHub One-hot encodes a text into a list of word indexes of size n. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.text.one_hot
tf.keras.preprocessing.text.one_hot(
input_text, n,
filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n',
lower=True, split=' '
)
This function receives as input a string of text and returns a list of encoded integers each corresponding to a word (or token) in the given input string.
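For example (a minimal sketch; the exact indices depend on the hashing of each word and may differ between runs of the default hash function):
tf.keras.preprocessing.text.one_hot('The quick brown fox', 50)
# Returns one integer per word, e.g. something like [31, 27, 41, 18]; collisions are possible.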
Arguments
input_text Input text (string).
n int. Size of vocabulary.
filters list (or concatenation) of characters to filter out, such as punctuation. Default: '!"#$%&()*+,-./:;<=>?@[\]^_`{|}~\t\n', includes basic punctuation, tabs, and newlines.
lower boolean. Whether to set the text to lowercase.
split str. Separator for word splitting.
Returns List of integers in [1, n]. Each integer encodes a word (uniqueness not guaranteed). | tensorflow.keras.preprocessing.text.one_hot
tf.keras.preprocessing.text.text_to_word_sequence View source on GitHub Converts a text to a sequence of words (or tokens). View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.text.text_to_word_sequence
tf.keras.preprocessing.text.text_to_word_sequence(
input_text,
filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n',
lower=True, split=' '
)
This function transforms a string of text into a list of words while ignoring filters which include punctuations by default.
sample_text = 'This is a sample sentence.'
tf.keras.preprocessing.text.text_to_word_sequence(sample_text)
['this', 'is', 'a', 'sample', 'sentence']
Arguments
input_text Input text (string).
filters list (or concatenation) of characters to filter out, such as punctuation. Default: '!"#$%&()*+,-./:;<=>?@[\]^_`{|}~\t\n', includes basic punctuation, tabs, and newlines.
lower boolean. Whether to convert the input to lowercase.
split str. Separator for word splitting.
Returns A list of words (or tokens). | tensorflow.keras.preprocessing.text.text_to_word_sequence |
tf.keras.preprocessing.text.Tokenizer View source on GitHub Text tokenization utility class. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.text.Tokenizer
tf.keras.preprocessing.text.Tokenizer(
num_words=None,
filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n',
lower=True, split=' ', char_level=False, oov_token=None,
document_count=0, **kwargs
)
This class allows you to vectorize a text corpus, by turning each text into either a sequence of integers (each integer being the index of a token in a dictionary) or into a vector where the coefficient for each token could be binary, based on word count, based on tf-idf...
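A typical workflow (a minimal sketch; the corpus and query texts are purely illustrative):
tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=100, oov_token='<OOV>')
tokenizer.fit_on_texts(['The cat sat on the mat.', 'The dog ate my homework.'])
tokenizer.texts_to_sequences(['The cat ate the mat.'])
# Indices reflect word frequency in the fitted corpus; words not seen during
# fit_on_texts map to the '<OOV>' index.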
Arguments
num_words the maximum number of words to keep, based on word frequency. Only the most common num_words-1 words will be kept.
filters a string where each element is a character that will be filtered from the texts. The default is all punctuation, plus tabs and line breaks, minus the ' character.
lower boolean. Whether to convert the texts to lowercase.
split str. Separator for word splitting.
char_level if True, every character will be treated as a token.
oov_token if given, it will be added to word_index and used to replace out-of-vocabulary words during texts_to_sequences calls. By default, all punctuation is removed, turning the texts into space-separated sequences of words (words may include the ' character). These sequences are then split into lists of tokens. They will then be indexed or vectorized. 0 is a reserved index that won't be assigned to any word. Methods fit_on_sequences View source
fit_on_sequences(
sequences
)
Updates internal vocabulary based on a list of sequences. Required before using sequences_to_matrix (if fit_on_texts was never called).
Arguments
sequences A list of sequences. A "sequence" is a list of integer word indices. fit_on_texts View source
fit_on_texts(
texts
)
Updates internal vocabulary based on a list of texts. In the case where texts contains lists, we assume each entry of the lists to be a token. Required before using texts_to_sequences or texts_to_matrix.
Arguments
texts can be a list of strings, a generator of strings (for memory-efficiency), or a list of list of strings. get_config View source
get_config()
Returns the tokenizer configuration as Python dictionary. The word count dictionaries used by the tokenizer get serialized into plain JSON, so that the configuration can be read by other projects.
Returns A Python dictionary with the tokenizer configuration.
sequences_to_matrix View source
sequences_to_matrix(
sequences, mode='binary'
)
Converts a list of sequences into a Numpy matrix.
Arguments
sequences list of sequences (a sequence is a list of integer word indices).
mode one of "binary", "count", "tfidf", "freq"
Returns A Numpy matrix.
Raises
ValueError In case of invalid mode argument, or if the Tokenizer requires to be fit to sample data. sequences_to_texts View source
sequences_to_texts(
sequences
)
Transforms each sequence in sequences to a text (string). Only the top num_words-1 most frequent words will be taken into account. Only words known by the tokenizer will be taken into account.
Arguments
sequences A list of sequences (list of integers).
Returns A list of texts (strings)
sequences_to_texts_generator View source
sequences_to_texts_generator(
sequences
)
Transforms each sequence in sequences into a text (string), one at a time. Each sequence has to be a list of integers; in other words, sequences should be a list of sequences. Only the top num_words-1 most frequent words will be taken into account. Only words known by the tokenizer will be taken into account.
Arguments
sequences A list of sequences.
Yields Yields individual texts.
texts_to_matrix View source
texts_to_matrix(
texts, mode='binary'
)
Converts a list of texts to a Numpy matrix.
Arguments
texts list of strings.
mode one of "binary", "count", "tfidf", "freq".
Returns A Numpy matrix.
texts_to_sequences View source
texts_to_sequences(
texts
)
Transforms each text in texts to a sequence of integers. Only top num_words-1 most frequent words will be taken into account. Only words known by the tokenizer will be taken into account.
Arguments
texts A list of texts (strings).
Returns A list of sequences.
texts_to_sequences_generator View source
texts_to_sequences_generator(
texts
)
Transforms each text in texts to a sequence of integers. Each item in texts can also be a list, in which case we assume each item of that list to be a token. Only top num_words-1 most frequent words will be taken into account. Only words known by the tokenizer will be taken into account.
Arguments
texts A list of texts (strings).
Yields Yields individual sequences.
to_json View source
to_json(
**kwargs
)
Returns a JSON string containing the tokenizer configuration. To load a tokenizer from a JSON string, use keras.preprocessing.text.tokenizer_from_json(json_string).
Arguments
**kwargs Additional keyword arguments to be passed to json.dumps().
Returns A JSON string containing the tokenizer configuration. | tensorflow.keras.preprocessing.text.tokenizer |
tf.keras.preprocessing.text.tokenizer_from_json Parses a JSON tokenizer configuration file and returns a tokenizer instance. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.preprocessing.text.tokenizer_from_json
tf.keras.preprocessing.text.tokenizer_from_json(
json_string
)
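For example, a tokenizer can be round-tripped through JSON (a minimal sketch, assuming a fitted Tokenizer instance named tokenizer):
json_string = tokenizer.to_json()
restored_tokenizer = tf.keras.preprocessing.text.tokenizer_from_json(json_string)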
Arguments
json_string JSON string encoding a tokenizer configuration.
Returns A Keras Tokenizer instance | tensorflow.keras.preprocessing.text.tokenizer_from_json |
tf.keras.preprocessing.text_dataset_from_directory Generates a tf.data.Dataset from text files in a directory.
tf.keras.preprocessing.text_dataset_from_directory(
directory, labels='inferred', label_mode='int',
class_names=None, batch_size=32, max_length=None, shuffle=True, seed=None,
validation_split=None, subset=None, follow_links=False
)
If your directory structure is: main_directory/
...class_a/
......a_text_1.txt
......a_text_2.txt
...class_b/
......b_text_1.txt
......b_text_2.txt
Then calling text_dataset_from_directory(main_directory, labels='inferred') will return a tf.data.Dataset that yields batches of texts from the subdirectories class_a and class_b, together with labels 0 and 1 (0 corresponding to class_a and 1 corresponding to class_b). Only .txt files are supported at this time.
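For instance, a training/validation split over the structure above could be created as follows (a minimal sketch; 'main_directory' is assumed to exist on disk):
train_ds = tf.keras.preprocessing.text_dataset_from_directory(
    'main_directory', validation_split=0.2, subset='training', seed=1337)
val_ds = tf.keras.preprocessing.text_dataset_from_directory(
    'main_directory', validation_split=0.2, subset='validation', seed=1337)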
Arguments
directory Directory where the data is located. If labels is "inferred", it should contain subdirectories, each containing text files for a class. Otherwise, the directory structure is ignored.
labels Either "inferred" (labels are generated from the directory structure), or a list/tuple of integer labels of the same size as the number of text files found in the directory. Labels should be sorted according to the alphanumeric order of the text file paths (obtained via os.walk(directory) in Python).
label_mode 'int': means that the labels are encoded as integers (e.g. for sparse_categorical_crossentropy loss). 'categorical' means that the labels are encoded as a categorical vector (e.g. for categorical_crossentropy loss). 'binary' means that the labels (there can be only 2) are encoded as float32 scalars with values 0 or 1 (e.g. for binary_crossentropy). None (no labels).
class_names Only valid if "labels" is "inferred". This is the explicit list of class names (must match names of subdirectories). Used to control the order of the classes (otherwise alphanumerical order is used).
batch_size Size of the batches of data. Default: 32.
max_length Maximum size of a text string. Texts longer than this will be truncated to max_length.
shuffle Whether to shuffle the data. Default: True. If set to False, sorts the data in alphanumeric order.
seed Optional random seed for shuffling and transformations.
validation_split Optional float between 0 and 1, fraction of data to reserve for validation.
subset One of "training" or "validation". Only used if validation_split is set.
follow_links Whether to visit subdirectories pointed to by symlinks. Defaults to False.
Returns A tf.data.Dataset object. If label_mode is None, it yields string tensors of shape (batch_size,), containing the contents of a batch of text files. Otherwise, it yields a tuple (texts, labels), where texts has shape (batch_size,) and labels follows the format described below.
Rules regarding labels format: if label_mode is int, the labels are an int32 tensor of shape (batch_size,). if label_mode is binary, the labels are a float32 tensor of 1s and 0s of shape (batch_size, 1). if label_mode is categorical, the labels are a float32 tensor of shape (batch_size, num_classes), representing a one-hot encoding of the class index. | tensorflow.keras.preprocessing.text_dataset_from_directory
tf.keras.preprocessing.timeseries_dataset_from_array Creates a dataset of sliding windows over a timeseries provided as array.
tf.keras.preprocessing.timeseries_dataset_from_array(
data, targets, sequence_length, sequence_stride=1, sampling_rate=1,
batch_size=128, shuffle=False, seed=None, start_index=None, end_index=None
)
This function takes in a sequence of data-points gathered at equal intervals, along with time series parameters such as length of the sequences/windows, spacing between two sequence/windows, etc., to produce batches of timeseries inputs and targets.
Arguments
data Numpy array or eager tensor containing consecutive data points (timesteps). Axis 0 is expected to be the time dimension.
targets Targets corresponding to timesteps in data. It should have same length as data. targets[i] should be the target corresponding to the window that starts at index i (see example 2 below). Pass None if you don't have target data (in this case the dataset will only yield the input data).
sequence_length Length of the output sequences (in number of timesteps).
sequence_stride Period between successive output sequences. For stride s, output samples would start at index data[i], data[i + s], data[i + 2 * s], etc.
sampling_rate Period between successive individual timesteps within sequences. For rate r, timesteps data[i], data[i + r], ... data[i + sequence_length] are used to create a sample sequence.
batch_size Number of timeseries samples in each batch (except maybe the last one).
shuffle Whether to shuffle output samples, or instead draw them in chronological order.
seed Optional int; random seed for shuffling.
start_index Optional int; data points earlier (exclusive) than start_index will not be used in the output sequences. This is useful to reserve part of the data for test or validation.
end_index Optional int; data points later (exclusive) than end_index will not be used in the output sequences. This is useful to reserve part of the data for test or validation.
Returns A tf.data.Dataset instance. If targets was passed, the dataset yields tuple (batch_of_sequences, batch_of_targets). If not, the dataset yields only batch_of_sequences.
Example 1: Consider indices [0, 1, ... 99]. With sequence_length=10, sampling_rate=2, sequence_stride=3, shuffle=False, the dataset will yield batches of sequences composed of the following indices: First sequence: [0 2 4 6 8 10 12 14 16 18]
Second sequence: [3 5 7 9 11 13 15 17 19 21]
Third sequence: [6 8 10 12 14 16 18 20 22 24]
...
Last sequence: [78 80 82 84 86 88 90 92 94 96]
In this case the last 3 data points are discarded since no full sequence can be generated to include them (the next sequence would have started at index 81, and thus its last step would have gone over 99). Example 2: temporal regression. Consider an array data of scalar values, of shape (steps,). To generate a dataset that uses the past 10 timesteps to predict the next timestep, you would use: input_data = data[:-10]
targets = data[10:]
dataset = tf.keras.preprocessing.timeseries_dataset_from_array(
input_data, targets, sequence_length=10)
for batch in dataset:
inputs, targets = batch
assert np.array_equal(inputs[0], data[:10]) # First sequence: steps [0-9]
assert np.array_equal(targets[0], data[10]) # Corresponding target: step 10
break | tensorflow.keras.preprocessing.timeseries_dataset_from_array |
Module: tf.keras.regularizers Built-in regularizers. Classes class L1: A regularizer that applies an L1 regularization penalty. class L1L2: A regularizer that applies both L1 and L2 regularization penalties. class L2: A regularizer that applies an L2 regularization penalty. class Regularizer: Regularizer base class. class l1: A regularizer that applies an L1 regularization penalty. class l2: A regularizer that applies an L2 regularization penalty. Functions deserialize(...) get(...): Retrieve a regularizer instance from a config or identifier. l1_l2(...): Create a regularizer that applies both L1 and L2 penalties. serialize(...) | tensorflow.keras.regularizers
tf.keras.regularizers.deserialize View source on GitHub View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.regularizers.deserialize
tf.keras.regularizers.deserialize(
config, custom_objects=None
) | tensorflow.keras.regularizers.deserialize |
tf.keras.regularizers.get View source on GitHub Retrieve a regularizer instance from a config or identifier. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.regularizers.get
tf.keras.regularizers.get(
identifier
) | tensorflow.keras.regularizers.get |
tf.keras.regularizers.L1 A regularizer that applies an L1 regularization penalty. Inherits From: Regularizer View aliases Main aliases
tf.keras.regularizers.l1 Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.regularizers.L1, tf.compat.v1.keras.regularizers.l1
tf.keras.regularizers.L1(
l1=0.01, **kwargs
)
The L1 regularization penalty is computed as: loss = l1 * reduce_sum(abs(x)) L1 may be passed to a layer as a string identifier:
dense = tf.keras.layers.Dense(3, kernel_regularizer='l1')
In this case, the default value used is l1=0.01.
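To use a different regularization factor, an L1 instance can be passed instead of the string identifier (a minimal sketch):
dense = tf.keras.layers.Dense(3, kernel_regularizer=tf.keras.regularizers.L1(l1=0.001))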
Attributes
l1 Float; L1 regularization factor. Methods from_config View source
@classmethod
from_config(
config
)
Creates a regularizer from its config. This method is the reverse of get_config, capable of instantiating the same regularizer from the config dictionary. This method is used by Keras model_to_estimator, saving and loading models to HDF5 formats, Keras model cloning, some visualization utilities, and exporting models to and from JSON.
Arguments
config A Python dictionary, typically the output of get_config.
Returns A regularizer instance.
get_config View source
get_config()
Returns the config of the regularizer. A regularizer config is a Python dictionary (serializable) containing all configuration parameters of the regularizer. The same regularizer can be reinstantiated later (without any saved state) from this configuration. This method is optional if you are just training and executing models, exporting to and from SavedModels, or using weight checkpoints. This method is required for Keras model_to_estimator, saving and loading models to HDF5 formats, Keras model cloning, some visualization utilities, and exporting models to and from JSON.
Returns Python dictionary.
__call__ View source
__call__(
x
)
Compute a regularization penalty from an input tensor. | tensorflow.keras.regularizers.l1 |
tf.keras.regularizers.L1L2 View source on GitHub A regularizer that applies both L1 and L2 regularization penalties. Inherits From: Regularizer View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.regularizers.L1L2
tf.keras.regularizers.L1L2(
l1=0.0, l2=0.0
)
The L1 regularization penalty is computed as: loss = l1 * reduce_sum(abs(x)) The L2 regularization penalty is computed as loss = l2 * reduce_sum(square(x)) L1L2 may be passed to a layer as a string identifier:
dense = tf.keras.layers.Dense(3, kernel_regularizer='l1_l2')
In this case, the default values used are l1=0.01 and l2=0.01.
Attributes
l1 Float; L1 regularization factor.
l2 Float; L2 regularization factor. Methods from_config View source
@classmethod
from_config(
config
)
Creates a regularizer from its config. This method is the reverse of get_config, capable of instantiating the same regularizer from the config dictionary. This method is used by Keras model_to_estimator, saving and loading models to HDF5 formats, Keras model cloning, some visualization utilities, and exporting models to and from JSON.
Arguments
config A Python dictionary, typically the output of get_config.
Returns A regularizer instance.
get_config View source
get_config()
Returns the config of the regularizer. A regularizer config is a Python dictionary (serializable) containing all configuration parameters of the regularizer. The same regularizer can be reinstantiated later (without any saved state) from this configuration. This method is optional if you are just training and executing models, exporting to and from SavedModels, or using weight checkpoints. This method is required for Keras model_to_estimator, saving and loading models to HDF5 formats, Keras model cloning, some visualization utilities, and exporting models to and from JSON.
Returns Python dictionary.
__call__ View source
__call__(
x
)
Compute a regularization penalty from an input tensor. | tensorflow.keras.regularizers.l1l2 |
tf.keras.regularizers.l1_l2 View source on GitHub Create a regularizer that applies both L1 and L2 penalties. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.regularizers.l1_l2
tf.keras.regularizers.l1_l2(
l1=0.01, l2=0.01
)
The L1 regularization penalty is computed as: loss = l1 * reduce_sum(abs(x)) The L2 regularization penalty is computed as: loss = l2 * reduce_sum(square(x))
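For example (a minimal sketch):
reg = tf.keras.regularizers.l1_l2(l1=0.001, l2=0.001)
dense = tf.keras.layers.Dense(3, kernel_regularizer=reg)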
Arguments
l1 Float; L1 regularization factor.
l2 Float; L2 regularization factor.
Returns An L1L2 Regularizer with the given regularization factors. | tensorflow.keras.regularizers.l1_l2 |
tf.keras.regularizers.L2 A regularizer that applies an L2 regularization penalty. Inherits From: Regularizer View aliases Main aliases
tf.keras.regularizers.l2 Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.regularizers.L2, tf.compat.v1.keras.regularizers.l2
tf.keras.regularizers.L2(
l2=0.01, **kwargs
)
The L2 regularization penalty is computed as: loss = l2 * reduce_sum(square(x)) L2 may be passed to a layer as a string identifier:
dense = tf.keras.layers.Dense(3, kernel_regularizer='l2')
In this case, the default value used is l2=0.01.
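To use a different regularization factor, an L2 instance can be passed instead of the string identifier (a minimal sketch):
dense = tf.keras.layers.Dense(3, kernel_regularizer=tf.keras.regularizers.L2(l2=0.001))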
Attributes
l2 Float; L2 regularization factor. Methods from_config View source
@classmethod
from_config(
config
)
Creates a regularizer from its config. This method is the reverse of get_config, capable of instantiating the same regularizer from the config dictionary. This method is used by Keras model_to_estimator, saving and loading models to HDF5 formats, Keras model cloning, some visualization utilities, and exporting models to and from JSON.
Arguments
config A Python dictionary, typically the output of get_config.
Returns A regularizer instance.
get_config View source
get_config()
Returns the config of the regularizer. A regularizer config is a Python dictionary (serializable) containing all configuration parameters of the regularizer. The same regularizer can be reinstantiated later (without any saved state) from this configuration. This method is optional if you are just training and executing models, exporting to and from SavedModels, or using weight checkpoints. This method is required for Keras model_to_estimator, saving and loading models to HDF5 formats, Keras model cloning, some visualization utilities, and exporting models to and from JSON.
Returns Python dictionary.
__call__ View source
__call__(
x
)
Compute a regularization penalty from an input tensor. | tensorflow.keras.regularizers.l2 |
tf.keras.regularizers.Regularizer View source on GitHub Regularizer base class. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.regularizers.Regularizer Regularizers allow you to apply penalties on layer parameters or layer activity during optimization. These penalties are summed into the loss function that the network optimizes. Regularization penalties are applied on a per-layer basis. The exact API will depend on the layer, but many layers (e.g. Dense, Conv1D, Conv2D and Conv3D) have a unified API. These layers expose 3 keyword arguments:
kernel_regularizer: Regularizer to apply a penalty on the layer's kernel
bias_regularizer: Regularizer to apply a penalty on the layer's bias
activity_regularizer: Regularizer to apply a penalty on the layer's output All layers (including custom layers) expose activity_regularizer as a settable property, whether or not it is in the constructor arguments. The value returned by the activity_regularizer is divided by the input batch size so that the relative weighting between the weight regularizers and the activity regularizers does not change with the batch size. You can access a layer's regularization penalties by calling layer.losses after calling the layer on inputs. Example
layer = tf.keras.layers.Dense(
5, input_dim=5,
kernel_initializer='ones',
kernel_regularizer=tf.keras.regularizers.L1(0.01),
activity_regularizer=tf.keras.regularizers.L2(0.01))
tensor = tf.ones(shape=(5, 5)) * 2.0
out = layer(tensor)
# The kernel regularization term is 0.25
# The activity regularization term (after dividing by the batch size) is 5
tf.math.reduce_sum(layer.losses)
<tf.Tensor: shape=(), dtype=float32, numpy=5.25>
Available penalties tf.keras.regularizers.L1(0.3) # L1 Regularization Penalty
tf.keras.regularizers.L2(0.1) # L2 Regularization Penalty
tf.keras.regularizers.L1L2(l1=0.01, l2=0.01) # L1 + L2 penalties
Directly calling a regularizer Compute a regularization loss on a tensor by directly calling a regularizer as if it is a one-argument function. E.g. regularizer = tf.keras.regularizers.L2(2.)
tensor = tf.ones(shape=(5, 5))
regularizer(tensor)
<tf.Tensor: shape=(), dtype=float32, numpy=50.0>
Developing new regularizers Any function that takes in a weight matrix and returns a scalar tensor can be used as a regularizer, e.g.:
@tf.keras.utils.register_keras_serializable(package='Custom', name='l1')
def l1_reg(weight_matrix):
return 0.01 * tf.math.reduce_sum(tf.math.abs(weight_matrix))
layer = tf.keras.layers.Dense(5, input_dim=5,
kernel_initializer='ones', kernel_regularizer=l1_reg)
tensor = tf.ones(shape=(5, 5))
out = layer(tensor)
layer.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=0.25>]
Alternatively, you can write your custom regularizers in an object-oriented way by extending this regularizer base class, e.g.:
@tf.keras.utils.register_keras_serializable(package='Custom', name='l2')
class L2Regularizer(tf.keras.regularizers.Regularizer):
def __init__(self, l2=0.):
self.l2 = l2
def __call__(self, x):
return self.l2 * tf.math.reduce_sum(tf.math.square(x))
def get_config(self):
return {'l2': float(self.l2)}
layer = tf.keras.layers.Dense(
5, input_dim=5, kernel_initializer='ones',
kernel_regularizer=L2Regularizer(l2=0.5))
tensor = tf.ones(shape=(5, 5))
out = layer(tensor)
layer.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=12.5>]
A note on serialization and deserialization: Registering the regularizers as serializable is optional if you are just training and executing models, exporting to and from SavedModels, or saving and loading weight checkpoints. Registration is required for Keras model_to_estimator, saving and loading models to HDF5 formats, Keras model cloning, some visualization utilities, and exporting models to and from JSON. If using this functionality, you must make sure any python process running your model has also defined and registered your custom regularizer. tf.keras.utils.register_keras_serializable is only available in TF 2.1 and beyond. In earlier versions of TensorFlow you must pass your custom regularizer to the custom_objects argument of methods that expect custom regularizers to be registered as serializable. Methods from_config View source
@classmethod
from_config(
config
)
Creates a regularizer from its config. This method is the reverse of get_config, capable of instantiating the same regularizer from the config dictionary. This method is used by Keras model_to_estimator, saving and loading models to HDF5 formats, Keras model cloning, some visualization utilities, and exporting models to and from JSON.
Arguments
config A Python dictionary, typically the output of get_config.
Returns A regularizer instance.
get_config View source
get_config()
Returns the config of the regularizer. A regularizer config is a Python dictionary (serializable) containing all configuration parameters of the regularizer. The same regularizer can be reinstantiated later (without any saved state) from this configuration. This method is optional if you are just training and executing models, exporting to and from SavedModels, or using weight checkpoints. This method is required for Keras model_to_estimator, saving and loading models to HDF5 formats, Keras model cloning, some visualization utilities, and exporting models to and from JSON.
Returns Python dictionary.
__call__ View source
__call__(
x
)
Compute a regularization penalty from an input tensor. | tensorflow.keras.regularizers.regularizer |
tf.keras.regularizers.serialize View source on GitHub View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.regularizers.serialize
tf.keras.regularizers.serialize(
regularizer
) | tensorflow.keras.regularizers.serialize |
tf.keras.Sequential View source on GitHub Sequential groups a linear stack of layers into a tf.keras.Model. Inherits From: Model, Layer, Module View aliases Main aliases
tf.keras.models.Sequential Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.Sequential, tf.compat.v1.keras.models.Sequential
tf.keras.Sequential(
layers=None, name=None
)
Sequential provides training and inference features on this model. Examples:
# Optionally, the first layer can receive an `input_shape` argument:
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8, input_shape=(16,)))
# Afterwards, we do automatic shape inference:
model.add(tf.keras.layers.Dense(4))
# This is identical to the following:
model = tf.keras.Sequential()
model.add(tf.keras.Input(shape=(16,)))
model.add(tf.keras.layers.Dense(8))
# Note that you can also omit the `input_shape` argument.
# In that case the model doesn't have any weights until the first call
# to a training/evaluation method (since it isn't yet built):
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8))
model.add(tf.keras.layers.Dense(4))
# model.weights not created yet
# Whereas if you specify the input shape, the model gets built
# continuously as you are adding layers:
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8, input_shape=(16,)))
model.add(tf.keras.layers.Dense(4))
len(model.weights)
4
# When using the delayed-build pattern (no input shape specified), you can
# choose to manually build your model by calling
# `build(batch_input_shape)`:
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8))
model.add(tf.keras.layers.Dense(4))
model.build((None, 16))
len(model.weights)
4
# Note that when using the delayed-build pattern (no input shape specified),
# the model gets built the first time you call `fit`, `eval`, or `predict`,
# or the first time you call the model on some input data.
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8))
model.add(tf.keras.layers.Dense(1))
model.compile(optimizer='sgd', loss='mse')
# This builds the model for the first time:
model.fit(x, y, batch_size=32, epochs=10)
Args
layers Optional list of layers to add to the model.
name Optional name for the model.
Attributes
distribute_strategy The tf.distribute.Strategy this model was created under.
layers
metrics_names Returns the model's display labels for all outputs.
Note: metrics_names are available only after a keras.Model has been trained/evaluated on actual data.
inputs = tf.keras.layers.Input(shape=(3,))
outputs = tf.keras.layers.Dense(2)(inputs)
model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
model.metrics_names
[]
x = np.random.random((2, 3))
y = np.random.randint(0, 2, (2, 2))
model.fit(x, y)
model.metrics_names
['loss', 'mae']
inputs = tf.keras.layers.Input(shape=(3,))
d = tf.keras.layers.Dense(2, name='out')
output_1 = d(inputs)
output_2 = d(inputs)
model = tf.keras.models.Model(
inputs=inputs, outputs=[output_1, output_2])
model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
model.fit(x, (y, y))
model.metrics_names
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc']
run_eagerly Settable attribute indicating whether the model should run eagerly. Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls. By default, we will attempt to compile your model to a static graph to deliver the best execution performance.
Methods add View source
add(
layer
)
Adds a layer instance on top of the layer stack.
Arguments
layer layer instance.
Raises
TypeError If layer is not a layer instance.
ValueError In case the layer argument does not know its input shape.
ValueError In case the layer argument has multiple output tensors, or is already connected somewhere else (forbidden in Sequential models). compile View source
compile(
optimizer='rmsprop', loss=None, metrics=None, loss_weights=None,
weighted_metrics=None, run_eagerly=None, steps_per_execution=None, **kwargs
)
Configures the model for training.
Arguments
optimizer String (name of optimizer) or optimizer instance. See tf.keras.optimizers.
loss String (name of objective function), objective function or tf.keras.losses.Loss instance. See tf.keras.losses. An objective function is any callable with the signature loss = fn(y_true, y_pred), where y_true = ground truth values with shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1]. y_pred = predicted values with shape = [batch_size, d0, .. dN]. It returns a weighted loss float tensor. If a custom Loss instance is used and reduction is set to NONE, return value has the shape [batch_size, d0, .. dN-1] ie. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses.
metrics List of metrics to be evaluated by the model during training and testing. Each of this can be a string (name of a built-in function), function or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=['accuracy']. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={'output_a': 'accuracy', 'output_b': ['accuracy', 'mse']}. You can also pass a list (len = len(outputs)) of lists of metrics such as metrics=[['accuracy'], ['accuracy', 'mse']] or metrics=['accuracy', ['accuracy', 'mse']]. When you pass the strings 'accuracy' or 'acc', we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the loss function used and the model output shape. We do a similar conversion for the strings 'crossentropy' and 'ce' as well.
loss_weights Optional list or dictionary specifying scalar coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients. If a list, it is expected to have a 1:1 mapping to the model's outputs. If a dict, it is expected to map output names (strings) to scalar coefficients.
weighted_metrics List of metrics to be evaluated and weighted by sample_weight or class_weight during training and testing.
run_eagerly Bool. Defaults to False. If True, this Model's logic will not be wrapped in a tf.function. Recommended to leave this as None unless your Model cannot be run inside a tf.function.
steps_per_execution Int. Defaults to 1. The number of batches to run during each tf.function call. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs or small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if steps_per_execution is set to N, Callback.on_batch_begin and Callback.on_batch_end methods will only be called every N batches (i.e. before/after each tf.function execution).
**kwargs Arguments supported for backwards compatibility only.
Raises
ValueError In case of invalid arguments for optimizer, loss or metrics. evaluate View source
evaluate(
x=None, y=None, batch_size=None, verbose=1, sample_weight=None, steps=None,
callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False,
return_dict=False
)
Returns the loss value & metrics values for the model in test mode. Computation is done in batches (see the batch_size arg.)
Arguments
x Input data. It could be: A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). A dict mapping input names to the corresponding array/tensors, if the model has named inputs. A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights). A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights). A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.
y Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from the iterator/dataset).
batch_size Integer or None. Number of samples per batch of computation. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches).
verbose 0 or 1. Verbosity mode. 0 = silent, 1 = progress bar.
sample_weight Optional Numpy array of weights for the test samples, used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, instead pass sample weights as the third element of x.
steps Integer or None. Total number of steps (batches of samples) before declaring the evaluation round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, 'evaluate' will run until the dataset is exhausted. This argument is not supported with array inputs.
callbacks List of keras.callbacks.Callback instances. List of callbacks to apply during evaluation. See callbacks.
max_queue_size Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.
workers Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1. If 0, will execute the generator on the main thread.
use_multiprocessing Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can't be passed easily to children processes.
return_dict If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list. See the discussion of Unpacking behavior for iterator-like inputs for Model.fit.
Returns Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.
Raises
RuntimeError If model.evaluate is wrapped in tf.function.
ValueError in case of invalid arguments. evaluate_generator View source
evaluate_generator(
generator, steps=None, callbacks=None, max_queue_size=10, workers=1,
use_multiprocessing=False, verbose=0
)
Evaluates the model on a data generator. DEPRECATED: Model.evaluate now supports generators, so there is no longer any need to use this endpoint. fit View source
fit(
x=None, y=None, batch_size=None, epochs=1, verbose=1, callbacks=None,
validation_split=0.0, validation_data=None, shuffle=True, class_weight=None,
sample_weight=None, initial_epoch=0, steps_per_epoch=None,
validation_steps=None, validation_batch_size=None, validation_freq=1,
max_queue_size=10, workers=1, use_multiprocessing=False
)
Trains the model for a fixed number of epochs (iterations on a dataset).
Arguments
x Input data. It could be: A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). A dict mapping input names to the corresponding array/tensors, if the model has named inputs. A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights). A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights). A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below.
y Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).
batch_size Integer or None. Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).
epochs Integer. Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided. Note that in conjunction with initial_epoch, epochs is to be understood as "final epoch". The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.
verbose 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (eg, in a production environment).
callbacks List of keras.callbacks.Callback instances. List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on verbose argument to model.fit.
validation_split Float between 0 and 1. Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance.
validation_data Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be: tuple (x_val, y_val) of Numpy arrays or tensors tuple (x_val, y_val, val_sample_weights) of Numpy arrays dataset For the first two cases, batch_size must be provided. For the last case, validation_steps could be provided. Note that validation_data does not support all the data types that are supported in x, eg, dict, generator or keras.utils.Sequence.
shuffle Boolean (whether to shuffle the training data before each epoch) or str (for 'batch'). This argument is ignored when x is a generator. 'batch' is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.
class_weight Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to "pay more attention" to samples from an under-represented class.
sample_weight Optional Numpy array of weights for the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance, instead provide the sample_weights as the third element of x.
initial_epoch Integer. Epoch at which to start training (useful for resuming a previous training run).
steps_per_epoch Integer or None. Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and 'steps_per_epoch' is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. This argument is not supported with array inputs.
validation_steps Only relevant if validation_data is provided and is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If 'validation_steps' is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If 'validation_steps' is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.
validation_batch_size Integer or None. Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).
validation_freq Only relevant if validation data is provided. Integer or collections_abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.
max_queue_size Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.
workers Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1. If 0, will execute the generator on the main thread.
use_multiprocessing Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can't be passed easily to children processes. Unpacking behavior for iterator-like inputs: A common pattern is to pass a tf.data.Dataset, generator, or tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length one tuple, effectively treating everything as 'x'. When yielding dicts, they should still adhere to the top-level tuple structure. e.g. ({"x0": x0, "x1": x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict. A notable unsupported data type is the namedtuple. The reason is that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form: namedtuple("example_tuple", ["y", "x"]) it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form: namedtuple("other_tuple", ["x", "y", "z"]) where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple. (Along with instructions to remedy the issue.)
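For example, a tf.data.Dataset yielding (features, labels) tuples can be passed directly as x (a minimal sketch, assuming model is a compiled Model and features/labels are NumPy arrays with matching first dimensions):
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).shuffle(1024).batch(32)
model.fit(dataset, epochs=3)  # no `y` argument: targets come from the dataset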
Returns A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).
Raises
RuntimeError If the model was never compiled, or if model.fit is wrapped in tf.function.
ValueError In case of mismatch between the provided input data and what the model expects or when the input data is empty. fit_generator View source
fit_generator(
generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None,
validation_data=None, validation_steps=None, validation_freq=1,
class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False,
shuffle=True, initial_epoch=0
)
Fits the model on data yielded batch-by-batch by a Python generator. DEPRECATED: Model.fit now supports generators, so there is no longer any need to use this endpoint. get_layer View source
get_layer(
name=None, index=None
)
Retrieves a layer based on either its name (unique) or index. If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottom-up).
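For illustration, a minimal sketch of both lookup modes; the model and the layer names "hidden" and "head" are arbitrary choices, not part of the API:
inputs = tf.keras.Input(shape=(4,))
x = tf.keras.layers.Dense(8, name="hidden")(inputs)
outputs = tf.keras.layers.Dense(2, name="head")(x)
model = tf.keras.Model(inputs, outputs)
hidden = model.get_layer(name="hidden")   # lookup by unique name
same = model.get_layer(index=1)           # index 0 is the InputLayer
assert hidden is same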
Arguments
name String, name of layer.
index Integer, index of layer.
Returns A layer instance.
Raises
ValueError In case of invalid layer name or index. load_weights View source
load_weights(
filepath, by_name=False, skip_mismatch=False, options=None
)
Loads all layer weights, either from a TensorFlow or an HDF5 weight file. If by_name is False weights are loaded based on the network's topology. This means the architecture should be the same as when the weights were saved. Note that layers that don't have weights are not taken into account in the topological ordering, so adding or removing layers is fine as long as they don't have weights. If by_name is True, weights are loaded into layers only if they share the same name. This is useful for fine-tuning or transfer-learning models where some of the layers have changed. Only topological loading (by_name=False) is supported when loading weights from the TensorFlow format. Note that topological loading differs slightly between TensorFlow and HDF5 formats for user-defined classes inheriting from tf.keras.Model: HDF5 loads based on a flattened list of weights, while the TensorFlow format loads based on the object-local names of attributes to which layers are assigned in the Model's constructor.
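As a rough sketch of the TensorFlow-format round trip (the checkpoint prefix "/tmp/ckpt" is an arbitrary example path, and the two models share the same topology):
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, input_shape=(4,), name="hidden"),
    tf.keras.layers.Dense(2, name="head")])
model.save_weights("/tmp/ckpt")            # writes a TF-format checkpoint prefix
clone = tf.keras.Sequential([
    tf.keras.layers.Dense(8, input_shape=(4,), name="hidden"),
    tf.keras.layers.Dense(2, name="head")])
status = clone.load_weights("/tmp/ckpt")   # TF format returns a restore status object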
Arguments
filepath String, path to the weights file to load. For weight files in TensorFlow format, this is the file prefix (the same as was passed to save_weights).
by_name Boolean, whether to load weights by name or by topological order. Only topological loading is supported for weight files in TensorFlow format.
skip_mismatch Boolean, whether to skip loading of layers where there is a mismatch in the number of weights, or a mismatch in the shape of the weight (only valid when by_name=True).
options Optional tf.train.CheckpointOptions object that specifies options for loading weights.
Returns When loading a weight file in TensorFlow format, returns the same status object as tf.train.Checkpoint.restore. When graph building, restore ops are run automatically as soon as the network is built (on first call for user-defined classes inheriting from Model, immediately if it is already built). When loading weights in HDF5 format, returns None.
Raises
ImportError If h5py is not available and the weight file is in HDF5 format.
ValueError If skip_mismatch is set to True when by_name is False. make_predict_function View source
make_predict_function()
Creates a function that executes one step of inference. This method can be overridden to support custom inference logic. This method is called by Model.predict and Model.predict_on_batch. Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual prediction logic to Model.predict_step. This function is cached the first time Model.predict or Model.predict_on_batch is called. The cache is cleared whenever Model.compile is called.
Returns Function. The function created by this method should accept a tf.data.Iterator, and return the outputs of the Model.
make_test_function View source
make_test_function()
Creates a function that executes one step of evaluation. This method can be overridden to support custom evaluation logic. This method is called by Model.evaluate and Model.test_on_batch. Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.test_step. This function is cached the first time Model.evaluate or Model.test_on_batch is called. The cache is cleared whenever Model.compile is called.
Returns Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_test_batch_end.
make_train_function View source
make_train_function()
Creates a function that executes one step of training. This method can be overridden to support custom training logic. This method is called by Model.fit and Model.train_on_batch. Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual training logic to Model.train_step. This function is cached the first time Model.fit or Model.train_on_batch is called. The cache is cleared whenever Model.compile is called.
Returns Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_train_batch_end, such as {'loss': 0.2, 'accuracy': 0.7}.
pop View source
pop()
Removes the last layer in the model.
Raises
TypeError if there are no layers in the model. predict View source
predict(
x, batch_size=None, verbose=0, steps=None, callbacks=None, max_queue_size=10,
workers=1, use_multiprocessing=False
)
Generates output predictions for the input samples. Computation is done in batches. This method is designed for performance with large-scale inputs. For small numbers of inputs that fit in one batch, directly using __call__ is recommended for faster execution, e.g. model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. Also note that test loss is not affected by regularization layers like noise and dropout.
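A minimal sketch contrasting batched prediction with a direct call; the model, shapes, and batch size below are arbitrary:
import numpy as np
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
x = np.random.random((64, 3)).astype("float32")
preds = model.predict(x, batch_size=16)   # runs in 4 batches, returns a Numpy array
small = model(x[:4], training=False)      # direct call: faster for one small batch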
Arguments
x Input samples. It could be: A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). A tf.data dataset. A generator or keras.utils.Sequence instance. A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.
batch_size Integer or None. Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of dataset, generators, or keras.utils.Sequence instances (since they generate batches).
verbose Verbosity mode, 0 or 1.
steps Total number of steps (batches of samples) before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict will run until the input dataset is exhausted.
callbacks List of keras.callbacks.Callback instances. List of callbacks to apply during prediction. See callbacks.
max_queue_size Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.
workers Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1. If 0, will execute the generator on the main thread.
use_multiprocessing Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator, as they can't be passed easily to child processes. See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.
Returns Numpy array(s) of predictions.
Raises
RuntimeError If model.predict is wrapped in tf.function.
ValueError In case of mismatch between the provided input data and the model's expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size. predict_classes View source
predict_classes(
x, batch_size=32, verbose=0
)
Generate class predictions for the input samples. The input samples are processed batch by batch.
Arguments
x input data, as a Numpy array or list of Numpy arrays (if the model has multiple inputs).
batch_size integer.
verbose verbosity mode, 0 or 1.
Returns A numpy array of class predictions.
predict_generator View source
predict_generator(
generator, steps=None, callbacks=None, max_queue_size=10, workers=1,
use_multiprocessing=False, verbose=0
)
Generates predictions for the input samples from a data generator. DEPRECATED: Model.predict now supports generators, so there is no longer any need to use this endpoint. predict_on_batch View source
predict_on_batch(
x
)
Returns predictions for a single batch of samples.
Arguments
x Input data. It could be: - A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). - A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
Returns Numpy array(s) of predictions.
Raises
RuntimeError If model.predict_on_batch is wrapped in tf.function.
ValueError In case of mismatch between given number of inputs and expectations of the model. predict_proba View source
predict_proba(
x, batch_size=32, verbose=0
)
Generates class probability predictions for the input samples. The input samples are processed batch by batch.
Arguments
x input data, as a Numpy array or list of Numpy arrays (if the model has multiple inputs).
batch_size integer.
verbose verbosity mode, 0 or 1.
Returns A Numpy array of probability predictions.
predict_step View source
predict_step(
data
)
The logic for one inference step. This method can be overridden to support custom inference logic. This method is called by Model.make_predict_function. This method should contain the mathematical logic for one step of inference. This typically includes the forward pass. Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_predict_function, which can also be overridden.
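For illustration, a hedged sketch of an override that returns softmax probabilities instead of raw outputs; the class name and layer sizes are arbitrary, and unpack_x_y_sample_weight is used to handle either bare features or (x, y, sample_weight) tuples:
class ProbabilisticModel(tf.keras.Model):
  def __init__(self):
    super().__init__()
    self.dense = tf.keras.layers.Dense(3)
  def call(self, inputs):
    return self.dense(inputs)
  def predict_step(self, data):
    x, _, _ = tf.keras.utils.unpack_x_y_sample_weight(data)
    return tf.nn.softmax(self(x, training=False))

model = ProbabilisticModel()
probs = model.predict(tf.ones((8, 4)))  # each row sums to 1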
Arguments
data A nested structure of Tensors.
Returns The result of one inference step, typically the output of calling the Model on data.
reset_metrics View source
reset_metrics()
Resets the state of all the metrics in the model. Examples:
inputs = tf.keras.layers.Input(shape=(3,))
outputs = tf.keras.layers.Dense(2)(inputs)
model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
x = np.random.random((2, 3))
y = np.random.randint(0, 2, (2, 2))
_ = model.fit(x, y, verbose=0)
assert all(float(m.result()) for m in model.metrics)
model.reset_metrics()
assert all(float(m.result()) == 0 for m in model.metrics)
reset_states View source
reset_states()
save View source
save(
filepath, overwrite=True, include_optimizer=True, save_format=None,
signatures=None, options=None, save_traces=True
)
Saves the model to Tensorflow SavedModel or a single HDF5 file. Please see tf.keras.models.save_model or the Serialization and Saving guide for details.
Arguments
filepath String, PathLike, path to SavedModel or H5 file to save the model.
overwrite Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt.
include_optimizer If True, save optimizer's state together.
save_format Either 'tf' or 'h5', indicating whether to save the model to Tensorflow SavedModel or HDF5. Defaults to 'tf' in TF 2.X, and 'h5' in TF 1.X.
signatures Signatures to save with the SavedModel. Applicable to the 'tf' format only. Please see the signatures argument in tf.saved_model.save for details.
options (only applies to SavedModel format) tf.saved_model.SaveOptions object that specifies options for saving to SavedModel.
save_traces (only applies to SavedModel format) When enabled, the SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method. Example: from keras.models import load_model
model.save('my_model.h5') # creates a HDF5 file 'my_model.h5'
del model # deletes the existing model
# returns a compiled model
# identical to the previous one
model = load_model('my_model.h5')
save_weights View source
save_weights(
filepath, overwrite=True, save_format=None, options=None
)
Saves all layer weights. Either saves in HDF5 or in TensorFlow format based on the save_format argument. When saving in HDF5 format, the weight file has:
layer_names (attribute), a list of strings (ordered names of model layers). For every layer, a group named layer.name. For every such layer group, a group attribute weight_names, a list of strings (ordered names of the weight tensors of the layer). For every weight in the layer, a dataset storing the weight value, named after the weight tensor.
When saving in TensorFlow format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any Layer instances or Optimizer instances assigned to object attributes. For networks constructed from inputs and outputs using tf.keras.Model(inputs, outputs), Layer instances used by the network are tracked/saved automatically. For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details. While the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded using Model.load_weights. Checkpoints saved using tf.train.Checkpoint.save should be restored using the corresponding tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over save_weights for training checkpoints. The TensorFlow format matches objects and variables by starting at a root object, self for save_weights, and greedily matching attribute names. For Model.save this is the Model, and for Checkpoint.save this is the Checkpoint even if the Checkpoint has a model attached. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model's variables. See the guide to training checkpoints for details on the TensorFlow format.
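A brief sketch of choosing between the two formats; the paths are arbitrary examples, and the '.h5' variant requires h5py:
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
model.save_weights("/tmp/weights_ckpt")   # TF format: filepath is a checkpoint prefix
model.save_weights("/tmp/weights.h5")     # HDF5 format, selected by the '.h5' suffix
model.load_weights("/tmp/weights_ckpt")   # restore with Model.load_weights, not tf.train.Checkpoint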
Arguments
filepath String or PathLike, path to the file to save the weights to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the '.h5' suffix causes weights to be saved in HDF5 format.
overwrite Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt.
save_format Either 'tf' or 'h5'. A filepath ending in '.h5' or '.keras' will default to HDF5 if save_format is None. Otherwise None defaults to 'tf'.
options Optional tf.train.CheckpointOptions object that specifies options for saving weights.
Raises
ImportError If h5py is not available when attempting to save in HDF5 format.
ValueError For invalid/unknown format arguments. summary View source
summary(
line_length=None, positions=None, print_fn=None
)
Prints a string summary of the network.
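As a small sketch, print_fn can be used to capture the summary as a string instead of printing it:
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
lines = []
model.summary(print_fn=lines.append)   # called once per summary line
summary_text = "\n".join(lines)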
Arguments
line_length Total length of printed lines (e.g. set this to adapt the display to different terminal window sizes).
positions Relative or absolute positions of log elements in each line. If not provided, defaults to [.33, .55, .67, 1.].
print_fn Print function to use. Defaults to print. It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary.
Raises
ValueError if summary() is called before the model is built. test_on_batch View source
test_on_batch(
x, y=None, sample_weight=None, reset_metrics=True, return_dict=False
)
Test the model on a single batch of samples.
Arguments
x Input data. It could be: - A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). - A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). A dict mapping input names to the corresponding array/tensors, if the model has named inputs.
y Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely).
sample_weight Optional array of the same length as x, containing weights to apply to the model's loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.
reset_metrics If True, the metrics returned will be only for this batch. If False, the metrics will be statefully accumulated across batches.
return_dict If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list.
Returns Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.
Raises
RuntimeError If model.test_on_batch is wrapped in tf.function.
ValueError In case of invalid user-provided arguments. test_step View source
test_step(
data
)
The logic for one evaluation step. This method can be overridden to support custom evaluation logic. This method is called by Model.make_test_function. This function should contain the mathematical logic for one step of evaluation. This typically includes the forward pass, loss calculation, and metrics updates. Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_test_function, which can also be overridden.
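For illustration, a hedged sketch of an override that reuses the compiled loss and metrics; the model is built with the functional API and only test_step is customized, mirroring the train_step pattern shown later for unpack_x_y_sample_weight:
class CustomEvalModel(tf.keras.Model):
  def test_step(self, data):
    x, y, sample_weight = tf.keras.utils.unpack_x_y_sample_weight(data)
    y_pred = self(x, training=False)
    self.compiled_loss(y, y_pred, sample_weight, regularization_losses=self.losses)
    self.compiled_metrics.update_state(y, y_pred, sample_weight)
    return {m.name: m.result() for m in self.metrics}

inputs = tf.keras.Input(shape=(3,))
outputs = tf.keras.layers.Dense(1)(inputs)
model = CustomEvalModel(inputs, outputs)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.evaluate(tf.ones((8, 3)), tf.zeros((8, 1)), verbose=0)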
Arguments
data A nested structure of Tensors.
Returns A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_test_batch_end. Typically, the values of the Model's metrics are returned.
to_json View source
to_json(
**kwargs
)
Returns a JSON string containing the network configuration. To load a network from a JSON save file, use keras.models.model_from_json(json_string, custom_objects={}).
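A minimal sketch of the round trip; only the architecture is serialized, weights are not included:
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
json_string = model.to_json()
fresh = tf.keras.models.model_from_json(json_string)  # same architecture, new weights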
Arguments
**kwargs Additional keyword arguments to be passed to json.dumps().
Returns A JSON string.
to_yaml View source
to_yaml(
**kwargs
)
Returns a yaml string containing the network configuration. To load a network from a yaml save file, use keras.models.model_from_yaml(yaml_string, custom_objects={}). custom_objects should be a dictionary mapping the names of custom losses / layers / etc to the corresponding functions / classes.
Arguments
**kwargs Additional keyword arguments to be passed to yaml.dump().
Returns A YAML string.
Raises
ImportError if yaml module is not found. train_on_batch View source
train_on_batch(
x, y=None, sample_weight=None, class_weight=None, reset_metrics=True,
return_dict=False
)
Runs a single gradient update on a single batch of data.
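A small sketch of a manual training loop built on this method; the model and random data are arbitrary placeholders:
import numpy as np
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="sgd", loss="mse", metrics=["mae"])
x = np.random.random((8, 4)).astype("float32")
y = np.random.random((8, 1)).astype("float32")
for _ in range(10):
    results = model.train_on_batch(x, y, return_dict=True)  # e.g. {'loss': ..., 'mae': ...}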
Arguments
x Input data. It could be: A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). A dict mapping input names to the corresponding array/tensors, if the model has named inputs.
y Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely).
sample_weight Optional array of the same length as x, containing weights to apply to the model's loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.
class_weight Optional dictionary mapping class indices (integers) to a weight (float) to apply to the model's loss for the samples from this class during training. This can be useful to tell the model to "pay more attention" to samples from an under-represented class.
reset_metrics If True, the metrics returned will be only for this batch. If False, the metrics will be statefully accumulated across batches.
return_dict If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list.
Returns Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.
Raises
RuntimeError If model.train_on_batch is wrapped in tf.function.
ValueError In case of invalid user-provided arguments. train_step View source
train_step(
data
)
The logic for one training step. This method can be overridden to support custom training logic. This method is called by Model.make_train_function. This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates. Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.
Arguments
data A nested structure of Tensors.
Returns A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}. | tensorflow.keras.sequential |
Module: tf.keras.utils Public API for tf.keras.utils namespace. Classes class CustomObjectScope: Exposes custom classes/functions to Keras deserialization internals. class GeneratorEnqueuer: Builds a queue out of a data generator. class OrderedEnqueuer: Builds a Enqueuer from a Sequence. class Progbar: Displays a progress bar. class Sequence: Base object for fitting to a sequence of data, such as a dataset. class SequenceEnqueuer: Base class to enqueue inputs. class custom_object_scope: Exposes custom classes/functions to Keras deserialization internals. Functions deserialize_keras_object(...): Turns the serialized form of a Keras object back into an actual object. get_custom_objects(...): Retrieves a live reference to the global dictionary of custom objects. get_file(...): Downloads a file from a URL if it not already in the cache. get_registered_name(...): Returns the name registered to an object within the Keras framework. get_registered_object(...): Returns the class associated with name if it is registered with Keras. get_source_inputs(...): Returns the list of input tensors necessary to compute tensor. model_to_dot(...): Convert a Keras model to dot format. normalize(...): Normalizes a Numpy array. pack_x_y_sample_weight(...): Packs user-provided data into a tuple. plot_model(...): Converts a Keras model to dot format and save to a file. register_keras_serializable(...): Registers an object with the Keras serialization framework. serialize_keras_object(...): Serialize a Keras object into a JSON-compatible representation. to_categorical(...): Converts a class vector (integers) to binary class matrix. unpack_x_y_sample_weight(...): Unpacks user-provided data tuple. | tensorflow.keras.utils |
tf.keras.utils.CustomObjectScope View source on GitHub Exposes custom classes/functions to Keras deserialization internals. View aliases Main aliases
tf.keras.utils.custom_object_scope Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.utils.CustomObjectScope, tf.compat.v1.keras.utils.custom_object_scope
tf.keras.utils.CustomObjectScope(
*args
)
Under a scope with custom_object_scope(objects_dict), Keras methods such as tf.keras.models.load_model or tf.keras.models.model_from_config will be able to deserialize any custom object referenced by a saved config (e.g. a custom layer or metric). Example: Consider a custom regularizer my_regularizer: layer = Dense(3, kernel_regularizer=my_regularizer)
config = layer.get_config() # Config contains a reference to `my_regularizer`
...
# Later:
with custom_object_scope({'my_regularizer': my_regularizer}):
layer = Dense.from_config(config)
Arguments
*args Dictionary or dictionaries of {name: object} pairs. Methods __enter__ View source
__enter__()
__exit__ View source
__exit__(
*args, **kwargs
) | tensorflow.keras.utils.customobjectscope |
tf.keras.utils.deserialize_keras_object View source on GitHub Turns the serialized form of a Keras object back into an actual object. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.utils.deserialize_keras_object
tf.keras.utils.deserialize_keras_object(
identifier, module_objects=None, custom_objects=None,
printable_module_name='object'
) | tensorflow.keras.utils.deserialize_keras_object |
tf.keras.utils.GeneratorEnqueuer View source on GitHub Builds a queue out of a data generator. Inherits From: SequenceEnqueuer View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.utils.GeneratorEnqueuer
tf.keras.utils.GeneratorEnqueuer(
sequence, use_multiprocessing=False, random_seed=None
)
The provided generator can be finite, in which case the class will throw a StopIteration exception. Used in fit_generator, evaluate_generator, predict_generator.
Arguments
generator a generator function which yields data
use_multiprocessing use multiprocessing if True, otherwise threading
wait_time time to sleep in-between calls to put()
random_seed Initial seed for workers, will be incremented by one for each worker. Methods get View source
get()
Creates a generator to extract data from the queue. Skips the data if it is None.
Yields The next element in the queue, i.e. a tuple (inputs, targets) or (inputs, targets, sample_weights).
is_running View source
is_running()
start View source
start(
workers=1, max_queue_size=10
)
Starts the handler's workers.
Arguments
workers Number of workers.
max_queue_size queue size (when full, workers could block on put()) stop View source
stop(
timeout=None
)
Stops running threads and waits for them to exit, if necessary. Should be called by the same thread which called start().
Arguments
timeout maximum time to wait on thread.join() | tensorflow.keras.utils.generatorenqueuer |
tf.keras.utils.get_custom_objects View source on GitHub Retrieves a live reference to the global dictionary of custom objects. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.utils.get_custom_objects
tf.keras.utils.get_custom_objects()
Updating and clearing custom objects using custom_object_scope is preferred, but get_custom_objects can be used to directly access the current collection of custom objects. Example: get_custom_objects().clear()
get_custom_objects()['MyObject'] = MyObject
Returns Global dictionary of names to classes (_GLOBAL_CUSTOM_OBJECTS). | tensorflow.keras.utils.get_custom_objects |
tf.keras.utils.get_file View source on GitHub Downloads a file from a URL if it is not already in the cache. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.utils.get_file
tf.keras.utils.get_file(
fname, origin, untar=False, md5_hash=None, file_hash=None,
cache_subdir='datasets', hash_algorithm='auto',
extract=False, archive_format='auto', cache_dir=None
)
By default the file at the url origin is downloaded to the cache_dir ~/.keras, placed in the cache_subdir datasets, and given the filename fname. The final location of a file example.txt would therefore be ~/.keras/datasets/example.txt. Files in tar, tar.gz, tar.bz, and zip formats can also be extracted. Passing a hash will verify the file after download. The command line programs shasum and sha256sum can compute the hash. Example: path_to_downloaded_file = tf.keras.utils.get_file(
"flower_photos",
"https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz",
untar=True)
Arguments
fname Name of the file. If an absolute path /path/to/file.txt is specified the file will be saved at that location.
origin Original URL of the file.
untar Deprecated in favor of extract argument. boolean, whether the file should be decompressed
md5_hash Deprecated in favor of file_hash argument. md5 hash of the file for verification
file_hash The expected hash string of the file after download. The sha256 and md5 hash algorithms are both supported.
cache_subdir Subdirectory under the Keras cache dir where the file is saved. If an absolute path /path/to/folder is specified the file will be saved at that location.
hash_algorithm Select the hash algorithm to verify the file. options are 'md5', 'sha256', and 'auto'. The default 'auto' detects the hash algorithm in use.
extract True tries extracting the file as an Archive, like tar or zip.
archive_format Archive format to try for extracting the file. Options are 'auto', 'tar', 'zip', and None. 'tar' includes tar, tar.gz, and tar.bz files. The default 'auto' corresponds to ['tar', 'zip']. None or an empty list will return no matches.
cache_dir Location to store cached files, when None it defaults to the default directory ~/.keras/.
Returns Path to the downloaded file | tensorflow.keras.utils.get_file |
tf.keras.utils.get_registered_name Returns the name registered to an object within the Keras framework. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.utils.get_registered_name
tf.keras.utils.get_registered_name(
obj
)
This function is part of the Keras serialization and deserialization framework. It maps objects to the string names associated with those objects for serialization/deserialization.
Args
obj The object to look up.
Returns The name associated with the object, or the default Python name if the object is not registered. | tensorflow.keras.utils.get_registered_name |
tf.keras.utils.get_registered_object Returns the class associated with name if it is registered with Keras. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.utils.get_registered_object
tf.keras.utils.get_registered_object(
name, custom_objects=None, module_objects=None
)
This function is part of the Keras serialization and deserialization framework. It maps strings to the objects associated with them for serialization/deserialization. Example: def from_config(cls, config, custom_objects=None):
if 'my_custom_object_name' in config:
config['hidden_cls'] = tf.keras.utils.get_registered_object(
config['my_custom_object_name'], custom_objects=custom_objects)
Args
name The name to look up.
custom_objects A dictionary of custom objects to look the name up in. Generally, custom_objects is provided by the user.
module_objects A dictionary of custom objects to look the name up in. Generally, module_objects is provided by midlevel library implementers.
Returns An instantiable class associated with 'name', or None if no such class exists. | tensorflow.keras.utils.get_registered_object |
tf.keras.utils.get_source_inputs View source on GitHub Returns the list of input tensors necessary to compute tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.utils.get_source_inputs
tf.keras.utils.get_source_inputs(
tensor, layer=None, node_index=None
)
Output will always be a list of tensors (potentially with 1 element).
Arguments
tensor The tensor to start from.
layer Origin layer of the tensor. Will be determined via tensor._keras_history if not provided.
node_index Origin node index of the tensor.
Returns List of input tensors. | tensorflow.keras.utils.get_source_inputs |
tf.keras.utils.model_to_dot View source on GitHub Convert a Keras model to dot format. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.utils.model_to_dot
tf.keras.utils.model_to_dot(
model, show_shapes=False, show_dtype=False, show_layer_names=True,
rankdir='TB', expand_nested=False, dpi=96, subgraph=False
)
Arguments
model A Keras model instance.
show_shapes whether to display shape information.
show_dtype whether to display layer dtypes.
show_layer_names whether to display layer names.
rankdir rankdir argument passed to PyDot, a string specifying the format of the plot: 'TB' creates a vertical plot; 'LR' creates a horizontal plot.
expand_nested whether to expand nested models into clusters.
dpi Dots per inch.
subgraph whether to return a pydot.Cluster instance.
Returns A pydot.Dot instance representing the Keras model or a pydot.Cluster instance representing nested model if subgraph=True.
Raises
ImportError if graphviz or pydot are not available. | tensorflow.keras.utils.model_to_dot |
tf.keras.utils.normalize View source on GitHub Normalizes a Numpy array. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.utils.normalize
tf.keras.utils.normalize(
x, axis=-1, order=2
)
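A small worked example, normalizing each row to unit L2 norm:
import numpy as np
x = np.array([[3., 4.], [6., 8.]])
tf.keras.utils.normalize(x, axis=-1, order=2)
# array([[0.6, 0.8],
#        [0.6, 0.8]])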
Arguments
x Numpy array to normalize.
axis axis along which to normalize.
order Normalization order (e.g. order=2 for L2 norm).
Returns A normalized copy of the array. | tensorflow.keras.utils.normalize |
tf.keras.utils.OrderedEnqueuer View source on GitHub Builds an Enqueuer from a Sequence. Inherits From: SequenceEnqueuer View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.utils.OrderedEnqueuer
tf.keras.utils.OrderedEnqueuer(
sequence, use_multiprocessing=False, shuffle=False
)
Used in fit_generator, evaluate_generator, predict_generator.
Arguments
sequence A tf.keras.utils.data_utils.Sequence object.
use_multiprocessing use multiprocessing if True, otherwise threading
shuffle whether to shuffle the data at the beginning of each epoch Methods get View source
get()
Creates a generator to extract data from the queue. Skips the data if it is None.
Yields The next element in the queue, i.e. a tuple (inputs, targets) or (inputs, targets, sample_weights).
is_running View source
is_running()
start View source
start(
workers=1, max_queue_size=10
)
Starts the handler's workers.
Arguments
workers Number of workers.
max_queue_size queue size (when full, workers could block on put()) stop View source
stop(
timeout=None
)
Stops running threads and waits for them to exit, if necessary. Should be called by the same thread which called start().
Arguments
timeout maximum time to wait on thread.join() | tensorflow.keras.utils.orderedenqueuer |
tf.keras.utils.pack_x_y_sample_weight Packs user-provided data into a tuple.
tf.keras.utils.pack_x_y_sample_weight(
x, y=None, sample_weight=None
)
This is a convenience utility for packing data into the tuple formats that Model.fit uses. Standalone usage:
x = tf.ones((10, 1))
data = tf.keras.utils.pack_x_y_sample_weight(x)
isinstance(data, tf.Tensor)
True
y = tf.ones((10, 1))
data = tf.keras.utils.pack_x_y_sample_weight(x, y)
isinstance(data, tuple)
True
x, y = data
Arguments
x Features to pass to Model.
y Ground-truth targets to pass to Model.
sample_weight Sample weight for each element.
Returns Tuple in the format used in Model.fit. | tensorflow.keras.utils.pack_x_y_sample_weight |
tf.keras.utils.plot_model View source on GitHub Converts a Keras model to dot format and save to a file. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.utils.plot_model
tf.keras.utils.plot_model(
model, to_file='model.png', show_shapes=False, show_dtype=False,
show_layer_names=True, rankdir='TB', expand_nested=False, dpi=96
)
Example: input = tf.keras.Input(shape=(100,), dtype='int32', name='input')
x = tf.keras.layers.Embedding(
output_dim=512, input_dim=10000, input_length=100)(input)
x = tf.keras.layers.LSTM(32)(x)
x = tf.keras.layers.Dense(64, activation='relu')(x)
x = tf.keras.layers.Dense(64, activation='relu')(x)
x = tf.keras.layers.Dense(64, activation='relu')(x)
output = tf.keras.layers.Dense(1, activation='sigmoid', name='output')(x)
model = tf.keras.Model(inputs=[input], outputs=[output])
dot_img_file = '/tmp/model_1.png'
tf.keras.utils.plot_model(model, to_file=dot_img_file, show_shapes=True)
Arguments
model A Keras model instance
to_file File name of the plot image.
show_shapes whether to display shape information.
show_dtype whether to display layer dtypes.
show_layer_names whether to display layer names.
rankdir rankdir argument passed to PyDot, a string specifying the format of the plot: 'TB' creates a vertical plot; 'LR' creates a horizontal plot.
expand_nested Whether to expand nested models into clusters.
dpi Dots per inch.
Returns A Jupyter notebook Image object if Jupyter is installed. This enables in-line display of the model plots in notebooks. | tensorflow.keras.utils.plot_model |
tf.keras.utils.Progbar View source on GitHub Displays a progress bar. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.utils.Progbar
tf.keras.utils.Progbar(
target, width=30, verbose=1, interval=0.05, stateful_metrics=None,
unit_name='step'
)
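A brief usage sketch; the sleep call stands in for real work, and the reported "loss" values are placeholders:
import time
progbar = tf.keras.utils.Progbar(target=100, unit_name="step")
for step in range(100):
    time.sleep(0.01)                                   # placeholder work
    progbar.update(step + 1, values=[("loss", 1.0 / (step + 1))])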
Arguments
target Total number of steps expected, None if unknown.
width Progress bar width on screen.
verbose Verbosity mode, 0 (silent), 1 (verbose), 2 (semi-verbose)
stateful_metrics Iterable of string names of metrics that should not be averaged over time. Metrics in this list will be displayed as-is. All others will be averaged by the progbar before display.
interval Minimum visual progress update interval (in seconds).
unit_name Display name for step counts (usually "step" or "sample"). Methods add View source
add(
n, values=None
)
update View source
update(
current, values=None, finalize=None
)
Updates the progress bar.
Arguments
current Index of current step.
values List of tuples: (name, value_for_last_step). If name is in stateful_metrics, value_for_last_step will be displayed as-is. Else, an average of the metric over time will be displayed.
finalize Whether this is the last update for the progress bar. If None, defaults to current >= self.target. | tensorflow.keras.utils.progbar |
tf.keras.utils.register_keras_serializable Registers an object with the Keras serialization framework. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.utils.register_keras_serializable
tf.keras.utils.register_keras_serializable(
package='Custom', name=None
)
This decorator injects the decorated class or function into the Keras custom object dictionary, so that it can be serialized and deserialized without needing an entry in the user-provided custom object dict. It also injects a function that Keras will call to get the object's serializable string key. Note that to be serialized and deserialized, classes must implement the get_config() method. Functions do not have this requirement. The object will be registered under the key 'package>name', where name defaults to the object name if not passed.
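For illustration, a hedged sketch registering a custom layer; the package and class names are arbitrary:
@tf.keras.utils.register_keras_serializable(package="MyPackage")
class ScaleLayer(tf.keras.layers.Layer):
  def __init__(self, factor=2.0, **kwargs):
    super().__init__(**kwargs)
    self.factor = factor
  def call(self, inputs):
    return inputs * self.factor
  def get_config(self):
    config = super().get_config()
    config.update({"factor": self.factor})
    return config

tf.keras.utils.get_registered_name(ScaleLayer)  # 'MyPackage>ScaleLayer'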
Arguments
package The package that this class belongs to.
name The name to serialize this class under in this package. If None, the class' name will be used.
Returns A decorator that registers the decorated class with the passed names. | tensorflow.keras.utils.register_keras_serializable |
tf.keras.utils.Sequence View source on GitHub Base object for fitting to a sequence of data, such as a dataset. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.utils.Sequence Every Sequence must implement the __getitem__ and the __len__ methods. If you want to modify your dataset between epochs you may implement on_epoch_end. The method __getitem__ should return a complete batch. Notes: Sequence are a safer way to do multiprocessing. This structure guarantees that the network will only train once on each sample per epoch which is not the case with generators. Examples: from skimage.io import imread
from skimage.transform import resize
import numpy as np
import math
# Here, `x_set` is list of path to the images
# and `y_set` are the associated classes.
class CIFAR10Sequence(Sequence):
def __init__(self, x_set, y_set, batch_size):
self.x, self.y = x_set, y_set
self.batch_size = batch_size
def __len__(self):
return math.ceil(len(self.x) / self.batch_size)
def __getitem__(self, idx):
batch_x = self.x[idx * self.batch_size:(idx + 1) *
self.batch_size]
batch_y = self.y[idx * self.batch_size:(idx + 1) *
self.batch_size]
return np.array([
resize(imread(file_name), (200, 200))
for file_name in batch_x]), np.array(batch_y)
Methods on_epoch_end View source
on_epoch_end()
Method called at the end of every epoch. __getitem__ View source
__getitem__(
index
)
Gets batch at position index.
Arguments
index position of the batch in the Sequence.
Returns A batch
__iter__ View source
__iter__()
Creates a generator that iterates over the Sequence. __len__ View source
__len__()
Number of batches in the Sequence.
Returns The number of batches in the Sequence. | tensorflow.keras.utils.sequence |
tf.keras.utils.SequenceEnqueuer View source on GitHub Base class to enqueue inputs. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.utils.SequenceEnqueuer
tf.keras.utils.SequenceEnqueuer(
sequence, use_multiprocessing=False
)
The task of an Enqueuer is to use parallelism to speed up preprocessing. This is done with processes or threads. Example: enqueuer = SequenceEnqueuer(...)
enqueuer.start()
datas = enqueuer.get()
for data in datas:
# Use the inputs; training, evaluating, predicting.
# ... stop sometime.
enqueuer.stop()
The enqueuer.get() should be an infinite stream of data. Methods get View source
get()
Creates a generator to extract data from the queue. Skips the data if it is None. Returns: Generator yielding tuples (inputs, targets) or (inputs, targets, sample_weights). is_running View source
is_running()
start View source
start(
workers=1, max_queue_size=10
)
Starts the handler's workers.
Arguments
workers Number of workers.
max_queue_size queue size (when full, workers could block on put()) stop View source
stop(
timeout=None
)
Stops running threads and waits for them to exit, if necessary. Should be called by the same thread which called start().
Arguments
timeout maximum time to wait on thread.join() | tensorflow.keras.utils.sequenceenqueuer |
tf.keras.utils.serialize_keras_object View source on GitHub Serialize a Keras object into a JSON-compatible representation. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.utils.serialize_keras_object
tf.keras.utils.serialize_keras_object(
instance
) | tensorflow.keras.utils.serialize_keras_object |
tf.keras.utils.to_categorical View source on GitHub Converts a class vector (integers) to binary class matrix. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.utils.to_categorical
tf.keras.utils.to_categorical(
y, num_classes=None, dtype='float32'
)
E.g. for use with categorical_crossentropy.
Arguments
y class vector to be converted into a matrix (integers from 0 to num_classes).
num_classes total number of classes. If None, this would be inferred as the (largest number in y) + 1.
dtype The data type expected by the input. Default: 'float32'.
Returns A binary matrix representation of the input. The classes axis is placed last.
Example:
a = tf.keras.utils.to_categorical([0, 1, 2, 3], num_classes=4)
a = tf.constant(a, shape=[4, 4])
print(a)
tf.Tensor(
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]], shape=(4, 4), dtype=float32)
b = tf.constant([.9, .04, .03, .03,
.3, .45, .15, .13,
.04, .01, .94, .05,
.12, .21, .5, .17],
shape=[4, 4])
loss = tf.keras.backend.categorical_crossentropy(a, b)
print(np.around(loss, 5))
[0.10536 0.82807 0.1011 1.77196]
loss = tf.keras.backend.categorical_crossentropy(a, a)
print(np.around(loss, 5))
[0. 0. 0. 0.]
Raises ValueError If input contains a string value | tensorflow.keras.utils.to_categorical
tf.keras.utils.unpack_x_y_sample_weight Unpacks user-provided data tuple.
tf.keras.utils.unpack_x_y_sample_weight(
data
)
This is a convenience utility to be used when overriding Model.train_step, Model.test_step, or Model.predict_step. This utility makes it easy to support data of the form (x,), (x, y), or (x, y, sample_weight). Standalone usage:
features_batch = tf.ones((10, 5))
labels_batch = tf.zeros((10, 5))
data = (features_batch, labels_batch)
# `y` and `sample_weight` will default to `None` if not provided.
x, y, sample_weight = tf.keras.utils.unpack_x_y_sample_weight(data)
sample_weight is None
True
Example in overridden Model.train_step: class MyModel(tf.keras.Model):
def train_step(self, data):
# If `sample_weight` is not provided, all samples will be weighted
# equally.
x, y, sample_weight = tf.keras.utils.unpack_x_y_sample_weight(data)
with tf.GradientTape() as tape:
y_pred = self(x, training=True)
loss = self.compiled_loss(
y, y_pred, sample_weight, regularization_losses=self.losses)
trainable_variables = self.trainable_variables
gradients = tape.gradient(loss, trainable_variables)
self.optimizer.apply_gradients(zip(gradients, trainable_variables))
self.compiled_metrics.update_state(y, y_pred, sample_weight)
return {m.name: m.result() for m in self.metrics}
Arguments
data A tuple of the form (x,), (x, y), or (x, y, sample_weight).
Returns The unpacked tuple, with Nones for y and sample_weight if they are not provided. | tensorflow.keras.utils.unpack_x_y_sample_weight |
Module: tf.keras.wrappers Public API for tf.keras.wrappers namespace. Modules scikit_learn module: Wrapper for using the Scikit-Learn API with Keras models. | tensorflow.keras.wrappers |
Module: tf.keras.wrappers.scikit_learn Wrapper for using the Scikit-Learn API with Keras models. Classes class KerasClassifier: Implementation of the scikit-learn classifier API for Keras. class KerasRegressor: Implementation of the scikit-learn regressor API for Keras. | tensorflow.keras.wrappers.scikit_learn |