Module: tf.keras.applications.resnet50 Public API for tf.keras.applications.resnet50 namespace. Functions ResNet50(...): Instantiates the ResNet50 architecture. decode_predictions(...): Decodes the prediction of an ImageNet model. preprocess_input(...): Preprocesses a tensor or Numpy array encoding a batch of images.
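A minimal sketch of how these three functions compose into the usual classification pipeline. `weights=None` is used here purely to avoid the ImageNet weight download, and the random batch is illustrative only; pass `weights='imagenet'` and real images, then feed `preds` to `decode_predictions`, for meaningful results:

```python
import numpy as np
import tensorflow as tf

# Build the architecture with randomly initialized weights
# (weights='imagenet' would download the pre-trained weights instead).
model = tf.keras.applications.resnet50.ResNet50(weights=None)

# A fake batch of two 224x224 RGB images with values in [0, 255].
batch = np.random.uniform(0, 255, size=(2, 224, 224, 3)).astype("float32")

# preprocess_input may write over its input, so pass a copy.
x = tf.keras.applications.resnet50.preprocess_input(batch.copy())

preds = model.predict(x, verbose=0)
print(preds.shape)  # (2, 1000) -- one softmax distribution per image
```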
tensorflow.keras.applications.resnet50
tf.keras.applications.ResNet50V2 View source on GitHub Instantiates the ResNet50V2 architecture. View aliases Main aliases tf.keras.applications.resnet_v2.ResNet50V2 Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.applications.ResNet50V2, tf.compat.v1.keras.applications.resnet_v2.ResNet50V2 tf.keras.applications.ResNet50V2( include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000, classifier_activation='softmax' ) Reference: Identity Mappings in Deep Residual Networks (ECCV 2016) Optionally loads weights pre-trained on ImageNet. Note that the data format convention used by the model is the one specified in your Keras config at ~/.keras/keras.json. Note: each Keras Application expects a specific kind of input preprocessing. For ResNetV2, call tf.keras.applications.resnet_v2.preprocess_input on your inputs before passing them to the model. Arguments include_top whether to include the fully-connected layer at the top of the network. weights one of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded. input_tensor optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. input_shape optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (224, 224, 3) (with 'channels_last' data format) or (3, 224, 224) (with 'channels_first' data format)). It should have exactly 3 input channels, and width and height should be no smaller than 32. E.g. (200, 200, 3) would be one valid value. pooling Optional pooling mode for feature extraction when include_top is False. None means that the output of the model will be the 4D tensor output of the last convolutional block. avg means that global average pooling will be applied to the output of the last convolutional block, and thus the output of the model will be a 2D tensor. 
max means that global max pooling will be applied. classes optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. classifier_activation A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. Returns A keras.Model instance.
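A short sketch of the two points that most often trip people up: the V2 preprocessing scales pixels to [-1, 1], and `include_top=False` with `pooling='avg'` turns the model into a 2D feature extractor. The 160x160 input shape below is an arbitrary valid example, and `weights=None` avoids the weight download:

```python
import numpy as np
import tensorflow as tf

# resnet_v2.preprocess_input maps [0, 255] pixel values to [-1, 1].
pixels = np.array([[[[0.0, 127.5, 255.0]]]], dtype="float32")  # (1, 1, 1, 3)
scaled = tf.keras.applications.resnet_v2.preprocess_input(pixels.copy())
print(scaled.ravel())  # [-1.  0.  1.]

# Without the top layer and with global average pooling, the output
# collapses to a 2D (batch, channels) tensor.
backbone = tf.keras.applications.ResNet50V2(
    include_top=False, weights=None, input_shape=(160, 160, 3), pooling="avg")
print(backbone.output_shape)  # (None, 2048)
```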
tensorflow.keras.applications.resnet50v2
Module: tf.keras.applications.resnet_v2 ResNet v2 models for Keras. Reference: Identity Mappings in Deep Residual Networks (ECCV 2016) Functions ResNet101V2(...): Instantiates the ResNet101V2 architecture. ResNet152V2(...): Instantiates the ResNet152V2 architecture. ResNet50V2(...): Instantiates the ResNet50V2 architecture. decode_predictions(...): Decodes the prediction of an ImageNet model. preprocess_input(...): Preprocesses a tensor or Numpy array encoding a batch of images.
tensorflow.keras.applications.resnet_v2
tf.keras.applications.resnet_v2.decode_predictions View source on GitHub Decodes the prediction of an ImageNet model. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.applications.resnet_v2.decode_predictions tf.keras.applications.resnet_v2.decode_predictions( preds, top=5 ) Arguments preds Numpy array encoding a batch of predictions. top Integer, how many top-guesses to return. Defaults to 5. Returns A list of lists of top class prediction tuples (class_name, class_description, score). One list of tuples per sample in batch input. Raises ValueError In case of invalid shape of the pred array (must be 2D).
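A sketch of the shape contract described above. On a valid (samples, 1000) array the function returns per-sample lists of tuples such as [('n02504458', 'African_elephant', 0.82), ...] (class and score here are illustrative only); that path fetches the ImageNet class-index file, so only the error path is exercised below:

```python
import numpy as np
import tensorflow as tf

# decode_predictions requires a 2D array with exactly 1000 columns;
# a 1D array of logits for a single sample is rejected up front.
raised = False
try:
    tf.keras.applications.resnet_v2.decode_predictions(np.zeros((1000,)))
except ValueError:
    raised = True
print(raised)  # True -- reshape to (1, 1000) for a single sample
```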
tensorflow.keras.applications.resnet_v2.decode_predictions
tf.keras.applications.resnet_v2.preprocess_input View source on GitHub Preprocesses a tensor or Numpy array encoding a batch of images. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.applications.resnet_v2.preprocess_input tf.keras.applications.resnet_v2.preprocess_input( x, data_format=None ) Usage example with applications.MobileNet: i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8) x = tf.cast(i, tf.float32) x = tf.keras.applications.mobilenet.preprocess_input(x) core = tf.keras.applications.MobileNet() x = core(x) model = tf.keras.Model(inputs=[i], outputs=[x]) image = tf.image.decode_png(tf.io.read_file('file.png')) result = model(image) Arguments x A floating point numpy.array or a tf.Tensor, 3D or 4D with 3 color channels, with values in the range [0, 255]. The preprocessed data are written over the input data if the data types are compatible. To avoid this behaviour, numpy.copy(x) can be used. data_format Optional data format of the image tensor/array. Defaults to None, in which case the global setting tf.keras.backend.image_data_format() is used (unless you changed it, it defaults to "channels_last"). Returns Preprocessed numpy.array or a tf.Tensor with type float32. The inputs pixel values are scaled between -1 and 1, sample-wise. Raises ValueError In case of unknown data_format argument.
tensorflow.keras.applications.resnet_v2.preprocess_input
Module: tf.keras.applications.vgg16 VGG16 model for Keras. Reference: Very Deep Convolutional Networks for Large-Scale Image Recognition (ICLR 2015) Functions VGG16(...): Instantiates the VGG16 model. decode_predictions(...): Decodes the prediction of an ImageNet model. preprocess_input(...): Preprocesses a tensor or Numpy array encoding a batch of images.
tensorflow.keras.applications.vgg16
tf.keras.applications.vgg16.decode_predictions View source on GitHub Decodes the prediction of an ImageNet model. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.applications.vgg16.decode_predictions tf.keras.applications.vgg16.decode_predictions( preds, top=5 ) Arguments preds Numpy array encoding a batch of predictions. top Integer, how many top-guesses to return. Defaults to 5. Returns A list of lists of top class prediction tuples (class_name, class_description, score). One list of tuples per sample in batch input. Raises ValueError In case of invalid shape of the pred array (must be 2D).
tensorflow.keras.applications.vgg16.decode_predictions
tf.keras.applications.vgg16.preprocess_input View source on GitHub Preprocesses a tensor or Numpy array encoding a batch of images. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.applications.vgg16.preprocess_input tf.keras.applications.vgg16.preprocess_input( x, data_format=None ) Usage example with applications.MobileNet: i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8) x = tf.cast(i, tf.float32) x = tf.keras.applications.mobilenet.preprocess_input(x) core = tf.keras.applications.MobileNet() x = core(x) model = tf.keras.Model(inputs=[i], outputs=[x]) image = tf.image.decode_png(tf.io.read_file('file.png')) result = model(image) Arguments x A floating point numpy.array or a tf.Tensor, 3D or 4D with 3 color channels, with values in the range [0, 255]. The preprocessed data are written over the input data if the data types are compatible. To avoid this behaviour, numpy.copy(x) can be used. data_format Optional data format of the image tensor/array. Defaults to None, in which case the global setting tf.keras.backend.image_data_format() is used (unless you changed it, it defaults to "channels_last"). Returns Preprocessed numpy.array or a tf.Tensor with type float32. The images are converted from RGB to BGR, then each color channel is zero-centered with respect to the ImageNet dataset, without scaling. Raises ValueError In case of unknown data_format argument.
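The RGB-to-BGR conversion and zero-centering can be made concrete with an all-zero image: after the channel flip, subtracting the ImageNet channel means (approximately 103.939, 116.779, 123.68 in BGR order) leaves the negated means. A minimal sketch:

```python
import numpy as np
import tensorflow as tf

# An all-zero "image": after RGB->BGR and mean subtraction, each channel
# holds minus the corresponding ImageNet channel mean.
zeros = np.zeros((1, 1, 1, 3), dtype="float32")
out = tf.keras.applications.vgg16.preprocess_input(zeros.copy())
print(out.ravel())  # approx [-103.939 -116.779 -123.68]
```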
tensorflow.keras.applications.vgg16.preprocess_input
Module: tf.keras.applications.vgg19 VGG19 model for Keras. Reference: Very Deep Convolutional Networks for Large-Scale Image Recognition (ICLR 2015) Functions VGG19(...): Instantiates the VGG19 architecture. decode_predictions(...): Decodes the prediction of an ImageNet model. preprocess_input(...): Preprocesses a tensor or Numpy array encoding a batch of images.
tensorflow.keras.applications.vgg19
tf.keras.applications.vgg19.decode_predictions View source on GitHub Decodes the prediction of an ImageNet model. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.applications.vgg19.decode_predictions tf.keras.applications.vgg19.decode_predictions( preds, top=5 ) Arguments preds Numpy array encoding a batch of predictions. top Integer, how many top-guesses to return. Defaults to 5. Returns A list of lists of top class prediction tuples (class_name, class_description, score). One list of tuples per sample in batch input. Raises ValueError In case of invalid shape of the pred array (must be 2D).
tensorflow.keras.applications.vgg19.decode_predictions
tf.keras.applications.vgg19.preprocess_input View source on GitHub Preprocesses a tensor or Numpy array encoding a batch of images. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.applications.vgg19.preprocess_input tf.keras.applications.vgg19.preprocess_input( x, data_format=None ) Usage example with applications.MobileNet: i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8) x = tf.cast(i, tf.float32) x = tf.keras.applications.mobilenet.preprocess_input(x) core = tf.keras.applications.MobileNet() x = core(x) model = tf.keras.Model(inputs=[i], outputs=[x]) image = tf.image.decode_png(tf.io.read_file('file.png')) result = model(image) Arguments x A floating point numpy.array or a tf.Tensor, 3D or 4D with 3 color channels, with values in the range [0, 255]. The preprocessed data are written over the input data if the data types are compatible. To avoid this behaviour, numpy.copy(x) can be used. data_format Optional data format of the image tensor/array. Defaults to None, in which case the global setting tf.keras.backend.image_data_format() is used (unless you changed it, it defaults to "channels_last"). Returns Preprocessed numpy.array or a tf.Tensor with type float32. The images are converted from RGB to BGR, then each color channel is zero-centered with respect to the ImageNet dataset, without scaling. Raises ValueError In case of unknown data_format argument.
tensorflow.keras.applications.vgg19.preprocess_input
Module: tf.keras.applications.xception Xception V1 model for Keras. On ImageNet, this model gets to a top-1 validation accuracy of 0.790 and a top-5 validation accuracy of 0.945. Reference: Xception: Deep Learning with Depthwise Separable Convolutions (CVPR 2017) Functions Xception(...): Instantiates the Xception architecture. decode_predictions(...): Decodes the prediction of an ImageNet model. preprocess_input(...): Preprocesses a tensor or Numpy array encoding a batch of images.
tensorflow.keras.applications.xception
tf.keras.applications.xception.decode_predictions View source on GitHub Decodes the prediction of an ImageNet model. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.applications.xception.decode_predictions tf.keras.applications.xception.decode_predictions( preds, top=5 ) Arguments preds Numpy array encoding a batch of predictions. top Integer, how many top-guesses to return. Defaults to 5. Returns A list of lists of top class prediction tuples (class_name, class_description, score). One list of tuples per sample in batch input. Raises ValueError In case of invalid shape of the pred array (must be 2D).
tensorflow.keras.applications.xception.decode_predictions
tf.keras.applications.xception.preprocess_input View source on GitHub Preprocesses a tensor or Numpy array encoding a batch of images. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.applications.xception.preprocess_input tf.keras.applications.xception.preprocess_input( x, data_format=None ) Usage example with applications.MobileNet: i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8) x = tf.cast(i, tf.float32) x = tf.keras.applications.mobilenet.preprocess_input(x) core = tf.keras.applications.MobileNet() x = core(x) model = tf.keras.Model(inputs=[i], outputs=[x]) image = tf.image.decode_png(tf.io.read_file('file.png')) result = model(image) Arguments x A floating point numpy.array or a tf.Tensor, 3D or 4D with 3 color channels, with values in the range [0, 255]. The preprocessed data are written over the input data if the data types are compatible. To avoid this behaviour, numpy.copy(x) can be used. data_format Optional data format of the image tensor/array. Defaults to None, in which case the global setting tf.keras.backend.image_data_format() is used (unless you changed it, it defaults to "channels_last"). Returns Preprocessed numpy.array or a tf.Tensor with type float32. The inputs pixel values are scaled between -1 and 1, sample-wise. Raises ValueError In case of unknown data_format argument.
tensorflow.keras.applications.xception.preprocess_input
Module: tf.keras.backend Keras backend API. Functions clear_session(...): Resets all state generated by Keras. epsilon(...): Returns the value of the fuzz factor used in numeric expressions. floatx(...): Returns the default float type, as a string. get_uid(...): Associates a string prefix with an integer counter in a TensorFlow graph. image_data_format(...): Returns the default image data format convention. is_keras_tensor(...): Returns whether x is a Keras tensor. reset_uids(...): Resets graph identifiers. rnn(...): Iterates over the time dimension of a tensor. set_epsilon(...): Sets the value of the fuzz factor used in numeric expressions. set_floatx(...): Sets the default float type. set_image_data_format(...): Sets the value of the image data format convention.
tensorflow.keras.backend
tf.keras.backend.clear_session View source on GitHub Resets all state generated by Keras. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.backend.clear_session tf.keras.backend.clear_session() Keras manages a global state, which it uses to implement the Functional model-building API and to uniquify autogenerated layer names. If you are creating many models in a loop, this global state will consume an increasing amount of memory over time, and you may want to clear it. Calling clear_session() releases the global state: this helps avoid clutter from old models and layers, especially when memory is limited. Example 1: calling clear_session() when creating models in a loop for _ in range(100): # Without `clear_session()`, each iteration of this loop will # slightly increase the size of the global state managed by Keras model = tf.keras.Sequential([tf.keras.layers.Dense(10) for _ in range(10)]) for _ in range(100): # With `clear_session()` called at the beginning, # Keras starts with a blank state at each iteration # and memory consumption is constant over time. tf.keras.backend.clear_session() model = tf.keras.Sequential([tf.keras.layers.Dense(10) for _ in range(10)]) Example 2: resetting the layer name generation counter import tensorflow as tf layers = [tf.keras.layers.Dense(10) for _ in range(10)] new_layer = tf.keras.layers.Dense(10) print(new_layer.name) dense_10 tf.keras.backend.set_learning_phase(1) print(tf.keras.backend.learning_phase()) 1 tf.keras.backend.clear_session() new_layer = tf.keras.layers.Dense(10) print(new_layer.name) dense
tensorflow.keras.backend.clear_session
tf.keras.backend.epsilon View source on GitHub Returns the value of the fuzz factor used in numeric expressions. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.backend.epsilon tf.keras.backend.epsilon() Returns A float. Example: tf.keras.backend.epsilon() 1e-07
tensorflow.keras.backend.epsilon
tf.keras.backend.floatx View source on GitHub Returns the default float type, as a string. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.backend.floatx tf.keras.backend.floatx() E.g. 'float16', 'float32', 'float64'. Returns String, the current default float type. Example: tf.keras.backend.floatx() 'float32'
tensorflow.keras.backend.floatx
tf.keras.backend.get_uid View source on GitHub Associates a string prefix with an integer counter in a TensorFlow graph. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.backend.get_uid tf.keras.backend.get_uid( prefix='' ) Arguments prefix String prefix to index. Returns Unique integer ID. Example: get_uid('dense') 1 get_uid('dense') 2
tensorflow.keras.backend.get_uid
tf.keras.backend.image_data_format View source on GitHub Returns the default image data format convention. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.backend.image_data_format tf.keras.backend.image_data_format() Returns A string, either 'channels_first' or 'channels_last' Example: tf.keras.backend.image_data_format() 'channels_last'
tensorflow.keras.backend.image_data_format
tf.keras.backend.is_keras_tensor Returns whether x is a Keras tensor. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.backend.is_keras_tensor tf.keras.backend.is_keras_tensor( x ) A "Keras tensor" is a tensor that was returned by a Keras layer, (Layer class) or by Input. Arguments x A candidate tensor. Returns A boolean: Whether the argument is a Keras tensor. Raises ValueError In case x is not a symbolic tensor. Examples: np_var = np.array([1, 2]) # A numpy array is not a symbolic tensor. tf.keras.backend.is_keras_tensor(np_var) Traceback (most recent call last): ValueError: Unexpectedly found an instance of type `<class 'numpy.ndarray'>`. Expected a symbolic tensor instance. keras_var = tf.keras.backend.variable(np_var) # A variable created with the keras backend is not a Keras tensor. tf.keras.backend.is_keras_tensor(keras_var) False keras_placeholder = tf.keras.backend.placeholder(shape=(2, 4, 5)) # A placeholder is a Keras tensor. tf.keras.backend.is_keras_tensor(keras_placeholder) True keras_input = tf.keras.layers.Input([10]) # An Input is a Keras tensor. tf.keras.backend.is_keras_tensor(keras_input) True keras_layer_output = tf.keras.layers.Dense(10)(keras_input) # Any Keras layer output is a Keras tensor. tf.keras.backend.is_keras_tensor(keras_layer_output) True
tensorflow.keras.backend.is_keras_tensor
tf.keras.backend.reset_uids View source on GitHub Resets graph identifiers. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.backend.reset_uids tf.keras.backend.reset_uids()
tensorflow.keras.backend.reset_uids
tf.keras.backend.rnn View source on GitHub Iterates over the time dimension of a tensor. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.backend.rnn tf.keras.backend.rnn( step_function, inputs, initial_states, go_backwards=False, mask=None, constants=None, unroll=False, input_length=None, time_major=False, zero_output_for_mask=False ) Arguments step_function RNN step function. Args: input: Tensor with shape (samples, ...) (no time dimension), representing input for the batch of samples at a certain time step. states: List of tensors. Returns: output: Tensor with shape (samples, output_dim) (no time dimension). new_states: List of tensors, same length and shapes as 'states'. The first state in the list must be the output tensor at the previous timestep. inputs Tensor of temporal data of shape (samples, time, ...) (at least 3D), or nested tensors, each of which has shape (samples, time, ...). initial_states Tensor with shape (samples, state_size) (no time dimension), containing the initial values for the states used in the step function. In the case that state_size is in a nested shape, the shape of initial_states will also follow the nested structure. go_backwards Boolean. If True, do the iteration over the time dimension in reverse order and return the reversed sequence. mask Binary tensor with shape (samples, time, 1), with a zero for every element that is masked. constants List of constant values passed at each step. unroll Whether to unroll the RNN or to use a symbolic while_loop. input_length An integer or a 1-D Tensor, depending on whether the time dimension is fixed-length or not. In case of variable length input, it is used for masking in case there's no mask specified. time_major Boolean. If true, the inputs and outputs will be in shape (timesteps, batch, ...), whereas in the False case, it will be (batch, timesteps, ...). 
Using time_major = True is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form. zero_output_for_mask Boolean. If True, the output for masked timestep will be zeros, whereas in the False case, output from previous timestep is returned. Returns A tuple, (last_output, outputs, new_states). last_output: the latest output of the rnn, of shape (samples, ...) outputs: tensor with shape (samples, time, ...) where each entry outputs[s, t] is the output of the step function at time t for sample s. new_states: list of tensors, latest states returned by the step function, of shape (samples, ...). Raises ValueError if input dimension is less than 3. ValueError if unroll is True but input timestep is not a fixed number. ValueError if mask is provided (not None) but states is not provided (len(states) == 0).
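The step-function contract above can be sketched with a running sum, where the output at each timestep is also the new state. This assumes the TF 2.x `tf.keras.backend.rnn` export; standalone Keras 3 may not expose this backend function:

```python
import numpy as np
import tensorflow as tf

# Step function: consume one timestep (samples, features) plus the
# current states, return (output, new_states).
def step(cell_inputs, states):
    new_state = states[0] + cell_inputs  # running sum over time
    return new_state, [new_state]

x = tf.ones((2, 3, 4))               # (samples, time, features)
initial_states = [tf.zeros((2, 4))]  # one state of shape (samples, state_size)

last_output, outputs, new_states = tf.keras.backend.rnn(step, x, initial_states)
print(last_output.shape, outputs.shape)  # (2, 4) (2, 3, 4)
print(float(last_output[0, 0]))          # 3.0 -- the sum of three ones
```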
tensorflow.keras.backend.rnn
tf.keras.backend.set_epsilon View source on GitHub Sets the value of the fuzz factor used in numeric expressions. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.backend.set_epsilon tf.keras.backend.set_epsilon( value ) Arguments value float. New value of epsilon. Example: tf.keras.backend.epsilon() 1e-07 tf.keras.backend.set_epsilon(1e-5) tf.keras.backend.epsilon() 1e-05 tf.keras.backend.set_epsilon(1e-7)
tensorflow.keras.backend.set_epsilon
tf.keras.backend.set_floatx View source on GitHub Sets the default float type. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.backend.set_floatx tf.keras.backend.set_floatx( value ) Note: It is not recommended to set this to float16 for training, as this will likely cause numeric stability issues. Instead, mixed precision, which is using a mix of float16 and float32, can be used by calling tf.keras.mixed_precision.experimental.set_policy('mixed_float16'). See the mixed precision guide for details. Arguments value String; 'float16', 'float32', or 'float64'. Example: tf.keras.backend.floatx() 'float32' tf.keras.backend.set_floatx('float64') tf.keras.backend.floatx() 'float64' tf.keras.backend.set_floatx('float32') Raises ValueError In case of invalid value.
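Beyond changing what `floatx()` reports, the setting determines the dtype of variables created by new layers, which is usually why it is changed. A small sketch (restoring the default afterwards, as the docstring example does):

```python
import tensorflow as tf

tf.keras.backend.set_floatx("float64")
print(tf.keras.backend.floatx())  # float64

# Layers created while float64 is the default build float64 variables.
layer = tf.keras.layers.Dense(2)
layer.build((None, 3))
kernel_dtype = str(layer.kernel.dtype)

tf.keras.backend.set_floatx("float32")  # restore the default
print("float64" in kernel_dtype)  # True
```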
tensorflow.keras.backend.set_floatx
tf.keras.backend.set_image_data_format View source on GitHub Sets the value of the image data format convention. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.backend.set_image_data_format tf.keras.backend.set_image_data_format( data_format ) Arguments data_format string. 'channels_first' or 'channels_last'. Example: tf.keras.backend.image_data_format() 'channels_last' tf.keras.backend.set_image_data_format('channels_first') tf.keras.backend.image_data_format() 'channels_first' tf.keras.backend.set_image_data_format('channels_last') Raises ValueError In case of invalid data_format value.
tensorflow.keras.backend.set_image_data_format
Module: tf.keras.callbacks Callbacks: utilities called at certain points during model training. Modules experimental module: Public API for tf.keras.callbacks.experimental namespace. Classes class BaseLogger: Callback that accumulates epoch averages of metrics. class CSVLogger: Callback that streams epoch results to a CSV file. class Callback: Abstract base class used to build new callbacks. class CallbackList: Container abstracting a list of callbacks. class EarlyStopping: Stop training when a monitored metric has stopped improving. class History: Callback that records events into a History object. class LambdaCallback: Callback for creating simple, custom callbacks on-the-fly. class LearningRateScheduler: Learning rate scheduler. class ModelCheckpoint: Callback to save the Keras model or model weights at some frequency. class ProgbarLogger: Callback that prints metrics to stdout. class ReduceLROnPlateau: Reduce learning rate when a metric has stopped improving. class RemoteMonitor: Callback used to stream events to a server. class TensorBoard: Enable visualizations for TensorBoard. class TerminateOnNaN: Callback that terminates training when a NaN loss is encountered.
tensorflow.keras.callbacks
tf.keras.callbacks.BaseLogger View source on GitHub Callback that accumulates epoch averages of metrics. Inherits From: Callback View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.callbacks.BaseLogger tf.keras.callbacks.BaseLogger( stateful_metrics=None ) This callback is automatically applied to every Keras model. Arguments stateful_metrics Iterable of string names of metrics that should not be averaged over an epoch. Metrics in this list will be logged as-is in on_epoch_end. All others will be averaged in on_epoch_end. Methods set_model View source set_model( model ) set_params View source set_params( params )
tensorflow.keras.callbacks.baselogger
tf.keras.callbacks.Callback View source on GitHub Abstract base class used to build new callbacks. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.callbacks.Callback tf.keras.callbacks.Callback() The logs dictionary that callback methods take as argument will contain keys for quantities relevant to the current batch or epoch (see method-specific docstrings). Attributes params Dict. Training parameters (eg. verbosity, batch size, number of epochs...). model Instance of keras.models.Model. Reference of the model being trained. Methods on_batch_begin View source on_batch_begin( batch, logs=None ) A backwards compatibility alias for on_train_batch_begin. on_batch_end View source on_batch_end( batch, logs=None ) A backwards compatibility alias for on_train_batch_end. on_epoch_begin View source on_epoch_begin( epoch, logs=None ) Called at the start of an epoch. Subclasses should override for any actions to run. This function should only be called during TRAIN mode. Arguments epoch Integer, index of epoch. logs Dict. Currently no data is passed to this argument for this method but that may change in the future. on_epoch_end View source on_epoch_end( epoch, logs=None ) Called at the end of an epoch. Subclasses should override for any actions to run. This function should only be called during TRAIN mode. Arguments epoch Integer, index of epoch. logs Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For training epoch, the values of the Model's metrics are returned. Example : {'loss': 0.2, 'acc': 0.7}. on_predict_batch_begin View source on_predict_batch_begin( batch, logs=None ) Called at the beginning of a batch in predict methods. Subclasses should override for any actions to run. Note that if the steps_per_execution argument to compile in tf.keras.Model is set to N, this method will only be called every N batches. 
Arguments batch Integer, index of batch within the current epoch. logs Dict, contains the return value of model.predict_step, it typically returns a dict with a key 'outputs' containing the model's outputs. on_predict_batch_end View source on_predict_batch_end( batch, logs=None ) Called at the end of a batch in predict methods. Subclasses should override for any actions to run. Note that if the steps_per_execution argument to compile in tf.keras.Model is set to N, this method will only be called every N batches. Arguments batch Integer, index of batch within the current epoch. logs Dict. Aggregated metric results up until this batch. on_predict_begin View source on_predict_begin( logs=None ) Called at the beginning of prediction. Subclasses should override for any actions to run. Arguments logs Dict. Currently no data is passed to this argument for this method but that may change in the future. on_predict_end View source on_predict_end( logs=None ) Called at the end of prediction. Subclasses should override for any actions to run. Arguments logs Dict. Currently no data is passed to this argument for this method but that may change in the future. on_test_batch_begin View source on_test_batch_begin( batch, logs=None ) Called at the beginning of a batch in evaluate methods. Also called at the beginning of a validation batch in the fit methods, if validation data is provided. Subclasses should override for any actions to run. Note that if the steps_per_execution argument to compile in tf.keras.Model is set to N, this method will only be called every N batches. Arguments batch Integer, index of batch within the current epoch. logs Dict, contains the return value of model.test_step. Typically, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}. on_test_batch_end View source on_test_batch_end( batch, logs=None ) Called at the end of a batch in evaluate methods. 
Also called at the end of a validation batch in the fit methods, if validation data is provided. Subclasses should override for any actions to run. Note that if the steps_per_execution argument to compile in tf.keras.Model is set to N, this method will only be called every N batches. Arguments batch Integer, index of batch within the current epoch. logs Dict. Aggregated metric results up until this batch. on_test_begin View source on_test_begin( logs=None ) Called at the beginning of evaluation or validation. Subclasses should override for any actions to run. Arguments logs Dict. Currently no data is passed to this argument for this method but that may change in the future. on_test_end View source on_test_end( logs=None ) Called at the end of evaluation or validation. Subclasses should override for any actions to run. Arguments logs Dict. Currently the output of the last call to on_test_batch_end() is passed to this argument for this method but that may change in the future. on_train_batch_begin View source on_train_batch_begin( batch, logs=None ) Called at the beginning of a training batch in fit methods. Subclasses should override for any actions to run. Note that if the steps_per_execution argument to compile in tf.keras.Model is set to N, this method will only be called every N batches. Arguments batch Integer, index of batch within the current epoch. logs Dict, contains the return value of model.train_step. Typically, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}. on_train_batch_end View source on_train_batch_end( batch, logs=None ) Called at the end of a training batch in fit methods. Subclasses should override for any actions to run. Note that if the steps_per_execution argument to compile in tf.keras.Model is set to N, this method will only be called every N batches. Arguments batch Integer, index of batch within the current epoch. logs Dict. Aggregated metric results up until this batch. 
on_train_begin View source on_train_begin( logs=None ) Called at the beginning of training. Subclasses should override for any actions to run. Arguments logs Dict. Currently no data is passed to this argument for this method but that may change in the future. on_train_end View source on_train_end( logs=None ) Called at the end of training. Subclasses should override for any actions to run. Arguments logs Dict. Currently the output of the last call to on_epoch_end() is passed to this argument for this method but that may change in the future. set_model View source set_model( model ) set_params View source set_params( params )
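Putting the hooks above together, a custom callback is a subclass that overrides only the methods it needs. A minimal sketch that records the training loss via `on_epoch_end`; the model, data, and epoch count are toy examples:

```python
import numpy as np
import tensorflow as tf

class LossHistory(tf.keras.callbacks.Callback):
    """Records the training loss reported at the end of each epoch."""

    def __init__(self):
        super().__init__()
        self.losses = []

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        self.losses.append(logs.get("loss"))

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse")

history_cb = LossHistory()
model.fit(np.random.rand(8, 4).astype("float32"),
          np.random.rand(8, 1).astype("float32"),
          epochs=3, verbose=0, callbacks=[history_cb])
print(len(history_cb.losses))  # 3 -- one entry per epoch
```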
tensorflow.keras.callbacks.callback
tf.keras.callbacks.CallbackList Container abstracting a list of callbacks. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.callbacks.CallbackList tf.keras.callbacks.CallbackList( callbacks=None, add_history=False, add_progbar=False, model=None, **params ) Arguments callbacks List of Callback instances. add_history Whether a History callback should be added, if one does not already exist in the callbacks list. add_progbar Whether a ProgbarLogger callback should be added, if one does not already exist in the callbacks list. model The Model these callbacks are used with. **params If provided, parameters will be passed to each Callback via Callback.set_params. Methods append View source append( callback ) on_batch_begin View source on_batch_begin( batch, logs=None ) on_batch_end View source on_batch_end( batch, logs=None ) on_epoch_begin View source on_epoch_begin( epoch, logs=None ) Calls the on_epoch_begin methods of its callbacks. This function should only be called during TRAIN mode. Arguments epoch Integer, index of epoch. logs Dict. Currently no data is passed to this argument for this method but that may change in the future. on_epoch_end View source on_epoch_end( epoch, logs=None ) Calls the on_epoch_end methods of its callbacks. This function should only be called during TRAIN mode. Arguments epoch Integer, index of epoch. logs Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. on_predict_batch_begin View source on_predict_batch_begin( batch, logs=None ) Calls the on_predict_batch_begin methods of its callbacks. Arguments batch Integer, index of batch within the current epoch. logs Dict, contains the return value of model.predict_step, it typically returns a dict with a key 'outputs' containing the model's outputs. 
on_predict_batch_end View source on_predict_batch_end( batch, logs=None ) Calls the on_predict_batch_end methods of its callbacks. Arguments batch Integer, index of batch within the current epoch. logs Dict. Aggregated metric results up until this batch. on_predict_begin View source on_predict_begin( logs=None ) Calls the on_predict_begin methods of its callbacks. Arguments logs Dict. Currently no data is passed to this argument for this method but that may change in the future. on_predict_end View source on_predict_end( logs=None ) Calls the on_predict_end methods of its callbacks. Arguments logs Dict. Currently no data is passed to this argument for this method but that may change in the future. on_test_batch_begin View source on_test_batch_begin( batch, logs=None ) Calls the on_test_batch_begin methods of its callbacks. Arguments batch Integer, index of batch within the current epoch. logs Dict, contains the return value of model.test_step. Typically, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}. on_test_batch_end View source on_test_batch_end( batch, logs=None ) Calls the on_test_batch_end methods of its callbacks. Arguments batch Integer, index of batch within the current epoch. logs Dict. Aggregated metric results up until this batch. on_test_begin View source on_test_begin( logs=None ) Calls the on_test_begin methods of its callbacks. Arguments logs Dict. Currently no data is passed to this argument for this method but that may change in the future. on_test_end View source on_test_end( logs=None ) Calls the on_test_end methods of its callbacks. Arguments logs Dict. Currently no data is passed to this argument for this method but that may change in the future. on_train_batch_begin View source on_train_batch_begin( batch, logs=None ) Calls the on_train_batch_begin methods of its callbacks. Arguments batch Integer, index of batch within the current epoch. logs Dict, contains the return value of model.train_step. 
Typically, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}. on_train_batch_end View source on_train_batch_end( batch, logs=None ) Calls the on_train_batch_end methods of its callbacks. Arguments batch Integer, index of batch within the current epoch. logs Dict. Aggregated metric results up until this batch. on_train_begin View source on_train_begin( logs=None ) Calls the on_train_begin methods of its callbacks. Arguments logs Dict. Currently no data is passed to this argument for this method but that may change in the future. on_train_end View source on_train_end( logs=None ) Calls the on_train_end methods of its callbacks. Arguments logs Dict. Currently no data is passed to this argument for this method but that may change in the future. set_model View source set_model( model ) set_params View source set_params( params ) __iter__ View source __iter__()
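As a sketch of the dispatching behavior (the `Recorder` class is illustrative): calling one of the container's `on_*` methods forwards the call to every contained callback:

```python
import tensorflow as tf

calls = []

# Hypothetical callback that records the events dispatched to it.
class Recorder(tf.keras.callbacks.Callback):
    def on_epoch_begin(self, epoch, logs=None):
        calls.append(('begin', epoch))

    def on_epoch_end(self, epoch, logs=None):
        calls.append(('end', epoch))

cb_list = tf.keras.callbacks.CallbackList(callbacks=[Recorder()])
cb_list.on_epoch_begin(0)                    # forwarded to Recorder.on_epoch_begin
cb_list.on_epoch_end(0, logs={'loss': 0.5})  # forwarded to Recorder.on_epoch_end
print(calls)
```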
tensorflow.keras.callbacks.callbacklist
tf.keras.callbacks.CSVLogger View source on GitHub Callback that streams epoch results to a CSV file. Inherits From: Callback View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.callbacks.CSVLogger tf.keras.callbacks.CSVLogger( filename, separator=',', append=False ) Supports all values that can be represented as a string, including 1D iterables such as np.ndarray. Example: csv_logger = CSVLogger('training.log') model.fit(X_train, Y_train, callbacks=[csv_logger]) Arguments filename Filename of the CSV file, e.g. 'run/log.csv'. separator String used to separate elements in the CSV file. append Boolean. True: append if file exists (useful for continuing training). False: overwrite existing file. Methods set_model View source set_model( model ) set_params View source set_params( params )
tensorflow.keras.callbacks.csvlogger
tf.keras.callbacks.EarlyStopping View source on GitHub Stop training when a monitored metric has stopped improving. Inherits From: Callback View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.callbacks.EarlyStopping tf.keras.callbacks.EarlyStopping( monitor='val_loss', min_delta=0, patience=0, verbose=0, mode='auto', baseline=None, restore_best_weights=False ) Assume the goal of training is to minimize the loss. The metric to be monitored would then be 'loss', and mode would be 'min'. A model.fit() training loop checks at the end of every epoch whether the loss is still decreasing, taking min_delta and patience into account. Once the loss is found to be no longer decreasing, model.stop_training is set to True and training terminates. The quantity to be monitored needs to be available in the logs dict. To make it available, pass the loss or metrics at model.compile(). Arguments monitor Quantity to be monitored. min_delta Minimum change in the monitored quantity to qualify as an improvement, i.e. an absolute change of less than min_delta will count as no improvement. patience Number of epochs with no improvement after which training will be stopped. verbose Verbosity mode. mode One of {"auto", "min", "max"}. In "min" mode, training will stop when the quantity monitored has stopped decreasing; in "max" mode it will stop when the quantity monitored has stopped increasing; in "auto" mode, the direction is automatically inferred from the name of the monitored quantity. baseline Baseline value for the monitored quantity. Training will stop if the model doesn't show improvement over the baseline. restore_best_weights Whether to restore model weights from the epoch with the best value of the monitored quantity. If False, the model weights obtained at the last step of training are used. 
Example: callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3) # This callback will stop the training when there is no improvement in # the loss for three consecutive epochs. model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) model.compile(tf.keras.optimizers.SGD(), loss='mse') history = model.fit(np.arange(100).reshape(5, 20), np.zeros(5), epochs=10, batch_size=1, callbacks=[callback], verbose=0) len(history.history['loss']) # Only 4 epochs are run. 4 Methods get_monitor_value View source get_monitor_value( logs ) set_model View source set_model( model ) set_params View source set_params( params )
tensorflow.keras.callbacks.earlystopping
Module: tf.keras.callbacks.experimental Public API for tf.keras.callbacks.experimental namespace. Classes class BackupAndRestore: Callback to back up and restore the training state.
tensorflow.keras.callbacks.experimental
tf.keras.callbacks.experimental.BackupAndRestore Callback to back up and restore the training state. Inherits From: Callback tf.keras.callbacks.experimental.BackupAndRestore( backup_dir ) The BackupAndRestore callback is intended to recover from interruptions that happen in the middle of a model.fit execution, by backing up the training state in a temporary checkpoint file (based on TF CheckpointManager) at the end of each epoch. If training is restarted before completion, the training state and model are restored to the most recently saved state at the beginning of a new model.fit() run. Note that the user is responsible for bringing jobs back up. This callback provides the backup and restore mechanism for fault tolerance, and the model to be restored from a previous checkpoint is expected to be the same as the one used to back up. If the user changes arguments passed to compile or fit, the checkpoint saved for fault tolerance can become invalid. Note: This callback is not compatible with disabling eager execution. A checkpoint is saved at the end of each epoch. When restoring, any partial work from the unfinished epoch in which training was restarted is redone (so the work done before an interruption doesn't affect the final model state). This works in both single-worker and multi-worker mode; only MirroredStrategy and MultiWorkerMirroredStrategy are supported for now. 
Example: class InterruptingCallback(tf.keras.callbacks.Callback): def on_epoch_begin(self, epoch, logs=None): if epoch == 4: raise RuntimeError('Interrupting!') callback = tf.keras.callbacks.experimental.BackupAndRestore( backup_dir="/tmp") model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) model.compile(tf.keras.optimizers.SGD(), loss='mse') try: model.fit(np.arange(100).reshape(5, 20), np.zeros(5), epochs=10, batch_size=1, callbacks=[callback, InterruptingCallback()], verbose=0) except: pass history = model.fit(np.arange(100).reshape(5, 20), np.zeros(5), epochs=10, batch_size=1, callbacks=[callback], verbose=0) # Only 6 more epochs are run, since the first training got interrupted at # zero-indexed epoch 4; the second training continues from 4 to 9. len(history.history['loss']) 6 Arguments backup_dir String, path to save the model file. This is the directory in which the system stores temporary files to recover the model from jobs terminated unexpectedly. The directory cannot be reused elsewhere to store other checkpoints, e.g. by a BackupAndRestore callback of another training run, or by another callback (ModelCheckpoint) of the same training run. Methods set_model View source set_model( model ) set_params View source set_params( params )
tensorflow.keras.callbacks.experimental.backupandrestore
tf.keras.callbacks.History View source on GitHub Callback that records events into a History object. Inherits From: Callback View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.callbacks.History tf.keras.callbacks.History() This callback is automatically applied to every Keras model. The History object gets returned by the fit method of models. Methods set_model View source set_model( model ) set_params View source set_params( params )
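For illustration, the object returned by `fit` can be inspected directly; a minimal sketch (the model and data are placeholders):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='sgd', loss='mse')

# `fit` returns the History callback that was applied automatically.
history = model.fit(np.zeros((8, 4)), np.zeros((8, 1)),
                    epochs=2, batch_size=4, verbose=0)

# `history.history` maps each metric name to a list with one entry per epoch.
print(sorted(history.history.keys()))
print(len(history.history['loss']))
```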
tensorflow.keras.callbacks.history
tf.keras.callbacks.LambdaCallback View source on GitHub Callback for creating simple, custom callbacks on-the-fly. Inherits From: Callback View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.callbacks.LambdaCallback tf.keras.callbacks.LambdaCallback( on_epoch_begin=None, on_epoch_end=None, on_batch_begin=None, on_batch_end=None, on_train_begin=None, on_train_end=None, **kwargs ) This callback is constructed with anonymous functions that will be called at the appropriate time. Note that the callbacks expect positional arguments, as follows: on_epoch_begin and on_epoch_end expect two positional arguments: epoch, logs on_batch_begin and on_batch_end expect two positional arguments: batch, logs on_train_begin and on_train_end expect one positional argument: logs Arguments on_epoch_begin called at the beginning of every epoch. on_epoch_end called at the end of every epoch. on_batch_begin called at the beginning of every batch. on_batch_end called at the end of every batch. on_train_begin called at the beginning of model training. on_train_end called at the end of model training. Example: # Print the batch number at the beginning of every batch. batch_print_callback = LambdaCallback( on_batch_begin=lambda batch,logs: print(batch)) # Stream the epoch loss to a file in JSON format. The file content # is not well-formed JSON but rather has a JSON object per line. import json json_log = open('loss_log.json', mode='wt', buffering=1) json_logging_callback = LambdaCallback( on_epoch_end=lambda epoch, logs: json_log.write( json.dumps({'epoch': epoch, 'loss': logs['loss']}) + '\n'), on_train_end=lambda logs: json_log.close() ) # Terminate some processes after having finished model training. processes = ... 
cleanup_callback = LambdaCallback( on_train_end=lambda logs: [ p.terminate() for p in processes if p.is_alive()]) model.fit(..., callbacks=[batch_print_callback, json_logging_callback, cleanup_callback]) Methods set_model View source set_model( model ) set_params View source set_params( params )
tensorflow.keras.callbacks.lambdacallback
tf.keras.callbacks.LearningRateScheduler View source on GitHub Learning rate scheduler. Inherits From: Callback View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.callbacks.LearningRateScheduler tf.keras.callbacks.LearningRateScheduler( schedule, verbose=0 ) At the beginning of every epoch, this callback gets the updated learning rate value from schedule function provided at __init__, with the current epoch and current learning rate, and applies the updated learning rate on the optimizer. Arguments schedule a function that takes an epoch index (integer, indexed from 0) and current learning rate (float) as inputs and returns a new learning rate as output (float). verbose int. 0: quiet, 1: update messages. Example: # This function keeps the initial learning rate for the first ten epochs # and decreases it exponentially after that. def scheduler(epoch, lr): if epoch < 10: return lr else: return lr * tf.math.exp(-0.1) model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) model.compile(tf.keras.optimizers.SGD(), loss='mse') round(model.optimizer.lr.numpy(), 5) 0.01 callback = tf.keras.callbacks.LearningRateScheduler(scheduler) history = model.fit(np.arange(100).reshape(5, 20), np.zeros(5), epochs=15, callbacks=[callback], verbose=0) round(model.optimizer.lr.numpy(), 5) 0.00607 Methods set_model View source set_model( model ) set_params View source set_params( params )
tensorflow.keras.callbacks.learningratescheduler
tf.keras.callbacks.ModelCheckpoint View source on GitHub Callback to save the Keras model or model weights at some frequency. Inherits From: Callback View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.callbacks.ModelCheckpoint tf.keras.callbacks.ModelCheckpoint( filepath, monitor='val_loss', verbose=0, save_best_only=False, save_weights_only=False, mode='auto', save_freq='epoch', options=None, **kwargs ) ModelCheckpoint callback is used in conjunction with training using model.fit() to save a model or weights (in a checkpoint file) at some interval, so the model or weights can be loaded later to continue the training from the state saved. A few options this callback provides include: Whether to only keep the model that has achieved the "best performance" so far, or whether to save the model at the end of every epoch regardless of performance. Definition of 'best'; which quantity to monitor and whether it should be maximized or minimized. The frequency it should save at. Currently, the callback supports saving at the end of every epoch, or after a fixed number of training batches. Whether only weights are saved, or the whole model is saved. Note: If you get WARNING:tensorflow:Can save best model only with <name> available, skipping see the description of the monitor argument for details on how to get this right. Example: model.compile(loss=..., optimizer=..., metrics=['accuracy']) EPOCHS = 10 checkpoint_filepath = '/tmp/checkpoint' model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint( filepath=checkpoint_filepath, save_weights_only=True, monitor='val_accuracy', mode='max', save_best_only=True) # Model weights are saved at the end of every epoch, if it's the best seen # so far. model.fit(epochs=EPOCHS, callbacks=[model_checkpoint_callback]) # The model weights (that are considered the best) are loaded into the model. 
model.load_weights(checkpoint_filepath) Arguments filepath string or PathLike, path to save the model file. filepath can contain named formatting options, which will be filled with the value of epoch and keys in logs (passed in on_epoch_end). For example: if filepath is weights.{epoch:02d}-{val_loss:.2f}.hdf5, then the model checkpoints will be saved with the epoch number and the validation loss in the filename. monitor The metric name to monitor. Typically the metrics are set by the Model.compile method. Note: Prefix the name with "val_" to monitor validation metrics. Use "loss" or "val_loss" to monitor the model's total loss. If you specify metrics as strings, like "accuracy", pass the same string (with or without the "val_" prefix). If you pass metrics.Metric objects, monitor should be set to metric.name. If you're not sure about the metric names, you can check the contents of the history.history dictionary returned by history = model.fit(). Multi-output models set additional prefixes on the metric names. verbose verbosity mode, 0 or 1. save_best_only if save_best_only=True, it only saves when the model is considered the "best" and the latest best model according to the quantity monitored will not be overwritten. If filepath doesn't contain formatting options like {epoch} then filepath will be overwritten by each new better model. mode one of {'auto', 'min', 'max'}. If save_best_only=True, the decision to overwrite the current save file is made based on either the maximization or the minimization of the monitored quantity. For val_acc, this should be max, for val_loss this should be min, etc. In auto mode, the direction is automatically inferred from the name of the monitored quantity. save_weights_only if True, then only the model's weights will be saved (model.save_weights(filepath)), else the full model is saved (model.save(filepath)). save_freq 'epoch' or integer. When using 'epoch', the callback saves the model after each epoch. 
When using integer, the callback saves the model at end of this many batches. If the Model is compiled with steps_per_execution=N, then the saving criteria will be checked every Nth batch. Note that if the saving isn't aligned to epochs, the monitored metric may potentially be less reliable (it could reflect as little as 1 batch, since the metrics get reset every epoch). Defaults to 'epoch'. options Optional tf.train.CheckpointOptions object if save_weights_only is true or optional tf.saved_model.SaveOptions object if save_weights_only is false. **kwargs Additional arguments for backwards compatibility. Possible key is period. Methods set_model View source set_model( model ) set_params View source set_params( params )
tensorflow.keras.callbacks.modelcheckpoint
tf.keras.callbacks.ProgbarLogger View source on GitHub Callback that prints metrics to stdout. Inherits From: Callback View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.callbacks.ProgbarLogger tf.keras.callbacks.ProgbarLogger( count_mode='samples', stateful_metrics=None ) Arguments count_mode One of "steps" or "samples". Whether the progress bar should count samples seen or steps (batches) seen. stateful_metrics Iterable of string names of metrics that should not be averaged over an epoch. Metrics in this list will be logged as-is. All others will be averaged over time (e.g. loss, etc). If not provided, defaults to the Model's metrics. Raises ValueError In case of invalid count_mode. Methods set_model View source set_model( model ) set_params View source set_params( params )
tensorflow.keras.callbacks.progbarlogger
tf.keras.callbacks.ReduceLROnPlateau View source on GitHub Reduce learning rate when a metric has stopped improving. Inherits From: Callback View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.callbacks.ReduceLROnPlateau tf.keras.callbacks.ReduceLROnPlateau( monitor='val_loss', factor=0.1, patience=10, verbose=0, mode='auto', min_delta=0.0001, cooldown=0, min_lr=0, **kwargs ) Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This callback monitors a quantity and if no improvement is seen for a 'patience' number of epochs, the learning rate is reduced. Example: reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0.001) model.fit(X_train, Y_train, callbacks=[reduce_lr]) Arguments monitor quantity to be monitored. factor factor by which the learning rate will be reduced. new_lr = lr * factor. patience number of epochs with no improvement after which learning rate will be reduced. verbose int. 0: quiet, 1: update messages. mode one of {'auto', 'min', 'max'}. In 'min' mode, the learning rate will be reduced when the quantity monitored has stopped decreasing; in 'max' mode it will be reduced when the quantity monitored has stopped increasing; in 'auto' mode, the direction is automatically inferred from the name of the monitored quantity. min_delta threshold for measuring the new optimum, to only focus on significant changes. cooldown number of epochs to wait before resuming normal operation after lr has been reduced. min_lr lower bound on the learning rate. Methods in_cooldown View source in_cooldown() set_model View source set_model( model ) set_params View source set_params( params )
tensorflow.keras.callbacks.reducelronplateau
tf.keras.callbacks.RemoteMonitor View source on GitHub Callback used to stream events to a server. Inherits From: Callback View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.callbacks.RemoteMonitor tf.keras.callbacks.RemoteMonitor( root='http://localhost:9000', path='/publish/epoch/end/', field='data', headers=None, send_as_json=False ) Requires the requests library. Events are sent to root + '/publish/epoch/end/' by default. Calls are HTTP POST, with a data argument which is a JSON-encoded dictionary of event data. If send_as_json=True, the content type of the request will be "application/json". Otherwise the serialized JSON will be sent within a form. Arguments root String; root url of the target server. path String; path relative to root to which the events will be sent. field String; JSON field under which the data will be stored. The field is used only if the payload is sent within a form (i.e. send_as_json is set to False). headers Dictionary; optional custom HTTP headers. send_as_json Boolean; whether the request should be sent as "application/json". Methods set_model View source set_model( model ) set_params View source set_params( params )
tensorflow.keras.callbacks.remotemonitor
tf.keras.callbacks.TensorBoard View source on GitHub Enable visualizations for TensorBoard. Inherits From: Callback tf.keras.callbacks.TensorBoard( log_dir='logs', histogram_freq=0, write_graph=True, write_images=False, update_freq='epoch', profile_batch=2, embeddings_freq=0, embeddings_metadata=None, **kwargs ) TensorBoard is a visualization tool provided with TensorFlow. This callback logs events for TensorBoard, including: Metrics summary plots Training graph visualization Activation histograms Sampled profiling If you have installed TensorFlow with pip, you should be able to launch TensorBoard from the command line: tensorboard --logdir=path_to_your_logs You can find more information about TensorBoard here. Arguments log_dir the path of the directory where to save the log files to be parsed by TensorBoard. histogram_freq frequency (in epochs) at which to compute activation and weight histograms for the layers of the model. If set to 0, histograms won't be computed. Validation data (or split) must be specified for histogram visualizations. write_graph whether to visualize the graph in TensorBoard. The log file can become quite large when write_graph is set to True. write_images whether to write model weights to visualize as image in TensorBoard. update_freq 'batch' or 'epoch' or integer. When using 'batch', writes the losses and metrics to TensorBoard after each batch. The same applies for 'epoch'. If using an integer, let's say 1000, the callback will write the metrics and losses to TensorBoard every 1000 batches. Note that writing too frequently to TensorBoard can slow down your training. profile_batch Profile the batch(es) to sample compute characteristics. profile_batch must be a non-negative integer or a tuple of integers. A pair of positive integers signify a range of batches to profile. By default, it will profile the second batch. Set profile_batch=0 to disable profiling. embeddings_freq frequency (in epochs) at which embedding layers will be visualized. 
If set to 0, embeddings won't be visualized. embeddings_metadata a dictionary which maps layer name to a file name in which metadata for this embedding layer is saved. See the details about metadata files format. In case if the same metadata file is used for all embedding layers, string can be passed. Examples: Basic usage: tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="./logs") model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback]) # Then run the tensorboard command to view the visualizations. Custom batch-level summaries in a subclassed Model: class MyModel(tf.keras.Model): def build(self, _): self.dense = tf.keras.layers.Dense(10) def call(self, x): outputs = self.dense(x) tf.summary.histogram('outputs', outputs) return outputs model = MyModel() model.compile('sgd', 'mse') # Make sure to set `update_freq=N` to log a batch-level summary every N batches. # In addition to any `tf.summary` contained in `Model.call`, metrics added in # `Model.compile` will be logged every N batches. tb_callback = tf.keras.callbacks.TensorBoard('./logs', update_freq=1) model.fit(x_train, y_train, callbacks=[tb_callback]) Custom batch-level summaries in a Functional API Model: def my_summary(x): tf.summary.histogram('x', x) return x inputs = tf.keras.Input(10) x = tf.keras.layers.Dense(10)(inputs) outputs = tf.keras.layers.Lambda(my_summary)(x) model = tf.keras.Model(inputs, outputs) model.compile('sgd', 'mse') # Make sure to set `update_freq=N` to log a batch-level summary every N batches. # In addition to any `tf.summary` contained in `Model.call`, metrics added in # `Model.compile` will be logged every N batches. tb_callback = tf.keras.callbacks.TensorBoard('./logs', update_freq=1) model.fit(x_train, y_train, callbacks=[tb_callback]) Profiling: # Profile a single batch, e.g. the 5th batch. 
tensorboard_callback = tf.keras.callbacks.TensorBoard( log_dir='./logs', profile_batch=5) model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback]) # Profile a range of batches, e.g. from 10 to 20. tensorboard_callback = tf.keras.callbacks.TensorBoard( log_dir='./logs', profile_batch=(10,20)) model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback]) Methods set_model View source set_model( model ) Sets Keras model and writes graph if specified. set_params View source set_params( params )
tensorflow.keras.callbacks.tensorboard
tf.keras.callbacks.TerminateOnNaN View source on GitHub Callback that terminates training when a NaN loss is encountered. Inherits From: Callback View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.callbacks.TerminateOnNaN tf.keras.callbacks.TerminateOnNaN() Methods set_model View source set_model( model ) set_params View source set_params( params )
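A minimal sketch of the behavior (the NaN-producing data is contrived for illustration): once a batch produces a NaN loss, training stops rather than running all requested epochs:

```python
import numpy as np
import tensorflow as tf

# A NaN in the inputs makes the very first batch loss NaN.
x = np.array([[1.0], [np.nan]])
y = np.array([[1.0], [1.0]])

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer='sgd', loss='mse')

history = model.fit(x, y, epochs=5, batch_size=2, verbose=0,
                    callbacks=[tf.keras.callbacks.TerminateOnNaN()])
print(len(history.history['loss']))  # fewer than the 5 requested epochs
```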
tensorflow.keras.callbacks.terminateonnan
Module: tf.keras.constraints Constraints: functions that impose constraints on weight values. Classes class Constraint class MaxNorm: MaxNorm weight constraint. class MinMaxNorm: MinMaxNorm weight constraint. class NonNeg: Constrains the weights to be non-negative. class RadialConstraint: Constrains Conv2D kernel weights to be the same for each radius. class UnitNorm: Constrains the weights incident to each hidden unit to have unit norm. class max_norm: MaxNorm weight constraint. class min_max_norm: MinMaxNorm weight constraint. class non_neg: Constrains the weights to be non-negative. class radial_constraint: Constrains Conv2D kernel weights to be the same for each radius. class unit_norm: Constrains the weights incident to each hidden unit to have unit norm. Functions deserialize(...) get(...) serialize(...)
tensorflow.keras.constraints
tf.keras.constraints.Constraint View source on GitHub View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.constraints.Constraint Methods get_config View source get_config() __call__ View source __call__( w ) Call self as a function.
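To sketch how the two methods are meant to be used: a subclass implements `__call__` to map a weight tensor to a projected weight tensor, and `get_config` for serialization. The `ClipConstraint` name and its `clip_value` argument are illustrative, not part of the API:

```python
import tensorflow as tf

# Hypothetical Constraint subclass: clip every weight into [-clip_value, clip_value].
class ClipConstraint(tf.keras.constraints.Constraint):
    def __init__(self, clip_value=0.5):
        self.clip_value = clip_value

    def __call__(self, w):
        # Project the incoming weight tensor onto the allowed set.
        return tf.clip_by_value(w, -self.clip_value, self.clip_value)

    def get_config(self):
        return {'clip_value': self.clip_value}

w = tf.constant([[-2.0, 0.1], [0.3, 4.0]])
clipped = ClipConstraint(0.5)(w)
print(clipped.numpy())
```

In practice such an instance would be passed to a layer, e.g. as `kernel_constraint`.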
tensorflow.keras.constraints.constraint
tf.keras.constraints.deserialize View source on GitHub View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.constraints.deserialize tf.keras.constraints.deserialize( config, custom_objects=None )
tensorflow.keras.constraints.deserialize
tf.keras.constraints.get View source on GitHub View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.constraints.get tf.keras.constraints.get( identifier )
tensorflow.keras.constraints.get
tf.keras.constraints.MaxNorm View source on GitHub MaxNorm weight constraint. Inherits From: Constraint View aliases Main aliases tf.keras.constraints.max_norm Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.constraints.MaxNorm, tf.compat.v1.keras.constraints.max_norm tf.keras.constraints.MaxNorm( max_value=2, axis=0 ) Constrains the weights incident to each hidden unit to have a norm less than or equal to a desired value. Also available via the shortcut function tf.keras.constraints.max_norm. Arguments max_value the maximum norm value for the incoming weights. axis integer, axis along which to calculate weight norms. For instance, in a Dense layer the weight matrix has shape (input_dim, output_dim), set axis to 0 to constrain each weight vector of length (input_dim,). In a Conv2D layer with data_format="channels_last", the weight tensor has shape (rows, cols, input_depth, output_depth), set axis to [0, 1, 2] to constrain the weights of each filter tensor of size (rows, cols, input_depth).
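As an illustrative sketch of the effect: applying the constraint directly to a weight matrix rescales any column (with `axis=0`) whose norm exceeds `max_value`, and leaves already-allowed columns essentially unchanged:

```python
import numpy as np
import tensorflow as tf

# Column 0 has norm 5 (> max_value); column 1 has norm 1 (already allowed).
w = tf.constant([[3.0, 0.0],
                 [4.0, 1.0]])
constrained = tf.keras.constraints.MaxNorm(max_value=2.0, axis=0)(w)

# Per-column norms after the constraint is applied.
print(np.linalg.norm(constrained.numpy(), axis=0))
```

In a layer this is typically passed as, e.g., `tf.keras.layers.Dense(10, kernel_constraint=tf.keras.constraints.MaxNorm(2.0))`.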
tensorflow.keras.constraints.maxnorm
tf.keras.constraints.MinMaxNorm View source on GitHub MinMaxNorm weight constraint. Inherits From: Constraint View aliases Main aliases tf.keras.constraints.min_max_norm Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.constraints.MinMaxNorm, tf.compat.v1.keras.constraints.min_max_norm tf.keras.constraints.MinMaxNorm( min_value=0.0, max_value=1.0, rate=1.0, axis=0 ) Constrains the weights incident to each hidden unit to have the norm between a lower bound and an upper bound. Also available via the shortcut function tf.keras.constraints.min_max_norm. Arguments min_value the minimum norm for the incoming weights. max_value the maximum norm for the incoming weights. rate rate for enforcing the constraint: weights will be rescaled to yield (1 - rate) * norm + rate * norm.clip(min_value, max_value). Effectively, this means that rate=1.0 stands for strict enforcement of the constraint, while rate<1.0 means that weights will be rescaled at each step to slowly move towards a value inside the desired interval. axis integer, axis along which to calculate weight norms. For instance, in a Dense layer the weight matrix has shape (input_dim, output_dim), set axis to 0 to constrain each weight vector of length (input_dim,). In a Conv2D layer with data_format="channels_last", the weight tensor has shape (rows, cols, input_depth, output_depth), set axis to [0, 1, 2] to constrain the weights of each filter tensor of size (rows, cols, input_depth).
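The `rate` formula above can be checked directly; with `rate=1.0`, a weight vector whose norm falls below `min_value` is rescaled up to that bound (the numbers are illustrative):

```python
import numpy as np
import tensorflow as tf

# The single column has norm 0.5, below min_value.
w = tf.constant([[0.3],
                 [0.4]])
constraint = tf.keras.constraints.MinMaxNorm(min_value=1.0, max_value=2.0,
                                             rate=1.0, axis=0)
constrained = constraint(w)

# With rate=1.0 the column norm is clipped strictly into [min_value, max_value].
print(np.linalg.norm(constrained.numpy(), axis=0))
```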
tensorflow.keras.constraints.minmaxnorm
tf.keras.constraints.NonNeg View source on GitHub Constrains the weights to be non-negative. Inherits From: Constraint View aliases Main aliases tf.keras.constraints.non_neg Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.constraints.NonNeg, tf.compat.v1.keras.constraints.non_neg Also available via the shortcut function tf.keras.constraints.non_neg. Methods get_config View source get_config() __call__ View source __call__( w ) Call self as a function.
tensorflow.keras.constraints.nonneg
tf.keras.constraints.RadialConstraint View source on GitHub Constrains Conv2D kernel weights to be the same for each radius. Inherits From: Constraint View aliases Main aliases tf.keras.constraints.radial_constraint Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.constraints.RadialConstraint, tf.compat.v1.keras.constraints.radial_constraint Also available via the shortcut function tf.keras.constraints.radial_constraint. For example, the desired output for the following 4-by-4 kernel: kernel = [[v_00, v_01, v_02, v_03], [v_10, v_11, v_12, v_13], [v_20, v_21, v_22, v_23], [v_30, v_31, v_32, v_33]] is this: kernel = [[v_11, v_11, v_11, v_11], [v_11, v_33, v_33, v_11], [v_11, v_33, v_33, v_11], [v_11, v_11, v_11, v_11]] This constraint can be applied to any Conv2D layer version, including Conv2DTranspose and SeparableConv2D, and with either "channels_last" or "channels_first" data format. The method assumes the weight tensor is of shape (rows, cols, input_depth, output_depth). Methods get_config View source get_config()
tensorflow.keras.constraints.radialconstraint
tf.keras.constraints.serialize View source on GitHub View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.constraints.serialize tf.keras.constraints.serialize( constraint )
tensorflow.keras.constraints.serialize
tf.keras.constraints.UnitNorm View source on GitHub Constrains the weights incident to each hidden unit to have unit norm. Inherits From: Constraint View aliases Main aliases tf.keras.constraints.unit_norm Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.constraints.UnitNorm, tf.compat.v1.keras.constraints.unit_norm tf.keras.constraints.UnitNorm( axis=0 ) Also available via the shortcut function tf.keras.constraints.unit_norm. Arguments axis integer, axis along which to calculate weight norms. For instance, in a Dense layer the weight matrix has shape (input_dim, output_dim), set axis to 0 to constrain each weight vector of length (input_dim,). In a Conv2D layer with data_format="channels_last", the weight tensor has shape (rows, cols, input_depth, output_depth), set axis to [0, 1, 2] to constrain the weights of each filter tensor of size (rows, cols, input_depth).
tensorflow.keras.constraints.unitnorm
Module: tf.keras.datasets Public API for tf.keras.datasets namespace. Modules boston_housing module: Boston housing price regression dataset. cifar10 module: CIFAR10 small images classification dataset. cifar100 module: CIFAR100 small images classification dataset. fashion_mnist module: Fashion-MNIST dataset. imdb module: IMDB sentiment classification dataset. mnist module: MNIST handwritten digits dataset. reuters module: Reuters topic classification dataset.
tensorflow.keras.datasets
Module: tf.keras.datasets.boston_housing Boston housing price regression dataset. Functions load_data(...): Loads the Boston Housing dataset.
tensorflow.keras.datasets.boston_housing
tf.keras.datasets.boston_housing.load_data View source on GitHub Loads the Boston Housing dataset. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.datasets.boston_housing.load_data tf.keras.datasets.boston_housing.load_data( path='boston_housing.npz', test_split=0.2, seed=113 ) This is a dataset taken from the StatLib library which is maintained at Carnegie Mellon University. Samples contain 13 attributes of houses at different locations around the Boston suburbs in the late 1970s. Targets are the median values of the houses at a location (in k$). The attributes themselves are defined in the StatLib website. Arguments path path where to cache the dataset locally (relative to ~/.keras/datasets). test_split fraction of the data to reserve as test set. seed Random seed for shuffling the data before computing the test split. Returns Tuple of Numpy arrays: (x_train, y_train), (x_test, y_test). x_train, x_test: numpy arrays with shape (num_samples, 13) containing either the training samples (for x_train), or test samples (for x_test). y_train, y_test: numpy arrays of shape (num_samples,) containing the target scalars. The targets are float scalars typically between 10 and 50 that represent the home prices in k$.
tensorflow.keras.datasets.boston_housing.load_data
Module: tf.keras.datasets.cifar10 CIFAR10 small images classification dataset. Functions load_data(...): Loads CIFAR10 dataset.
tensorflow.keras.datasets.cifar10
tf.keras.datasets.cifar10.load_data View source on GitHub Loads CIFAR10 dataset. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.datasets.cifar10.load_data tf.keras.datasets.cifar10.load_data() This is a dataset of 50,000 32x32 color training images and 10,000 test images, labeled over 10 categories. See more info at the CIFAR homepage. Returns Tuple of Numpy arrays: (x_train, y_train), (x_test, y_test). x_train, x_test: uint8 arrays of RGB image data with shape (num_samples, 3, 32, 32) if tf.keras.backend.image_data_format() is 'channels_first', or (num_samples, 32, 32, 3) if the data format is 'channels_last'. y_train, y_test: uint8 arrays of category labels (integers in range 0-9) each with shape (num_samples, 1).
tensorflow.keras.datasets.cifar10.load_data
Module: tf.keras.datasets.cifar100 CIFAR100 small images classification dataset. Functions load_data(...): Loads CIFAR100 dataset.
tensorflow.keras.datasets.cifar100
tf.keras.datasets.cifar100.load_data View source on GitHub Loads CIFAR100 dataset. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.datasets.cifar100.load_data tf.keras.datasets.cifar100.load_data( label_mode='fine' ) This is a dataset of 50,000 32x32 color training images and 10,000 test images, labeled over 100 fine-grained classes that are grouped into 20 coarse-grained classes. See more info at the CIFAR homepage. Arguments label_mode one of "fine", "coarse". If it is "fine" the category labels are the fine-grained labels, if it is "coarse" the output labels are the coarse-grained superclasses. Returns Tuple of Numpy arrays: (x_train, y_train), (x_test, y_test). x_train, x_test: uint8 arrays of RGB image data with shape (num_samples, 3, 32, 32) if tf.keras.backend.image_data_format() is 'channels_first', or (num_samples, 32, 32, 3) if the data format is 'channels_last'. y_train, y_test: uint8 arrays of category labels with shape (num_samples, 1). Raises ValueError in case of invalid label_mode.
tensorflow.keras.datasets.cifar100.load_data
Module: tf.keras.datasets.fashion_mnist Fashion-MNIST dataset. Functions load_data(...): Loads the Fashion-MNIST dataset.
tensorflow.keras.datasets.fashion_mnist
tf.keras.datasets.fashion_mnist.load_data View source on GitHub Loads the Fashion-MNIST dataset. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.datasets.fashion_mnist.load_data tf.keras.datasets.fashion_mnist.load_data() This is a dataset of 60,000 28x28 grayscale images of 10 fashion categories, along with a test set of 10,000 images. This dataset can be used as a drop-in replacement for MNIST. The class labels are: Label Description 0 T-shirt/top 1 Trouser 2 Pullover 3 Dress 4 Coat 5 Sandal 6 Shirt 7 Sneaker 8 Bag 9 Ankle boot Returns Tuple of Numpy arrays: (x_train, y_train), (x_test, y_test). x_train, x_test: uint8 arrays of grayscale image data with shape (num_samples, 28, 28). y_train, y_test: uint8 arrays of labels (integers in range 0-9) with shape (num_samples,). License: The copyright for Fashion-MNIST is held by Zalando SE. Fashion-MNIST is licensed under the MIT license.
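load_data returns only the integer labels; mapping them back to the class names in the table above is left to the caller. A minimal sketch (the list below is not part of the API, just the table transcribed in order):

```python
# Class names for Fashion-MNIST labels 0-9, in the order given
# by the table in the documentation above.
FASHION_MNIST_CLASS_NAMES = [
    'T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
    'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot',
]

def label_to_name(label):
    # Accepts a Python int or a NumPy integer scalar from y_train/y_test.
    return FASHION_MNIST_CLASS_NAMES[int(label)]
```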
tensorflow.keras.datasets.fashion_mnist.load_data
Module: tf.keras.datasets.imdb IMDB sentiment classification dataset. Functions get_word_index(...): Retrieves a dict mapping words to their index in the IMDB dataset. load_data(...): Loads the IMDB dataset.
tensorflow.keras.datasets.imdb
tf.keras.datasets.imdb.get_word_index View source on GitHub Retrieves a dict mapping words to their index in the IMDB dataset. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.datasets.imdb.get_word_index tf.keras.datasets.imdb.get_word_index( path='imdb_word_index.json' ) Arguments path where to cache the data (relative to ~/.keras/dataset). Returns The word index dictionary. Keys are word strings, values are their index.
tensorflow.keras.datasets.imdb.get_word_index
tf.keras.datasets.imdb.load_data View source on GitHub Loads the IMDB dataset. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.datasets.imdb.load_data tf.keras.datasets.imdb.load_data( path='imdb.npz', num_words=None, skip_top=0, maxlen=None, seed=113, start_char=1, oov_char=2, index_from=3, **kwargs ) This is a dataset of 25,000 movie reviews from IMDB, labeled by sentiment (positive/negative). Reviews have been preprocessed, and each review is encoded as a list of word indexes (integers). For convenience, words are indexed by overall frequency in the dataset, so that for instance the integer "3" encodes the 3rd most frequent word in the data. This allows for quick filtering operations such as: "only consider the top 10,000 most common words, but eliminate the top 20 most common words". As a convention, "0" does not stand for a specific word, but instead is used to encode any unknown word. Arguments path where to cache the data (relative to ~/.keras/dataset). num_words integer or None. Words are ranked by how often they occur (in the training set) and only the num_words most frequent words are kept. Any less frequent word will appear as oov_char value in the sequence data. If None, all words are kept. Defaults to None, so all words are kept. skip_top skip the top N most frequently occurring words (which may not be informative). These words will appear as oov_char value in the dataset. Defaults to 0, so no words are skipped. maxlen int or None. Maximum sequence length. Any longer sequence will be truncated. Defaults to None, which means no truncation. seed int. Seed for reproducible data shuffling. start_char int. The start of a sequence will be marked with this character. Defaults to 1 because 0 is usually the padding character. oov_char int. The out-of-vocabulary character. Words that were cut out because of the num_words or skip_top limits will be replaced with this character. index_from int.
Index actual words with this index and higher. **kwargs Used for backwards compatibility. Returns Tuple of Numpy arrays: (x_train, y_train), (x_test, y_test). x_train, x_test: lists of sequences, which are lists of indexes (integers). If the num_words argument was specified, the maximum possible index value is num_words - 1. If the maxlen argument was specified, the largest possible sequence length is maxlen. y_train, y_test: lists of integer labels (1 or 0). Raises ValueError in case maxlen is so low that no input sequence could be kept. Note that the 'out of vocabulary' character is only used for words that were present in the training set but are not included because they're not making the num_words cut here. Words that were not seen in the training set but are in the test set have simply been skipped.
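The interaction of start_char, oov_char, and index_from can be sketched with a toy word index (the three-word vocabulary below is made up for illustration; the real one comes from get_word_index()):

```python
# Word ranks are shifted up by `index_from`, freeing low indices for
# the reserved markers: 0 = padding, start_char = sequence start,
# oov_char = out-of-vocabulary.
start_char, oov_char, index_from = 1, 2, 3
word_index = {'the': 1, 'movie': 2, 'great': 3}   # hypothetical ranks

# Invert the index the way a decoder would use get_word_index() output:
inverted = {rank + index_from: word for word, rank in word_index.items()}
inverted[0] = '<PAD>'
inverted[start_char] = '<START>'
inverted[oov_char] = '<OOV>'

def decode(sequence):
    return ' '.join(inverted.get(i, '<OOV>') for i in sequence)

encoded = [1, 4, 5, 2, 6]   # start, 'the', 'movie', OOV, 'great'
```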
tensorflow.keras.datasets.imdb.load_data
Module: tf.keras.datasets.mnist MNIST handwritten digits dataset. Functions load_data(...): Loads the MNIST dataset.
tensorflow.keras.datasets.mnist
tf.keras.datasets.mnist.load_data View source on GitHub Loads the MNIST dataset. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.datasets.mnist.load_data tf.keras.datasets.mnist.load_data( path='mnist.npz' ) This is a dataset of 60,000 28x28 grayscale images of the 10 digits, along with a test set of 10,000 images. More info can be found at the MNIST homepage. Arguments path path where to cache the dataset locally (relative to ~/.keras/datasets). Returns Tuple of Numpy arrays: (x_train, y_train), (x_test, y_test). x_train, x_test: uint8 arrays of grayscale image data with shapes (num_samples, 28, 28). y_train, y_test: uint8 arrays of digit labels (integers in range 0-9) with shapes (num_samples,). License: Yann LeCun and Corinna Cortes hold the copyright of MNIST dataset, which is a derivative work from original NIST datasets. MNIST dataset is made available under the terms of the Creative Commons Attribution-Share Alike 3.0 license.
tensorflow.keras.datasets.mnist.load_data
Module: tf.keras.datasets.reuters Reuters topic classification dataset. Functions get_word_index(...): Retrieves a dict mapping words to their index in the Reuters dataset. load_data(...): Loads the Reuters newswire classification dataset.
tensorflow.keras.datasets.reuters
tf.keras.datasets.reuters.get_word_index View source on GitHub Retrieves a dict mapping words to their index in the Reuters dataset. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.datasets.reuters.get_word_index tf.keras.datasets.reuters.get_word_index( path='reuters_word_index.json' ) Arguments path where to cache the data (relative to ~/.keras/dataset). Returns The word index dictionary. Keys are word strings, values are their index.
tensorflow.keras.datasets.reuters.get_word_index
tf.keras.datasets.reuters.load_data View source on GitHub Loads the Reuters newswire classification dataset. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.datasets.reuters.load_data tf.keras.datasets.reuters.load_data( path='reuters.npz', num_words=None, skip_top=0, maxlen=None, test_split=0.2, seed=113, start_char=1, oov_char=2, index_from=3, **kwargs ) This is a dataset of 11,228 newswires from Reuters, labeled over 46 topics. This was originally generated by parsing and preprocessing the classic Reuters-21578 dataset, but the preprocessing code is no longer packaged with Keras. See this github discussion for more info. Each newswire is encoded as a list of word indexes (integers). For convenience, words are indexed by overall frequency in the dataset, so that for instance the integer "3" encodes the 3rd most frequent word in the data. This allows for quick filtering operations such as: "only consider the top 10,000 most common words, but eliminate the top 20 most common words". As a convention, "0" does not stand for a specific word, but instead is used to encode any unknown word. Arguments path where to cache the data (relative to ~/.keras/dataset). num_words integer or None. Words are ranked by how often they occur (in the training set) and only the num_words most frequent words are kept. Any less frequent word will appear as oov_char value in the sequence data. If None, all words are kept. Defaults to None, so all words are kept. skip_top skip the top N most frequently occurring words (which may not be informative). These words will appear as oov_char value in the dataset. Defaults to 0, so no words are skipped. maxlen int or None. Maximum sequence length. Any longer sequence will be truncated. Defaults to None, which means no truncation. test_split Float between 0 and 1. Fraction of the dataset to be used as test data. Defaults to 0.2, meaning 20% of the dataset is used as test data. seed int. 
Seed for reproducible data shuffling. start_char int. The start of a sequence will be marked with this character. Defaults to 1 because 0 is usually the padding character. oov_char int. The out-of-vocabulary character. Words that were cut out because of the num_words or skip_top limits will be replaced with this character. index_from int. Index actual words with this index and higher. **kwargs Used for backwards compatibility. Returns Tuple of Numpy arrays: (x_train, y_train), (x_test, y_test). x_train, x_test: lists of sequences, which are lists of indexes (integers). If the num_words argument was specified, the maximum possible index value is num_words - 1. If the maxlen argument was specified, the largest possible sequence length is maxlen. y_train, y_test: lists of integer topic indices (0 to 45). Note: The 'out of vocabulary' character is only used for words that were present in the training set but are not included because they're not making the num_words cut here. Words that were not seen in the training set but are in the test set have simply been skipped.
tensorflow.keras.datasets.reuters.load_data
Module: tf.keras.estimator Keras estimator API. Functions model_to_estimator(...): Constructs an Estimator instance from given keras model.
tensorflow.keras.estimator
tf.keras.estimator.model_to_estimator View source on GitHub Constructs an Estimator instance from given keras model. tf.keras.estimator.model_to_estimator( keras_model=None, keras_model_path=None, custom_objects=None, model_dir=None, config=None, checkpoint_format='checkpoint', metric_names_map=None, export_outputs=None ) If you use infrastructure or other tooling that relies on Estimators, you can still build a Keras model and use model_to_estimator to convert the Keras model to an Estimator for use with downstream systems. For usage example, please see: Creating estimators from Keras Models. Sample Weights: Estimators returned by model_to_estimator are configured so that they can handle sample weights (similar to keras_model.fit(x, y, sample_weights)). To pass sample weights when training or evaluating the Estimator, the first item returned by the input function should be a dictionary with keys features and sample_weights. Example below: keras_model = tf.keras.Model(...) keras_model.compile(...) estimator = tf.keras.estimator.model_to_estimator(keras_model) def input_fn(): return dataset_ops.Dataset.from_tensors( ({'features': features, 'sample_weights': sample_weights}, targets)) estimator.train(input_fn, steps=1) Example with customized export signature: inputs = {'a': tf.keras.Input(..., name='a'), 'b': tf.keras.Input(..., name='b')} outputs = {'c': tf.keras.layers.Dense(..., name='c')(inputs['a']), 'd': tf.keras.layers.Dense(..., name='d')(inputs['b'])} keras_model = tf.keras.Model(inputs, outputs) keras_model.compile(...) 
export_outputs = {'c': tf.estimator.export.RegressionOutput, 'd': tf.estimator.export.ClassificationOutput} estimator = tf.keras.estimator.model_to_estimator( keras_model, export_outputs=export_outputs) def input_fn(): return dataset_ops.Dataset.from_tensors( ({'features': features, 'sample_weights': sample_weights}, targets)) estimator.train(input_fn, steps=1) Note: We do not support creating weighted metrics in Keras and converting them to weighted metrics in the Estimator API using model_to_estimator. You will have to create these metrics directly on the estimator spec using the add_metrics function. To customize the estimator eval_metric_ops names, you can pass in the metric_names_map dictionary mapping the keras model output metric names to the custom names as follows: input_a = tf.keras.layers.Input(shape=(16,), name='input_a') input_b = tf.keras.layers.Input(shape=(16,), name='input_b') dense = tf.keras.layers.Dense(8, name='dense_1') interm_a = dense(input_a) interm_b = dense(input_b) merged = tf.keras.layers.concatenate([interm_a, interm_b], name='merge') output_a = tf.keras.layers.Dense(3, activation='softmax', name='dense_2')( merged) output_b = tf.keras.layers.Dense(2, activation='softmax', name='dense_3')( merged) keras_model = tf.keras.models.Model( inputs=[input_a, input_b], outputs=[output_a, output_b]) keras_model.compile( loss='categorical_crossentropy', optimizer='rmsprop', metrics={ 'dense_2': 'categorical_accuracy', 'dense_3': 'categorical_accuracy' }) metric_names_map = { 'dense_2_categorical_accuracy': 'acc_1', 'dense_3_categorical_accuracy': 'acc_2', } keras_est = tf.keras.estimator.model_to_estimator( keras_model=keras_model, config=config, metric_names_map=metric_names_map) Args keras_model A compiled Keras model object. This argument is mutually exclusive with keras_model_path. Estimator's model_fn uses the structure of the model to clone the model. Defaults to None. 
keras_model_path Path to a compiled Keras model saved on disk, in HDF5 format, which can be generated with the save() method of a Keras model. This argument is mutually exclusive with keras_model. Defaults to None. custom_objects Dictionary for cloning customized objects. This is used with classes that are not part of this pip package. For example, if the user maintains a relu6 class that inherits from tf.keras.layers.Layer, then pass custom_objects={'relu6': relu6}. Defaults to None. model_dir Directory to save Estimator model parameters, graph, summary files for TensorBoard, etc. If unset, a directory will be created with tempfile.mkdtemp. config RunConfig to configure the Estimator. Allows setting up things in model_fn based on configuration such as num_ps_replicas, or model_dir. Defaults to None. If both config.model_dir and the model_dir argument (above) are specified the model_dir argument takes precedence. checkpoint_format Sets the format of the checkpoint saved by the estimator when training. May be saver or checkpoint, depending on whether to save checkpoints from tf.compat.v1.train.Saver or tf.train.Checkpoint. The default is checkpoint. Estimators use name-based tf.train.Saver checkpoints, while Keras models use object-based checkpoints from tf.train.Checkpoint. Currently, saving object-based checkpoints from model_to_estimator is only supported by Functional and Sequential models. Defaults to 'checkpoint'. metric_names_map Optional dictionary mapping Keras model output metric names to custom names. This can be used to override the default Keras model output metrics names in a multi IO model use case and provide custom names for the eval_metric_ops in Estimator. The Keras model metric names can be obtained using model.metrics_names excluding any loss metrics such as total loss and output losses.
For example, if your Keras model has two outputs out_1 and out_2, with mse loss and acc metric, then model.metrics_names will be ['loss', 'out_1_loss', 'out_2_loss', 'out_1_acc', 'out_2_acc']. The model metric names excluding the loss metrics will be ['out_1_acc', 'out_2_acc']. export_outputs Optional dictionary. This can be used to override the default Keras model output exports in a multi IO model use case and provide custom names for the export_outputs in tf.estimator.EstimatorSpec. Default is None, which is equivalent to {'serving_default': tf.estimator.export.PredictOutput}. If not None, the keys must match the keys of model.output_names. A dict {name: output} where: name: An arbitrary name for this output. output: an ExportOutput class such as ClassificationOutput, RegressionOutput, or PredictOutput. Single-headed models only need to specify one entry in this dictionary. Multi-headed models should specify one entry for each head, one of which must be named using tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY. If no entry is provided, a default PredictOutput mapping to predictions will be created. Returns An Estimator from given keras model. Raises ValueError If neither keras_model nor keras_model_path was given. ValueError If both keras_model and keras_model_path were given. ValueError If the keras_model_path is a GCS URI. ValueError If keras_model has not been compiled. ValueError If an invalid checkpoint_format was given.
tensorflow.keras.estimator.model_to_estimator
Module: tf.keras.experimental Public API for tf.keras.experimental namespace. Classes class CosineDecay: A LearningRateSchedule that uses a cosine decay schedule. class CosineDecayRestarts: A LearningRateSchedule that uses a cosine decay schedule with restarts. class LinearCosineDecay: A LearningRateSchedule that uses a linear cosine decay schedule. class LinearModel: Linear Model for regression and classification problems. class NoisyLinearCosineDecay: A LearningRateSchedule that uses a noisy linear cosine decay schedule. class PeepholeLSTMCell: Equivalent to LSTMCell class but adds peephole connections. class SequenceFeatures: A layer for sequence input. class WideDeepModel: Wide & Deep Model for regression and classification problems.
tensorflow.keras.experimental
tf.keras.experimental.CosineDecay View source on GitHub A LearningRateSchedule that uses a cosine decay schedule. Inherits From: LearningRateSchedule View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.experimental.CosineDecay tf.keras.experimental.CosineDecay( initial_learning_rate, decay_steps, alpha=0.0, name=None ) See [Loshchilov & Hutter, ICLR2016], SGDR: Stochastic Gradient Descent with Warm Restarts. https://arxiv.org/abs/1608.03983 When training a model, it is often recommended to lower the learning rate as the training progresses. This schedule applies a cosine decay function to an optimizer step, given a provided initial learning rate. It requires a step value to compute the decayed learning rate. You can just pass a TensorFlow variable that you increment at each training step. The schedule is a 1-arg callable that produces a decayed learning rate when passed the current optimizer step. This can be useful for changing the learning rate value across different invocations of optimizer functions. It is computed as: def decayed_learning_rate(step): step = min(step, decay_steps) cosine_decay = 0.5 * (1 + cos(pi * step / decay_steps)) decayed = (1 - alpha) * cosine_decay + alpha return initial_learning_rate * decayed Example usage: decay_steps = 1000 lr_decayed_fn = tf.keras.experimental.CosineDecay( initial_learning_rate, decay_steps) You can pass this schedule directly into a tf.keras.optimizers.Optimizer as the learning rate. The learning rate schedule is also serializable and deserializable using tf.keras.optimizers.schedules.serialize and tf.keras.optimizers.schedules.deserialize. Returns A 1-arg callable learning rate schedule that takes the current optimizer step and outputs the decayed learning rate, a scalar Tensor of the same type as initial_learning_rate. Args initial_learning_rate A scalar float32 or float64 Tensor or a Python number. The initial learning rate.
decay_steps A scalar int32 or int64 Tensor or a Python number. Number of steps to decay over. alpha A scalar float32 or float64 Tensor or a Python number. Minimum learning rate value as a fraction of initial_learning_rate. name String. Optional name of the operation. Defaults to 'CosineDecay'. Methods from_config View source @classmethod from_config( config ) Instantiates a LearningRateSchedule from its config. Args config Output of get_config(). Returns A LearningRateSchedule instance. get_config View source get_config() __call__ View source __call__( step ) Call self as a function.
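The pseudocode above runs essentially unchanged as plain Python, which makes the shape of the schedule easy to check by hand (this sketch returns a Python float rather than a Tensor):

```python
import math

def decayed_learning_rate(step, initial_learning_rate, decay_steps,
                          alpha=0.0):
    # Direct restatement of the formula from the documentation above.
    step = min(step, decay_steps)
    cosine_decay = 0.5 * (1 + math.cos(math.pi * step / decay_steps))
    decayed = (1 - alpha) * cosine_decay + alpha
    return initial_learning_rate * decayed
```

At step 0 the schedule returns the initial learning rate, at step decay_steps it has decayed to alpha * initial_learning_rate, and halfway through it sits at the midpoint.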
tensorflow.keras.experimental.cosinedecay
tf.keras.experimental.CosineDecayRestarts View source on GitHub A LearningRateSchedule that uses a cosine decay schedule with restarts. Inherits From: LearningRateSchedule View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.experimental.CosineDecayRestarts tf.keras.experimental.CosineDecayRestarts( initial_learning_rate, first_decay_steps, t_mul=2.0, m_mul=1.0, alpha=0.0, name=None ) See [Loshchilov & Hutter, ICLR2016], SGDR: Stochastic Gradient Descent with Warm Restarts. https://arxiv.org/abs/1608.03983 When training a model, it is often recommended to lower the learning rate as the training progresses. This schedule applies a cosine decay function with restarts to an optimizer step, given a provided initial learning rate. It requires a step value to compute the decayed learning rate. You can just pass a TensorFlow variable that you increment at each training step. The schedule is a 1-arg callable that produces a decayed learning rate when passed the current optimizer step. This can be useful for changing the learning rate value across different invocations of optimizer functions. The learning rate multiplier first decays from 1 to alpha for first_decay_steps steps. Then, a warm restart is performed. Each new warm restart runs for t_mul times more steps and with m_mul times smaller initial learning rate. Example usage: first_decay_steps = 1000 lr_decayed_fn = ( tf.keras.experimental.CosineDecayRestarts( initial_learning_rate, first_decay_steps)) You can pass this schedule directly into a tf.keras.optimizers.Optimizer as the learning rate. The learning rate schedule is also serializable and deserializable using tf.keras.optimizers.schedules.serialize and tf.keras.optimizers.schedules.deserialize. Returns A 1-arg callable learning rate schedule that takes the current optimizer step and outputs the decayed learning rate, a scalar Tensor of the same type as initial_learning_rate.
Args initial_learning_rate A scalar float32 or float64 Tensor or a Python number. The initial learning rate. first_decay_steps A scalar int32 or int64 Tensor or a Python number. Number of steps to decay over. t_mul A scalar float32 or float64 Tensor or a Python number. Used to derive the number of iterations in the i-th period. m_mul A scalar float32 or float64 Tensor or a Python number. Used to derive the initial learning rate of the i-th period. alpha A scalar float32 or float64 Tensor or a Python number. Minimum learning rate value as a fraction of the initial_learning_rate. name String. Optional name of the operation. Defaults to 'SGDRDecay'. Methods from_config View source @classmethod from_config( config ) Instantiates a LearningRateSchedule from its config. Args config Output of get_config(). Returns A LearningRateSchedule instance. get_config View source get_config() __call__ View source __call__( step ) Call self as a function.
tensorflow.keras.experimental.cosinedecayrestarts
tf.keras.experimental.LinearCosineDecay View source on GitHub A LearningRateSchedule that uses a linear cosine decay schedule. Inherits From: LearningRateSchedule View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.experimental.LinearCosineDecay tf.keras.experimental.LinearCosineDecay( initial_learning_rate, decay_steps, num_periods=0.5, alpha=0.0, beta=0.001, name=None ) See [Bello et al., ICML2017] Neural Optimizer Search with RL. https://arxiv.org/abs/1709.07417 For the idea of warm starts here controlled by num_periods, see [Loshchilov & Hutter, ICLR2016] SGDR: Stochastic Gradient Descent with Warm Restarts. https://arxiv.org/abs/1608.03983 Note that linear cosine decay is more aggressive than cosine decay and larger initial learning rates can typically be used. When training a model, it is often recommended to lower the learning rate as the training progresses. This schedule applies a linear cosine decay function to an optimizer step, given a provided initial learning rate. It requires a step value to compute the decayed learning rate. You can just pass a TensorFlow variable that you increment at each training step. The schedule is a 1-arg callable that produces a decayed learning rate when passed the current optimizer step. This can be useful for changing the learning rate value across different invocations of optimizer functions. It is computed as: def decayed_learning_rate(step): step = min(step, decay_steps) linear_decay = (decay_steps - step) / decay_steps cosine_decay = 0.5 * ( 1 + cos(pi * 2 * num_periods * step / decay_steps)) decayed = (alpha + linear_decay) * cosine_decay + beta return initial_learning_rate * decayed Example usage: decay_steps = 1000 lr_decayed_fn = ( tf.keras.experimental.LinearCosineDecay( initial_learning_rate, decay_steps)) You can pass this schedule directly into a tf.keras.optimizers.Optimizer as the learning rate.
The learning rate schedule is also serializable and deserializable using tf.keras.optimizers.schedules.serialize and tf.keras.optimizers.schedules.deserialize. Returns A 1-arg callable learning rate schedule that takes the current optimizer step and outputs the decayed learning rate, a scalar Tensor of the same type as initial_learning_rate. Args initial_learning_rate A scalar float32 or float64 Tensor or a Python number. The initial learning rate. decay_steps A scalar int32 or int64 Tensor or a Python number. Number of steps to decay over. num_periods Number of periods in the cosine part of the decay. See computation above. alpha See computation above. beta See computation above. name String. Optional name of the operation. Defaults to 'LinearCosineDecay'. Methods from_config View source @classmethod from_config( config ) Instantiates a LearningRateSchedule from its config. Args config Output of get_config(). Returns A LearningRateSchedule instance. get_config View source get_config() __call__ View source __call__( step ) Call self as a function.
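The decay computation shown above translates directly into a runnable pure-Python function, which is handy for sanity-checking schedule values (a standalone sketch; the real schedule returns a scalar Tensor rather than a Python float):

```python
import math

def linear_cosine_decay(step, initial_learning_rate, decay_steps,
                        num_periods=0.5, alpha=0.0, beta=0.001):
    """Pure-Python version of the decayed_learning_rate computation above."""
    step = min(step, decay_steps)  # the schedule is flat past decay_steps
    linear_decay = (decay_steps - step) / decay_steps
    cosine_decay = 0.5 * (
        1 + math.cos(math.pi * 2 * num_periods * step / decay_steps))
    decayed = (alpha + linear_decay) * cosine_decay + beta
    return initial_learning_rate * decayed
```

With the defaults, step 0 yields `initial_learning_rate * 1.001` (the beta floor is added on top), and at `step == decay_steps` the rate bottoms out at `initial_learning_rate * beta`.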
tf.keras.experimental.LinearModel View source on GitHub Linear Model for regression and classification problems. Inherits From: Model, Layer, Module View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.experimental.LinearModel tf.keras.experimental.LinearModel( units=1, activation=None, use_bias=True, kernel_initializer='zeros', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, **kwargs ) This model approximates the following function: $$y = \beta + \sum_{i=1}^{N} w_{i} * x_{i}$$ where $$\beta$$ is the bias and $$w_{i}$$ is the weight for each feature. Example: model = LinearModel() model.compile(optimizer='sgd', loss='mse') model.fit(x, y, epochs=epochs) This model accepts sparse float inputs as well: Example: model = LinearModel() opt = tf.keras.optimizers.Adam() loss_fn = tf.keras.losses.MeanSquaredError() with tf.GradientTape() as tape: output = model(sparse_input) loss = tf.reduce_mean(loss_fn(target, output)) grads = tape.gradient(loss, model.weights) opt.apply_gradients(zip(grads, model.weights)) Args units Positive integer, output dimension without the batch size. activation Activation function to use. If you don't specify anything, no activation is applied. use_bias whether to calculate the bias/intercept for this model. If set to False, no bias/intercept will be used in calculations, e.g., the data is already centered. kernel_initializer Initializer for the kernel weights matrices. bias_initializer Initializer for the bias vector. kernel_regularizer regularizer for kernel vectors. bias_regularizer regularizer for bias vector. **kwargs The keyword arguments that are passed on to BaseLayer.__init__. Attributes distribute_strategy The tf.distribute.Strategy this model was created under. layers metrics_names Returns the model's display labels for all outputs. Note: metrics_names are available only after a keras.Model has been trained/evaluated on actual data.
inputs = tf.keras.layers.Input(shape=(3,)) outputs = tf.keras.layers.Dense(2)(inputs) model = tf.keras.models.Model(inputs=inputs, outputs=outputs) model.compile(optimizer="Adam", loss="mse", metrics=["mae"]) model.metrics_names [] x = np.random.random((2, 3)) y = np.random.randint(0, 2, (2, 2)) model.fit(x, y) model.metrics_names ['loss', 'mae'] inputs = tf.keras.layers.Input(shape=(3,)) d = tf.keras.layers.Dense(2, name='out') output_1 = d(inputs) output_2 = d(inputs) model = tf.keras.models.Model( inputs=inputs, outputs=[output_1, output_2]) model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"]) model.fit(x, (y, y)) model.metrics_names ['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae', 'out_1_acc'] run_eagerly Settable attribute indicating whether the model should run eagerly. Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls. By default, we will attempt to compile your model to a static graph to deliver the best execution performance. Methods compile View source compile( optimizer='rmsprop', loss=None, metrics=None, loss_weights=None, weighted_metrics=None, run_eagerly=None, steps_per_execution=None, **kwargs ) Configures the model for training. Arguments optimizer String (name of optimizer) or optimizer instance. See tf.keras.optimizers. loss String (name of objective function), objective function or tf.keras.losses.Loss instance. See tf.keras.losses. An objective function is any callable with the signature loss = fn(y_true, y_pred), where y_true = ground truth values with shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1]. y_pred = predicted values with shape = [batch_size, d0, .. dN]. It returns a weighted loss float tensor. 
If a custom Loss instance is used and reduction is set to NONE, return value has the shape [batch_size, d0, .. dN-1] i.e. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses. metrics List of metrics to be evaluated by the model during training and testing. Each of these can be a string (name of a built-in function), function or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=['accuracy']. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={'output_a': 'accuracy', 'output_b': ['accuracy', 'mse']}. You can also pass a list (len = len(outputs)) of lists of metrics such as metrics=[['accuracy'], ['accuracy', 'mse']] or metrics=['accuracy', ['accuracy', 'mse']]. When you pass the strings 'accuracy' or 'acc', we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the loss function used and the model output shape. We do a similar conversion for the strings 'crossentropy' and 'ce' as well. loss_weights Optional list or dictionary specifying scalar coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients. If a list, it is expected to have a 1:1 mapping to the model's outputs. If a dict, it is expected to map output names (strings) to scalar coefficients. weighted_metrics List of metrics to be evaluated and weighted by sample_weight or class_weight during training and testing.
run_eagerly Bool. Defaults to False. If True, this Model's logic will not be wrapped in a tf.function. Recommended to leave this as None unless your Model cannot be run inside a tf.function. steps_per_execution Int. Defaults to 1. The number of batches to run during each tf.function call. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs or small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if steps_per_execution is set to N, Callback.on_batch_begin and Callback.on_batch_end methods will only be called every N batches (i.e. before/after each tf.function execution). **kwargs Arguments supported for backwards compatibility only. Raises ValueError In case of invalid arguments for optimizer, loss or metrics. evaluate View source evaluate( x=None, y=None, batch_size=None, verbose=1, sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, return_dict=False ) Returns the loss value & metrics values for the model in test mode. Computation is done in batches (see the batch_size arg.) Arguments x Input data. It could be: A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). A dict mapping input names to the corresponding array/tensors, if the model has named inputs. A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights). A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights). A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit. y Target data. 
Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from the iterator/dataset). batch_size Integer or None. Number of samples per batch of computation. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches). verbose 0 or 1. Verbosity mode. 0 = silent, 1 = progress bar. sample_weight Optional Numpy array of weights for the test samples, used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, instead pass sample weights as the third element of x. steps Integer or None. Total number of steps (batches of samples) before declaring the evaluation round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, 'evaluate' will run until the dataset is exhausted. This argument is not supported with array inputs. callbacks List of keras.callbacks.Callback instances. List of callbacks to apply during evaluation. See callbacks. max_queue_size Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10. workers Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1. If 0, will execute the generator on the main thread. 
use_multiprocessing Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can't be passed easily to children processes. return_dict If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list. See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Returns Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs. Raises RuntimeError If model.evaluate is wrapped in tf.function. ValueError in case of invalid arguments. evaluate_generator View source evaluate_generator( generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0 ) Evaluates the model on a data generator. DEPRECATED: Model.evaluate now supports generators, so there is no longer any need to use this endpoint. fit View source fit( x=None, y=None, batch_size=None, epochs=1, verbose=1, callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_batch_size=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False ) Trains the model for a fixed number of epochs (iterations on a dataset). Arguments x Input data. It could be: A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). A dict mapping input names to the corresponding array/tensors, if the model has named inputs. A tf.data dataset. 
Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights). A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights). A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. y Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x). batch_size Integer or None. Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches). epochs Integer. Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided. Note that in conjunction with initial_epoch, epochs is to be understood as "final epoch". The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached. verbose 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (eg, in a production environment). callbacks List of keras.callbacks.Callback instances. List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on verbose argument to model.fit. validation_split Float between 0 and 1. Fraction of the training data to be used as validation data. 
The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. validation_data Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Note that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be: tuple (x_val, y_val) of Numpy arrays or tensors tuple (x_val, y_val, val_sample_weights) of Numpy arrays dataset For the first two cases, batch_size must be provided. For the last case, validation_steps could be provided. Note that validation_data does not support all the data types that are supported in x, e.g., dict, generator or keras.utils.Sequence. shuffle Boolean (whether to shuffle the training data before each epoch) or str (for 'batch'). This argument is ignored when x is a generator. 'batch' is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None. class_weight Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to "pay more attention" to samples from an under-represented class. sample_weight Optional Numpy array of weights for the training samples, used for weighting the loss function (during training only).
You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance, instead provide the sample_weights as the third element of x. initial_epoch Integer. Epoch at which to start training (useful for resuming a previous training run). steps_per_epoch Integer or None. Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and 'steps_per_epoch' is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. This argument is not supported with array inputs. validation_steps Only relevant if validation_data is provided and is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If 'validation_steps' is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If 'validation_steps' is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time. validation_batch_size Integer or None. Number of samples per validation batch. If unspecified, will default to batch_size. 
Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches). validation_freq Only relevant if validation data is provided. Integer or collections_abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs. max_queue_size Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10. workers Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1. If 0, will execute the generator on the main thread. use_multiprocessing Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can't be passed easily to children processes. Unpacking behavior for iterator-like inputs: A common pattern is to pass a tf.data.Dataset, generator, or tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length one tuple, effectively treating everything as 'x'. 
When yielding dicts, they should still adhere to the top-level tuple structure. e.g. ({"x0": x0, "x1": x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict. A notable unsupported data type is the namedtuple. The reason is that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form: namedtuple("example_tuple", ["y", "x"]) it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form: namedtuple("other_tuple", ["x", "y", "z"]) where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple. (Along with instructions to remedy the issue.) Returns A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable). Raises RuntimeError If the model was never compiled or, If model.fit is wrapped in tf.function. ValueError In case of mismatch between the provided input data and what the model expects or when the input data is empty. fit_generator View source fit_generator( generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0 ) Fits the model on data yielded batch-by-batch by a Python generator. DEPRECATED: Model.fit now supports generators, so there is no longer any need to use this endpoint. get_layer View source get_layer( name=None, index=None ) Retrieves a layer based on either its name (unique) or index. If name and index are both provided, index will take precedence. 
Indices are based on order of horizontal graph traversal (bottom-up). Arguments name String, name of layer. index Integer, index of layer. Returns A layer instance. Raises ValueError In case of invalid layer name or index. load_weights View source load_weights( filepath, by_name=False, skip_mismatch=False, options=None ) Loads all layer weights, either from a TensorFlow or an HDF5 weight file. If by_name is False weights are loaded based on the network's topology. This means the architecture should be the same as when the weights were saved. Note that layers that don't have weights are not taken into account in the topological ordering, so adding or removing layers is fine as long as they don't have weights. If by_name is True, weights are loaded into layers only if they share the same name. This is useful for fine-tuning or transfer-learning models where some of the layers have changed. Only topological loading (by_name=False) is supported when loading weights from the TensorFlow format. Note that topological loading differs slightly between TensorFlow and HDF5 formats for user-defined classes inheriting from tf.keras.Model: HDF5 loads based on a flattened list of weights, while the TensorFlow format loads based on the object-local names of attributes to which layers are assigned in the Model's constructor. Arguments filepath String, path to the weights file to load. For weight files in TensorFlow format, this is the file prefix (the same as was passed to save_weights). by_name Boolean, whether to load weights by name or by topological order. Only topological loading is supported for weight files in TensorFlow format. skip_mismatch Boolean, whether to skip loading of layers where there is a mismatch in the number of weights, or a mismatch in the shape of the weight (only valid when by_name=True). options Optional tf.train.CheckpointOptions object that specifies options for loading weights. 
Returns When loading a weight file in TensorFlow format, returns the same status object as tf.train.Checkpoint.restore. When graph building, restore ops are run automatically as soon as the network is built (on first call for user-defined classes inheriting from Model, immediately if it is already built). When loading weights in HDF5 format, returns None. Raises ImportError If h5py is not available and the weight file is in HDF5 format. ValueError If skip_mismatch is set to True when by_name is False. make_predict_function View source make_predict_function() Creates a function that executes one step of inference. This method can be overridden to support custom inference logic. This method is called by Model.predict and Model.predict_on_batch. Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.predict_step. This function is cached the first time Model.predict or Model.predict_on_batch is called. The cache is cleared whenever Model.compile is called. Returns Function. The function created by this method should accept a tf.data.Iterator, and return the outputs of the Model. make_test_function View source make_test_function() Creates a function that executes one step of evaluation. This method can be overridden to support custom evaluation logic. This method is called by Model.evaluate and Model.test_on_batch. Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.test_step. This function is cached the first time Model.evaluate or Model.test_on_batch is called. The cache is cleared whenever Model.compile is called. Returns Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_test_batch_end. 
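The caching behavior described for make_predict_function and make_test_function can be illustrated with a small stand-in class (a hypothetical sketch, not the Keras implementation): the step function is built once, returned from the cache on subsequent calls, and invalidated when compile is called.

```python
class CachedStepModel:
    """Illustrative stand-in for the make_*_function caching pattern."""

    def __init__(self):
        self._predict_function = None

    def make_predict_function(self):
        # Build the step function only once; reuse the cached one afterwards.
        if self._predict_function is None:
            def predict_function(iterator):
                # Stand-in for delegating each batch to predict_step.
                return [self.predict_step(batch) for batch in iterator]
            self._predict_function = predict_function
        return self._predict_function

    def predict_step(self, batch):
        return batch  # identity stand-in for the real inference logic

    def compile(self):
        # Compiling clears the cached step function, as described above.
        self._predict_function = None
```

Calling make_predict_function twice returns the same function object; after compile, a fresh one is created.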
make_train_function View source make_train_function() Creates a function that executes one step of training. This method can be overridden to support custom training logic. This method is called by Model.fit and Model.train_on_batch. Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual training logic to Model.train_step. This function is cached the first time Model.fit or Model.train_on_batch is called. The cache is cleared whenever Model.compile is called. Returns Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_train_batch_end, such as {'loss': 0.2, 'accuracy': 0.7}. predict View source predict( x, batch_size=None, verbose=0, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False ) Generates output predictions for the input samples. Computation is done in batches. This method is designed for performance with large-scale inputs. For small numbers of inputs that fit in one batch, using __call__ directly is recommended for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. Note also that test loss is not affected by regularization layers like noise and dropout. Arguments x Input samples. It could be: A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). A tf.data dataset. A generator or keras.utils.Sequence instance. A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit. batch_size Integer or None. Number of samples per batch. If unspecified, batch_size will default to 32.
Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches). verbose Verbosity mode, 0 or 1. steps Total number of steps (batches of samples) before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict will run until the input dataset is exhausted. callbacks List of keras.callbacks.Callback instances. List of callbacks to apply during prediction. See callbacks. max_queue_size Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10. workers Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1. If 0, will execute the generator on the main thread. use_multiprocessing Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can't be passed easily to children processes. See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods. Returns Numpy array(s) of predictions. Raises RuntimeError If model.predict is wrapped in tf.function. ValueError In case of mismatch between the provided input data and the model's expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.
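Predict processes array inputs in batches of batch_size (32 by default). The splitting can be pictured with a small hypothetical helper, not part of the Keras API:

```python
def iter_batches(samples, batch_size=32):
    """Yield successive slices of `samples`, mimicking how predict
    batches array inputs (illustrative helper, not part of the API)."""
    for start in range(0, len(samples), batch_size):
        yield samples[start:start + batch_size]

# Example: 70 samples with batch_size=32 -> batches of 32, 32, and 6.
sizes = [len(b) for b in iter_batches(list(range(70)), batch_size=32)]
```

The final batch is simply smaller when the sample count is not a multiple of batch_size.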
predict_generator View source predict_generator( generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0 ) Generates predictions for the input samples from a data generator. DEPRECATED: Model.predict now supports generators, so there is no longer any need to use this endpoint. predict_on_batch View source predict_on_batch( x ) Returns predictions for a single batch of samples. Arguments x Input data. It could be: - A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). - A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). Returns Numpy array(s) of predictions. Raises RuntimeError If model.predict_on_batch is wrapped in tf.function. ValueError In case of mismatch between given number of inputs and expectations of the model. predict_step View source predict_step( data ) The logic for one inference step. This method can be overridden to support custom inference logic. This method is called by Model.make_predict_function. This method should contain the mathematical logic for one step of inference. This typically includes the forward pass. Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_predict_function, which can also be overridden. Arguments data A nested structure of Tensors. Returns The result of one inference step, typically the output of calling the Model on data. reset_metrics View source reset_metrics() Resets the state of all the metrics in the model. 
Examples: inputs = tf.keras.layers.Input(shape=(3,)) outputs = tf.keras.layers.Dense(2)(inputs) model = tf.keras.models.Model(inputs=inputs, outputs=outputs) model.compile(optimizer="Adam", loss="mse", metrics=["mae"]) x = np.random.random((2, 3)) y = np.random.randint(0, 2, (2, 2)) _ = model.fit(x, y, verbose=0) assert all(float(m.result()) for m in model.metrics) model.reset_metrics() assert all(float(m.result()) == 0 for m in model.metrics) reset_states View source reset_states() save View source save( filepath, overwrite=True, include_optimizer=True, save_format=None, signatures=None, options=None, save_traces=True ) Saves the model to Tensorflow SavedModel or a single HDF5 file. Please see tf.keras.models.save_model or the Serialization and Saving guide for details. Arguments filepath String, PathLike, path to SavedModel or H5 file to save the model. overwrite Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt. include_optimizer If True, save optimizer's state together. save_format Either 'tf' or 'h5', indicating whether to save the model to Tensorflow SavedModel or HDF5. Defaults to 'tf' in TF 2.X, and 'h5' in TF 1.X. signatures Signatures to save with the SavedModel. Applicable to the 'tf' format only. Please see the signatures argument in tf.saved_model.save for details. options (only applies to SavedModel format) tf.saved_model.SaveOptions object that specifies options for saving to SavedModel. save_traces (only applies to SavedModel format) When enabled, the SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method. 
Example: from keras.models import load_model model.save('my_model.h5') # creates a HDF5 file 'my_model.h5' del model # deletes the existing model # returns a compiled model # identical to the previous one model = load_model('my_model.h5') save_weights View source save_weights( filepath, overwrite=True, save_format=None, options=None ) Saves all layer weights. Either saves in HDF5 or in TensorFlow format based on the save_format argument. When saving in HDF5 format, the weight file has: layer_names (attribute), a list of strings (ordered names of model layers). For every layer, a group named layer.name For every such layer group, a group attribute weight_names, a list of strings (ordered names of weights tensor of the layer). For every weight in the layer, a dataset storing the weight value, named after the weight tensor. When saving in TensorFlow format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any Layer instances or Optimizer instances assigned to object attributes. For networks constructed from inputs and outputs using tf.keras.Model(inputs, outputs), Layer instances used by the network are tracked/saved automatically. For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details. While the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded using Model.load_weights. Checkpoints saved using tf.train.Checkpoint.save should be restored using the corresponding tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over save_weights for training checkpoints. The TensorFlow format matches objects and variables by starting at a root object, self for save_weights, and greedily matching attribute names. 
For Model.save this is the Model, and for Checkpoint.save this is the Checkpoint even if the Checkpoint has a model attached. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model's variables. See the guide to training checkpoints for details on the TensorFlow format. Arguments filepath String or PathLike, path to the file to save the weights to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the '.h5' suffix causes weights to be saved in HDF5 format. overwrite Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt. save_format Either 'tf' or 'h5'. A filepath ending in '.h5' or '.keras' will default to HDF5 if save_format is None. Otherwise None defaults to 'tf'. options Optional tf.train.CheckpointOptions object that specifies options for saving weights. Raises ImportError If h5py is not available when attempting to save in HDF5 format. ValueError For invalid/unknown format arguments. summary View source summary( line_length=None, positions=None, print_fn=None ) Prints a string summary of the network. Arguments line_length Total length of printed lines (e.g. set this to adapt the display to different terminal window sizes). positions Relative or absolute positions of log elements in each line. If not provided, defaults to [.33, .55, .67, 1.]. print_fn Print function to use. Defaults to print. It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary. Raises ValueError if summary() is called before the model is built. test_on_batch View source test_on_batch( x, y=None, sample_weight=None, reset_metrics=True, return_dict=False ) Test the model on a single batch of samples. Arguments x Input data. 
It could be: - A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). - A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). A dict mapping input names to the corresponding array/tensors, if the model has named inputs. y Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). sample_weight Optional array of the same length as x, containing weights to apply to the model's loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. reset_metrics If True, the metrics returned will be only for this batch. If False, the metrics will be statefully accumulated across batches. return_dict If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list. Returns Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs. Raises RuntimeError If model.test_on_batch is wrapped in tf.function. ValueError In case of invalid user-provided arguments. test_step View source test_step( data ) The logic for one evaluation step. This method can be overridden to support custom evaluation logic. This method is called by Model.make_test_function. This function should contain the mathematical logic for one step of evaluation. This typically includes the forward pass, loss calculation, and metrics updates. Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_test_function, which can also be overridden. Arguments data A nested structure of Tensors. 
Returns A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_test_batch_end. Typically, the values of the Model's metrics are returned. to_json View source to_json( **kwargs ) Returns a JSON string containing the network configuration. To load a network from a JSON save file, use keras.models.model_from_json(json_string, custom_objects={}). Arguments **kwargs Additional keyword arguments to be passed to json.dumps(). Returns A JSON string. to_yaml View source to_yaml( **kwargs ) Returns a YAML string containing the network configuration. To load a network from a YAML save file, use keras.models.model_from_yaml(yaml_string, custom_objects={}). custom_objects should be a dictionary mapping the names of custom losses / layers / etc. to the corresponding functions / classes. Arguments **kwargs Additional keyword arguments to be passed to yaml.dump(). Returns A YAML string. Raises ImportError if the yaml module is not found. train_on_batch View source train_on_batch( x, y=None, sample_weight=None, class_weight=None, reset_metrics=True, return_dict=False ) Runs a single gradient update on a single batch of data. Arguments x Input data. It could be: A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). A dict mapping input names to the corresponding array/tensors, if the model has named inputs. y Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). sample_weight Optional array of the same length as x, containing weights to apply to the model's loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. 
class_weight Optional dictionary mapping class indices (integers) to a weight (float) to apply to the model's loss for the samples from this class during training. This can be useful to tell the model to "pay more attention" to samples from an under-represented class. reset_metrics If True, the metrics returned will be only for this batch. If False, the metrics will be statefully accumulated across batches. return_dict If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list. Returns Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs. Raises RuntimeError If model.train_on_batch is wrapped in tf.function. ValueError In case of invalid user-provided arguments. train_step View source train_step( data ) The logic for one training step. This method can be overridden to support custom training logic. This method is called by Model.make_train_function. This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates. Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden. Arguments data A nested structure of Tensors. Returns A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.
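The train_step contract described above (forward pass, loss, backpropagation, metric updates, then a dict of metric results) can be sketched with a minimal override. This is an illustrative sketch, not the library's internal implementation; CustomModel and the tiny random dataset are hypothetical, while compiled_loss and compiled_metrics are the TF 2.x hooks that compile() sets up:

```python
import numpy as np
import tensorflow as tf

class CustomModel(tf.keras.Model):
    """Minimal train_step override following the documented contract."""

    def train_step(self, data):
        x, y = data  # assumes fit() was called with plain (x, y) pairs
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)          # forward pass
            loss = self.compiled_loss(y, y_pred)     # loss calculation
        # Backpropagation.
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        # Metric updates.
        self.compiled_metrics.update_state(y, y_pred)
        # The returned dict is what gets passed to
        # tf.keras.callbacks.CallbackList.on_train_batch_end.
        return {m.name: m.result() for m in self.metrics}

inputs = tf.keras.Input(shape=(3,))
outputs = tf.keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(optimizer="sgd", loss="mse", metrics=["mae"])
history = model.fit(
    np.random.random((8, 3)), np.random.random((8, 1)), epochs=1, verbose=0)
```

Because train_step returns a metrics dict, the keys surface directly in history.history (here 'loss' and 'mae').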
tensorflow.keras.experimental.linearmodel
tf.keras.experimental.NoisyLinearCosineDecay View source on GitHub A LearningRateSchedule that uses a noisy linear cosine decay schedule. Inherits From: LearningRateSchedule View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.experimental.NoisyLinearCosineDecay tf.keras.experimental.NoisyLinearCosineDecay( initial_learning_rate, decay_steps, initial_variance=1.0, variance_decay=0.55, num_periods=0.5, alpha=0.0, beta=0.001, name=None ) See [Bello et al., ICML2017] Neural Optimizer Search with RL. https://arxiv.org/abs/1709.07417 For the idea of warm starts here controlled by num_periods, see [Loshchilov & Hutter, ICLR2016] SGDR: Stochastic Gradient Descent with Warm Restarts. https://arxiv.org/abs/1608.03983 Note that linear cosine decay is more aggressive than cosine decay and larger initial learning rates can typically be used. When training a model, it is often recommended to lower the learning rate as the training progresses. This schedule applies a noisy linear cosine decay function to an optimizer step, given a provided initial learning rate. It requires a step value to compute the decayed learning rate. You can just pass a TensorFlow variable that you increment at each training step. The schedule is a 1-arg callable that produces a decayed learning rate when passed the current optimizer step. This can be useful for changing the learning rate value across different invocations of optimizer functions. 
It is computed as: def decayed_learning_rate(step): step = min(step, decay_steps) linear_decay = (decay_steps - step) / decay_steps cosine_decay = 0.5 * ( 1 + cos(pi * 2 * num_periods * step / decay_steps)) decayed = (alpha + linear_decay + eps_t) * cosine_decay + beta return initial_learning_rate * decayed where eps_t is 0-centered Gaussian noise with variance initial_variance / (1 + global_step) ** variance_decay Example usage: decay_steps = 1000 lr_decayed_fn = ( tf.keras.experimental.NoisyLinearCosineDecay( initial_learning_rate, decay_steps)) You can pass this schedule directly into a tf.keras.optimizers.Optimizer as the learning rate. The learning rate schedule is also serializable and deserializable using tf.keras.optimizers.schedules.serialize and tf.keras.optimizers.schedules.deserialize. Returns A 1-arg callable learning rate schedule that takes the current optimizer step and outputs the decayed learning rate, a scalar Tensor of the same type as initial_learning_rate. Args initial_learning_rate A scalar float32 or float64 Tensor or a Python number. The initial learning rate. decay_steps A scalar int32 or int64 Tensor or a Python number. Number of steps to decay over. initial_variance initial variance for the noise. See computation above. variance_decay decay for the noise's variance. See computation above. num_periods Number of periods in the cosine part of the decay. See computation above. alpha See computation above. beta See computation above. name String. Optional name of the operation. Defaults to 'NoisyLinearCosineDecay'. Methods from_config View source @classmethod from_config( config ) Instantiates a LearningRateSchedule from its config. Args config Output of get_config(). Returns A LearningRateSchedule instance. get_config View source get_config() __call__ View source __call__( step ) Call self as a function.
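For intuition, the decay formula above can be checked numerically in plain Python. This is only a sketch of the deterministic part of the schedule (the noise term eps_t is set to 0 here), not the TensorFlow implementation; the function name is hypothetical:

```python
import math

def noisy_linear_cosine_decay(step, initial_learning_rate, decay_steps,
                              num_periods=0.5, alpha=0.0, beta=0.001,
                              eps_t=0.0):
    """Deterministic part of the decay formula above (eps_t defaults to 0)."""
    step = min(step, decay_steps)  # the schedule is flat past decay_steps
    linear_decay = (decay_steps - step) / decay_steps
    cosine_decay = 0.5 * (
        1 + math.cos(math.pi * 2 * num_periods * step / decay_steps))
    decayed = (alpha + linear_decay + eps_t) * cosine_decay + beta
    return initial_learning_rate * decayed

# At step 0 the rate is initial_learning_rate * (1 + beta); by decay_steps
# it has decayed to initial_learning_rate * beta.
print(noisy_linear_cosine_decay(0, 0.1, 1000))     # ~0.1001
print(noisy_linear_cosine_decay(1000, 0.1, 1000))  # ~0.0001
```

Note how beta sets the floor of the schedule: with the default num_periods=0.5, the cosine factor reaches 0 exactly at decay_steps, so only the beta term survives.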
tensorflow.keras.experimental.noisylinearcosinedecay
tf.keras.experimental.PeepholeLSTMCell View source on GitHub Equivalent to LSTMCell class but adds peephole connections. Inherits From: LSTMCell, Layer, Module View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.experimental.PeepholeLSTMCell tf.keras.experimental.PeepholeLSTMCell( units, activation='tanh', recurrent_activation='hard_sigmoid', use_bias=True, kernel_initializer='glorot_uniform', recurrent_initializer='orthogonal', bias_initializer='zeros', unit_forget_bias=True, kernel_regularizer=None, recurrent_regularizer=None, bias_regularizer=None, kernel_constraint=None, recurrent_constraint=None, bias_constraint=None, dropout=0.0, recurrent_dropout=0.0, **kwargs ) Peephole connections allow the gates to utilize the previous internal state as well as the previous hidden state (which is what LSTMCell is limited to). This allows PeepholeLSTMCell to better learn precise timings over LSTMCell. From Gers et al., 2002: "We find that LSTM augmented by 'peephole connections' from its internal cells to its multiplicative gates can learn the fine distinction between sequences of spikes spaced either 50 or 49 time steps apart without the help of any short training exemplars." The peephole implementation is based on: Sak et al., 2014 Example: # Create 2 PeepholeLSTMCells peephole_lstm_cells = [PeepholeLSTMCell(size) for size in [128, 256]] # Create a layer composed sequentially of the peephole LSTM cells. layer = RNN(peephole_lstm_cells) input = keras.Input((timesteps, input_dim)) output = layer(input) Methods get_dropout_mask_for_cell View source get_dropout_mask_for_cell( inputs, training, count=1 ) Get the dropout mask for RNN cell's input. It will create mask based on context if there isn't any existing cached mask. If a new mask is generated, it will update the cache in the cell. Args inputs The input tensor whose shape will be used to generate dropout mask. 
training Boolean tensor, whether it's in training mode; dropout will be ignored in non-training mode. count Int, how many dropout masks will be generated. It is useful for cells that have internal weights fused together. Returns List of mask tensors, generated or cached masks based on context. get_initial_state View source get_initial_state( inputs=None, batch_size=None, dtype=None ) get_recurrent_dropout_mask_for_cell View source get_recurrent_dropout_mask_for_cell( inputs, training, count=1 ) Get the recurrent dropout mask for RNN cell. It will create a mask based on context if there isn't any existing cached mask. If a new mask is generated, it will update the cache in the cell. Args inputs The input tensor whose shape will be used to generate the dropout mask. training Boolean tensor, whether it's in training mode; dropout will be ignored in non-training mode. count Int, how many dropout masks will be generated. It is useful for cells that have internal weights fused together. Returns List of mask tensors, generated or cached masks based on context. reset_dropout_mask View source reset_dropout_mask() Reset the cached dropout masks, if any. It is important for the RNN layer to invoke this in its call() method so that the cached mask is cleared before calling cell.call(). The mask should be cached across timesteps within the same batch, but shouldn't be cached between batches. Otherwise it will introduce unreasonable bias against certain indices of data within the batch. reset_recurrent_dropout_mask View source reset_recurrent_dropout_mask() Reset the cached recurrent dropout masks, if any. It is important for the RNN layer to invoke this in its call() method so that the cached mask is cleared before calling cell.call(). The mask should be cached across timesteps within the same batch, but shouldn't be cached between batches. Otherwise it will introduce unreasonable bias against certain indices of data within the batch.
tensorflow.keras.experimental.peepholelstmcell
tf.keras.experimental.SequenceFeatures View source on GitHub A layer for sequence input. Inherits From: Layer, Module View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.experimental.SequenceFeatures tf.keras.experimental.SequenceFeatures( feature_columns, trainable=True, name=None, **kwargs ) All feature_columns must be sequence dense columns with the same sequence_length. The output of this method can be fed into sequence networks, such as RNN. The output of this method is a 3D Tensor of shape [batch_size, T, D]. T is the maximum sequence length for this batch, which could differ from batch to batch. If multiple feature_columns are given with Di num_elements each, their outputs are concatenated. So, the final Tensor has shape [batch_size, T, D0 + D1 + ... + Dn]. Example: # Behavior of some cells or feature columns may depend on whether we are in # training or inference mode, e.g. applying dropout. training = True rating = sequence_numeric_column('rating') watches = sequence_categorical_column_with_identity( 'watches', num_buckets=1000) watches_embedding = embedding_column(watches, dimension=10) columns = [rating, watches_embedding] sequence_input_layer = SequenceFeatures(columns) features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) sequence_input, sequence_length = sequence_input_layer( features, training=training) sequence_length_mask = tf.sequence_mask(sequence_length) rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size) rnn_layer = tf.keras.layers.RNN(rnn_cell, return_state=True) outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask, training=training) Args feature_columns An iterable of dense sequence columns. Valid columns are an embedding_column that wraps a sequence_categorical_column_with_*, or a sequence_numeric_column. trainable Boolean, whether the layer's variables will be updated via gradient descent during training. name Name to give to the SequenceFeatures. 
**kwargs Keyword arguments to construct a layer. Raises ValueError If any of the feature_columns is not a SequenceDenseColumn.
tensorflow.keras.experimental.sequencefeatures
tf.keras.experimental.WideDeepModel View source on GitHub Wide & Deep Model for regression and classification problems. Inherits From: Model, Layer, Module View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.experimental.WideDeepModel tf.keras.experimental.WideDeepModel( linear_model, dnn_model, activation=None, **kwargs ) This model jointly trains a linear and a DNN model. Example: linear_model = LinearModel() dnn_model = keras.Sequential([keras.layers.Dense(units=64), keras.layers.Dense(units=1)]) combined_model = WideDeepModel(linear_model, dnn_model) combined_model.compile(optimizer=['sgd', 'adam'], loss='mse', metrics=['mse']) # define dnn_inputs and linear_inputs as separate numpy arrays or # a single numpy array if dnn_inputs is same as linear_inputs. combined_model.fit([linear_inputs, dnn_inputs], y, epochs=epochs) # or define a single `tf.data.Dataset` that contains a single tensor or # separate tensors for dnn_inputs and linear_inputs. dataset = tf.data.Dataset.from_tensors(([linear_inputs, dnn_inputs], y)) combined_model.fit(dataset, epochs=epochs) Both the linear and DNN models can be pre-compiled and trained separately before joint training: Example: linear_model = LinearModel() linear_model.compile('adagrad', 'mse') linear_model.fit(linear_inputs, y, epochs=epochs) dnn_model = keras.Sequential([keras.layers.Dense(units=1)]) dnn_model.compile('rmsprop', 'mse') dnn_model.fit(dnn_inputs, y, epochs=epochs) combined_model = WideDeepModel(linear_model, dnn_model) combined_model.compile(optimizer=['sgd', 'adam'], loss='mse', metrics=['mse']) combined_model.fit([linear_inputs, dnn_inputs], y, epochs=epochs) Args linear_model a premade LinearModel, its output must match the output of the DNN model. dnn_model a tf.keras.Model, its output must match the output of the linear model. activation Activation function. Set it to None to maintain a linear activation. **kwargs The keyword arguments that are passed on to BaseLayer.__init__. Allowed keyword arguments include name. 
Attributes distribute_strategy The tf.distribute.Strategy this model was created under. layers metrics_names Returns the model's display labels for all outputs. Note: metrics_names are available only after a keras.Model has been trained/evaluated on actual data. inputs = tf.keras.layers.Input(shape=(3,)) outputs = tf.keras.layers.Dense(2)(inputs) model = tf.keras.models.Model(inputs=inputs, outputs=outputs) model.compile(optimizer="Adam", loss="mse", metrics=["mae"]) model.metrics_names [] x = np.random.random((2, 3)) y = np.random.randint(0, 2, (2, 2)) model.fit(x, y) model.metrics_names ['loss', 'mae'] inputs = tf.keras.layers.Input(shape=(3,)) d = tf.keras.layers.Dense(2, name='out') output_1 = d(inputs) output_2 = d(inputs) model = tf.keras.models.Model( inputs=inputs, outputs=[output_1, output_2]) model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"]) model.fit(x, (y, y)) model.metrics_names ['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae', 'out_1_acc'] run_eagerly Settable attribute indicating whether the model should run eagerly. Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls. By default, we will attempt to compile your model to a static graph to deliver the best execution performance. Methods compile View source compile( optimizer='rmsprop', loss=None, metrics=None, loss_weights=None, weighted_metrics=None, run_eagerly=None, steps_per_execution=None, **kwargs ) Configures the model for training. Arguments optimizer String (name of optimizer) or optimizer instance. See tf.keras.optimizers. loss String (name of objective function), objective function or tf.keras.losses.Loss instance. See tf.keras.losses. An objective function is any callable with the signature loss = fn(y_true, y_pred), where y_true = ground truth values with shape = [batch_size, d0, .. 
dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1]. y_pred = predicted values with shape = [batch_size, d0, .. dN]. It returns a weighted loss float tensor. If a custom Loss instance is used and reduction is set to NONE, return value has the shape [batch_size, d0, .. dN-1] ie. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses. metrics List of metrics to be evaluated by the model during training and testing. Each of this can be a string (name of a built-in function), function or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=['accuracy']. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={'output_a': 'accuracy', 'output_b': ['accuracy', 'mse']}. You can also pass a list (len = len(outputs)) of lists of metrics such as metrics=[['accuracy'], ['accuracy', 'mse']] or metrics=['accuracy', ['accuracy', 'mse']]. When you pass the strings 'accuracy' or 'acc', we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the loss function used and the model output shape. We do a similar conversion for the strings 'crossentropy' and 'ce' as well. loss_weights Optional list or dictionary specifying scalar coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients. 
If a list, it is expected to have a 1:1 mapping to the model's outputs. If a dict, it is expected to map output names (strings) to scalar coefficients. weighted_metrics List of metrics to be evaluated and weighted by sample_weight or class_weight during training and testing. run_eagerly Bool. Defaults to False. If True, this Model's logic will not be wrapped in a tf.function. Recommended to leave this as None unless your Model cannot be run inside a tf.function. steps_per_execution Int. Defaults to 1. The number of batches to run during each tf.function call. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs or small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if steps_per_execution is set to N, Callback.on_batch_begin and Callback.on_batch_end methods will only be called every N batches (i.e. before/after each tf.function execution). **kwargs Arguments supported for backwards compatibility only. Raises ValueError In case of invalid arguments for optimizer, loss or metrics. evaluate View source evaluate( x=None, y=None, batch_size=None, verbose=1, sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, return_dict=False ) Returns the loss value & metrics values for the model in test mode. Computation is done in batches (see the batch_size arg.) Arguments x Input data. It could be: A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). A dict mapping input names to the corresponding array/tensors, if the model has named inputs. A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights). 
A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights). A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit. y Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from the iterator/dataset). batch_size Integer or None. Number of samples per batch of computation. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches). verbose 0 or 1. Verbosity mode. 0 = silent, 1 = progress bar. sample_weight Optional Numpy array of weights for the test samples, used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, instead pass sample weights as the third element of x. steps Integer or None. Total number of steps (batches of samples) before declaring the evaluation round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, 'evaluate' will run until the dataset is exhausted. This argument is not supported with array inputs. callbacks List of keras.callbacks.Callback instances. List of callbacks to apply during evaluation. See callbacks. max_queue_size Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. 
If unspecified, max_queue_size will default to 10. workers Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1. If 0, will execute the generator on the main thread. use_multiprocessing Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can't be passed easily to children processes. return_dict If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list. See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Returns Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs. Raises RuntimeError If model.evaluate is wrapped in tf.function. ValueError in case of invalid arguments. evaluate_generator View source evaluate_generator( generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0 ) Evaluates the model on a data generator. DEPRECATED: Model.evaluate now supports generators, so there is no longer any need to use this endpoint. fit View source fit( x=None, y=None, batch_size=None, epochs=1, verbose=1, callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_batch_size=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False ) Trains the model for a fixed number of epochs (iterations on a dataset). Arguments x Input data. 
It could be: A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). A dict mapping input names to the corresponding array/tensors, if the model has named inputs. A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights). A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights). A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. y Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x). batch_size Integer or None. Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches). epochs Integer. Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided. Note that in conjunction with initial_epoch, epochs is to be understood as "final epoch". The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached. verbose 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (eg, in a production environment). callbacks List of keras.callbacks.Callback instances. List of callbacks to apply during training. See tf.keras.callbacks. 
Note that tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on the verbose argument to model.fit. validation_split Float between 0 and 1. Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. validation_data Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Note that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be: tuple (x_val, y_val) of Numpy arrays or tensors tuple (x_val, y_val, val_sample_weights) of Numpy arrays dataset For the first two cases, batch_size must be provided. For the last case, validation_steps could be provided. Note that validation_data does not support all the data types that are supported in x, e.g., dict, generator or keras.utils.Sequence. shuffle Boolean (whether to shuffle the training data before each epoch) or str (for 'batch'). This argument is ignored when x is a generator. 'batch' is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None. class_weight Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only).
This can be useful to tell the model to "pay more attention" to samples from an under-represented class. sample_weight Optional Numpy array of weights for the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance; instead, provide the sample_weights as the third element of x. initial_epoch Integer. Epoch at which to start training (useful for resuming a previous training run). steps_per_epoch Integer or None. Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and steps_per_epoch is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. This argument is not supported with array inputs. validation_steps Only relevant if validation_data is provided and is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If validation_steps is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If validation_steps is specified and only part of the dataset is consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.
validation_batch_size Integer or None. Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches). validation_freq Only relevant if validation data is provided. Integer or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs. max_queue_size Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10. workers Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1. If 0, will execute the generator on the main thread. use_multiprocessing Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can't be passed easily to child processes. Unpacking behavior for iterator-like inputs: A common pattern is to pass a tf.data.Dataset, generator, or tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively.
Any other type provided will be wrapped in a length one tuple, effectively treating everything as 'x'. When yielding dicts, they should still adhere to the top-level tuple structure. e.g. ({"x0": x0, "x1": x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict. A notable unsupported data type is the namedtuple. The reason is that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form: namedtuple("example_tuple", ["y", "x"]) it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form: namedtuple("other_tuple", ["x", "y", "z"]) where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple. (Along with instructions to remedy the issue.) Returns A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable). Raises RuntimeError If the model was never compiled or, If model.fit is wrapped in tf.function. ValueError In case of mismatch between the provided input data and what the model expects or when the input data is empty. fit_generator View source fit_generator( generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0 ) Fits the model on data yielded batch-by-batch by a Python generator. DEPRECATED: Model.fit now supports generators, so there is no longer any need to use this endpoint. get_layer View source get_layer( name=None, index=None ) Retrieves a layer based on either its name (unique) or index. 
If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottom-up). Arguments name String, name of layer. index Integer, index of layer. Returns A layer instance. Raises ValueError In case of invalid layer name or index. load_weights View source load_weights( filepath, by_name=False, skip_mismatch=False, options=None ) Loads all layer weights, either from a TensorFlow or an HDF5 weight file. If by_name is False weights are loaded based on the network's topology. This means the architecture should be the same as when the weights were saved. Note that layers that don't have weights are not taken into account in the topological ordering, so adding or removing layers is fine as long as they don't have weights. If by_name is True, weights are loaded into layers only if they share the same name. This is useful for fine-tuning or transfer-learning models where some of the layers have changed. Only topological loading (by_name=False) is supported when loading weights from the TensorFlow format. Note that topological loading differs slightly between TensorFlow and HDF5 formats for user-defined classes inheriting from tf.keras.Model: HDF5 loads based on a flattened list of weights, while the TensorFlow format loads based on the object-local names of attributes to which layers are assigned in the Model's constructor. Arguments filepath String, path to the weights file to load. For weight files in TensorFlow format, this is the file prefix (the same as was passed to save_weights). by_name Boolean, whether to load weights by name or by topological order. Only topological loading is supported for weight files in TensorFlow format. skip_mismatch Boolean, whether to skip loading of layers where there is a mismatch in the number of weights, or a mismatch in the shape of the weight (only valid when by_name=True). options Optional tf.train.CheckpointOptions object that specifies options for loading weights. 
Returns When loading a weight file in TensorFlow format, returns the same status object as tf.train.Checkpoint.restore. When graph building, restore ops are run automatically as soon as the network is built (on first call for user-defined classes inheriting from Model, immediately if it is already built). When loading weights in HDF5 format, returns None. Raises ImportError If h5py is not available and the weight file is in HDF5 format. ValueError If skip_mismatch is set to True when by_name is False. make_predict_function View source make_predict_function() Creates a function that executes one step of inference. This method can be overridden to support custom inference logic. This method is called by Model.predict and Model.predict_on_batch. Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.predict_step. This function is cached the first time Model.predict or Model.predict_on_batch is called. The cache is cleared whenever Model.compile is called. Returns Function. The function created by this method should accept a tf.data.Iterator, and return the outputs of the Model. make_test_function View source make_test_function() Creates a function that executes one step of evaluation. This method can be overridden to support custom evaluation logic. This method is called by Model.evaluate and Model.test_on_batch. Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.test_step. This function is cached the first time Model.evaluate or Model.test_on_batch is called. The cache is cleared whenever Model.compile is called. Returns Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_test_batch_end.
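The save_weights/load_weights round trip described above can be sketched as follows. This is a minimal illustration, not from the original docs: the layer sizes and checkpoint path are arbitrary, and the models only need identical topology for by_name=False loading to work.

```python
import os
import tempfile

import numpy as np
import tensorflow as tf

def build():
    # A tiny functional model; the layer sizes are illustrative only.
    inputs = tf.keras.Input(shape=(4,))
    outputs = tf.keras.layers.Dense(2)(inputs)
    return tf.keras.Model(inputs, outputs)

model = build()
clone = build()

# In TensorFlow format, filepath is a prefix for the checkpoint files.
prefix = os.path.join(tempfile.mkdtemp(), "weights")
model.save_weights(prefix)
status = clone.load_weights(prefix)
status.assert_existing_objects_matched()  # every existing variable was restored

x = np.ones((1, 4), dtype="float32")
assert np.allclose(model(x).numpy(), clone(x).numpy())
```

The returned status object comes from tf.train.Checkpoint.restore, so its assertion methods can be used to verify that the restore actually matched the model's variables.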
make_train_function View source make_train_function() Creates a function that executes one step of training. This method can be overridden to support custom training logic. This method is called by Model.fit and Model.train_on_batch. Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual training logic to Model.train_step. This function is cached the first time Model.fit or Model.train_on_batch is called. The cache is cleared whenever Model.compile is called. Returns Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end, such as {'loss': 0.2, 'accuracy': 0.7}. predict View source predict( x, batch_size=None, verbose=0, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False ) Generates output predictions for the input samples. Computation is done in batches. This method is designed for performance with large-scale inputs. For small numbers of inputs that fit in one batch, directly using __call__ is recommended for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. Also note that test loss is not affected by regularization layers like noise and dropout. Arguments x Input samples. It could be: A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). A tf.data dataset. A generator or keras.utils.Sequence instance. A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit. batch_size Integer or None. Number of samples per batch. If unspecified, batch_size will default to 32.
Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches). verbose Verbosity mode, 0 or 1. steps Total number of steps (batches of samples) before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict will run until the input dataset is exhausted. callbacks List of keras.callbacks.Callback instances. List of callbacks to apply during prediction. See callbacks. max_queue_size Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10. workers Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1. If 0, will execute the generator on the main thread. use_multiprocessing Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can't be passed easily to child processes. See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods. Returns Numpy array(s) of predictions. Raises RuntimeError If model.predict is wrapped in tf.function. ValueError In case of mismatch between the provided input data and the model's expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.
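The predict-versus-__call__ distinction above can be sketched as follows; the toy model and input shapes are illustrative only.

```python
import numpy as np
import tensorflow as tf

# A toy model; shapes are illustrative.
inputs = tf.keras.Input(shape=(3,))
outputs = tf.keras.layers.Dense(2)(inputs)
model = tf.keras.Model(inputs, outputs)

x = np.random.random((10, 3)).astype("float32")

# Batched inference; returns a NumPy array.
preds = model.predict(x, batch_size=4, verbose=0)

# For a single small batch, calling the model directly avoids the
# per-call overhead of predict (use training=False for inference behavior).
direct = model(x, training=False).numpy()

assert preds.shape == (10, 2)
assert np.allclose(preds, direct, atol=1e-5)
```

Both paths compute the same forward pass here; predict only pays off when the input is large enough to need batching, callbacks, or generator handling.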
predict_generator View source predict_generator( generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0 ) Generates predictions for the input samples from a data generator. DEPRECATED: Model.predict now supports generators, so there is no longer any need to use this endpoint. predict_on_batch View source predict_on_batch( x ) Returns predictions for a single batch of samples. Arguments x Input data. It could be: - A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). - A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). Returns Numpy array(s) of predictions. Raises RuntimeError If model.predict_on_batch is wrapped in tf.function. ValueError In case of mismatch between given number of inputs and expectations of the model. predict_step View source predict_step( data ) The logic for one inference step. This method can be overridden to support custom inference logic. This method is called by Model.make_predict_function. This method should contain the mathematical logic for one step of inference. This typically includes the forward pass. Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_predict_function, which can also be overridden. Arguments data A nested structure of Tensors. Returns The result of one inference step, typically the output of calling the Model on data. reset_metrics View source reset_metrics() Resets the state of all the metrics in the model. 
Examples:
inputs = tf.keras.layers.Input(shape=(3,))
outputs = tf.keras.layers.Dense(2)(inputs)
model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
x = np.random.random((2, 3))
y = np.random.randint(0, 2, (2, 2))
_ = model.fit(x, y, verbose=0)
assert all(float(m.result()) for m in model.metrics)
model.reset_metrics()
assert all(float(m.result()) == 0 for m in model.metrics)
reset_states View source reset_states() save View source save( filepath, overwrite=True, include_optimizer=True, save_format=None, signatures=None, options=None, save_traces=True ) Saves the model to TensorFlow SavedModel or a single HDF5 file. Please see tf.keras.models.save_model or the Serialization and Saving guide for details. Arguments filepath String, PathLike, path to SavedModel or H5 file to save the model. overwrite Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt. include_optimizer If True, save the optimizer's state together with the model. save_format Either 'tf' or 'h5', indicating whether to save the model to TensorFlow SavedModel or HDF5. Defaults to 'tf' in TF 2.X, and 'h5' in TF 1.X. signatures Signatures to save with the SavedModel. Applicable to the 'tf' format only. Please see the signatures argument in tf.saved_model.save for details. options (only applies to SavedModel format) tf.saved_model.SaveOptions object that specifies options for saving to SavedModel. save_traces (only applies to SavedModel format) When enabled, the SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.
Example:
from keras.models import load_model

model.save('my_model.h5')  # creates an HDF5 file 'my_model.h5'
del model  # deletes the existing model

# returns a compiled model
# identical to the previous one
model = load_model('my_model.h5')
save_weights View source save_weights( filepath, overwrite=True, save_format=None, options=None ) Saves all layer weights. Either saves in HDF5 or in TensorFlow format based on the save_format argument. When saving in HDF5 format, the weight file has: layer_names (attribute), a list of strings (ordered names of model layers). For every layer, a group named layer.name. For every such layer group, a group attribute weight_names, a list of strings (ordered names of weights tensor of the layer). For every weight in the layer, a dataset storing the weight value, named after the weight tensor. When saving in TensorFlow format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any Layer instances or Optimizer instances assigned to object attributes. For networks constructed from inputs and outputs using tf.keras.Model(inputs, outputs), Layer instances used by the network are tracked/saved automatically. For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details. While the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded using Model.load_weights. Checkpoints saved using tf.train.Checkpoint.save should be restored using the corresponding tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over save_weights for training checkpoints. The TensorFlow format matches objects and variables by starting at a root object, self for save_weights, and greedily matching attribute names.
For Model.save this is the Model, and for Checkpoint.save this is the Checkpoint even if the Checkpoint has a model attached. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model's variables. See the guide to training checkpoints for details on the TensorFlow format. Arguments filepath String or PathLike, path to the file to save the weights to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the '.h5' suffix causes weights to be saved in HDF5 format. overwrite Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt. save_format Either 'tf' or 'h5'. A filepath ending in '.h5' or '.keras' will default to HDF5 if save_format is None. Otherwise None defaults to 'tf'. options Optional tf.train.CheckpointOptions object that specifies options for saving weights. Raises ImportError If h5py is not available when attempting to save in HDF5 format. ValueError For invalid/unknown format arguments. summary View source summary( line_length=None, positions=None, print_fn=None ) Prints a string summary of the network. Arguments line_length Total length of printed lines (e.g. set this to adapt the display to different terminal window sizes). positions Relative or absolute positions of log elements in each line. If not provided, defaults to [.33, .55, .67, 1.]. print_fn Print function to use. Defaults to print. It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary. Raises ValueError if summary() is called before the model is built. test_on_batch View source test_on_batch( x, y=None, sample_weight=None, reset_metrics=True, return_dict=False ) Test the model on a single batch of samples. Arguments x Input data. 
It could be: - A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). - A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). - A dict mapping input names to the corresponding array/tensors, if the model has named inputs. y Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or vice versa). sample_weight Optional array of the same length as x, containing weights to apply to the model's loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. reset_metrics If True, the metrics returned will be only for this batch. If False, the metrics will be statefully accumulated across batches. return_dict If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list. Returns Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs. Raises RuntimeError If model.test_on_batch is wrapped in tf.function. ValueError In case of invalid user-provided arguments. test_step View source test_step( data ) The logic for one evaluation step. This method can be overridden to support custom evaluation logic. This method is called by Model.make_test_function. This function should contain the mathematical logic for one step of evaluation. This typically includes the forward pass, loss calculation, and metrics updates. Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_test_function, which can also be overridden. Arguments data A nested structure of Tensors.
Returns A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_test_batch_end. Typically, the values of the Model's metrics are returned. to_json View source to_json( **kwargs ) Returns a JSON string containing the network configuration. To load a network from a JSON save file, use keras.models.model_from_json(json_string, custom_objects={}). Arguments **kwargs Additional keyword arguments to be passed to json.dumps(). Returns A JSON string. to_yaml View source to_yaml( **kwargs ) Returns a YAML string containing the network configuration. To load a network from a YAML save file, use keras.models.model_from_yaml(yaml_string, custom_objects={}). custom_objects should be a dictionary mapping the names of custom losses / layers / etc to the corresponding functions / classes. Arguments **kwargs Additional keyword arguments to be passed to yaml.dump(). Returns A YAML string. Raises ImportError if the yaml module is not found. train_on_batch View source train_on_batch( x, y=None, sample_weight=None, class_weight=None, reset_metrics=True, return_dict=False ) Runs a single gradient update on a single batch of data. Arguments x Input data. It could be: A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). A dict mapping input names to the corresponding array/tensors, if the model has named inputs. y Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or vice versa). sample_weight Optional array of the same length as x, containing weights to apply to the model's loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.
class_weight Optional dictionary mapping class indices (integers) to a weight (float) to apply to the model's loss for the samples from this class during training. This can be useful to tell the model to "pay more attention" to samples from an under-represented class. reset_metrics If True, the metrics returned will be only for this batch. If False, the metrics will be statefully accumulated across batches. return_dict If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list. Returns Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs. Raises RuntimeError If model.train_on_batch is wrapped in tf.function. ValueError In case of invalid user-provided arguments. train_step View source train_step( data ) The logic for one training step. This method can be overridden to support custom training logic. This method is called by Model.make_train_function. This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates. Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden. Arguments data A nested structure of Tensors. Returns A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.
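A minimal custom train_step override, as described above, might look like the following sketch. For simplicity it manages its own loss function rather than the one passed to compile(); the model architecture and data shapes are illustrative, not from the original docs.

```python
import numpy as np
import tensorflow as tf

class CustomModel(tf.keras.Model):
    """Overrides the logic for one training step."""

    loss_fn = tf.keras.losses.MeanSquaredError()

    def train_step(self, data):
        x, y = data  # assumes the iterator yields (inputs, targets) tuples
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.loss_fn(y, y_pred)
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        # Values returned here are passed to on_train_batch_end callbacks
        # and end up in History.history.
        return {"loss": loss}

inputs = tf.keras.Input(shape=(3,))
outputs = tf.keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(optimizer="adam")  # loss is handled inside train_step

x = np.random.random((8, 3)).astype("float32")
y = np.random.random((8, 1)).astype("float32")
history = model.fit(x, y, epochs=2, verbose=0)
```

Because fit delegates to train_step via make_train_function, everything else (callbacks, epochs, tf.function compilation) continues to work unchanged.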
Module: tf.keras.initializers Keras initializer serialization / deserialization. View aliases Main aliases tf.initializers Classes class Constant: Initializer that generates tensors with constant values. class GlorotNormal: The Glorot normal initializer, also called Xavier normal initializer. class GlorotUniform: The Glorot uniform initializer, also called Xavier uniform initializer. class HeNormal: He normal initializer. class HeUniform: He uniform variance scaling initializer. class Identity: Initializer that generates the identity matrix. class Initializer: Initializer base class: all Keras initializers inherit from this class. class LecunNormal: Lecun normal initializer. class LecunUniform: Lecun uniform initializer. class Ones: Initializer that generates tensors initialized to 1. class Orthogonal: Initializer that generates an orthogonal matrix. class RandomNormal: Initializer that generates tensors with a normal distribution. class RandomUniform: Initializer that generates tensors with a uniform distribution. class TruncatedNormal: Initializer that generates a truncated normal distribution. class VarianceScaling: Initializer capable of adapting its scale to the shape of weights tensors. class Zeros: Initializer that generates tensors initialized to 0. class constant: Initializer that generates tensors with constant values. class glorot_normal: The Glorot normal initializer, also called Xavier normal initializer. class glorot_uniform: The Glorot uniform initializer, also called Xavier uniform initializer. class he_normal: He normal initializer. class he_uniform: He uniform variance scaling initializer. class identity: Initializer that generates the identity matrix. class lecun_normal: Lecun normal initializer. class lecun_uniform: Lecun uniform initializer. class ones: Initializer that generates tensors initialized to 1. class orthogonal: Initializer that generates an orthogonal matrix. 
class random_normal: Initializer that generates tensors with a normal distribution. class random_uniform: Initializer that generates tensors with a uniform distribution. class truncated_normal: Initializer that generates a truncated normal distribution. class variance_scaling: Initializer capable of adapting its scale to the shape of weights tensors. class zeros: Initializer that generates tensors initialized to 0. Functions deserialize(...): Return an Initializer object from its config. get(...) serialize(...)
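Any of the classes listed above can be passed to a layer, either as an instance or by its string name. A brief sketch (the layer type and sizes are illustrative):

```python
import tensorflow as tf

# Pass an initializer instance, or refer to one by its serialized name.
layer = tf.keras.layers.Dense(
    4,
    kernel_initializer=tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.05),
    bias_initializer="zeros",
)
layer.build(input_shape=(None, 3))  # creates the kernel and bias variables

assert tuple(layer.kernel.shape) == (3, 4)
assert tuple(layer.bias.shape) == (4,)
```

String identifiers such as "zeros" are resolved through the get(...) function documented below.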
tf.keras.initializers.Constant Initializer that generates tensors with constant values. Inherits From: Initializer View aliases Main aliases tf.initializers.Constant, tf.initializers.constant, tf.keras.initializers.constant tf.keras.initializers.Constant( value=0 ) Also available via the shortcut function tf.keras.initializers.constant. Only scalar values are allowed. The constant value provided must be convertible to the dtype requested when calling the initializer. Examples:
# Standalone usage:
initializer = tf.keras.initializers.Constant(3.)
values = initializer(shape=(2, 2))

# Usage in a Keras layer:
initializer = tf.keras.initializers.Constant(3.)
layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)
Args value A Python scalar. Methods from_config View source @classmethod from_config( config ) Instantiates an initializer from a configuration dictionary. Example:
initializer = RandomUniform(-1, 1)
config = initializer.get_config()
initializer = RandomUniform.from_config(config)
Args config A Python dictionary, the output of get_config. Returns A tf.keras.initializers.Initializer instance. get_config View source get_config() Returns the configuration of the initializer as a JSON-serializable dict. Returns A JSON-serializable Python dict. __call__ View source __call__( shape, dtype=None, **kwargs ) Returns a tensor object initialized to self.value. Args shape Shape of the tensor. dtype Optional dtype of the tensor. If not specified, tf.keras.backend.floatx() is used, which defaults to float32 unless you configured it otherwise (via tf.keras.backend.set_floatx(float_dtype)). **kwargs Additional keyword arguments.
tensorflow.keras.initializers.constant
tf.keras.initializers.deserialize View source on GitHub Return an Initializer object from its config. View aliases Main aliases tf.initializers.deserialize Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.initializers.deserialize tf.keras.initializers.deserialize( config, custom_objects=None )
tensorflow.keras.initializers.deserialize
tf.keras.initializers.get View source on GitHub View aliases Main aliases tf.initializers.get Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.initializers.get tf.keras.initializers.get( identifier )
tensorflow.keras.initializers.get
tf.keras.initializers.GlorotNormal The Glorot normal initializer, also called Xavier normal initializer. Inherits From: VarianceScaling, Initializer View aliases Main aliases tf.initializers.GlorotNormal, tf.initializers.glorot_normal, tf.keras.initializers.glorot_normal tf.keras.initializers.GlorotNormal( seed=None ) Also available via the shortcut function tf.keras.initializers.glorot_normal. Draws samples from a truncated normal distribution centered on 0 with stddev = sqrt(2 / (fan_in + fan_out)) where fan_in is the number of input units in the weight tensor and fan_out is the number of output units in the weight tensor. Examples: # Standalone usage: initializer = tf.keras.initializers.GlorotNormal() values = initializer(shape=(2, 2)) # Usage in a Keras layer: initializer = tf.keras.initializers.GlorotNormal() layer = tf.keras.layers.Dense(3, kernel_initializer=initializer) Args seed A Python integer. An initializer created with a given seed will always produce the same random tensor for a given shape and dtype. References: Glorot et al., 2010 (pdf) Methods from_config View source @classmethod from_config( config ) Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1) config = initializer.get_config() initializer = RandomUniform.from_config(config) Args config A Python dictionary. It will typically be the output of get_config. Returns An Initializer instance. get_config View source get_config() Returns the configuration of the initializer as a JSON-serializable dict. Returns A JSON-serializable Python dict. __call__ View source __call__( shape, dtype=None, **kwargs ) Returns a tensor object initialized as specified by the initializer. Args shape Shape of the tensor. dtype Optional dtype of the tensor. Only floating point types are supported. 
If not specified, tf.keras.backend.floatx() is used, which defaults to float32 unless you configured it otherwise (via tf.keras.backend.set_floatx(float_dtype)). **kwargs Additional keyword arguments.
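The stddev formula above can be sanity-checked in plain Python; the helper name and fan values below are illustrative, not part of the TensorFlow API:

```python
import math

def glorot_normal_stddev(fan_in, fan_out):
    # stddev = sqrt(2 / (fan_in + fan_out)), per the formula above
    return math.sqrt(2.0 / (fan_in + fan_out))

# e.g. a Dense layer mapping 784 inputs to 256 outputs:
stddev = glorot_normal_stddev(784, 256)  # ≈ 0.0439
```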
tensorflow.keras.initializers.glorotnormal
tf.keras.initializers.GlorotUniform The Glorot uniform initializer, also called Xavier uniform initializer. Inherits From: VarianceScaling, Initializer View aliases Main aliases tf.initializers.GlorotUniform, tf.initializers.glorot_uniform, tf.keras.initializers.glorot_uniform tf.keras.initializers.GlorotUniform( seed=None ) Also available via the shortcut function tf.keras.initializers.glorot_uniform. Draws samples from a uniform distribution within [-limit, limit], where limit = sqrt(6 / (fan_in + fan_out)) (fan_in is the number of input units in the weight tensor and fan_out is the number of output units). Examples: # Standalone usage: initializer = tf.keras.initializers.GlorotUniform() values = initializer(shape=(2, 2)) # Usage in a Keras layer: initializer = tf.keras.initializers.GlorotUniform() layer = tf.keras.layers.Dense(3, kernel_initializer=initializer) Args seed A Python integer. An initializer created with a given seed will always produce the same random tensor for a given shape and dtype. References: Glorot et al., 2010 (pdf) Methods from_config View source @classmethod from_config( config ) Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1) config = initializer.get_config() initializer = RandomUniform.from_config(config) Args config A Python dictionary. It will typically be the output of get_config. Returns An Initializer instance. get_config View source get_config() Returns the configuration of the initializer as a JSON-serializable dict. Returns A JSON-serializable Python dict. __call__ View source __call__( shape, dtype=None, **kwargs ) Returns a tensor object initialized as specified by the initializer. Args shape Shape of the tensor. dtype Optional dtype of the tensor. Only floating point types are supported. 
If not specified, tf.keras.backend.floatx() is used, which defaults to float32 unless you configured it otherwise (via tf.keras.backend.set_floatx(float_dtype)). **kwargs Additional keyword arguments.
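A pure-Python sketch of the limit formula above (the helper name is illustrative). Note that a uniform distribution on [-limit, limit] has variance limit**2 / 3 = 2 / (fan_in + fan_out), i.e. GlorotUniform targets the same variance as GlorotNormal:

```python
import math

def glorot_uniform_limit(fan_in, fan_out):
    # limit = sqrt(6 / (fan_in + fan_out)), per the formula above
    return math.sqrt(6.0 / (fan_in + fan_out))

limit = glorot_uniform_limit(784, 256)
# Variance of U(-limit, limit) is limit**2 / 3, which equals
# 2 / (fan_in + fan_out) -- the variance GlorotNormal targets.
assert math.isclose(limit ** 2 / 3, 2.0 / (784 + 256))
```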
tensorflow.keras.initializers.glorotuniform
tf.keras.initializers.HeNormal He normal initializer. Inherits From: VarianceScaling, Initializer View aliases Main aliases tf.initializers.HeNormal, tf.initializers.he_normal, tf.keras.initializers.he_normal tf.keras.initializers.HeNormal( seed=None ) Also available via the shortcut function tf.keras.initializers.he_normal. It draws samples from a truncated normal distribution centered on 0 with stddev = sqrt(2 / fan_in) where fan_in is the number of input units in the weight tensor. Examples: # Standalone usage: initializer = tf.keras.initializers.HeNormal() values = initializer(shape=(2, 2)) # Usage in a Keras layer: initializer = tf.keras.initializers.HeNormal() layer = tf.keras.layers.Dense(3, kernel_initializer=initializer) Arguments seed A Python integer. An initializer created with a given seed will always produce the same random tensor for a given shape and dtype. References: He et al., 2015 (pdf) Methods from_config View source @classmethod from_config( config ) Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1) config = initializer.get_config() initializer = RandomUniform.from_config(config) Args config A Python dictionary. It will typically be the output of get_config. Returns An Initializer instance. get_config View source get_config() Returns the configuration of the initializer as a JSON-serializable dict. Returns A JSON-serializable Python dict. __call__ View source __call__( shape, dtype=None, **kwargs ) Returns a tensor object initialized as specified by the initializer. Args shape Shape of the tensor. dtype Optional dtype of the tensor. Only floating point types are supported. If not specified, tf.keras.backend.floatx() is used, which defaults to float32 unless you configured it otherwise (via tf.keras.backend.set_floatx(float_dtype)). **kwargs Additional keyword arguments.
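A pure-Python sketch of the stddev formula above (the helper name and fan value are illustrative). Unlike the Glorot initializers, only fan_in enters the formula:

```python
import math

def he_normal_stddev(fan_in):
    # stddev = sqrt(2 / fan_in); only the number of input units matters
    return math.sqrt(2.0 / fan_in)

stddev = he_normal_stddev(512)  # 0.0625
```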
tensorflow.keras.initializers.henormal
tf.keras.initializers.HeUniform He uniform variance scaling initializer. Inherits From: VarianceScaling, Initializer View aliases Main aliases tf.initializers.HeUniform, tf.initializers.he_uniform, tf.keras.initializers.he_uniform tf.keras.initializers.HeUniform( seed=None ) Also available via the shortcut function tf.keras.initializers.he_uniform. Draws samples from a uniform distribution within [-limit, limit], where limit = sqrt(6 / fan_in) (fan_in is the number of input units in the weight tensor). Examples: # Standalone usage: initializer = tf.keras.initializers.HeUniform() values = initializer(shape=(2, 2)) # Usage in a Keras layer: initializer = tf.keras.initializers.HeUniform() layer = tf.keras.layers.Dense(3, kernel_initializer=initializer) Arguments seed A Python integer. An initializer created with a given seed will always produce the same random tensor for a given shape and dtype. References: He et al., 2015 (pdf) Methods from_config View source @classmethod from_config( config ) Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1) config = initializer.get_config() initializer = RandomUniform.from_config(config) Args config A Python dictionary. It will typically be the output of get_config. Returns An Initializer instance. get_config View source get_config() Returns the configuration of the initializer as a JSON-serializable dict. Returns A JSON-serializable Python dict. __call__ View source __call__( shape, dtype=None, **kwargs ) Returns a tensor object initialized as specified by the initializer. Args shape Shape of the tensor. dtype Optional dtype of the tensor. Only floating point types are supported. If not specified, tf.keras.backend.floatx() is used, which defaults to float32 unless you configured it otherwise (via tf.keras.backend.set_floatx(float_dtype)). **kwargs Additional keyword arguments.
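A pure-Python sketch of the limit formula above (the helper name is illustrative). The uniform bound is chosen so that the variance matches HeNormal's:

```python
import math

def he_uniform_limit(fan_in):
    # limit = sqrt(6 / fan_in), per the formula above
    return math.sqrt(6.0 / fan_in)

limit = he_uniform_limit(512)
# Variance of U(-limit, limit) is limit**2 / 3 = 2 / fan_in,
# the same variance HeNormal targets.
assert math.isclose(limit ** 2 / 3, 2.0 / 512)
```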
tensorflow.keras.initializers.heuniform
tf.keras.initializers.Identity Initializer that generates the identity matrix. Inherits From: Initializer View aliases Main aliases tf.initializers.Identity, tf.initializers.identity, tf.keras.initializers.identity tf.keras.initializers.Identity( gain=1.0 ) Also available via the shortcut function tf.keras.initializers.identity. Only usable for generating 2D matrices. Examples: # Standalone usage: initializer = tf.keras.initializers.Identity() values = initializer(shape=(2, 2)) # Usage in a Keras layer: initializer = tf.keras.initializers.Identity() layer = tf.keras.layers.Dense(3, kernel_initializer=initializer) Args gain Multiplicative factor to apply to the identity matrix. Methods from_config View source @classmethod from_config( config ) Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1) config = initializer.get_config() initializer = RandomUniform.from_config(config) Args config A Python dictionary. It will typically be the output of get_config. Returns An Initializer instance. get_config View source get_config() Returns the configuration of the initializer as a JSON-serializable dict. Returns A JSON-serializable Python dict. __call__ View source __call__( shape, dtype=None, **kwargs ) Returns a tensor object initialized to a 2D identity matrix. Args shape Shape of the tensor. It should have exactly rank 2. dtype Optional dtype of the tensor. Only floating point types are supported. If not specified, tf.keras.backend.floatx() is used, which defaults to float32 unless you configured it otherwise (via tf.keras.backend.set_floatx(float_dtype)). **kwargs Additional keyword arguments.
tensorflow.keras.initializers.identity
tf.keras.initializers.Initializer View source on GitHub Initializer base class: all Keras initializers inherit from this class. View aliases Main aliases tf.initializers.Initializer Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.initializers.Initializer Initializers should implement a __call__ method with the following signature: def __call__(self, shape, dtype=None, **kwargs): # returns a tensor of shape `shape` and dtype `dtype` # containing values drawn from a distribution of your choice. Optionally, you can also implement the method get_config and the class method from_config in order to support serialization -- just like with any Keras object. Here's a simple example: a random normal initializer. import tensorflow as tf class ExampleRandomNormal(tf.keras.initializers.Initializer): def __init__(self, mean, stddev): self.mean = mean self.stddev = stddev def __call__(self, shape, dtype=None, **kwargs): return tf.random.normal( shape, mean=self.mean, stddev=self.stddev, dtype=dtype) def get_config(self): # To support serialization return {"mean": self.mean, "stddev": self.stddev} Note that we don't have to implement from_config in the example above since the constructor arguments of the class and the keys in the config returned by get_config are the same. In this case, the default from_config works fine. Methods from_config View source @classmethod from_config( config ) Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1) config = initializer.get_config() initializer = RandomUniform.from_config(config) Args config A Python dictionary, the output of get_config. Returns A tf.keras.initializers.Initializer instance. get_config View source get_config() Returns the configuration of the initializer as a JSON-serializable dict. Returns A JSON-serializable Python dict.
__call__ View source __call__( shape, dtype=None, **kwargs ) Returns a tensor object initialized as specified by the initializer. Args shape Shape of the tensor. dtype Optional dtype of the tensor. **kwargs Additional keyword arguments.
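The get_config / from_config round trip described above can be sketched without TensorFlow by stubbing out tf.random.normal with the standard library; the class below mirrors the documented example but is otherwise illustrative:

```python
import random

class ExampleRandomNormal:
    """Plain-Python mimic of a Keras initializer's serialization contract."""

    def __init__(self, mean, stddev):
        self.mean = mean
        self.stddev = stddev

    def __call__(self, shape, dtype=None, **kwargs):
        # Stand-in for tf.random.normal: a flat list of Gaussian samples.
        n = 1
        for d in shape:
            n *= d
        return [random.gauss(self.mean, self.stddev) for _ in range(n)]

    def get_config(self):
        # To support serialization
        return {"mean": self.mean, "stddev": self.stddev}

    @classmethod
    def from_config(cls, config):
        # Works because the config keys match the constructor arguments,
        # which is exactly why the default from_config suffices in Keras.
        return cls(**config)

init = ExampleRandomNormal(mean=0.0, stddev=0.05)
clone = ExampleRandomNormal.from_config(init.get_config())
assert clone.get_config() == init.get_config()
```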
tensorflow.keras.initializers.initializer
tf.keras.initializers.LecunNormal Lecun normal initializer. Inherits From: VarianceScaling, Initializer View aliases Main aliases tf.initializers.LecunNormal, tf.initializers.lecun_normal, tf.keras.initializers.lecun_normal tf.keras.initializers.LecunNormal( seed=None ) Also available via the shortcut function tf.keras.initializers.lecun_normal. Initializers allow you to pre-specify an initialization strategy, encoded in the Initializer object, without knowing the shape and dtype of the variable being initialized. Draws samples from a truncated normal distribution centered on 0 with stddev = sqrt(1 / fan_in) where fan_in is the number of input units in the weight tensor. Examples: # Standalone usage: initializer = tf.keras.initializers.LecunNormal() values = initializer(shape=(2, 2)) # Usage in a Keras layer: initializer = tf.keras.initializers.LecunNormal() layer = tf.keras.layers.Dense(3, kernel_initializer=initializer) Arguments seed A Python integer. Used to seed the random generator. References: Self-Normalizing Neural Networks, Klambauer et al., 2017 (pdf) Efficient Backprop, Lecun et al., 1998 Methods from_config View source @classmethod from_config( config ) Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1) config = initializer.get_config() initializer = RandomUniform.from_config(config) Args config A Python dictionary. It will typically be the output of get_config. Returns An Initializer instance. get_config View source get_config() Returns the configuration of the initializer as a JSON-serializable dict. Returns A JSON-serializable Python dict. __call__ View source __call__( shape, dtype=None, **kwargs ) Returns a tensor object initialized as specified by the initializer. Args shape Shape of the tensor. dtype Optional dtype of the tensor. Only floating point types are supported. 
If not specified, tf.keras.backend.floatx() is used, which defaults to float32 unless you configured it otherwise (via tf.keras.backend.set_floatx(float_dtype)). **kwargs Additional keyword arguments.
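A pure-Python sketch of the stddev formula above (the helper name and fan value are illustrative):

```python
import math

def lecun_normal_stddev(fan_in):
    # stddev = sqrt(1 / fan_in), per the formula above
    return math.sqrt(1.0 / fan_in)

stddev = lecun_normal_stddev(256)  # 0.0625
```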
tensorflow.keras.initializers.lecunnormal
tf.keras.initializers.LecunUniform Lecun uniform initializer. Inherits From: VarianceScaling, Initializer View aliases Main aliases tf.initializers.LecunUniform, tf.initializers.lecun_uniform, tf.keras.initializers.lecun_uniform tf.keras.initializers.LecunUniform( seed=None ) Also available via the shortcut function tf.keras.initializers.lecun_uniform. Draws samples from a uniform distribution within [-limit, limit], where limit = sqrt(3 / fan_in) (fan_in is the number of input units in the weight tensor). Examples: # Standalone usage: initializer = tf.keras.initializers.LecunUniform() values = initializer(shape=(2, 2)) # Usage in a Keras layer: initializer = tf.keras.initializers.LecunUniform() layer = tf.keras.layers.Dense(3, kernel_initializer=initializer) Arguments seed A Python integer. An initializer created with a given seed will always produce the same random tensor for a given shape and dtype. References: Self-Normalizing Neural Networks, Klambauer et al., 2017 (pdf) Efficient Backprop, Lecun et al., 1998 Methods from_config View source @classmethod from_config( config ) Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1) config = initializer.get_config() initializer = RandomUniform.from_config(config) Args config A Python dictionary. It will typically be the output of get_config. Returns An Initializer instance. get_config View source get_config() Returns the configuration of the initializer as a JSON-serializable dict. Returns A JSON-serializable Python dict. __call__ View source __call__( shape, dtype=None, **kwargs ) Returns a tensor object initialized as specified by the initializer. Args shape Shape of the tensor. dtype Optional dtype of the tensor. Only floating point types are supported. If not specified, tf.keras.backend.floatx() is used, which defaults to float32 unless you configured it otherwise (via tf.keras.backend.set_floatx(float_dtype)). **kwargs Additional keyword arguments.
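A pure-Python sketch of the limit formula above (the helper name is illustrative). The bound is chosen so the variance matches LecunNormal's 1 / fan_in:

```python
import math

def lecun_uniform_limit(fan_in):
    # limit = sqrt(3 / fan_in), per the formula above
    return math.sqrt(3.0 / fan_in)

limit = lecun_uniform_limit(300)
# Variance of U(-limit, limit) is limit**2 / 3 = 1 / fan_in,
# the same variance LecunNormal targets.
assert math.isclose(limit ** 2 / 3, 1.0 / 300)
```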
tensorflow.keras.initializers.lecununiform
tf.keras.initializers.Ones Initializer that generates tensors initialized to 1. Inherits From: ones_initializer, Initializer View aliases Main aliases tf.initializers.Ones, tf.initializers.ones, tf.keras.initializers.ones Also available via the shortcut function tf.keras.initializers.ones. Examples: # Standalone usage: initializer = tf.keras.initializers.Ones() values = initializer(shape=(2, 2)) # Usage in a Keras layer: initializer = tf.keras.initializers.Ones() layer = tf.keras.layers.Dense(3, kernel_initializer=initializer) Methods from_config View source @classmethod from_config( config ) Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1) config = initializer.get_config() initializer = RandomUniform.from_config(config) Args config A Python dictionary. It will typically be the output of get_config. Returns An Initializer instance. get_config View source get_config() Returns the configuration of the initializer as a JSON-serializable dict. Returns A JSON-serializable Python dict. __call__ View source __call__( shape, dtype=None, **kwargs ) Returns a tensor object initialized as specified by the initializer. Args shape Shape of the tensor. dtype Optional dtype of the tensor. Only numeric or boolean dtypes are supported. If not specified, tf.keras.backend.floatx() is used, which defaults to float32 unless you configured it otherwise (via tf.keras.backend.set_floatx(float_dtype)). **kwargs Additional keyword arguments.
tensorflow.keras.initializers.ones
tf.keras.initializers.Orthogonal Initializer that generates an orthogonal matrix. Inherits From: Initializer View aliases Main aliases tf.initializers.Orthogonal, tf.initializers.orthogonal, tf.keras.initializers.orthogonal tf.keras.initializers.Orthogonal( gain=1.0, seed=None ) Also available via the shortcut function tf.keras.initializers.orthogonal. If the shape of the tensor to initialize is two-dimensional, it is initialized with an orthogonal matrix obtained from the QR decomposition of a matrix of random numbers drawn from a normal distribution. If the matrix has fewer rows than columns then the output will have orthogonal rows. Otherwise, the output will have orthogonal columns. If the shape of the tensor to initialize is more than two-dimensional, a matrix of shape (shape[0] * ... * shape[n - 2], shape[n - 1]) is initialized, where n is the length of the shape vector. The matrix is subsequently reshaped to give a tensor of the desired shape. Examples: # Standalone usage: initializer = tf.keras.initializers.Orthogonal() values = initializer(shape=(2, 2)) # Usage in a Keras layer: initializer = tf.keras.initializers.Orthogonal() layer = tf.keras.layers.Dense(3, kernel_initializer=initializer) Args gain multiplicative factor to apply to the orthogonal matrix seed A Python integer. An initializer created with a given seed will always produce the same random tensor for a given shape and dtype. References: Saxe et al., 2014 (pdf) Methods from_config View source @classmethod from_config( config ) Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1) config = initializer.get_config() initializer = RandomUniform.from_config(config) Args config A Python dictionary. It will typically be the output of get_config. Returns An Initializer instance. get_config View source get_config() Returns the configuration of the initializer as a JSON-serializable dict. Returns A JSON-serializable Python dict. 
__call__ View source __call__( shape, dtype=None, **kwargs ) Returns a tensor object initialized to an orthogonal matrix. Args shape Shape of the tensor. dtype Optional dtype of the tensor. Only floating point types are supported. If not specified, tf.keras.backend.floatx() is used, which defaults to float32 unless you configured it otherwise (via tf.keras.backend.set_floatx(float_dtype)). **kwargs Additional keyword arguments.
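What "orthogonal columns" means can be sketched in pure Python with a small Gram-Schmidt pass; note this is only an illustration of the property, the actual initializer uses a QR decomposition of a random normal matrix as described above:

```python
def gram_schmidt(columns):
    """Orthonormalize a list of vectors (illustrative, not the TF algorithm)."""
    ortho = []
    for v in columns:
        w = list(v)
        for u in ortho:
            # Subtract the projection of w onto each earlier basis vector.
            dot = sum(a * b for a, b in zip(w, u))
            w = [a - dot * b for a, b in zip(w, u)]
        norm = sum(a * a for a in w) ** 0.5
        ortho.append([a / norm for a in w])
    return ortho

q = gram_schmidt([[3.0, 4.0], [1.0, 0.0]])
# The resulting vectors are mutually orthogonal and unit-length.
assert abs(sum(a * b for a, b in zip(q[0], q[1]))) < 1e-9
```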
tensorflow.keras.initializers.orthogonal
tf.keras.initializers.RandomNormal View source on GitHub Initializer that generates tensors with a normal distribution. Inherits From: random_normal_initializer, Initializer View aliases Main aliases tf.initializers.RandomNormal, tf.initializers.random_normal, tf.keras.initializers.random_normal tf.keras.initializers.RandomNormal( mean=0.0, stddev=0.05, seed=None ) Also available via the shortcut function tf.keras.initializers.random_normal. Examples: # Standalone usage: initializer = tf.keras.initializers.RandomNormal(mean=0., stddev=1.) values = initializer(shape=(2, 2)) # Usage in a Keras layer: initializer = tf.keras.initializers.RandomNormal(mean=0., stddev=1.) layer = tf.keras.layers.Dense(3, kernel_initializer=initializer) Args mean a python scalar or a scalar tensor. Mean of the random values to generate. stddev a python scalar or a scalar tensor. Standard deviation of the random values to generate. seed A Python integer. An initializer created with a given seed will always produce the same random tensor for a given shape and dtype. Methods from_config View source @classmethod from_config( config ) Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1) config = initializer.get_config() initializer = RandomUniform.from_config(config) Args config A Python dictionary. It will typically be the output of get_config. Returns An Initializer instance. get_config View source get_config() Returns the configuration of the initializer as a JSON-serializable dict. Returns A JSON-serializable Python dict. __call__ View source __call__( shape, dtype=None, **kwargs ) Returns a tensor object initialized to random normal values. Args shape Shape of the tensor. dtype Optional dtype of the tensor. Only floating point types are supported. 
If not specified, tf.keras.backend.floatx() is used, which defaults to float32 unless you configured it otherwise (via tf.keras.backend.set_floatx(float_dtype)). **kwargs Additional keyword arguments.
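The seed contract described in the Args above (same seed, shape, and dtype give the same tensor) can be sketched with the standard library's random module as a stand-in for the initializer's random source; the function name is illustrative:

```python
import random

def sample_normal(shape, seed):
    # Stand-in for a seeded RandomNormal draw with mean=0.0, stddev=0.05.
    rng = random.Random(seed)
    n = shape[0] * shape[1]
    return [rng.gauss(0.0, 0.05) for _ in range(n)]

a = sample_normal((2, 2), seed=42)
b = sample_normal((2, 2), seed=42)
assert a == b  # same seed and shape -> identical values
```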
tensorflow.keras.initializers.randomnormal
tf.keras.initializers.RandomUniform View source on GitHub Initializer that generates tensors with a uniform distribution. Inherits From: random_uniform_initializer, Initializer View aliases Main aliases tf.initializers.RandomUniform, tf.initializers.random_uniform, tf.keras.initializers.random_uniform tf.keras.initializers.RandomUniform( minval=-0.05, maxval=0.05, seed=None ) Also available via the shortcut function tf.keras.initializers.random_uniform. Examples: # Standalone usage: initializer = tf.keras.initializers.RandomUniform(minval=0., maxval=1.) values = initializer(shape=(2, 2)) # Usage in a Keras layer: initializer = tf.keras.initializers.RandomUniform(minval=0., maxval=1.) layer = tf.keras.layers.Dense(3, kernel_initializer=initializer) Args minval A python scalar or a scalar tensor. Lower bound of the range of random values to generate (inclusive). maxval A python scalar or a scalar tensor. Upper bound of the range of random values to generate (exclusive). seed A Python integer. An initializer created with a given seed will always produce the same random tensor for a given shape and dtype. Methods from_config View source @classmethod from_config( config ) Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1) config = initializer.get_config() initializer = RandomUniform.from_config(config) Args config A Python dictionary. It will typically be the output of get_config. Returns An Initializer instance. get_config View source get_config() Returns the configuration of the initializer as a JSON-serializable dict. Returns A JSON-serializable Python dict. __call__ View source __call__( shape, dtype=None, **kwargs ) Returns a tensor object initialized as specified by the initializer. Args shape Shape of the tensor. dtype Optional dtype of the tensor. Only floating point and integer types are supported. 
If not specified, tf.keras.backend.floatx() is used, which defaults to float32 unless you configured it otherwise (via tf.keras.backend.set_floatx(float_dtype)). **kwargs Additional keyword arguments.
tensorflow.keras.initializers.randomuniform
tf.keras.initializers.serialize View source on GitHub View aliases Main aliases tf.initializers.serialize Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.initializers.serialize tf.keras.initializers.serialize( initializer )
tensorflow.keras.initializers.serialize
tf.keras.initializers.TruncatedNormal View source on GitHub Initializer that generates a truncated normal distribution. Inherits From: Initializer View aliases Main aliases tf.initializers.TruncatedNormal, tf.initializers.truncated_normal, tf.keras.initializers.truncated_normal tf.keras.initializers.TruncatedNormal( mean=0.0, stddev=0.05, seed=None ) Also available via the shortcut function tf.keras.initializers.truncated_normal. The values generated are similar to values from a tf.keras.initializers.RandomNormal initializer except that values more than two standard deviations from the mean are discarded and re-drawn. Examples: # Standalone usage: initializer = tf.keras.initializers.TruncatedNormal(mean=0., stddev=1.) values = initializer(shape=(2, 2)) # Usage in a Keras layer: initializer = tf.keras.initializers.TruncatedNormal(mean=0., stddev=1.) layer = tf.keras.layers.Dense(3, kernel_initializer=initializer) Args mean a python scalar or a scalar tensor. Mean of the random values to generate. stddev a python scalar or a scalar tensor. Standard deviation of the random values to generate. seed A Python integer. An initializer created with a given seed will always produce the same random tensor for a given shape and dtype. Methods from_config View source @classmethod from_config( config ) Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1) config = initializer.get_config() initializer = RandomUniform.from_config(config) Args config A Python dictionary. It will typically be the output of get_config. Returns An Initializer instance. get_config View source get_config() Returns the configuration of the initializer as a JSON-serializable dict. Returns A JSON-serializable Python dict. __call__ View source __call__( shape, dtype=None, **kwargs ) Returns a tensor object initialized to random normal values (truncated). Args shape Shape of the tensor. dtype Optional dtype of the tensor. 
Only floating point types are supported. If not specified, tf.keras.backend.floatx() is used, which defaults to float32 unless you configured it otherwise (via tf.keras.backend.set_floatx(float_dtype)). **kwargs Additional keyword arguments.
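The "discarded and re-drawn" rule above amounts to rejection sampling; a pure-Python sketch (the helper name is illustrative, and this is not the exact TF implementation):

```python
import random

def truncated_gauss(rng, mean, stddev):
    # Re-draw until the sample lies within two standard deviations of the mean.
    while True:
        x = rng.gauss(mean, stddev)
        if abs(x - mean) <= 2.0 * stddev:
            return x

rng = random.Random(0)
samples = [truncated_gauss(rng, 0.0, 1.0) for _ in range(10_000)]
assert all(abs(x) <= 2.0 for x in samples)
```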
tensorflow.keras.initializers.truncatednormal
tf.keras.initializers.VarianceScaling Initializer capable of adapting its scale to the shape of weights tensors. Inherits From: Initializer View aliases Main aliases tf.initializers.VarianceScaling, tf.initializers.variance_scaling, tf.keras.initializers.variance_scaling tf.keras.initializers.VarianceScaling( scale=1.0, mode='fan_in', distribution='truncated_normal', seed=None ) Also available via the shortcut function tf.keras.initializers.variance_scaling. With distribution="truncated_normal" or "untruncated_normal", samples are drawn from a truncated/untruncated normal distribution with a mean of zero and a standard deviation (after truncation, if used) stddev = sqrt(scale / n), where n is: number of input units in the weight tensor, if mode="fan_in" number of output units, if mode="fan_out" average of the numbers of input and output units, if mode="fan_avg" With distribution="uniform", samples are drawn from a uniform distribution within [-limit, limit], where limit = sqrt(3 * scale / n). Examples: # Standalone usage: initializer = tf.keras.initializers.VarianceScaling( scale=0.1, mode='fan_in', distribution='uniform') values = initializer(shape=(2, 2)) # Usage in a Keras layer: initializer = tf.keras.initializers.VarianceScaling( scale=0.1, mode='fan_in', distribution='uniform') layer = tf.keras.layers.Dense(3, kernel_initializer=initializer) Args scale Scaling factor (positive float). mode One of "fan_in", "fan_out", "fan_avg". distribution Random distribution to use. One of "truncated_normal", "untruncated_normal" and "uniform". seed A Python integer. An initializer created with a given seed will always produce the same random tensor for a given shape and dtype. Methods from_config View source @classmethod from_config( config ) Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1) config = initializer.get_config() initializer = RandomUniform.from_config(config) Args config A Python dictionary. 
It will typically be the output of get_config. Returns An Initializer instance. get_config View source get_config() Returns the configuration of the initializer as a JSON-serializable dict. Returns A JSON-serializable Python dict. __call__ View source __call__( shape, dtype=None, **kwargs ) Returns a tensor object initialized as specified by the initializer. Args shape Shape of the tensor. dtype Optional dtype of the tensor. Only floating point types are supported. If not specified, tf.keras.backend.floatx() is used, which defaults to float32 unless you configured it otherwise (via tf.keras.backend.set_floatx(float_dtype)). **kwargs Additional keyword arguments.
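The mode-dependent choice of n above can be sketched in plain Python; this also shows how the Glorot and He initializers are special cases of VarianceScaling (the helper name is illustrative):

```python
import math

def variance_scaling_stddev(scale, mode, fan_in, fan_out):
    # n depends on `mode`, per the rules above.
    if mode == "fan_in":
        n = fan_in
    elif mode == "fan_out":
        n = fan_out
    elif mode == "fan_avg":
        n = (fan_in + fan_out) / 2.0
    else:
        raise ValueError(f"unknown mode: {mode}")
    return math.sqrt(scale / n)

# GlorotNormal corresponds to scale=1.0, mode='fan_avg':
assert math.isclose(variance_scaling_stddev(1.0, "fan_avg", 784, 256),
                    math.sqrt(2.0 / (784 + 256)))
# HeNormal corresponds to scale=2.0, mode='fan_in':
assert variance_scaling_stddev(2.0, "fan_in", 512, 10) == 0.0625
```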
tensorflow.keras.initializers.variancescaling