tf.keras.layers.Softmax View source on GitHub Softmax activation function. Inherits From: Layer, Module View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.layers.Softmax tf.keras.layers.Softmax( axis=-1, **kwargs ) Example without a mask: inp = np.asarray([1., 2., 1.]) layer = tf.keras.layers.Softmax() layer(inp).numpy() array([0.21194157, 0.5761169 , 0.21194157], dtype=float32) Example with a mask (masked positions are assigned ~zero probability): mask = np.asarray([True, False, True], dtype=bool) layer(inp, mask).numpy() array([0.5, 0. , 0.5], dtype=float32) Input shape: Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model. Output shape: Same shape as the input. Arguments axis Integer, or list of integers, axis along which the softmax normalization is applied. Call arguments: inputs: The inputs, or logits to the softmax layer. mask: A boolean mask of the same shape as inputs. Defaults to None. Returns Softmaxed output with the same shape as inputs.
tf.keras.layers.SpatialDropout1D View source on GitHub Spatial 1D version of Dropout. Inherits From: Dropout, Layer, Module View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.layers.SpatialDropout1D tf.keras.layers.SpatialDropout1D( rate, **kwargs ) This version performs the same function as Dropout; however, it drops entire 1D feature maps instead of individual elements. If adjacent frames within feature maps are strongly correlated (as is normally the case in early convolution layers) then regular dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, SpatialDropout1D will help promote independence between feature maps and should be used instead. Arguments rate Float between 0 and 1. Fraction of the input units to drop. Call arguments: inputs: A 3D tensor. training: Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (doing nothing). Input shape: 3D tensor with shape: (samples, timesteps, channels) Output shape: Same as input. References: Efficient Object Localization Using Convolutional Networks
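To make "drops entire 1D feature maps" concrete, here is a minimal sketch (the layer and rate come from this page; the input values are arbitrary). With training=True, each channel is either zeroed at every timestep or kept at every timestep, with kept values scaled by 1 / (1 - rate):

import tensorflow as tf

x = tf.ones((1, 4, 3))  # (samples, timesteps, channels)
layer = tf.keras.layers.SpatialDropout1D(rate=0.5)
y = layer(x, training=True)  # dropout is only active in training mode
print(y.numpy())
# One possible output -- each channel column is all 0.0 or all 2.0:
# [[[0. 2. 2.]
#   [0. 2. 2.]
#   [0. 2. 2.]
#   [0. 2. 2.]]]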
tf.keras.layers.SpatialDropout2D View source on GitHub Spatial 2D version of Dropout. Inherits From: Dropout, Layer, Module View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.layers.SpatialDropout2D tf.keras.layers.SpatialDropout2D( rate, data_format=None, **kwargs ) This version performs the same function as Dropout; however, it drops entire 2D feature maps instead of individual elements. If adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then regular dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, SpatialDropout2D will help promote independence between feature maps and should be used instead. Arguments rate Float between 0 and 1. Fraction of the input units to drop. data_format 'channels_first' or 'channels_last'. In 'channels_first' mode, the channels dimension (the depth) is at index 1; in 'channels_last' mode it is at index 3. It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". Call arguments: inputs: A 4D tensor. training: Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (doing nothing). Input shape: 4D tensor with shape: (samples, channels, rows, cols) if data_format='channels_first' or 4D tensor with shape: (samples, rows, cols, channels) if data_format='channels_last'. Output shape: Same as input. References: Efficient Object Localization Using Convolutional Networks
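The same check for the 2D case (a sketch with arbitrary values, channels_last by default): a dropped channel is zero at every spatial position, so each channel is constant over the rows and cols axes:

import tensorflow as tf

x = tf.ones((1, 2, 2, 3))  # (samples, rows, cols, channels)
layer = tf.keras.layers.SpatialDropout2D(rate=0.5)
y = layer(x, training=True)
per_position = tf.reshape(y, (-1, 3))  # one row per (row, col) position
print(per_position.numpy())  # every column is all 0.0 or all 2.0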
tf.keras.layers.SpatialDropout3D View source on GitHub Spatial 3D version of Dropout. Inherits From: Dropout, Layer, Module View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.layers.SpatialDropout3D tf.keras.layers.SpatialDropout3D( rate, data_format=None, **kwargs ) This version performs the same function as Dropout; however, it drops entire 3D feature maps instead of individual elements. If adjacent voxels within feature maps are strongly correlated (as is normally the case in early convolution layers) then regular dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, SpatialDropout3D will help promote independence between feature maps and should be used instead. Arguments rate Float between 0 and 1. Fraction of the input units to drop. data_format 'channels_first' or 'channels_last'. In 'channels_first' mode, the channels dimension (the depth) is at index 1; in 'channels_last' mode it is at index 4. It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". Call arguments: inputs: A 5D tensor. training: Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (doing nothing). Input shape: 5D tensor with shape: (samples, channels, dim1, dim2, dim3) if data_format='channels_first' or 5D tensor with shape: (samples, dim1, dim2, dim3, channels) if data_format='channels_last'. Output shape: Same as input. References: Efficient Object Localization Using Convolutional Networks
tf.keras.layers.StackedRNNCells View source on GitHub Wrapper allowing a stack of RNN cells to behave as a single cell. Inherits From: Layer, Module View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.layers.StackedRNNCells tf.keras.layers.StackedRNNCells( cells, **kwargs ) Used to implement efficient stacked RNNs. Arguments cells List of RNN cell instances. Examples: batch_size = 3 sentence_max_length = 5 n_features = 2 new_shape = (batch_size, sentence_max_length, n_features) x = tf.constant(np.reshape(np.arange(30), new_shape), dtype = tf.float32) rnn_cells = [tf.keras.layers.LSTMCell(128) for _ in range(2)] stacked_lstm = tf.keras.layers.StackedRNNCells(rnn_cells) lstm_layer = tf.keras.layers.RNN(stacked_lstm) result = lstm_layer(x) Attributes output_size state_size Methods get_initial_state View source get_initial_state( inputs=None, batch_size=None, dtype=None )
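The get_initial_state method listed above returns one state structure per wrapped cell; for an LSTMCell that structure is an [h, c] pair. A brief sketch reusing the variables from the example above (the exact nesting is what RNN threads between steps and may differ across TF versions):

states = stacked_lstm.get_initial_state(batch_size=batch_size, dtype=tf.float32)
print(len(states))         # 2 -- one state structure per LSTMCell in the stack
print(states[0][0].shape)  # (3, 128) -- the h vector of the first cell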
tf.keras.layers.Subtract View source on GitHub Layer that subtracts two inputs. Inherits From: Layer, Module View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.layers.Subtract tf.keras.layers.Subtract( **kwargs ) It takes as input a list of tensors of size 2, both of the same shape, and returns a single tensor, (inputs[0] - inputs[1]), also of the same shape. Examples: import keras input1 = keras.layers.Input(shape=(16,)) x1 = keras.layers.Dense(8, activation='relu')(input1) input2 = keras.layers.Input(shape=(32,)) x2 = keras.layers.Dense(8, activation='relu')(input2) # Equivalent to subtracted = keras.layers.subtract([x1, x2]) subtracted = keras.layers.Subtract()([x1, x2]) out = keras.layers.Dense(4)(subtracted) model = keras.models.Model(inputs=[input1, input2], outputs=out) Arguments **kwargs standard layer keyword arguments.
tf.keras.layers.ThresholdedReLU View source on GitHub Thresholded Rectified Linear Unit. Inherits From: Layer, Module View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.layers.ThresholdedReLU tf.keras.layers.ThresholdedReLU( theta=1.0, **kwargs ) It follows: f(x) = x for x > theta, f(x) = 0 otherwise. Input shape: Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model. Output shape: Same shape as the input. Arguments theta Float >= 0. Threshold location of activation.
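Since this page has no usage snippet, here is a minimal one (input values arbitrary); note the comparison is strict, so a value exactly equal to theta is zeroed:

import numpy as np
import tensorflow as tf

layer = tf.keras.layers.ThresholdedReLU(theta=1.0)
inp = np.asarray([-1.0, 0.5, 1.0, 1.5], dtype=np.float32)
print(layer(inp).numpy())  # [0.  0.  0.  1.5]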
tf.keras.layers.TimeDistributed View source on GitHub This wrapper allows you to apply a layer to every temporal slice of an input. Inherits From: Wrapper, Layer, Module View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.layers.TimeDistributed tf.keras.layers.TimeDistributed( layer, **kwargs ) The input should be at least 3D, and the dimension of index one will be considered to be the temporal dimension. Consider a batch of 32 video samples, where each sample is a 128x128 RGB image with channels_last data format, across 10 timesteps. The batch input shape is (32, 10, 128, 128, 3). You can then use TimeDistributed to apply the same Conv2D layer to each of the 10 timesteps, independently: inputs = tf.keras.Input(shape=(10, 128, 128, 3)) conv_2d_layer = tf.keras.layers.Conv2D(64, (3, 3)) outputs = tf.keras.layers.TimeDistributed(conv_2d_layer)(inputs) outputs.shape TensorShape([None, 10, 126, 126, 64]) Because TimeDistributed applies the same instance of Conv2D to each of the timesteps, the same set of weights is used at each timestep. Arguments layer a tf.keras.layers.Layer instance. Call arguments: inputs: Input tensor. training: Python boolean indicating whether the layer should behave in training mode or in inference mode. This argument is passed to the wrapped layer (only if the layer supports this argument). mask: Binary tensor of shape (samples, timesteps) indicating whether a given timestep should be masked. This argument is passed to the wrapped layer (only if the layer supports this argument). Raises ValueError If not initialized with a tf.keras.layers.Layer instance.
tf.keras.layers.UpSampling1D View source on GitHub Upsampling layer for 1D inputs. Inherits From: Layer, Module View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.layers.UpSampling1D tf.keras.layers.UpSampling1D( size=2, **kwargs ) Repeats each temporal step size times along the time axis. Examples: input_shape = (2, 2, 3) x = np.arange(np.prod(input_shape)).reshape(input_shape) print(x) [[[ 0 1 2] [ 3 4 5]] [[ 6 7 8] [ 9 10 11]]] y = tf.keras.layers.UpSampling1D(size=2)(x) print(y) tf.Tensor( [[[ 0 1 2] [ 0 1 2] [ 3 4 5] [ 3 4 5]] [[ 6 7 8] [ 6 7 8] [ 9 10 11] [ 9 10 11]]], shape=(2, 4, 3), dtype=int64) Arguments size Integer. Upsampling factor. Input shape: 3D tensor with shape: (batch_size, steps, features). Output shape: 3D tensor with shape: (batch_size, upsampled_steps, features).
tf.keras.layers.UpSampling2D View source on GitHub Upsampling layer for 2D inputs. Inherits From: Layer, Module View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.layers.UpSampling2D tf.keras.layers.UpSampling2D( size=(2, 2), data_format=None, interpolation='nearest', **kwargs ) Repeats the rows and columns of the data by size[0] and size[1] respectively. Examples: input_shape = (2, 2, 1, 3) x = np.arange(np.prod(input_shape)).reshape(input_shape) print(x) [[[[ 0 1 2]] [[ 3 4 5]]] [[[ 6 7 8]] [[ 9 10 11]]]] y = tf.keras.layers.UpSampling2D(size=(1, 2))(x) print(y) tf.Tensor( [[[[ 0 1 2] [ 0 1 2]] [[ 3 4 5] [ 3 4 5]]] [[[ 6 7 8] [ 6 7 8]] [[ 9 10 11] [ 9 10 11]]]], shape=(2, 2, 2, 3), dtype=int64) Arguments size Int, or tuple of 2 integers. The upsampling factors for rows and columns. data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch_size, height, width, channels) while channels_first corresponds to inputs with shape (batch_size, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". interpolation A string, one of nearest or bilinear. Input shape: 4D tensor with shape: If data_format is "channels_last": (batch_size, rows, cols, channels) If data_format is "channels_first": (batch_size, channels, rows, cols) Output shape: 4D tensor with shape: If data_format is "channels_last": (batch_size, upsampled_rows, upsampled_cols, channels) If data_format is "channels_first": (batch_size, channels, upsampled_rows, upsampled_cols)
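The interpolation argument above has no example, so here is a short sketch of the difference (input values arbitrary): 'nearest' repeats pixels, while 'bilinear' interpolates between them.

import numpy as np
import tensorflow as tf

x = np.arange(4, dtype=np.float32).reshape((1, 2, 2, 1))
nearest = tf.keras.layers.UpSampling2D(size=2, interpolation='nearest')(x)
bilinear = tf.keras.layers.UpSampling2D(size=2, interpolation='bilinear')(x)
print(nearest.numpy()[0, :, :, 0])   # each pixel repeated into a 2x2 block
print(bilinear.numpy()[0, :, :, 0])  # smoothly interpolated values in between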
tf.keras.layers.UpSampling3D View source on GitHub Upsampling layer for 3D inputs. Inherits From: Layer, Module View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.layers.UpSampling3D tf.keras.layers.UpSampling3D( size=(2, 2, 2), data_format=None, **kwargs ) Repeats the 1st, 2nd and 3rd dimensions of the data by size[0], size[1] and size[2] respectively. Examples: input_shape = (2, 1, 2, 1, 3) x = tf.constant(1, shape=input_shape) y = tf.keras.layers.UpSampling3D(size=2)(x) print(y.shape) (2, 2, 4, 2, 3) Arguments size Int, or tuple of 3 integers. The upsampling factors for dim1, dim2 and dim3. data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels) while channels_first corresponds to inputs with shape (batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". Input shape: 5D tensor with shape: If data_format is "channels_last": (batch_size, dim1, dim2, dim3, channels) If data_format is "channels_first": (batch_size, channels, dim1, dim2, dim3) Output shape: 5D tensor with shape: If data_format is "channels_last": (batch_size, upsampled_dim1, upsampled_dim2, upsampled_dim3, channels) If data_format is "channels_first": (batch_size, channels, upsampled_dim1, upsampled_dim2, upsampled_dim3)
tf.keras.layers.Wrapper View source on GitHub Abstract wrapper base class. Inherits From: Layer, Module View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.layers.Wrapper tf.keras.layers.Wrapper( layer, **kwargs ) Wrappers take another layer and augment it in various ways. Do not use this class as a layer, it is only an abstract base class. Two usable wrappers are the TimeDistributed and Bidirectional wrappers. Arguments layer The layer to be wrapped.
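As a concrete illustration of "take another layer and augment it", a minimal sketch of a custom subclass; ScaleOutput and its scale argument are inventions for this sketch, not part of the API. Wrapper stores the wrapped layer as self.layer and builds it for you:

import tensorflow as tf

class ScaleOutput(tf.keras.layers.Wrapper):
    """Hypothetical wrapper: calls the wrapped layer, then scales its output."""

    def __init__(self, layer, scale=0.5, **kwargs):
        super().__init__(layer, **kwargs)
        self.scale = scale

    def call(self, inputs):
        return self.layer(inputs) * self.scale  # self.layer is the wrapped layer

wrapped = ScaleOutput(tf.keras.layers.Dense(4), scale=0.1)
print(wrapped(tf.ones((2, 8))).shape)  # (2, 4)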
tf.keras.layers.ZeroPadding1D View source on GitHub Zero-padding layer for 1D input (e.g. temporal sequence). Inherits From: Layer, Module View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.layers.ZeroPadding1D tf.keras.layers.ZeroPadding1D( padding=1, **kwargs ) Examples: input_shape = (2, 2, 3) x = np.arange(np.prod(input_shape)).reshape(input_shape) print(x) [[[ 0 1 2] [ 3 4 5]] [[ 6 7 8] [ 9 10 11]]] y = tf.keras.layers.ZeroPadding1D(padding=2)(x) print(y) tf.Tensor( [[[ 0 0 0] [ 0 0 0] [ 0 1 2] [ 3 4 5] [ 0 0 0] [ 0 0 0]] [[ 0 0 0] [ 0 0 0] [ 6 7 8] [ 9 10 11] [ 0 0 0] [ 0 0 0]]], shape=(2, 6, 3), dtype=int64) Arguments padding Int, or tuple of int (length 2), or dictionary. If int: How many zeros to add at the beginning and end of the padding dimension (axis 1). If tuple of int (length 2): How many zeros to add at the beginning and the end of the padding dimension ((left_pad, right_pad)). Input shape: 3D tensor with shape (batch_size, axis_to_pad, features) Output shape: 3D tensor with shape (batch_size, padded_axis, features)
tf.keras.layers.ZeroPadding2D View source on GitHub Zero-padding layer for 2D input (e.g. picture). Inherits From: Layer, Module View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.layers.ZeroPadding2D tf.keras.layers.ZeroPadding2D( padding=(1, 1), data_format=None, **kwargs ) This layer can add rows and columns of zeros at the top, bottom, left and right side of an image tensor. Examples: input_shape = (1, 1, 2, 2) x = np.arange(np.prod(input_shape)).reshape(input_shape) print(x) [[[[0 1] [2 3]]]] y = tf.keras.layers.ZeroPadding2D(padding=1)(x) print(y) tf.Tensor( [[[[0 0] [0 0] [0 0] [0 0]] [[0 0] [0 1] [2 3] [0 0]] [[0 0] [0 0] [0 0] [0 0]]]], shape=(1, 3, 4, 2), dtype=int64) Arguments padding Int, or tuple of 2 ints, or tuple of 2 tuples of 2 ints. If int: the same symmetric padding is applied to height and width. If tuple of 2 ints: interpreted as two different symmetric padding values for height and width: (symmetric_height_pad, symmetric_width_pad). If tuple of 2 tuples of 2 ints: interpreted as ((top_pad, bottom_pad), (left_pad, right_pad)) data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch_size, height, width, channels) while channels_first corresponds to inputs with shape (batch_size, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". Input shape: 4D tensor with shape: If data_format is "channels_last": (batch_size, rows, cols, channels) If data_format is "channels_first": (batch_size, channels, rows, cols) Output shape: 4D tensor with shape: If data_format is "channels_last": (batch_size, padded_rows, padded_cols, channels) If data_format is "channels_first": (batch_size, channels, padded_rows, padded_cols)
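The ((top_pad, bottom_pad), (left_pad, right_pad)) form above has no example; a short sketch with arbitrary values:

import numpy as np
import tensorflow as tf

x = np.ones((1, 2, 2, 1))  # (batch, rows, cols, channels)
y = tf.keras.layers.ZeroPadding2D(padding=((1, 0), (0, 3)))(x)
print(y.shape)  # (1, 3, 5, 1): one row added on top, three columns on the right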
tf.keras.layers.ZeroPadding3D View source on GitHub Zero-padding layer for 3D data (spatial or spatio-temporal). Inherits From: Layer, Module View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.layers.ZeroPadding3D tf.keras.layers.ZeroPadding3D( padding=(1, 1, 1), data_format=None, **kwargs ) Examples: input_shape = (1, 1, 2, 2, 3) x = np.arange(np.prod(input_shape)).reshape(input_shape) y = tf.keras.layers.ZeroPadding3D(padding=2)(x) print(y.shape) (1, 5, 6, 6, 3) Arguments padding Int, or tuple of 3 ints, or tuple of 3 tuples of 2 ints. If int: the same symmetric padding is applied to all three spatial dimensions. If tuple of 3 ints: interpreted as three different symmetric padding values for the three spatial dimensions: (symmetric_dim1_pad, symmetric_dim2_pad, symmetric_dim3_pad). If tuple of 3 tuples of 2 ints: interpreted as ((left_dim1_pad, right_dim1_pad), (left_dim2_pad, right_dim2_pad), (left_dim3_pad, right_dim3_pad)) data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels) while channels_first corresponds to inputs with shape (batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". Input shape: 5D tensor with shape: If data_format is "channels_last": (batch_size, first_axis_to_pad, second_axis_to_pad, third_axis_to_pad, depth) If data_format is "channels_first": (batch_size, depth, first_axis_to_pad, second_axis_to_pad, third_axis_to_pad) Output shape: 5D tensor with shape: If data_format is "channels_last": (batch_size, first_padded_axis, second_padded_axis, third_padded_axis, depth) If data_format is "channels_first": (batch_size, depth, first_padded_axis, second_padded_axis, third_padded_axis)
Module: tf.keras.losses Built-in loss functions. View aliases Main aliases tf.losses Classes class BinaryCrossentropy: Computes the cross-entropy loss between true labels and predicted labels. class CategoricalCrossentropy: Computes the crossentropy loss between the labels and predictions. class CategoricalHinge: Computes the categorical hinge loss between y_true and y_pred. class CosineSimilarity: Computes the cosine similarity between labels and predictions. class Hinge: Computes the hinge loss between y_true and y_pred. class Huber: Computes the Huber loss between y_true and y_pred. class KLDivergence: Computes Kullback-Leibler divergence loss between y_true and y_pred. class LogCosh: Computes the logarithm of the hyperbolic cosine of the prediction error. class Loss: Loss base class. class MeanAbsoluteError: Computes the mean of absolute difference between labels and predictions. class MeanAbsolutePercentageError: Computes the mean absolute percentage error between y_true and y_pred. class MeanSquaredError: Computes the mean of squares of errors between labels and predictions. class MeanSquaredLogarithmicError: Computes the mean squared logarithmic error between y_true and y_pred. class Poisson: Computes the Poisson loss between y_true and y_pred. class Reduction: Types of loss reduction. class SparseCategoricalCrossentropy: Computes the crossentropy loss between the labels and predictions. class SquaredHinge: Computes the squared hinge loss between y_true and y_pred. Functions KLD(...): Computes Kullback-Leibler divergence loss between y_true and y_pred. MAE(...): Computes the mean absolute error between labels and predictions. MAPE(...): Computes the mean absolute percentage error between y_true and y_pred. MSE(...): Computes the mean squared error between labels and predictions. MSLE(...): Computes the mean squared logarithmic error between y_true and y_pred. binary_crossentropy(...): Computes the binary crossentropy loss. categorical_crossentropy(...): Computes the categorical crossentropy loss. categorical_hinge(...): Computes the categorical hinge loss between y_true and y_pred. cosine_similarity(...): Computes the cosine similarity between labels and predictions. deserialize(...): Deserializes a serialized loss class/function instance. get(...): Retrieves a Keras loss as a function/Loss class instance. hinge(...): Computes the hinge loss between y_true and y_pred. huber(...): Computes Huber loss value. kl_divergence(...): Computes Kullback-Leibler divergence loss between y_true and y_pred. kld(...): Computes Kullback-Leibler divergence loss between y_true and y_pred. kullback_leibler_divergence(...): Computes Kullback-Leibler divergence loss between y_true and y_pred. log_cosh(...): Logarithm of the hyperbolic cosine of the prediction error. logcosh(...): Logarithm of the hyperbolic cosine of the prediction error. mae(...): Computes the mean absolute error between labels and predictions. mape(...): Computes the mean absolute percentage error between y_true and y_pred. mean_absolute_error(...): Computes the mean absolute error between labels and predictions. mean_absolute_percentage_error(...): Computes the mean absolute percentage error between y_true and y_pred. mean_squared_error(...): Computes the mean squared error between labels and predictions. mean_squared_logarithmic_error(...): Computes the mean squared logarithmic error between y_true and y_pred. mse(...): Computes the mean squared error between labels and predictions. 
msle(...): Computes the mean squared logarithmic error between y_true and y_pred. poisson(...): Computes the Poisson loss between y_true and y_pred. serialize(...): Serializes loss function or Loss instance. sparse_categorical_crossentropy(...): Computes the sparse categorical crossentropy loss. squared_hinge(...): Computes the squared hinge loss between y_true and y_pred.
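The classes and functions listed above are interchangeable where a loss is expected; a brief sketch (model and optimizer chosen arbitrarily) showing three equivalent ways to select mean squared error in compile():

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer='sgd', loss='mse')                                # string name
model.compile(optimizer='sgd', loss=tf.keras.losses.MeanSquaredError())  # class instance
model.compile(optimizer='sgd', loss=tf.keras.losses.mean_squared_error)  # function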
tf.keras.losses.BinaryCrossentropy View source on GitHub Computes the cross-entropy loss between true labels and predicted labels. Inherits From: Loss View aliases Main aliases tf.losses.BinaryCrossentropy Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.BinaryCrossentropy tf.keras.losses.BinaryCrossentropy( from_logits=False, label_smoothing=0, reduction=losses_utils.ReductionV2.AUTO, name='binary_crossentropy' ) Use this cross-entropy loss when there are only two label classes (assumed to be 0 and 1). For each example, there should be a single floating-point value per prediction. In the snippet below, each of the four predictions is a single floating-point value, and both y_pred and y_true have the shape [batch_size, num_predictions] (here, [2, 2]). Standalone usage: y_true = [[0., 1.], [0., 0.]] y_pred = [[0.6, 0.4], [0.4, 0.6]] # Using 'auto'/'sum_over_batch_size' reduction type. bce = tf.keras.losses.BinaryCrossentropy() bce(y_true, y_pred).numpy() 0.815 # Calling with 'sample_weight'. bce(y_true, y_pred, sample_weight=[1, 0]).numpy() 0.458 # Using 'sum' reduction type. bce = tf.keras.losses.BinaryCrossentropy( reduction=tf.keras.losses.Reduction.SUM) bce(y_true, y_pred).numpy() 1.630 # Using 'none' reduction type. bce = tf.keras.losses.BinaryCrossentropy( reduction=tf.keras.losses.Reduction.NONE) bce(y_true, y_pred).numpy() array([0.916 , 0.714], dtype=float32) Usage with the tf.keras API: model.compile(optimizer='sgd', loss=tf.keras.losses.BinaryCrossentropy()) Args from_logits Whether to interpret y_pred as a tensor of logit values. By default, we assume that y_pred contains probabilities (i.e., values in [0, 1]). Note: using from_logits=True may be more numerically stable. label_smoothing Float in [0, 1]. When 0, no smoothing occurs. When > 0, we compute the loss between the predicted labels and a smoothed version of the true labels, where the smoothing squeezes the labels towards 0.5. Larger values of label_smoothing correspond to heavier smoothing. reduction (Optional) Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this custom training tutorial for more details. name (Optional) Name for the op. Defaults to 'binary_crossentropy'. Methods from_config View source @classmethod from_config( config ) Instantiates a Loss from its config (output of get_config()). Args config Output of get_config(). Returns A Loss instance. get_config View source get_config() Returns the config dictionary for a Loss instance. __call__ View source __call__( y_true, y_pred, sample_weight=None ) Invokes the Loss instance. Args y_true Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1] y_pred The predicted values. shape = [batch_size, d0, .. dN] sample_weight Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcast to this shape), then each loss element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all loss functions reduce by 1 dimension, usually axis=-1.) Returns Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note: dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.) Raises ValueError If the shape of sample_weight is invalid.
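The from_logits argument above mentions numerical stability but has no snippet; a minimal sketch (values arbitrary) showing that raw logits with from_logits=True agree with sigmoid-activated probabilities under the default setting:

import tensorflow as tf

y_true = [[0.], [1.]]
logits = [[2.0], [-1.0]]

bce_logits = tf.keras.losses.BinaryCrossentropy(from_logits=True)
bce_probs = tf.keras.losses.BinaryCrossentropy()

print(bce_logits(y_true, logits).numpy())             # ~1.720
print(bce_probs(y_true, tf.sigmoid(logits)).numpy())  # same value, up to rounding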
tf.keras.losses.binary_crossentropy View source on GitHub Computes the binary crossentropy loss. View aliases Main aliases tf.keras.metrics.binary_crossentropy, tf.losses.binary_crossentropy, tf.metrics.binary_crossentropy Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.binary_crossentropy, tf.compat.v1.keras.metrics.binary_crossentropy tf.keras.losses.binary_crossentropy( y_true, y_pred, from_logits=False, label_smoothing=0 ) Standalone usage: y_true = [[0, 1], [0, 0]] y_pred = [[0.6, 0.4], [0.4, 0.6]] loss = tf.keras.losses.binary_crossentropy(y_true, y_pred) assert loss.shape == (2,) loss.numpy() array([0.916 , 0.714], dtype=float32) Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. from_logits Whether y_pred is expected to be a logits tensor. By default, we assume that y_pred encodes a probability distribution. label_smoothing Float in [0, 1]. If > 0 then smooth the labels. Returns Binary crossentropy loss value. shape = [batch_size, d0, .. dN-1].
tf.keras.losses.CategoricalCrossentropy View source on GitHub Computes the crossentropy loss between the labels and predictions. Inherits From: Loss View aliases Main aliases tf.losses.CategoricalCrossentropy Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.CategoricalCrossentropy tf.keras.losses.CategoricalCrossentropy( from_logits=False, label_smoothing=0, reduction=losses_utils.ReductionV2.AUTO, name='categorical_crossentropy' ) Use this crossentropy loss function when there are two or more label classes. We expect labels to be provided in a one_hot representation. If you want to provide labels as integers, please use SparseCategoricalCrossentropy loss. There should be # classes floating-point values per feature. In the snippet below, there are # classes floating-point values per example. The shape of both y_pred and y_true is [batch_size, num_classes]. Standalone usage: y_true = [[0, 1, 0], [0, 0, 1]] y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]] # Using 'auto'/'sum_over_batch_size' reduction type. cce = tf.keras.losses.CategoricalCrossentropy() cce(y_true, y_pred).numpy() 1.177 # Calling with 'sample_weight'. cce(y_true, y_pred, sample_weight=tf.constant([0.3, 0.7])).numpy() 0.814 # Using 'sum' reduction type. cce = tf.keras.losses.CategoricalCrossentropy( reduction=tf.keras.losses.Reduction.SUM) cce(y_true, y_pred).numpy() 2.354 # Using 'none' reduction type. cce = tf.keras.losses.CategoricalCrossentropy( reduction=tf.keras.losses.Reduction.NONE) cce(y_true, y_pred).numpy() array([0.0513, 2.303], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.CategoricalCrossentropy()) Args from_logits Whether y_pred is expected to be a logits tensor. By default, we assume that y_pred encodes a probability distribution. Note: using from_logits=True is more numerically stable. label_smoothing Float in [0, 1]. When > 0, label values are smoothed, meaning the confidence on label values is relaxed, e.g. label_smoothing=0.2 means that we will use a value of 0.1 for label 0 and 0.9 for label 1. reduction (Optional) Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this custom training tutorial for more details. name Optional name for the op. Defaults to 'categorical_crossentropy'. Methods from_config View source @classmethod from_config( config ) Instantiates a Loss from its config (output of get_config()). Args config Output of get_config(). Returns A Loss instance. get_config View source get_config() Returns the config dictionary for a Loss instance. __call__ View source __call__( y_true, y_pred, sample_weight=None ) Invokes the Loss instance. Args y_true Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1] y_pred The predicted values. shape = [batch_size, d0, .. dN] sample_weight Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcast to this shape), then each loss element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all loss functions reduce by 1 dimension, usually axis=-1.) Returns Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note: dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.) Raises ValueError If the shape of sample_weight is invalid.
tf.keras.losses.CategoricalHinge View source on GitHub Computes the categorical hinge loss between y_true and y_pred. Inherits From: Loss View aliases Main aliases tf.losses.CategoricalHinge Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.CategoricalHinge tf.keras.losses.CategoricalHinge( reduction=losses_utils.ReductionV2.AUTO, name='categorical_hinge' ) loss = maximum(neg - pos + 1, 0) where neg=maximum((1-y_true)*y_pred) and pos=sum(y_true*y_pred) Standalone usage: y_true = [[0, 1], [0, 0]] y_pred = [[0.6, 0.4], [0.4, 0.6]] # Using 'auto'/'sum_over_batch_size' reduction type. h = tf.keras.losses.CategoricalHinge() h(y_true, y_pred).numpy() 1.4 # Calling with 'sample_weight'. h(y_true, y_pred, sample_weight=[1, 0]).numpy() 0.6 # Using 'sum' reduction type. h = tf.keras.losses.CategoricalHinge( reduction=tf.keras.losses.Reduction.SUM) h(y_true, y_pred).numpy() 2.8 # Using 'none' reduction type. h = tf.keras.losses.CategoricalHinge( reduction=tf.keras.losses.Reduction.NONE) h(y_true, y_pred).numpy() array([1.2, 1.6], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.CategoricalHinge()) Args reduction (Optional) Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this custom training tutorial for more details. name Optional name for the op. Defaults to 'categorical_hinge'. Methods from_config View source @classmethod from_config( config ) Instantiates a Loss from its config (output of get_config()). Args config Output of get_config(). Returns A Loss instance. get_config View source get_config() Returns the config dictionary for a Loss instance. __call__ View source __call__( y_true, y_pred, sample_weight=None ) Invokes the Loss instance. Args y_true Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1] y_pred The predicted values. shape = [batch_size, d0, .. dN] sample_weight Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcast to this shape), then each loss element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all loss functions reduce by 1 dimension, usually axis=-1.) Returns Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note: dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.) Raises ValueError If the shape of sample_weight is invalid.
tf.keras.losses.categorical_crossentropy View source on GitHub Computes the categorical crossentropy loss. View aliases Main aliases tf.keras.metrics.categorical_crossentropy, tf.losses.categorical_crossentropy, tf.metrics.categorical_crossentropy Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.categorical_crossentropy, tf.compat.v1.keras.metrics.categorical_crossentropy tf.keras.losses.categorical_crossentropy( y_true, y_pred, from_logits=False, label_smoothing=0 ) Standalone usage: y_true = [[0, 1, 0], [0, 0, 1]] y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]] loss = tf.keras.losses.categorical_crossentropy(y_true, y_pred) assert loss.shape == (2,) loss.numpy() array([0.0513, 2.303], dtype=float32) Args y_true Tensor of one-hot true targets. y_pred Tensor of predicted targets. from_logits Whether y_pred is expected to be a logits tensor. By default, we assume that y_pred encodes a probability distribution. label_smoothing Float in [0, 1]. If > 0 then smooth the labels. Returns Categorical crossentropy loss value.
tf.keras.losses.categorical_hinge View source on GitHub Computes the categorical hinge loss between y_true and y_pred. View aliases Main aliases tf.losses.categorical_hinge Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.categorical_hinge tf.keras.losses.categorical_hinge( y_true, y_pred ) loss = maximum(neg - pos + 1, 0) where neg=maximum((1-y_true)*y_pred) and pos=sum(y_true*y_pred) Standalone usage: y_true = np.random.randint(0, 3, size=(2,)) y_true = tf.keras.utils.to_categorical(y_true, num_classes=3) y_pred = np.random.random(size=(2, 3)) loss = tf.keras.losses.categorical_hinge(y_true, y_pred) assert loss.shape == (2,) pos = np.sum(y_true * y_pred, axis=-1) neg = np.amax((1. - y_true) * y_pred, axis=-1) assert np.array_equal(loss.numpy(), np.maximum(0., neg - pos + 1.)) Args y_true The ground truth values. y_true values are expected to be 0 or 1. y_pred The predicted values. Returns Categorical hinge loss values.
tf.keras.losses.CosineSimilarity View source on GitHub Computes the cosine similarity between labels and predictions. Inherits From: Loss View aliases Main aliases tf.losses.CosineSimilarity Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.CosineSimilarity tf.keras.losses.CosineSimilarity( axis=-1, reduction=losses_utils.ReductionV2.AUTO, name='cosine_similarity' ) Note that the loss is a number between -1 and 1: a value closer to -1 indicates greater similarity, 0 indicates orthogonality, and a value closer to 1 indicates greater dissimilarity. This makes it usable as a loss function in a setting where you try to maximize the proximity between predictions and targets. If either y_true or y_pred is a zero vector, cosine similarity will be 0 regardless of the proximity between predictions and targets. loss = -sum(l2_norm(y_true) * l2_norm(y_pred)) Standalone usage: y_true = [[0., 1.], [1., 1.]] y_pred = [[1., 0.], [1., 1.]] # Using 'auto'/'sum_over_batch_size' reduction type. cosine_loss = tf.keras.losses.CosineSimilarity(axis=1) # l2_norm(y_true) = [[0., 1.], [1./1.414, 1./1.414]] # l2_norm(y_pred) = [[1., 0.], [1./1.414, 1./1.414]] # l2_norm(y_true) . l2_norm(y_pred) = [[0., 0.], [0.5, 0.5]] # loss = -mean(sum(l2_norm(y_true) . l2_norm(y_pred), axis=1)) # = -((0. + 0.) + (0.5 + 0.5)) / 2 cosine_loss(y_true, y_pred).numpy() -0.5 # Calling with 'sample_weight'. cosine_loss(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy() -0.0999 # Using 'sum' reduction type. cosine_loss = tf.keras.losses.CosineSimilarity(axis=1, reduction=tf.keras.losses.Reduction.SUM) cosine_loss(y_true, y_pred).numpy() -0.999 # Using 'none' reduction type. cosine_loss = tf.keras.losses.CosineSimilarity(axis=1, reduction=tf.keras.losses.Reduction.NONE) cosine_loss(y_true, y_pred).numpy() array([-0., -0.999], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.CosineSimilarity(axis=1)) Args axis (Optional) Defaults to -1. The dimension along which the cosine similarity is computed. reduction (Optional) Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this custom training tutorial for more details. name Optional name for the op. Methods from_config View source @classmethod from_config( config ) Instantiates a Loss from its config (output of get_config()). Args config Output of get_config(). Returns A Loss instance. get_config View source get_config() Returns the config dictionary for a Loss instance. __call__ View source __call__( y_true, y_pred, sample_weight=None ) Invokes the Loss instance. Args y_true Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1] y_pred The predicted values. shape = [batch_size, d0, .. dN] sample_weight Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcast to this shape), then each loss element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all loss functions reduce by 1 dimension, usually axis=-1.) Returns Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note: dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.) Raises ValueError If the shape of sample_weight is invalid.
tf.keras.losses.cosine_similarity Computes the cosine similarity between labels and predictions. View aliases Main aliases tf.losses.cosine_similarity Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.cosine, tf.compat.v1.keras.losses.cosine_proximity, tf.compat.v1.keras.losses.cosine_similarity, tf.compat.v1.keras.metrics.cosine, tf.compat.v1.keras.metrics.cosine_proximity tf.keras.losses.cosine_similarity( y_true, y_pred, axis=-1 ) Note that the loss is a number between -1 and 1: a value closer to -1 indicates greater similarity, 0 indicates orthogonality, and a value closer to 1 indicates greater dissimilarity. This makes it usable as a loss function in a setting where you try to maximize the proximity between predictions and targets. If either y_true or y_pred is a zero vector, cosine similarity will be 0 regardless of the proximity between predictions and targets. loss = -sum(l2_norm(y_true) * l2_norm(y_pred)) Standalone usage: y_true = [[0., 1.], [1., 1.], [1., 1.]] y_pred = [[1., 0.], [1., 1.], [-1., -1.]] loss = tf.keras.losses.cosine_similarity(y_true, y_pred, axis=1) loss.numpy() array([-0., -0.999, 0.999], dtype=float32) Args y_true Tensor of true targets. y_pred Tensor of predicted targets. axis Axis along which to determine similarity. Returns Cosine similarity tensor.
tf.keras.losses.deserialize View source on GitHub Deserializes a serialized loss class/function instance. View aliases Main aliases tf.losses.deserialize Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.deserialize tf.keras.losses.deserialize( name, custom_objects=None ) Arguments name Loss configuration. custom_objects Optional dictionary mapping names (strings) to custom objects (classes and functions) to be considered during deserialization. Returns A Keras Loss instance or a loss function.
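A short round-trip sketch using the documented counterpart serialize (the Huber loss and delta value are arbitrary choices):

import tensorflow as tf

config = tf.keras.losses.serialize(tf.keras.losses.Huber(delta=0.5))
# config is a dict of the form {'class_name': 'Huber', 'config': {...}}
loss = tf.keras.losses.deserialize(config)
print(type(loss).__name__, loss.get_config()['delta'])  # Huber 0.5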
tf.keras.losses.get View source on GitHub Retrieves a Keras loss as a function/Loss class instance. View aliases Main aliases tf.losses.get Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.get tf.keras.losses.get( identifier ) The identifier may be the string name of a loss function or Loss class. loss = tf.keras.losses.get("categorical_crossentropy") type(loss) <class 'function'> loss = tf.keras.losses.get("CategoricalCrossentropy") type(loss) <class '...tensorflow.python.keras.losses.CategoricalCrossentropy'> You can also specify the config of the loss by passing a dict containing class_name and config as the identifier. Note that the class_name must map to a Loss class. identifier = {"class_name": "CategoricalCrossentropy", "config": {"from_logits": True} } loss = tf.keras.losses.get(identifier) type(loss) <class '...tensorflow.python.keras.losses.CategoricalCrossentropy'> Arguments identifier A loss identifier. One of None, a string name of a loss function/class, a loss configuration dictionary, a loss function, or a loss class instance. Returns A Keras loss as a function/Loss class instance. Raises ValueError If identifier cannot be interpreted.
tf.keras.losses.Hinge View source on GitHub Computes the hinge loss between y_true and y_pred. Inherits From: Loss View aliases Main aliases tf.losses.Hinge Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.Hinge tf.keras.losses.Hinge( reduction=losses_utils.ReductionV2.AUTO, name='hinge' ) loss = maximum(1 - y_true * y_pred, 0) y_true values are expected to be -1 or 1. If binary (0 or 1) labels are provided we will convert them to -1 or 1. Standalone usage: y_true = [[0., 1.], [0., 0.]] y_pred = [[0.6, 0.4], [0.4, 0.6]] # Using 'auto'/'sum_over_batch_size' reduction type. h = tf.keras.losses.Hinge() h(y_true, y_pred).numpy() 1.3 # Calling with 'sample_weight'. h(y_true, y_pred, sample_weight=[1, 0]).numpy() 0.55 # Using 'sum' reduction type. h = tf.keras.losses.Hinge( reduction=tf.keras.losses.Reduction.SUM) h(y_true, y_pred).numpy() 2.6 # Using 'none' reduction type. h = tf.keras.losses.Hinge( reduction=tf.keras.losses.Reduction.NONE) h(y_true, y_pred).numpy() array([1.1, 1.5], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.Hinge()) Args reduction (Optional) Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this custom training tutorial for more details. name Optional name for the op. Defaults to 'hinge'. Methods from_config View source @classmethod from_config( config ) Instantiates a Loss from its config (output of get_config()). Args config Output of get_config(). Returns A Loss instance. get_config View source get_config() Returns the config dictionary for a Loss instance. __call__ View source __call__( y_true, y_pred, sample_weight=None ) Invokes the Loss instance. Args y_true Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1] y_pred The predicted values. shape = [batch_size, d0, .. dN] sample_weight Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcast to this shape), then each loss element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all loss functions reduce by 1 dimension, usually axis=-1.) Returns Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note: dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.) Raises ValueError If the shape of sample_weight is invalid.
tf.keras.losses.Huber View source on GitHub Computes the Huber loss between y_true and y_pred. Inherits From: Loss View aliases Main aliases tf.losses.Huber Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.Huber tf.keras.losses.Huber( delta=1.0, reduction=losses_utils.ReductionV2.AUTO, name='huber_loss' ) For each value x in error = y_true - y_pred: loss = 0.5 * x^2 if |x| <= d loss = 0.5 * d^2 + d * (|x| - d) if |x| > d where d is delta. See: https://en.wikipedia.org/wiki/Huber_loss Standalone usage: y_true = [[0, 1], [0, 0]] y_pred = [[0.6, 0.4], [0.4, 0.6]] # Using 'auto'/'sum_over_batch_size' reduction type. h = tf.keras.losses.Huber() h(y_true, y_pred).numpy() 0.155 # Calling with 'sample_weight'. h(y_true, y_pred, sample_weight=[1, 0]).numpy() 0.09 # Using 'sum' reduction type. h = tf.keras.losses.Huber( reduction=tf.keras.losses.Reduction.SUM) h(y_true, y_pred).numpy() 0.31 # Using 'none' reduction type. h = tf.keras.losses.Huber( reduction=tf.keras.losses.Reduction.NONE) h(y_true, y_pred).numpy() array([0.18, 0.13], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.Huber()) Args delta A float, the point where the Huber loss function changes from quadratic to linear. reduction (Optional) Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this custom training tutorial for more details. name Optional name for the op. Defaults to 'huber_loss'. Methods from_config View source @classmethod from_config( config ) Instantiates a Loss from its config (output of get_config()). Args config Output of get_config(). Returns A Loss instance. get_config View source get_config() Returns the config dictionary for a Loss instance. __call__ View source __call__( y_true, y_pred, sample_weight=None ) Invokes the Loss instance. Args y_true Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1] y_pred The predicted values. shape = [batch_size, d0, .. dN] sample_weight Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcast to this shape), then each loss element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all loss functions reduce by 1 dimension, usually axis=-1.) Returns Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note: dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.) Raises ValueError If the shape of sample_weight is invalid.
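To tie the piecewise definition to the numbers above, a small NumPy check; with the default delta=1.0 every error here lands in the quadratic branch:

import numpy as np

y_true = np.array([[0., 1.], [0., 0.]])
y_pred = np.array([[0.6, 0.4], [0.4, 0.6]])
d = 1.0  # default delta

x = y_true - y_pred
per_element = np.where(np.abs(x) <= d, 0.5 * x**2, 0.5 * d**2 + d * (np.abs(x) - d))
per_sample = per_element.mean(axis=-1)  # [0.18, 0.13], matching the 'none' reduction
print(per_sample.mean())                # 0.155, matching the default reduction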
tf.keras.losses.KLD View source on GitHub Computes Kullback-Leibler divergence loss between y_true and y_pred. View aliases Main aliases tf.keras.losses.kl_divergence, tf.keras.losses.kld, tf.keras.losses.kullback_leibler_divergence, tf.keras.metrics.KLD, tf.keras.metrics.kl_divergence, tf.keras.metrics.kld, tf.keras.metrics.kullback_leibler_divergence, tf.losses.KLD, tf.losses.kl_divergence, tf.losses.kld, tf.losses.kullback_leibler_divergence, tf.metrics.KLD, tf.metrics.kl_divergence, tf.metrics.kld, tf.metrics.kullback_leibler_divergence Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.KLD, tf.compat.v1.keras.losses.kl_divergence, tf.compat.v1.keras.losses.kld, tf.compat.v1.keras.losses.kullback_leibler_divergence, tf.compat.v1.keras.metrics.KLD, tf.compat.v1.keras.metrics.kl_divergence, tf.compat.v1.keras.metrics.kld, tf.compat.v1.keras.metrics.kullback_leibler_divergence tf.keras.losses.KLD( y_true, y_pred ) loss = y_true * log(y_true / y_pred) See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence Standalone usage: y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64) y_pred = np.random.random(size=(2, 3)) loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred) assert loss.shape == (2,) y_true = tf.keras.backend.clip(y_true, 1e-7, 1) y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1) assert np.array_equal( loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1)) Args y_true Tensor of true targets. y_pred Tensor of predicted targets. Returns A Tensor with loss. Raises TypeError If y_true cannot be cast to the y_pred.dtype.
tf.keras.losses.KLDivergence View source on GitHub Computes Kullback-Leibler divergence loss between y_true and y_pred. Inherits From: Loss View aliases Main aliases tf.losses.KLDivergence Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.KLDivergence tf.keras.losses.KLDivergence( reduction=losses_utils.ReductionV2.AUTO, name='kl_divergence' ) loss = y_true * log(y_true / y_pred) See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence Standalone usage: y_true = [[0, 1], [0, 0]] y_pred = [[0.6, 0.4], [0.4, 0.6]] # Using 'auto'/'sum_over_batch_size' reduction type. kl = tf.keras.losses.KLDivergence() kl(y_true, y_pred).numpy() 0.458 # Calling with 'sample_weight'. kl(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy() 0.366 # Using 'sum' reduction type. kl = tf.keras.losses.KLDivergence( reduction=tf.keras.losses.Reduction.SUM) kl(y_true, y_pred).numpy() 0.916 # Using 'none' reduction type. kl = tf.keras.losses.KLDivergence( reduction=tf.keras.losses.Reduction.NONE) kl(y_true, y_pred).numpy() array([0.916, -3.08e-06], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.KLDivergence()) Args reduction (Optional) Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this custom training tutorial for more details. name Optional name for the op. Defaults to 'kl_divergence'. Methods from_config View source @classmethod from_config( config ) Instantiates a Loss from its config (output of get_config()). Args config Output of get_config(). Returns A Loss instance. get_config View source get_config() Returns the config dictionary for a Loss instance. __call__ View source __call__( y_true, y_pred, sample_weight=None ) Invokes the Loss instance. Args y_true Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1] y_pred The predicted values. shape = [batch_size, d0, .. dN] sample_weight Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcast to this shape), then each loss element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all loss functions reduce by 1 dimension, usually axis=-1.) Returns Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note: dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.) Raises ValueError If the shape of sample_weight is invalid.
tensorflow.keras.losses.kldivergence
tf.keras.losses.LogCosh View source on GitHub Computes the logarithm of the hyperbolic cosine of the prediction error. Inherits From: Loss View aliases Main aliases tf.losses.LogCosh Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.LogCosh tf.keras.losses.LogCosh( reduction=losses_utils.ReductionV2.AUTO, name='log_cosh' ) logcosh = log((exp(x) + exp(-x))/2), where x is the error y_pred - y_true. Standalone usage: y_true = [[0., 1.], [0., 0.]] y_pred = [[1., 1.], [0., 0.]] # Using 'auto'/'sum_over_batch_size' reduction type. l = tf.keras.losses.LogCosh() l(y_true, y_pred).numpy() 0.108 # Calling with 'sample_weight'. l(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy() 0.087 # Using 'sum' reduction type. l = tf.keras.losses.LogCosh( reduction=tf.keras.losses.Reduction.SUM) l(y_true, y_pred).numpy() 0.217 # Using 'none' reduction type. l = tf.keras.losses.LogCosh( reduction=tf.keras.losses.Reduction.NONE) l(y_true, y_pred).numpy() array([0.217, 0.], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.LogCosh()) Args reduction (Optional) Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this custom training tutorial for more details. name Optional name for the op. Defaults to 'log_cosh'. Methods from_config View source @classmethod from_config( config ) Instantiates a Loss from its config (output of get_config()). Args config Output of get_config(). Returns A Loss instance. get_config View source get_config() Returns the config dictionary for a Loss instance. __call__ View source __call__( y_true, y_pred, sample_weight=None ) Invokes the Loss instance. Args y_true Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1] y_pred The predicted values. shape = [batch_size, d0, .. dN] sample_weight Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each loss element of y_pred is scaled by the corresponding value of sample_weight. (Note ondN-1: all loss functions reduce by 1 dimension, usually axis=-1.) Returns Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.) Raises ValueError If the shape of sample_weight is invalid.
tensorflow.keras.losses.logcosh
tf.keras.losses.log_cosh Logarithm of the hyperbolic cosine of the prediction error. View aliases Main aliases tf.keras.losses.logcosh, tf.keras.metrics.log_cosh, tf.keras.metrics.logcosh, tf.losses.log_cosh, tf.losses.logcosh, tf.metrics.log_cosh, tf.metrics.logcosh Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.log_cosh, tf.compat.v1.keras.losses.logcosh, tf.compat.v1.keras.metrics.log_cosh, tf.compat.v1.keras.metrics.logcosh tf.keras.losses.log_cosh( y_true, y_pred ) log(cosh(x)) is approximately equal to (x ** 2) / 2 for small x and to abs(x) - log(2) for large x. This means that 'logcosh' works mostly like the mean squared error, but will not be so strongly affected by the occasional wildly incorrect prediction. Standalone usage: y_true = np.random.random(size=(2, 3)) y_pred = np.random.random(size=(2, 3)) loss = tf.keras.losses.logcosh(y_true, y_pred) assert loss.shape == (2,) x = y_pred - y_true assert np.allclose( loss.numpy(), np.mean(x + np.log(np.exp(-2. * x) + 1.) - np.log(2.), axis=-1), atol=1e-5) Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. Returns Logcosh error values. shape = [batch_size, d0, .. dN-1].
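The two approximations quoted above are easy to check numerically; a minimal sketch (the sample points are arbitrary):

import numpy as np

x = np.array([1e-3, 10.0])
logcosh = np.log(np.cosh(x))
print(logcosh[0], (x[0] ** 2) / 2)            # small x: both ~5e-07
print(logcosh[1], np.abs(x[1]) - np.log(2.))  # large x: both ~9.3069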
tensorflow.keras.losses.log_cosh
tf.keras.losses.Loss View source on GitHub Loss base class. View aliases Main aliases tf.losses.Loss Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.Loss tf.keras.losses.Loss( reduction=losses_utils.ReductionV2.AUTO, name=None ) To be implemented by subclasses: call(): Contains the logic for loss calculation using y_true, y_pred. Example subclass implementation: class MeanSquaredError(Loss): def call(self, y_true, y_pred): y_pred = tf.convert_to_tensor(y_pred) y_true = tf.cast(y_true, y_pred.dtype) return tf.reduce_mean(tf.square(y_pred - y_true), axis=-1) When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, please use 'SUM' or 'NONE' reduction types, and reduce losses explicitly in your training loop. Using 'AUTO' or 'SUM_OVER_BATCH_SIZE' will raise an error. Please see this custom training tutorial for more details on this. You can implement 'SUM_OVER_BATCH_SIZE' using global batch size like: with strategy.scope(): loss_obj = tf.keras.losses.CategoricalCrossentropy( reduction=tf.keras.losses.Reduction.NONE) .... loss = (tf.reduce_sum(loss_obj(labels, predictions)) * (1. / global_batch_size)) Args reduction (Optional) Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this custom training tutorial for more details. name Optional name for the op. Methods call View source @abc.abstractmethod call( y_true, y_pred ) Invokes the Loss instance. Args y_true Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1] y_pred The predicted values. shape = [batch_size, d0, .. dN] Returns Loss values with the shape [batch_size, d0, .. dN-1]. from_config View source @classmethod from_config( config ) Instantiates a Loss from its config (output of get_config()). Args config Output of get_config(). Returns A Loss instance. get_config View source get_config() Returns the config dictionary for a Loss instance. __call__ View source __call__( y_true, y_pred, sample_weight=None ) Invokes the Loss instance. Args y_true Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1] y_pred The predicted values. shape = [batch_size, d0, .. dN] sample_weight Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each loss element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all loss functions reduce by 1 dimension, usually axis=-1.) Returns Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.) Raises ValueError If the shape of sample_weight is invalid.
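A sketch of how the subclass example above is typically exercised; calling the instance applies the default AUTO reduction on top of whatever call() returns per sample.

import tensorflow as tf

class MeanSquaredError(tf.keras.losses.Loss):
  def call(self, y_true, y_pred):
    y_pred = tf.convert_to_tensor(y_pred)
    y_true = tf.cast(y_true, y_pred.dtype)
    return tf.reduce_mean(tf.square(y_pred - y_true), axis=-1)

mse = MeanSquaredError()
# __call__ reduces the per-sample losses [0.5, 0.5] to the scalar 0.5.
print(mse([[0., 1.], [0., 0.]], [[1., 1.], [1., 0.]]).numpy())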
tensorflow.keras.losses.loss
tf.keras.losses.MAE View source on GitHub Computes the mean absolute error between labels and predictions. View aliases Main aliases tf.keras.losses.mae, tf.keras.losses.mean_absolute_error, tf.keras.metrics.MAE, tf.keras.metrics.mae, tf.keras.metrics.mean_absolute_error, tf.losses.MAE, tf.losses.mae, tf.losses.mean_absolute_error, tf.metrics.MAE, tf.metrics.mae, tf.metrics.mean_absolute_error Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.MAE, tf.compat.v1.keras.losses.mae, tf.compat.v1.keras.losses.mean_absolute_error, tf.compat.v1.keras.metrics.MAE, tf.compat.v1.keras.metrics.mae, tf.compat.v1.keras.metrics.mean_absolute_error tf.keras.losses.MAE( y_true, y_pred ) loss = mean(abs(y_true - y_pred), axis=-1) Standalone usage: y_true = np.random.randint(0, 2, size=(2, 3)) y_pred = np.random.random(size=(2, 3)) loss = tf.keras.losses.mean_absolute_error(y_true, y_pred) assert loss.shape == (2,) assert np.array_equal( loss.numpy(), np.mean(np.abs(y_true - y_pred), axis=-1)) Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. Returns Mean absolute error values. shape = [batch_size, d0, .. dN-1].
tensorflow.keras.losses.mae
tf.keras.losses.MAPE View source on GitHub Computes the mean absolute percentage error between y_true and y_pred. View aliases Main aliases tf.keras.losses.mape, tf.keras.losses.mean_absolute_percentage_error, tf.keras.metrics.MAPE, tf.keras.metrics.mape, tf.keras.metrics.mean_absolute_percentage_error, tf.losses.MAPE, tf.losses.mape, tf.losses.mean_absolute_percentage_error, tf.metrics.MAPE, tf.metrics.mape, tf.metrics.mean_absolute_percentage_error Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.MAPE, tf.compat.v1.keras.losses.mape, tf.compat.v1.keras.losses.mean_absolute_percentage_error, tf.compat.v1.keras.metrics.MAPE, tf.compat.v1.keras.metrics.mape, tf.compat.v1.keras.metrics.mean_absolute_percentage_error tf.keras.losses.MAPE( y_true, y_pred ) loss = 100 * mean(abs((y_true - y_pred) / y_true), axis=-1) Standalone usage: y_true = np.random.random(size=(2, 3)) y_true = np.maximum(y_true, 1e-7) # Prevent division by zero y_pred = np.random.random(size=(2, 3)) loss = tf.keras.losses.mean_absolute_percentage_error(y_true, y_pred) assert loss.shape == (2,) assert np.array_equal( loss.numpy(), 100. * np.mean(np.abs((y_true - y_pred) / y_true), axis=-1)) Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. Returns Mean absolute percentage error values. shape = [batch_size, d0, .. dN-1].
tensorflow.keras.losses.mape
tf.keras.losses.MeanAbsoluteError View source on GitHub Computes the mean of absolute difference between labels and predictions. Inherits From: Loss View aliases Main aliases tf.losses.MeanAbsoluteError Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.MeanAbsoluteError tf.keras.losses.MeanAbsoluteError( reduction=losses_utils.ReductionV2.AUTO, name='mean_absolute_error' ) loss = abs(y_true - y_pred) Standalone usage: y_true = [[0., 1.], [0., 0.]] y_pred = [[1., 1.], [1., 0.]] # Using 'auto'/'sum_over_batch_size' reduction type. mae = tf.keras.losses.MeanAbsoluteError() mae(y_true, y_pred).numpy() 0.5 # Calling with 'sample_weight'. mae(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy() 0.25 # Using 'sum' reduction type. mae = tf.keras.losses.MeanAbsoluteError( reduction=tf.keras.losses.Reduction.SUM) mae(y_true, y_pred).numpy() 1.0 # Using 'none' reduction type. mae = tf.keras.losses.MeanAbsoluteError( reduction=tf.keras.losses.Reduction.NONE) mae(y_true, y_pred).numpy() array([0.5, 0.5], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.MeanAbsoluteError()) Args reduction (Optional) Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this custom training tutorial for more details. name Optional name for the op. Defaults to 'mean_absolute_error'. Methods from_config View source @classmethod from_config( config ) Instantiates a Loss from its config (output of get_config()). Args config Output of get_config(). Returns A Loss instance. get_config View source get_config() Returns the config dictionary for a Loss instance. __call__ View source __call__( y_true, y_pred, sample_weight=None ) Invokes the Loss instance. Args y_true Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1] y_pred The predicted values. shape = [batch_size, d0, .. dN] sample_weight Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each loss element of y_pred is scaled by the corresponding value of sample_weight. (Note ondN-1: all loss functions reduce by 1 dimension, usually axis=-1.) Returns Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.) Raises ValueError If the shape of sample_weight is invalid.
tensorflow.keras.losses.meanabsoluteerror
tf.keras.losses.MeanAbsolutePercentageError View source on GitHub Computes the mean absolute percentage error between y_true and y_pred. Inherits From: Loss View aliases Main aliases tf.losses.MeanAbsolutePercentageError Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.MeanAbsolutePercentageError tf.keras.losses.MeanAbsolutePercentageError( reduction=losses_utils.ReductionV2.AUTO, name='mean_absolute_percentage_error' ) loss = 100 * abs(y_true - y_pred) / y_true Standalone usage: y_true = [[2., 1.], [2., 3.]] y_pred = [[1., 1.], [1., 0.]] # Using 'auto'/'sum_over_batch_size' reduction type. mape = tf.keras.losses.MeanAbsolutePercentageError() mape(y_true, y_pred).numpy() 50. # Calling with 'sample_weight'. mape(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy() 20. # Using 'sum' reduction type. mape = tf.keras.losses.MeanAbsolutePercentageError( reduction=tf.keras.losses.Reduction.SUM) mape(y_true, y_pred).numpy() 100. # Using 'none' reduction type. mape = tf.keras.losses.MeanAbsolutePercentageError( reduction=tf.keras.losses.Reduction.NONE) mape(y_true, y_pred).numpy() array([25., 75.], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.MeanAbsolutePercentageError()) Args reduction (Optional) Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this custom training tutorial for more details. name Optional name for the op. Defaults to 'mean_absolute_percentage_error'. Methods from_config View source @classmethod from_config( config ) Instantiates a Loss from its config (output of get_config()). Args config Output of get_config(). Returns A Loss instance. get_config View source get_config() Returns the config dictionary for a Loss instance. __call__ View source __call__( y_true, y_pred, sample_weight=None ) Invokes the Loss instance. Args y_true Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1] y_pred The predicted values. shape = [batch_size, d0, .. dN] sample_weight Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each loss element of y_pred is scaled by the corresponding value of sample_weight. (Note ondN-1: all loss functions reduce by 1 dimension, usually axis=-1.) Returns Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.) Raises ValueError If the shape of sample_weight is invalid.
tensorflow.keras.losses.meanabsolutepercentageerror
tf.keras.losses.MeanSquaredError View source on GitHub Computes the mean of squares of errors between labels and predictions. Inherits From: Loss View aliases Main aliases tf.losses.MeanSquaredError Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.MeanSquaredError tf.keras.losses.MeanSquaredError( reduction=losses_utils.ReductionV2.AUTO, name='mean_squared_error' ) loss = square(y_true - y_pred) Standalone usage: y_true = [[0., 1.], [0., 0.]] y_pred = [[1., 1.], [1., 0.]] # Using 'auto'/'sum_over_batch_size' reduction type. mse = tf.keras.losses.MeanSquaredError() mse(y_true, y_pred).numpy() 0.5 # Calling with 'sample_weight'. mse(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy() 0.25 # Using 'sum' reduction type. mse = tf.keras.losses.MeanSquaredError( reduction=tf.keras.losses.Reduction.SUM) mse(y_true, y_pred).numpy() 1.0 # Using 'none' reduction type. mse = tf.keras.losses.MeanSquaredError( reduction=tf.keras.losses.Reduction.NONE) mse(y_true, y_pred).numpy() array([0.5, 0.5], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.MeanSquaredError()) Args reduction (Optional) Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this custom training tutorial for more details. name Optional name for the op. Defaults to 'mean_squared_error'. Methods from_config View source @classmethod from_config( config ) Instantiates a Loss from its config (output of get_config()). Args config Output of get_config(). Returns A Loss instance. get_config View source get_config() Returns the config dictionary for a Loss instance. __call__ View source __call__( y_true, y_pred, sample_weight=None ) Invokes the Loss instance. Args y_true Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1] y_pred The predicted values. shape = [batch_size, d0, .. dN] sample_weight Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each loss element of y_pred is scaled by the corresponding value of sample_weight. (Note ondN-1: all loss functions reduce by 1 dimension, usually axis=-1.) Returns Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.) Raises ValueError If the shape of sample_weight is invalid.
tensorflow.keras.losses.meansquarederror
tf.keras.losses.MeanSquaredLogarithmicError View source on GitHub Computes the mean squared logarithmic error between y_true and y_pred. Inherits From: Loss View aliases Main aliases tf.losses.MeanSquaredLogarithmicError Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.MeanSquaredLogarithmicError tf.keras.losses.MeanSquaredLogarithmicError( reduction=losses_utils.ReductionV2.AUTO, name='mean_squared_logarithmic_error' ) loss = square(log(y_true + 1.) - log(y_pred + 1.)) Standalone usage: y_true = [[0., 1.], [0., 0.]] y_pred = [[1., 1.], [1., 0.]] # Using 'auto'/'sum_over_batch_size' reduction type. msle = tf.keras.losses.MeanSquaredLogarithmicError() msle(y_true, y_pred).numpy() 0.240 # Calling with 'sample_weight'. msle(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy() 0.120 # Using 'sum' reduction type. msle = tf.keras.losses.MeanSquaredLogarithmicError( reduction=tf.keras.losses.Reduction.SUM) msle(y_true, y_pred).numpy() 0.480 # Using 'none' reduction type. msle = tf.keras.losses.MeanSquaredLogarithmicError( reduction=tf.keras.losses.Reduction.NONE) msle(y_true, y_pred).numpy() array([0.240, 0.240], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.MeanSquaredLogarithmicError()) Args reduction (Optional) Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this custom training tutorial for more details. name Optional name for the op. Defaults to 'mean_squared_logarithmic_error'. Methods from_config View source @classmethod from_config( config ) Instantiates a Loss from its config (output of get_config()). Args config Output of get_config(). Returns A Loss instance. get_config View source get_config() Returns the config dictionary for a Loss instance. __call__ View source __call__( y_true, y_pred, sample_weight=None ) Invokes the Loss instance. Args y_true Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1] y_pred The predicted values. shape = [batch_size, d0, .. dN] sample_weight Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each loss element of y_pred is scaled by the corresponding value of sample_weight. (Note ondN-1: all loss functions reduce by 1 dimension, usually axis=-1.) Returns Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.) Raises ValueError If the shape of sample_weight is invalid.
tensorflow.keras.losses.meansquaredlogarithmicerror
tf.keras.losses.MSE View source on GitHub Computes the mean squared error between labels and predictions. View aliases Main aliases tf.keras.losses.mean_squared_error, tf.keras.losses.mse, tf.keras.metrics.MSE, tf.keras.metrics.mean_squared_error, tf.keras.metrics.mse, tf.losses.MSE, tf.losses.mean_squared_error, tf.losses.mse, tf.metrics.MSE, tf.metrics.mean_squared_error, tf.metrics.mse Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.MSE, tf.compat.v1.keras.losses.mean_squared_error, tf.compat.v1.keras.losses.mse, tf.compat.v1.keras.metrics.MSE, tf.compat.v1.keras.metrics.mean_squared_error, tf.compat.v1.keras.metrics.mse tf.keras.losses.MSE( y_true, y_pred ) After computing the squared distance between the inputs, the mean value over the last dimension is returned. loss = mean(square(y_true - y_pred), axis=-1) Standalone usage: y_true = np.random.randint(0, 2, size=(2, 3)) y_pred = np.random.random(size=(2, 3)) loss = tf.keras.losses.mean_squared_error(y_true, y_pred) assert loss.shape == (2,) assert np.array_equal( loss.numpy(), np.mean(np.square(y_true - y_pred), axis=-1)) Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. Returns Mean squared error values. shape = [batch_size, d0, .. dN-1].
tensorflow.keras.losses.mse
tf.keras.losses.MSLE View source on GitHub Computes the mean squared logarithmic error between y_true and y_pred. View aliases Main aliases tf.keras.losses.mean_squared_logarithmic_error, tf.keras.losses.msle, tf.keras.metrics.MSLE, tf.keras.metrics.mean_squared_logarithmic_error, tf.keras.metrics.msle, tf.losses.MSLE, tf.losses.mean_squared_logarithmic_error, tf.losses.msle, tf.metrics.MSLE, tf.metrics.mean_squared_logarithmic_error, tf.metrics.msle Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.MSLE, tf.compat.v1.keras.losses.mean_squared_logarithmic_error, tf.compat.v1.keras.losses.msle, tf.compat.v1.keras.metrics.MSLE, tf.compat.v1.keras.metrics.mean_squared_logarithmic_error, tf.compat.v1.keras.metrics.msle tf.keras.losses.MSLE( y_true, y_pred ) loss = mean(square(log(y_true + 1) - log(y_pred + 1)), axis=-1) Standalone usage: y_true = np.random.randint(0, 2, size=(2, 3)) y_pred = np.random.random(size=(2, 3)) loss = tf.keras.losses.mean_squared_logarithmic_error(y_true, y_pred) assert loss.shape == (2,) y_true = np.maximum(y_true, 1e-7) y_pred = np.maximum(y_pred, 1e-7) assert np.allclose( loss.numpy(), np.mean( np.square(np.log(y_true + 1.) - np.log(y_pred + 1.)), axis=-1)) Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. Returns Mean squared logarithmic error values. shape = [batch_size, d0, .. dN-1].
tensorflow.keras.losses.msle
tf.keras.losses.Poisson View source on GitHub Computes the Poisson loss between y_true and y_pred. Inherits From: Loss View aliases Main aliases tf.losses.Poisson Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.Poisson tf.keras.losses.Poisson( reduction=losses_utils.ReductionV2.AUTO, name='poisson' ) loss = y_pred - y_true * log(y_pred) Standalone usage: y_true = [[0., 1.], [0., 0.]] y_pred = [[1., 1.], [0., 0.]] # Using 'auto'/'sum_over_batch_size' reduction type. p = tf.keras.losses.Poisson() p(y_true, y_pred).numpy() 0.5 # Calling with 'sample_weight'. p(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy() 0.4 # Using 'sum' reduction type. p = tf.keras.losses.Poisson( reduction=tf.keras.losses.Reduction.SUM) p(y_true, y_pred).numpy() 0.999 # Using 'none' reduction type. p = tf.keras.losses.Poisson( reduction=tf.keras.losses.Reduction.NONE) p(y_true, y_pred).numpy() array([0.999, 0.], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.Poisson()) Args reduction (Optional) Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this custom training tutorial for more details. name Optional name for the op. Defaults to 'poisson'. Methods from_config View source @classmethod from_config( config ) Instantiates a Loss from its config (output of get_config()). Args config Output of get_config(). Returns A Loss instance. get_config View source get_config() Returns the config dictionary for a Loss instance. __call__ View source __call__( y_true, y_pred, sample_weight=None ) Invokes the Loss instance. Args y_true Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1] y_pred The predicted values. shape = [batch_size, d0, .. dN] sample_weight Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each loss element of y_pred is scaled by the corresponding value of sample_weight. (Note ondN-1: all loss functions reduce by 1 dimension, usually axis=-1.) Returns Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.) Raises ValueError If the shape of sample_weight is invalid.
tensorflow.keras.losses.poisson
tf.keras.losses.Reduction Types of loss reduction. View aliases Main aliases tf.losses.Reduction Contains the following values: AUTO: Indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, we expect the reduction value to be SUM or NONE. Using AUTO in that case will raise an error. NONE: Weighted losses with one dimension reduced (axis=-1, or the axis specified by the loss function). When this reduction type is used with built-in Keras training loops like fit/evaluate, the unreduced vector loss is passed to the optimizer but the reported loss will be a scalar value. SUM: Scalar sum of weighted losses. SUM_OVER_BATCH_SIZE: Scalar SUM divided by number of elements in losses. This reduction type is not supported when used with tf.distribute.Strategy outside of built-in training loops like tf.keras compile/fit. You can implement 'SUM_OVER_BATCH_SIZE' using global batch size like: with strategy.scope(): loss_obj = tf.keras.losses.CategoricalCrossentropy( reduction=tf.keras.losses.Reduction.NONE) .... loss = tf.reduce_sum(loss_obj(labels, predictions)) * (1. / global_batch_size) Please see the custom training guide for more details on this. Methods all View source @classmethod all() validate View source @classmethod validate( key ) Class Variables AUTO 'auto' NONE 'none' SUM 'sum' SUM_OVER_BATCH_SIZE 'sum_over_batch_size'
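A numeric sketch of how three of these values relate on the same inputs (MeanSquaredError is an arbitrary choice):

import tensorflow as tf

y_true = [[0., 1.], [0., 0.]]
y_pred = [[1., 1.], [1., 0.]]
for reduction in (tf.keras.losses.Reduction.NONE,
                  tf.keras.losses.Reduction.SUM,
                  tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE):
  mse = tf.keras.losses.MeanSquaredError(reduction=reduction)
  print(reduction, mse(y_true, y_pred).numpy())
# none                -> [0.5 0.5]  (one loss value per sample)
# sum                 -> 1.0        (sum of that vector)
# sum_over_batch_size -> 0.5        (the sum divided by the 2 samples)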
tensorflow.keras.losses.reduction
tf.keras.losses.serialize View source on GitHub Serializes loss function or Loss instance. View aliases Main aliases tf.losses.serialize Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.serialize tf.keras.losses.serialize( loss ) Arguments loss A Keras Loss instance or a loss function. Returns Loss configuration dictionary.
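A sketch of a round-trip, assuming the companion tf.keras.losses.deserialize function (documented elsewhere); the exact layout of the config dict is an implementation detail:

import tensorflow as tf

loss = tf.keras.losses.MeanSquaredError(reduction=tf.keras.losses.Reduction.SUM)
config = tf.keras.losses.serialize(loss)
# config is a dict along the lines of {'class_name': 'MeanSquaredError', 'config': {...}}
restored = tf.keras.losses.deserialize(config)
print(type(restored).__name__, restored.get_config()['reduction'])  # MeanSquaredError sum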
tensorflow.keras.losses.serialize
tf.keras.losses.SparseCategoricalCrossentropy View source on GitHub Computes the crossentropy loss between the labels and predictions. Inherits From: Loss View aliases Main aliases tf.losses.SparseCategoricalCrossentropy Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.SparseCategoricalCrossentropy tf.keras.losses.SparseCategoricalCrossentropy( from_logits=False, reduction=losses_utils.ReductionV2.AUTO, name='sparse_categorical_crossentropy' ) Use this crossentropy loss function when there are two or more label classes. We expect labels to be provided as integers. If you want to provide labels using one-hot representation, please use CategoricalCrossentropy loss. There should be # classes floating point values per feature for y_pred and a single floating point value per feature for y_true. In the snippet below, there is a single floating point value per example for y_true and # classes floating point values per example for y_pred. The shape of y_true is [batch_size] and the shape of y_pred is [batch_size, num_classes]. Standalone usage: y_true = [1, 2] y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]] # Using 'auto'/'sum_over_batch_size' reduction type. scce = tf.keras.losses.SparseCategoricalCrossentropy() scce(y_true, y_pred).numpy() 1.177 # Calling with 'sample_weight'. scce(y_true, y_pred, sample_weight=tf.constant([0.3, 0.7])).numpy() 0.814 # Using 'sum' reduction type. scce = tf.keras.losses.SparseCategoricalCrossentropy( reduction=tf.keras.losses.Reduction.SUM) scce(y_true, y_pred).numpy() 2.354 # Using 'none' reduction type. scce = tf.keras.losses.SparseCategoricalCrossentropy( reduction=tf.keras.losses.Reduction.NONE) scce(y_true, y_pred).numpy() array([0.0513, 2.303], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.SparseCategoricalCrossentropy()) Args from_logits Whether y_pred is expected to be a logits tensor. By default, we assume that y_pred encodes a probability distribution. Note: Using from_logits=True may be more numerically stable. reduction (Optional) Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this custom training tutorial for more details. name Optional name for the op. Defaults to 'sparse_categorical_crossentropy'. Methods from_config View source @classmethod from_config( config ) Instantiates a Loss from its config (output of get_config()). Args config Output of get_config(). Returns A Loss instance. get_config View source get_config() Returns the config dictionary for a Loss instance. __call__ View source __call__( y_true, y_pred, sample_weight=None ) Invokes the Loss instance. Args y_true Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1] y_pred The predicted values. shape = [batch_size, d0, .. dN] sample_weight Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each loss element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all loss functions reduce by 1 dimension, usually axis=-1.) Returns Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.) Raises ValueError If the shape of sample_weight is invalid.
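Since the note above suggests from_logits=True can be more numerically stable, a sketch of the logits form (the raw scores are made up); applying softmax first and using the default from_logits=False gives the same value:

import tensorflow as tf

y_true = [1, 2]
logits = [[1.0, 3.0, 0.5], [0.2, 2.0, 2.5]]  # unnormalized scores
scce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
print(scce(y_true, logits).numpy())
probs = tf.nn.softmax(logits)                # normalize, then use the default
print(tf.keras.losses.SparseCategoricalCrossentropy()(y_true, probs).numpy())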
tensorflow.keras.losses.sparsecategoricalcrossentropy
tf.keras.losses.sparse_categorical_crossentropy View source on GitHub Computes the sparse categorical crossentropy loss. View aliases Main aliases tf.keras.metrics.sparse_categorical_crossentropy, tf.losses.sparse_categorical_crossentropy, tf.metrics.sparse_categorical_crossentropy Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.sparse_categorical_crossentropy, tf.compat.v1.keras.metrics.sparse_categorical_crossentropy tf.keras.losses.sparse_categorical_crossentropy( y_true, y_pred, from_logits=False, axis=-1 ) Standalone usage: y_true = [1, 2] y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]] loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred) assert loss.shape == (2,) loss.numpy() array([0.0513, 2.303], dtype=float32) Args y_true Ground truth values. y_pred The predicted values. from_logits Whether y_pred is expected to be a logits tensor. By default, we assume that y_pred encodes a probability distribution. axis (Optional) Defaults to -1. The dimension along which the entropy is computed. Returns Sparse categorical crossentropy loss value.
tensorflow.keras.losses.sparse_categorical_crossentropy
tf.keras.losses.SquaredHinge View source on GitHub Computes the squared hinge loss between y_true and y_pred. Inherits From: Loss View aliases Main aliases tf.losses.SquaredHinge Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.SquaredHinge tf.keras.losses.SquaredHinge( reduction=losses_utils.ReductionV2.AUTO, name='squared_hinge' ) loss = square(maximum(1 - y_true * y_pred, 0)) y_true values are expected to be -1 or 1. If binary (0 or 1) labels are provided we will convert them to -1 or 1. Standalone usage: y_true = [[0., 1.], [0., 0.]] y_pred = [[0.6, 0.4], [0.4, 0.6]] # Using 'auto'/'sum_over_batch_size' reduction type. h = tf.keras.losses.SquaredHinge() h(y_true, y_pred).numpy() 1.86 # Calling with 'sample_weight'. h(y_true, y_pred, sample_weight=[1, 0]).numpy() 0.73 # Using 'sum' reduction type. h = tf.keras.losses.SquaredHinge( reduction=tf.keras.losses.Reduction.SUM) h(y_true, y_pred).numpy() 3.72 # Using 'none' reduction type. h = tf.keras.losses.SquaredHinge( reduction=tf.keras.losses.Reduction.NONE) h(y_true, y_pred).numpy() array([1.46, 2.26], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.SquaredHinge()) Args reduction (Optional) Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this custom training tutorial for more details. name Optional name for the op. Defaults to 'squared_hinge'. Methods from_config View source @classmethod from_config( config ) Instantiates a Loss from its config (output of get_config()). Args config Output of get_config(). Returns A Loss instance. get_config View source get_config() Returns the config dictionary for a Loss instance. __call__ View source __call__( y_true, y_pred, sample_weight=None ) Invokes the Loss instance. Args y_true Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1] y_pred The predicted values. shape = [batch_size, d0, .. dN] sample_weight Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each loss element of y_pred is scaled by the corresponding value of sample_weight. (Note ondN-1: all loss functions reduce by 1 dimension, usually axis=-1.) Returns Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.) Raises ValueError If the shape of sample_weight is invalid.
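A minimal sketch of the binary-label conversion described above: 0/1 labels and the equivalent -1/1 labels yield the same loss.

import tensorflow as tf

y_pred = [[0.6, 0.4], [0.4, 0.6]]
h = tf.keras.losses.SquaredHinge()
print(h([[0., 1.], [0., 0.]], y_pred).numpy())     # 0/1 labels, converted internally
print(h([[-1., 1.], [-1., -1.]], y_pred).numpy())  # same labels expressed as -1/1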
tensorflow.keras.losses.squaredhinge
tf.keras.losses.squared_hinge View source on GitHub Computes the squared hinge loss between y_true and y_pred. View aliases Main aliases tf.keras.metrics.squared_hinge, tf.losses.squared_hinge, tf.metrics.squared_hinge Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.squared_hinge, tf.compat.v1.keras.metrics.squared_hinge tf.keras.losses.squared_hinge( y_true, y_pred ) loss = mean(square(maximum(1 - y_true * y_pred, 0)), axis=-1) Standalone usage: y_true = np.random.choice([-1, 1], size=(2, 3)) y_pred = np.random.random(size=(2, 3)) loss = tf.keras.losses.squared_hinge(y_true, y_pred) assert loss.shape == (2,) assert np.array_equal( loss.numpy(), np.mean(np.square(np.maximum(1. - y_true * y_pred, 0.)), axis=-1)) Args y_true The ground truth values. y_true values are expected to be -1 or 1. If binary (0 or 1) labels are provided we will convert them to -1 or 1. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. Returns Squared hinge loss values. shape = [batch_size, d0, .. dN-1].
tensorflow.keras.losses.squared_hinge
Module: tf.keras.metrics Built-in metrics. View aliases Main aliases tf.metrics Classes class AUC: Computes the approximate AUC (Area under the curve) via a Riemann sum. class Accuracy: Calculates how often predictions equal labels. class BinaryAccuracy: Calculates how often predictions match binary labels. class BinaryCrossentropy: Computes the crossentropy metric between the labels and predictions. class CategoricalAccuracy: Calculates how often predictions match one-hot labels. class CategoricalCrossentropy: Computes the crossentropy metric between the labels and predictions. class CategoricalHinge: Computes the categorical hinge metric between y_true and y_pred. class CosineSimilarity: Computes the cosine similarity between the labels and predictions. class FalseNegatives: Calculates the number of false negatives. class FalsePositives: Calculates the number of false positives. class Hinge: Computes the hinge metric between y_true and y_pred. class KLDivergence: Computes Kullback-Leibler divergence metric between y_true and y_pred. class LogCoshError: Computes the logarithm of the hyperbolic cosine of the prediction error. class Mean: Computes the (weighted) mean of the given values. class MeanAbsoluteError: Computes the mean absolute error between the labels and predictions. class MeanAbsolutePercentageError: Computes the mean absolute percentage error between y_true and y_pred. class MeanIoU: Computes the mean Intersection-Over-Union metric. class MeanRelativeError: Computes the mean relative error by normalizing with the given values. class MeanSquaredError: Computes the mean squared error between y_true and y_pred. class MeanSquaredLogarithmicError: Computes the mean squared logarithmic error between y_true and y_pred. class MeanTensor: Computes the element-wise (weighted) mean of the given tensors. class Metric: Encapsulates metric logic and state. class Poisson: Computes the Poisson metric between y_true and y_pred. class Precision: Computes the precision of the predictions with respect to the labels. class PrecisionAtRecall: Computes best precision where recall is >= specified value. class Recall: Computes the recall of the predictions with respect to the labels. class RecallAtPrecision: Computes best recall where precision is >= specified value. class RootMeanSquaredError: Computes root mean squared error metric between y_true and y_pred. class SensitivityAtSpecificity: Computes best sensitivity where specificity is >= specified value. class SparseCategoricalAccuracy: Calculates how often predictions match integer labels. class SparseCategoricalCrossentropy: Computes the crossentropy metric between the labels and predictions. class SparseTopKCategoricalAccuracy: Computes how often integer targets are in the top K predictions. class SpecificityAtSensitivity: Computes best specificity where sensitivity is >= specified value. class SquaredHinge: Computes the squared hinge metric between y_true and y_pred. class Sum: Computes the (weighted) sum of the given values. class TopKCategoricalAccuracy: Computes how often targets are in the top K predictions. class TrueNegatives: Calculates the number of true negatives. class TruePositives: Calculates the number of true positives. Functions KLD(...): Computes Kullback-Leibler divergence loss between y_true and y_pred. MAE(...): Computes the mean absolute error between labels and predictions. MAPE(...): Computes the mean absolute percentage error between y_true and y_pred. MSE(...): Computes the mean squared error between labels and predictions. MSLE(...): Computes the mean squared logarithmic error between y_true and y_pred. binary_accuracy(...): Calculates how often predictions match binary labels. binary_crossentropy(...): Computes the binary crossentropy loss. categorical_accuracy(...): Calculates how often predictions match one-hot labels. categorical_crossentropy(...): Computes the categorical crossentropy loss. deserialize(...): Deserializes a serialized metric class/function instance. get(...): Retrieves a Keras metric as a function/Metric class instance. hinge(...): Computes the hinge loss between y_true and y_pred. kl_divergence(...): Computes Kullback-Leibler divergence loss between y_true and y_pred. kld(...): Computes Kullback-Leibler divergence loss between y_true and y_pred. kullback_leibler_divergence(...): Computes Kullback-Leibler divergence loss between y_true and y_pred. log_cosh(...): Logarithm of the hyperbolic cosine of the prediction error. logcosh(...): Logarithm of the hyperbolic cosine of the prediction error. mae(...): Computes the mean absolute error between labels and predictions. mape(...): Computes the mean absolute percentage error between y_true and y_pred. mean_absolute_error(...): Computes the mean absolute error between labels and predictions. mean_absolute_percentage_error(...): Computes the mean absolute percentage error between y_true and y_pred. mean_squared_error(...): Computes the mean squared error between labels and predictions. mean_squared_logarithmic_error(...): Computes the mean squared logarithmic error between y_true and y_pred. mse(...): Computes the mean squared error between labels and predictions. msle(...): Computes the mean squared logarithmic error between y_true and y_pred. poisson(...): Computes the Poisson loss between y_true and y_pred. serialize(...): Serializes metric function or Metric instance. sparse_categorical_accuracy(...): Calculates how often predictions match integer labels. sparse_categorical_crossentropy(...): Computes the sparse categorical crossentropy loss. sparse_top_k_categorical_accuracy(...): Computes how often integer targets are in the top K predictions. squared_hinge(...): Computes the squared hinge loss between y_true and y_pred. top_k_categorical_accuracy(...): Computes how often targets are in the top K predictions.
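All of the stateful Metric classes above share one lifecycle: update_state() accumulates across batches, result() reads the current value, and reset_states() clears the state. A minimal sketch using Mean (any metric follows the same pattern):

import tensorflow as tf

m = tf.keras.metrics.Mean()
for batch in ([1., 2.], [3., 5.]):
  m.update_state(batch)      # accumulate across batches
print(m.result().numpy())    # 2.75, the mean over all values seen
m.reset_states()             # start fresh, e.g. at an epoch boundary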
tensorflow.keras.metrics
tf.keras.metrics.Accuracy View source on GitHub Calculates how often predictions equal labels. Inherits From: Mean, Metric, Layer, Module View aliases Main aliases tf.metrics.Accuracy Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.Accuracy tf.keras.metrics.Accuracy( name='accuracy', dtype=None ) This metric creates two local variables, total and count that are used to compute the frequency with which y_pred matches y_true. This frequency is ultimately returned as binary accuracy: an idempotent operation that simply divides total by count. If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values. Args name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.Accuracy() m.update_state([[1], [2], [3], [4]], [[0], [2], [3], [4]]) m.result().numpy() 0.75 m.reset_states() m.update_state([[1], [2], [3], [4]], [[0], [2], [3], [4]], sample_weight=[1, 1, 0, 0]) m.result().numpy() 0.5 Usage with compile() API: model.compile(optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.Accuracy()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates metric statistics. y_true and y_pred should have the same shape. Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. sample_weight Optional sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each metric element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis (-1)). Returns Update op.
tensorflow.keras.metrics.accuracy
tf.keras.metrics.AUC View source on GitHub Computes the approximate AUC (Area under the curve) via a Riemann sum. Inherits From: Metric, Layer, Module View aliases Main aliases tf.metrics.AUC Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.AUC tf.keras.metrics.AUC( num_thresholds=200, curve='ROC', summation_method='interpolation', name=None, dtype=None, thresholds=None, multi_label=False, label_weights=None ) This metric creates four local variables, true_positives, true_negatives, false_positives and false_negatives that are used to compute the AUC. To discretize the AUC curve, a linearly spaced set of thresholds is used to compute pairs of recall and precision values. The area under the ROC-curve is therefore computed using the height of the recall values by the false positive rate, while the area under the PR-curve is computed using the height of the precision values by the recall. This value is ultimately returned as auc, an idempotent operation that computes the area under a discretized curve of precision versus recall values (computed using the aforementioned variables). The num_thresholds variable controls the degree of discretization, with larger numbers of thresholds more closely approximating the true AUC. The quality of the approximation may vary dramatically depending on num_thresholds. The thresholds parameter can be used to manually specify thresholds which split the predictions more evenly. For best results, predictions should be distributed approximately uniformly in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC approximation may be poor if this is not the case. Setting summation_method to 'minoring' or 'majoring' can help quantify the error in the approximation by providing a lower or upper bound estimate of the AUC. If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values. Args num_thresholds (Optional) Defaults to 200. The number of thresholds to use when discretizing the roc curve. Values must be > 1. curve (Optional) Specifies the name of the curve to be computed, 'ROC' [default] or 'PR' for the Precision-Recall-curve. summation_method (Optional) Specifies the Riemann summation method used. 'interpolation' (default) applies a mid-point summation scheme for ROC. For PR-AUC, interpolates (true/false) positives but not the ratio that is precision (see Davis & Goadrich 2006 for details); 'minoring' applies left summation for increasing intervals and right summation for decreasing intervals; 'majoring' does the opposite. name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. thresholds (Optional) A list of floating point values to use as the thresholds for discretizing the curve. If set, the num_thresholds parameter is ignored. Values should be in [0, 1]. Endpoint thresholds equal to {-epsilon, 1+epsilon} for a small positive epsilon value will be automatically included with these to correctly handle predictions equal to exactly 0 or 1. multi_label boolean indicating whether multilabel data should be treated as such, wherein AUC is computed separately for each label and then averaged across labels, or (when False) if the data should be flattened into a single label before AUC computation. In the latter case, when multilabel data is passed to AUC, each label-prediction pair is treated as an individual data point. Should be set to False for multi-class data.
label_weights (optional) list, array, or tensor of non-negative weights used to compute AUCs for multilabel data. When multi_label is True, the weights are applied to the individual label AUCs when they are averaged to produce the multi-label AUC. When it's False, they are used to weight the individual label predictions in computing the confusion matrix on the flattened data. Note that this is unlike class_weights in that class_weights weights the example depending on the value of its label, whereas label_weights depends only on the index of that label before flattening; therefore label_weights should not be used for multi-class data. Standalone usage: m = tf.keras.metrics.AUC(num_thresholds=3) m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9]) # threshold values are [0 - 1e-7, 0.5, 1 + 1e-7] # tp = [2, 1, 0], fp = [2, 0, 0], fn = [0, 1, 2], tn = [0, 2, 2] # recall = [1, 0.5, 0], fp_rate = [1, 0, 0] # auc = ((((1+0.5)/2)*(1-0))+ (((0.5+0)/2)*(0-0))) = 0.75 m.result().numpy() 0.75 m.reset_states() m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9], sample_weight=[1, 0, 0, 1]) m.result().numpy() 1.0 Usage with compile() API: model.compile(optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.AUC()]) Attributes thresholds The thresholds used for evaluating AUC. Methods interpolate_pr_auc View source interpolate_pr_auc() Interpolation formula inspired by section 4 of Davis & Goadrich 2006. https://www.biostat.wisc.edu/~page/rocpr.pdf Note here we derive & use a closed formula not present in the paper as follows: Precision = TP / (TP + FP) = TP / P Modeling all of TP (true positive), FP (false positive) and their sum P = TP + FP (predicted positive) as varying linearly within each interval [A, B] between successive thresholds, we get Precision slope = dTP / dP = (TP_B - TP_A) / (P_B - P_A) = (TP - TP_A) / (P - P_A) Precision = (TP_A + slope * (P - P_A)) / P The area within the interval is (slope / total_pos_weight) times int_A^B{Precision.dP} = int_A^B{(TP_A + slope * (P - P_A)) * dP / P} int_A^B{Precision.dP} = int_A^B{slope * dP + intercept * dP / P} where intercept = TP_A - slope * P_A = TP_B - slope * P_B, resulting in int_A^B{Precision.dP} = TP_B - TP_A + intercept * log(P_B / P_A) Bringing back the factor (slope / total_pos_weight) we'd put aside, we get slope * [dTP + intercept * log(P_B / P_A)] / total_pos_weight where dTP == TP_B - TP_A. Note that when P_A == 0 the above calculation simplifies into int_A^B{Precision.dTP} = int_A^B{slope * dTP} = slope * (TP_B - TP_A) which is really equivalent to imputing constant precision throughout the first bucket having >0 true positives. Returns pr_auc an approximation of the area under the P-R curve. reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates confusion matrix statistics. Args y_true The ground truth values. y_pred The predicted values. sample_weight Optional weighting of each example. Defaults to 1. Can be a Tensor whose rank is either 0, or the same rank as y_true, and must be broadcastable to y_true. Returns Update op.
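A sketch of the PR variant described above; with curve='PR' and the default 'interpolation' summation method, result() uses interpolate_pr_auc() instead of the ROC summation (the labels and scores here are arbitrary):

import tensorflow as tf

m = tf.keras.metrics.AUC(curve='PR')
m.update_state([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
print(m.result().numpy())  # approximate area under the precision-recall curve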
tensorflow.keras.metrics.auc
tf.keras.metrics.BinaryAccuracy View source on GitHub Calculates how often predictions match binary labels. Inherits From: Mean, Metric, Layer, Module View aliases Main aliases tf.metrics.BinaryAccuracy Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.BinaryAccuracy tf.keras.metrics.BinaryAccuracy( name='binary_accuracy', dtype=None, threshold=0.5 ) This metric creates two local variables, total and count that are used to compute the frequency with which y_pred matches y_true. This frequency is ultimately returned as binary accuracy: an idempotent operation that simply divides total by count. If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values. Args name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. threshold (Optional) Float representing the threshold for deciding whether prediction values are 1 or 0. Standalone usage: m = tf.keras.metrics.BinaryAccuracy() m.update_state([[1], [1], [0], [0]], [[0.98], [1], [0], [0.6]]) m.result().numpy() 0.75 m.reset_states() m.update_state([[1], [1], [0], [0]], [[0.98], [1], [0], [0.6]], sample_weight=[1, 0, 0, 1]) m.result().numpy() 0.5 Usage with compile() API: model.compile(optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.BinaryAccuracy()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates metric statistics. y_true and y_pred should have the same shape. Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. sample_weight Optional sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each metric element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis (-1)). Returns Update op.
tensorflow.keras.metrics.binaryaccuracy
tf.keras.metrics.BinaryCrossentropy View source on GitHub Computes the crossentropy metric between the labels and predictions. Inherits From: Mean, Metric, Layer, Module View aliases Main aliases tf.metrics.BinaryCrossentropy Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.BinaryCrossentropy tf.keras.metrics.BinaryCrossentropy( name='binary_crossentropy', dtype=None, from_logits=False, label_smoothing=0 ) This is the crossentropy metric class to be used when there are only two label classes (0 and 1). Args name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. from_logits (Optional) Whether output is expected to be a logits tensor. By default, we consider that output encodes a probability distribution. label_smoothing (Optional) Float in [0, 1]. When > 0, label values are smoothed, meaning the confidence on label values is relaxed. e.g. label_smoothing=0.2 means that we will use a value of 0.1 for label 0 and 0.9 for label 1. Standalone usage: m = tf.keras.metrics.BinaryCrossentropy() m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]]) m.result().numpy() 0.81492424 m.reset_states() m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]], sample_weight=[1, 0]) m.result().numpy() 0.9162905 Usage with compile() API: model.compile( optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.BinaryCrossentropy()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates metric statistics. y_true and y_pred should have the same shape. Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. sample_weight Optional sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each metric element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis (-1)). Returns Update op.
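A minimal sketch of from_logits=True, where raw scores are passed and a sigmoid is applied internally; the inputs here are illustrative:
m = tf.keras.metrics.BinaryCrossentropy(from_logits=True)
m.update_state([[0], [1]], [[-2.0], [2.0]])  # logits, not probabilities
print(m.result().numpy())  # roughly 0.127, since both logits fall on the correct side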
tensorflow.keras.metrics.binarycrossentropy
tf.keras.metrics.binary_accuracy View source on GitHub Calculates how often predictions match binary labels. View aliases Main aliases tf.metrics.binary_accuracy Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.binary_accuracy tf.keras.metrics.binary_accuracy( y_true, y_pred, threshold=0.5 ) Standalone usage: y_true = [[1], [1], [0], [0]] y_pred = [[1], [1], [0], [0]] m = tf.keras.metrics.binary_accuracy(y_true, y_pred) assert m.shape == (4,) m.numpy() array([1., 1., 1., 1.], dtype=float32) Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. threshold (Optional) Float representing the threshold for deciding whether prediction values are 1 or 0. Returns Binary accuracy values. shape = [batch_size, d0, .. dN-1]
tensorflow.keras.metrics.binary_accuracy
tf.keras.metrics.CategoricalAccuracy View source on GitHub Calculates how often predictions match one-hot labels. Inherits From: Mean, Metric, Layer, Module View aliases Main aliases tf.metrics.CategoricalAccuracy Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.CategoricalAccuracy tf.keras.metrics.CategoricalAccuracy( name='categorical_accuracy', dtype=None ) You can provide logits of classes as y_pred, since the argmax of logits and probabilities is the same. This metric creates two local variables, total and count that are used to compute the frequency with which y_pred matches y_true. This frequency is ultimately returned as categorical accuracy: an idempotent operation that simply divides total by count. y_pred and y_true should be passed in as vectors of probabilities, rather than as labels. If necessary, use tf.one_hot to expand y_true as a vector. If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values. Args name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.CategoricalAccuracy() m.update_state([[0, 0, 1], [0, 1, 0]], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]]) m.result().numpy() 0.5 m.reset_states() m.update_state([[0, 0, 1], [0, 1, 0]], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]], sample_weight=[0.7, 0.3]) m.result().numpy() 0.3 Usage with compile() API: model.compile( optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.CategoricalAccuracy()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates metric statistics. y_true and y_pred should have the same shape. Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. sample_weight Optional sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each metric element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis (-1)). Returns Update op.
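As noted above, integer labels must be expanded to one-hot vectors before updating the metric; a minimal sketch with tf.one_hot:
labels = tf.constant([2, 1])           # integer class indices
y_true = tf.one_hot(labels, depth=3)   # [[0, 0, 1], [0, 1, 0]]
m = tf.keras.metrics.CategoricalAccuracy()
m.update_state(y_true, [[0.1, 0.9, 0.8], [0.05, 0.95, 0]])
m.result().numpy()  # 0.5, matching the standalone example above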
tensorflow.keras.metrics.categoricalaccuracy
tf.keras.metrics.CategoricalCrossentropy View source on GitHub Computes the crossentropy metric between the labels and predictions. Inherits From: Mean, Metric, Layer, Module View aliases Main aliases tf.metrics.CategoricalCrossentropy Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.CategoricalCrossentropy tf.keras.metrics.CategoricalCrossentropy( name='categorical_crossentropy', dtype=None, from_logits=False, label_smoothing=0 ) This is the crossentropy metric class to be used when there are multiple label classes (2 or more). Here we assume that labels are given as a one_hot representation. e.g., when label values are [2, 0, 1], y_true = [[0, 0, 1], [1, 0, 0], [0, 1, 0]]. Args name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. from_logits (Optional) Whether output is expected to be a logits tensor. By default, we consider that output encodes a probability distribution. label_smoothing (Optional) Float in [0, 1]. When > 0, label values are smoothed, meaning the confidence on label values is relaxed. e.g. label_smoothing=0.2 means that we will use a value of 0.1 for label 0 and 0.9 for label 1. Standalone usage: # EPSILON = 1e-7, y = y_true, y` = y_pred # y` = clip_ops.clip_by_value(output, EPSILON, 1. - EPSILON) # y` = [[0.05, 0.95, EPSILON], [0.1, 0.8, 0.1]] # xent = -sum(y * log(y'), axis = -1) # = -((log 0.95), (log 0.1)) # = [0.051, 2.302] # Reduced xent = (0.051 + 2.302) / 2 m = tf.keras.metrics.CategoricalCrossentropy() m.update_state([[0, 1, 0], [0, 0, 1]], [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]) m.result().numpy() 1.1769392 m.reset_states() m.update_state([[0, 1, 0], [0, 0, 1]], [[0.05, 0.95, 0], [0.1, 0.8, 0.1]], sample_weight=tf.constant([0.3, 0.7])) m.result().numpy() 1.6271976 Usage with compile() API: model.compile( optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.CategoricalCrossentropy()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates metric statistics. y_true and y_pred should have the same shape. Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. sample_weight Optional sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each metric element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis (-1)). Returns Update op.
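A minimal sketch of label_smoothing; the exact value is not asserted here, but smoothing raises the crossentropy of a confidently correct prediction because target mass is moved onto the other classes:
m = tf.keras.metrics.CategoricalCrossentropy(label_smoothing=0.2)
m.update_state([[0, 1, 0]], [[0.05, 0.9, 0.05]])
print(m.result().numpy())  # larger than -log(0.9), the unsmoothed crossentropy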
tensorflow.keras.metrics.categoricalcrossentropy
tf.keras.metrics.CategoricalHinge View source on GitHub Computes the categorical hinge metric between y_true and y_pred. Inherits From: Mean, Metric, Layer, Module View aliases Main aliases tf.metrics.CategoricalHinge Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.CategoricalHinge tf.keras.metrics.CategoricalHinge( name='categorical_hinge', dtype=None ) Args name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.CategoricalHinge() m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]]) m.result().numpy() 1.4000001 m.reset_states() m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]], sample_weight=[1, 0]) m.result().numpy() 1.2 Usage with compile() API: model.compile( optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.CategoricalHinge()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates metric statistics. y_true and y_pred should have the same shape. Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. sample_weight Optional sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each metric element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis (-1)). Returns Update op.
tensorflow.keras.metrics.categoricalhinge
tf.keras.metrics.categorical_accuracy View source on GitHub Calculates how often predictions match one-hot labels. View aliases Main aliases tf.metrics.categorical_accuracy Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.categorical_accuracy tf.keras.metrics.categorical_accuracy( y_true, y_pred ) Standalone usage: y_true = [[0, 0, 1], [0, 1, 0]] y_pred = [[0.1, 0.9, 0.8], [0.05, 0.95, 0]] m = tf.keras.metrics.categorical_accuracy(y_true, y_pred) assert m.shape == (2,) m.numpy() array([0., 1.], dtype=float32) You can provide logits of classes as y_pred, since the argmax of logits and probabilities is the same. Args y_true One-hot ground truth values. y_pred The prediction values. Returns Categorical accuracy values.
tensorflow.keras.metrics.categorical_accuracy
tf.keras.metrics.CosineSimilarity View source on GitHub Computes the cosine similarity between the labels and predictions. Inherits From: Mean, Metric, Layer, Module View aliases Main aliases tf.metrics.CosineSimilarity Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.CosineSimilarity tf.keras.metrics.CosineSimilarity( name='cosine_similarity', dtype=None, axis=-1 ) cosine similarity = (a . b) / ||a|| ||b|| See: Cosine Similarity. This metric keeps the average cosine similarity between predictions and labels over a stream of data. Args name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. axis (Optional) Defaults to -1. The dimension along which the cosine similarity is computed. Standalone usage: # l2_norm(y_true) = [[0., 1.], [1./1.414, 1./1.414]] # l2_norm(y_pred) = [[1., 0.], [1./1.414, 1./1.414]] # l2_norm(y_true) . l2_norm(y_pred) = [[0., 0.], [0.5, 0.5]] # result = mean(sum(l2_norm(y_true) . l2_norm(y_pred), axis=1)) # = ((0. + 0.) + (0.5 + 0.5)) / 2 m = tf.keras.metrics.CosineSimilarity(axis=1) m.update_state([[0., 1.], [1., 1.]], [[1., 0.], [1., 1.]]) m.result().numpy() 0.49999997 m.reset_states() m.update_state([[0., 1.], [1., 1.]], [[1., 0.], [1., 1.]], sample_weight=[0.3, 0.7]) m.result().numpy() 0.6999999 Usage with compile() API: model.compile( optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.CosineSimilarity(axis=1)]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates metric statistics. y_true and y_pred should have the same shape. Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. sample_weight Optional sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each metric element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis (-1)). Returns Update op.
tensorflow.keras.metrics.cosinesimilarity
tf.keras.metrics.deserialize View source on GitHub Deserializes a serialized metric class/function instance. View aliases Main aliases tf.metrics.deserialize Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.deserialize tf.keras.metrics.deserialize( config, custom_objects=None ) Arguments config Metric configuration. custom_objects Optional dictionary mapping names (strings) to custom objects (classes and functions) to be considered during deserialization. Returns A Keras Metric instance or a metric function.
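A round-trip sketch, assuming the companion tf.keras.metrics.serialize function produces the configuration this deserializer consumes:
original = tf.keras.metrics.AUC(num_thresholds=3)
config = tf.keras.metrics.serialize(original)   # dict with class_name and config
restored = tf.keras.metrics.deserialize(config)
type(restored).__name__  # 'AUC'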
tensorflow.keras.metrics.deserialize
tf.keras.metrics.FalseNegatives View source on GitHub Calculates the number of false negatives. Inherits From: Metric, Layer, Module View aliases Main aliases tf.metrics.FalseNegatives Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.FalseNegatives tf.keras.metrics.FalseNegatives( thresholds=None, name=None, dtype=None ) If sample_weight is given, calculates the sum of the weights of false negatives. This metric creates one local variable, accumulator that is used to keep track of the number of false negatives. If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values. Args thresholds (Optional) Defaults to 0.5. A float value or a python list/tuple of float threshold values in [0, 1]. A threshold is compared with prediction values to determine the truth value of predictions (i.e., above the threshold is true, below is false). One metric value is generated for each threshold value. name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.FalseNegatives() m.update_state([0, 1, 1, 1], [0, 1, 0, 0]) m.result().numpy() 2.0 m.reset_states() m.update_state([0, 1, 1, 1], [0, 1, 0, 0], sample_weight=[0, 0, 1, 0]) m.result().numpy() 1.0 Usage with compile() API: model.compile(optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.FalseNegatives()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates the metric statistics. Args y_true The ground truth values. y_pred The predicted values. sample_weight Optional weighting of each example. Defaults to 1. Can be a Tensor whose rank is either 0, or the same rank as y_true, and must be broadcastable to y_true. Returns Update op.
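A minimal sketch of passing a list of thresholds, which yields one false-negative count per threshold:
m = tf.keras.metrics.FalseNegatives(thresholds=[0.25, 0.5, 0.75])
m.update_state([0, 1, 1, 1], [0.1, 0.3, 0.6, 0.9])
m.result().numpy()  # [0., 1., 2.]: raising the threshold misses more positives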
tensorflow.keras.metrics.falsenegatives
tf.keras.metrics.FalsePositives View source on GitHub Calculates the number of false positives. Inherits From: Metric, Layer, Module View aliases Main aliases tf.metrics.FalsePositives Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.FalsePositives tf.keras.metrics.FalsePositives( thresholds=None, name=None, dtype=None ) If sample_weight is given, calculates the sum of the weights of false positives. This metric creates one local variable, accumulator that is used to keep track of the number of false positives. If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values. Args thresholds (Optional) Defaults to 0.5. A float value or a python list/tuple of float threshold values in [0, 1]. A threshold is compared with prediction values to determine the truth value of predictions (i.e., above the threshold is true, below is false). One metric value is generated for each threshold value. name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.FalsePositives() m.update_state([0, 1, 0, 0], [0, 0, 1, 1]) m.result().numpy() 2.0 m.reset_states() m.update_state([0, 1, 0, 0], [0, 0, 1, 1], sample_weight=[0, 0, 1, 0]) m.result().numpy() 1.0 Usage with compile() API: model.compile(optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.FalsePositives()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates the metric statistics. Args y_true The ground truth values. y_pred The predicted values. sample_weight Optional weighting of each example. Defaults to 1. Can be a Tensor whose rank is either 0, or the same rank as y_true, and must be broadcastable to y_true. Returns Update op.
tensorflow.keras.metrics.falsepositives
tf.keras.metrics.get View source on GitHub Retrieves a Keras metric as a function/Metric class instance. View aliases Main aliases tf.metrics.get Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.get tf.keras.metrics.get( identifier ) The identifier may be the string name of a metric function or class. metric = tf.keras.metrics.get("categorical_crossentropy") type(metric) <class 'function'> metric = tf.keras.metrics.get("CategoricalCrossentropy") type(metric) <class '...tensorflow.python.keras.metrics.CategoricalCrossentropy'> You can also specify the config of the metric to this function by passing a dict containing class_name and config as an identifier. Also note that the class_name must map to a Metric class. identifier = {"class_name": "CategoricalCrossentropy", "config": {"from_logits": True} } metric = tf.keras.metrics.get(identifier) type(metric) <class '...tensorflow.python.keras.metrics.CategoricalCrossentropy'> Arguments identifier A metric identifier. One of None, a string name of a metric function/class, a metric configuration dictionary, a metric function, or a metric class instance. Returns A Keras metric as a function/Metric class instance. Raises ValueError If identifier cannot be interpreted.
tensorflow.keras.metrics.get
tf.keras.metrics.Hinge View source on GitHub Computes the hinge metric between y_true and y_pred. Inherits From: Mean, Metric, Layer, Module View aliases Main aliases tf.metrics.Hinge Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.Hinge tf.keras.metrics.Hinge( name='hinge', dtype=None ) y_true values are expected to be -1 or 1. If binary (0 or 1) labels are provided we will convert them to -1 or 1. Args name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.Hinge() m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]]) m.result().numpy() 1.3 m.reset_states() m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]], sample_weight=[1, 0]) m.result().numpy() 1.1 Usage with compile() API: model.compile(optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.Hinge()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates metric statistics. y_true and y_pred should have the same shape. Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. sample_weight Optional sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each metric element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis (-1)). Returns Update op.
tensorflow.keras.metrics.hinge
tf.keras.metrics.KLDivergence View source on GitHub Computes Kullback-Leibler divergence metric between y_true and y_pred. Inherits From: Mean, Metric, Layer, Module View aliases Main aliases tf.metrics.KLDivergence Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.KLDivergence tf.keras.metrics.KLDivergence( name='kullback_leibler_divergence', dtype=None ) metric = y_true * log(y_true / y_pred) Args name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.KLDivergence() m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]]) m.result().numpy() 0.45814306 m.reset_states() m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]], sample_weight=[1, 0]) m.result().numpy() 0.9162892 Usage with compile() API: model.compile(optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.KLDivergence()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates metric statistics. y_true and y_pred should have the same shape. Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. sample_weight Optional sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each metric element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis (-1)). Returns Update op.
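A NumPy check of the formula above, reproducing the first standalone result; it assumes (as Keras does internally) that y_true and y_pred are clipped to [EPSILON, 1] before the log:
import numpy as np
eps = 1e-7
y_true = np.clip(np.array([[0., 1.], [0., 0.]]), eps, 1)
y_pred = np.clip(np.array([[0.6, 0.4], [0.4, 0.6]]), eps, 1)
kl = np.sum(y_true * np.log(y_true / y_pred), axis=-1)
kl.mean()  # ~0.458, matching 0.45814306 above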
tensorflow.keras.metrics.kldivergence
tf.keras.metrics.LogCoshError View source on GitHub Computes the logarithm of the hyperbolic cosine of the prediction error. Inherits From: Mean, Metric, Layer, Module View aliases Main aliases tf.metrics.LogCoshError Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.LogCoshError tf.keras.metrics.LogCoshError( name='logcosh', dtype=None ) logcosh = log((exp(x) + exp(-x))/2), where x is the error (y_pred - y_true) Args name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.LogCoshError() m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]]) m.result().numpy() 0.10844523 m.reset_states() m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]], sample_weight=[1, 0]) m.result().numpy() 0.21689045 Usage with compile() API: model.compile(optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.LogCoshError()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates metric statistics. y_true and y_pred should have the same shape. Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. sample_weight Optional sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each metric element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis (-1)). Returns Update op.
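A NumPy check of logcosh = log((exp(x) + exp(-x)) / 2), reproducing the first standalone result:
import numpy as np
y_true = np.array([[0., 1.], [0., 0.]])
y_pred = np.array([[1., 1.], [0., 0.]])
x = y_pred - y_true                            # the prediction error
np.mean(np.log((np.exp(x) + np.exp(-x)) / 2))  # ~0.1084, matching above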
tensorflow.keras.metrics.logcosherror
tf.keras.metrics.Mean View source on GitHub Computes the (weighted) mean of the given values. Inherits From: Metric, Layer, Module View aliases Main aliases tf.metrics.Mean Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.Mean tf.keras.metrics.Mean( name='mean', dtype=None ) For example, if values is [1, 3, 5, 7] then the mean is 4. If the weights were specified as [1, 1, 0, 0] then the mean would be 2. This metric creates two variables, total and count that are used to compute the average of values. This average is ultimately returned as mean which is an idempotent operation that simply divides total by count. If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values. Args name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.Mean() m.update_state([1, 3, 5, 7]) m.result().numpy() 4.0 m.reset_states() m.update_state([1, 3, 5, 7], sample_weight=[1, 1, 0, 0]) m.result().numpy() 2.0 Usage with compile() API: model.add_metric(tf.keras.metrics.Mean(name='mean_1')(outputs)) model.compile(optimizer='sgd', loss='mse') Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( values, sample_weight=None ) Accumulates statistics for computing the metric. Args values Per-example value. sample_weight Optional weighting of each example. Defaults to 1. Returns Update op.
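Beyond compile(), Mean is commonly used to track a running loss in a custom training loop; a minimal sketch with stand-in per-batch losses:
loss_tracker = tf.keras.metrics.Mean(name='train_loss')
for batch_loss in [0.9, 0.7, 0.5]:   # stand-in values; normally a loss_fn output
    loss_tracker.update_state(batch_loss)
loss_tracker.result().numpy()        # 0.7, the running mean so far
loss_tracker.reset_states()          # typically called at the end of each epoch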
tensorflow.keras.metrics.mean
tf.keras.metrics.MeanAbsoluteError View source on GitHub Computes the mean absolute error between the labels and predictions. Inherits From: Mean, Metric, Layer, Module View aliases Main aliases tf.metrics.MeanAbsoluteError Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.MeanAbsoluteError tf.keras.metrics.MeanAbsoluteError( name='mean_absolute_error', dtype=None ) Args name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.MeanAbsoluteError() m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]]) m.result().numpy() 0.25 m.reset_states() m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]], sample_weight=[1, 0]) m.result().numpy() 0.5 Usage with compile() API: model.compile( optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.MeanAbsoluteError()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates metric statistics. y_true and y_pred should have the same shape. Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. sample_weight Optional sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each metric element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis (-1)). Returns Update op.
tensorflow.keras.metrics.meanabsoluteerror
tf.keras.metrics.MeanAbsolutePercentageError View source on GitHub Computes the mean absolute percentage error between y_true and y_pred. Inherits From: Mean, Metric, Layer, Module View aliases Main aliases tf.metrics.MeanAbsolutePercentageError Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.MeanAbsolutePercentageError tf.keras.metrics.MeanAbsolutePercentageError( name='mean_absolute_percentage_error', dtype=None ) Args name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.MeanAbsolutePercentageError() m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]]) m.result().numpy() 250000000.0 m.reset_states() m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]], sample_weight=[1, 0]) m.result().numpy() 500000000.0 Usage with compile() API: model.compile( optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.MeanAbsolutePercentageError()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates metric statistics. y_true and y_pred should have the same shape. Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. sample_weight Optional sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each metric element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis (-1)). Returns Update op.
tensorflow.keras.metrics.meanabsolutepercentageerror
tf.keras.metrics.MeanIoU View source on GitHub Computes the mean Intersection-Over-Union metric. Inherits From: Metric, Layer, Module View aliases Main aliases tf.metrics.MeanIoU Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.MeanIoU tf.keras.metrics.MeanIoU( num_classes, name=None, dtype=None ) Mean Intersection-Over-Union is a common evaluation metric for semantic image segmentation, which first computes the IOU for each semantic class and then computes the average over classes. IOU is defined as follows: IOU = true_positive / (true_positive + false_positive + false_negative). The predictions are accumulated in a confusion matrix, weighted by sample_weight and the metric is then calculated from it. If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values. Args num_classes The possible number of labels the prediction task can have. This value must be provided, since a confusion matrix of dimension = [num_classes, num_classes] will be allocated. name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: # cm = [[1, 1], # [1, 1]] # sum_row = [2, 2], sum_col = [2, 2], true_positives = [1, 1] # iou = true_positives / (sum_row + sum_col - true_positives) # result = (1 / (2 + 2 - 1) + 1 / (2 + 2 - 1)) / 2 = 0.33 m = tf.keras.metrics.MeanIoU(num_classes=2) m.update_state([0, 0, 1, 1], [0, 1, 0, 1]) m.result().numpy() 0.33333334 m.reset_states() m.update_state([0, 0, 1, 1], [0, 1, 0, 1], sample_weight=[0.3, 0.3, 0.3, 0.1]) m.result().numpy() 0.23809525 Usage with compile() API: model.compile( optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.MeanIoU(num_classes=2)]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Compute the mean intersection-over-union via the confusion matrix. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates the confusion matrix statistics. Args y_true The ground truth values. y_pred The predicted values. sample_weight Optional weighting of each example. Defaults to 1. Can be a Tensor whose rank is either 0, or the same rank as y_true, and must be broadcastable to y_true. Returns Update op.
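Note that MeanIoU expects class indices rather than probabilities, so dense predictions are usually reduced with tf.argmax first; a minimal sketch:
y_prob = tf.constant([[0.9, 0.1], [0.4, 0.6], [0.7, 0.3], [0.2, 0.8]])
y_pred = tf.argmax(y_prob, axis=-1)   # [0, 1, 0, 1]
m = tf.keras.metrics.MeanIoU(num_classes=2)
m.update_state([0, 0, 1, 1], y_pred)
m.result().numpy()  # 0.33333334, as in the standalone example above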
tensorflow.keras.metrics.meaniou
tf.keras.metrics.MeanRelativeError View source on GitHub Computes the mean relative error by normalizing with the given values. Inherits From: Mean, Metric, Layer, Module View aliases Main aliases tf.metrics.MeanRelativeError Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.MeanRelativeError tf.keras.metrics.MeanRelativeError( normalizer, name=None, dtype=None ) This metric creates two local variables, total and count that are used to compute the mean relative error. This is weighted by sample_weight, and it is ultimately returned as mean_relative_error: an idempotent operation that simply divides total by count. If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values. Args normalizer The normalizer values with same shape as predictions. name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.MeanRelativeError(normalizer=[1, 3, 2, 3]) m.update_state([1, 3, 2, 3], [2, 4, 6, 8]) # metric = mean(|y_pred - y_true| / normalizer) # = mean([1, 1, 4, 5] / [1, 3, 2, 3]) = mean([1, 1/3, 2, 5/3]) # = 5/4 = 1.25 m.result().numpy() 1.25 Usage with compile() API: model.compile( optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.MeanRelativeError(normalizer=[1, 3])]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates metric statistics. Args y_true The ground truth values. y_pred The predicted values. sample_weight Optional weighting of each example. Defaults to 1. Can be a Tensor whose rank is either 0, or the same rank as y_true, and must be broadcastable to y_true. Returns Update op.
tensorflow.keras.metrics.meanrelativeerror
tf.keras.metrics.MeanSquaredError View source on GitHub Computes the mean squared error between y_true and y_pred. Inherits From: Mean, Metric, Layer, Module View aliases Main aliases tf.metrics.MeanSquaredError Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.MeanSquaredError tf.keras.metrics.MeanSquaredError( name='mean_squared_error', dtype=None ) Args name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.MeanSquaredError() m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]]) m.result().numpy() 0.25 m.reset_states() m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]], sample_weight=[1, 0]) m.result().numpy() 0.5 Usage with compile() API: model.compile( optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.MeanSquaredError()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates metric statistics. y_true and y_pred should have the same shape. Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. sample_weight Optional sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each metric element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis (-1)). Returns Update op.
tensorflow.keras.metrics.meansquarederror
tf.keras.metrics.MeanSquaredLogarithmicError View source on GitHub Computes the mean squared logarithmic error between y_true and y_pred. Inherits From: Mean, Metric, Layer, Module View aliases Main aliases tf.metrics.MeanSquaredLogarithmicError Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError tf.keras.metrics.MeanSquaredLogarithmicError( name='mean_squared_logarithmic_error', dtype=None ) Args name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.MeanSquaredLogarithmicError() m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]]) m.result().numpy() 0.12011322 m.reset_states() m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]], sample_weight=[1, 0]) m.result().numpy() 0.24022643 Usage with compile() API: model.compile( optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.MeanSquaredLogarithmicError()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates metric statistics. y_true and y_pred should have the same shape. Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. sample_weight Optional sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each metric element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis (-1)). Returns Update op.
tensorflow.keras.metrics.meansquaredlogarithmicerror
tf.keras.metrics.MeanTensor View source on GitHub Computes the element-wise (weighted) mean of the given tensors. Inherits From: Metric, Layer, Module View aliases Main aliases tf.metrics.MeanTensor Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.MeanTensor tf.keras.metrics.MeanTensor( name='mean_tensor', dtype=None ) MeanTensor returns a tensor with the same shape as the input tensors. The mean value is updated by keeping local variables total and count. The total tracks the sum of the weighted values, and count stores the sum of the weighted counts. Args name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.MeanTensor() m.update_state([0, 1, 2, 3]) m.update_state([4, 5, 6, 7]) m.result().numpy() array([2., 3., 4., 5.], dtype=float32) m.update_state([12, 10, 8, 6], sample_weight= [0, 0.2, 0.5, 1]) m.result().numpy() array([2. , 3.6363635, 4.8 , 5.3333335], dtype=float32) Attributes count total Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( values, sample_weight=None ) Accumulates statistics for computing the element-wise mean. Args values Per-example value. sample_weight Optional weighting of each example. Defaults to 1. Returns Update op.
tensorflow.keras.metrics.meantensor
tf.keras.metrics.Metric View source on GitHub Encapsulates metric logic and state. Inherits From: Layer, Module View aliases Main aliases tf.metrics.Metric Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.Metric tf.keras.metrics.Metric( name=None, dtype=None, **kwargs ) Args name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. **kwargs Additional layer keyword arguments. Standalone usage: m = SomeMetric(...) for input in ...: m.update_state(input) print('Final result: ', m.result().numpy()) Usage with compile() API: model = tf.keras.Sequential() model.add(tf.keras.layers.Dense(64, activation='relu')) model.add(tf.keras.layers.Dense(64, activation='relu')) model.add(tf.keras.layers.Dense(10, activation='softmax')) model.compile(optimizer=tf.keras.optimizers.RMSprop(0.01), loss=tf.keras.losses.CategoricalCrossentropy(), metrics=[tf.keras.metrics.CategoricalAccuracy()]) data = np.random.random((1000, 32)) labels = np.random.random((1000, 10)) dataset = tf.data.Dataset.from_tensor_slices((data, labels)) dataset = dataset.batch(32) model.fit(dataset, epochs=10) To be implemented by subclasses: __init__(): All state variables should be created in this method by calling self.add_weight() like: self.var = self.add_weight(...) update_state(): Has all updates to the state variables like: self.var.assign_add(...). result(): Computes and returns a value for the metric from the state variables. Example subclass implementation: class BinaryTruePositives(tf.keras.metrics.Metric): def __init__(self, name='binary_true_positives', **kwargs): super(BinaryTruePositives, self).__init__(name=name, **kwargs) self.true_positives = self.add_weight(name='tp', initializer='zeros') def update_state(self, y_true, y_pred, sample_weight=None): y_true = tf.cast(y_true, tf.bool) y_pred = tf.cast(y_pred, tf.bool) values = tf.logical_and(tf.equal(y_true, True), tf.equal(y_pred, True)) values = tf.cast(values, self.dtype) if sample_weight is not None: sample_weight = tf.cast(sample_weight, self.dtype) sample_weight = tf.broadcast_to(sample_weight, values.shape) values = tf.multiply(values, sample_weight) self.true_positives.assign_add(tf.reduce_sum(values)) def result(self): return self.true_positives Methods add_weight View source add_weight( name, shape=(), aggregation=tf.compat.v1.VariableAggregation.SUM, synchronization=tf.VariableSynchronization.ON_READ, initializer=None, dtype=None ) Adds state variable. Only for use by subclasses. reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source @abc.abstractmethod result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source @abc.abstractmethod update_state( *args, **kwargs ) Accumulates statistics for the metric. Note: This function is executed as a graph function in graph mode. This means: a) Operations on the same resource are executed in textual order. This should make it easier to do things like add the updated value of a variable to another, for example. b) You don't need to worry about collecting the update ops to execute. All update ops added to the graph by this function will be executed. As a result, code should generally work the same way with graph or eager execution.
Args *args **kwargs A mini-batch of inputs to the Metric.
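A minimal usage sketch for the BinaryTruePositives subclass defined above:
m = BinaryTruePositives()
m.update_state([0, 1, 1, 1], [0, 1, 0, 1])
m.result().numpy()  # 2.0: two entries are positive in both y_true and y_pred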
tensorflow.keras.metrics.metric
tf.keras.metrics.Poisson View source on GitHub Computes the Poisson metric between y_true and y_pred. Inherits From: Mean, Metric, Layer, Module View aliases Main aliases tf.metrics.Poisson Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.Poisson tf.keras.metrics.Poisson( name='poisson', dtype=None ) metric = y_pred - y_true * log(y_pred) Args name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.Poisson() m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]]) m.result().numpy() 0.49999997 m.reset_states() m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]], sample_weight=[1, 0]) m.result().numpy() 0.99999994 Usage with compile() API: model.compile(optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.Poisson()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates metric statistics. y_true and y_pred should have the same shape. Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. sample_weight Optional sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each metric element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis (-1)). Returns Update op.
tensorflow.keras.metrics.poisson
tf.keras.metrics.Precision View source on GitHub Computes the precision of the predictions with respect to the labels. Inherits From: Metric, Layer, Module View aliases Main aliases tf.metrics.Precision Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.Precision tf.keras.metrics.Precision( thresholds=None, top_k=None, class_id=None, name=None, dtype=None ) The metric creates two local variables, true_positives and false_positives that are used to compute the precision. This value is ultimately returned as precision, an idempotent operation that simply divides true_positives by the sum of true_positives and false_positives. If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values. If top_k is set, we'll calculate precision as how often on average a class among the top-k classes with the highest predicted values of a batch entry is correct and can be found in the label for that entry. If class_id is specified, we calculate precision by considering only the entries in the batch for which class_id is above the threshold and/or in the top-k highest predictions, and computing the fraction of them for which class_id is indeed a correct label. Args thresholds (Optional) A float value or a python list/tuple of float threshold values in [0, 1]. A threshold is compared with prediction values to determine the truth value of predictions (i.e., above the threshold is true, below is false). One metric value is generated for each threshold value. If neither thresholds nor top_k are set, the default is to calculate precision with thresholds=0.5. top_k (Optional) Unset by default. An int value specifying the top-k predictions to consider when calculating precision. class_id (Optional) Integer class ID for which we want binary metrics. This must be in the half-open interval [0, num_classes), where num_classes is the last dimension of predictions. name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.Precision() m.update_state([0, 1, 1, 1], [1, 0, 1, 1]) m.result().numpy() 0.6666667 m.reset_states() m.update_state([0, 1, 1, 1], [1, 0, 1, 1], sample_weight=[0, 0, 1, 0]) m.result().numpy() 1.0 # With top_k=2, only the two highest predictions count; all predictions tie # here, so the first two entries (labels 0, 0) are used and precision is 0 m = tf.keras.metrics.Precision(top_k=2) m.update_state([0, 0, 1, 1], [1, 1, 1, 1]) m.result().numpy() 0.0 # With top_k=4, all four predictions count: 2 of the 4 labels are 1, so # precision is 0.5 m = tf.keras.metrics.Precision(top_k=4) m.update_state([0, 0, 1, 1], [1, 1, 1, 1]) m.result().numpy() 0.5 Usage with compile() API: model.compile(optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.Precision()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates true positive and false positive statistics. Args y_true The ground truth values, with the same dimensions as y_pred. Will be cast to bool. y_pred The predicted values. Each element must be in the range [0, 1]. sample_weight Optional weighting of each example. Defaults to 1.
Can be a Tensor whose rank is either 0, or the same rank as y_true, and must be broadcastable to y_true. Returns Update op.
tensorflow.keras.metrics.precision
tf.keras.metrics.PrecisionAtRecall Computes best precision where recall is >= specified value. Inherits From: Metric, Layer, Module View aliases Main aliases tf.metrics.PrecisionAtRecall Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.PrecisionAtRecall tf.keras.metrics.PrecisionAtRecall( recall, num_thresholds=200, name=None, dtype=None ) This metric creates four local variables, true_positives, true_negatives, false_positives and false_negatives that are used to compute the precision at the given recall. The threshold for the given recall value is computed and used to evaluate the corresponding precision. If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values. Args recall A scalar value in range [0, 1]. num_thresholds (Optional) Defaults to 200. The number of thresholds to use for matching the given recall. name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.PrecisionAtRecall(0.5) m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8]) m.result().numpy() 0.5 m.reset_states() m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8], sample_weight=[2, 2, 2, 1, 1]) m.result().numpy() 0.33333333 Usage with compile() API: model.compile( optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.PrecisionAtRecall(recall=0.8)]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates confusion matrix statistics. Args y_true The ground truth values. y_pred The predicted values. sample_weight Optional weighting of each example. Defaults to 1. Can be a Tensor whose rank is either 0, or the same rank as y_true, and must be broadcastable to y_true. Returns Update op.
tensorflow.keras.metrics.precisionatrecall
tf.keras.metrics.Recall View source on GitHub Computes the recall of the predictions with respect to the labels. Inherits From: Metric, Layer, Module View aliases Main aliases tf.metrics.Recall Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.Recall tf.keras.metrics.Recall( thresholds=None, top_k=None, class_id=None, name=None, dtype=None ) This metric creates two local variables, true_positives and false_negatives, that are used to compute the recall. This value is ultimately returned as recall, an idempotent operation that simply divides true_positives by the sum of true_positives and false_negatives. If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values. If top_k is set, recall will be computed as how often on average a class among the labels of a batch entry is in the top-k predictions. If class_id is specified, we calculate recall by considering only the entries in the batch for which class_id is in the label, and computing the fraction of them for which class_id is above the threshold and/or in the top-k predictions. Args thresholds (Optional) A float value or a python list/tuple of float threshold values in [0, 1]. A threshold is compared with prediction values to determine the truth value of predictions (i.e., above the threshold is true, below is false). One metric value is generated for each threshold value. If neither thresholds nor top_k are set, the default is to calculate recall with thresholds=0.5. top_k (Optional) Unset by default. An int value specifying the top-k predictions to consider when calculating recall. class_id (Optional) Integer class ID for which we want binary metrics. This must be in the half-open interval [0, num_classes), where num_classes is the last dimension of predictions. name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.Recall() m.update_state([0, 1, 1, 1], [1, 0, 1, 1]) m.result().numpy() 0.6666667 m.reset_states() m.update_state([0, 1, 1, 1], [1, 0, 1, 1], sample_weight=[0, 0, 1, 0]) m.result().numpy() 1.0 Usage with compile() API: model.compile(optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.Recall()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates true positive and false negative statistics. Args y_true The ground truth values, with the same dimensions as y_pred. Will be cast to bool. y_pred The predicted values. Each element must be in the range [0, 1]. sample_weight Optional weighting of each example. Defaults to 1. Can be a Tensor whose rank is either 0, or the same rank as y_true, and must be broadcastable to y_true. Returns Update op.
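As an illustrative sketch with made-up values, top_k restricts which predictions can count as positive; this assumes the default 0.5 threshold still applies within the top-k entries:
m = tf.keras.metrics.Recall(top_k=2)
m.update_state([0, 1, 1, 1], [0.9, 0.8, 0.2, 0.1])
# Only indices 0 and 1 are in the top-2; index 1 is the sole true positive,
# while the actual positives at indices 2 and 3 are missed.
m.result().numpy()  # approximately 0.33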
tensorflow.keras.metrics.recall
tf.keras.metrics.RecallAtPrecision Computes best recall where precision is >= specified value. Inherits From: Metric, Layer, Module View aliases Main aliases tf.metrics.RecallAtPrecision Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.RecallAtPrecision tf.keras.metrics.RecallAtPrecision( precision, num_thresholds=200, name=None, dtype=None ) For a given score-label distribution, the required precision might not be achievable; in this case, 0.0 is returned as recall. This metric creates four local variables, true_positives, true_negatives, false_positives and false_negatives that are used to compute the recall at the given precision. The threshold for the given precision value is computed and used to evaluate the corresponding recall. If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values. Args precision A scalar value in range [0, 1]. num_thresholds (Optional) Defaults to 200. The number of thresholds to use for matching the given precision. name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.RecallAtPrecision(0.8) m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9]) m.result().numpy() 0.5 m.reset_states() m.update_state([0, 0, 1, 1], [0, 0.5, 0.3, 0.9], sample_weight=[1, 0, 0, 1]) m.result().numpy() 1.0 Usage with compile() API: model.compile( optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.RecallAtPrecision(precision=0.8)]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates confusion matrix statistics. Args y_true The ground truth values. y_pred The predicted values. sample_weight Optional weighting of each example. Defaults to 1. Can be a Tensor whose rank is either 0, or the same rank as y_true, and must be broadcastable to y_true. Returns Update op.
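A hypothetical example of the unachievable case described above: if every negative outscores every positive, precision 0.9 is never reached and the metric falls back to 0.0:
m = tf.keras.metrics.RecallAtPrecision(0.9)
m.update_state([1, 0], [0.8, 0.9])
m.result().numpy()  # 0.0, since no threshold yields precision >= 0.9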
tensorflow.keras.metrics.recallatprecision
tf.keras.metrics.RootMeanSquaredError View source on GitHub Computes root mean squared error metric between y_true and y_pred. Inherits From: Mean, Metric, Layer, Module View aliases Main aliases tf.metrics.RootMeanSquaredError Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.RootMeanSquaredError tf.keras.metrics.RootMeanSquaredError( name='root_mean_squared_error', dtype=None ) Standalone usage: m = tf.keras.metrics.RootMeanSquaredError() m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]]) m.result().numpy() 0.5 m.reset_states() m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]], sample_weight=[1, 0]) m.result().numpy() 0.70710677 Usage with compile() API: model.compile( optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.RootMeanSquaredError()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates root mean squared error statistics. Args y_true The ground truth values. y_pred The predicted values. sample_weight Optional weighting of each example. Defaults to 1. Can be a Tensor whose rank is either 0, or the same rank as y_true, and must be broadcastable to y_true. Returns Update op.
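The first standalone result can be checked by hand; root mean squared error is simply the square root of the mean squared difference:
import numpy as np
y_true = np.array([[0., 1.], [0., 0.]])
y_pred = np.array([[1., 1.], [0., 0.]])
np.sqrt(np.mean((y_true - y_pred) ** 2))  # 0.5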
tensorflow.keras.metrics.rootmeansquarederror
tf.keras.metrics.SensitivityAtSpecificity View source on GitHub Computes best sensitivity where specificity is >= specified value. Inherits From: Metric, Layer, Module View aliases Main aliases tf.metrics.SensitivityAtSpecificity Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.SensitivityAtSpecificity tf.keras.metrics.SensitivityAtSpecificity( specificity, num_thresholds=200, name=None, dtype=None ) This metric computes the sensitivity at a given specificity. Sensitivity measures the proportion of actual positives that are correctly identified as such (tp / (tp + fn)). Specificity measures the proportion of actual negatives that are correctly identified as such (tn / (tn + fp)). This metric creates four local variables, true_positives, true_negatives, false_positives and false_negatives that are used to compute the sensitivity at the given specificity. The threshold for the given specificity value is computed and used to evaluate the corresponding sensitivity. If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values. For additional information about specificity and sensitivity, see https://en.wikipedia.org/wiki/Sensitivity_and_specificity. Args specificity A scalar value in range [0, 1]. num_thresholds (Optional) Defaults to 200. The number of thresholds to use for matching the given specificity. name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.SensitivityAtSpecificity(0.5) m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8]) m.result().numpy() 0.5 m.reset_states() m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8], sample_weight=[1, 1, 2, 2, 1]) m.result().numpy() 0.333333 Usage with compile() API: model.compile( optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.SensitivityAtSpecificity(specificity=0.5)]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates confusion matrix statistics. Args y_true The ground truth values. y_pred The predicted values. sample_weight Optional weighting of each example. Defaults to 1. Can be a Tensor whose rank is either 0, or the same rank as y_true, and must be broadcastable to y_true. Returns Update op.
tensorflow.keras.metrics.sensitivityatspecificity
tf.keras.metrics.serialize View source on GitHub Serializes metric function or Metric instance. View aliases Main aliases tf.metrics.serialize Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.serialize tf.keras.metrics.serialize( metric ) Arguments metric A Keras Metric instance or a metric function. Returns Metric configuration dictionary.
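A minimal sketch of a round trip through serialize and its counterpart tf.keras.metrics.deserialize; the metric name 'prec' is an arbitrary choice:
cfg = tf.keras.metrics.serialize(tf.keras.metrics.Precision(name='prec'))
restored = tf.keras.metrics.deserialize(cfg)
# restored is a new Precision instance with the same configuration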
tensorflow.keras.metrics.serialize
tf.keras.metrics.SparseCategoricalAccuracy View source on GitHub Calculates how often predictions match integer labels. Inherits From: Mean, Metric, Layer, Module View aliases Main aliases tf.metrics.SparseCategoricalAccuracy Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.SparseCategoricalAccuracy tf.keras.metrics.SparseCategoricalAccuracy( name='sparse_categorical_accuracy', dtype=None ) acc = np.dot(sample_weight, np.equal(y_true, np.argmax(y_pred, axis=1))) You can provide logits of classes as y_pred, since the argmax of logits and probabilities is the same. This metric creates two local variables, total and count, that are used to compute the frequency with which y_pred matches y_true. This frequency is ultimately returned as sparse categorical accuracy: an idempotent operation that simply divides total by count. If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values. Args name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.SparseCategoricalAccuracy() m.update_state([[2], [1]], [[0.1, 0.6, 0.3], [0.05, 0.95, 0]]) m.result().numpy() 0.5 m.reset_states() m.update_state([[2], [1]], [[0.1, 0.6, 0.3], [0.05, 0.95, 0]], sample_weight=[0.7, 0.3]) m.result().numpy() 0.3 Usage with compile() API: model.compile( optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates metric statistics. y_true and y_pred should have the same shape. Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. sample_weight Optional sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each metric element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis (-1)). Returns Update op.
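The unweighted formula above reduces to an argmax comparison; the first standalone result can be reproduced with NumPy:
import numpy as np
y_true = np.array([2, 1])
y_pred = np.array([[0.1, 0.6, 0.3], [0.05, 0.95, 0.]])
np.equal(y_true, np.argmax(y_pred, axis=1)).mean()  # 0.5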
tensorflow.keras.metrics.sparsecategoricalaccuracy
tf.keras.metrics.SparseCategoricalCrossentropy View source on GitHub Computes the crossentropy metric between the labels and predictions. Inherits From: Mean, Metric, Layer, Module View aliases Main aliases tf.metrics.SparseCategoricalCrossentropy Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.SparseCategoricalCrossentropy tf.keras.metrics.SparseCategoricalCrossentropy( name='sparse_categorical_crossentropy', dtype=None, from_logits=False, axis=-1 ) Use this crossentropy metric when there are two or more label classes. We expect labels to be provided as integers. If you want to provide labels using one-hot representation, please use the CategoricalCrossentropy metric. There should be # classes floating point values per feature for y_pred and a single floating point value per feature for y_true. In the snippet below, there is a single floating point value per example for y_true and # classes floating point values per example for y_pred. The shape of y_true is [batch_size] and the shape of y_pred is [batch_size, num_classes]. Args name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. from_logits (Optional) Whether output is expected to be a logits tensor. By default, we consider that output encodes a probability distribution. axis (Optional) Defaults to -1. The dimension along which the metric is computed. Standalone usage: # y_true = one_hot(y_true) = [[0, 1, 0], [0, 0, 1]] # logits = log(y_pred) # softmax = exp(logits) / sum(exp(logits), axis=-1) # softmax = [[0.05, 0.95, EPSILON], [0.1, 0.8, 0.1]] # xent = -sum(y * log(softmax), 1) # log(softmax) = [[-2.9957, -0.0513, -16.1181], # [-2.3026, -0.2231, -2.3026]] # y_true * log(softmax) = [[0, -0.0513, 0], [0, 0, -2.3026]] # xent = [0.0513, 2.3026] # Reduced xent = (0.0513 + 2.3026) / 2 m = tf.keras.metrics.SparseCategoricalCrossentropy() m.update_state([1, 2], [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]) m.result().numpy() 1.1769392 m.reset_states() m.update_state([1, 2], [[0.05, 0.95, 0], [0.1, 0.8, 0.1]], sample_weight=tf.constant([0.3, 0.7])) m.result().numpy() 1.6271976 Usage with compile() API: model.compile( optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.SparseCategoricalCrossentropy()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates metric statistics. y_true and y_pred should have the same shape. Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. sample_weight Optional sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each metric element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis (-1)). Returns Update op.
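The reduced value in the standalone usage can be verified by hand: each example contributes the negative log-probability assigned to its true class:
import numpy as np
# true classes 1 and 2, with predicted probabilities 0.95 and 0.1
np.mean(-np.log([0.95, 0.1]))  # approximately 1.177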
tensorflow.keras.metrics.sparsecategoricalcrossentropy
tf.keras.metrics.SparseTopKCategoricalAccuracy View source on GitHub Computes how often integer targets are in the top K predictions. Inherits From: Mean, Metric, Layer, Module View aliases Main aliases tf.metrics.SparseTopKCategoricalAccuracy Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.SparseTopKCategoricalAccuracy tf.keras.metrics.SparseTopKCategoricalAccuracy( k=5, name='sparse_top_k_categorical_accuracy', dtype=None ) Args k (Optional) Number of top elements to look at for computing accuracy. Defaults to 5. name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.SparseTopKCategoricalAccuracy(k=1) m.update_state([2, 1], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]]) m.result().numpy() 0.5 m.reset_states() m.update_state([2, 1], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]], sample_weight=[0.7, 0.3]) m.result().numpy() 0.3 Usage with compile() API: model.compile( optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.SparseTopKCategoricalAccuracy()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates metric statistics. y_true and y_pred should have the same shape. Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. sample_weight Optional sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each metric element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis (-1)). Returns Update op.
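A further sketch with k=2 on the same inputs; both labels now fall inside the top-2 predictions:
m = tf.keras.metrics.SparseTopKCategoricalAccuracy(k=2)
m.update_state([2, 1], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]])
# row 0: top-2 classes are {1, 2}, and label 2 is among them
# row 1: top-2 classes are {0, 1}, and label 1 is among them
m.result().numpy()  # 1.0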
tensorflow.keras.metrics.sparsetopkcategoricalaccuracy
tf.keras.metrics.sparse_categorical_accuracy View source on GitHub Calculates how often predictions match integer labels. View aliases Main aliases tf.metrics.sparse_categorical_accuracy Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.sparse_categorical_accuracy tf.keras.metrics.sparse_categorical_accuracy( y_true, y_pred ) Standalone usage: y_true = [2, 1] y_pred = [[0.1, 0.9, 0.8], [0.05, 0.95, 0]] m = tf.keras.metrics.sparse_categorical_accuracy(y_true, y_pred) assert m.shape == (2,) m.numpy() array([0., 1.], dtype=float32) You can provide logits of classes as y_pred, since the argmax of logits and probabilities is the same. Args y_true Integer ground truth values. y_pred The prediction values. Returns Sparse categorical accuracy values.
tensorflow.keras.metrics.sparse_categorical_accuracy
tf.keras.metrics.sparse_top_k_categorical_accuracy View source on GitHub Computes how often integer targets are in the top K predictions. View aliases Main aliases tf.metrics.sparse_top_k_categorical_accuracy Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.sparse_top_k_categorical_accuracy tf.keras.metrics.sparse_top_k_categorical_accuracy( y_true, y_pred, k=5 ) Standalone usage: y_true = [2, 1] y_pred = [[0.1, 0.9, 0.8], [0.05, 0.95, 0]] m = tf.keras.metrics.sparse_top_k_categorical_accuracy( y_true, y_pred, k=3) assert m.shape == (2,) m.numpy() array([1., 1.], dtype=float32) Args y_true tensor of true targets. y_pred tensor of predicted targets. k (Optional) Number of top elements to look at for computing accuracy. Defaults to 5. Returns Sparse top K categorical accuracy value.
tensorflow.keras.metrics.sparse_top_k_categorical_accuracy
tf.keras.metrics.SpecificityAtSensitivity View source on GitHub Computes best specificity where sensitivity is >= specified value. Inherits From: Metric, Layer, Module View aliases Main aliases tf.metrics.SpecificityAtSensitivity Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.SpecificityAtSensitivity tf.keras.metrics.SpecificityAtSensitivity( sensitivity, num_thresholds=200, name=None, dtype=None ) Sensitivity measures the proportion of actual positives that are correctly identified as such (tp / (tp + fn)). Specificity measures the proportion of actual negatives that are correctly identified as such (tn / (tn + fp)). This metric creates four local variables, true_positives, true_negatives, false_positives and false_negatives that are used to compute the specificity at the given sensitivity. The threshold for the given sensitivity value is computed and used to evaluate the corresponding specificity. If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values. For additional information about specificity and sensitivity, see https://en.wikipedia.org/wiki/Sensitivity_and_specificity. Args sensitivity A scalar value in range [0, 1]. num_thresholds (Optional) Defaults to 200. The number of thresholds to use for matching the given sensitivity. name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.SpecificityAtSensitivity(0.5) m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8]) m.result().numpy() 0.66666667 m.reset_states() m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8], sample_weight=[1, 1, 2, 2, 2]) m.result().numpy() 0.5 Usage with compile() API: model.compile( optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.SpecificityAtSensitivity(sensitivity=0.5)]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates confusion matrix statistics. Args y_true The ground truth values. y_pred The predicted values. sample_weight Optional weighting of each example. Defaults to 1. Can be a Tensor whose rank is either 0, or the same rank as y_true, and must be broadcastable to y_true. Returns Update op.
tensorflow.keras.metrics.specificityatsensitivity
tf.keras.metrics.SquaredHinge View source on GitHub Computes the squared hinge metric between y_true and y_pred. Inherits From: Mean, Metric, Layer, Module View aliases Main aliases tf.metrics.SquaredHinge Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.SquaredHinge tf.keras.metrics.SquaredHinge( name='squared_hinge', dtype=None ) y_true values are expected to be -1 or 1. If binary (0 or 1) labels are provided we will convert them to -1 or 1. Args name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.SquaredHinge() m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]]) m.result().numpy() 1.86 m.reset_states() m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]], sample_weight=[1, 0]) m.result().numpy() 1.46 Usage with compile() API: model.compile( optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.SquaredHinge()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates metric statistics. y_true and y_pred should have the same shape. Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. sample_weight Optional sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each metric element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis (-1)). Returns Update op.
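The first standalone result can be recomputed directly from the definition, after mapping the binary labels to {-1, 1}:
import numpy as np
y_true = np.array([[0, 1], [0, 0]]) * 2 - 1  # map {0, 1} to {-1, 1}
y_pred = np.array([[0.6, 0.4], [0.4, 0.6]])
np.mean(np.maximum(1. - y_true * y_pred, 0.) ** 2)  # 1.86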
tensorflow.keras.metrics.squaredhinge
tf.keras.metrics.Sum View source on GitHub Computes the (weighted) sum of the given values. Inherits From: Metric, Layer, Module View aliases Main aliases tf.metrics.Sum Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.Sum tf.keras.metrics.Sum( name='sum', dtype=None ) For example, if values is [1, 3, 5, 7] then the sum is 16. If the weights were specified as [1, 1, 0, 0] then the sum would be 4. This metric creates one variable, total, that is used to compute the sum of values. This is ultimately returned as sum. If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values. Args name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.Sum() m.update_state([1, 3, 5, 7]) m.result().numpy() 16.0 Usage with compile() API: model.add_metric(tf.keras.metrics.Sum(name='sum_1')(outputs)) model.compile(optimizer='sgd', loss='mse') Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( values, sample_weight=None ) Accumulates statistics for computing the metric. Args values Per-example value. sample_weight Optional weighting of each example. Defaults to 1. Returns Update op.
tensorflow.keras.metrics.sum
tf.keras.metrics.TopKCategoricalAccuracy View source on GitHub Computes how often targets are in the top K predictions. Inherits From: Mean, Metric, Layer, Module View aliases Main aliases tf.metrics.TopKCategoricalAccuracy Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.TopKCategoricalAccuracy tf.keras.metrics.TopKCategoricalAccuracy( k=5, name='top_k_categorical_accuracy', dtype=None ) Args k (Optional) Number of top elements to look at for computing accuracy. Defaults to 5. name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.TopKCategoricalAccuracy(k=1) m.update_state([[0, 0, 1], [0, 1, 0]], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]]) m.result().numpy() 0.5 m.reset_states() m.update_state([[0, 0, 1], [0, 1, 0]], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]], sample_weight=[0.7, 0.3]) m.result().numpy() 0.3 Usage with compile() API: model.compile(optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.TopKCategoricalAccuracy()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates metric statistics. y_true and y_pred should have the same shape. Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. sample_weight Optional sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each metric element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis (-1)). Returns Update op.
tensorflow.keras.metrics.topkcategoricalaccuracy
tf.keras.metrics.top_k_categorical_accuracy View source on GitHub Computes how often targets are in the top K predictions. View aliases Main aliases tf.metrics.top_k_categorical_accuracy Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.top_k_categorical_accuracy tf.keras.metrics.top_k_categorical_accuracy( y_true, y_pred, k=5 ) Standalone usage: y_true = [[0, 0, 1], [0, 1, 0]] y_pred = [[0.1, 0.9, 0.8], [0.05, 0.95, 0]] m = tf.keras.metrics.top_k_categorical_accuracy(y_true, y_pred, k=3) assert m.shape == (2,) m.numpy() array([1., 1.], dtype=float32) Args y_true The ground truth values. y_pred The prediction values. k (Optional) Number of top elements to look at for computing accuracy. Defaults to 5. Returns Top K categorical accuracy value.
tensorflow.keras.metrics.top_k_categorical_accuracy
tf.keras.metrics.TrueNegatives View source on GitHub Calculates the number of true negatives. Inherits From: Metric, Layer, Module View aliases Main aliases tf.metrics.TrueNegatives Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.TrueNegatives tf.keras.metrics.TrueNegatives( thresholds=None, name=None, dtype=None ) If sample_weight is given, calculates the sum of the weights of true negatives. This metric creates one local variable, accumulator that is used to keep track of the number of true negatives. If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values. Args thresholds (Optional) Defaults to 0.5. A float value or a python list/tuple of float threshold values in [0, 1]. A threshold is compared with prediction values to determine the truth value of predictions (i.e., above the threshold is true, below is false). One metric value is generated for each threshold value. name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.TrueNegatives() m.update_state([0, 1, 0, 0], [1, 1, 0, 0]) m.result().numpy() 2.0 m.reset_states() m.update_state([0, 1, 0, 0], [1, 1, 0, 0], sample_weight=[0, 0, 1, 0]) m.result().numpy() 1.0 Usage with compile() API: model.compile(optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.TrueNegatives()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates the metric statistics. Args y_true The ground truth values. y_pred The predicted values. sample_weight Optional weighting of each example. Defaults to 1. Can be a Tensor whose rank is either 0, or the same rank as y_true, and must be broadcastable to y_true. Returns Update op.
tensorflow.keras.metrics.truenegatives
tf.keras.metrics.TruePositives View source on GitHub Calculates the number of true positives. Inherits From: Metric, Layer, Module View aliases Main aliases tf.metrics.TruePositives Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.TruePositives tf.keras.metrics.TruePositives( thresholds=None, name=None, dtype=None ) If sample_weight is given, calculates the sum of the weights of true positives. This metric creates one local variable, true_positives that is used to keep track of the number of true positives. If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values. Args thresholds (Optional) Defaults to 0.5. A float value or a python list/tuple of float threshold values in [0, 1]. A threshold is compared with prediction values to determine the truth value of predictions (i.e., above the threshold is true, below is false). One metric value is generated for each threshold value. name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.TruePositives() m.update_state([0, 1, 1, 1], [1, 0, 1, 1]) m.result().numpy() 2.0 m.reset_states() m.update_state([0, 1, 1, 1], [1, 0, 1, 1], sample_weight=[0, 0, 1, 0]) m.result().numpy() 1.0 Usage with compile() API: model.compile(optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.TruePositives()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates the metric statistics. Args y_true The ground truth values. y_pred The predicted values. sample_weight Optional weighting of each example. Defaults to 1. Can be a Tensor whose rank is either 0, or the same rank as y_true, and must be broadcastable to y_true. Returns Update op.
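As with the other confusion-matrix metrics, a list of thresholds yields one count per threshold; the values below are invented for illustration:
m = tf.keras.metrics.TruePositives(thresholds=[0.2, 0.8])
m.update_state([0, 1, 1, 1], [0.1, 0.9, 0.6, 0.3])
# 0.9, 0.6 and 0.3 exceed 0.2 (three true positives); only 0.9 exceeds 0.8.
m.result().numpy()  # [3.0, 1.0]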
tensorflow.keras.metrics.truepositives
Module: tf.keras.mixed_precision Keras mixed precision API. See the mixed precision guide to learn how to use the API. Modules experimental module: Public API for tf.keras.mixed_precision.experimental namespace. Classes class LossScaleOptimizer: An optimizer that applies loss scaling to prevent numeric underflow. class Policy: A dtype policy for a Keras layer. Functions global_policy(...): Returns the global dtype policy. set_global_policy(...): Sets the global dtype policy.
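A minimal sketch of the two module-level functions; 'mixed_float16' is one of the policy names described under Policy:
tf.keras.mixed_precision.set_global_policy('mixed_float16')
print(tf.keras.mixed_precision.global_policy())  # the mixed_float16 policy
# Layers created from here on default to float16 compute / float32 variables.
tf.keras.mixed_precision.set_global_policy('float32')  # restore the default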
tensorflow.keras.mixed_precision
Module: tf.keras.mixed_precision.experimental Public API for tf.keras.mixed_precision.experimental namespace. Classes class LossScaleOptimizer: A deprecated optimizer that applies loss scaling. class Policy: A deprecated dtype policy for a Keras layer. Functions get_layer_policy(...): Returns the dtype policy of a layer. global_policy(...): Returns the global dtype policy. set_policy(...): Sets the global dtype policy.
tensorflow.keras.mixed_precision.experimental
tf.keras.mixed_precision.experimental.get_layer_policy Returns the dtype policy of a layer. tf.keras.mixed_precision.experimental.get_layer_policy( layer ) Warning: This function is deprecated. Use tf.keras.layers.Layer.dtype_policy instead. Args layer A tf.keras.layers.Layer. Returns The tf.keras.mixed_precision.Policy of the layer.
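A short sketch of the deprecated function next to its recommended replacement:
layer = tf.keras.layers.Dense(10, dtype='mixed_float16')
tf.keras.mixed_precision.experimental.get_layer_policy(layer)  # deprecated
layer.dtype_policy  # preferred; returns the same Policy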
tensorflow.keras.mixed_precision.experimental.get_layer_policy
tf.keras.mixed_precision.experimental.LossScaleOptimizer View source on GitHub A deprecated optimizer that applies loss scaling. Inherits From: LossScaleOptimizer, Optimizer View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.mixed_precision.experimental.LossScaleOptimizer tf.keras.mixed_precision.experimental.LossScaleOptimizer( optimizer, loss_scale ) Warning: This class is deprecated and will be removed in TensorFlow 2.5. Please use the non-experimental class tf.keras.mixed_precision.LossScaleOptimizer instead. This class is identical to the non-experimental keras.mixed_precision.LossScaleOptimizer except its constructor takes different arguments. For this class (the experimental version), the constructor takes a loss_scale argument. For the non-experimental class, the constructor encodes the loss scaling information in multiple arguments. Note that unlike this class, the non-experimental class does not accept a tf.compat.v1.mixed_precision.LossScale, which is deprecated. If you currently use this class, you should switch to the non-experimental tf.keras.mixed_precision.LossScaleOptimizer instead. We show several examples of converting the use of the experimental class to the equivalent non-experimental class. # In all of the examples below, `opt1` and `opt2` are identical opt1 = tf.keras.mixed_precision.experimental.LossScaleOptimizer( tf.keras.optimizers.SGD(), loss_scale='dynamic') opt2 = tf.keras.mixed_precision.LossScaleOptimizer( tf.keras.optimizers.SGD()) assert opt1.get_config() == opt2.get_config() opt1 = tf.keras.mixed_precision.experimental.LossScaleOptimizer( tf.keras.optimizers.SGD(), loss_scale=123) # dynamic=False indicates to use fixed loss scaling. initial_scale=123 # refers to the initial loss scale, which is the single fixed loss scale # when dynamic=False. opt2 = tf.keras.mixed_precision.LossScaleOptimizer( tf.keras.optimizers.SGD(), dynamic=False, initial_scale=123) assert opt1.get_config() == opt2.get_config() loss_scale = tf.compat.v1.mixed_precision.experimental.DynamicLossScale( initial_loss_scale=2048, increment_period=500) opt1 = tf.keras.mixed_precision.experimental.LossScaleOptimizer( tf.keras.optimizers.SGD(), loss_scale=loss_scale) opt2 = tf.keras.mixed_precision.LossScaleOptimizer( tf.keras.optimizers.SGD(), initial_scale=2048, dynamic_growth_steps=500) assert opt1.get_config() == opt2.get_config() Make sure to also switch from this class to the non-experimental class in isinstance checks, if you have any. If you do not do this, your model may run into hard-to-debug issues, as the experimental LossScaleOptimizer subclasses the non-experimental LossScaleOptimizer, but not vice versa. It is safe to switch isinstance checks to the non-experimental LossScaleOptimizer even before using the non-experimental LossScaleOptimizer. opt1 = tf.keras.mixed_precision.experimental.LossScaleOptimizer( tf.keras.optimizers.SGD(), loss_scale='dynamic') # The experimental class subclasses the non-experimental class isinstance(opt1, tf.keras.mixed_precision.LossScaleOptimizer) True opt2 = tf.keras.mixed_precision.LossScaleOptimizer( tf.keras.optimizers.SGD()) # The non-experimental class does NOT subclass the experimental class. isinstance(opt2, tf.keras.mixed_precision.experimental.LossScaleOptimizer) False Args optimizer The Optimizer instance to wrap. loss_scale The loss scale to scale the loss and gradients.
This can either be an int/float to use a fixed loss scale, the string "dynamic" to use dynamic loss scaling, or an instance of a LossScale. The string "dynamic" is equivalent to passing DynamicLossScale(), and passing an int/float is equivalent to passing a FixedLossScale with the given loss scale. If a DynamicLossScale is passed, DynamicLossScale.multiplier must be 2 (the default). Raises ValueError in case of any invalid argument. Attributes dynamic Bool indicating whether dynamic loss scaling is used. dynamic_counter The number of steps since the loss scale was last increased or decreased. This is None if LossScaleOptimizer.dynamic is False. The counter is incremented every step. Once it reaches LossScaleOptimizer.dynamic_growth_steps, the loss scale will be doubled and the counter will be reset back to zero. If nonfinite gradients are encountered, the loss scale will be halved and the counter will be reset back to zero. dynamic_growth_steps The number of steps it takes to increase the loss scale. This is None if LossScaleOptimizer.dynamic is False. Every dynamic_growth_steps consecutive steps with finite gradients, the loss scale is increased. initial_scale The initial loss scale. If LossScaleOptimizer.dynamic is False, this is the same number as LossScaleOptimizer.loss_scale, as the loss scale never changes. inner_optimizer The optimizer that this LossScaleOptimizer is wrapping. loss_scale The current loss scale as a float32 scalar tensor. Methods get_scaled_loss View source get_scaled_loss( loss ) Scales the loss by the loss scale. This method is only needed if you compute gradients manually, e.g. with tf.GradientTape. In that case, call this method to scale the loss before passing the loss to tf.GradientTape. If you use LossScaleOptimizer.minimize or LossScaleOptimizer.get_gradients, loss scaling is automatically applied and this method is unneeded. If this method is called, get_unscaled_gradients should also be called. See the tf.keras.mixed_precision.LossScaleOptimizer doc for an example. Args loss The loss, which will be multiplied by the loss scale. Can either be a tensor or a callable returning a tensor. Returns loss multiplied by LossScaleOptimizer.loss_scale. get_unscaled_gradients View source get_unscaled_gradients( grads ) Unscales the gradients by the loss scale. This method is only needed if you compute gradients manually, e.g. with tf.GradientTape. In that case, call this method to unscale the gradients after computing them with tf.GradientTape. If you use LossScaleOptimizer.minimize or LossScaleOptimizer.get_gradients, loss scaling is automatically applied and this method is unneeded. If this method is called, get_scaled_loss should also be called. See the tf.keras.mixed_precision.LossScaleOptimizer doc for an example. Args grads A list of tensors, each which will be divided by the loss scale. Can have None values, which are ignored. Returns A new list the same size as grads, where every non-None value in grads is divided by LossScaleOptimizer.loss_scale.
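A minimal custom-training sketch showing where get_scaled_loss and get_unscaled_gradients fit; it uses the recommended non-experimental class, and the tiny model and data are placeholders:
import tensorflow as tf
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
opt = tf.keras.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.SGD())
x, y = tf.ones((4, 3)), tf.zeros((4, 1))
with tf.GradientTape() as tape:
    loss = tf.reduce_mean(tf.square(model(x) - y))
    scaled_loss = opt.get_scaled_loss(loss)  # scale before differentiating
scaled_grads = tape.gradient(scaled_loss, model.trainable_variables)
grads = opt.get_unscaled_gradients(scaled_grads)  # unscale before applying
opt.apply_gradients(zip(grads, model.trainable_variables))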
tensorflow.keras.mixed_precision.experimental.lossscaleoptimizer
tf.keras.mixed_precision.experimental.Policy View source on GitHub A deprecated dtype policy for a Keras layer. Inherits From: Policy tf.keras.mixed_precision.experimental.Policy( name, loss_scale='auto' ) Warning: This class is now deprecated and will be removed soon. Please use the non-experimental class tf.keras.mixed_precision.Policy instead. The difference between this class and the non-experimental class is that this class has a loss_scale field and the non-experimental class does not. The loss scale is only used by tf.keras.Model.compile, which automatically wraps the optimizer with a LossScaleOptimizer if the optimizer is not already a LossScaleOptimizer. For the non-experimental Policy class, Model.compile instead wraps the optimizer with a LossScaleOptimizer if Policy.name is "mixed_float16". When deserializing objects with an experimental policy using functions like tf.keras.utils.deserialize_keras_object, the policy will be deserialized as the non-experimental tf.keras.mixed_precision.Policy, and the loss scale will silently be dropped. This is so that SavedModels that are generated with an experimental policy can be restored after the experimental policy is removed. Args name A string. Can be one of the following values: Any dtype name, such as 'float32' or 'float64'. Both the variable and compute dtypes will be that dtype. 'mixed_float16' or 'mixed_bfloat16': The compute dtype is float16 or bfloat16, while the variable dtype is float32. With 'mixed_float16', a dynamic loss scale is used. These policies are used for mixed precision training. loss_scale A tf.compat.v1.mixed_precision.LossScale, an int (which uses a FixedLossScale), the string "dynamic" (which uses a DynamicLossScale), or None (which uses no loss scale). Defaults to "auto". In the "auto" case: 1) if name is "mixed_float16", then use loss_scale="dynamic". 2) otherwise, do not use a loss scale. Only tf.keras.Models, not layers, use the loss scale, and it is only used during Model.fit, Model.train_on_batch, and other similar methods. Attributes compute_dtype The compute dtype of this policy. This is the dtype layers will do their computations in. Typically layers output tensors with the compute dtype as well. Note that even if the compute dtype is float16 or bfloat16, hardware devices may not do individual adds, multiplies, and other fundamental operations in float16 or bfloat16, but instead may do some of them in float32 for numeric stability. The compute dtype is the dtype of the inputs and outputs of the TensorFlow ops that the layer executes. Internally, many TensorFlow ops will do certain internal calculations in float32 or some other device-internal intermediate format with higher precision than float16/bfloat16, to increase numeric stability. For example, a tf.keras.layers.Dense layer, when run on a GPU with a float16 compute dtype, will pass float16 inputs to tf.linalg.matmul. But tf.linalg.matmul will use float32 intermediate math. The performance benefit of float16 is still apparent, due to increased memory bandwidth and the fact that modern GPUs have specialized hardware for computing matmuls on float16 inputs while still keeping intermediate computations in float32. loss_scale Returns the loss scale of this Policy. name Returns the name of this policy. variable_dtype The variable dtype of this policy. This is the dtype layers will create their variables in, unless a layer explicitly chooses a different dtype.
If this is different than Policy.compute_dtype, Layers will cast variables to the compute dtype to avoid type errors. Variable regularizers are run in the variable dtype, not the compute dtype. Methods from_config View source @classmethod from_config( config, custom_objects=None ) get_config View source get_config()
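A brief sketch of the compute/variable dtype split, using the recommended non-experimental Policy class:
policy = tf.keras.mixed_precision.Policy('mixed_float16')
layer = tf.keras.layers.Dense(10, dtype=policy)
layer.compute_dtype   # 'float16': computations and outputs
layer.variable_dtype  # 'float32': weights are kept in full precision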
tensorflow.keras.mixed_precision.experimental.policy