tf.math.special.bessel_j1 Computes the Bessel j1 function of x element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.special.bessel_j1
tf.math.special.bessel_j1(
x, name=None
)
Bessel function of the first kind of order 1.
tf.math.special.bessel_j1([0.5, 1., 2., 4.]).numpy()
array([ 0.24226846, 0.44005059, 0.57672481, -0.06604333], dtype=float32)
Args
x A Tensor or SparseTensor. Must be one of the following types: half, float32, float64.
name A name for the operation (optional).
Returns A Tensor or SparseTensor, respectively. Has the same type as x.
Scipy Compatibility Equivalent to scipy.special.j1 | tensorflow.math.special.bessel_j1 |
tf.math.special.bessel_k0 Computes the Bessel k0 function of x element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.special.bessel_k0
tf.math.special.bessel_k0(
x, name=None
)
Modified Bessel function of the second kind of order 0. It is preferable to use the numerically more stable function k0e(x) instead.
tf.math.special.bessel_k0([0.5, 1., 2., 4.]).numpy()
array([0.92441907, 0.42102444, 0.11389387, 0.01115968], dtype=float32)
Args
x A Tensor or SparseTensor. Must be one of the following types: half, float32, float64.
name A name for the operation (optional).
Returns A Tensor or SparseTensor, respectively. Has the same type as x.
Scipy Compatibility Equivalent to scipy.special.k0 | tensorflow.math.special.bessel_k0 |
tf.math.special.bessel_k0e Computes the Bessel k0e function of x element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.special.bessel_k0e
tf.math.special.bessel_k0e(
x, name=None
)
Exponentially scaled modified Bessel function of the second kind of order 0, equivalent to exp(x) * k0(x).
tf.math.special.bessel_k0e([0.5, 1., 2., 4.]).numpy()
array([1.52410939, 1.14446308, 0.84156822, 0.60929767], dtype=float32)
Args
x A Tensor or SparseTensor. Must be one of the following types: half, float32, float64.
name A name for the operation (optional).
Returns A Tensor or SparseTensor, respectively. Has the same type as x.
Scipy Compatibility Equivalent to scipy.special.k0e | tensorflow.math.special.bessel_k0e |
tf.math.special.bessel_k1 Computes the Bessel k1 function of x element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.special.bessel_k1
tf.math.special.bessel_k1(
x, name=None
)
Modified Bessel function of the second kind of order 1. It is preferable to use the numerically more stable function k1e(x) instead.
tf.math.special.bessel_k1([0.5, 1., 2., 4.]).numpy()
array([1.65644112, 0.60190723, 0.13986588, 0.0124835 ], dtype=float32)
Args
x A Tensor or SparseTensor. Must be one of the following types: half, float32, float64.
name A name for the operation (optional).
Returns A Tensor or SparseTensor, respectively. Has the same type as x.
Scipy Compatibility Equivalent to scipy.special.k1 | tensorflow.math.special.bessel_k1 |
tf.math.special.bessel_k1e Computes the Bessel k1e function of x element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.special.bessel_k1e
tf.math.special.bessel_k1e(
x, name=None
)
Exponentially scaled modified Bessel function of the second kind of order 1, equivalent to exp(x) * k1(x).
tf.math.special.bessel_k1e([0.5, 1., 2., 4.]).numpy()
array([2.73100971, 1.63615349, 1.03347685, 0.68157595], dtype=float32)
Args
x A Tensor or SparseTensor. Must be one of the following types: half, float32, float64.
name A name for the operation (optional).
Returns A Tensor or SparseTensor, respectively. Has the same type as x.
Scipy Compatibility Equivalent to scipy.special.k1e | tensorflow.math.special.bessel_k1e |
tf.math.special.bessel_y0 Computes the Bessel y0 function of x element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.special.bessel_y0
tf.math.special.bessel_y0(
x, name=None
)
Bessel function of the second kind of order 0.
tf.math.special.bessel_y0([0.5, 1., 2., 4.]).numpy()
array([-0.44451873, 0.08825696, 0.51037567, -0.01694074], dtype=float32)
Args
x A Tensor or SparseTensor. Must be one of the following types: half, float32, float64.
name A name for the operation (optional).
Returns A Tensor or SparseTensor, respectively. Has the same type as x.
Scipy Compatibility Equivalent to scipy.special.y0 | tensorflow.math.special.bessel_y0 |
tf.math.special.bessel_y1 Computes the Bessel y1 function of x element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.special.bessel_y1
tf.math.special.bessel_y1(
x, name=None
)
Bessel function of the second kind of order 1.
tf.math.special.bessel_y1([0.5, 1., 2., 4.]).numpy()
array([-1.47147239, -0.78121282, -0.10703243, 0.39792571], dtype=float32)
Args
x A Tensor or SparseTensor. Must be one of the following types: half, float32, float64.
name A name for the operation (optional).
Returns A Tensor or SparseTensor, respectively. Has the same type as x.
Scipy Compatibility Equivalent to scipy.special.y1 | tensorflow.math.special.bessel_y1 |
tf.math.special.dawsn Computes Dawson's integral of x element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.special.dawsn
tf.math.special.dawsn(
x, name=None
)
Dawson's integral is defined as exp(-x**2) times the integral of exp(t**2) from 0 to x, with the domain of definition all real numbers. Dawson's function is odd.
tf.math.special.dawsn([-1., -0.5, 0.5, 1.]).numpy()
array([-0.5380795, -0.4244364, 0.4244364, 0.5380795], dtype=float32)
This implementation is based on the Cephes math library.
Args
x A Tensor or SparseTensor. Must be one of the following types: float32, float64.
name A name for the operation (optional).
Returns A Tensor or SparseTensor, respectively. Has the same type as x.
Scipy Compatibility Equivalent to scipy.special.dawsn | tensorflow.math.special.dawsn |
tf.math.special.expint Computes the Exponential integral of x element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.special.expint
tf.math.special.expint(
x, name=None
)
The Exponential integral is defined as the integral of exp(t) / t from -inf to x, with the domain of definition all positive real numbers.
tf.math.special.expint([1., 1.1, 2.1, 4.1]).numpy()
array([ 1.8951179, 2.1673784, 5.3332353, 21.048464], dtype=float32)
This implementation is based on the Cephes math library.
Args
x A Tensor or SparseTensor. Must be one of the following types: float32, float64.
name A name for the operation (optional).
Returns A Tensor or SparseTensor, respectively. Has the same type as x.
Scipy Compatibility Equivalent to scipy.special.expi | tensorflow.math.special.expint |
tf.math.special.fresnel_cos Computes Fresnel's cosine integral of x element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.special.fresnel_cos
tf.math.special.fresnel_cos(
x, name=None
)
The Fresnel cosine integral is defined as the integral of cos(t^2) from 0 to x, with the domain of definition all real numbers. The Fresnel cosine integral is odd.
tf.math.special.fresnel_cos([-1., -0.1, 0.1, 1.]).numpy()
array([-0.7798934 , -0.09999753, 0.09999753, 0.7798934 ], dtype=float32)
This implementation is based on the Cephes math library.
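As a cross-check (a hedged sketch; scipy.special.fresnel returns the pair (S(x), C(x)), so the cosine integral is its second output):
import scipy.special
s, c = scipy.special.fresnel(1.0)
# c is approximately 0.7798934, matching tf.math.special.fresnel_cos(1.) above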
Args
x A Tensor or SparseTensor. Must be one of the following types: float32, float64.
name A name for the operation (optional).
Returns A Tensor or SparseTensor, respectively. Has the same type as x.
Scipy Compatibility Equivalent to scipy.special.fresnel second output. | tensorflow.math.special.fresnel_cos |
tf.math.special.fresnel_sin Computes Fresnel's sine integral of x element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.special.fresnel_sin
tf.math.special.fresnel_sin(
x, name=None
)
The Fresnel sine integral is defined as the integral of sin(t^2) from 0 to x, with the domain of definition all real numbers.
tf.math.special.fresnel_sin([-1., -0.1, 0.1, 1.]).numpy()
array([-0.43825912, -0.00052359, 0.00052359, 0.43825912], dtype=float32)
This implementation is based on the Cephes math library.
Args
x A Tensor or SparseTensor. Must be one of the following types: float32, float64.
name A name for the operation (optional).
Returns A Tensor or SparseTensor, respectively. Has the same type as x.
Scipy Compatibility Equivalent to scipy.special.fresnel first output. | tensorflow.math.special.fresnel_sin |
tf.math.special.spence Computes Spence's integral of x element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.special.spence
tf.math.special.spence(
x, name=None
)
Spence's integral is defined as the integral of log(t) / (1 - t) from 1 to x, with the domain of definition all non-negative real numbers.
tf.math.special.spence([0.5, 1., 2., 3.]).numpy()
array([ 0.58224034, 0. , -0.82246685, -1.4367464], dtype=float32)
This implementation is based on the Cephes math library.
Args
x A Tensor or SparseTensor. Must be one of the following types: float32, float64.
name A name for the operation (optional).
Returns A Tensor or SparseTensor, respectively. Has the same type as x.
Scipy Compatibility Equivalent to scipy.special.spence | tensorflow.math.special.spence |
tf.math.sqrt View source on GitHub Computes element-wise square root of the input tensor. View aliases Main aliases
tf.sqrt Compat aliases for migration See Migration guide for more details. tf.compat.v1.math.sqrt, tf.compat.v1.sqrt
tf.math.sqrt(
x, name=None
)
Note: This operation does not support integer types.
x = tf.constant([[4.0], [16.0]])
tf.sqrt(x)
<tf.Tensor: shape=(2, 1), dtype=float32, numpy=
array([[2.],
[4.]], dtype=float32)>
y = tf.constant([[-4.0], [16.0]])
tf.sqrt(y)
<tf.Tensor: shape=(2, 1), dtype=float32, numpy=
array([[nan],
[ 4.]], dtype=float32)>
z = tf.constant([[-1.0], [16.0]], dtype=tf.complex128)
tf.sqrt(z)
<tf.Tensor: shape=(2, 1), dtype=complex128, numpy=
array([[0.0+1.j],
[4.0+0.j]])>
Note: In order to support complex inputs, provide an input tensor of complex64 or complex128.
Args
x A tf.Tensor of type bfloat16, half, float32, float64, complex64, complex128
name A name for the operation (optional).
Returns A tf.Tensor of same size, type and sparsity as x. If x is a SparseTensor, returns SparseTensor(x.indices, tf.math.sqrt(x.values, ...), x.dense_shape) | tensorflow.math.sqrt |
tf.math.square Computes square of x element-wise. View aliases Main aliases
tf.square Compat aliases for migration See Migration guide for more details. tf.compat.v1.math.square, tf.compat.v1.square
tf.math.square(
x, name=None
)
I.e., \(y = x * x = x^2\).
tf.math.square([-2., 0., 3.])
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([4., 0., 9.], dtype=float32)>
Args
x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int8, int16, int32, int64, complex64, complex128.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. If x is a SparseTensor, returns SparseTensor(x.indices, tf.math.square(x.values, ...), x.dense_shape) | tensorflow.math.square |
tf.math.squared_difference Returns conj(x - y)(x - y) element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.squared_difference, tf.compat.v1.squared_difference
tf.math.squared_difference(
x, y, name=None
)
Note: math.squared_difference supports broadcasting. More about broadcasting here
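For instance, a minimal sketch of the broadcasting behavior (for real inputs the result is simply (x - y)**2):
x = tf.constant([[1.], [2.]])  # shape (2, 1)
y = tf.constant([3., 4.])  # shape (2,)
tf.math.squared_difference(x, y)  # broadcast to shape (2, 2)
# ==> [[4., 9.],
#      [1., 4.]]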
Args
x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int32, int64, complex64, complex128.
y A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | tensorflow.math.squared_difference |
tf.math.subtract View source on GitHub Returns x - y element-wise. View aliases Main aliases
tf.subtract Compat aliases for migration See Migration guide for more details. tf.compat.v1.math.subtract, tf.compat.v1.subtract
tf.math.subtract(
x, y, name=None
)
Note: Subtract supports broadcasting. More about broadcasting here
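For instance, a minimal sketch of broadcasting a scalar against a matrix:
x = tf.constant([[1, 2], [3, 4]])
tf.math.subtract(x, 1)  # the scalar 1 is broadcast against every element
# ==> [[0, 1],
#      [2, 3]]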
Args
x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128, uint32.
y A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | tensorflow.math.subtract |
tf.math.tan Computes tan of x element-wise. View aliases Main aliases
tf.tan Compat aliases for migration See Migration guide for more details. tf.compat.v1.math.tan, tf.compat.v1.tan
tf.math.tan(
x, name=None
)
Given an input tensor, this function computes the tangent of every element in the tensor. Input range is (-inf, inf) and output range is (-inf, inf). If the input lies outside the boundary, nan is returned.
x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 200, 10000, float("inf")])
tf.math.tan(x) ==> [nan 0.45231566 -0.5463025 1.5574077 2.572152 -1.7925274 0.32097113 nan]
Args
x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int8, int16, int32, int64, complex64, complex128.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | tensorflow.math.tan |
tf.math.tanh Computes hyperbolic tangent of x element-wise. View aliases Main aliases
tf.nn.tanh, tf.tanh Compat aliases for migration See Migration guide for more details. tf.compat.v1.math.tanh, tf.compat.v1.nn.tanh, tf.compat.v1.tanh
tf.math.tanh(
x, name=None
)
Given an input tensor, this function computes hyperbolic tangent of every element in the tensor. Input range is [-inf, inf] and output range is [-1,1].
x = tf.constant([-float("inf"), -5, -0.5, 1, 1.2, 2, 3, float("inf")])
tf.math.tanh(x)
<tf.Tensor: shape=(8,), dtype=float32, numpy=
array([-1. , -0.99990916, -0.46211717, 0.7615942 , 0.8336547 ,
0.9640276 , 0.9950547 , 1. ], dtype=float32)>
Args
x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. If x is a SparseTensor, returns SparseTensor(x.indices, tf.math.tanh(x.values, ...), x.dense_shape) | tensorflow.math.tanh |
tf.math.top_k View source on GitHub Finds values and indices of the k largest entries for the last dimension. View aliases Main aliases
tf.nn.top_k Compat aliases for migration See Migration guide for more details. tf.compat.v1.math.top_k, tf.compat.v1.nn.top_k
tf.math.top_k(
input, k=1, sorted=True, name=None
)
If the input is a vector (rank=1), finds the k largest entries in the vector and outputs their values and indices as vectors. Thus values[j] is the j-th largest entry in input, and its index is indices[j]. For matrices (resp. higher rank input), computes the top k entries in each row (resp. vector along the last dimension). Thus, values.shape = indices.shape = input.shape[:-1] + [k]
If two elements are equal, the lower-index element appears first.
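For instance, a minimal sketch:
values, indices = tf.math.top_k(tf.constant([1., 3., 2., 4.]), k=2)
# values  ==> [4., 3.]  (the two largest entries, in descending order)
# indices ==> [3, 1]  (their positions within the input)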
Args
input 1-D or higher Tensor with last dimension at least k.
k 0-D int32 Tensor. Number of top elements to look for along the last dimension (along each row for matrices).
sorted If true the resulting k elements will be sorted by the values in descending order.
name Optional name for the operation.
Returns
values The k largest elements along each last dimensional slice.
indices The indices of values within the last dimension of input. | tensorflow.math.top_k |
tf.math.truediv View source on GitHub Divides x / y elementwise (using Python 3 division operator semantics). View aliases Main aliases
tf.truediv Compat aliases for migration See Migration guide for more details. tf.compat.v1.math.truediv, tf.compat.v1.truediv
tf.math.truediv(
x, y, name=None
)
Note: Prefer using the Tensor division operator or tf.divide which obey Python division operator semantics.
This function forces Python 3 division operator semantics where all integer arguments are cast to floating types first. This op is generated by normal x / y division in Python 3 and in Python 2.7 with from __future__ import division. If you want integer division that rounds down, use x // y or tf.math.floordiv. x and y must have the same numeric type. If the inputs are floating point, the output will have the same type. If the inputs are integral, the inputs are cast to float32 for int8 and int16 and float64 for int32 and int64 (matching the behavior of Numpy).
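For instance, a minimal sketch of the casting rule described above (int32 inputs are cast to float64):
tf.math.truediv(tf.constant([1, 2, 3]), tf.constant(2))
# ==> [0.5, 1. , 1.5], dtype=float64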
Args
x Tensor numerator of numeric type.
y Tensor denominator of numeric type.
name A name for the operation (optional).
Returns x / y evaluated in floating point.
Raises
TypeError If x and y have different dtypes. | tensorflow.math.truediv |
tf.math.unsorted_segment_max Computes the maximum along segments of a tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.unsorted_segment_max, tf.compat.v1.unsorted_segment_max
tf.math.unsorted_segment_max(
data, segment_ids, num_segments, name=None
)
Read the section on segmentation for an explanation of segments. This operator is similar to tf.math.unsorted_segment_sum. Instead of computing the sum over segments, it computes the maximum such that: \(output_i = \max_{j...} data[j...]\) where the max is over tuples j... such that segment_ids[j...] == i. If the maximum is empty for a given segment ID i, it outputs the smallest possible value for the specific numeric type, output[i] = numeric_limits<T>::lowest(). If the given segment ID i is negative, then the corresponding value is dropped, and will not be included in the result. For example:
c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])
tf.unsorted_segment_max(c, tf.constant([0, 1, 0]), num_segments=2)
# ==> [[ 4, 3, 3, 4],
# [5, 6, 7, 8]]
Args
data A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
segment_ids A Tensor. Must be one of the following types: int32, int64. A tensor whose shape is a prefix of data.shape.
num_segments A Tensor. Must be one of the following types: int32, int64.
name A name for the operation (optional).
Returns A Tensor. Has the same type as data. | tensorflow.math.unsorted_segment_max |
tf.math.unsorted_segment_mean View source on GitHub Computes the mean along segments of a tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.unsorted_segment_mean, tf.compat.v1.unsorted_segment_mean
tf.math.unsorted_segment_mean(
data, segment_ids, num_segments, name=None
)
Read the section on segmentation for an explanation of segments. This operator is similar to the tf.math.unsorted_segment_sum operator. Instead of computing the sum over segments, it computes the mean of all entries belonging to a segment such that: \(output_i = 1/N_i \sum_{j...} data[j...]\) where the sum is over tuples j... such that segment_ids[j...] == i with \(N_i\) being the number of occurrences of id \(i\). If there is no entry for a given segment ID i, it outputs 0. If the given segment ID i is negative, the value is dropped and will not be added to the sum of the segment.
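For instance, a minimal sketch mirroring the examples of the other unsorted segment ops (rows 0 and 2 fall into segment 0 and are averaged):
c = tf.constant([[1., 2., 3., 4.], [5., 6., 7., 8.], [4., 3., 2., 1.]])
tf.math.unsorted_segment_mean(c, tf.constant([0, 1, 0]), num_segments=2)
# ==> [[2.5, 2.5, 2.5, 2.5],
#      [5., 6., 7., 8.]]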
Args
data A Tensor with floating point or complex dtype.
segment_ids An integer tensor whose shape is a prefix of data.shape.
num_segments An integer scalar Tensor. The number of distinct segment IDs.
name A name for the operation (optional).
Returns A Tensor. Has same shape as data, except for the first segment_ids.rank dimensions, which are replaced with a single dimension which has size num_segments. | tensorflow.math.unsorted_segment_mean |
tf.math.unsorted_segment_min Computes the minimum along segments of a tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.unsorted_segment_min, tf.compat.v1.unsorted_segment_min
tf.math.unsorted_segment_min(
data, segment_ids, num_segments, name=None
)
Read the section on segmentation for an explanation of segments. This operator is similar to tf.math.unsorted_segment_sum. Instead of computing the sum over segments, it computes the minimum such that: \(output_i = \min_{j...} data[j...]\) where the min is over tuples j... such that segment_ids[j...] == i. If the minimum is empty for a given segment ID i, it outputs the largest possible value for the specific numeric type, output[i] = numeric_limits<T>::max(). For example:
c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])
tf.unsorted_segment_min(c, tf.constant([0, 1, 0]), num_segments=2)
# ==> [[ 1, 2, 2, 1],
# [5, 6, 7, 8]]
If the given segment ID i is negative, then the corresponding value is dropped, and will not be included in the result.
Args
data A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
segment_ids A Tensor. Must be one of the following types: int32, int64. A tensor whose shape is a prefix of data.shape.
num_segments A Tensor. Must be one of the following types: int32, int64.
name A name for the operation (optional).
Returns A Tensor. Has the same type as data. | tensorflow.math.unsorted_segment_min |
tf.math.unsorted_segment_prod Computes the product along segments of a tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.unsorted_segment_prod, tf.compat.v1.unsorted_segment_prod
tf.math.unsorted_segment_prod(
data, segment_ids, num_segments, name=None
)
Read the section on segmentation for an explanation of segments. This operator is similar to tf.math.unsorted_segment_sum. Instead of computing the sum over segments, it computes the product of all entries belonging to a segment such that: \(output_i = \prod_{j...} data[j...]\) where the product is over tuples j... such that segment_ids[j...] == i. For example:
c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])
tf.unsorted_segment_prod(c, tf.constant([0, 1, 0]), num_segments=2)
# ==> [[ 4, 6, 6, 4],
# [5, 6, 7, 8]]
If there is no entry for a given segment ID i, it outputs 1. If the given segment ID i is negative, then the corresponding value is dropped, and will not be included in the result.
Args
data A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64.
segment_ids A Tensor. Must be one of the following types: int32, int64. A tensor whose shape is a prefix of data.shape.
num_segments A Tensor. Must be one of the following types: int32, int64.
name A name for the operation (optional).
Returns A Tensor. Has the same type as data. | tensorflow.math.unsorted_segment_prod |
tf.math.unsorted_segment_sqrt_n View source on GitHub Computes the sum along segments of a tensor divided by the sqrt(N). View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.unsorted_segment_sqrt_n, tf.compat.v1.unsorted_segment_sqrt_n
tf.math.unsorted_segment_sqrt_n(
data, segment_ids, num_segments, name=None
)
Read the section on segmentation for an explanation of segments. This operator is similar to the tf.math.unsorted_segment_sum operator. In addition to computing the sum over segments, it divides the result by sqrt(N): \(output_i = 1/\sqrt{N_i} \sum_{j...} data[j...]\) where the sum is over tuples j... such that segment_ids[j...] == i with \(N_i\) being the number of occurrences of id \(i\). If there is no entry for a given segment ID i, it outputs 0. Note that this op only supports floating point and complex dtypes, due to tf.sqrt only supporting these types. If the given segment ID i is negative, the value is dropped and will not be added to the sum of the segment.
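For instance, a minimal sketch (segment 0 contains two rows, so its sums [5, 5, 5, 5] are divided by sqrt(2)):
c = tf.constant([[1., 2., 3., 4.], [5., 6., 7., 8.], [4., 3., 2., 1.]])
tf.math.unsorted_segment_sqrt_n(c, tf.constant([0, 1, 0]), num_segments=2)
# ==> [[3.5355339, 3.5355339, 3.5355339, 3.5355339],
#      [5., 6., 7., 8.]]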
Args
data A Tensor with floating point or complex dtype.
segment_ids An integer tensor whose shape is a prefix of data.shape.
num_segments An integer scalar Tensor. The number of distinct segment IDs.
name A name for the operation (optional).
Returns A Tensor. Has same shape as data, except for the first segment_ids.rank dimensions, which are replaced with a single dimension which has size num_segments. | tensorflow.math.unsorted_segment_sqrt_n |
tf.math.unsorted_segment_sum Computes the sum along segments of a tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.unsorted_segment_sum, tf.compat.v1.unsorted_segment_sum
tf.math.unsorted_segment_sum(
data, segment_ids, num_segments, name=None
)
Read the section on segmentation for an explanation of segments. Computes a tensor such that \(output[i] = \sum_{j...} data[j...]\) where the sum is over tuples j... such that segment_ids[j...] == i. Unlike SegmentSum, segment_ids need not be sorted and need not cover all values in the full range of valid values. If the sum is empty for a given segment ID i, output[i] = 0. If the given segment ID i is negative, the value is dropped and will not be added to the sum of the segment. num_segments should equal the number of distinct segment IDs. For example:
c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])
tf.unsorted_segment_sum(c, tf.constant([0, 1, 0]), num_segments=2)
# ==> [[ 5, 5, 5, 5],
# [5, 6, 7, 8]]
Args
data A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64.
segment_ids A Tensor. Must be one of the following types: int32, int64. A tensor whose shape is a prefix of data.shape.
num_segments A Tensor. Must be one of the following types: int32, int64.
name A name for the operation (optional).
Returns A Tensor. Has the same type as data. | tensorflow.math.unsorted_segment_sum |
tf.math.xdivy Returns 0 if x == 0, and x / y otherwise, elementwise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.xdivy
tf.math.xdivy(
x, y, name=None
)
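For instance, a minimal sketch (the x == 0 case short-circuits the division, so 0/0 yields 0 rather than nan):
tf.math.xdivy(0., 0.)  # ==> 0.0 (plain 0. / 0. would give nan)
tf.math.xdivy(1., 2.)  # ==> 0.5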
Args
x A Tensor. Must be one of the following types: half, float32, float64, complex64, complex128.
y A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | tensorflow.math.xdivy |
tf.math.xlog1py Compute x * log1p(y). View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.xlog1py
tf.math.xlog1py(
x, y, name=None
)
Given x and y, compute x * log1p(y). This function safely returns zero when x = 0, no matter what the value of y is. Example:
tf.math.xlog1py(0., 1.)
<tf.Tensor: shape=(), dtype=float32, numpy=0.>
tf.math.xlog1py(1., 1.)
<tf.Tensor: shape=(), dtype=float32, numpy=0.6931472>
tf.math.xlog1py(2., 2.)
<tf.Tensor: shape=(), dtype=float32, numpy=2.1972246>
tf.math.xlog1py(0., -1.)
<tf.Tensor: shape=(), dtype=float32, numpy=0.>
Args
x A tf.Tensor of type bfloat16, half, float32, float64, complex64, complex128
y A tf.Tensor of type bfloat16, half, float32, float64, complex64, complex128
name A name for the operation (optional).
Returns x * log1p(y).
Scipy Compatibility Equivalent to scipy.special.xlog1py | tensorflow.math.xlog1py |
tf.math.xlogy Returns 0 if x == 0, and x * log(y) otherwise, elementwise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.xlogy
tf.math.xlogy(
x, y, name=None
)
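For instance, a minimal sketch (the x == 0 case short-circuits the log, so xlogy(0., 0.) is 0 rather than nan):
tf.math.xlogy(0., 0.)  # ==> 0.0
tf.math.xlogy(2., 3.)  # ==> 2.1972246, i.e. 2 * log(3)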
Args
x A Tensor. Must be one of the following types: half, float32, float64, complex64, complex128.
y A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | tensorflow.math.xlogy |
tf.math.zero_fraction View source on GitHub Returns the fraction of zeros in value. View aliases Main aliases
tf.nn.zero_fraction Compat aliases for migration See Migration guide for more details. tf.compat.v1.math.zero_fraction, tf.compat.v1.nn.zero_fraction
tf.math.zero_fraction(
value, name=None
)
If value is empty, the result is nan. This is useful in summaries to measure and report sparsity. For example:
z = tf.nn.relu(...)
summ = tf.compat.v1.summary.scalar('sparsity', tf.nn.zero_fraction(z))
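For instance, a minimal sketch:
tf.math.zero_fraction(tf.constant([1., 0., 0., 1.]))  # ==> 0.5 (two of the four entries are zero)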
Args
value A tensor of numeric type.
name A name for the operation (optional).
Returns The fraction of zeros in value, with type float32. | tensorflow.math.zero_fraction |
tf.math.zeta Compute the Hurwitz zeta function \(\zeta(x, q)\). View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.zeta, tf.compat.v1.zeta
tf.math.zeta(
x, q, name=None
)
The Hurwitz zeta function is defined as: \(\zeta(x, q) = \sum_{n=0}^{\infty} (q + n)^{-x}\)
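For instance, a minimal sketch (with q = 1 the Hurwitz zeta function reduces to the Riemann zeta function, so zeta(2, 1) is pi**2 / 6):
tf.math.zeta(tf.constant(2.), tf.constant(1.))  # ==> 1.6449341, i.e. pi**2 / 6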
Args
x A Tensor. Must be one of the following types: float32, float64.
q A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | tensorflow.math.zeta |
tf.meshgrid View source on GitHub Broadcasts parameters for evaluation on an N-D grid. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.meshgrid
tf.meshgrid(
*args, **kwargs
)
Given N one-dimensional coordinate arrays *args, returns a list outputs of N-D coordinate arrays for evaluating expressions on an N-D grid. Notes: meshgrid supports cartesian ('xy') and matrix ('ij') indexing conventions. When the indexing argument is set to 'xy' (the default), the broadcasting instructions for the first two dimensions are swapped. Examples: Calling X, Y = meshgrid(x, y) with the tensors
x = [1, 2, 3]
y = [4, 5, 6]
X, Y = tf.meshgrid(x, y)
# X = [[1, 2, 3],
# [1, 2, 3],
# [1, 2, 3]]
# Y = [[4, 4, 4],
# [5, 5, 5],
# [6, 6, 6]]
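With matrix ('ij') indexing the first two dimensions are not swapped; a minimal sketch reusing x and y from above:
X, Y = tf.meshgrid(x, y, indexing='ij')
# X = [[1, 1, 1],
#      [2, 2, 2],
#      [3, 3, 3]]
# Y = [[4, 5, 6],
#      [4, 5, 6],
#      [4, 5, 6]]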
Args
*args Tensors with rank 1.
**kwargs indexing: Either 'xy' or 'ij' (optional, default: 'xy'). name: A name for the operation (optional).
Returns
outputs A list of N Tensors with rank N.
Raises
TypeError When no keyword arguments (kwargs) are passed.
ValueError When indexing keyword argument is not one of xy or ij. | tensorflow.meshgrid |
Module: tf.mixed_precision Public API for tf.mixed_precision namespace. Modules experimental module: Public API for tf.mixed_precision.experimental namespace. | tensorflow.mixed_precision |
Module: tf.mixed_precision.experimental Public API for tf.mixed_precision.experimental namespace. Classes class DynamicLossScale: Loss scale that dynamically adjusts itself. class FixedLossScale: Loss scale with a fixed value. class LossScale: Base class for all TF1 loss scales. | tensorflow.mixed_precision.experimental |
tf.mixed_precision.experimental.DynamicLossScale Loss scale that dynamically adjusts itself. Inherits From: LossScale View aliases Main aliases
tf.train.experimental.DynamicLossScale Compat aliases for migration See Migration guide for more details. tf.compat.v1.mixed_precision.DynamicLossScale, tf.compat.v1.mixed_precision.experimental.DynamicLossScale, tf.compat.v1.train.experimental.DynamicLossScale
tf.mixed_precision.experimental.DynamicLossScale(
initial_loss_scale=(2 ** 15), increment_period=2000, multiplier=2.0
)
Warning: This class is deprecated and will be unexposed from the TF 2 namespace starting in TensorFlow 2.5. In TensorFlow 2.5, this class will only be accessible as tf.compat.v1.mixed_precision.DynamicLossScale. Additionally in 2.5, you will no longer be able to pass a DynamicLossScale to a tf.keras.mixed_precision.Policy. All the functionality in this class has been merged into tf.keras.mixed_precision.LossScaleOptimizer, so this class is no longer needed. Dynamic loss scaling works by adjusting the loss scale as training progresses. The goal is to keep the loss scale as high as possible without overflowing the gradients. As long as the gradients do not overflow, raising the loss scale never hurts. The algorithm starts by setting the loss scale to an initial value. Every N steps that the gradients are finite, the loss scale is increased by some factor. However, if a NaN or Inf gradient is found, the gradients for that step are not applied, and the loss scale is decreased by the factor. This process tends to keep the loss scale as high as possible without gradients overflowing.
Args
initial_loss_scale A Python float. The loss scale to use at the beginning. It's better to start this at a very high number, because a loss scale that is too high gets lowered far more quickly than a loss scale that is too low gets raised. The default is 2 ** 15, which is approximately half the maximum float16 value.
increment_period Increases loss scale every increment_period consecutive steps that finite gradients are encountered. If a nonfinite gradient is encountered, the count is reset back to zero.
multiplier The multiplier to use when increasing or decreasing the loss scale.
Attributes
increment_period
initial_loss_scale
multiplier
Methods from_config View source
@classmethod
from_config(
config
)
Creates the LossScale from its config. get_config View source
get_config()
Returns the config of this loss scale. update View source
update(
grads
)
Updates loss scale based on if gradients are finite in current step. __call__ View source
__call__()
Returns the current loss scale as a scalar float32 tensor. | tensorflow.mixed_precision.experimental.dynamiclossscale |
tf.mixed_precision.experimental.FixedLossScale Loss scale with a fixed value. Inherits From: LossScale View aliases Main aliases
tf.train.experimental.FixedLossScale Compat aliases for migration See Migration guide for more details. tf.compat.v1.mixed_precision.FixedLossScale, tf.compat.v1.mixed_precision.experimental.FixedLossScale, tf.compat.v1.train.experimental.FixedLossScale
tf.mixed_precision.experimental.FixedLossScale(
loss_scale_value
)
Warning: This class is deprecated and will be unexposed from the TF 2 namespace starting in TensorFlow 2.5. In TensorFlow 2.5, this class will only be accessible as tf.compat.v1.mixed_precision.FixedLossScale. Additionally in 2.5, you will no longer be able to pass a FixedLossScale to a tf.keras.mixed_precision.Policy. All the functionality in this class has been merged into tf.keras.mixed_precision.LossScaleOptimizer, so this class is no longer needed. The loss scale is not updated for the lifetime of instances of this class. A given instance of this class always returns the same number when called.
Args
loss_scale_value A Python float. Its ideal value varies depending on the model. Choosing too small a loss_scale might affect model quality; too big a loss_scale might cause inf or nan. There is no single right loss_scale to apply. There is no harm in choosing a relatively big number as long as no nan or inf is encountered in training.
Raises
ValueError If loss_scale_value is less than 1. Methods from_config View source
@classmethod
from_config(
config
)
Creates the LossScale from its config. get_config View source
get_config()
Returns the config of this loss scale. update View source
update(
grads
)
Updates the value of the loss scale. The loss scale will be potentially updated, based on the value of grads. The tensor returned by calling this class is only updated when this function is evaluated. In eager mode, this directly updates the loss scale, so that calling __call__ will return the newly updated loss scale. In graph mode, this returns an op that, when evaluated, updates the loss scale. This function also returns a should_apply_gradients bool. If False, gradients should not be applied to the variables that step, as nonfinite gradients were found, and the loss scale has been updated to reduce the chance of finding nonfinite gradients in the next step. Some loss scale classes will always return True, as they cannot adjust themselves in response to nonfinite gradients. When a DistributionStrategy is used, this function may only be called in a cross-replica context.
Args
grads A nested structure of unscaled gradients, each of which is the gradient of the loss with respect to a weight. The gradients should have already been divided by the loss scale before being passed to this function. 'None' gradients are accepted, and are ignored.
Returns
update_op In eager mode, None. In graph mode, an op to update the loss scale.
should_apply_gradients Either a bool or a scalar boolean tensor. If False, the caller should skip applying grads to the variables this step. __call__ View source
__call__()
Returns the current loss scale as a scalar float32 tensor. | tensorflow.mixed_precision.experimental.fixedlossscale |
tf.mixed_precision.experimental.LossScale Base class for all TF1 loss scales. View aliases Main aliases
tf.train.experimental.LossScale Compat aliases for migration See Migration guide for more details. tf.compat.v1.mixed_precision.LossScale, tf.compat.v1.mixed_precision.experimental.LossScale, tf.compat.v1.train.experimental.LossScale
tf.mixed_precision.experimental.LossScale()
Warning: This class is deprecated and will be unexposed from the TF 2 namespace starting in TensorFlow 2.5. In TensorFlow 2.5, this class will only be accessible as tf.compat.v1.mixed_precision.LossScale. Additionally in 2.5, you will no longer be able to pass a LossScale to a tf.keras.mixed_precision.Policy. All the functionality in this class has been merged into tf.keras.mixed_precision.LossScaleOptimizer, so this class is no longer needed. This is an abstract base class, so you cannot instantiate it directly. Instead, use one of its concrete subclasses: tf.compat.v1.mixed_precision.DynamicLossScale or tf.compat.v1.mixed_precision.FixedLossScale. Loss scaling is a process that multiplies the loss by a multiplier called the loss scale, and divides each gradient by the same multiplier. The pseudocode for this process is:
loss = ...
loss *= loss_scale
grads = gradients(loss, vars)
grads /= loss_scale
Mathematically, loss scaling has no effect, but can help avoid numerical underflow in intermediate gradients when float16 tensors are used for mixed precision training. By multiplying the loss, each intermediate gradient will have the same multiplier applied. Instances of this class represent a loss scale. Calling instances of this class returns the loss scale as a scalar float32 tensor, while method update() updates the loss scale depending on the values of the gradients. Optimizers use instances of this class to scale loss and gradients. In most functions that accept a LossScale, you can also pass an int (such as 8) to create a FixedLossScale or the string "dynamic" to create a dynamic loss scale. Methods from_config View source
@classmethod
from_config(
config
)
Creates the LossScale from its config. get_config View source
@abc.abstractmethod
get_config()
Returns the config of this loss scale. update View source
@abc.abstractmethod
update(
grads
)
Updates the value of the loss scale. The loss scale will be potentially updated, based on the value of grads. The tensor returned by calling this class is only updated when this function is evaluated. In eager mode, this directly updates the loss scale, so that calling __call__ will return the newly updated loss scale. In graph mode, this returns an op that, when evaluated, updates the loss scale. This function also returns a should_apply_gradients bool. If False, gradients should not be applied to the variables that step, as nonfinite gradients were found, and the loss scale has been updated to reduce the chance of finding nonfinite gradients in the next step. Some loss scale classes will always return True, as they cannot adjust themselves in response to nonfinite gradients. When a DistributionStrategy is used, this function may only be called in a cross-replica context.
Args
grads A nested structure of unscaled gradients, each of which is the gradient of the loss with respect to a weight. The gradients should have already been divided by the loss scale before being passed to this function. 'None' gradients are accepted, and are ignored.
Returns
update_op In eager mode, None. In graph mode, an op to update the loss scale.
should_apply_gradients Either a bool or a scalar boolean tensor. If False, the caller should skip applying grads to the variables this step. __call__ View source
@abc.abstractmethod
__call__()
Returns the current loss scale as a scalar float32 tensor. | tensorflow.mixed_precision.experimental.lossscale |
Module: tf.mlir Public API for tf.mlir namespace. Modules experimental module: Public API for tf.mlir.experimental namespace. | tensorflow.mlir |
Module: tf.mlir.experimental Public API for tf.mlir.experimental namespace. Functions convert_function(...): Import a ConcreteFunction and convert it to a textual MLIR module. convert_graph_def(...): Import a GraphDef and convert it to a textual MLIR module. | tensorflow.mlir.experimental |
tf.mlir.experimental.convert_function Import a ConcreteFunction and convert it to a textual MLIR module. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.mlir.experimental.convert_function
tf.mlir.experimental.convert_function(
concrete_function, pass_pipeline='tf-standard-pipeline'
)
This API is only intended for inspecting the internals of TensorFlow and the string returned is at the moment intended for debugging purposes. A tf.function can be imported and converted from TensorFlow to TensorFlow MLIR with this API by extracting its ConcreteFunction (eagerly-executing wrapper around a tf.Graph). For example:
@tf.function
def add(a, b):
  return a + b
concrete_function = add.get_concrete_function(
    tf.TensorSpec(None, tf.dtypes.float32),
    tf.TensorSpec(None, tf.dtypes.float32))
tf.mlir.experimental.convert_function(concrete_function)
'...module attributes {...} {...}'
Args
concrete_function An object of type ConcreteFunction.
pass_pipeline A textual description of an MLIR Pass Pipeline to run on the module, see MLIR documentation for the textual pass pipeline syntax.
Returns A textual representation of the MLIR module corresponding to the ConcreteFunction.
Raises
InvalidArgumentError if concrete_function is invalid or cannot be converted to MLIR. | tensorflow.mlir.experimental.convert_function |
tf.mlir.experimental.convert_graph_def Import a GraphDef and convert it to a textual MLIR module. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.mlir.experimental.convert_graph_def
tf.mlir.experimental.convert_graph_def(
graph_def, pass_pipeline='tf-standard-pipeline'
)
This API is only intended for inspecting the internals of TensorFlow and the string returned is at the moment intended for debugging purposes.
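For example, a GraphDef can be extracted from a concrete function's graph and converted (a sketch under the same assumptions as the convert_function example above):
@tf.function
def square(x):
  return x * x
graph_def = square.get_concrete_function(
    tf.TensorSpec(None, tf.dtypes.float32)).graph.as_graph_def()
tf.mlir.experimental.convert_graph_def(graph_def)
'...module attributes {...} {...}'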
Args
graph_def An object of type graph_pb2.GraphDef or a textual proto representation of a valid GraphDef.
pass_pipeline A textual description of an MLIR Pass Pipeline to run on the module, see MLIR documentation for the textual pass pipeline syntax.
Returns A textual representation of the MLIR module corresponding to the graphdef.
Raises
InvalidArgumentError if graph_def is invalid or cannot be converted to MLIR. | tensorflow.mlir.experimental.convert_graph_def |
tf.Module View source on GitHub Base neural network module class. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.Module
tf.Module(
name=None
)
A module is a named container for tf.Variables, other tf.Modules and functions which apply to user input. For example a dense layer in a neural network might be implemented as a tf.Module:
class Dense(tf.Module):
  def __init__(self, input_dim, output_size, name=None):
    super(Dense, self).__init__(name=name)
    self.w = tf.Variable(
        tf.random.normal([input_dim, output_size]), name='w')
    self.b = tf.Variable(tf.zeros([output_size]), name='b')
  def __call__(self, x):
    y = tf.matmul(x, self.w) + self.b
    return tf.nn.relu(y)
You can use the Dense layer as you would expect:
d = Dense(input_dim=3, output_size=2)
d(tf.ones([1, 3]))
<tf.Tensor: shape=(1, 2), dtype=float32, numpy=..., dtype=float32)>
By subclassing tf.Module instead of object, any tf.Variable or tf.Module instances assigned to object properties can be collected using the variables, trainable_variables or submodules property:
d.variables
(<tf.Variable 'b:0' shape=(2,) dtype=float32, numpy=...,
dtype=float32)>,
<tf.Variable 'w:0' shape=(3, 2) dtype=float32, numpy=..., dtype=float32)>)
Subclasses of tf.Module can also take advantage of the _flatten method which can be used to implement tracking of any other types. All tf.Module classes have an associated tf.name_scope which can be used to group operations in TensorBoard and create hierarchies for variable names which can help with debugging. We suggest using the name scope when creating nested submodules/parameters or for forward methods whose graph you might want to inspect in TensorBoard. You can enter the name scope explicitly using with self.name_scope: or you can annotate methods (apart from __init__) with @tf.Module.with_name_scope.
class MLP(tf.Module):
  def __init__(self, input_size, sizes, name=None):
    super(MLP, self).__init__(name=name)
    self.layers = []
    with self.name_scope:
      for size in sizes:
        self.layers.append(Dense(input_dim=input_size, output_size=size))
        input_size = size
  @tf.Module.with_name_scope
  def __call__(self, x):
    for layer in self.layers:
      x = layer(x)
    return x
module = MLP(input_size=5, sizes=[5, 5])
module.variables
(<tf.Variable 'mlp/b:0' shape=(5,) dtype=float32, numpy=..., dtype=float32)>,
<tf.Variable 'mlp/w:0' shape=(5, 5) dtype=float32, numpy=...,
dtype=float32)>,
<tf.Variable 'mlp/b:0' shape=(5,) dtype=float32, numpy=..., dtype=float32)>,
<tf.Variable 'mlp/w:0' shape=(5, 5) dtype=float32, numpy=...,
dtype=float32)>)
Attributes
name Returns the name of this module as passed or determined in the ctor.
Note: This is not the same as the self.name_scope.name which includes parent module names.
name_scope Returns a tf.name_scope instance for this class.
submodules Sequence of all sub-modules. Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).
a = tf.Module()
b = tf.Module()
c = tf.Module()
a.b = b
b.c = c
list(a.submodules) == [b, c]
True
list(b.submodules) == [c]
True
list(c.submodules) == []
True
trainable_variables Sequence of trainable variables owned by this module and its submodules.
Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don't expect the return value to change.
variables Sequence of variables owned by this module and its submodules.
Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don't expect the return value to change.
Methods with_name_scope View source
@classmethod
with_name_scope(
method
)
Decorator to automatically enter the module name scope.
class MyModule(tf.Module):
  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
    return tf.matmul(x, self.w)
Using the above module would produce tf.Variables and tf.Tensors whose names included the module name:
mod = MyModule()
mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args
method The method to wrap.
Returns The original method wrapped such that it enters the module's name scope. | tensorflow.module |
tf.name_scope View source on GitHub A context manager for use when defining a Python op.
tf.name_scope(
name
)
This context manager pushes a name scope, which will make the name of all operations added within it have a prefix. For example, to define a new Python op called my_op:
def my_op(a, b, c, name=None):
  with tf.name_scope("MyOp") as scope:
    a = tf.convert_to_tensor(a, name="a")
    b = tf.convert_to_tensor(b, name="b")
    c = tf.convert_to_tensor(c, name="c")
    # Define some computation that uses `a`, `b`, and `c`.
    return foo_op(..., name=scope)
When executed, the Tensors a, b, c, will have names MyOp/a, MyOp/b, and MyOp/c. Inside a tf.function, if the scope name already exists, the name will be made unique by appending _n. For example, calling my_op the second time will generate MyOp_1/a, etc.
Args
name The prefix to use on all names created within the name scope.
Raises
ValueError If name is not a string.
Attributes
name
Methods __enter__ View source
__enter__()
Start the scope block.
Returns The scope name.
__exit__ View source
__exit__(
type_arg, value_arg, traceback_arg
) | tensorflow.name_scope |
Module: tf.nest Public API for tf.nest namespace. Functions assert_same_structure(...): Asserts that two structures are nested in the same way. flatten(...): Returns a flat list from a given nested structure. is_nested(...): Returns true if its input is a collections.abc.Sequence (except strings). map_structure(...): Applies func to each entry in structure and returns a new structure. pack_sequence_as(...): Returns a given flattened sequence packed into a given structure. | tensorflow.nest |
tf.nest.assert_same_structure View source on GitHub Asserts that two structures are nested in the same way. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nest.assert_same_structure
tf.nest.assert_same_structure(
nest1, nest2, check_types=True, expand_composites=False
)
Note that namedtuples with identical name and fields are always considered to have the same shallow structure (even with check_types=True). For instance, the following check passes even though the two namedtuples hold different values (on success the function simply returns None):
def nt(a, b):
  return collections.namedtuple('foo', 'a b')(a, b)
assert_same_structure(nt(0, 1), nt(2, 3))
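Conversely, a minimal sketch of a mismatch (the second structure nests its last element one level deeper):
tf.nest.assert_same_structure([0, [1, 2]], ['a', ['b', 'c']])  # passes: same nesting
tf.nest.assert_same_structure([0, 1], [0, [1]])  # raises ValueError: different nesting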
Args
nest1 an arbitrarily nested structure.
nest2 an arbitrarily nested structure.
check_types if True (default) types of sequences are checked as well, including the keys of dictionaries. If set to False, for example a list and a tuple of objects will look the same if they have the same size. Note that namedtuples with identical name and fields are always considered to have the same shallow structure. Two types will also be considered the same if they are both list subtypes (which allows "list" and "_ListWrapper" from trackable dependency tracking to compare equal).
expand_composites If true, then composite tensors such as tf.sparse.SparseTensor and tf.RaggedTensor are expanded into their component tensors.
Raises
ValueError If the two structures do not have the same number of elements or if the two structures are not nested in the same way.
TypeError If the two structures differ in the type of sequence in any of their substructures. Only possible if check_types is True. | tensorflow.nest.assert_same_structure |
tf.nest.flatten View source on GitHub Returns a flat list from a given nested structure. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nest.flatten
tf.nest.flatten(
structure, expand_composites=False
)
If nest is not a structure, tuple (or a namedtuple), dict, or an attrs class, then returns a single-element list: [nest]. In the case of dict instances, the sequence consists of the values, sorted by key to ensure deterministic behavior. This is true also for OrderedDict instances: their sequence order is ignored, the sorting order of keys is used instead. The same convention is followed in pack_sequence_as. This correctly repacks dicts and OrderedDicts after they have been flattened, and also allows flattening an OrderedDict and then repacking it back using a corresponding plain dict, or vice-versa. Dictionaries with non-sortable keys cannot be flattened. Users must not modify any collections used in nest while this function is running. Examples: Python dict (ordered by key):
dict = { "key3": "value3", "key1": "value1", "key2": "value2" }
tf.nest.flatten(dict)
['value1', 'value2', 'value3']
For a nested python tuple:
tuple = ((1.0, 2.0), (3.0, 4.0, 5.0), (6.0))
tf.nest.flatten(tuple)
[1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
Numpy array (will not flatten):
array = np.array([[1, 2], [3, 4]])
tf.nest.flatten(array)
[array([[1, 2],
[3, 4]])]
tf.Tensor (will not flatten):
tensor = tf.constant([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])
tf.nest.flatten(tensor)
[<tf.Tensor: shape=(3, 3), dtype=float32, numpy=
array([[1., 2., 3.],
[4., 5., 6.],
[7., 8., 9.]], dtype=float32)>]
Args
structure an arbitrarily nested structure. Note, numpy arrays are considered atoms and are not flattened.
expand_composites If true, then composite tensors such as tf.sparse.SparseTensor and tf.RaggedTensor are expanded into their component tensors.
Returns A Python list, the flattened version of the input.
Raises
TypeError The nest is or contains a dict with non-sortable keys. | tensorflow.nest.flatten |
tf.nest.is_nested View source on GitHub Returns true if its input is a collections.abc.Sequence (except strings). View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nest.is_nested
tf.nest.is_nested(
seq
)
Args
seq an input sequence.
Returns True if the sequence is not a string and is a collections.abc.Sequence or a dict.
tf.nest.map_structure View source on GitHub Applies func to each entry in structure and returns a new structure. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nest.map_structure
tf.nest.map_structure(
func, *structure, **kwargs
)
Applies func(x[0], x[1], ...) where x[i] is an entry in structure[i]. All structures in structure must have the same arity, and the return value will contain results with the same structure layout. Examples: A single Python dict:
a = {"hello": 24, "world": 76}
tf.nest.map_structure(lambda p: p * 2, a)
{'hello': 48, 'world': 152}
Multiple Python dictionaries:
d1 = {"hello": 24, "world": 76}
d2 = {"hello": 36, "world": 14}
tf.nest.map_structure(lambda p1, p2: p1 + p2, d1, d2)
{'hello': 60, 'world': 90}
Args
func A callable that accepts as many arguments as there are structures.
*structure scalar, or tuple or dict or list of constructed scalars and/or other tuples/lists, or scalars. Note: numpy arrays are considered as scalars.
**kwargs Valid keyword args are:
check_types: If set to True (default) the types of iterables within the structures have to be same (e.g. map_structure(func, [1], (1,)) raises a TypeError exception). To allow this set this argument to False. Note that namedtuples with identical name and fields are always considered to have the same shallow structure.
expand_composites: If set to True, then composite tensors such as tf.sparse.SparseTensor and tf.RaggedTensor are expanded into their component tensors. If False (the default), then composite tensors are not expanded.
Returns A new structure with the same arity as structure, whose values correspond to func(x[0], x[1], ...) where x[i] is a value in the corresponding location in structure[i]. If there are different sequence types and check_types is False the sequence types of the first structure will be used.
Raises
TypeError If func is not callable or if the structures do not match each other by depth tree.
ValueError If no structure is provided or if the structures do not match each other by type.
ValueError If wrong keyword arguments are provided. | tensorflow.nest.map_structure |
tf.nest.pack_sequence_as View source on GitHub Returns a given flattened sequence packed into a given structure. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nest.pack_sequence_as
tf.nest.pack_sequence_as(
structure, flat_sequence, expand_composites=False
)
If structure is a scalar, flat_sequence must be a single-element list; in this case the return value is flat_sequence[0]. If structure is or contains a dict instance, the keys will be sorted to pack the flat sequence in deterministic order. This is true also for OrderedDict instances: their sequence order is ignored, the sorting order of keys is used instead. The same convention is followed in flatten. This correctly repacks dicts and OrderedDicts after they have been flattened, and also allows flattening an OrderedDict and then repacking it back using a corresponding plain dict, or vice-versa. Dictionaries with non-sortable keys cannot be flattened.
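For instance, a minimal sketch showing that pack_sequence_as inverts flatten for a fixed structure:
structure = [0, [0, 0]]
tf.nest.pack_sequence_as(structure, ['a', 'b', 'c'])
# ==> ['a', ['b', 'c']]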
Args
structure Nested structure, whose structure is given by nested lists, tuples, and dicts. Note: numpy arrays and strings are considered scalars.
flat_sequence flat sequence to pack.
expand_composites If true, then composite tensors such as tf.sparse.SparseTensor and tf.RaggedTensor are expanded into their component tensors.
Returns
packed flat_sequence converted to have the same recursive structure as structure.
Raises
ValueError If flat_sequence and structure have different element counts.
TypeError structure is or contains a dict with non-sortable keys. | tensorflow.nest.pack_sequence_as |
Module: tf.nn Wrappers for primitive Neural Net (NN) Operations.
Classes
class RNNCellDeviceWrapper: Operator that ensures an RNNCell runs on a particular device.
class RNNCellDropoutWrapper: Operator adding dropout to inputs and outputs of the given cell.
class RNNCellResidualWrapper: RNNCell wrapper that ensures cell inputs are added to the outputs.
Functions
all_candidate_sampler(...): Generate the set of all classes.
atrous_conv2d(...): Atrous convolution (a.k.a. convolution with holes or dilated convolution).
atrous_conv2d_transpose(...): The transpose of atrous_conv2d.
avg_pool(...): Performs the avg pooling on the input.
avg_pool1d(...): Performs the average pooling on the input.
avg_pool2d(...): Performs the average pooling on the input.
avg_pool3d(...): Performs the average pooling on the input.
batch_norm_with_global_normalization(...): Batch normalization.
batch_normalization(...): Batch normalization.
bias_add(...): Adds bias to value.
collapse_repeated(...): Merge repeated labels into single labels.
compute_accidental_hits(...): Compute the position ids in sampled_candidates matching true_classes.
compute_average_loss(...): Scales per-example losses with sample_weights and computes their average.
conv1d(...): Computes a 1-D convolution given 3-D input and filter tensors.
conv1d_transpose(...): The transpose of conv1d.
conv2d(...): Computes a 2-D convolution given input and 4-D filters tensors.
conv2d_transpose(...): The transpose of conv2d.
conv3d(...): Computes a 3-D convolution given 5-D input and filters tensors.
conv3d_transpose(...): The transpose of conv3d.
conv_transpose(...): The transpose of convolution.
convolution(...): Computes sums of N-D convolutions (actually cross-correlation).
crelu(...): Computes Concatenated ReLU.
ctc_beam_search_decoder(...): Performs beam search decoding on the logits given in input.
ctc_greedy_decoder(...): Performs greedy decoding on the logits given in input (best path).
ctc_loss(...): Computes CTC (Connectionist Temporal Classification) loss.
ctc_unique_labels(...): Get unique labels and indices for batched labels for tf.nn.ctc_loss.
depth_to_space(...): DepthToSpace for tensors of type T.
depthwise_conv2d(...): Depthwise 2-D convolution.
depthwise_conv2d_backprop_filter(...): Computes the gradients of depthwise convolution with respect to the filter.
depthwise_conv2d_backprop_input(...): Computes the gradients of depthwise convolution with respect to the input.
dilation2d(...): Computes the grayscale dilation of 4-D input and 3-D filters tensors.
dropout(...): Computes dropout: randomly sets elements to zero to prevent overfitting.
elu(...): Computes exponential linear: exp(features) - 1 if < 0, features otherwise.
embedding_lookup(...): Looks up embeddings for the given ids from a list of tensors.
embedding_lookup_sparse(...): Looks up embeddings for the given ids and weights from a list of tensors.
erosion2d(...): Computes the grayscale erosion of 4-D value and 3-D filters tensors.
fixed_unigram_candidate_sampler(...): Samples a set of classes using the provided (fixed) base distribution.
fractional_avg_pool(...): Performs fractional average pooling on the input.
fractional_max_pool(...): Performs fractional max pooling on the input.
gelu(...): Compute the Gaussian Error Linear Unit (GELU) activation function.
in_top_k(...): Says whether the targets are in the top K predictions.
isotonic_regression(...): Solves isotonic regression problems along the given axis.
l2_loss(...): L2 Loss.
l2_normalize(...): Normalizes along dimension axis using an L2 norm.
leaky_relu(...): Compute the Leaky ReLU activation function.
learned_unigram_candidate_sampler(...): Samples a set of classes from a distribution learned during training.
local_response_normalization(...): Local Response Normalization.
log_poisson_loss(...): Computes log Poisson loss given log_input.
log_softmax(...): Computes log softmax activations.
lrn(...): Local Response Normalization.
max_pool(...): Performs the max pooling on the input.
max_pool1d(...): Performs the max pooling on the input.
max_pool2d(...): Performs the max pooling on the input.
max_pool3d(...): Performs the max pooling on the input.
max_pool_with_argmax(...): Performs max pooling on the input and outputs both max values and indices.
moments(...): Calculates the mean and variance of x.
nce_loss(...): Computes and returns the noise-contrastive estimation training loss.
normalize_moments(...): Calculate the mean and variance based on the sufficient statistics.
pool(...): Performs an N-D pooling operation.
relu(...): Computes rectified linear: max(features, 0).
relu6(...): Computes Rectified Linear 6: min(max(features, 0), 6).
safe_embedding_lookup_sparse(...): Lookup embedding results, accounting for invalid IDs and empty features.
sampled_softmax_loss(...): Computes and returns the sampled softmax training loss.
scale_regularization_loss(...): Scales the sum of the given regularization losses by number of replicas.
selu(...): Computes scaled exponential linear: scale * alpha * (exp(features) - 1).
separable_conv2d(...): 2-D convolution with separable filters.
sigmoid(...): Computes sigmoid of x element-wise.
sigmoid_cross_entropy_with_logits(...): Computes sigmoid cross entropy given logits.
silu(...): Computes the SiLU or Swish activation function: x * sigmoid(x).
softmax(...): Computes softmax activations.
softmax_cross_entropy_with_logits(...): Computes softmax cross entropy between logits and labels.
softplus(...): Computes softplus: log(exp(features) + 1).
softsign(...): Computes softsign: features / (abs(features) + 1).
space_to_batch(...): SpaceToBatch for N-D tensors of type T.
space_to_depth(...): SpaceToDepth for tensors of type T.
sparse_softmax_cross_entropy_with_logits(...): Computes sparse softmax cross entropy between logits and labels.
sufficient_statistics(...): Calculate the sufficient statistics for the mean and variance of x.
swish(...): Computes the SiLU or Swish activation function: x * sigmoid(x).
tanh(...): Computes hyperbolic tangent of x element-wise.
top_k(...): Finds values and indices of the k largest entries for the last dimension.
weighted_cross_entropy_with_logits(...): Computes a weighted cross entropy.
weighted_moments(...): Returns the frequency-weighted mean and variance of x.
with_space_to_batch(...): Performs op on the space-to-batch representation of input.
zero_fraction(...): Returns the fraction of zeros in value. | tensorflow.nn |
tf.nn.atrous_conv2d View source on GitHub Atrous convolution (a.k.a. convolution with holes or dilated convolution). View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.atrous_conv2d
tf.nn.atrous_conv2d(
value, filters, rate, padding, name=None
)
This function is a simpler wrapper around the more general tf.nn.convolution, and exists only for backwards compatibility. You can use tf.nn.convolution to perform 1-D, 2-D, or 3-D atrous convolution. Computes a 2-D atrous convolution, also known as convolution with holes or dilated convolution, given 4-D value and filters tensors. If the rate parameter is equal to one, it performs regular 2-D convolution. If the rate parameter is greater than one, it performs convolution with holes, sampling the input values every rate pixels in the height and width dimensions. This is equivalent to convolving the input with a set of upsampled filters, produced by inserting rate - 1 zeros between two consecutive values of the filters along the height and width dimensions, hence the name atrous convolution or convolution with holes (the French word trous means holes in English). More specifically: output[batch, height, width, out_channel] =
sum_{dheight, dwidth, in_channel} (
filters[dheight, dwidth, in_channel, out_channel] *
value[batch, height + rate*dheight, width + rate*dwidth, in_channel]
)
Atrous convolution allows us to explicitly control how densely to compute feature responses in fully convolutional networks. Used in conjunction with bilinear interpolation, it offers an alternative to conv2d_transpose in dense prediction tasks such as semantic image segmentation, optical flow computation, or depth estimation. It also allows us to effectively enlarge the field of view of filters without increasing the number of parameters or the amount of computation. For a description of atrous convolution and how it can be used for dense feature extraction, please see: (Chen et al., 2015). The same operation is investigated further in (Yu et al., 2016). Previous works that effectively use atrous convolution in different ways are, among others, (Sermanet et al., 2014) and (Giusti et al., 2013). Atrous convolution is also closely related to the so-called noble identities in multi-rate signal processing. There are many different ways to implement atrous convolution (see the refs above). The implementation here reduces atrous_conv2d(value, filters, rate, padding=padding)
to the following three operations: paddings = ...
net = space_to_batch(value, paddings, block_size=rate)
net = conv2d(net, filters, strides=[1, 1, 1, 1], padding="VALID")
crops = ...
net = batch_to_space(net, crops, block_size=rate)
Advanced usage. Note the following optimization: A sequence of atrous_conv2d operations with identical rate parameters, 'SAME' padding, and filters with odd heights/widths: net = atrous_conv2d(net, filters1, rate, padding="SAME")
net = atrous_conv2d(net, filters2, rate, padding="SAME")
...
net = atrous_conv2d(net, filtersK, rate, padding="SAME")
can be equivalently performed cheaper in terms of computation and memory as: pad = ... # padding so that the input dims are multiples of rate
net = space_to_batch(net, paddings=pad, block_size=rate)
net = conv2d(net, filters1, strides=[1, 1, 1, 1], padding="SAME")
net = conv2d(net, filters2, strides=[1, 1, 1, 1], padding="SAME")
...
net = conv2d(net, filtersK, strides=[1, 1, 1, 1], padding="SAME")
net = batch_to_space(net, crops=pad, block_size=rate)
because a pair of consecutive space_to_batch and batch_to_space ops with the same block_size cancel out when their respective paddings and crops inputs are identical.
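A minimal usage sketch (the tensors and shapes below are arbitrary, chosen only for illustration): with rate=1 the op reduces to ordinary 2-D convolution, and for rate > 1 it should match tf.nn.conv2d with the corresponding dilations:
value = tf.random.normal([1, 8, 8, 3])      # NHWC input, arbitrary values
filters = tf.random.normal([3, 3, 3, 16])   # [height, width, in_channels, out_channels]
y1 = tf.nn.atrous_conv2d(value, filters, rate=2, padding='SAME')
y2 = tf.nn.conv2d(value, filters, strides=[1, 1, 1, 1], padding='SAME',
                  dilations=[1, 2, 2, 1])
# Both have shape [1, 8, 8, 16] and agree up to floating-point error.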
Args
value A 4-D Tensor of type float. It needs to be in the default "NHWC" format. Its shape is [batch, in_height, in_width, in_channels].
filters A 4-D Tensor with the same type as value and shape [filter_height, filter_width, in_channels, out_channels]. filters' in_channels dimension must match that of value. Atrous convolution is equivalent to standard convolution with upsampled filters with effective height filter_height + (filter_height - 1) * (rate - 1) and effective width filter_width + (filter_width - 1) * (rate - 1), produced by inserting rate - 1 zeros along consecutive elements across the filters' spatial dimensions.
rate A positive int32. The stride with which we sample input values across the height and width dimensions. Equivalently, the rate by which we upsample the filter values by inserting zeros across the height and width dimensions. In the literature, the same parameter is sometimes called input stride or dilation.
padding A string, either 'VALID' or 'SAME'. The padding algorithm.
name Optional name for the returned tensor.
Returns A Tensor with the same type as value. Output shape with 'VALID' padding is: [batch, height - rate * (filter_height - 1), width - rate * (filter_width - 1), out_channels]. Output shape with 'SAME' padding is: [batch, height, width, out_channels].
Raises
ValueError If input/output depth does not match filters' shape, or if padding is other than 'VALID' or 'SAME'. References: Multi-Scale Context Aggregation by Dilated Convolutions: Yu et al., 2016 (pdf) Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs: Chen et al., 2015 (pdf) OverFeat - Integrated Recognition, Localization and Detection using Convolutional Networks: Sermanet et al., 2014 (pdf) Fast Image Scanning with Deep Max-Pooling Convolutional Neural Networks: Giusti et al., 2013 (pdf) | tensorflow.nn.atrous_conv2d |
tf.nn.atrous_conv2d_transpose View source on GitHub The transpose of atrous_conv2d. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.atrous_conv2d_transpose
tf.nn.atrous_conv2d_transpose(
value, filters, output_shape, rate, padding, name=None
)
This operation is sometimes called "deconvolution" after (Zeiler et al., 2010), but is really the transpose (gradient) of atrous_conv2d rather than an actual deconvolution.
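A minimal usage sketch (shapes are arbitrary and chosen only for illustration; note the [height, width, out_channels, in_channels] filter layout):
value = tf.random.normal([1, 8, 8, 16])     # NHWC input
filters = tf.random.normal([3, 3, 3, 16])   # out_channels=3, in_channels=16
y = tf.nn.atrous_conv2d_transpose(value, filters, output_shape=[1, 8, 8, 3],
                                  rate=2, padding='SAME')
# y has shape [1, 8, 8, 3].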
Args
value A 4-D Tensor of type float. It needs to be in the default NHWC format. Its shape is [batch, in_height, in_width, in_channels].
filters A 4-D Tensor with the same type as value and shape [filter_height, filter_width, out_channels, in_channels]. filters' in_channels dimension must match that of value. Atrous convolution is equivalent to standard convolution with upsampled filters with effective height filter_height + (filter_height - 1) * (rate - 1) and effective width filter_width + (filter_width - 1) * (rate - 1), produced by inserting rate - 1 zeros along consecutive elements across the filters' spatial dimensions.
output_shape A 1-D Tensor with 4 elements, representing the output shape of the deconvolution op.
rate A positive int32. The stride with which we sample input values across the height and width dimensions. Equivalently, the rate by which we upsample the filter values by inserting zeros across the height and width dimensions. In the literature, the same parameter is sometimes called input stride or dilation.
padding A string, either 'VALID' or 'SAME'. The padding algorithm.
name Optional name for the returned tensor.
Returns A Tensor with the same type as value.
Raises
ValueError If input/output depth does not match filters' shape, or if padding is other than 'VALID' or 'SAME', or if the rate is less than one, or if the output_shape is not a tensor with 4 elements. References: Deconvolutional Networks: Zeiler et al., 2010 (pdf) | tensorflow.nn.atrous_conv2d_transpose |
tf.nn.avg_pool View source on GitHub Performs the avg pooling on the input. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.avg_pool_v2
tf.nn.avg_pool(
input, ksize, strides, padding, data_format=None, name=None
)
Each entry in output is the mean of the corresponding size ksize window in value.
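A minimal usage sketch (values chosen so the window means are easy to verify by hand):
x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])  # NHWC
tf.nn.avg_pool(x, ksize=2, strides=2, padding='VALID')
# Each output entry is the mean of a 2x2 window:
# [[2.5, 4.5], [10.5, 12.5]] (with shape [1, 2, 2, 1]).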
Args
input Tensor of rank N+2, of shape [batch_size] + input_spatial_shape + [num_channels] if data_format does not start with "NC" (default), or [batch_size, num_channels] + input_spatial_shape if data_format starts with "NC". Pooling happens over the spatial dimensions only.
ksize An int or list of ints that has length 1, N or N+2. The size of the window for each dimension of the input tensor.
strides An int or list of ints that has length 1, N or N+2. The stride of the sliding window for each dimension of the input tensor.
padding A string, either 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
data_format A string. Specifies the channel dimension. For N=1 it can be either "NWC" (default) or "NCW", for N=2 it can be either "NHWC" (default) or "NCHW" and for N=3 either "NDHWC" (default) or "NCDHW".
name Optional name for the operation.
Returns A Tensor of format specified by data_format. The average pooled output tensor. | tensorflow.nn.avg_pool |
tf.nn.avg_pool1d View source on GitHub Performs the average pooling on the input. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.avg_pool1d
tf.nn.avg_pool1d(
input, ksize, strides, padding, data_format='NWC', name=None
)
Each entry in output is the mean of the corresponding size ksize window in value. Note: internally, this op reshapes the input and uses the underlying 2-D operation.
Args
input A 3-D Tensor of the format specified by data_format.
ksize An int or list of ints that has length 1 or 3. The size of the window for each dimension of the input tensor.
strides An int or list of ints that has length 1 or 3. The stride of the sliding window for each dimension of the input tensor.
padding A string, either 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
data_format An optional string from: "NWC", "NCW". Defaults to "NWC".
name A name for the operation (optional).
Returns A Tensor of format specified by data_format. The average pooled output tensor. | tensorflow.nn.avg_pool1d |
tf.nn.avg_pool2d Performs the average pooling on the input.
tf.nn.avg_pool2d(
input, ksize, strides, padding, data_format='NHWC', name=None
)
Each entry in output is the mean of the corresponding size ksize window in value.
Args
input A 4-D Tensor of shape [batch, height, width, channels] and type float32, float64, qint8, quint8, or qint32.
ksize An int or list of ints that has length 1, 2 or 4. The size of the window for each dimension of the input tensor.
strides An int or list of ints that has length 1, 2 or 4. The stride of the sliding window for each dimension of the input tensor.
padding A string, either 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
data_format A string. 'NHWC' and 'NCHW' are supported.
name Optional name for the operation.
Returns A Tensor with the same type as value. The average pooled output tensor. | tensorflow.nn.avg_pool2d |
tf.nn.avg_pool3d View source on GitHub Performs the average pooling on the input. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.avg_pool3d
tf.nn.avg_pool3d(
input, ksize, strides, padding, data_format='NDHWC', name=None
)
Each entry in output is the mean of the corresponding size ksize window in value.
Args
input A 5-D Tensor of shape [batch, depth, height, width, channels] and type float32, float64, qint8, quint8, or qint32.
ksize An int or list of ints that has length 1, 3 or 5. The size of the window for each dimension of the input tensor.
strides An int or list of ints that has length 1, 3 or 5. The stride of the sliding window for each dimension of the input tensor.
padding A string, either 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
data_format A string. 'NDHWC' and 'NCDHW' are supported.
name Optional name for the operation.
Returns A Tensor with the same type as value. The average pooled output tensor. | tensorflow.nn.avg_pool3d |
tf.nn.batch_normalization View source on GitHub Batch normalization. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.batch_normalization
tf.nn.batch_normalization(
x, mean, variance, offset, scale, variance_epsilon, name=None
)
Normalizes a tensor by mean and variance, and applies (optionally) a scale \(\gamma\) to it, as well as an offset \(\beta\): \(\frac{\gamma(x-\mu)}{\sigma}+\beta\)
mean, variance, offset and scale are all expected to be of one of two shapes:
In all generality, they can have the same number of dimensions as the input x, with identical sizes as x for the dimensions that are not normalized over (the 'depth' dimension(s)), and dimension 1 for the others which are being normalized over. mean and variance in this case would typically be the outputs of tf.nn.moments(..., keepdims=True) during training, or running averages thereof during inference.
In the common case where the 'depth' dimension is the last dimension in the input tensor x, they may be one-dimensional tensors of the same size as the 'depth' dimension. This is the case for example for the common [batch, depth] layout of fully-connected layers, and [batch, height, width, depth] for convolutions. mean and variance in this case would typically be the outputs of tf.nn.moments(..., keepdims=False) during training, or running averages thereof during inference.
See equation 11 in Algorithm 2 of source: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy.
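A minimal usage sketch of the second (common) case, normalizing over the batch dimension with per-column statistics from tf.nn.moments:
x = tf.constant([[1., 2.], [3., 4.]])
mean, variance = tf.nn.moments(x, axes=[0])   # per-column mean [2., 3.], variance [1., 1.]
tf.nn.batch_normalization(x, mean, variance, offset=None, scale=None,
                          variance_epsilon=1e-5)
# approximately [[-1., -1.], [1., 1.]]: each column has zero mean and unit variance.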
Args
x Input Tensor of arbitrary dimensionality.
mean A mean Tensor.
variance A variance Tensor.
offset An offset Tensor, often denoted \(\beta\) in equations, or None. If present, will be added to the normalized tensor.
scale A scale Tensor, often denoted \(\gamma\) in equations, or None. If present, the scale is applied to the normalized tensor.
variance_epsilon A small float number to avoid dividing by 0.
name A name for this operation (optional).
Returns the normalized, scaled, offset tensor.
References: Batch Normalization - Accelerating Deep Network Training by Reducing Internal Covariate Shift: Ioffe et al., 2015 (pdf) | tensorflow.nn.batch_normalization |
tf.nn.batch_norm_with_global_normalization View source on GitHub Batch normalization.
tf.nn.batch_norm_with_global_normalization(
input, mean, variance, beta, gamma, variance_epsilon, scale_after_normalization,
name=None
)
This op is deprecated. See tf.nn.batch_normalization.
Args
input A 4D input Tensor.
mean A 1D mean Tensor with size matching the last dimension of input. This is the first output from tf.nn.moments, or a saved moving average thereof.
variance A 1D variance Tensor with size matching the last dimension of input. This is the second output from tf.nn.moments, or a saved moving average thereof.
beta A 1D beta Tensor with size matching the last dimension of input. An offset to be added to the normalized tensor.
gamma A 1D gamma Tensor with size matching the last dimension of input. If "scale_after_normalization" is true, this tensor will be multiplied with the normalized tensor.
variance_epsilon A small float number to avoid dividing by 0.
scale_after_normalization A bool indicating whether the resulted tensor needs to be multiplied with gamma.
name A name for this operation (optional).
Returns A batch-normalized input.
References: Batch Normalization - Accelerating Deep Network Training by Reducing Internal Covariate Shift: Ioffe et al., 2015 (pdf) | tensorflow.nn.batch_norm_with_global_normalization |
tf.nn.bias_add View source on GitHub Adds bias to value. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.bias_add
tf.nn.bias_add(
value, bias, data_format=None, name=None
)
This is (mostly) a special case of tf.add where bias is restricted to 1-D. Broadcasting is supported, so value may have any number of dimensions. Unlike tf.add, the type of bias is allowed to differ from value in the case where both types are quantized.
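A minimal usage sketch: the 1-D bias is broadcast across every other dimension of value:
tf.nn.bias_add(tf.zeros([2, 3]), tf.constant([1., 2., 3.]))
# → [[1., 2., 3.], [1., 2., 3.]]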
Args
value A Tensor with type float, double, int64, int32, uint8, int16, int8, complex64, or complex128.
bias A 1-D Tensor with size matching the channel dimension of value. Must be the same type as value unless value is a quantized type, in which case a different quantized type may be used.
data_format A string. 'N...C' and 'NC...' are supported. If None (the default) is specified then 'N...C' is assumed.
name A name for the operation (optional).
Returns A Tensor with the same type as value.
Raises ValueError if data format is unrecognized, if value has fewer than two dimensions when data_format is 'N...C'/None or fewer than three dimensions when data_format is 'NC...', if bias does not have exactly one dimension (i.e., is not a vector), or if the size of bias does not match the size of the channel dimension of value. | tensorflow.nn.bias_add |
tf.nn.collapse_repeated View source on GitHub Merge repeated labels into single labels. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.collapse_repeated
tf.nn.collapse_repeated(
labels, seq_length, name=None
)
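A minimal usage sketch, mirroring the example in the Returns section below:
labels = tf.constant([[1, 1, 2, 2, 3], [1, 2, 3, 4, 5]])
seq_length = tf.constant([5, 5])
collapsed, new_len = tf.nn.collapse_repeated(labels, seq_length)
# collapsed → [[1, 2, 3, 0, 0], [1, 2, 3, 4, 5]], new_len → [3, 5]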
Args
labels Tensor of shape [batch, max value in seq_length]
seq_length Tensor of shape [batch], sequence length of each batch element.
name A name for this Op. Defaults to "collapse_repeated_labels".
Returns A tuple (collapsed_labels, new_seq_length) where collapsed_labels Tensor of shape [batch, max_seq_length] with repeated labels collapsed and padded to max_seq_length, e.g., [[A, A, B, B, A], [A, B, C, D, E]] => [[A, B, A, 0, 0], [A, B, C, D, E]]
new_seq_length int tensor of shape [batch] with new sequence lengths. | tensorflow.nn.collapse_repeated |
tf.nn.compute_accidental_hits View source on GitHub Compute the position ids in sampled_candidates matching true_classes. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.compute_accidental_hits
tf.nn.compute_accidental_hits(
true_classes, sampled_candidates, num_true, seed=None, name=None
)
In Candidate Sampling, this operation facilitates virtually removing sampled classes which happen to match target classes. This is done in Sampled Softmax and Sampled Logistic. See our Candidate Sampling Algorithms Reference. We presuppose that the sampled_candidates are unique. We call it an 'accidental hit' when one of the target classes matches one of the sampled classes. This operation reports accidental hits as triples (index, id, weight), where index represents the row number in true_classes, id represents the position in sampled_candidates, and weight is -FLOAT_MAX. The result of this op should be passed through a sparse_to_dense operation, then added to the logits of the sampled classes. This removes the contradictory effect of accidentally sampling the true target classes as noise classes for the same example.
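A minimal usage sketch (values chosen only for illustration):
true_classes = tf.constant([[1], [2]], dtype=tf.int64)
sampled_candidates = tf.constant([0, 1, 3], dtype=tf.int64)
indices, ids, weights = tf.nn.compute_accidental_hits(
    true_classes, sampled_candidates, num_true=1)
# Row 0's true class (1) appears at position 1 of sampled_candidates, so
# indices → [0], ids → [1], and weights → [-FLOAT_MAX].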
Args
true_classes A Tensor of type int64 and shape [batch_size, num_true]. The target classes.
sampled_candidates A tensor of type int64 and shape [num_sampled]. The sampled_candidates output of CandidateSampler.
num_true An int. The number of target classes per training example.
seed An int. An operation-specific seed. Default is 0.
name A name for the operation (optional).
Returns
indices A Tensor of type int32 and shape [num_accidental_hits]. Values indicate rows in true_classes.
ids A Tensor of type int64 and shape [num_accidental_hits]. Values indicate positions in sampled_candidates.
weights A Tensor of type float and shape [num_accidental_hits]. Each value is -FLOAT_MAX. | tensorflow.nn.compute_accidental_hits |
tf.nn.compute_average_loss View source on GitHub Scales per-example losses with sample_weights and computes their average. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.compute_average_loss
tf.nn.compute_average_loss(
per_example_loss, sample_weight=None, global_batch_size=None
)
Usage with distribution strategy and custom training loop: with strategy.scope():
def compute_loss(labels, predictions, sample_weight=None):
# If you are using a `Loss` class instead, set reduction to `NONE` so that
# we can do the reduction afterwards and divide by global batch size.
per_example_loss = tf.keras.losses.sparse_categorical_crossentropy(
labels, predictions)
# Compute loss that is scaled by sample_weight and by global batch size.
return tf.nn.compute_average_loss(
per_example_loss,
sample_weight=sample_weight,
global_batch_size=GLOBAL_BATCH_SIZE)
Args
per_example_loss Per-example loss.
sample_weight Optional weighting for each example.
global_batch_size Optional global batch size value. Defaults to (size of the first dimension of per_example_loss) * (number of replicas).
Returns Scalar loss value. | tensorflow.nn.compute_average_loss |
tf.nn.conv1d View source on GitHub Computes a 1-D convolution given 3-D input and filter tensors.
tf.nn.conv1d(
input, filters, stride, padding, data_format='NWC', dilations=None,
name=None
)
Given an input tensor of shape batch_shape + [in_width, in_channels] if data_format is "NWC", or batch_shape + [in_channels, in_width] if data_format is "NCW", and a filter / kernel tensor of shape [filter_width, in_channels, out_channels], this op reshapes the arguments to pass them to conv2d to perform the equivalent convolution operation. Internally, this op reshapes the input tensors and invokes tf.nn.conv2d. For example, if data_format does not start with "NC", a tensor of shape batch_shape + [in_width, in_channels] is reshaped to batch_shape + [1, in_width, in_channels], and the filter is reshaped to [1, filter_width, in_channels, out_channels]. The result is then reshaped back to batch_shape + [out_width, out_channels] (where out_width is a function of the stride and padding as in conv2d) and returned to the caller.
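A minimal usage sketch (values chosen so the outputs are easy to verify by hand):
x = tf.constant([[[1.], [2.], [3.], [4.]]])   # [batch, in_width, in_channels]
filt = tf.constant([[[1.]], [[1.]]])          # [filter_width, in_channels, out_channels]
tf.nn.conv1d(x, filt, stride=1, padding='VALID')
# → [[[3.], [5.], [7.]]]: each output is the sum of two neighboring inputs.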
Args
input A Tensor of rank at least 3. Must be of type float16, float32, or float64.
filters A Tensor of rank at least 3. Must have the same type as input.
stride An int or list of ints that has length 1 or 3. The number of entries by which the filter is moved right at each step.
padding 'SAME' or 'VALID'
data_format An optional string from "NWC", "NCW". Defaults to "NWC", the data is stored in the order of batch_shape + [in_width, in_channels]. The "NCW" format stores data as batch_shape + [in_channels, in_width].
dilations An int or list of ints that has length 1 or 3 which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input.
Raises
ValueError if data_format is invalid. | tensorflow.nn.conv1d |
tf.nn.conv1d_transpose View source on GitHub The transpose of conv1d. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.conv1d_transpose
tf.nn.conv1d_transpose(
input, filters, output_shape, strides, padding='SAME',
data_format='NWC', dilations=None, name=None
)
This operation is sometimes called "deconvolution" after (Zeiler et al., 2010), but is actually the transpose (gradient) of conv1d rather than an actual deconvolution.
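A minimal usage sketch (shapes are arbitrary; note the [filter_width, output_channels, in_channels] filter layout):
x = tf.random.normal([1, 4, 8])       # NWC input
filt = tf.random.normal([3, 5, 8])    # output_channels=5, in_channels=8
y = tf.nn.conv1d_transpose(x, filt, output_shape=[1, 8, 5], strides=2)
# y has shape [1, 8, 5]: with 'SAME' padding the width is upsampled by the stride.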
Args
input A 3-D Tensor of type float and shape [batch, in_width, in_channels] for NWC data format or [batch, in_channels, in_width] for NCW data format.
filters A 3-D Tensor with the same type as input and shape [filter_width, output_channels, in_channels]. filter's in_channels dimension must match that of input.
output_shape A 1-D Tensor, containing three elements, representing the output shape of the deconvolution op.
strides An int or list of ints that has length 1 or 3. The number of entries by which the filter is moved right at each step.
padding A string, either 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
data_format A string. 'NWC' and 'NCW' are supported.
dilations An int or list of ints that has length 1 or 3 which defaults to 1. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. Dilations in the batch and depth dimensions must be 1.
name Optional name for the returned tensor.
Returns A Tensor with the same type as input.
Raises
ValueError If input/output depth does not match filter's shape, if output_shape is not a 3-element vector, if padding is other than 'VALID' or 'SAME', or if data_format is invalid. References: Deconvolutional Networks: Zeiler et al., 2010 (pdf) | tensorflow.nn.conv1d_transpose |
tf.nn.conv2d View source on GitHub Computes a 2-D convolution given input and 4-D filters tensors.
tf.nn.conv2d(
input, filters, strides, padding, data_format='NHWC', dilations=None,
name=None
)
The input tensor may have rank 4 or higher, where shape dimensions [:-3] are considered batch dimensions (batch_shape). Given an input tensor of shape batch_shape + [in_height, in_width, in_channels] and a filter / kernel tensor of shape [filter_height, filter_width, in_channels, out_channels], this op performs the following:
Flattens the filter to a 2-D matrix with shape [filter_height * filter_width * in_channels, output_channels].
Extracts image patches from the input tensor to form a virtual tensor of shape [batch, out_height, out_width, filter_height * filter_width * in_channels].
For each patch, right-multiplies the filter matrix and the image patch vector.
In detail, with the default NHWC format, output[b, i, j, k] =
sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] *
filter[di, dj, q, k]
Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertical strides, strides = [1, stride, stride, 1]. Usage Example:
x_in = np.array([[
[[2], [1], [2], [0], [1]],
[[1], [3], [2], [2], [3]],
[[1], [1], [3], [3], [0]],
[[2], [2], [0], [1], [1]],
[[0], [0], [3], [1], [2]], ]])
kernel_in = np.array([
[ [[2, 0.1]], [[3, 0.2]] ],
[ [[0, 0.3]],[[1, 0.4]] ], ])
x = tf.constant(x_in, dtype=tf.float32)
kernel = tf.constant(kernel_in, dtype=tf.float32)
tf.nn.conv2d(x, kernel, strides=[1, 1, 1, 1], padding='VALID')
<tf.Tensor: shape=(1, 4, 4, 2), dtype=float32, numpy=array(..., dtype=float32)>
Args
input A Tensor. Must be one of the following types: half, bfloat16, float32, float64. A Tensor of rank at least 4. The dimension order is interpreted according to the value of data_format; with the all-but-inner-3 dimensions acting as batch dimensions. See below for details.
filters A Tensor. Must have the same type as input. A 4-D tensor of shape [filter_height, filter_width, in_channels, out_channels]
strides An int or list of ints that has length 1, 2 or 4. The stride of the sliding window for each dimension of input. If a single value is given it is replicated in the H and W dimension. By default the N and C dimensions are set to 1. The dimension order is determined by the value of data_format, see below for details.
padding Either the string "SAME" or "VALID" indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is "NHWC", this should be in the form [[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]. When explicit padding used and data_format is "NCHW", this should be in the form [[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]].
data_format An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: batch_shape + [height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: batch_shape + [channels, height, width].
dilations An int or list of ints that has length 1, 2 or 4, defaults to 1. The dilation factor for each dimension of input. If a single value is given it is replicated in the H and W dimension. By default the N and C dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of data_format, see above for details. Dilations in the batch and depth dimensions if a 4-d tensor must be 1.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input and the same outer batch shape. | tensorflow.nn.conv2d |
tf.nn.conv2d_transpose View source on GitHub The transpose of conv2d.
tf.nn.conv2d_transpose(
input, filters, output_shape, strides, padding='SAME',
data_format='NHWC', dilations=None, name=None
)
This operation is sometimes called "deconvolution" after (Zeiler et al., 2010), but is really the transpose (gradient) of atrous_conv2d rather than an actual deconvolution.
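A minimal usage sketch (shapes are arbitrary; note the [height, width, output_channels, in_channels] filter layout):
x = tf.random.normal([1, 4, 4, 8])      # NHWC input
filt = tf.random.normal([3, 3, 16, 8])  # output_channels=16, in_channels=8
y = tf.nn.conv2d_transpose(x, filt, output_shape=[1, 8, 8, 16], strides=2)
# y has shape [1, 8, 8, 16]: with 'SAME' padding each spatial dimension is
# upsampled by the stride.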
Args
input A 4-D Tensor of type float and shape [batch, height, width, in_channels] for NHWC data format or [batch, in_channels, height, width] for NCHW data format.
filters A 4-D Tensor with the same type as input and shape [height, width, output_channels, in_channels]. filter's in_channels dimension must match that of input.
output_shape A 1-D Tensor representing the output shape of the deconvolution op.
strides An int or list of ints that has length 1, 2 or 4. The stride of the sliding window for each dimension of input. If a single value is given it is replicated in the H and W dimension. By default the N and C dimensions are set to 1. The dimension order is determined by the value of data_format, see below for details.
padding Either the string"SAME"or"VALID"indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is"NHWC", this should be in the form[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]. When explicit padding used and data_format is"NCHW", this should be in the form[[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]. </td> </tr><tr> <td>data_format</td> <td> A string. 'NHWC' and 'NCHW' are supported. </td> </tr><tr> <td>dilations</td> <td> An int or list ofintsthat has length1,2or4, defaults to 1. The dilation factor for each dimension ofinput. If a single value is given it is replicated in theHandWdimension. By default theNandCdimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value ofdata_format, see above for details. Dilations in the batch and depth dimensions if a 4-d tensor must be 1. </td> </tr><tr> <td>name` Optional name for the returned tensor.
Returns A Tensor with the same type as input.
Raises
ValueError If input/output depth does not match filter's shape, or if padding is other than 'VALID' or 'SAME'. References: Deconvolutional Networks: Zeiler et al., 2010 (pdf) | tensorflow.nn.conv2d_transpose |
tf.nn.conv3d View source on GitHub Computes a 3-D convolution given 5-D input and filters tensors.
tf.nn.conv3d(
input, filters, strides, padding, data_format='NDHWC', dilations=None,
name=None
)
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. Our Conv3D implements a form of cross-correlation.
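A minimal usage sketch (shapes are arbitrary and chosen only for illustration):
x = tf.random.normal([1, 4, 4, 4, 2])       # NDHWC input
filt = tf.random.normal([2, 2, 2, 2, 7])    # [depth, height, width, in_channels, out_channels]
y = tf.nn.conv3d(x, filt, strides=[1, 1, 1, 1, 1], padding='SAME')
# y has shape [1, 4, 4, 4, 7].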
Args
input A Tensor. Must be one of the following types: half, bfloat16, float32, float64. Shape [batch, in_depth, in_height, in_width, in_channels].
filters A Tensor. Must have the same type as input. Shape [filter_depth, filter_height, filter_width, in_channels, out_channels]. in_channels must match between input and filters.
strides A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1.
padding A string from: "SAME", "VALID". The type of padding algorithm to use.
data_format An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
dilations An optional list of ints. Defaults to [1, 1, 1, 1, 1]. 1-D tensor of length 5. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of data_format, see above for details. Dilations in the batch and depth dimensions must be 1.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | tensorflow.nn.conv3d |
tf.nn.conv3d_transpose View source on GitHub The transpose of conv3d.
tf.nn.conv3d_transpose(
input, filters, output_shape, strides, padding='SAME',
data_format='NDHWC', dilations=None, name=None
)
This operation is sometimes called "deconvolution" after (Zeiler et al., 2010), but is really the transpose (gradient) of conv3d rather than an actual deconvolution.
Args
input A 5-D Tensor of type float and shape [batch, depth, height, width, in_channels] for NDHWC data format or [batch, in_channels, depth, height, width] for NCDHW data format.
filters A 5-D Tensor with the same type as input and shape [depth, height, width, output_channels, in_channels]. filter's in_channels dimension must match that of input.
output_shape A 1-D Tensor representing the output shape of the deconvolution op.
strides An int or list of ints that has length 1, 3 or 5. The stride of the sliding window for each dimension of input. If a single value is given it is replicated in the D, H and W dimension. By default the N and C dimensions are set to 1. The dimension order is determined by the value of data_format, see below for details.
padding A string, either 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
data_format A string. 'NDHWC' and 'NCDHW' are supported.
dilations An int or list of ints that has length 1, 3 or 5, defaults to 1. The dilation factor for each dimension of input. If a single value is given it is replicated in the D, H and W dimension. By default the N and C dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of data_format, see above for details. Dilations in the batch and depth dimensions if a 5-d tensor must be 1.
name Optional name for the returned tensor.
Returns A Tensor with the same type as input.
References: Deconvolutional Networks: Zeiler et al., 2010 (pdf) | tensorflow.nn.conv3d_transpose |
tf.nn.convolution View source on GitHub Computes sums of N-D convolutions (actually cross-correlation).
tf.nn.convolution(
input, filters, strides=None, padding='VALID', data_format=None,
dilations=None, name=None
)
This also supports either output striding via the optional strides parameter or atrous convolution (also known as convolution with holes or dilated convolution, based on the French word "trous" meaning holes in English) via the optional dilations parameter. Currently, however, output striding is not supported for atrous convolutions. Specifically, in the case that data_format does not start with "NC", given a rank (N+2) input Tensor of shape [num_batches, input_spatial_shape[0], ..., input_spatial_shape[N-1], num_input_channels], a rank (N+2) filters Tensor of shape [spatial_filter_shape[0], ..., spatial_filter_shape[N-1], num_input_channels, num_output_channels], an optional dilations tensor of shape N specifying the filter upsampling/input downsampling rate, and an optional list of N strides (defaulting [1]*N), this computes for each N-D spatial output position (x[0], ..., x[N-1]): output[b, x[0], ..., x[N-1], k] =
sum_{z[0], ..., z[N-1], q}
filter[z[0], ..., z[N-1], q, k] *
padded_input[b,
x[0]*strides[0] + dilation_rate[0]*z[0],
...,
x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1],
q]
where b is the index into the batch, k is the output channel number, q is the input channel number, and z is the N-D spatial offset within the filter. Here, padded_input is obtained by zero padding the input using an effective spatial filter shape of (spatial_filter_shape-1) * dilation_rate + 1 and output striding strides as described in the Returns section below. In the case that data_format does start with "NC", the input and output (but not the filters) are simply transposed as follows: convolution(input, data_format, **kwargs) = tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]), **kwargs), [0, N+1] + range(1, N+1)) It is required that 1 <= N <= 3.
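A minimal usage sketch of the N=2 case with dilation (shapes are arbitrary; strides must stay at 1 when any dilation is > 1):
x = tf.random.normal([1, 10, 10, 3])      # NHWC input
filt = tf.random.normal([3, 3, 3, 8])     # spatial_filter_shape + [in_channels, out_channels]
y = tf.nn.convolution(x, filt, strides=[1, 1], padding='SAME', dilations=[2, 2])
# y has shape [1, 10, 10, 8].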
Args
input An (N+2)-D Tensor of type T, of shape [batch_size] + input_spatial_shape + [in_channels] if data_format does not start with "NC" (default), or [batch_size, in_channels] + input_spatial_shape if data_format starts with "NC".
filters An (N+2)-D Tensor with the same type as input and shape spatial_filter_shape + [in_channels, out_channels].
padding A string, either "VALID" or "SAME". The padding algorithm. "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input.
strides Optional. Sequence of N ints >= 1. Specifies the output stride. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
dilations Optional. Sequence of N ints >= 1. Specifies the filter upsampling/input downsampling rate. In the literature, the same parameter is sometimes called input stride or dilation. The effective filter size used for the convolution will be spatial_filter_shape + (spatial_filter_shape - 1) * (rate - 1), obtained by inserting (dilation_rate[i]-1) zeros between consecutive elements of the original filter in each spatial dimension i. If any value of dilation_rate is > 1, then all values of strides must be 1.
name Optional name for the returned tensor.
data_format A string or None. Specifies whether the channel dimension of the input and output is the last dimension (default, or if data_format does not start with "NC"), or the second dimension (if data_format starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns A Tensor with the same type as input of shape [batch_size] + output_spatial_shape + [out_channels] if data_format is None or does not start with "NC", or [batch_size, out_channels] + output_spatial_shape if data_format starts with "NC", where output_spatial_shape depends on the value of padding. If padding == "SAME": output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i]) If padding == "VALID": output_spatial_shape[i] = ceil((input_spatial_shape[i] - (spatial_filter_shape[i]-1) * dilation_rate[i]) / strides[i]).
Raises
ValueError If input/output depth does not match filters shape, if padding is other than "VALID" or "SAME", or if data_format is invalid. | tensorflow.nn.convolution |
tf.nn.conv_transpose View source on GitHub The transpose of convolution. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.conv_transpose
tf.nn.conv_transpose(
input, filters, output_shape, strides, padding='SAME',
data_format=None, dilations=None, name=None
)
This operation is sometimes called "deconvolution" after (Zeiler et al., 2010), but is really the transpose (gradient) of the corresponding convolution rather than an actual deconvolution.
Args
input An N+2 dimensional Tensor of shape [batch_size] + input_spatial_shape + [in_channels] if data_format does not start with "NC" (default), or [batch_size, in_channels] + input_spatial_shape if data_format starts with "NC". It must be one of the following types: half, bfloat16, float32, float64.
filters An N+2 dimensional Tensor with the same type as input and shape spatial_filter_shape + [in_channels, out_channels].
output_shape A 1-D Tensor representing the output shape of the deconvolution op.
strides An int or list of ints that has length 1, N or N+2. The stride of the sliding window for each dimension of input. If a single value is given it is replicated in the spatial dimensions. By default the N and C dimensions are set to 1. The dimension order is determined by the value of data_format, see below for details.
padding A string, either 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
data_format A string or None. Specifies whether the channel dimension of the input and output is the last dimension (default, or if data_format does not start with "NC"), or the second dimension (if data_format starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
dilations An int or list of ints that has length 1, N or N+2, defaults to 1. The dilation factor for each dimension of input. If a single value is given it is replicated in the spatial dimensions. By default the N and C dimensions are set to 1. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of data_format, see above for details.
name A name for the operation (optional). If not specified "conv_transpose" is used.
Returns A Tensor with the same type as input.
References: Deconvolutional Networks: Zeiler et al., 2010 (pdf) | tensorflow.nn.conv_transpose |
tf.nn.crelu View source on GitHub Computes Concatenated ReLU.
tf.nn.crelu(
features, axis=-1, name=None
)
Concatenates a ReLU which selects only the positive part of the activation with a ReLU which selects only the negative part of the activation. Note that as a result this non-linearity doubles the depth of the activations. Source: Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units. W. Shang, et al.
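A minimal usage sketch (values chosen so the two halves are easy to see):
tf.nn.crelu(tf.constant([-1., 0., 2.]))
# → [0., 0., 2., 1., 0., 0.]: relu(x) concatenated with relu(-x) along the
# last axis, doubling the depth.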
Args
features A Tensor with type float, double, int32, int64, uint8, int16, or int8.
name A name for the operation (optional).
axis The axis that the output values are concatenated along. Default is -1.
Returns A Tensor with the same type as features.
References: Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units: Shang et al., 2016 (pdf) | tensorflow.nn.crelu |
tf.nn.ctc_beam_search_decoder View source on GitHub Performs beam search decoding on the logits given in input. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.ctc_beam_search_decoder_v2
tf.nn.ctc_beam_search_decoder(
inputs, sequence_length, beam_width=100, top_paths=1
)
Note: The ctc_greedy_decoder is a special case of the ctc_beam_search_decoder with top_paths=1 and beam_width=1 (but that decoder is faster for this special case).
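A minimal usage sketch (random logits, chosen only to show the shapes involved):
logits = tf.random.normal([10, 2, 5])   # [max_time, batch_size, num_classes]
seq_len = tf.constant([10, 10], dtype=tf.int32)
decoded, log_probs = tf.nn.ctc_beam_search_decoder(logits, seq_len,
                                                   beam_width=10, top_paths=1)
# decoded[0] is a SparseTensor holding the best label sequence for each batch
# element; log_probs has shape [2, 1].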
Args
inputs 3-D float Tensor, size [max_time, batch_size, num_classes]. The logits.
sequence_length 1-D int32 vector containing sequence lengths, having size [batch_size].
beam_width An int scalar >= 0 (beam search beam width).
top_paths An int scalar >= 0, <= beam_width (controls output size).
Returns A tuple (decoded, log_probabilities) where decoded A list of length top_paths, where decoded[j] is a SparseTensor containing the decoded outputs: decoded[j].indices: Indices matrix [total_decoded_outputs[j], 2]; The rows store: [batch, time]. decoded[j].values: Values vector, size [total_decoded_outputs[j]]. The vector stores the decoded classes for beam j. decoded[j].dense_shape: Shape vector, size (2). The shape values are: [batch_size, max_decoded_length[j]].
log_probability A float matrix [batch_size, top_paths] containing sequence log-probabilities. | tensorflow.nn.ctc_beam_search_decoder |
tf.nn.ctc_greedy_decoder View source on GitHub Performs greedy decoding on the logits given in input (best path). View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.ctc_greedy_decoder
tf.nn.ctc_greedy_decoder(
inputs, sequence_length, merge_repeated=True
)
Note: Regardless of the value of merge_repeated, if the maximum index of a given time and batch corresponds to the blank index (num_classes - 1), no new element is emitted.
If merge_repeated is True, merge repeated classes in output. This means that if consecutive logits' maximum indices are the same, only the first of these is emitted. The sequence A B B * B * B (where '*' is the blank label) becomes
A B B B if merge_repeated=True.
A B B B B if merge_repeated=False.
Args
inputs 3-D float Tensor sized [max_time, batch_size, num_classes]. The logits.
sequence_length 1-D int32 vector containing sequence lengths, having size [batch_size].
merge_repeated Boolean. Default: True.
Returns A tuple (decoded, neg_sum_logits) where decoded A single-element list. decoded[0] is an SparseTensor containing the decoded outputs s.t.: decoded.indices: Indices matrix (total_decoded_outputs, 2). The rows store: [batch, time]. decoded.values: Values vector, size (total_decoded_outputs). The vector stores the decoded classes. decoded.dense_shape: Shape vector, size (2). The shape values are: [batch_size, max_decoded_length]
neg_sum_logits A float matrix (batch_size x 1) containing, for the sequence found, the negative of the sum of the greatest logit at each timeframe. | tensorflow.nn.ctc_greedy_decoder |
tf.nn.ctc_loss View source on GitHub Computes CTC (Connectionist Temporal Classification) loss.
tf.nn.ctc_loss(
labels, logits, label_length, logit_length, logits_time_major=True, unique=None,
blank_index=None, name=None
)
This op implements the CTC loss as presented in (Graves et al., 2006). Notes:
Same as the "Classic CTC" in TensorFlow 1.x's tf.compat.v1.nn.ctc_loss setting of preprocess_collapse_repeated=False, ctc_merge_repeated=True.
Labels may be supplied as either a dense, zero-padded tensor with a vector of label sequence lengths OR as a SparseTensor.
On TPU and GPU: Only dense padded labels are supported.
On CPU: Caller may use SparseTensor or dense padded labels but calling with a SparseTensor will be significantly faster.
Default blank label is 0 rather than num_classes - 1, unless overridden by blank_index.
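A minimal usage sketch with dense, zero-padded labels (all sizes are arbitrary; label 0 is reserved for the blank here):
frames, batch, num_labels = 50, 2, 28
logits = tf.random.normal([frames, batch, num_labels])   # time-major
labels = tf.constant([[1, 2, 3, 0, 0], [4, 5, 0, 0, 0]])
label_length = tf.constant([3, 2])
logit_length = tf.constant([frames, frames])
loss = tf.nn.ctc_loss(labels, logits, label_length, logit_length, blank_index=0)
# loss has shape [batch]: one negative log probability per example.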
Args
labels tensor of shape [batch_size, max_label_seq_length] or SparseTensor
logits tensor of shape [frames, batch_size, num_labels], if logits_time_major == False, shape is [batch_size, frames, num_labels].
label_length tensor of shape [batch_size], None if labels is SparseTensor Length of reference label sequence in labels.
logit_length tensor of shape [batch_size] Length of input sequence in logits.
logits_time_major (optional) If True (default), logits is shaped [time, batch, logits]. If False, shape is [batch, time, logits]
unique (optional) Unique label indices as computed by ctc_unique_labels(labels). If supplied, enable a faster, memory efficient implementation on TPU.
blank_index (optional) Set the class index to use for the blank label. Negative values will start from num_classes, ie, -1 will reproduce the ctc_loss behavior of using num_classes - 1 for the blank symbol. There is some memory/performance overhead to switching from the default of 0 as an additional shifted copy of the logits may be created.
name A name for this Op. Defaults to "ctc_loss_dense".
Returns
loss tensor of shape [batch_size], negative log probabilities. References: Connectionist Temporal Classification - Labeling Unsegmented Sequence Data with Recurrent Neural Networks: Graves et al., 2006 (pdf) | tensorflow.nn.ctc_loss |
tf.nn.ctc_unique_labels View source on GitHub Get unique labels and indices for batched labels for tf.nn.ctc_loss. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.ctc_unique_labels
tf.nn.ctc_unique_labels(
labels, name=None
)
For use with the tf.nn.ctc_loss optional argument unique: this op can be used to preprocess labels in the input pipeline for better speed/memory use when computing the ctc loss on TPU. Example:
ctc_unique_labels([[3, 4, 4, 3]]) ->
unique labels padded with 0: [[3, 4, 0, 0]]
indices of original labels in unique: [0, 1, 1, 0]
Args
labels tensor of shape [batch_size, max_label_length] padded with 0.
name A name for this Op. Defaults to "ctc_unique_labels".
Returns tuple of unique labels, tensor of shape [batch_size, max_label_length]
indices into unique labels, shape [batch_size, max_label_length] | tensorflow.nn.ctc_unique_labels |
tf.nn.depthwise_conv2d View source on GitHub Depthwise 2-D convolution.
tf.nn.depthwise_conv2d(
input, filter, strides, padding, data_format=None, dilations=None, name=None
)
Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter tensor of shape [filter_height, filter_width, in_channels, channel_multiplier] containing in_channels convolutional filters of depth 1, depthwise_conv2d applies a different filter to each input channel (expanding from 1 channel to channel_multiplier channels for each), then concatenates the results together. The output has in_channels * channel_multiplier channels. In detail, with the default NHWC format, output[b, i, j, k * channel_multiplier + q] = sum_{di, dj}
filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di,
strides[2] * j + rate[1] * dj, k]
Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertical strides, strides = [1, stride, stride, 1]. If any value in rate is greater than 1, we perform atrous depthwise convolution, in which case all values in the strides tensor must be equal to 1. Usage Example:
x = np.array([
[1., 2.],
[3., 4.],
[5., 6.]
], dtype=np.float32).reshape((1, 3, 2, 1))
kernel = np.array([
[1., 2.],
[3., 4]
], dtype=np.float32).reshape((2, 1, 1, 2))
tf.nn.depthwise_conv2d(x, kernel, strides=[1, 1, 1, 1],
padding='VALID').numpy()
array([[[[10., 14.],
[14., 20.]],
[[18., 26.],
[22., 32.]]]], dtype=float32)
tf.nn.depthwise_conv2d(x, kernel, strides=[1, 1, 1, 1],
padding=[[0, 0], [1, 0], [1, 0], [0, 0]]).numpy()
array([[[[ 0., 0.],
[ 3., 4.],
[ 6., 8.]],
[[ 0., 0.],
[10., 14.],
[14., 20.]],
[[ 0., 0.],
[18., 26.],
[22., 32.]]]], dtype=float32)
Args
input 4-D with shape according to data_format.
filter 4-D with shape [filter_height, filter_width, in_channels, channel_multiplier].
strides 1-D of size 4. The stride of the sliding window for each dimension of input.
padding Controls how to pad the image before applying the convolution. Can be the string "SAME" or "VALID" indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is "NHWC", this should be in the form [[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]. When explicit padding used and data_format is "NCHW", this should be in the form [[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]].
data_format The data format for input. Either "NHWC" (default) or "NCHW".
dilations 1-D of size 2. The dilation rate in which we sample input values across the height and width dimensions in atrous convolution. If it is greater than 1, then all values of strides must be 1.
name A name for this operation (optional).
Returns A 4-D Tensor with shape according to data_format. E.g., for "NHWC" format, shape is [batch, out_height, out_width, in_channels * channel_multiplier]. | tensorflow.nn.depthwise_conv2d |
tf.nn.depthwise_conv2d_backprop_filter View source on GitHub Computes the gradients of depthwise convolution with respect to the filter. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.depthwise_conv2d_backprop_filter, tf.compat.v1.nn.depthwise_conv2d_native_backprop_filter
tf.nn.depthwise_conv2d_backprop_filter(
input, filter_sizes, out_backprop, strides, padding,
data_format='NHWC', dilations=[1, 1, 1, 1], name=None
)
Args
input A Tensor. Must be one of the following types: half, bfloat16, float32, float64. 4-D with shape based on data_format. For example, if data_format is 'NHWC' then input is a 4-D [batch, in_height, in_width, in_channels] tensor.
filter_sizes A Tensor of type int32. An integer vector representing the tensor shape of filter, where filter is a 4-D [filter_height, filter_width, in_channels, depthwise_multiplier] tensor.
out_backprop A Tensor. Must have the same type as input. 4-D with shape based on data_format. For example, if data_format is 'NHWC' then out_backprop shape is [batch, out_height, out_width, out_channels]. Gradients w.r.t. the output of the convolution.
strides A list of ints. The stride of the sliding window for each dimension of the input of the convolution.
padding Controls how to pad the image before applying the convolution. Can be the string "SAME" or "VALID" indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is "NHWC", this should be in the form [[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]. When explicit padding used and data_format is "NCHW", this should be in the form [[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]].
data_format An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
dilations An optional list of ints. Defaults to [1, 1, 1, 1]. 1-D tensor of length 4. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of data_format, see above for details. Dilations in the batch and depth dimensions must be 1.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | tensorflow.nn.depthwise_conv2d_backprop_filter |
tf.nn.depthwise_conv2d_backprop_input View source on GitHub Computes the gradients of depthwise convolution with respect to the input. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.depthwise_conv2d_backprop_input, tf.compat.v1.nn.depthwise_conv2d_native_backprop_input
tf.nn.depthwise_conv2d_backprop_input(
input_sizes, filter, out_backprop, strides, padding,
data_format='NHWC', dilations=[1, 1, 1, 1], name=None
)
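A matching sketch for the input gradient, with the same assumed toy shapes as above:
import tensorflow as tf

f = tf.random.normal([3, 3, 3, 1])   # depthwise filter, channel multiplier 1
dy = tf.random.normal([1, 8, 8, 3])  # gradient w.r.t. the conv output
grad = tf.nn.depthwise_conv2d_backprop_input(
    input_sizes=[1, 8, 8, 3], filter=f, out_backprop=dy,
    strides=[1, 1, 1, 1], padding='SAME')
print(grad.shape)  # (1, 8, 8, 3), i.e. the input shape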
Args
input_sizes A Tensor of type int32. An integer vector representing the shape of input, based on data_format. For example, if data_format is 'NHWC' then input is a 4-D [batch, height, width, channels] tensor.
filter A Tensor. Must be one of the following types: half, bfloat16, float32, float64. 4-D with shape [filter_height, filter_width, in_channels, depthwise_multiplier].
out_backprop A Tensor. Must have the same type as filter. 4-D with shape based on data_format. For example, if data_format is 'NHWC' then out_backprop shape is [batch, out_height, out_width, out_channels]. Gradients w.r.t. the output of the convolution.
strides A list of ints. The stride of the sliding window for each dimension of the input of the convolution.
padding Controls how to pad the image before applying the convolution. Can be the string "SAME" or "VALID" indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is "NHWC", this should be in the form [[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]. When explicit padding is used and data_format is "NCHW", this should be in the form [[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]].
data_format An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
dilations An optional list of ints. Defaults to [1, 1, 1, 1]. 1-D tensor of length 4. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of data_format, see above for details. Dilations in the batch and depth dimensions must be 1.
name A name for the operation (optional).
Returns A Tensor. Has the same type as filter. | tensorflow.nn.depthwise_conv2d_backprop_input |
tf.nn.depth_to_space View source on GitHub DepthToSpace for tensors of type T.
tf.nn.depth_to_space(
input, block_size, data_format='NHWC', name=None
)
Rearranges data from depth into blocks of spatial data. This is the reverse transformation of SpaceToDepth. More specifically, this op outputs a copy of the input tensor where values from the depth dimension are moved in spatial blocks to the height and width dimensions. The attr block_size indicates the input block size and how the data is moved. Chunks of data of size block_size * block_size from depth are rearranged into non-overlapping blocks of size block_size x block_size.
The width of the output tensor is input_width * block_size, whereas the height is input_height * block_size. The Y, X coordinates within each block of the output image are determined by the high order component of the input channel index. The depth of the input tensor must be divisible by block_size * block_size. The data_format attr specifies the layout of the input and output tensors with the following options:
"NHWC": [ batch, height, width, channels ]
"NCHW": [ batch, channels, height, width ]
"NCHW_VECT_C": qint8 [ batch, channels / 4, height, width, 4 ]
It is useful to consider the operation as transforming a 6-D Tensor. E.g., for data_format = NHWC, each element in the input tensor can be specified via 6 coordinates, ordered by decreasing memory layout significance as: n,iY,iX,bY,bX,oC (where n = batch index, iX, iY means X or Y coordinates within the input image, bX, bY means coordinates within the output block, and oC means output channels). The output would be the input transposed to the following layout: n,iY,bY,iX,bX,oC. This operation is useful for resizing the activations between convolutions (but keeping all data), e.g. instead of pooling. It is also useful for training purely convolutional models. For example, given an input of shape [1, 1, 1, 4], data_format = "NHWC" and block_size = 2: x = [[[[1, 2, 3, 4]]]]
This operation will output a tensor of shape [1, 2, 2, 1]: [[[[1], [2]],
[[3], [4]]]]
Here, the input has a batch of 1 and each batch element has shape [1, 1, 4], the corresponding output will have 2x2 elements and will have a depth of 1 channel (1 = 4 / (block_size * block_size)). The output element shape is [2, 2, 1]. For an input tensor with larger depth, here of shape [1, 1, 1, 12], e.g. x = [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]
This operation, for block size of 2, will return the following tensor of shape [1, 2, 2, 3] [[[[1, 2, 3], [4, 5, 6]],
[[7, 8, 9], [10, 11, 12]]]]
Similarly, for the following input of shape [1 2 2 4], and a block size of 2: x = [[[[1, 2, 3, 4],
[5, 6, 7, 8]],
[[9, 10, 11, 12],
[13, 14, 15, 16]]]]
the operator will return the following tensor of shape [1 4 4 1]: x = [[[ [1], [2], [5], [6]],
[ [3], [4], [7], [8]],
[ [9], [10], [13], [14]],
[ [11], [12], [15], [16]]]]
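A runnable version of the first example above:
import tensorflow as tf

x = tf.constant([[[[1, 2, 3, 4]]]])        # shape [1, 1, 1, 4]
y = tf.nn.depth_to_space(x, block_size=2)  # shape [1, 2, 2, 1]
print(y.numpy()[0, :, :, 0])               # [[1 2]
                                           #  [3 4]]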
Args
input A Tensor.
block_size An int that is >= 2. The size of the spatial block, same as in Space2Depth.
data_format An optional string from: "NHWC", "NCHW", "NCHW_VECT_C". Defaults to "NHWC".
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | tensorflow.nn.depth_to_space |
tf.nn.dilation2d View source on GitHub Computes the grayscale dilation of 4-D input and 3-D filters tensors.
tf.nn.dilation2d(
input, filters, strides, padding, data_format, dilations, name=None
)
The input tensor has shape [batch, in_height, in_width, depth] and the filters tensor has shape [filter_height, filter_width, depth], i.e., each input channel is processed independently of the others with its own structuring function. The output tensor has shape [batch, out_height, out_width, depth]. The spatial dimensions of the output tensor depend on the padding algorithm. We currently only support the default "NHWC" data_format. In detail, the grayscale morphological 2-D dilation is the max-sum correlation (for consistency with conv2d, we use unmirrored filters): output[b, y, x, c] =
   max_{dy, dx} input[b,
                      strides[1] * y + dilations[1] * dy,
                      strides[2] * x + dilations[2] * dx,
                      c] +
                filters[dy, dx, c]
Max-pooling is a special case when the filter has size equal to the pooling kernel size and contains all zeros. Note on duality: The dilation of input by the filters is equal to the negation of the erosion of -input by the reflected filters.
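A toy sketch of that special case (values chosen here for illustration): a 2x2 all-zeros filter makes the max-sum pick the plain maximum of each 2x2 window.
import tensorflow as tf

x = tf.reshape(tf.constant([1., 2., 3., 4.]), [1, 2, 2, 1])
filters = tf.zeros([2, 2, 1])
y = tf.nn.dilation2d(x, filters, strides=[1, 1, 1, 1], padding='VALID',
                     data_format='NHWC', dilations=[1, 1, 1, 1])
print(y.numpy())  # [[[[4.]]]], the max of the 2x2 window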
Args
input A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. 4-D with shape [batch, in_height, in_width, depth].
filters A Tensor. Must have the same type as input. 3-D with shape [filter_height, filter_width, depth].
strides A list of ints that has length >= 4. The stride of the sliding window for each dimension of the input tensor. Must be: [1, stride_height, stride_width, 1].
padding A string from: "SAME", "VALID". The type of padding algorithm to use.
data_format A string, only "NHWC" is currently supported.
dilations A list of ints that has length >= 4. The input stride for atrous morphological dilation. Must be: [1, rate_height, rate_width, 1].
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | tensorflow.nn.dilation2d |
tf.nn.dropout View source on GitHub Computes dropout: randomly sets elements to zero to prevent overfitting.
tf.nn.dropout(
x, rate, noise_shape=None, seed=None, name=None
)
Note: The behavior of dropout has changed between TensorFlow 1.x and 2.x. When converting 1.x code, please use named arguments to ensure behavior stays consistent.
See also: tf.keras.layers.Dropout for a dropout layer. Dropout is useful for regularizing DNN models. Input elements are randomly set to zero (and the other elements are rescaled). This encourages each node to be independently useful, as it cannot rely on the output of other nodes. More precisely: with probability rate, elements of x are set to 0. The remaining elements are scaled up by 1.0 / (1 - rate), so that the expected value is preserved.
tf.random.set_seed(0)
x = tf.ones([3,5])
tf.nn.dropout(x, rate = 0.5, seed = 1).numpy()
array([[2., 0., 0., 2., 2.],
[2., 2., 2., 2., 2.],
[2., 0., 2., 0., 2.]], dtype=float32)
tf.random.set_seed(0)
x = tf.ones([3,5])
tf.nn.dropout(x, rate = 0.8, seed = 1).numpy()
array([[0., 0., 0., 5., 5.],
[0., 5., 0., 5., 0.],
[5., 0., 5., 0., 5.]], dtype=float32)
tf.nn.dropout(x, rate = 0.0) == x
<tf.Tensor: shape=(3, 5), dtype=bool, numpy=
array([[ True, True, True, True, True],
[ True, True, True, True, True],
[ True, True, True, True, True]])>
By default, each element is kept or dropped independently. If noise_shape is specified, it must be broadcastable to the shape of x, and only dimensions with noise_shape[i] == shape(x)[i] will make independent decisions. This is useful for dropping whole channels from an image or sequence. For example:
tf.random.set_seed(0)
x = tf.ones([3,10])
tf.nn.dropout(x, rate = 2/3, noise_shape=[1,10], seed=1).numpy()
array([[0., 0., 0., 3., 3., 0., 3., 3., 3., 0.],
[0., 0., 0., 3., 3., 0., 3., 3., 3., 0.],
[0., 0., 0., 3., 3., 0., 3., 3., 3., 0.]], dtype=float32)
Args
x A floating point tensor.
rate A scalar Tensor with the same type as x. The probability that each element is dropped. For example, setting rate=0.1 would drop 10% of input elements.
noise_shape A 1-D Tensor of type int32, representing the shape for randomly generated keep/drop flags.
seed A Python integer. Used to create random seeds. See tf.random.set_seed for behavior.
name A name for this operation (optional).
Returns A Tensor of the same shape of x.
Raises
ValueError If rate is not in [0, 1) or if x is not a floating point tensor. rate=1 is disallowed, because the output would be all zeros, which is likely not what was intended. | tensorflow.nn.dropout |
tf.nn.elu Computes exponential linear: exp(features) - 1 if features < 0, features otherwise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.elu
tf.nn.elu(
features, name=None
)
See Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
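A quick illustration (output values are approximate): negative inputs saturate toward -1 while non-negative inputs pass through.
import tensorflow as tf

print(tf.nn.elu([-2.0, 0.0, 3.0]).numpy())
# [-0.8646647  0.  3.], since exp(-2) - 1 ~ -0.8647 for the negative input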
Args
features A Tensor. Must be one of the following types: half, bfloat16, float32, float64.
name A name for the operation (optional).
Returns A Tensor. Has the same type as features. | tensorflow.nn.elu |
tf.nn.embedding_lookup View source on GitHub Looks up embeddings for the given ids from a list of tensors.
tf.nn.embedding_lookup(
params, ids, max_norm=None, name=None
)
This function is used to perform parallel lookups on the list of tensors in params. It is a generalization of tf.gather, where params is interpreted as a partitioning of a large embedding tensor. If len(params) > 1, each element id of ids is partitioned between the elements of params according to the "div" partition strategy, which means we assign ids to partitions in a contiguous manner. For instance, 13 ids are split across 5 partitions as: [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]. If the id space does not evenly divide the number of partitions, each of the first (max_id + 1) % len(params) partitions will be assigned one more id. The results of the lookup are concatenated into a dense tensor. The returned tensor has shape shape(ids) + shape(params)[1:].
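A runnable sketch of the single-tensor case, using the 5x2 matrix from the Returns example below:
import tensorflow as tf

params = tf.constant([[1., 2.], [3., 4.], [5., 6.], [7., 8.], [9., 10.]])
ids = tf.constant([0, 3, 4])
print(tf.nn.embedding_lookup(params, ids).numpy())
# [[ 1.  2.]
#  [ 7.  8.]
#  [ 9. 10.]]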
Args
params A single tensor representing the complete embedding tensor, or a list of tensors all of same shape except for the first dimension, representing sharded embedding tensors following "div" partition strategy.
ids A Tensor with type int32 or int64 containing the ids to be looked up in params.
max_norm If not None, each embedding is clipped if its l2-norm is larger than this value.
name A name for the operation (optional).
Returns A Tensor with the same type as the tensors in params. For instance, if params is a 5x2 matrix: [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]
or a list of matrices: params[0]: [[1, 2], [3, 4]]
params[1]: [[5, 6], [7, 8]]
params[2]: [[9, 10]]
and ids is: [0, 3, 4]
The output will be a 3x2 matrix: [[1, 2], [7, 8], [9, 10]]
Raises
ValueError If params is empty. | tensorflow.nn.embedding_lookup |
tf.nn.embedding_lookup_sparse View source on GitHub Looks up embeddings for the given ids and weights from a list of tensors.
tf.nn.embedding_lookup_sparse(
params, sp_ids, sp_weights, combiner=None, max_norm=None, name=None
)
This op assumes that there is at least one id for each row in the dense tensor represented by sp_ids (i.e. there are no rows with empty features), and that all the indices of sp_ids are in canonical row-major order. sp_ids and sp_weights (if not None) are SparseTensors with rank of 2. Embeddings are always aggregated along the last dimension. It also assumes that all id values lie in the range [0, p0), where p0 is the sum of the size of params along dimension 0. If len(params) > 1, each element of sp_ids is partitioned between the elements of params according to the "div" partition strategy, which means we assign ids to partitions in a contiguous manner. For instance, 13 ids are split across 5 partitions as: [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]. If the id space does not evenly divide the number of partitions, each of the first (max_id + 1) % len(params) partitions will be assigned one more id.
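A minimal runnable sketch with assumed toy values (a 4x2 params matrix and a batch of two rows; the numbers are chosen here, not taken from the original doc):
import tensorflow as tf

params = tf.constant([[1., 1.], [2., 2.], [3., 3.], [4., 4.]])
sp_ids = tf.sparse.SparseTensor(
    indices=[[0, 0], [0, 1], [1, 0]],
    values=tf.constant([1, 3, 0], dtype=tf.int64),
    dense_shape=[2, 2])
sp_weights = tf.sparse.SparseTensor(
    indices=[[0, 0], [0, 1], [1, 0]],
    values=tf.constant([2.0, 0.5, 1.0]),
    dense_shape=[2, 2])
out = tf.nn.embedding_lookup_sparse(params, sp_ids, sp_weights, combiner='mean')
print(out.numpy())
# Row 0: (params[1] * 2.0 + params[3] * 0.5) / 2.5 = [2.4 2.4]
# Row 1: params[0] * 1.0 / 1.0 = [1. 1.]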
Args
params A single tensor representing the complete embedding tensor, or a list of tensors all of same shape except for the first dimension, representing sharded embedding tensors following "div" partition strategy.
sp_ids N x M SparseTensor of int64 ids where N is typically batch size and M is arbitrary.
sp_weights either a SparseTensor of float / double weights, or None to indicate all weights should be taken to be 1. If specified, sp_weights must have exactly the same shape and indices as sp_ids.
combiner A string specifying the reduction op. Currently "mean", "sqrtn" and "sum" are supported. "sum" computes the weighted sum of the embedding results for each row. "mean" is the weighted sum divided by the total weight. "sqrtn" is the weighted sum divided by the square root of the sum of the squares of the weights. Defaults to mean.
max_norm If not None, each embedding is clipped if its l2-norm is larger than this value, before combining.
name Optional name for the op.
Returns A dense tensor representing the combined embeddings for the sparse ids. For each row in the dense tensor represented by sp_ids, the op looks up the embeddings for all ids in that row, multiplies them by the corresponding weight, and combines these embeddings as specified. In other words, if shape(combined params) = [p0, p1, ..., pm] and shape(sp_ids) = shape(sp_weights) = [d0, d1] then shape(output) = [d0, p1, ..., pm]. For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are [0, 0]: id 1, weight 2.0
[0, 1]: id 3, weight 0.5
[1, 0]: id 0, weight 1.0
[2, 3]: id 1, weight 3.0
with combiner="mean", then the output will be a 3x20 matrix where output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)
output[1, :] = (params[0, :] * 1.0) / 1.0
output[2, :] = (params[1, :] * 3.0) / 3.0
Raises
TypeError If sp_ids is not a SparseTensor, or if sp_weights is neither None nor SparseTensor.
ValueError If combiner is not one of {"mean", "sqrtn", "sum"}. | tensorflow.nn.embedding_lookup_sparse |
tf.nn.erosion2d View source on GitHub Computes the grayscale erosion of 4-D value and 3-D filters tensors.
tf.nn.erosion2d(
value, filters, strides, padding, data_format, dilations, name=None
)
The value tensor has shape [batch, in_height, in_width, depth] and the filters tensor has shape [filters_height, filters_width, depth], i.e., each input channel is processed independently of the others with its own structuring function. The output tensor has shape [batch, out_height, out_width, depth]. The spatial dimensions of the output tensor depend on the padding algorithm. We currently only support the default "NHWC" data_format. In detail, the grayscale morphological 2-D erosion is given by: output[b, y, x, c] =
min_{dy, dx} value[b,
strides[1] * y - dilations[1] * dy,
strides[2] * x - dilations[2] * dx,
c] -
filters[dy, dx, c]
Duality: The erosion of value by the filters is equal to the negation of the dilation of -value by the reflected filters.
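By the duality above, an all-zeros filter turns erosion into min pooling; a toy sketch mirroring the dilation example:
import tensorflow as tf

x = tf.reshape(tf.constant([1., 2., 3., 4.]), [1, 2, 2, 1])
filters = tf.zeros([2, 2, 1])
y = tf.nn.erosion2d(x, filters, strides=[1, 1, 1, 1], padding='VALID',
                    data_format='NHWC', dilations=[1, 1, 1, 1])
print(y.numpy())  # [[[[1.]]]], the min of the 2x2 window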
Args
value A Tensor. 4-D with shape [batch, in_height, in_width, depth].
filters A Tensor. Must have the same type as value. 3-D with shape [filters_height, filters_width, depth].
strides A list of ints that has length >= 4. 1-D of length 4. The stride of the sliding window for each dimension of the input tensor. Must be: [1, stride_height, stride_width, 1].
padding A string from: "SAME", "VALID". The type of padding algorithm to use.
data_format A string, only "NHWC" is currently supported.
dilations A list of ints that has length >= 4. 1-D of length 4. The input stride for atrous morphological dilation. Must be: [1, rate_height, rate_width, 1].
name A name for the operation (optional). If not specified "erosion2d" is used.
Returns A Tensor. Has the same type as value. 4-D with shape [batch, out_height, out_width, depth].
Raises
ValueError If the value depth does not match filters' shape, or if padding is other than 'VALID' or 'SAME'. | tensorflow.nn.erosion2d |
tf.nn.fractional_avg_pool View source on GitHub Performs fractional average pooling on the input.
tf.nn.fractional_avg_pool(
value, pooling_ratio, pseudo_random=False, overlapping=False, seed=0, name=None
)
Fractional average pooling is similar to Fractional max pooling in the pooling region generation step. The only difference is that after pooling regions are generated, a mean operation is performed instead of a max operation in each pooling region.
Args
value A Tensor. 4-D with shape [batch, height, width, channels].
pooling_ratio A list of floats that has length >= 4. Pooling ratio for each dimension of value, currently only supports row and col dimension and should be >= 1.0. For example, a valid pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements must be 1.0 because we don't allow pooling on batch and channels dimensions. 1.44 and 1.73 are pooling ratio on height and width dimensions respectively.
pseudo_random An optional bool. Defaults to False. When set to True, generates the pooling sequence in a pseudorandom fashion, otherwise, in a random fashion. Check paper (Graham, 2015) for difference between pseudorandom and random.
overlapping An optional bool. Defaults to False. When set to True, it means when pooling, the values at the boundary of adjacent pooling cells are used by both cells. For example:
index  0   1   2   3   4
value  20  5   16  3   7
If the pooling sequence is [0, 2, 4], then 16, at index 2, will be used twice. The result would be [20, 16] for fractional avg pooling.
seed An optional int. Defaults to 0. If set to be non-zero, the random number generator is seeded by the given seed. Otherwise it is seeded by a random seed.
name A name for the operation (optional).
Returns
A tuple of Tensor objects (output, row_pooling_sequence, col_pooling_sequence). output: Output Tensor after fractional avg pooling. Has the same type as value. row_pooling_sequence: A Tensor of type int64. col_pooling_sequence: A Tensor of type int64. References: Fractional Max-Pooling: Graham, 2015 (pdf) | tensorflow.nn.fractional_avg_pool |
tf.nn.fractional_max_pool View source on GitHub Performs fractional max pooling on the input.
tf.nn.fractional_max_pool(
value, pooling_ratio, pseudo_random=False, overlapping=False, seed=0, name=None
)
Fractional max pooling is slightly different than regular max pooling. In regular max pooling, you downsize an input set by taking the maximum value of smaller N x N subsections of the set (often 2x2), and try to reduce the set by a factor of N, where N is an integer. Fractional max pooling, as you might expect from the word "fractional", means that the overall reduction ratio N does not have to be an integer. The sizes of the pooling regions are generated randomly but are fairly uniform. For example, let's look at the height dimension, and the constraints on the list of rows that will be pool boundaries. First we define the following:
input_row_length : the number of rows from the input set
output_row_length : which will be smaller than the input
alpha = input_row_length / output_row_length : our reduction ratio
K = floor(alpha)
row_pooling_sequence : this is the result list of pool boundary rows
Then, row_pooling_sequence should satisfy:
a[0] = 0 : the first value of the sequence is 0
a[end] = input_row_length : the last value of the sequence is the size
K <= (a[i+1] - a[i]) <= K+1 : all intervals are K or K+1 size
length(row_pooling_sequence) = output_row_length+1
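A minimal sketch with an assumed 4x4 input and the doc's example pooling ratio (the exact output shape depends on the ratio; the boundary sequences are returned alongside the output):
import tensorflow as tf

x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])
out, rows, cols = tf.nn.fractional_max_pool(
    x, pooling_ratio=[1.0, 1.44, 1.73, 1.0], seed=1)  # seed makes it deterministic
print(out.shape)                    # roughly [1, 2, 2, 1] for this ratio
print(rows.numpy(), cols.numpy())   # the chosen pooling boundaries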
Args
value A Tensor. 4-D with shape [batch, height, width, channels].
pooling_ratio An int or list of ints that has length 1, 2 or 4. Pooling ratio for each dimension of value, currently only supports row and col dimension and should be >= 1.0. For example, a valid pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements must be 1.0 because we don't allow pooling on batch and channels dimensions. 1.44 and 1.73 are pooling ratio on height and width dimensions respectively.
pseudo_random An optional bool. Defaults to False. When set to True, generates the pooling sequence in a pseudorandom fashion, otherwise, in a random fashion. Check paper (Graham, 2015) for difference between pseudorandom and random.
overlapping An optional bool. Defaults to False. When set to True, it means when pooling, the values at the boundary of adjacent pooling cells are used by both cells. For example:
index  0   1   2   3   4
value  20  5   16  3   7
If the pooling sequence is [0, 2, 4], then 16, at index 2, will be used twice. The result would be [20, 16] for fractional max pooling.
seed An optional int. Defaults to 0. If set to be non-zero, the random number generator is seeded by the given seed. Otherwise it is seeded by a random seed.
name A name for the operation (optional).
Returns
A tuple of Tensor objects (output, row_pooling_sequence, col_pooling_sequence). output: Output Tensor after fractional max pooling. Has the same type as value. row_pooling_sequence: A Tensor of type int64. col_pooling_sequence: A Tensor of type int64. References: Fractional Max-Pooling: Graham, 2015 (pdf) | tensorflow.nn.fractional_max_pool |
tf.nn.gelu Compute the Gaussian Error Linear Unit (GELU) activation function.
tf.nn.gelu(
features, approximate=False, name=None
)
Gaussian error linear unit (GELU) computes x * P(X <= x), where P(X) ~ N(0, 1). The GELU nonlinearity weights inputs by their value, rather than gating inputs by their sign as in ReLU. For example:
x = tf.constant([-3.0, -1.0, 0.0, 1.0, 3.0], dtype=tf.float32)
y = tf.nn.gelu(x)
y.numpy()
array([-0.00404951, -0.15865529, 0. , 0.8413447 , 2.9959507 ],
dtype=float32)
y = tf.nn.gelu(x, approximate=True)
y.numpy()
array([-0.00363752, -0.15880796, 0. , 0.841192 , 2.9963627 ],
dtype=float32)
Args
features A Tensor representing preactivation values.
approximate An optional bool. Defaults to False. Whether to enable approximation.
name A name for the operation (optional).
Returns A Tensor with the same type as features.
References: Gaussian Error Linear Units (GELUs). | tensorflow.nn.gelu |
tf.nn.isotonic_regression Solves isotonic regression problems along the given axis.
tf.nn.isotonic_regression(
inputs, decreasing=True, axis=-1
)
For each vector x, the problem solved is $$\argmin_{y_1 >= y_2 >= ... >= y_n} \sum_i (x_i - y_i)^2.$$ As the solution is component-wise constant, a second tensor is returned that encodes the segments. The problems are solved over the given axis. Consider the following example, where we solve a batch of two problems. The first input is [3, 1, 2], while the second is [1, 3, 4].
x = tf.constant([[3, 1, 2], [1, 3, 4]], dtype=tf.float32)
y, segments = tf.nn.isotonic_regression(x, axis=1)
y # The solution.
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[3.       , 1.5      , 1.5      ],
       [2.6666667, 2.6666667, 2.6666667]], dtype=float32)>
Note that the first solution has two blocks, [3] and [1.5, 1.5]. The second solution is constant, and thus has a single segment. These segments are exactly what the second returned tensor encodes:
segments
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
array([[0, 1, 1],
       [0, 0, 0]], dtype=int32)>
Args
inputs A tensor holding the inputs.
decreasing If set to False, the inequalities in the optimization constraint are flipped.
axis The axis along which the problems should be solved.
Returns
output The solutions, with the same shape and type as the input.
segments An int32 tensor, same shape as the input, indicating the segments that have the same value. Specifically, those positions that have the same value correspond to the same segment. These values start at zero and are monotonically increasing for each solution. | tensorflow.nn.isotonic_regression |
tf.nn.l2_loss L2 Loss. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.l2_loss
tf.nn.l2_loss(
t, name=None
)
Computes half the L2 norm of a tensor without the sqrt: output = sum(t ** 2) / 2
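A quick worked example:
import tensorflow as tf

t = tf.constant([1., 2., 3.])
print(tf.nn.l2_loss(t).numpy())  # (1 + 4 + 9) / 2 = 7.0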
Args
t A Tensor. Must be one of the following types: half, bfloat16, float32, float64. Typically 2-D, but may have any dimensions.
name A name for the operation (optional).
Returns A Tensor. Has the same type as t. | tensorflow.nn.l2_loss |
tf.nn.leaky_relu View source on GitHub Compute the Leaky ReLU activation function. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.leaky_relu
tf.nn.leaky_relu(
features, alpha=0.2, name=None
)
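A quick illustration with the default alpha: negative inputs are scaled by alpha rather than zeroed.
import tensorflow as tf

print(tf.nn.leaky_relu([-2.0, 0.0, 3.0], alpha=0.2).numpy())
# [-0.4  0.  3.], since -2.0 * 0.2 = -0.4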
Source: Rectifier Nonlinearities Improve Neural Network Acoustic Models. AL Maas, AY Hannun, AY Ng - Proc. ICML, 2013.
Args
features A Tensor representing preactivation values. Must be one of the following types: float16, float32, float64, int32, int64.
alpha Slope of the activation function at x < 0.
name A name for the operation (optional).
Returns The activation value.
References: Rectifier Nonlinearities Improve Neural Network Acoustic Models: Maas et al., 2013 (pdf) | tensorflow.nn.leaky_relu |
tf.nn.local_response_normalization Local Response Normalization. View aliases Main aliases
tf.nn.lrn Compat aliases for migration See Migration guide for more details. tf.compat.v1.nn.local_response_normalization, tf.compat.v1.nn.lrn
tf.nn.local_response_normalization(
input, depth_radius=5, bias=1, alpha=1, beta=0.5, name=None
)
The 4-D input tensor is treated as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted, squared sum of inputs within depth_radius. In detail, sqr_sum[a, b, c, d] =
sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)
output = input / (bias + alpha * sqr_sum) ** beta
For details, see Krizhevsky et al., ImageNet classification with deep convolutional neural networks (NIPS 2012).
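A single-element sketch to make the formula concrete: with only one channel, the window sum is just x**2, so the output is x / (bias + alpha * x**2) ** beta.
import tensorflow as tf

x = tf.constant([[[[2.]]]])  # shape [1, 1, 1, 1]
print(tf.nn.local_response_normalization(x).numpy())
# 2 / (1 + 1 * 4) ** 0.5 = 2 / sqrt(5), approximately 0.894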
Args
input A Tensor. Must be one of the following types: half, bfloat16, float32. 4-D.
depth_radius An optional int. Defaults to 5. 0-D. Half-width of the 1-D normalization window.
bias An optional float. Defaults to 1. An offset (usually positive to avoid dividing by 0).
alpha An optional float. Defaults to 1. A scale factor, usually positive.
beta An optional float. Defaults to 0.5. An exponent.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | tensorflow.nn.local_response_normalization |
tf.nn.log_poisson_loss View source on GitHub Computes log Poisson loss given log_input. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.log_poisson_loss
tf.nn.log_poisson_loss(
targets, log_input, compute_full_loss=False, name=None
)
Gives the log-likelihood loss between the prediction and the target under the assumption that the target has a Poisson distribution. Caveat: By default, this is not the exact loss, but the loss minus a constant term [log(z!)]. That has no effect for optimization, but does not play well with relative loss comparisons. To compute an approximation of the log factorial term, specify compute_full_loss=True to enable Stirling's Approximation. For brevity, let c = log(x) = log_input, z = targets. The log Poisson loss is -log(exp(-x) * (x^z) / z!)
= -log(exp(-x) * (x^z)) + log(z!)
~ -log(exp(-x)) - log(x^z) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
[ Note the second term is Stirling's Approximation for log(z!).
It is invariant to x and does not affect optimization, though
important for correct relative loss comparisons. It is only
computed when compute_full_loss == True. ]
= x - z * log(x) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
= exp(c) - z * c [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
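Plugging simple values into the last line above (a sketch): with z = 1 and c = 0 (so x = exp(0) = 1), the default loss is exp(c) - z * c = 1.
import tensorflow as tf

targets = tf.constant([1.0])    # z
log_input = tf.constant([0.0])  # c
print(tf.nn.log_poisson_loss(targets, log_input).numpy())  # [1.]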
Args
targets A Tensor of the same type and shape as log_input.
log_input A Tensor of type float32 or float64.
compute_full_loss whether to compute the full loss. If false, a constant term is dropped in favor of more efficient optimization.
name A name for the operation (optional).
Returns A Tensor of the same shape as log_input with the componentwise log Poisson losses.
Raises
ValueError If log_input and targets do not have the same shape. | tensorflow.nn.log_poisson_loss |
tf.nn.log_softmax View source on GitHub Computes log softmax activations. View aliases Main aliases
tf.math.log_softmax
tf.nn.log_softmax(
logits, axis=None, name=None
)
For each batch i and class j we have logsoftmax = logits - log(reduce_sum(exp(logits), axis))
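A quick illustration (values approximate): exponentiating each row of the output recovers the softmax probabilities, which sum to 1.
import tensorflow as tf

logits = tf.constant([[2., 1., 0.]])
print(tf.nn.log_softmax(logits).numpy())
# approximately [[-0.4076 -1.4076 -2.4076]]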
Args
logits A non-empty Tensor. Must be one of the following types: half, float32, float64.
axis The dimension softmax would be performed on. The default is -1 which indicates the last dimension.
name A name for the operation (optional).
Returns A Tensor. Has the same type as logits. Same shape as logits.
Raises
InvalidArgumentError if logits is empty or axis is beyond the last dimension of logits. | tensorflow.nn.log_softmax |
tf.nn.max_pool View source on GitHub Performs the max pooling on the input. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.max_pool_v2
tf.nn.max_pool(
input, ksize, strides, padding, data_format=None, name=None
)
Args
input Tensor of rank N+2, of shape [batch_size] + input_spatial_shape + [num_channels] if data_format does not start with "NC" (default), or [batch_size, num_channels] + input_spatial_shape if data_format starts with "NC". Pooling happens over the spatial dimensions only.
ksize An int or list of ints that has length 1, N or N+2. The size of the window for each dimension of the input tensor.
strides An int or list of ints that has length 1, N or N+2. The stride of the sliding window for each dimension of the input tensor.
padding Either the string "SAME" or "VALID" indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is "NHWC", this should be in the form [[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]. When explicit padding is used and data_format is "NCHW", this should be in the form [[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]. When using explicit padding, the size of the paddings cannot be greater than the sliding window size.
data_format A string. Specifies the channel dimension. For N=1 it can be either "NWC" (default) or "NCW", for N=2 it can be either "NHWC" (default) or "NCHW" and for N=3 either "NDHWC" (default) or "NCDHW".
name Optional name for the operation.
Returns A Tensor of format specified by data_format. The max pooled output tensor. | tensorflow.nn.max_pool |
tf.nn.max_pool1d View source on GitHub Performs the max pooling on the input. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.max_pool1d
tf.nn.max_pool1d(
input, ksize, strides, padding, data_format='NWC', name=None
)
Note: internally, this op reshapes the input and uses the underlying 2-D operation.
Args
input A 3-D Tensor of the format specified by data_format.
ksize An int or list of ints that has length 1 or 3. The size of the window for each dimension of the input tensor.
strides An int or list of ints that has length 1 or 3. The stride of the sliding window for each dimension of the input tensor.
padding Either the string "SAME" or "VALID" indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is "NWC", this should be in the form [[0, 0], [pad_left, pad_right], [0, 0]]. When explicit padding is used and data_format is "NCW", this should be in the form [[0, 0], [0, 0], [pad_left, pad_right]]. When using explicit padding, the size of the paddings cannot be greater than the sliding window size.
data_format An optional string from: "NWC", "NCW". Defaults to "NWC".
name A name for the operation (optional).
Returns A Tensor of format specified by data_format. The max pooled output tensor. | tensorflow.nn.max_pool1d |
tf.nn.max_pool2d View source on GitHub Performs the max pooling on the input. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.max_pool2d
tf.nn.max_pool2d(
input, ksize, strides, padding, data_format='NHWC', name=None
)
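A minimal example: a single 2x2 window reduced to its maximum.
import tensorflow as tf

x = tf.reshape(tf.constant([1., 2., 3., 4.]), [1, 2, 2, 1])
print(tf.nn.max_pool2d(x, ksize=2, strides=2, padding='VALID').numpy())
# [[[[4.]]]]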
Args
input A 4-D Tensor of the format specified by data_format.
ksize An int or list of ints that has length 1, 2 or 4. The size of the window for each dimension of the input tensor.
strides An int or list of ints that has length 1, 2 or 4. The stride of the sliding window for each dimension of the input tensor.
padding Either the string "SAME" or "VALID" indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is "NHWC", this should be in the form [[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]. When explicit padding is used and data_format is "NCHW", this should be in the form [[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]]. When using explicit padding, the size of the paddings cannot be greater than the sliding window size.
data_format A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.
name Optional name for the operation.
Returns A Tensor of format specified by data_format. The max pooled output tensor. | tensorflow.nn.max_pool2d |
tf.nn.max_pool3d View source on GitHub Performs the max pooling on the input. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.max_pool3d
tf.nn.max_pool3d(
input, ksize, strides, padding, data_format='NDHWC', name=None
)
Args
input A 5-D Tensor of the format specified by data_format.
ksize An int or list of ints that has length 1, 3 or 5. The size of the window for each dimension of the input tensor.
strides An int or list of ints that has length 1, 3 or 5. The stride of the sliding window for each dimension of the input tensor.
padding A string, either 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.nn.convolution for details.
data_format An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
name A name for the operation (optional).
Returns A Tensor of format specified by data_format. The max pooled output tensor. | tensorflow.nn.max_pool3d |
tf.nn.max_pool_with_argmax View source on GitHub Performs max pooling on the input and outputs both max values and indices.
tf.nn.max_pool_with_argmax(
input, ksize, strides, padding, data_format='NHWC',
output_dtype=tf.dtypes.int64, include_batch_in_index=False, name=None
)
The indices in argmax are flattened, so that a maximum value at position [b, y, x, c] becomes flattened index: (y * width + x) * channels + c if include_batch_in_index is False; ((b * height + y) * width + x) * channels + c if include_batch_in_index is True. The indices returned are always in [0, height) x [0, width) before flattening, even if padding is involved and the mathematically correct answer is outside (either negative or too large). This is a bug, but fixing it is difficult to do in a safe backwards compatible way, especially due to flattening.
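A toy sketch showing the flattened argmax index: the maximum 4. sits at y=0, x=1, c=0 of a 2x2 single-channel image.
import tensorflow as tf

x = tf.reshape(tf.constant([1., 4., 3., 2.]), [1, 2, 2, 1])
out, argmax = tf.nn.max_pool_with_argmax(
    x, ksize=2, strides=2, padding='VALID')
print(out.numpy())     # [[[[4.]]]]
print(argmax.numpy())  # [[[[1]]]]: (y * width + x) * channels + c = (0*2 + 1)*1 + 0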
Args
input A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. 4-D with shape [batch, height, width, channels]. Input to pool over.
ksize An int or list of ints that has length 1, 2 or 4. The size of the window for each dimension of the input tensor.
strides An int or list of ints that has length 1, 2 or 4. The stride of the sliding window for each dimension of the input tensor.
padding A string from: "SAME", "VALID". The type of padding algorithm to use.
data_format An optional string, must be set to "NHWC". Defaults to "NHWC". Specify the data format of the input and output data.
output_dtype An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int64. The dtype of the returned argmax tensor.
include_batch_in_index An optional boolean. Defaults to False. Whether to include batch dimension in flattened index of argmax.
name A name for the operation (optional).
Returns A tuple of Tensor objects (output, argmax). output A Tensor. Has the same type as input.
argmax A Tensor of type output_dtype. | tensorflow.nn.max_pool_with_argmax |
tf.nn.moments View source on GitHub Calculates the mean and variance of x.
tf.nn.moments(
x, axes, shift=None, keepdims=False, name=None
)
The mean and variance are calculated by aggregating the contents of x across axes. If x is 1-D and axes = [0] this is just the mean and variance of a vector.
Note: shift is currently not used; the true mean is computed and used.
When using these moments for batch normalization (see tf.nn.batch_normalization):
for so-called "global normalization", used with convolutional filters with shape [batch, height, width, depth], pass axes=[0, 1, 2].
for simple batch normalization pass axes=[0] (batch only).
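A quick worked example for the 1-D case:
import tensorflow as tf

x = tf.constant([1., 2., 3., 4.])
mean, variance = tf.nn.moments(x, axes=[0])
print(mean.numpy(), variance.numpy())  # 2.5 1.25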
Args
x A Tensor.
axes Array of ints. Axes along which to compute mean and variance.
shift Not used in the current implementation.
keepdims produce moments with the same dimensionality as the input.
name Name used to scope the operations that compute the moments.
Returns Two Tensor objects: mean and variance. | tensorflow.nn.moments |