tf.random.set_global_generator Replaces the global generator with another Generator object. View aliases Main aliases tf.random.experimental.set_global_generator Compat aliases for migration See Migration guide for more details. tf.compat.v1.random.experimental.set_global_generator, tf.compat.v1.random.set_global_generator tf.random.set_global_generator( generator ) This function creates a new Generator object (and the Variable object within), which does not work well with tf.function because (1) tf.function puts restrictions on Variable creation, so set_global_generator can't be freely used inside tf.function; (2) redirecting a global variable to a new object is problematic with tf.function because the old object may be captured by a 'tf.function'ed function and still be used by it. A 'tf.function'ed function only keeps weak references to variables, so deleting a variable and then calling that function again may raise an error, as demonstrated by random_test.py/RandomTest.testResetGlobalGeneratorBadWithDefun. Args generator the new Generator object.
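For example, a minimal usage sketch (tf.random.Generator.from_seed and tf.random.get_global_generator are the companion APIs; the drawn values are illustrative):
g = tf.random.Generator.from_seed(1)
tf.random.set_global_generator(g)
print(tf.random.get_global_generator().normal(shape=[2]))  # now draws from g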
tensorflow.random.set_global_generator
tf.random.set_seed Sets the global random seed. tf.random.set_seed( seed ) Operations that rely on a random seed actually derive it from two seeds: the global and operation-level seeds. This sets the global seed. Its interactions with operation-level seeds are as follows: If neither the global seed nor the operation seed is set: A randomly picked seed is used for this op. If the graph-level seed is set, but the operation seed is not: The system deterministically picks an operation seed in conjunction with the graph-level seed so that it gets a unique random sequence. Within the same version of TensorFlow and user code, this sequence is deterministic. However across different versions, this sequence might change. If the code depends on particular seeds to work, specify both graph-level and operation-level seeds explicitly. If the operation seed is set, but the global seed is not set: A default global seed and the specified operation seed are used to determine the random sequence. If both the global and the operation seed are set: Both seeds are used in conjunction to determine the random sequence. To illustrate the user-visible effects, consider these examples: If neither the global seed nor the operation seed is set, we get different results for every call to the random op and every re-run of the program: print(tf.random.uniform([1])) # generates 'A1' print(tf.random.uniform([1])) # generates 'A2' (now close the program and run it again) print(tf.random.uniform([1])) # generates 'A3' print(tf.random.uniform([1])) # generates 'A4' If the global seed is set but the operation seed is not set, we get different results for every call to the random op, but the same sequence for every re-run of the program: tf.random.set_seed(1234) print(tf.random.uniform([1])) # generates 'A1' print(tf.random.uniform([1])) # generates 'A2' (now close the program and run it again) tf.random.set_seed(1234) print(tf.random.uniform([1])) # generates 'A1' print(tf.random.uniform([1])) # generates 'A2' The reason we get 'A2' instead of 'A1' on the second call of tf.random.uniform above is that the second call uses a different operation seed. Note that tf.function acts like a re-run of a program in this case. When the global seed is set but operation seeds are not set, the sequence of random numbers is the same for each tf.function. For example: tf.random.set_seed(1234) @tf.function def f(): a = tf.random.uniform([1]) b = tf.random.uniform([1]) return a, b @tf.function def g(): a = tf.random.uniform([1]) b = tf.random.uniform([1]) return a, b print(f()) # prints '(A1, A2)' print(g()) # prints '(A1, A2)' If the operation seed is set, we get different results for every call to the random op, but the same sequence for every re-run of the program: print(tf.random.uniform([1], seed=1)) # generates 'A1' print(tf.random.uniform([1], seed=1)) # generates 'A2' (now close the program and run it again) print(tf.random.uniform([1], seed=1)) # generates 'A1' print(tf.random.uniform([1], seed=1)) # generates 'A2' The reason we get 'A2' instead of 'A1' on the second call of tf.random.uniform above is that the same tf.random.uniform kernel (i.e. internal representation) is used by TensorFlow for all calls of it with the same arguments, and the kernel maintains an internal counter which is incremented every time it is executed, generating different results.
Calling tf.random.set_seed will reset any such counters: tf.random.set_seed(1234) print(tf.random.uniform([1], seed=1)) # generates 'A1' print(tf.random.uniform([1], seed=1)) # generates 'A2' tf.random.set_seed(1234) print(tf.random.uniform([1], seed=1)) # generates 'A1' print(tf.random.uniform([1], seed=1)) # generates 'A2' When multiple identical random ops are wrapped in a tf.function, their behaviors change because the ops no longer share the same counter. For example: @tf.function def foo(): a = tf.random.uniform([1], seed=1) b = tf.random.uniform([1], seed=1) return a, b print(foo()) # prints '(A1, A1)' print(foo()) # prints '(A2, A2)' @tf.function def bar(): a = tf.random.uniform([1]) b = tf.random.uniform([1]) return a, b print(bar()) # prints '(A1, A2)' print(bar()) # prints '(A3, A4)' The second call of foo returns '(A2, A2)' instead of '(A1, A1)' because tf.random.uniform maintains an internal counter. If you want foo to return '(A1, A1)' every time, use the stateless random ops such as tf.random.stateless_uniform. Also see tf.random.experimental.Generator for a new set of stateful random ops that use external variables to manage their states. Args seed integer.
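To make the stateless contrast concrete, a minimal sketch (the seed values are illustrative):
x = tf.random.stateless_uniform([1], seed=[1, 2])
y = tf.random.stateless_uniform([1], seed=[1, 2])
# x and y are always equal, with or without tf.function, because the
# result depends only on the explicit seed argument, not on any counter.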
tensorflow.random.set_seed
tf.random.shuffle View source on GitHub Randomly shuffles a tensor along its first dimension. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.random.shuffle, tf.compat.v1.random_shuffle tf.random.shuffle( value, seed=None, name=None ) The tensor is shuffled along dimension 0, such that each value[j] is mapped to one and only one output[i]. For example, a mapping that might occur for a 3x2 tensor is: [[1, 2], [3, 4], [5, 6]] ==> [[5, 6], [1, 2], [3, 4]] Args value A Tensor to be shuffled. seed A Python integer. Used to create a random seed for the distribution. See tf.random.set_seed for behavior. name A name for the operation (optional). Returns A tensor of the same shape and type as value, shuffled along its first dimension.
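For example, a minimal usage sketch (the resulting order is random; the comment shows one possible outcome):
x = tf.constant([[1, 2], [3, 4], [5, 6]])
shuffled = tf.random.shuffle(x, seed=3)  # e.g. [[5, 6], [1, 2], [3, 4]]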
tensorflow.random.shuffle
tf.random.stateless_binomial Outputs deterministic pseudorandom values from a binomial distribution. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.random.stateless_binomial tf.random.stateless_binomial( shape, seed, counts, probs, output_dtype=tf.dtypes.int32, name=None ) The generated values follow a binomial distribution with specified count and probability of success parameters. This is a stateless version of tf.random.Generator.binomial: if run twice with the same seeds and shapes, it will produce the same pseudorandom numbers. The output is consistent across multiple runs on the same hardware (and between CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU hardware. Example: counts = [10., 20.] # Probability of success. probs = [0.8] binomial_samples = tf.random.stateless_binomial( shape=[2], seed=[123, 456], counts=counts, probs=probs) counts = ... # Shape [3, 1, 2] probs = ... # Shape [1, 4, 2] shape = [3, 4, 3, 4, 2] # Sample shape will be [3, 4, 3, 4, 2] binomial_samples = tf.random.stateless_binomial( shape=shape, seed=[123, 456], counts=counts, probs=probs) Args shape A 1-D integer Tensor or Python array. The shape of the output tensor. seed A shape [2] Tensor, the seed to the random number generator. Must have dtype int32 or int64. (When using XLA, only int32 is allowed.) counts Tensor. The counts of the binomial distribution. Must be broadcastable with probs, and broadcastable with the rightmost dimensions of shape. probs Tensor. The probability of success for the binomial distribution. Must be broadcastable with counts and broadcastable with the rightmost dimensions of shape. output_dtype The type of the output. Default: tf.int32 name A name for the operation (optional). Returns samples A Tensor of the specified shape filled with random binomial values. For each i, each samples[..., i] is an independent draw from the binomial distribution on counts[i] trials with probability of success probs[i].
tensorflow.random.stateless_binomial
tf.random.stateless_categorical View source on GitHub Draws deterministic pseudorandom samples from a categorical distribution. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.random.stateless_categorical tf.random.stateless_categorical( logits, num_samples, seed, dtype=tf.dtypes.int64, name=None ) This is a stateless version of tf.random.categorical: if run twice with the same seeds and shapes, it will produce the same pseudorandom numbers. The output is consistent across multiple runs on the same hardware (and between CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU hardware. Example: # samples has shape [1, 5], where each value is either 0 or 1 with equal # probability. samples = tf.random.stateless_categorical( tf.math.log([[0.5, 0.5]]), 5, seed=[7, 17]) Args logits 2-D Tensor with shape [batch_size, num_classes]. Each slice [i, :] represents the unnormalized log-probabilities for all classes. num_samples 0-D. Number of independent samples to draw for each row slice. seed A shape [2] Tensor, the seed to the random number generator. Must have dtype int32 or int64. (When using XLA, only int32 is allowed.) dtype integer type to use for the output. Defaults to int64. name Optional name for the operation. Returns The drawn samples of shape [batch_size, num_samples].
tensorflow.random.stateless_categorical
tf.random.stateless_gamma Outputs deterministic pseudorandom values from a gamma distribution. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.random.stateless_gamma tf.random.stateless_gamma( shape, seed, alpha, beta=None, dtype=tf.dtypes.float32, name=None ) The generated values follow a gamma distribution with specified concentration (alpha) and inverse scale (beta) parameters. This is a stateless version of tf.random.gamma: if run twice with the same seeds and shapes, it will produce the same pseudorandom numbers. The output is consistent across multiple runs on the same hardware (and between CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU hardware. A slight difference exists in the interpretation of the shape parameter between stateless_gamma and gamma: in gamma, the shape is always prepended to the shape of the broadcast of alpha with beta; whereas in stateless_gamma the shape parameter must always encompass the shapes of each of alpha and beta (which must broadcast together to match the trailing dimensions of shape). Note: Because internal calculations are done using float64 and casting has floor semantics, we must manually map zero outcomes to the smallest possible positive floating-point value, i.e., np.finfo(dtype).tiny. This means that np.finfo(dtype).tiny occurs more frequently than it otherwise should. This bias can only happen for small values of alpha, i.e., alpha << 1 or large values of beta, i.e., beta >> 1. The samples are differentiable w.r.t. alpha and beta. The derivatives are computed using the approach described in (Figurnov et al., 2018). Example: samples = tf.random.stateless_gamma([10, 2], seed=[12, 34], alpha=[0.5, 1.5]) # samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents # the samples drawn from each distribution samples = tf.random.stateless_gamma([7, 5, 2], seed=[12, 34], alpha=[.5, 1.5]) # samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1] # represents the 7x5 samples drawn from each of the two distributions alpha = tf.constant([[1.], [3.], [5.]]) beta = tf.constant([[3., 4.]]) samples = tf.random.stateless_gamma( [30, 3, 2], seed=[12, 34], alpha=alpha, beta=beta) # samples has shape [30, 3, 2], with 30 samples each of 3x2 distributions. with tf.GradientTape() as tape: tape.watch([alpha, beta]) loss = tf.reduce_mean(tf.square(tf.random.stateless_gamma( [30, 3, 2], seed=[12, 34], alpha=alpha, beta=beta))) dloss_dalpha, dloss_dbeta = tape.gradient(loss, [alpha, beta]) # unbiased stochastic derivatives of the loss function alpha.shape == dloss_dalpha.shape # True beta.shape == dloss_dbeta.shape # True Args shape A 1-D integer Tensor or Python array. The shape of the output tensor. seed A shape [2] Tensor, the seed to the random number generator. Must have dtype int32 or int64. (When using XLA, only int32 is allowed.) alpha Tensor. The concentration parameter of the gamma distribution. Must be broadcastable with beta, and broadcastable with the rightmost dimensions of shape. beta Tensor. The inverse scale parameter of the gamma distribution. Must be broadcastable with alpha and broadcastable with the rightmost dimensions of shape. dtype Floating point dtype of alpha, beta, and the output. name A name for the operation (optional). Returns samples A Tensor of the specified shape filled with random gamma values. For each i, each samples[..., i] is an independent draw from the gamma distribution with concentration alpha[i] and scale beta[i].
tensorflow.random.stateless_gamma
tf.random.stateless_normal View source on GitHub Outputs deterministic pseudorandom values from a normal distribution. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.random.stateless_normal tf.random.stateless_normal( shape, seed, mean=0.0, stddev=1.0, dtype=tf.dtypes.float32, name=None ) This is a stateless version of tf.random.normal: if run twice with the same seeds and shapes, it will produce the same pseudorandom numbers. The output is consistent across multiple runs on the same hardware (and between CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU hardware. Args shape A 1-D integer Tensor or Python array. The shape of the output tensor. seed A shape [2] Tensor, the seed to the random number generator. Must have dtype int32 or int64. (When using XLA, only int32 is allowed.) mean A 0-D Tensor or Python value of type dtype. The mean of the normal distribution. stddev A 0-D Tensor or Python value of type dtype. The standard deviation of the normal distribution. dtype The type of the output. name A name for the operation (optional). Returns A tensor of the specified shape filled with random normal values.
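Example, a minimal sketch (the key property is that the same seed reproduces the same values):
a = tf.random.stateless_normal(shape=[2, 3], seed=[1, 2])
b = tf.random.stateless_normal(shape=[2, 3], seed=[1, 2])
# a and b are identical tensors drawn from a standard normal distribution.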
tensorflow.random.stateless_normal
tf.random.stateless_parameterized_truncated_normal Outputs random values from a truncated normal distribution. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.random.stateless_parameterized_truncated_normal tf.random.stateless_parameterized_truncated_normal( shape, seed, means=0.0, stddevs=1.0, minvals=-2.0, maxvals=2.0, name=None ) The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked. Examples: Sample from a truncated normal, with differing shape parameters that broadcast. means = 0. stddevs = tf.math.exp(tf.random.uniform(shape=[2, 3])) minvals = [-1., -2., -1000.] maxvals = [[10000.], [1.]] y = tf.random.stateless_parameterized_truncated_normal( shape=[10, 2, 3], seed=[7, 17], means=means, stddevs=stddevs, minvals=minvals, maxvals=maxvals) y.shape # TensorShape([10, 2, 3]) Args shape A 1-D integer Tensor or Python array. The shape of the output tensor. seed A shape [2] Tensor, the seed to the random number generator. Must have dtype int32 or int64. (When using XLA, only int32 is allowed.) means A Tensor or Python value of type dtype. The mean of the truncated normal distribution. This must broadcast with stddevs, minvals and maxvals, and the broadcasted shape must be dominated by shape. stddevs A Tensor or Python value of type dtype. The standard deviation of the truncated normal distribution. This must broadcast with means, minvals and maxvals, and the broadcasted shape must be dominated by shape. minvals A Tensor or Python value of type dtype. The minimum value of the truncated normal distribution. This must broadcast with means, stddevs and maxvals, and the broadcasted shape must be dominated by shape. maxvals A Tensor or Python value of type dtype. The maximum value of the truncated normal distribution. This must broadcast with means, stddevs and minvals, and the broadcasted shape must be dominated by shape. name A name for the operation (optional). Returns A tensor of the specified shape filled with random truncated normal values.
tensorflow.random.stateless_parameterized_truncated_normal
tf.random.stateless_poisson Outputs deterministic pseudorandom values from a Poisson distribution. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.random.stateless_poisson tf.random.stateless_poisson( shape, seed, lam, dtype=tf.dtypes.int32, name=None ) The generated values follow a Poisson distribution with specified rate parameter. This is a stateless version of tf.random.poisson: if run twice with the same seeds and shapes, it will produce the same pseudorandom numbers. The output is consistent across multiple runs on the same hardware, but may change between versions of TensorFlow or on non-CPU/GPU hardware. A slight difference exists in the interpretation of the shape parameter between stateless_poisson and poisson: in poisson, the shape is always prepended to the shape of lam; whereas in stateless_poisson the shape of lam must match the trailing dimensions of shape. Example: samples = tf.random.stateless_poisson([10, 2], seed=[12, 34], lam=[5, 15]) # samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents # the samples drawn from each distribution samples = tf.random.stateless_poisson([7, 5, 2], seed=[12, 34], lam=[5, 15]) # samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1] # represents the 7x5 samples drawn from each of the two distributions rate = tf.constant([[1.], [3.], [5.]]) samples = tf.random.stateless_poisson([30, 3, 1], seed=[12, 34], lam=rate) # samples has shape [30, 3, 1], with 30 samples each of 3x1 distributions. Args shape A 1-D integer Tensor or Python array. The shape of the output tensor. seed A shape [2] Tensor, the seed to the random number generator. Must have dtype int32 or int64. (When using XLA, only int32 is allowed.) lam Tensor. The rate parameter "lambda" of the Poisson distribution. Shape must match the rightmost dimensions of shape. dtype Dtype of the samples (int or float dtypes are permissible, as samples are discrete). Default: int32. name A name for the operation (optional). Returns samples A Tensor of the specified shape filled with random Poisson values. For each i, each samples[..., i] is an independent draw from the Poisson distribution with rate lam[i].
tensorflow.random.stateless_poisson
tf.random.stateless_truncated_normal View source on GitHub Outputs deterministic pseudorandom values, truncated normally distributed. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.random.stateless_truncated_normal tf.random.stateless_truncated_normal( shape, seed, mean=0.0, stddev=1.0, dtype=tf.dtypes.float32, name=None ) This is a stateless version of tf.random.truncated_normal: if run twice with the same seeds and shapes, it will produce the same pseudorandom numbers. The output is consistent across multiple runs on the same hardware (and between CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU hardware. The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked. Args shape A 1-D integer Tensor or Python array. The shape of the output tensor. seed A shape [2] Tensor, the seed to the random number generator. Must have dtype int32 or int64. (When using XLA, only int32 is allowed.) mean A 0-D Tensor or Python value of type dtype. The mean of the truncated normal distribution. stddev A 0-D Tensor or Python value of type dtype. The standard deviation of the normal distribution, before truncation. dtype The type of the output. name A name for the operation (optional). Returns A tensor of the specified shape filled with random truncated normal values.
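Example, a minimal sketch (determinism is the point):
samples = tf.random.stateless_truncated_normal(shape=[2, 3], seed=[4, 2])
# Re-running with seed=[4, 2] reproduces exactly the same samples, and every
# value lies within 2 standard deviations of the mean.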
tensorflow.random.stateless_truncated_normal
tf.random.stateless_uniform View source on GitHub Outputs deterministic pseudorandom values from a uniform distribution. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.random.stateless_uniform tf.random.stateless_uniform( shape, seed, minval=0, maxval=None, dtype=tf.dtypes.float32, name=None ) This is a stateless version of tf.random.uniform: if run twice with the same seeds and shapes, it will produce the same pseudorandom numbers. The output is consistent across multiple runs on the same hardware (and between CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU hardware. The generated values follow a uniform distribution in the range [minval, maxval). The lower bound minval is included in the range, while the upper bound maxval is excluded. For floats, the default range is [0, 1). For ints, at least maxval must be specified explicitly. In the integer case, the random integers are slightly biased unless maxval - minval is an exact power of two. The bias is small for values of maxval - minval significantly smaller than the range of the output (either 2**32 or 2**64). For full-range (i.e. inclusive of both max and min) random integers, pass minval=None and maxval=None with an integer dtype. For an integer dtype either both minval and maxval must be None or neither may be None. For example: ints = tf.random.stateless_uniform( [10], seed=(2, 3), minval=None, maxval=None, dtype=tf.int32) Args shape A 1-D integer Tensor or Python array. The shape of the output tensor. seed A shape [2] Tensor, the seed to the random number generator. Must have dtype int32 or int64. (When using XLA, only int32 is allowed.) minval A Tensor or Python value of type dtype, broadcastable with shape (for integer types, broadcasting is not supported, so it needs to be a scalar). The lower bound on the range of random values to generate. Pass None for full-range integers. Defaults to 0. maxval A Tensor or Python value of type dtype, broadcastable with shape (for integer types, broadcasting is not supported, so it needs to be a scalar). The upper bound on the range of random values to generate. Defaults to 1 if dtype is floating point. Pass None for full-range integers. dtype The type of the output: float16, float32, float64, int32, or int64. For unbounded uniform ints (minval, maxval both None), uint32 and uint64 may be used. name A name for the operation (optional). Returns A tensor of the specified shape filled with random uniform values. Raises ValueError If dtype is integral and only one of minval or maxval is specified.
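For floats, a minimal sketch (each call is a deterministic function of the seed):
x = tf.random.stateless_uniform([3], seed=[1, 2])  # floats in [0, 1)
y = tf.random.stateless_uniform([3], seed=[1, 2], minval=-1., maxval=1.)
# Repeating either call with the same seed yields the same values.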
tensorflow.random.stateless_uniform
tf.random.truncated_normal View source on GitHub Outputs random values from a truncated normal distribution. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.random.truncated_normal, tf.compat.v1.truncated_normal tf.random.truncated_normal( shape, mean=0.0, stddev=1.0, dtype=tf.dtypes.float32, seed=None, name=None ) The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked. Args shape A 1-D integer Tensor or Python array. The shape of the output tensor. mean A 0-D Tensor or Python value of type dtype. The mean of the truncated normal distribution. stddev A 0-D Tensor or Python value of type dtype. The standard deviation of the normal distribution, before truncation. dtype The type of the output. seed A Python integer. Used to create a random seed for the distribution. See tf.random.set_seed for behavior. name A name for the operation (optional). Returns A tensor of the specified shape filled with random truncated normal values.
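For example, a minimal usage sketch (values are random unless seeds are fixed):
tf.random.set_seed(5)
samples = tf.random.truncated_normal([2, 3], mean=0.0, stddev=1.0, seed=1)
# samples has shape [2, 3]; every value lies within 2 standard deviations
# of the mean.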
tensorflow.random.truncated_normal
tf.random.uniform View source on GitHub Outputs random values from a uniform distribution. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.random.uniform, tf.compat.v1.random_uniform tf.random.uniform( shape, minval=0, maxval=None, dtype=tf.dtypes.float32, seed=None, name=None ) The generated values follow a uniform distribution in the range [minval, maxval). The lower bound minval is included in the range, while the upper bound maxval is excluded. For floats, the default range is [0, 1). For ints, at least maxval must be specified explicitly. In the integer case, the random integers are slightly biased unless maxval - minval is an exact power of two. The bias is small for values of maxval - minval significantly smaller than the range of the output (either 2**32 or 2**64). Examples: tf.random.uniform(shape=[2]) <tf.Tensor: shape=(2,), dtype=float32, numpy=array([..., ...], dtype=float32)> tf.random.uniform(shape=[], minval=-1., maxval=0.) <tf.Tensor: shape=(), dtype=float32, numpy=-...> tf.random.uniform(shape=[], minval=5, maxval=10, dtype=tf.int64) <tf.Tensor: shape=(), dtype=int64, numpy=...> The seed argument produces a deterministic sequence of tensors across multiple calls. To repeat that sequence, use tf.random.set_seed: tf.random.set_seed(5) tf.random.uniform(shape=[], maxval=3, dtype=tf.int32, seed=10) <tf.Tensor: shape=(), dtype=int32, numpy=2> tf.random.uniform(shape=[], maxval=3, dtype=tf.int32, seed=10) <tf.Tensor: shape=(), dtype=int32, numpy=0> tf.random.set_seed(5) tf.random.uniform(shape=[], maxval=3, dtype=tf.int32, seed=10) <tf.Tensor: shape=(), dtype=int32, numpy=2> tf.random.uniform(shape=[], maxval=3, dtype=tf.int32, seed=10) <tf.Tensor: shape=(), dtype=int32, numpy=0> If a seed argument is specified without calling tf.random.set_seed, small changes to function graphs or previously executed operations will change the returned value. See tf.random.set_seed for details. Args shape A 1-D integer Tensor or Python array. The shape of the output tensor. minval A Tensor or Python value of type dtype, broadcastable with shape (for integer types, broadcasting is not supported, so it needs to be a scalar). The lower bound on the range of random values to generate (inclusive). Defaults to 0. maxval A Tensor or Python value of type dtype, broadcastable with shape (for integer types, broadcasting is not supported, so it needs to be a scalar). The upper bound on the range of random values to generate (exclusive). Defaults to 1 if dtype is floating point. dtype The type of the output: float16, float32, float64, int32, or int64. seed A Python integer. Used in combination with tf.random.set_seed to create a reproducible sequence of tensors across multiple calls. name A name for the operation (optional). Returns A tensor of the specified shape filled with random uniform values. Raises ValueError If dtype is integral and maxval is not specified.
tensorflow.random.uniform
tf.random.uniform_candidate_sampler View source on GitHub Samples a set of classes using a uniform base distribution. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.nn.uniform_candidate_sampler, tf.compat.v1.random.uniform_candidate_sampler tf.random.uniform_candidate_sampler( true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None ) This operation randomly samples a tensor of sampled classes (sampled_candidates) from the range of integers [0, range_max). The elements of sampled_candidates are drawn without replacement (if unique=True) or with replacement (if unique=False) from the base distribution. The base distribution for this operation is the uniform distribution over the range of integers [0, range_max). In addition, this operation returns tensors true_expected_count and sampled_expected_count representing the number of times each of the target classes (true_classes) and the sampled classes (sampled_candidates) is expected to occur in an average tensor of sampled classes. These values correspond to Q(y|x) defined in this document. If unique=True, then these are post-rejection probabilities and we compute them approximately. Args true_classes A Tensor of type int64 and shape [batch_size, num_true]. The target classes. num_true An int. The number of target classes per training example. num_sampled An int. The number of classes to randomly sample. The sampled_candidates return value will have shape [num_sampled]. If unique=True, num_sampled must be less than or equal to range_max. unique A bool. Determines whether all sampled classes in a batch are unique. range_max An int. The number of possible classes. seed An int. An operation-specific seed. Default is 0. name A name for the operation (optional). Returns sampled_candidates A tensor of type int64 and shape [num_sampled]. The sampled classes, either with possible duplicates (unique=False) or all unique (unique=True). In either case, sampled_candidates is independent of the true classes. true_expected_count A tensor of type float. Same shape as true_classes. The expected counts under the sampling distribution of each of true_classes. sampled_expected_count A tensor of type float. Same shape as sampled_candidates. The expected counts under the sampling distribution of each of sampled_candidates.
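For example, a minimal sketch (shapes and values are illustrative):
true_classes = tf.constant([[1, 3]], dtype=tf.int64)  # [batch_size=1, num_true=2]
sampled, true_expected, sampled_expected = tf.random.uniform_candidate_sampler(
    true_classes=true_classes, num_true=2, num_sampled=4, unique=True, range_max=10)
# sampled has shape [4] and holds unique class ids drawn uniformly from [0, 10).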
tensorflow.random.uniform_candidate_sampler
tf.random_normal_initializer View source on GitHub Initializer that generates tensors with a normal distribution. tf.random_normal_initializer( mean=0.0, stddev=0.05, seed=None ) Initializers allow you to pre-specify an initialization strategy, encoded in the Initializer object, without knowing the shape and dtype of the variable being initialized. Examples: def make_variables(k, initializer): return (tf.Variable(initializer(shape=[k], dtype=tf.float32)), tf.Variable(initializer(shape=[k, k], dtype=tf.float32))) v1, v2 = make_variables(3, tf.random_normal_initializer(mean=1., stddev=2.)) v1 <tf.Variable ... shape=(3,) ... numpy=array([...], dtype=float32)> v2 <tf.Variable ... shape=(3, 3) ... numpy=array([[...]], dtype=float32)> make_variables(4, tf.random_uniform_initializer(minval=-1., maxval=1.)) (<tf.Variable...shape=(4,) dtype=float32...>, <tf.Variable...shape=(4, 4) ... Args mean a python scalar or a scalar tensor. Mean of the random values to generate. stddev a python scalar or a scalar tensor. Standard deviation of the random values to generate. seed A Python integer. Used to create random seeds. See tf.random.set_seed for behavior. Methods from_config View source @classmethod from_config( config ) Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1) config = initializer.get_config() initializer = RandomUniform.from_config(config) Args config A Python dictionary. It will typically be the output of get_config. Returns An Initializer instance. get_config View source get_config() Returns the configuration of the initializer as a JSON-serializable dict. Returns A JSON-serializable Python dict. __call__ View source __call__( shape, dtype=tf.dtypes.float32, **kwargs ) Returns a tensor object initialized as specified by the initializer. Args shape Shape of the tensor. dtype Optional dtype of the tensor. Only floating point types are supported. **kwargs Additional keyword arguments. Raises ValueError If the dtype is not floating point.
tensorflow.random_normal_initializer
tf.random_uniform_initializer View source on GitHub Initializer that generates tensors with a uniform distribution. tf.random_uniform_initializer( minval=-0.05, maxval=0.05, seed=None ) Initializers allow you to pre-specify an initialization strategy, encoded in the Initializer object, without knowing the shape and dtype of the variable being initialized. Examples: def make_variables(k, initializer): return (tf.Variable(initializer(shape=[k], dtype=tf.float32)), tf.Variable(initializer(shape=[k, k], dtype=tf.float32))) v1, v2 = make_variables(3, tf.ones_initializer()) v1 <tf.Variable ... shape=(3,) ... numpy=array([1., 1., 1.], dtype=float32)> v2 <tf.Variable ... shape=(3, 3) ... numpy= array([[1., 1., 1.], [1., 1., 1.], [1., 1., 1.]], dtype=float32)> make_variables(4, tf.random_uniform_initializer(minval=-1., maxval=1.)) (<tf.Variable...shape=(4,) dtype=float32...>, <tf.Variable...shape=(4, 4) ... Args minval A python scalar or a scalar tensor. Lower bound of the range of random values to generate (inclusive). maxval A python scalar or a scalar tensor. Upper bound of the range of random values to generate (exclusive). seed A Python integer. Used to create random seeds. See tf.random.set_seed for behavior. Methods from_config View source @classmethod from_config( config ) Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1) config = initializer.get_config() initializer = RandomUniform.from_config(config) Args config A Python dictionary. It will typically be the output of get_config. Returns An Initializer instance. get_config View source get_config() Returns the configuration of the initializer as a JSON-serializable dict. Returns A JSON-serializable Python dict. __call__ View source __call__( shape, dtype=tf.dtypes.float32, **kwargs ) Returns a tensor object initialized as specified by the initializer. Args shape Shape of the tensor. dtype Optional dtype of the tensor. Only floating point and integer types are supported. **kwargs Additional keyword arguments. Raises ValueError If the dtype is not numeric.
tensorflow.random_uniform_initializer
tf.range View source on GitHub Creates a sequence of numbers. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.range tf.range(limit, delta=1, dtype=None, name='range') tf.range(start, limit, delta=1, dtype=None, name='range') Creates a sequence of numbers that begins at start and extends by increments of delta up to but not including limit. The dtype of the resulting tensor is inferred from the inputs unless it is provided explicitly. Like the Python builtin range, start defaults to 0, so that range(n) = range(0, n). For example: start = 3 limit = 18 delta = 3 tf.range(start, limit, delta) <tf.Tensor: shape=(5,), dtype=int32, numpy=array([ 3, 6, 9, 12, 15], dtype=int32)> start = 3 limit = 1 delta = -0.5 tf.range(start, limit, delta) <tf.Tensor: shape=(4,), dtype=float32, numpy=array([3. , 2.5, 2. , 1.5], dtype=float32)> limit = 5 tf.range(limit) <tf.Tensor: shape=(5,), dtype=int32, numpy=array([0, 1, 2, 3, 4], dtype=int32)> Args start A 0-D Tensor (scalar). Acts as first entry in the range if limit is not None; otherwise, acts as range limit and first entry defaults to 0. limit A 0-D Tensor (scalar). Upper limit of sequence, exclusive. If None, defaults to the value of start while the first entry of the range defaults to 0. delta A 0-D Tensor (scalar). Number that increments start. Defaults to 1. dtype The type of the elements of the resulting tensor. name A name for the operation. Defaults to "range". Returns A 1-D Tensor of type dtype. Numpy Compatibility Equivalent to np.arange
tensorflow.range
tf.rank View source on GitHub Returns the rank of a tensor. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.rank tf.rank( input, name=None ) See also tf.shape. Returns a 0-D int32 Tensor representing the rank of input. For example: # shape of tensor 't' is [2, 2, 3] t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]) tf.rank(t) # 3 Note: The rank of a tensor is not the same as the rank of a matrix. The rank of a tensor is the number of indices required to uniquely select each element of the tensor. Rank is also known as "order", "degree", or "ndims." Args input A Tensor or SparseTensor. name A name for the operation (optional). Returns A Tensor of type int32. Numpy Compatibility Equivalent to np.ndim
tensorflow.rank
Module: tf.raw_ops Public API for tf.raw_ops namespace. Note: tf.raw_ops provides direct/low-level access to all TensorFlow ops. See the RFC for details. Unless you are a library writer, you likely do not need to use these ops directly. Op Name Has Gradient Abort ❌ Abs ✔️ AccumulateNV2 ✔️ AccumulatorApplyGradient ❌ AccumulatorNumAccumulated ❌ AccumulatorSetGlobalStep ❌ AccumulatorTakeGradient ❌ Acos ✔️ Acosh ✔️ Add ✔️ AddManySparseToTensorsMap ❌ AddN ✔️ AddSparseToTensorsMap ❌ AddV2 ✔️ AdjustContrast ❌ AdjustContrastv2 ❌ AdjustHue ❌ AdjustSaturation ❌ All ❌ AllCandidateSampler ❌ AllToAll ✔️ Angle ✔️ AnonymousIterator ❌ AnonymousIteratorV2 ❌ AnonymousMemoryCache ❌ AnonymousMultiDeviceIterator ❌ AnonymousRandomSeedGenerator ❌ AnonymousSeedGenerator ❌ Any ❌ ApplyAdaMax ❌ ApplyAdadelta ❌ ApplyAdagrad ❌ ApplyAdagradDA ❌ ApplyAdagradV2 ❌ ApplyAdam ❌ ApplyAddSign ❌ ApplyCenteredRMSProp ❌ ApplyFtrl ❌ ApplyFtrlV2 ❌ ApplyGradientDescent ❌ ApplyMomentum ❌ ApplyPowerSign ❌ ApplyProximalAdagrad ❌ ApplyProximalGradientDescent ❌ ApplyRMSProp ❌ ApproximateEqual ✔️ ArgMax ✔️ ArgMin ✔️ AsString ✔️ Asin ✔️ Asinh ✔️ Assert ✔️ AssertCardinalityDataset ❌ AssertNextDataset ❌ Assign ✔️ AssignAdd ✔️ AssignAddVariableOp ❌ AssignSub ✔️ AssignSubVariableOp ❌ AssignVariableOp ❌ Atan ✔️ Atan2 ✔️ Atanh ✔️ AudioSpectrogram ❌ AudioSummary ✔️ AudioSummaryV2 ✔️ AutoShardDataset ❌ AvgPool ✔️ AvgPool3D ✔️ AvgPool3DGrad ✔️ AvgPoolGrad ✔️ BandedTriangularSolve ✔️ Barrier ❌ BarrierClose ❌ BarrierIncompleteSize ❌ BarrierInsertMany ❌ BarrierReadySize ❌ BarrierTakeMany ❌ Batch ❌ BatchCholesky ❌ BatchCholeskyGrad ❌ BatchDataset ❌ BatchDatasetV2 ❌ BatchFFT ❌ BatchFFT2D ❌ BatchFFT3D ❌ BatchFunction ❌ BatchIFFT ❌ BatchIFFT2D ❌ BatchIFFT3D ❌ BatchMatMul ✔️ BatchMatMulV2 ✔️ BatchMatrixBandPart ❌ BatchMatrixDeterminant ❌ BatchMatrixDiag ❌ BatchMatrixDiagPart ❌ BatchMatrixInverse ❌ BatchMatrixSetDiag ❌ BatchMatrixSolve ❌ BatchMatrixSolveLs ❌ BatchMatrixTriangularSolve ❌ BatchNormWithGlobalNormalization ✔️ BatchNormWithGlobalNormalizationGrad ❌ BatchSelfAdjointEig ❌ BatchSelfAdjointEigV2 ❌ BatchSvd ❌ BatchToSpace ✔️ BatchToSpaceND ✔️ BesselI0 ✔️ BesselI0e ✔️ BesselI1 ✔️ BesselI1e ✔️ BesselJ0 ✔️ BesselJ1 ✔️ BesselK0 ✔️ BesselK0e ✔️ BesselK1 ✔️ BesselK1e ✔️ BesselY0 ✔️ BesselY1 ✔️ Betainc ✔️ BiasAdd ✔️ BiasAddGrad ✔️ BiasAddV1 ✔️ Bincount ❌ Bitcast ❌ BitwiseAnd ✔️ BitwiseOr ✔️ BitwiseXor ✔️ BlockLSTM ✔️ BlockLSTMGrad ❌ BlockLSTMGradV2 ❌ BlockLSTMV2 ✔️ BoostedTreesAggregateStats ❌ BoostedTreesBucketize ❌ BoostedTreesCalculateBestFeatureSplit ❌ BoostedTreesCalculateBestFeatureSplitV2 ❌ BoostedTreesCalculateBestGainsPerFeature ❌ BoostedTreesCenterBias ❌ BoostedTreesCreateEnsemble ❌ BoostedTreesCreateQuantileStreamResource ❌ BoostedTreesDeserializeEnsemble ❌ BoostedTreesEnsembleResourceHandleOp ❌ BoostedTreesExampleDebugOutputs ❌ BoostedTreesFlushQuantileSummaries ❌ BoostedTreesGetEnsembleStates ❌ BoostedTreesMakeQuantileSummaries ❌ BoostedTreesMakeStatsSummary ❌ BoostedTreesPredict ❌ BoostedTreesQuantileStreamResourceAddSummaries ❌ BoostedTreesQuantileStreamResourceDeserialize ❌ BoostedTreesQuantileStreamResourceFlush ❌ BoostedTreesQuantileStreamResourceGetBucketBoundaries ❌ BoostedTreesQuantileStreamResourceHandleOp ❌ BoostedTreesSerializeEnsemble ❌ BoostedTreesSparseAggregateStats ❌ BoostedTreesSparseCalculateBestFeatureSplit ❌
BoostedTreesTrainingPredict ❌ BoostedTreesUpdateEnsemble ❌ BoostedTreesUpdateEnsembleV2 ❌ BroadcastArgs ❌ BroadcastGradientArgs ✔️ BroadcastTo ✔️ Bucketize ❌ BytesProducedStatsDataset ❌ CSRSparseMatrixComponents ❌ CSRSparseMatrixToDense ✔️ CSRSparseMatrixToSparseTensor ❌ CSVDataset ❌ CSVDatasetV2 ❌ CTCBeamSearchDecoder ✔️ CTCGreedyDecoder ✔️ CTCLoss ✔️ CTCLossV2 ✔️ CacheDataset ❌ CacheDatasetV2 ❌ Case ✔️ Cast ✔️ Ceil ✔️ CheckNumerics ✔️ CheckNumericsV2 ✔️ Cholesky ✔️ CholeskyGrad ❌ ChooseFastestBranchDataset ❌ ChooseFastestDataset ❌ ClipByValue ❌ CloseSummaryWriter ❌ CollectiveBcastRecv ❌ CollectiveBcastSend ❌ CollectiveGather ❌ CollectiveGatherV2 ❌ CollectivePermute ✔️ CollectiveReduce ❌ CollectiveReduceV2 ❌ CombinedNonMaxSuppression ❌ CompareAndBitpack ❌ Complex ✔️ ComplexAbs ✔️ CompressElement ❌ ComputeAccidentalHits ❌ ComputeBatchSize ❌ Concat ✔️ ConcatOffset ✔️ ConcatV2 ✔️ ConcatenateDataset ❌ ConditionalAccumulator ❌ ConfigureDistributedTPU ❌ ConfigureTPUEmbedding ❌ Conj ✔️ ConjugateTranspose ✔️ Const ✔️ ConsumeMutexLock ❌ ControlTrigger ❌ Conv2D ✔️ Conv2DBackpropFilter ✔️ Conv2DBackpropInput ✔️ Conv3D ✔️ Conv3DBackpropFilter ❌ Conv3DBackpropFilterV2 ✔️ Conv3DBackpropInput ❌ Conv3DBackpropInputV2 ✔️ Copy ❌ CopyHost ❌ Cos ✔️ Cosh ✔️ CountUpTo ❌ CreateSummaryDbWriter ❌ CreateSummaryFileWriter ❌ CropAndResize ✔️ CropAndResizeGradBoxes ❌ CropAndResizeGradImage ❌ Cross ✔️ CrossReplicaSum ✔️ CudnnRNN ✔️ CudnnRNNBackprop ❌ CudnnRNNBackpropV2 ❌ CudnnRNNBackpropV3 ❌ CudnnRNNCanonicalToParams ❌ CudnnRNNCanonicalToParamsV2 ❌ CudnnRNNParamsSize ❌ CudnnRNNParamsToCanonical ❌ CudnnRNNParamsToCanonicalV2 ❌ CudnnRNNV2 ✔️ CudnnRNNV3 ✔️ Cumprod ✔️ Cumsum ✔️ CumulativeLogsumexp ✔️ DataFormatDimMap ❌ DataFormatVecPermute ❌ DataServiceDataset ❌ DatasetCardinality ❌ DatasetFromGraph ❌ DatasetToGraph ❌ DatasetToGraphV2 ❌ DatasetToSingleElement ❌ DatasetToTFRecord ❌ Dawsn ✔️ DebugGradientIdentity ✔️ DebugGradientRefIdentity ✔️ DebugIdentity ❌ DebugIdentityV2 ✔️ DebugNanCount ❌ DebugNumericSummary ❌ DebugNumericSummaryV2 ❌ DecodeAndCropJpeg ❌ DecodeBase64 ✔️ DecodeBmp ❌ DecodeCSV ❌ DecodeCompressed ❌ DecodeGif ❌ DecodeImage ❌ DecodeJSONExample ❌ DecodeJpeg ❌ DecodePaddedRaw ✔️ DecodePng ❌ DecodeProtoV2 ✔️ DecodeRaw ✔️ DecodeWav ❌ DeepCopy ❌ DeleteIterator ❌ DeleteMemoryCache ❌ DeleteMultiDeviceIterator ❌ DeleteRandomSeedGenerator ❌ DeleteSeedGenerator ❌ DeleteSessionTensor ✔️ DenseBincount ❌ DenseCountSparseOutput ❌ DenseToCSRSparseMatrix ✔️ DenseToDenseSetOperation ✔️ DenseToSparseBatchDataset ❌ DenseToSparseSetOperation ✔️ DepthToSpace ✔️ DepthwiseConv2dNative ✔️ DepthwiseConv2dNativeBackpropFilter ✔️ DepthwiseConv2dNativeBackpropInput ✔️ Dequantize ❌ DeserializeIterator ❌ DeserializeManySparse ❌ DeserializeSparse ❌ DestroyResourceOp ❌ DestroyTemporaryVariable ❌ DeviceIndex ❌ Diag ✔️ DiagPart ✔️ Digamma ✔️ Dilation2D ✔️ Dilation2DBackpropFilter ❌ Dilation2DBackpropInput ❌ DirectedInterleaveDataset ❌ Div ✔️ DivNoNan ✔️ DrawBoundingBoxes ✔️ DrawBoundingBoxesV2 ❌ DummyIterationCounter ❌ DummyMemoryCache ❌ DummySeedGenerator ❌ DynamicPartition ✔️ DynamicStitch ✔️ EagerPyFunc ✔️ EditDistance ✔️ Eig ✔️ Einsum ✔️ Elu ✔️ EluGrad ✔️ Empty ❌ EmptyTensorList ❌ EncodeBase64 ✔️ EncodeJpeg ❌ EncodeJpegVariableQuality ❌ EncodePng ❌
EncodeProto ✔️ EncodeWav ❌ EnqueueTPUEmbeddingIntegerBatch ❌ EnqueueTPUEmbeddingRaggedTensorBatch ❌ EnqueueTPUEmbeddingSparseBatch ❌ EnqueueTPUEmbeddingSparseTensorBatch ❌ EnsureShape ✔️ Enter ✔️ Equal ✔️ Erf ✔️ Erfc ✔️ Erfinv ✔️ EuclideanNorm ✔️ Exit ✔️ Exp ✔️ ExpandDims ✔️ ExperimentalAssertNextDataset ❌ ExperimentalAutoShardDataset ❌ ExperimentalBytesProducedStatsDataset ❌ ExperimentalCSVDataset ❌ ExperimentalChooseFastestDataset ❌ ExperimentalDatasetCardinality ❌ ExperimentalDatasetToTFRecord ❌ ExperimentalDenseToSparseBatchDataset ❌ ExperimentalDirectedInterleaveDataset ❌ ExperimentalGroupByReducerDataset ❌ ExperimentalGroupByWindowDataset ❌ ExperimentalIgnoreErrorsDataset ❌ ExperimentalIteratorGetDevice ❌ ExperimentalLMDBDataset ❌ ExperimentalLatencyStatsDataset ❌ ExperimentalMapAndBatchDataset ❌ ExperimentalMapDataset ❌ ExperimentalMatchingFilesDataset ❌ ExperimentalMaxIntraOpParallelismDataset ❌ ExperimentalNonSerializableDataset ❌ ExperimentalParallelInterleaveDataset ❌ ExperimentalParseExampleDataset ❌ ExperimentalPrivateThreadPoolDataset ❌ ExperimentalRandomDataset ❌ ExperimentalRebatchDataset ❌ ExperimentalScanDataset ❌ ExperimentalSetStatsAggregatorDataset ❌ ExperimentalSleepDataset ❌ ExperimentalSlidingWindowDataset ❌ ExperimentalSqlDataset ❌ ExperimentalStatsAggregatorHandle ❌ ExperimentalStatsAggregatorSummary ❌ ExperimentalTakeWhileDataset ❌ ExperimentalThreadPoolDataset ❌ ExperimentalThreadPoolHandle ❌ ExperimentalUnbatchDataset ❌ ExperimentalUniqueDataset ❌ Expint ✔️ Expm1 ✔️ ExtractGlimpse ✔️ ExtractGlimpseV2 ❌ ExtractImagePatches ✔️ ExtractJpegShape ❌ ExtractVolumePatches ✔️ FFT ✔️ FFT2D ✔️ FFT3D ✔️ FIFOQueue ❌ FIFOQueueV2 ❌ Fact ❌ FakeParam ❌ FakeQuantWithMinMaxArgs ✔️ FakeQuantWithMinMaxArgsGradient ❌ FakeQuantWithMinMaxVars ✔️ FakeQuantWithMinMaxVarsGradient ❌ FakeQuantWithMinMaxVarsPerChannel ✔️ FakeQuantWithMinMaxVarsPerChannelGradient ❌ FakeQueue ❌ Fill ✔️ FilterByLastComponentDataset ❌ FilterDataset ❌ Fingerprint ❌ FixedLengthRecordDataset ❌ FixedLengthRecordDatasetV2 ❌ FixedLengthRecordReader ✔️ FixedLengthRecordReaderV2 ❌ FixedUnigramCandidateSampler ❌ FlatMapDataset ❌ Floor ✔️ FloorDiv ✔️ FloorMod ✔️ FlushSummaryWriter ❌ For ❌ FractionalAvgPool ✔️ FractionalAvgPoolGrad ❌ FractionalMaxPool ✔️ FractionalMaxPoolGrad ❌ FresnelCos ✔️ FresnelSin ✔️ FusedBatchNorm ✔️ FusedBatchNormGrad ✔️ FusedBatchNormGradV2 ✔️ FusedBatchNormGradV3 ✔️ FusedBatchNormV2 ✔️ FusedBatchNormV3 ✔️ FusedPadConv2D ❌ FusedResizeAndPadConv2D ❌ GRUBlockCell ❌ GRUBlockCellGrad ❌ Gather ✔️ GatherNd ✔️ GatherV2 ✔️ GenerateBoundingBoxProposals ✔️ GenerateVocabRemapping ✔️ GeneratorDataset ❌ GetSessionHandle ✔️ GetSessionHandleV2 ✔️ GetSessionTensor ✔️ Greater ✔️ GreaterEqual ✔️ GroupByReducerDataset ❌ GroupByWindowDataset ❌ GuaranteeConst ❌ HSVToRGB ✔️ HashTable ✔️ HashTableV2 ✔️ HistogramFixedWidth ❌ HistogramSummary ✔️ IFFT ✔️ IFFT2D ✔️ IFFT3D ✔️ IRFFT ✔️ IRFFT2D ✔️ IRFFT3D ❌ Identity ✔️ IdentityN ✔️ IdentityReader ✔️ IdentityReaderV2 ❌ If ✔️ Igamma ✔️ IgammaGradA ❌ Igammac ✔️ IgnoreErrorsDataset ❌ Imag ✔️ ImageProjectiveTransformV2 ✔️ ImageProjectiveTransformV3 ✔️ ImageSummary ✔️ ImmutableConst ❌ ImportEvent ❌ InTopK ❌ InTopKV2 ❌ InfeedDequeue ❌ InfeedDequeueTuple ❌ InfeedEnqueue ❌ InfeedEnqueuePrelinearizedBuffer ❌ InfeedEnqueueTuple ❌
InitializeTable ✔️ InitializeTableFromDataset ❌ InitializeTableFromTextFile ✔️ InitializeTableFromTextFileV2 ✔️ InitializeTableV2 ✔️ InplaceAdd ❌ InplaceSub ❌ InplaceUpdate ❌ InterleaveDataset ❌ Inv ✔️ InvGrad ✔️ Invert ✔️ InvertPermutation ✔️ IsBoostedTreesEnsembleInitialized ❌ IsBoostedTreesQuantileStreamResourceInitialized ❌ IsFinite ❌ IsInf ❌ IsNan ❌ IsVariableInitialized ❌ IsotonicRegression ✔️ Iterator ❌ IteratorFromStringHandle ❌ IteratorFromStringHandleV2 ❌ IteratorGetDevice ❌ IteratorGetNext ❌ IteratorGetNextAsOptional ❌ IteratorGetNextSync ❌ IteratorToStringHandle ❌ IteratorV2 ❌ L2Loss ✔️ LMDBDataset ❌ LMDBReader ✔️ LRN ✔️ LRNGrad ❌ LSTMBlockCell ❌ LSTMBlockCellGrad ❌ LatencyStatsDataset ❌ LeakyRelu ✔️ LeakyReluGrad ✔️ LearnedUnigramCandidateSampler ❌ LeftShift ✔️ LegacyParallelInterleaveDatasetV2 ❌ Less ✔️ LessEqual ✔️ Lgamma ✔️ LinSpace ✔️ ListDiff ❌ LoadAndRemapMatrix ✔️ LoadDataset ❌ LoadTPUEmbeddingADAMParameters ❌ LoadTPUEmbeddingADAMParametersGradAccumDebug ❌ LoadTPUEmbeddingAdadeltaParameters ❌ LoadTPUEmbeddingAdadeltaParametersGradAccumDebug ❌ LoadTPUEmbeddingAdagradParameters ❌ LoadTPUEmbeddingAdagradParametersGradAccumDebug ❌ LoadTPUEmbeddingCenteredRMSPropParameters ❌ LoadTPUEmbeddingFTRLParameters ❌ LoadTPUEmbeddingFTRLParametersGradAccumDebug ❌ LoadTPUEmbeddingMDLAdagradLightParameters ❌ LoadTPUEmbeddingMomentumParameters ❌ LoadTPUEmbeddingMomentumParametersGradAccumDebug ❌ LoadTPUEmbeddingProximalAdagradParameters ❌ LoadTPUEmbeddingProximalAdagradParametersGradAccumDebug ❌ LoadTPUEmbeddingProximalYogiParameters ❌ LoadTPUEmbeddingProximalYogiParametersGradAccumDebug ❌ LoadTPUEmbeddingRMSPropParameters ❌ LoadTPUEmbeddingRMSPropParametersGradAccumDebug ❌ LoadTPUEmbeddingStochasticGradientDescentParameters ❌ LoadTPUEmbeddingStochasticGradientDescentParametersGradAccumDebug ❌ Log ✔️ Log1p ✔️ LogMatrixDeterminant ✔️ LogSoftmax ✔️ LogUniformCandidateSampler ❌ LogicalAnd ✔️ LogicalNot ✔️ LogicalOr ✔️ LookupTableExport ❌ LookupTableExportV2 ❌ LookupTableFind ✔️ LookupTableFindV2 ✔️ LookupTableImport ❌ LookupTableImportV2 ❌ LookupTableInsert ✔️ LookupTableInsertV2 ✔️ LookupTableRemoveV2 ❌ LookupTableSize ✔️ LookupTableSizeV2 ✔️ LoopCond ✔️ LowerBound ❌ Lu ❌ MakeIterator ❌ MapAndBatchDataset ❌ MapClear ❌ MapDataset ❌ MapDefun ❌ MapIncompleteSize ❌ MapPeek ❌ MapSize ❌ MapStage ❌ MapUnstage ❌ MapUnstageNoKey ❌ MatMul ✔️ MatchingFiles ❌ MatchingFilesDataset ❌ MatrixBandPart ✔️ MatrixDeterminant ✔️ MatrixDiag ✔️ MatrixDiagPart ✔️ MatrixDiagPartV2 ✔️ MatrixDiagPartV3 ✔️ MatrixDiagV2 ✔️ MatrixDiagV3 ✔️ MatrixExponential ❌ MatrixInverse ✔️ MatrixLogarithm ❌ MatrixSetDiag ✔️ MatrixSetDiagV2 ✔️ MatrixSetDiagV3 ✔️ MatrixSolve ✔️ MatrixSolveLs ✔️ MatrixSquareRoot ✔️ MatrixTriangularSolve ✔️ Max ✔️ MaxIntraOpParallelismDataset ❌ MaxPool ✔️ MaxPool3D ✔️ MaxPool3DGrad ✔️ MaxPool3DGradGrad ✔️ MaxPoolGrad ✔️ MaxPoolGradGrad ✔️ MaxPoolGradGradV2 ❌ MaxPoolGradGradWithArgmax ❌ MaxPoolGradV2 ✔️ MaxPoolGradWithArgmax ❌ MaxPoolV2 ✔️ MaxPoolWithArgmax ✔️ Maximum ✔️ Mean ✔️ Merge ✔️ MergeSummary ✔️ MergeV2Checkpoints ❌ Mfcc ❌ Min ✔️ Minimum ✔️ MirrorPad ✔️ MirrorPadGrad ✔️ Mod ❌ ModelDataset ❌ Mul ✔️ MulNoNan ✔️ MultiDeviceIterator ❌ MultiDeviceIteratorFromStringHandle ❌ MultiDeviceIteratorGetNextFromShard ❌
MultiDeviceIteratorInit ❌ MultiDeviceIteratorToStringHandle ❌ Multinomial ✔️ MutableDenseHashTable ✔️ MutableDenseHashTableV2 ✔️ MutableHashTable ✔️ MutableHashTableOfTensors ✔️ MutableHashTableOfTensorsV2 ✔️ MutableHashTableV2 ✔️ MutexLock ❌ MutexV2 ❌ NcclAllReduce ✔️ NcclBroadcast ✔️ NcclReduce ✔️ Ndtri ✔️ Neg ✔️ NextAfter ✔️ NextIteration ✔️ NoOp ❌ NonDeterministicInts ❌ NonMaxSuppression ✔️ NonMaxSuppressionV2 ✔️ NonMaxSuppressionV3 ❌ NonMaxSuppressionV4 ❌ NonMaxSuppressionV5 ❌ NonMaxSuppressionWithOverlaps ✔️ NonSerializableDataset ❌ NotEqual ✔️ NthElement ✔️ OneHot ✔️ OneShotIterator ❌ OnesLike ✔️ OptimizeDataset ❌ OptimizeDatasetV2 ❌ OptionalFromValue ✔️ OptionalGetValue ✔️ OptionalHasValue ❌ OptionalNone ❌ OrderedMapClear ❌ OrderedMapIncompleteSize ❌ OrderedMapPeek ❌ OrderedMapSize ❌ OrderedMapStage ❌ OrderedMapUnstage ❌ OrderedMapUnstageNoKey ❌ OutfeedDequeue ❌ OutfeedDequeueTuple ❌ OutfeedDequeueTupleV2 ❌ OutfeedDequeueV2 ❌ OutfeedEnqueue ❌ OutfeedEnqueueTuple ❌ Pack ✔️ Pad ✔️ PadV2 ✔️ PaddedBatchDataset ❌ PaddedBatchDatasetV2 ❌ PaddingFIFOQueue ❌ PaddingFIFOQueueV2 ❌ ParallelConcat ❌ ParallelDynamicStitch ✔️ ParallelInterleaveDataset ❌ ParallelInterleaveDatasetV2 ❌ ParallelInterleaveDatasetV3 ❌ ParallelInterleaveDatasetV4 ❌ ParallelMapDataset ❌ ParallelMapDatasetV2 ❌ ParameterizedTruncatedNormal ✔️ ParseExample ❌ ParseExampleDataset ❌ ParseExampleDatasetV2 ❌ ParseExampleV2 ❌ ParseSequenceExample ❌ ParseSequenceExampleV2 ❌ ParseSingleExample ❌ ParseSingleSequenceExample ❌ ParseTensor ✔️ PartitionedCall ❌ Placeholder ❌ PlaceholderV2 ❌ PlaceholderWithDefault ✔️ Polygamma ✔️ PopulationCount ✔️ Pow ✔️ PrefetchDataset ❌ Prelinearize ❌ PrelinearizeTuple ❌ PreventGradient ✔️ Print ✔️ PrintV2 ❌ PriorityQueue ❌ PriorityQueueV2 ❌ PrivateThreadPoolDataset ❌ Prod ✔️ PyFunc ✔️ PyFuncStateless ✔️ Qr ✔️ QuantizeAndDequantize ✔️ QuantizeAndDequantizeV2 ✔️ QuantizeAndDequantizeV3 ✔️ QuantizeAndDequantizeV4 ✔️ QuantizeAndDequantizeV4Grad ✔️ QuantizeDownAndShrinkRange ❌ QuantizeV2 ❌ QuantizedAdd ❌ QuantizedAvgPool ❌ QuantizedBatchNormWithGlobalNormalization ❌ QuantizedBiasAdd ❌ QuantizedConcat ❌ QuantizedConv2D ❌ QuantizedConv2DAndRelu ❌ QuantizedConv2DAndReluAndRequantize ❌ QuantizedConv2DAndRequantize ❌ QuantizedConv2DPerChannel ❌ QuantizedConv2DWithBias ❌ QuantizedConv2DWithBiasAndRelu ❌ QuantizedConv2DWithBiasAndReluAndRequantize ❌ QuantizedConv2DWithBiasAndRequantize ❌ QuantizedConv2DWithBiasSignedSumAndReluAndRequantize ❌ QuantizedConv2DWithBiasSumAndRelu ❌ QuantizedConv2DWithBiasSumAndReluAndRequantize ❌ QuantizedDepthwiseConv2D ❌ QuantizedDepthwiseConv2DWithBias ❌ QuantizedDepthwiseConv2DWithBiasAndRelu ❌ QuantizedDepthwiseConv2DWithBiasAndReluAndRequantize ❌ QuantizedInstanceNorm ❌ QuantizedMatMul ❌ QuantizedMatMulWithBias ❌ QuantizedMatMulWithBiasAndDequantize ❌ QuantizedMatMulWithBiasAndRelu ❌ QuantizedMatMulWithBiasAndReluAndRequantize ❌ QuantizedMatMulWithBiasAndRequantize ❌ QuantizedMaxPool ❌ QuantizedMul ❌ QuantizedRelu ❌ QuantizedRelu6 ❌ QuantizedReluX ❌ QuantizedReshape ❌ QuantizedResizeBilinear ❌ QueueClose ✔️ QueueCloseV2 ❌ QueueDequeue ✔️ QueueDequeueMany ✔️ QueueDequeueManyV2 ❌ QueueDequeueUpTo ✔️ QueueDequeueUpToV2 ❌ QueueDequeueV2 ❌ QueueEnqueue ✔️ QueueEnqueueMany ✔️ QueueEnqueueManyV2 ❌ QueueEnqueueV2 ❌ QueueIsClosed ❌ QueueIsClosedV2 ❌ QueueSize ✔️ QueueSizeV2 ❌ RFFT ✔️
RFFT2D ✔️ RFFT3D ❌ RGBToHSV ✔️ RaggedBincount ❌ RaggedCountSparseOutput ❌ RaggedCross ❌ RaggedGather ✔️ RaggedRange ✔️ RaggedTensorFromVariant ✔️ RaggedTensorToSparse ✔️ RaggedTensorToTensor ✔️ RaggedTensorToVariant ✔️ RaggedTensorToVariantGradient ❌ RandomCrop ✔️ RandomDataset ❌ RandomGamma ✔️ RandomGammaGrad ❌ RandomPoisson ❌ RandomPoissonV2 ❌ RandomShuffle ❌ RandomShuffleQueue ❌ RandomShuffleQueueV2 ❌ RandomStandardNormal ✔️ RandomUniform ✔️ RandomUniformInt ❌ Range ✔️ RangeDataset ❌ Rank ✔️ ReadFile ❌ ReadVariableOp ✔️ ReaderNumRecordsProduced ✔️ ReaderNumRecordsProducedV2 ❌ ReaderNumWorkUnitsCompleted ✔️ ReaderNumWorkUnitsCompletedV2 ❌ ReaderRead ✔️ ReaderReadUpTo ✔️ ReaderReadUpToV2 ❌ ReaderReadV2 ❌ ReaderReset ✔️ ReaderResetV2 ❌ ReaderRestoreState ✔️ ReaderRestoreStateV2 ❌ ReaderSerializeState ✔️ ReaderSerializeStateV2 ❌ Real ✔️ RealDiv ✔️ RebatchDataset ❌ RebatchDatasetV2 ❌ Reciprocal ✔️ ReciprocalGrad ✔️ RecordInput ❌ Recv ❌ RecvTPUEmbeddingActivations ❌ ReduceDataset ✔️ ReduceJoin ✔️ RefEnter ✔️ RefExit ✔️ RefIdentity ✔️ RefMerge ✔️ RefNextIteration ✔️ RefSelect ❌ RefSwitch ✔️ RegexFullMatch ❌ RegexReplace ✔️ RegisterDataset ❌ Relu ✔️ Relu6 ✔️ Relu6Grad ✔️ ReluGrad ✔️ RemoteCall ❌ RepeatDataset ❌ RequantizationRange ❌ RequantizationRangePerChannel ❌ Requantize ❌ RequantizePerChannel ❌ Reshape ✔️ ResizeArea ❌ ResizeBicubic ✔️ ResizeBicubicGrad ❌ ResizeBilinear ✔️ ResizeBilinearGrad ❌ ResizeNearestNeighbor ✔️ ResizeNearestNeighborGrad ❌ ResourceAccumulatorApplyGradient ❌ ResourceAccumulatorNumAccumulated ❌ ResourceAccumulatorSetGlobalStep ❌ ResourceAccumulatorTakeGradient ❌ ResourceApplyAdaMax ❌ ResourceApplyAdadelta ❌ ResourceApplyAdagrad ❌ ResourceApplyAdagradDA ❌ ResourceApplyAdagradV2 ❌ ResourceApplyAdam ❌ ResourceApplyAdamWithAmsgrad ❌ ResourceApplyAddSign ❌ ResourceApplyCenteredRMSProp ❌ ResourceApplyFtrl ❌ ResourceApplyFtrlV2 ❌ ResourceApplyGradientDescent ❌ ResourceApplyKerasMomentum ❌ ResourceApplyMomentum ❌ ResourceApplyPowerSign ❌ ResourceApplyProximalAdagrad ❌ ResourceApplyProximalGradientDescent ❌ ResourceApplyRMSProp ❌ ResourceConditionalAccumulator ❌ ResourceCountUpTo ❌ ResourceGather ✔️ ResourceGatherNd ✔️ ResourceScatterAdd ❌ ResourceScatterDiv ❌ ResourceScatterMax ❌ ResourceScatterMin ❌ ResourceScatterMul ❌ ResourceScatterNdAdd ❌ ResourceScatterNdMax ❌ ResourceScatterNdMin ❌ ResourceScatterNdSub ❌ ResourceScatterNdUpdate ❌ ResourceScatterSub ❌ ResourceScatterUpdate ❌ ResourceSparseApplyAdadelta ❌ ResourceSparseApplyAdagrad ❌ ResourceSparseApplyAdagradDA ❌ ResourceSparseApplyAdagradV2 ❌ ResourceSparseApplyCenteredRMSProp ❌ ResourceSparseApplyFtrl ❌ ResourceSparseApplyFtrlV2 ❌ ResourceSparseApplyKerasMomentum ❌ ResourceSparseApplyMomentum ❌ ResourceSparseApplyProximalAdagrad ❌ ResourceSparseApplyProximalGradientDescent ❌ ResourceSparseApplyRMSProp ❌ ResourceStridedSliceAssign ❌ Restore ❌ RestoreSlice ❌ RestoreV2 ❌ RetrieveTPUEmbeddingADAMParameters ❌ RetrieveTPUEmbeddingADAMParametersGradAccumDebug ❌ RetrieveTPUEmbeddingAdadeltaParameters ❌ RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug ❌ RetrieveTPUEmbeddingAdagradParameters ❌ RetrieveTPUEmbeddingAdagradParametersGradAccumDebug ❌ RetrieveTPUEmbeddingCenteredRMSPropParameters ❌ RetrieveTPUEmbeddingFTRLParameters ❌ RetrieveTPUEmbeddingFTRLParametersGradAccumDebug ❌ RetrieveTPUEmbeddingMDLAdagradLightParameters ❌
RetrieveTPUEmbeddingMomentumParameters ❌ RetrieveTPUEmbeddingMomentumParametersGradAccumDebug ❌ RetrieveTPUEmbeddingProximalAdagradParameters ❌ RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug ❌ RetrieveTPUEmbeddingProximalYogiParameters ❌ RetrieveTPUEmbeddingProximalYogiParametersGradAccumDebug ❌ RetrieveTPUEmbeddingRMSPropParameters ❌ RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug ❌ RetrieveTPUEmbeddingStochasticGradientDescentParameters ❌ RetrieveTPUEmbeddingStochasticGradientDescentParametersGradAccumDebug ❌ Reverse ✔️ ReverseSequence ✔️ ReverseV2 ✔️ RightShift ✔️ Rint ✔️ RngReadAndSkip ❌ RngSkip ❌ Roll ✔️ Round ✔️ Rsqrt ✔️ RsqrtGrad ✔️ SampleDistortedBoundingBox ✔️ SampleDistortedBoundingBoxV2 ✔️ SamplingDataset ❌ Save ❌ SaveDataset ❌ SaveSlices ❌ SaveV2 ❌ ScalarSummary ✔️ ScaleAndTranslate ✔️ ScaleAndTranslateGrad ❌ ScanDataset ❌ ScatterAdd ✔️ ScatterDiv ✔️ ScatterMax ❌ ScatterMin ❌ ScatterMul ✔️ ScatterNd ✔️ ScatterNdAdd ✔️ ScatterNdMax ❌ ScatterNdMin ❌ ScatterNdNonAliasingAdd ✔️ ScatterNdSub ✔️ ScatterNdUpdate ✔️ ScatterSub ✔️ ScatterUpdate ❌ SdcaFprint ✔️ SdcaOptimizer ✔️ SdcaOptimizerV2 ✔️ SdcaShrinkL1 ✔️ SegmentMax ✔️ SegmentMean ✔️ SegmentMin ✔️ SegmentProd ❌ SegmentSum ✔️ Select ✔️ SelectV2 ✔️ SelfAdjointEig ❌ SelfAdjointEigV2 ✔️ Selu ✔️ SeluGrad ✔️ Send ❌ SendTPUEmbeddingGradients ❌ SerializeIterator ❌ SerializeManySparse ❌ SerializeSparse ❌ SerializeTensor ✔️ SetSize ✔️ SetStatsAggregatorDataset ❌ Shape ✔️ ShapeN ✔️ ShardDataset ❌ ShardedFilename ❌ ShardedFilespec ❌ ShuffleAndRepeatDataset ❌ ShuffleAndRepeatDatasetV2 ❌ ShuffleDataset ❌ ShuffleDatasetV2 ❌ ShuffleDatasetV3 ❌ ShutdownDistributedTPU ❌ Sigmoid ✔️ SigmoidGrad ✔️ Sign ✔️ Sin ✔️ Sinh ✔️ Size ✔️ SkipDataset ❌ SleepDataset ❌ Slice ✔️ SlidingWindowDataset ❌ Snapshot ❌ SnapshotDataset ❌ SnapshotDatasetV2 ❌ SobolSample ❌ Softmax ✔️ SoftmaxCrossEntropyWithLogits ✔️ Softplus ✔️ SoftplusGrad ✔️ Softsign ✔️ SoftsignGrad ❌ SpaceToBatch ✔️ SpaceToBatchND ✔️ SpaceToDepth ✔️ SparseAccumulatorApplyGradient ❌ SparseAccumulatorTakeGradient ❌ SparseAdd ✔️ SparseAddGrad ✔️ SparseApplyAdadelta ❌ SparseApplyAdagrad ❌ SparseApplyAdagradDA ❌ SparseApplyAdagradV2 ❌ SparseApplyCenteredRMSProp ❌ SparseApplyFtrl ❌ SparseApplyFtrlV2 ❌ SparseApplyMomentum ❌ SparseApplyProximalAdagrad ❌ SparseApplyProximalGradientDescent ❌ SparseApplyRMSProp ❌ SparseBincount ❌ SparseConcat ✔️ SparseConditionalAccumulator ❌ SparseCountSparseOutput ❌ SparseCross ❌ SparseCrossHashed ❌ SparseCrossV2 ❌ SparseDenseCwiseAdd ✔️ SparseDenseCwiseDiv ✔️ SparseDenseCwiseMul ✔️ SparseFillEmptyRows ✔️ SparseFillEmptyRowsGrad ❌ SparseMatMul ✔️ SparseMatrixAdd ✔️ SparseMatrixMatMul ✔️ SparseMatrixMul ✔️ SparseMatrixNNZ ✔️ SparseMatrixOrderingAMD ❌ SparseMatrixSoftmax ✔️ SparseMatrixSoftmaxGrad ❌ SparseMatrixSparseCholesky ❌ SparseMatrixSparseMatMul ✔️ SparseMatrixTranspose ✔️ SparseMatrixZeros ✔️ SparseReduceMax ❌ SparseReduceMaxSparse ❌ SparseReduceSum ✔️ SparseReduceSumSparse ❌ SparseReorder ✔️ SparseReshape ❌ SparseSegmentMean ✔️ SparseSegmentMeanGrad ❌ SparseSegmentMeanWithNumSegments ✔️ SparseSegmentSqrtN ✔️ SparseSegmentSqrtNGrad ❌ SparseSegmentSqrtNWithNumSegments ✔️ SparseSegmentSum ✔️ SparseSegmentSumWithNumSegments ✔️ SparseSlice ✔️
SparseSliceGrad ❌ SparseSoftmax ✔️ SparseSoftmaxCrossEntropyWithLogits ✔️ SparseSparseMaximum ✔️ SparseSparseMinimum ✔️ SparseSplit ❌ SparseTensorDenseAdd ✔️ SparseTensorDenseMatMul ✔️ SparseTensorSliceDataset ❌ SparseTensorToCSRSparseMatrix ❌ SparseToDense ✔️ SparseToSparseSetOperation ✔️ Spence ✔️ Split ✔️ SplitV ✔️ SqlDataset ❌ Sqrt ✔️ SqrtGrad ✔️ Square ✔️ SquaredDifference ✔️ Squeeze ✔️ Stack ✔️ StackClose ✔️ StackCloseV2 ❌ StackPop ✔️ StackPopV2 ❌ StackPush ✔️ StackPushV2 ❌ StackV2 ❌ Stage ❌ StageClear ❌ StagePeek ❌ StageSize ❌ StatefulPartitionedCall ❌ StatefulRandomBinomial ❌ StatefulStandardNormal ❌ StatefulStandardNormalV2 ❌ StatefulTruncatedNormal ❌ StatefulUniform ❌ StatefulUniformFullInt ❌ StatefulUniformInt ❌ StatelessCase ✔️ StatelessIf ✔️ StatelessMultinomial ✔️ StatelessParameterizedTruncatedNormal ✔️ StatelessRandomBinomial ✔️ StatelessRandomGammaV2 ✔️ StatelessRandomGetKeyCounterAlg ❌ StatelessRandomNormal ✔️ StatelessRandomNormalV2 ✔️ StatelessRandomPoisson ✔️ StatelessRandomUniform ✔️ StatelessRandomUniformFullInt ✔️ StatelessRandomUniformFullIntV2 ✔️ StatelessRandomUniformInt ✔️ StatelessRandomUniformIntV2 ✔️ StatelessRandomUniformV2 ✔️ StatelessSampleDistortedBoundingBox ❌ StatelessTruncatedNormal ✔️ StatelessTruncatedNormalV2 ✔️ StatelessWhile ✔️ StaticRegexFullMatch ❌ StaticRegexReplace ❌ StatsAggregatorHandle ❌ StatsAggregatorHandleV2 ❌ StatsAggregatorSetSummaryWriter ❌ StatsAggregatorSummary ❌ StopGradient ✔️ StridedSlice ✔️ StridedSliceAssign ❌ StridedSliceGrad ✔️ StringFormat ❌ StringJoin ✔️ StringLength ❌ StringLower ❌ StringNGrams ❌ StringSplit ✔️ StringSplitV2 ❌ StringStrip ❌ StringToHashBucket ✔️ StringToHashBucketFast ✔️ StringToHashBucketStrong ✔️ StringToNumber ✔️ StringUpper ❌ Sub ✔️ Substr ❌ Sum ✔️ SummaryWriter ❌ Svd ✔️ Switch ✔️ SymbolicGradient ❌ TFRecordDataset ❌ TFRecordReader ✔️ TFRecordReaderV2 ❌ TPUCompilationResult ❌ TPUEmbeddingActivations ✔️ TPUOrdinalSelector ❌ TPUPartitionedCall ❌ TPUReplicateMetadata ❌ TPUReplicatedInput ✔️ TPUReplicatedOutput ❌ TakeDataset ❌ TakeManySparseFromTensorsMap ❌ TakeWhileDataset ❌ Tan ✔️ Tanh ✔️ TanhGrad ✔️ TemporaryVariable ❌ TensorArray ✔️ TensorArrayClose ✔️ TensorArrayCloseV2 ✔️ TensorArrayCloseV3 ✔️ TensorArrayConcat ✔️ TensorArrayConcatV2 ✔️ TensorArrayConcatV3 ✔️ TensorArrayGather ✔️ TensorArrayGatherV2 ✔️ TensorArrayGatherV3 ✔️ TensorArrayGrad ✔️ TensorArrayGradV2 ✔️ TensorArrayGradV3 ✔️ TensorArrayGradWithShape ✔️ TensorArrayPack ❌ TensorArrayRead ✔️ TensorArrayReadV2 ✔️ TensorArrayReadV3 ✔️ TensorArrayScatter ✔️ TensorArrayScatterV2 ✔️ TensorArrayScatterV3 ✔️ TensorArraySize ✔️ TensorArraySizeV2 ✔️ TensorArraySizeV3 ✔️ TensorArraySplit ✔️ TensorArraySplitV2 ✔️ TensorArraySplitV3 ✔️ TensorArrayUnpack ❌ TensorArrayV2 ✔️ TensorArrayV3 ✔️ TensorArrayWrite ✔️ TensorArrayWriteV2 ✔️ TensorArrayWriteV3 ✔️ TensorDataset ❌ TensorListConcat ✔️ TensorListConcatLists ✔️ TensorListConcatV2 ✔️ TensorListElementShape ✔️ TensorListFromTensor ✔️ TensorListGather ✔️ TensorListGetItem ✔️ TensorListLength ✔️ TensorListPopBack ✔️ TensorListPushBack ✔️ TensorListPushBackBatch ✔️ TensorListReserve ❌ TensorListResize ✔️
TensorListScatter βœ”οΈ TensorListScatterIntoExistingList βœ”οΈ TensorListScatterV2 βœ”οΈ TensorListSetItem βœ”οΈ TensorListSplit βœ”οΈ TensorListStack βœ”οΈ TensorScatterAdd βœ”οΈ TensorScatterMax βœ”οΈ TensorScatterMin βœ”οΈ TensorScatterSub βœ”οΈ TensorScatterUpdate βœ”οΈ TensorSliceDataset ❌ TensorStridedSliceUpdate βœ”οΈ TensorSummary βœ”οΈ TensorSummaryV2 βœ”οΈ TextLineDataset ❌ TextLineReader βœ”οΈ TextLineReaderV2 ❌ ThreadPoolDataset ❌ ThreadPoolHandle ❌ ThreadUnsafeUnigramCandidateSampler ❌ Tile βœ”οΈ TileGrad ❌ Timestamp βœ”οΈ ToBool ❌ TopK βœ”οΈ TopKV2 βœ”οΈ Transpose βœ”οΈ TridiagonalMatMul βœ”οΈ TridiagonalSolve βœ”οΈ TruncateDiv βœ”οΈ TruncateMod ❌ TruncatedNormal βœ”οΈ Unbatch ❌ UnbatchDataset ❌ UnbatchGrad ❌ UncompressElement ❌ UnicodeDecode ❌ UnicodeDecodeWithOffsets ❌ UnicodeEncode ❌ UnicodeScript ❌ UnicodeTranscode ❌ UniformCandidateSampler ❌ Unique ❌ UniqueDataset ❌ UniqueV2 ❌ UniqueWithCounts ❌ UniqueWithCountsV2 ❌ Unpack βœ”οΈ UnravelIndex ❌ UnsortedSegmentJoin ❌ UnsortedSegmentMax βœ”οΈ UnsortedSegmentMin βœ”οΈ UnsortedSegmentProd βœ”οΈ UnsortedSegmentSum βœ”οΈ Unstage ❌ UnwrapDatasetVariant ❌ UpperBound ❌ VarHandleOp ❌ VarIsInitializedOp βœ”οΈ Variable ❌ VariableShape βœ”οΈ VariableV2 ❌ Where ❌ While βœ”οΈ WholeFileReader βœ”οΈ WholeFileReaderV2 ❌ WindowDataset ❌ WorkerHeartbeat ❌ WrapDatasetVariant ❌ WriteAudioSummary ❌ WriteFile ❌ WriteGraphSummary ❌ WriteHistogramSummary ❌ WriteImageSummary ❌ WriteRawProtoSummary ❌ WriteScalarSummary ❌ WriteSummary ❌ Xdivy βœ”οΈ Xlog1py βœ”οΈ Xlogy βœ”οΈ ZerosLike βœ”οΈ Zeta βœ”οΈ ZipDataset ❌
tensorflow.raw_ops
tf.raw_ops.Abort Raise an exception to abort the process when called. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Abort tf.raw_ops.Abort( error_msg='', exit_without_error=False, name=None ) If exit_without_error is true, the process will exit normally, otherwise it will exit with a SIGABRT signal. Returns nothing but an exception. Args error_msg An optional string. Defaults to "". A string which is the message associated with the exception. exit_without_error An optional bool. Defaults to False. name A name for the operation (optional). Returns The created Operation.
tensorflow.raw_ops.abort
tf.raw_ops.Abs Computes the absolute value of a tensor. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Abs tf.raw_ops.Abs( x, name=None ) Given a tensor x, this operation returns a tensor containing the absolute value of each element in x. For example, if x is an input element and y is an output element, this operation computes \(y = |x|\). Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int8, int16, int32, int64. name A name for the operation (optional). Returns A Tensor. Has the same type as x.
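For illustration, a minimal eager-mode sketch (raw ops accept keyword arguments only):
import tensorflow as tf

x = tf.constant([-2.5, 0.0, 3.0])
tf.raw_ops.Abs(x=x)  # => [2.5, 0.0, 3.0]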
tensorflow.raw_ops.abs
tf.raw_ops.AccumulateNV2 Returns the element-wise sum of a list of tensors. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AccumulateNV2 tf.raw_ops.AccumulateNV2( inputs, shape, name=None ) tf.accumulate_n_v2 performs the same operation as tf.add_n, but does not wait for all of its inputs to be ready before beginning to sum. This can save memory if inputs are ready at different times, since minimum temporary storage is proportional to the output size rather than the inputs' size. Unlike the original accumulate_n, accumulate_n_v2 is differentiable. Returns a Tensor of the same shape and type as the elements of inputs. Args inputs A list of at least 1 Tensor objects with the same type in: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. A list of Tensor objects, each with same shape and type. shape A tf.TensorShape or list of ints. Shape of elements of inputs. name A name for the operation (optional). Returns A Tensor. Has the same type as inputs.
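A minimal sketch of a direct call under eager execution; shape describes the shape of each element of inputs:
import tensorflow as tf

a, b, c = tf.constant([1, 2]), tf.constant([3, 4]), tf.constant([5, 6])
tf.raw_ops.AccumulateNV2(inputs=[a, b, c], shape=[2])  # => [9, 12]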
tensorflow.raw_ops.accumulatenv2
tf.raw_ops.AccumulatorApplyGradient Applies a gradient to a given accumulator. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AccumulatorApplyGradient tf.raw_ops.AccumulatorApplyGradient( handle, local_step, gradient, name=None ) Does not add if local_step is less than the accumulator's global_step. Args handle A Tensor of type mutable string. The handle to an accumulator. local_step A Tensor of type int64. The local_step value at which the gradient was computed. gradient A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. A tensor of the gradient to be accumulated. name A name for the operation (optional). Returns The created Operation.
tensorflow.raw_ops.accumulatorapplygradient
tf.raw_ops.AccumulatorNumAccumulated Returns the number of gradients aggregated in the given accumulator. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AccumulatorNumAccumulated tf.raw_ops.AccumulatorNumAccumulated( handle, name=None ) Args handle A Tensor of type mutable string. The handle to an accumulator. name A name for the operation (optional). Returns A Tensor of type int32.
tensorflow.raw_ops.accumulatornumaccumulated
tf.raw_ops.AccumulatorSetGlobalStep Updates the accumulator with a new value for global_step. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AccumulatorSetGlobalStep tf.raw_ops.AccumulatorSetGlobalStep( handle, new_global_step, name=None ) Logs a warning if the accumulator's value is already higher than new_global_step. Args handle A Tensor of type mutable string. The handle to an accumulator. new_global_step A Tensor of type int64. The new global_step value to set. name A name for the operation (optional). Returns The created Operation.
tensorflow.raw_ops.accumulatorsetglobalstep
tf.raw_ops.AccumulatorTakeGradient Extracts the average gradient in the given ConditionalAccumulator. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AccumulatorTakeGradient tf.raw_ops.AccumulatorTakeGradient( handle, num_required, dtype, name=None ) The op blocks until sufficient (i.e., more than num_required) gradients have been accumulated. If the accumulator has already aggregated more than num_required gradients, it returns the average of the accumulated gradients. Also automatically increments the recorded global_step in the accumulator by 1, and resets the aggregate to 0. Args handle A Tensor of type mutable string. The handle to an accumulator. num_required A Tensor of type int32. Number of gradients required before we return an aggregate. dtype A tf.DType from: tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.complex64, tf.int64, tf.qint8, tf.quint8, tf.qint32, tf.bfloat16, tf.uint16, tf.complex128, tf.half, tf.uint32, tf.uint64. The data type of accumulated gradients. Needs to correspond to the type of the accumulator. name A name for the operation (optional). Returns A Tensor of type dtype.
tensorflow.raw_ops.accumulatortakegradient
tf.raw_ops.Acos Computes acos of x element-wise. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Acos tf.raw_ops.Acos( x, name=None ) Provided an input tensor, the tf.math.acos operation returns the inverse cosine of each element of the tensor. If y = tf.math.cos(x) then, x = tf.math.acos(y). Input range is [-1, 1] and the output has a range of [0, pi]. Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int8, int16, int32, int64, complex64, complex128. name A name for the operation (optional). Returns A Tensor. Has the same type as x.
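A minimal sketch covering the endpoints of the input range:
import tensorflow as tf

x = tf.constant([-1.0, 0.0, 1.0])
tf.raw_ops.Acos(x=x)  # => [pi, pi/2, 0], i.e. [3.1415927, 1.5707964, 0.]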
tensorflow.raw_ops.acos
tf.raw_ops.Acosh Computes inverse hyperbolic cosine of x element-wise. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Acosh tf.raw_ops.Acosh( x, name=None ) Given an input tensor, the function computes inverse hyperbolic cosine of every element. Input range is [1, inf]. It returns nan if the input lies outside the range. x = tf.constant([-2, -0.5, 1, 1.2, 200, 10000, float("inf")]) tf.math.acosh(x) ==> [nan nan 0. 0.62236255 5.9914584 9.903487 inf] Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128. name A name for the operation (optional). Returns A Tensor. Has the same type as x.
tensorflow.raw_ops.acosh
tf.raw_ops.Add Returns x + y element-wise. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Add tf.raw_ops.Add( x, y, name=None ) Note: math.add supports broadcasting; AddN does not. Given two input tensors, the tf.add operation computes the sum for every element in the tensor. Both input and output have a range (-inf, inf). Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, int16, int32, int64, complex64, complex128, string. y A Tensor. Must have the same type as x. name A name for the operation (optional). Returns A Tensor. Has the same type as x.
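A minimal sketch of the broadcasting behavior noted above:
import tensorflow as tf

x = tf.constant([[1.0], [2.0]])  # shape [2, 1]
y = tf.constant([10.0, 20.0])    # shape [2]
tf.raw_ops.Add(x=x, y=y)         # broadcasts to shape [2, 2]: [[11, 21], [12, 22]]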
tensorflow.raw_ops.add
tf.raw_ops.AddManySparseToTensorsMap Add an N-minibatch SparseTensor to a SparseTensorsMap, return N handles. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AddManySparseToTensorsMap tf.raw_ops.AddManySparseToTensorsMap( sparse_indices, sparse_values, sparse_shape, container='', shared_name='', name=None ) A SparseTensor of rank R is represented by three tensors: sparse_indices, sparse_values, and sparse_shape, where sparse_indices.shape[1] == sparse_shape.shape[0] == R An N-minibatch of SparseTensor objects is represented as a SparseTensor having a first sparse_indices column taking values between [0, N), where the minibatch size N == sparse_shape[0]. The input SparseTensor must have rank R greater than 1, and the first dimension is treated as the minibatch dimension. Elements of the SparseTensor must be sorted in increasing order of this first dimension. The stored SparseTensor objects pointed to by each row of the output sparse_handles will have rank R-1. The SparseTensor values can then be read out as part of a minibatch by passing the given keys as vector elements to TakeManySparseFromTensorsMap. To ensure the correct SparseTensorsMap is accessed, ensure that the same container and shared_name are passed to that Op. If no shared_name is provided here, instead use the name of the Operation created by calling AddManySparseToTensorsMap as the shared_name passed to TakeManySparseFromTensorsMap. Ensure the Operations are colocated. Args sparse_indices A Tensor of type int64. 2-D. The indices of the minibatch SparseTensor. sparse_indices[:, 0] must be ordered values in [0, N). sparse_values A Tensor. 1-D. The values of the minibatch SparseTensor. sparse_shape A Tensor of type int64. 1-D. The shape of the minibatch SparseTensor. The minibatch size N == sparse_shape[0]. container An optional string. Defaults to "". The container name for the SparseTensorsMap created by this op. shared_name An optional string. Defaults to "". The shared name for the SparseTensorsMap created by this op. If blank, the new Operation's unique name is used. name A name for the operation (optional). Returns A Tensor of type int64.
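A minimal sketch of storing an N=2 minibatch (the shared_name "sm" is illustrative, not part of the API):
import tensorflow as tf

# Combined SparseTensor of rank R=2: minibatch size N=2, dense shape [2, 3].
indices = tf.constant([[0, 1], [1, 0], [1, 2]], tf.int64)
values = tf.constant([10.0, 20.0, 30.0])
shape = tf.constant([2, 3], tf.int64)
handles = tf.raw_ops.AddManySparseToTensorsMap(
    sparse_indices=indices, sparse_values=values, sparse_shape=shape,
    shared_name="sm")  # int64 vector of N handles, one per minibatch row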
tensorflow.raw_ops.addmanysparsetotensorsmap
tf.raw_ops.AddN Add all input tensors element-wise. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AddN tf.raw_ops.AddN( inputs, name=None ) Inputs must be of the same size and shape. x = [9, 7, 10] tf.math.add_n(x) ==> 26 Args inputs A list of at least 1 Tensor objects with the same type in: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64, variant. name A name for the operation (optional). Returns A Tensor. Has the same type as inputs.
tensorflow.raw_ops.addn
tf.raw_ops.AddSparseToTensorsMap Add a SparseTensor to a SparseTensorsMap, return its handle. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AddSparseToTensorsMap tf.raw_ops.AddSparseToTensorsMap( sparse_indices, sparse_values, sparse_shape, container='', shared_name='', name=None ) A SparseTensor is represented by three tensors: sparse_indices, sparse_values, and sparse_shape. This operator takes the given SparseTensor and adds it to a container object (a SparseTensorsMap). A unique key within this container is generated in the form of an int64, and this is the value that is returned. The SparseTensor can then be read out as part of a minibatch by passing the key as a vector element to TakeManySparseFromTensorsMap. To ensure the correct SparseTensorsMap is accessed, ensure that the same container and shared_name are passed to that Op. If no shared_name is provided here, instead use the name of the Operation created by calling AddSparseToTensorsMap as the shared_name passed to TakeManySparseFromTensorsMap. Ensure the Operations are colocated. Args sparse_indices A Tensor of type int64. 2-D. The indices of the SparseTensor. sparse_values A Tensor. 1-D. The values of the SparseTensor. sparse_shape A Tensor of type int64. 1-D. The shape of the SparseTensor. container An optional string. Defaults to "". The container name for the SparseTensorsMap created by this op. shared_name An optional string. Defaults to "". The shared name for the SparseTensorsMap created by this op. If blank, the new Operation's unique name is used. name A name for the operation (optional). Returns A Tensor of type int64.
tensorflow.raw_ops.addsparsetotensorsmap
tf.raw_ops.AddV2 Returns x + y element-wise. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AddV2 tf.raw_ops.AddV2( x, y, name=None ) Note: Add supports broadcasting; AddN does not. Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, int16, uint32, int32, int64, complex64, complex128. y A Tensor. Must have the same type as x. name A name for the operation (optional). Returns A Tensor. Has the same type as x.
tensorflow.raw_ops.addv2
tf.raw_ops.AdjustContrast Deprecated. Disallowed in GraphDef version >= 2. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AdjustContrast tf.raw_ops.AdjustContrast( images, contrast_factor, min_value, max_value, name=None ) Args images A Tensor. Must be one of the following types: uint8, int8, int16, int32, int64, float32, float64. contrast_factor A Tensor of type float32. min_value A Tensor of type float32. max_value A Tensor of type float32. name A name for the operation (optional). Returns A Tensor of type float32.
tensorflow.raw_ops.adjustcontrast
tf.raw_ops.AdjustContrastv2 Adjust the contrast of one or more images. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AdjustContrastv2 tf.raw_ops.AdjustContrastv2( images, contrast_factor, name=None ) images is a tensor of at least 3 dimensions. The last 3 dimensions are interpreted as [height, width, channels]. The other dimensions only represent a collection of images, such as [batch, height, width, channels]. Contrast is adjusted independently for each channel of each image. For each channel, the Op first computes the mean of the image pixels in the channel and then adjusts each component of each pixel to (x - mean) * contrast_factor + mean. Args images A Tensor. Must be one of the following types: half, float32. Images to adjust. At least 3-D. contrast_factor A Tensor of type float32. A float multiplier for adjusting contrast. name A name for the operation (optional). Returns A Tensor. Has the same type as images.
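A sketch checking the op against the documented per-channel formula:
import tensorflow as tf

images = tf.random.uniform([1, 4, 4, 3])  # [batch, height, width, channels]
out = tf.raw_ops.AdjustContrastv2(images=images, contrast_factor=2.0)
# Per-image, per-channel mean over the spatial dimensions:
mean = tf.reduce_mean(images, axis=[1, 2], keepdims=True)
manual = (images - mean) * 2.0 + mean  # agrees with out up to float rounding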
tensorflow.raw_ops.adjustcontrastv2
tf.raw_ops.AdjustHue Adjust the hue of one or more images. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AdjustHue tf.raw_ops.AdjustHue( images, delta, name=None ) images is a tensor of at least 3 dimensions. The last dimension is interpreted as channels, and must be three. The input image is considered in the RGB colorspace. Conceptually, the RGB colors are first mapped into HSV. A delta is then applied to all the hue values, and the result is then remapped back to the RGB colorspace. Args images A Tensor. Must be one of the following types: half, float32. Images to adjust. At least 3-D. delta A Tensor of type float32. A float delta to add to the hue. name A name for the operation (optional). Returns A Tensor. Has the same type as images.
tensorflow.raw_ops.adjusthue
tf.raw_ops.AdjustSaturation Adjust the saturation of one or more images. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AdjustSaturation tf.raw_ops.AdjustSaturation( images, scale, name=None ) images is a tensor of at least 3 dimensions. The last dimension is interpreted as channels, and must be three. The input image is considered in the RGB colorspace. Conceptually, the RGB colors are first mapped into HSV. A scale is then applied to all the saturation values, and the result is then remapped back to the RGB colorspace. Args images A Tensor. Must be one of the following types: half, float32. Images to adjust. At least 3-D. scale A Tensor of type float32. A float scale to add to the saturation. name A name for the operation (optional). Returns A Tensor. Has the same type as images.
tensorflow.raw_ops.adjustsaturation
tf.raw_ops.All Computes the "logical and" of elements across dimensions of a tensor. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.All tf.raw_ops.All( input, axis, keep_dims=False, name=None ) Reduces input along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1. Args input A Tensor of type bool. The tensor to reduce. axis A Tensor. Must be one of the following types: int32, int64. The dimensions to reduce. Must be in the range [-rank(input), rank(input)). keep_dims An optional bool. Defaults to False. If true, retain reduced dimensions with length 1. name A name for the operation (optional). Returns A Tensor of type bool.
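A minimal sketch of reducing along one axis, with and without keep_dims:
import tensorflow as tf

t = tf.constant([[True, True], [True, False]])
tf.raw_ops.All(input=t, axis=[1])                  # => [True, False]
tf.raw_ops.All(input=t, axis=[1], keep_dims=True)  # => [[True], [False]]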
tensorflow.raw_ops.all
tf.raw_ops.AllCandidateSampler Generates labels for candidate sampling with a learned unigram distribution. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AllCandidateSampler tf.raw_ops.AllCandidateSampler( true_classes, num_true, num_sampled, unique, seed=0, seed2=0, name=None ) See explanations of candidate sampling and the data formats at go/candidate-sampling. For each batch, this op picks a single set of sampled candidate labels. The advantages of sampling candidates per-batch are simplicity and the possibility of efficient dense matrix multiplication. The disadvantage is that the sampled candidates must be chosen independently of the context and of the true labels. Args true_classes A Tensor of type int64. A batch_size * num_true matrix, in which each row contains the IDs of the num_true target_classes in the corresponding original label. num_true An int that is >= 1. Number of true labels per context. num_sampled An int that is >= 1. Number of candidates to produce. unique A bool. If unique is true, we sample with rejection, so that all sampled candidates in a batch are unique. This requires some approximation to estimate the post-rejection sampling probabilities. seed An optional int. Defaults to 0. If either seed or seed2 is set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed. seed2 An optional int. Defaults to 0. A second seed to avoid seed collision. name A name for the operation (optional). Returns A tuple of Tensor objects (sampled_candidates, true_expected_count, sampled_expected_count). sampled_candidates A Tensor of type int64. true_expected_count A Tensor of type float32. sampled_expected_count A Tensor of type float32.
tensorflow.raw_ops.allcandidatesampler
tf.raw_ops.AllToAll An Op to exchange data across TPU replicas. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AllToAll tf.raw_ops.AllToAll( input, group_assignment, concat_dimension, split_dimension, split_count, name=None ) On each replica, the input is split into split_count blocks along split_dimension and sent to the other replicas given group_assignment. After receiving split_count - 1 blocks from other replicas, we concatenate the blocks along concat_dimension as the output. For example, suppose there are 2 TPU replicas: replica 0 receives input: [[A, B]] replica 1 receives input: [[C, D]] group_assignment=[[0, 1]] concat_dimension=0 split_dimension=1 split_count=2 replica 0's output: [[A], [C]] replica 1's output: [[B], [D]] Args input A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64, bool. The local input to the exchange. group_assignment A Tensor of type int32. An int32 tensor with shape [num_groups, num_replicas_per_group]. group_assignment[i] represents the replica ids in the ith subgroup. concat_dimension An int. The dimension number to concatenate. split_dimension An int. The dimension number to split. split_count An int. The number of splits; this number must equal the sub-group size (group_assignment.get_shape()[1]). name A name for the operation (optional). Returns A Tensor. Has the same type as input.
tensorflow.raw_ops.alltoall
tf.raw_ops.Angle Returns the argument of a complex number. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Angle tf.raw_ops.Angle( input, Tout=tf.dtypes.float32, name=None ) Given a tensor input of complex numbers, this operation returns a tensor of type float that is the argument of each element in input. All elements in input must be complex numbers of the form \(a + bj\), where a is the real part and b is the imaginary part. The argument returned by this operation is of the form \(atan2(b, a)\). For example: # tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j] tf.angle(input) ==> [2.0132, 1.056] Args input A Tensor. Must be one of the following types: complex64, complex128. Tout An optional tf.DType from: tf.float32, tf.float64. Defaults to tf.float32. name A name for the operation (optional). Returns A Tensor of type Tout. Numpy Compatibility Equivalent to np.angle.
tensorflow.raw_ops.angle
tf.raw_ops.AnonymousIterator A container for an iterator resource. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AnonymousIterator tf.raw_ops.AnonymousIterator( output_types, output_shapes, name=None ) Args output_types A list of tf.DTypes that has length >= 1. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. name A name for the operation (optional). Returns A Tensor of type resource.
tensorflow.raw_ops.anonymousiterator
tf.raw_ops.AnonymousIteratorV2 A container for an iterator resource. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AnonymousIteratorV2 tf.raw_ops.AnonymousIteratorV2( output_types, output_shapes, name=None ) Args output_types A list of tf.DTypes that has length >= 1. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. name A name for the operation (optional). Returns A tuple of Tensor objects (handle, deleter). handle A Tensor of type resource. deleter A Tensor of type variant.
tensorflow.raw_ops.anonymousiteratorv2
tf.raw_ops.AnonymousMemoryCache View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AnonymousMemoryCache tf.raw_ops.AnonymousMemoryCache( name=None ) Args name A name for the operation (optional). Returns A tuple of Tensor objects (handle, deleter). handle A Tensor of type resource. deleter A Tensor of type variant.
tensorflow.raw_ops.anonymousmemorycache
tf.raw_ops.AnonymousMultiDeviceIterator A container for a multi device iterator resource. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AnonymousMultiDeviceIterator tf.raw_ops.AnonymousMultiDeviceIterator( devices, output_types, output_shapes, name=None ) Args devices A list of strings that has length >= 1. output_types A list of tf.DTypes that has length >= 1. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. name A name for the operation (optional). Returns A tuple of Tensor objects (handle, deleter). handle A Tensor of type resource. deleter A Tensor of type variant.
tensorflow.raw_ops.anonymousmultideviceiterator
tf.raw_ops.AnonymousRandomSeedGenerator View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AnonymousRandomSeedGenerator tf.raw_ops.AnonymousRandomSeedGenerator( seed, seed2, name=None ) Args seed A Tensor of type int64. seed2 A Tensor of type int64. name A name for the operation (optional). Returns A tuple of Tensor objects (handle, deleter). handle A Tensor of type resource. deleter A Tensor of type variant.
tensorflow.raw_ops.anonymousrandomseedgenerator
tf.raw_ops.AnonymousSeedGenerator View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AnonymousSeedGenerator tf.raw_ops.AnonymousSeedGenerator( seed, seed2, reshuffle, name=None ) Args seed A Tensor of type int64. seed2 A Tensor of type int64. reshuffle A Tensor of type bool. name A name for the operation (optional). Returns A tuple of Tensor objects (handle, deleter). handle A Tensor of type resource. deleter A Tensor of type variant.
tensorflow.raw_ops.anonymousseedgenerator
tf.raw_ops.Any Computes the "logical or" of elements across dimensions of a tensor. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Any tf.raw_ops.Any( input, axis, keep_dims=False, name=None ) Reduces input along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1. Args input A Tensor of type bool. The tensor to reduce. axis A Tensor. Must be one of the following types: int32, int64. The dimensions to reduce. Must be in the range [-rank(input), rank(input)). keep_dims An optional bool. Defaults to False. If true, retain reduced dimensions with length 1. name A name for the operation (optional). Returns A Tensor of type bool.
tensorflow.raw_ops.any
tf.raw_ops.ApplyAdadelta Update '*var' according to the adadelta scheme. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ApplyAdadelta tf.raw_ops.ApplyAdadelta( var, accum, accum_update, lr, rho, epsilon, grad, use_locking=False, name=None ) accum = rho * accum + (1 - rho) * grad.square(); update = (accum_update + epsilon).sqrt() * (accum + epsilon).rsqrt() * grad; accum_update = rho * accum_update + (1 - rho) * update.square(); var -= update; Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). accum A mutable Tensor. Must have the same type as var. Should be from a Variable(). accum_update A mutable Tensor. Must have the same type as var. Should be from a Variable(). lr A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. rho A Tensor. Must have the same type as var. Decay factor. Must be a scalar. epsilon A Tensor. Must have the same type as var. Constant factor. Must be a scalar. grad A Tensor. Must have the same type as var. The gradient. use_locking An optional bool. Defaults to False. If True, updating of the var, accum and accum_update tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
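The raw op mutates ref-typed variables and is normally reached through an optimizer rather than called directly; a minimal NumPy sketch of the update rule above (names and defaults illustrative):
import numpy as np

def adadelta_step(var, accum, accum_update, grad, lr=1.0, rho=0.95, eps=1e-6):
    accum = rho * accum + (1 - rho) * grad**2
    update = np.sqrt(accum_update + eps) / np.sqrt(accum + eps) * grad
    accum_update = rho * accum_update + (1 - rho) * update**2
    var = var - lr * update  # lr = 1.0 recovers the pseudocode's var -= update
    return var, accum, accum_update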
tensorflow.raw_ops.applyadadelta
tf.raw_ops.ApplyAdagrad Update '*var' according to the adagrad scheme. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ApplyAdagrad tf.raw_ops.ApplyAdagrad( var, accum, lr, grad, use_locking=False, update_slots=True, name=None ) accum += grad * grad var -= lr * grad * (1 / sqrt(accum)) Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). accum A mutable Tensor. Must have the same type as var. Should be from a Variable(). lr A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. grad A Tensor. Must have the same type as var. The gradient. use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. update_slots An optional bool. Defaults to True. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
tensorflow.raw_ops.applyadagrad
tf.raw_ops.ApplyAdagradDA Update '*var' according to the proximal adagrad scheme. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ApplyAdagradDA tf.raw_ops.ApplyAdagradDA( var, gradient_accumulator, gradient_squared_accumulator, grad, lr, l1, l2, global_step, use_locking=False, name=None ) Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). gradient_accumulator A mutable Tensor. Must have the same type as var. Should be from a Variable(). gradient_squared_accumulator A mutable Tensor. Must have the same type as var. Should be from a Variable(). grad A Tensor. Must have the same type as var. The gradient. lr A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. l1 A Tensor. Must have the same type as var. L1 regularization. Must be a scalar. l2 A Tensor. Must have the same type as var. L2 regularization. Must be a scalar. global_step A Tensor of type int64. Training step number. Must be a scalar. use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
tensorflow.raw_ops.applyadagradda
tf.raw_ops.ApplyAdagradV2 Update '*var' according to the adagrad scheme. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ApplyAdagradV2 tf.raw_ops.ApplyAdagradV2( var, accum, lr, epsilon, grad, use_locking=False, update_slots=True, name=None ) accum += grad * grad var -= lr * grad * (1 / (sqrt(accum) + epsilon)) Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). accum A mutable Tensor. Must have the same type as var. Should be from a Variable(). lr A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. epsilon A Tensor. Must have the same type as var. Constant factor. Must be a scalar. grad A Tensor. Must have the same type as var. The gradient. use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. update_slots An optional bool. Defaults to True. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
tensorflow.raw_ops.applyadagradv2
tf.raw_ops.ApplyAdam Update '*var' according to the Adam algorithm. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ApplyAdam tf.raw_ops.ApplyAdam( var, m, v, beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad, use_locking=False, use_nesterov=False, name=None ) $$lr_t := \text{learning\_rate} * \sqrt{1 - \beta_2^t} / (1 - \beta_1^t)$$ $$m_t := \beta_1 * m_{t-1} + (1 - \beta_1) * g$$ $$v_t := \beta_2 * v_{t-1} + (1 - \beta_2) * g * g$$ $$\text{variable} := \text{variable} - lr_t * m_t / (\sqrt{v_t} + \epsilon)$$ Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). m A mutable Tensor. Must have the same type as var. Should be from a Variable(). v A mutable Tensor. Must have the same type as var. Should be from a Variable(). beta1_power A Tensor. Must have the same type as var. Must be a scalar. beta2_power A Tensor. Must have the same type as var. Must be a scalar. lr A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. beta1 A Tensor. Must have the same type as var. Momentum factor. Must be a scalar. beta2 A Tensor. Must have the same type as var. Momentum factor. Must be a scalar. epsilon A Tensor. Must have the same type as var. Ridge term. Must be a scalar. grad A Tensor. Must have the same type as var. The gradient. use_locking An optional bool. Defaults to False. If True, updating of the var, m, and v tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. use_nesterov An optional bool. Defaults to False. If True, uses the Nesterov update. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
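As with the other Apply* ops, this mutates ref-typed variables; a minimal NumPy sketch of the four equations above (names and defaults illustrative):
import numpy as np

def adam_step(var, m, v, grad, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    lr_t = lr * np.sqrt(1 - beta2**t) / (1 - beta1**t)  # bias-corrected step size
    m = beta1 * m + (1 - beta1) * grad                  # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad**2               # second-moment estimate
    var = var - lr_t * m / (np.sqrt(v) + eps)
    return var, m, v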
tensorflow.raw_ops.applyadam
tf.raw_ops.ApplyAdaMax Update '*var' according to the AdaMax algorithm. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ApplyAdaMax tf.raw_ops.ApplyAdaMax( var, m, v, beta1_power, lr, beta1, beta2, epsilon, grad, use_locking=False, name=None ) m_t <- beta1 * m_{t-1} + (1 - beta1) * g v_t <- max(beta2 * v_{t-1}, abs(g)) variable <- variable - learning_rate / (1 - beta1^t) * m_t / (v_t + epsilon) Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). m A mutable Tensor. Must have the same type as var. Should be from a Variable(). v A mutable Tensor. Must have the same type as var. Should be from a Variable(). beta1_power A Tensor. Must have the same type as var. Must be a scalar. lr A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. beta1 A Tensor. Must have the same type as var. Momentum factor. Must be a scalar. beta2 A Tensor. Must have the same type as var. Momentum factor. Must be a scalar. epsilon A Tensor. Must have the same type as var. Ridge term. Must be a scalar. grad A Tensor. Must have the same type as var. The gradient. use_locking An optional bool. Defaults to False. If True, updating of the var, m, and v tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
tensorflow.raw_ops.applyadamax
tf.raw_ops.ApplyAddSign Update '*var' according to the AddSign update. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ApplyAddSign tf.raw_ops.ApplyAddSign( var, m, lr, alpha, sign_decay, beta, grad, use_locking=False, name=None ) m_t <- beta1 * m_{t-1} + (1 - beta1) * g update <- (alpha + sign_decay * sign(g) * sign(m)) * g variable <- variable - lr_t * update Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). m A mutable Tensor. Must have the same type as var. Should be from a Variable(). lr A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. alpha A Tensor. Must have the same type as var. Must be a scalar. sign_decay A Tensor. Must have the same type as var. Must be a scalar. beta A Tensor. Must have the same type as var. Must be a scalar. grad A Tensor. Must have the same type as var. The gradient. use_locking An optional bool. Defaults to False. If True, updating of the var and m tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
tensorflow.raw_ops.applyaddsign
tf.raw_ops.ApplyCenteredRMSProp Update '*var' according to the centered RMSProp algorithm. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ApplyCenteredRMSProp tf.raw_ops.ApplyCenteredRMSProp( var, mg, ms, mom, lr, rho, momentum, epsilon, grad, use_locking=False, name=None ) The centered RMSProp algorithm uses an estimate of the centered second moment (i.e., the variance) for normalization, as opposed to regular RMSProp, which uses the (uncentered) second moment. This often helps with training, but is slightly more expensive in terms of computation and memory. Note that in dense implementation of this algorithm, mg, ms, and mom will update even if the grad is zero, but in this sparse implementation, mg, ms, and mom will not update in iterations during which the grad is zero. mean_square = decay * mean_square + (1-decay) * gradient ** 2 mean_grad = decay * mean_grad + (1-decay) * gradient Delta = learning_rate * gradient / sqrt(mean_square + epsilon - mean_grad ** 2) mg <- rho * mg_{t-1} + (1-rho) * grad ms <- rho * ms_{t-1} + (1-rho) * grad * grad mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms - mg * mg + epsilon) var <- var - mom Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). mg A mutable Tensor. Must have the same type as var. Should be from a Variable(). ms A mutable Tensor. Must have the same type as var. Should be from a Variable(). mom A mutable Tensor. Must have the same type as var. Should be from a Variable(). lr A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. rho A Tensor. Must have the same type as var. Decay rate. Must be a scalar. momentum A Tensor. Must have the same type as var. Momentum Scale. Must be a scalar. epsilon A Tensor. Must have the same type as var. Ridge term. Must be a scalar. grad A Tensor. Must have the same type as var. The gradient. use_locking An optional bool. Defaults to False. If True, updating of the var, mg, ms, and mom tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
tensorflow.raw_ops.applycenteredrmsprop
tf.raw_ops.ApplyFtrl Update '*var' according to the Ftrl-proximal scheme. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ApplyFtrl tf.raw_ops.ApplyFtrl( var, accum, linear, grad, lr, l1, l2, lr_power, use_locking=False, multiply_linear_by_lr=False, name=None ) accum_new = accum + grad * grad linear += grad - (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2 var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0 accum = accum_new Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). accum A mutable Tensor. Must have the same type as var. Should be from a Variable(). linear A mutable Tensor. Must have the same type as var. Should be from a Variable(). grad A Tensor. Must have the same type as var. The gradient. lr A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. l1 A Tensor. Must have the same type as var. L1 regularization. Must be a scalar. l2 A Tensor. Must have the same type as var. L2 regularization. Must be a scalar. lr_power A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. multiply_linear_by_lr An optional bool. Defaults to False. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
tensorflow.raw_ops.applyftrl
tf.raw_ops.ApplyFtrlV2 Update '*var' according to the Ftrl-proximal scheme. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ApplyFtrlV2 tf.raw_ops.ApplyFtrlV2( var, accum, linear, grad, lr, l1, l2, l2_shrinkage, lr_power, use_locking=False, multiply_linear_by_lr=False, name=None ) grad_with_shrinkage = grad + 2 * l2_shrinkage * var accum_new = accum + grad * grad linear += grad_with_shrinkage - (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2 var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0 accum = accum_new Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). accum A mutable Tensor. Must have the same type as var. Should be from a Variable(). linear A mutable Tensor. Must have the same type as var. Should be from a Variable(). grad A Tensor. Must have the same type as var. The gradient. lr A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. l1 A Tensor. Must have the same type as var. L1 regularization. Must be a scalar. l2 A Tensor. Must have the same type as var. L2 regularization. Must be a scalar. l2_shrinkage A Tensor. Must have the same type as var. L2 shrinkage regularization. Must be a scalar. lr_power A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. multiply_linear_by_lr An optional bool. Defaults to False. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
tensorflow.raw_ops.applyftrlv2
tf.raw_ops.ApplyGradientDescent Update '*var' by subtracting 'alpha' * 'delta' from it. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ApplyGradientDescent tf.raw_ops.ApplyGradientDescent( var, alpha, delta, use_locking=False, name=None ) Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). alpha A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. delta A Tensor. Must have the same type as var. The change. use_locking An optional bool. Defaults to False. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
tensorflow.raw_ops.applygradientdescent
tf.raw_ops.ApplyMomentum Update '*var' according to the momentum scheme. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ApplyMomentum tf.raw_ops.ApplyMomentum( var, accum, lr, grad, momentum, use_locking=False, use_nesterov=False, name=None ) Set use_nesterov = True if you want to use Nesterov momentum. accum = accum * momentum + grad var -= lr * accum Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). accum A mutable Tensor. Must have the same type as var. Should be from a Variable(). lr A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. grad A Tensor. Must have the same type as var. The gradient. momentum A Tensor. Must have the same type as var. Momentum. Must be a scalar. use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. use_nesterov An optional bool. Defaults to False. If True, the tensor passed to compute grad will be var - lr * momentum * accum, so in the end, the var you get is actually var - lr * momentum * accum. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
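A minimal NumPy sketch of the two-line update above (plain momentum, i.e. use_nesterov=False; names and defaults illustrative):
import numpy as np

def momentum_step(var, accum, grad, lr=0.01, momentum=0.9):
    accum = momentum * accum + grad  # accum = accum * momentum + grad
    var = var - lr * accum           # var -= lr * accum
    return var, accum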
tensorflow.raw_ops.applymomentum
tf.raw_ops.ApplyPowerSign Update '*var' according to the PowerSign update. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ApplyPowerSign tf.raw_ops.ApplyPowerSign( var, m, lr, logbase, sign_decay, beta, grad, use_locking=False, name=None ) m_t <- beta1 * m_{t-1} + (1 - beta1) * g update <- exp(logbase * sign_decay * sign(g) * sign(m_t)) * g variable <- variable - lr_t * update Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). m A mutable Tensor. Must have the same type as var. Should be from a Variable(). lr A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. logbase A Tensor. Must have the same type as var. Must be a scalar. sign_decay A Tensor. Must have the same type as var. Must be a scalar. beta A Tensor. Must have the same type as var. Must be a scalar. grad A Tensor. Must have the same type as var. The gradient. use_locking An optional bool. Defaults to False. If True, updating of the var and m tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
tensorflow.raw_ops.applypowersign
tf.raw_ops.ApplyProximalAdagrad Update 'var' and 'accum' according to FOBOS with Adagrad learning rate. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ApplyProximalAdagrad tf.raw_ops.ApplyProximalAdagrad( var, accum, lr, l1, l2, grad, use_locking=False, name=None ) accum += grad * grad prox_v = var - lr * grad * (1 / sqrt(accum)) var = sign(prox_v)/(1+lr*l2) * max{|prox_v|-lr*l1,0} Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). accum A mutable Tensor. Must have the same type as var. Should be from a Variable(). lr A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. l1 A Tensor. Must have the same type as var. L1 regularization. Must be a scalar. l2 A Tensor. Must have the same type as var. L2 regularization. Must be a scalar. grad A Tensor. Must have the same type as var. The gradient. use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
tensorflow.raw_ops.applyproximaladagrad
tf.raw_ops.ApplyProximalGradientDescent Update '*var' as FOBOS algorithm with fixed learning rate. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ApplyProximalGradientDescent tf.raw_ops.ApplyProximalGradientDescent( var, alpha, l1, l2, delta, use_locking=False, name=None ) prox_v = var - alpha * delta var = sign(prox_v)/(1+alpha*l2) * max{|prox_v|-alpha*l1,0} Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). alpha A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. l1 A Tensor. Must have the same type as var. L1 regularization. Must be a scalar. l2 A Tensor. Must have the same type as var. L2 regularization. Must be a scalar. delta A Tensor. Must have the same type as var. The change. use_locking An optional bool. Defaults to False. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
tensorflow.raw_ops.applyproximalgradientdescent
tf.raw_ops.ApplyRMSProp Update '*var' according to the RMSProp algorithm. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ApplyRMSProp tf.raw_ops.ApplyRMSProp( var, ms, mom, lr, rho, momentum, epsilon, grad, use_locking=False, name=None ) Note that in the dense implementation of this algorithm, ms and mom will update even if the grad is zero, but in a sparse implementation, ms and mom will not update in iterations during which the grad is zero. mean_square = decay * mean_square + (1-decay) * gradient ** 2 Delta = learning_rate * gradient / sqrt(mean_square + epsilon) ms <- rho * ms_{t-1} + (1-rho) * grad * grad mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon) var <- var - mom Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). ms A mutable Tensor. Must have the same type as var. Should be from a Variable(). mom A mutable Tensor. Must have the same type as var. Should be from a Variable(). lr A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. rho A Tensor. Must have the same type as var. Decay rate. Must be a scalar. momentum A Tensor. Must have the same type as var. epsilon A Tensor. Must have the same type as var. Ridge term. Must be a scalar. grad A Tensor. Must have the same type as var. The gradient. use_locking An optional bool. Defaults to False. If True, updating of the var, ms, and mom tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
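A minimal sketch of one RMSProp step, computed with plain tensors and illustrative values (the raw op itself mutates reference variables in place):

import tensorflow as tf

var = tf.constant([1.0, 2.0])
ms = tf.constant([0.0, 0.0])                   # running mean of squared gradients
mom = tf.constant([0.0, 0.0])
grad = tf.constant([0.5, -0.3])
lr, rho, momentum, epsilon = 0.01, 0.9, 0.0, 1e-10

ms_t = rho * ms + (1.0 - rho) * grad * grad
mom_t = momentum * mom + lr * grad / tf.sqrt(ms_t + epsilon)
var_t = var - mom_t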
tensorflow.raw_ops.applyrmsprop
tf.raw_ops.ApproximateEqual Returns the truth value of abs(x-y) < tolerance element-wise. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ApproximateEqual tf.raw_ops.ApproximateEqual( x, y, tolerance=1e-05, name=None ) Args x A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. y A Tensor. Must have the same type as x. tolerance An optional float. Defaults to 1e-05. name A name for the operation (optional). Returns A Tensor of type bool.
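Raw ops are callable eagerly with keyword arguments, so a quick check looks like:

import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])
y = tf.constant([1.000001, 2.1, 3.0])
tf.raw_ops.ApproximateEqual(x=x, y=y, tolerance=1e-3)
# => [ True False  True]  (only |2.0 - 2.1| exceeds the tolerance)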
tensorflow.raw_ops.approximateequal
tf.raw_ops.ArgMax Returns the index with the largest value across dimensions of a tensor. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ArgMax tf.raw_ops.ArgMax( input, dimension, output_type=tf.dtypes.int64, name=None ) Note that in case of ties the identity of the return value is not guaranteed. Usage: import tensorflow as tf a = [1, 10, 26.9, 2.8, 166.32, 62.3] b = tf.math.argmax(input = a) c = tf.keras.backend.eval(b) # c = 4 # here a[4] = 166.32 which is the largest element of a across axis 0 Args input A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64, bool. dimension A Tensor. Must be one of the following types: int32, int64. int32 or int64, must be in the range [-rank(input), rank(input)). Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0. output_type An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int64. name A name for the operation (optional). Returns A Tensor of type output_type.
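An eager-mode equivalent of the usage above, calling the raw op directly with keyword arguments:

import tensorflow as tf

a = tf.constant([1, 10, 26.9, 2.8, 166.32, 62.3])
tf.raw_ops.ArgMax(input=a, dimension=0)        # => 4, since a[4] = 166.32
tf.raw_ops.ArgMax(input=a, dimension=0, output_type=tf.int32)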
tensorflow.raw_ops.argmax
tf.raw_ops.ArgMin Returns the index with the smallest value across dimensions of a tensor. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.ArgMin tf.raw_ops.ArgMin( input, dimension, output_type=tf.dtypes.int64, name=None ) Note that in case of ties the identity of the return value is not guaranteed. Usage: import tensorflow as tf a = [1, 10, 26.9, 2.8, 166.32, 62.3] b = tf.math.argmin(input = a) c = tf.keras.backend.eval(b) # c = 0 # here a[0] = 1 which is the smallest element of a across axis 0 Args input A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64, bool. dimension A Tensor. Must be one of the following types: int32, int64. int32 or int64, must be in the range [-rank(input), rank(input)). Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0. output_type An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int64. name A name for the operation (optional). Returns A Tensor of type output_type.
tensorflow.raw_ops.argmin
tf.raw_ops.Asin Computes the trigonometric inverse sine of x element-wise. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Asin tf.raw_ops.Asin( x, name=None ) The tf.math.asin operation returns the inverse of tf.math.sin, such that if y = tf.math.sin(x) then x = tf.math.asin(y). Note: The output of tf.math.asin will lie within the invertible range of sine, i.e. [-pi/2, pi/2]. For example: # Note: [1.047, 0.785] ~= [(pi/3), (pi/4)] x = tf.constant([1.047, 0.785]) y = tf.math.sin(x) # [0.8659266, 0.7068252] tf.math.asin(y) # [1.047, 0.785] = x Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int8, int16, int32, int64, complex64, complex128. name A name for the operation (optional). Returns A Tensor. Has the same type as x.
tensorflow.raw_ops.asin
tf.raw_ops.Asinh Computes inverse hyperbolic sine of x element-wise. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Asinh tf.raw_ops.Asinh( x, name=None ) Given an input tensor, this function computes inverse hyperbolic sine for every element in the tensor. Both the input and output have a range of [-inf, inf]. x = tf.constant([-float("inf"), -2, -0.5, 1, 1.2, 200, 10000, float("inf")]) tf.math.asinh(x) ==> [-inf -1.4436355 -0.4812118 0.8813736 1.0159732 5.991471 9.903487 inf] Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128. name A name for the operation (optional). Returns A Tensor. Has the same type as x.
tensorflow.raw_ops.asinh
tf.raw_ops.Assert Asserts that the given condition is true. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Assert tf.raw_ops.Assert( condition, data, summarize=3, name=None ) If condition evaluates to false, print the list of tensors in data. summarize determines how many entries of the tensors to print. Args condition A Tensor of type bool. The condition to evaluate. data A list of Tensor objects. The tensors to print out when condition is false. summarize An optional int. Defaults to 3. Print this many entries of each tensor. name A name for the operation (optional). Returns The created Operation.
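In eager mode the op raises tf.errors.InvalidArgumentError when the condition is false; a minimal sketch:

import tensorflow as tf

x = tf.constant([2.0, 3.0])
# Passes silently here; flip the comparison to see the error, with up to
# `summarize` entries of each tensor in `data` printed in the message.
tf.raw_ops.Assert(condition=tf.reduce_all(x > 0), data=[x], summarize=3)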
tensorflow.raw_ops.assert
tf.raw_ops.AssertCardinalityDataset View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AssertCardinalityDataset tf.raw_ops.AssertCardinalityDataset( input_dataset, cardinality, output_types, output_shapes, name=None ) Args input_dataset A Tensor of type variant. cardinality A Tensor of type int64. output_types A list of tf.DTypes that has length >= 1. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. name A name for the operation (optional). Returns A Tensor of type variant.
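This op is usually reached through an experimental public wrapper rather than called directly; a hedged sketch, assuming tf.data.experimental.assert_cardinality wraps this op:

import tensorflow as tf

# Iterating raises if the dataset does not contain exactly 3 elements.
ds = tf.data.Dataset.range(3).apply(tf.data.experimental.assert_cardinality(3))
for x in ds:
    print(x.numpy())   # 0, 1, 2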
tensorflow.raw_ops.assertcardinalitydataset
tf.raw_ops.AssertNextDataset A transformation that asserts which transformations happen next. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AssertNextDataset tf.raw_ops.AssertNextDataset( input_dataset, transformations, output_types, output_shapes, name=None ) This transformation checks whether the camel-case names (i.e. "FlatMap", not "flat_map") of the transformations following this transformation match the list of names in the transformations argument. If there is a mismatch, the transformation raises an exception. The check occurs when iterating over the contents of the dataset, which means that the check happens after any static optimizations are applied to the dataset graph. Args input_dataset A Tensor of type variant. A variant tensor representing the input dataset. AssertNextDataset passes through the outputs of its input dataset. transformations A Tensor of type string. A tf.string vector tf.Tensor identifying the transformations that are expected to happen next. output_types A list of tf.DTypes that has length >= 1. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. name A name for the operation (optional). Returns A Tensor of type variant.
tensorflow.raw_ops.assertnextdataset
tf.raw_ops.Assign Update 'ref' by assigning 'value' to it. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Assign tf.raw_ops.Assign( ref, value, validate_shape=True, use_locking=True, name=None ) This operation outputs "ref" after the assignment is done. This makes it easier to chain operations that need to use the reset value. Args ref A mutable Tensor. Should be from a Variable node. May be uninitialized. value A Tensor. Must have the same type as ref. The value to be assigned to the variable. validate_shape An optional bool. Defaults to True. If true, the operation will validate that the shape of 'value' matches the shape of the Tensor being assigned to. If false, 'ref' will take on the shape of 'value'. use_locking An optional bool. Defaults to True. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as ref.
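The ref-style raw op targets TF1 reference variables; with TF2 resource variables the equivalent public call is Variable.assign, sketched here:

import tensorflow as tf

v = tf.Variable([1.0, 2.0])
v.assign([10.0, 20.0])       # returns the variable, which eases chaining
print(v.numpy())             # [10. 20.]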
tensorflow.raw_ops.assign
tf.raw_ops.AssignAdd Update 'ref' by adding 'value' to it. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AssignAdd tf.raw_ops.AssignAdd( ref, value, use_locking=False, name=None ) This operation outputs "ref" after the update is done. This makes it easier to chain operations that need to use the reset value. Args ref A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable node. value A Tensor. Must have the same type as ref. The value to be added to the variable. use_locking An optional bool. Defaults to False. If True, the addition will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as ref.
tensorflow.raw_ops.assignadd
tf.raw_ops.AssignAddVariableOp Adds a value to the current value of a variable. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AssignAddVariableOp tf.raw_ops.AssignAddVariableOp( resource, value, name=None ) Any ReadVariableOp with a control dependency on this op is guaranteed to see the incremented value or a subsequent newer one. Args resource A Tensor of type resource. handle to the resource in which to store the variable. value A Tensor. the value by which the variable will be incremented. name A name for the operation (optional). Returns The created Operation.
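The resource-variable form can be invoked eagerly by passing a variable's handle; a minimal sketch:

import tensorflow as tf

v = tf.Variable(1.0)
# Increment the variable in place through its resource handle.
tf.raw_ops.AssignAddVariableOp(resource=v.handle, value=tf.constant(2.5))
print(v.numpy())             # 3.5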
tensorflow.raw_ops.assignaddvariableop
tf.raw_ops.AssignSub Update 'ref' by subtracting 'value' from it. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AssignSub tf.raw_ops.AssignSub( ref, value, use_locking=False, name=None ) This operation outputs "ref" after the update is done. This makes it easier to chain operations that need to use the reset value. Args ref A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable node. value A Tensor. Must have the same type as ref. The value to be subtracted from the variable. use_locking An optional bool. Defaults to False. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as ref.
tensorflow.raw_ops.assignsub
tf.raw_ops.AssignSubVariableOp Subtracts a value from the current value of a variable. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AssignSubVariableOp tf.raw_ops.AssignSubVariableOp( resource, value, name=None ) Any ReadVariableOp with a control dependency on this op is guaranteed to see the decremented value or a subsequent newer one. Args resource A Tensor of type resource. handle to the resource in which to store the variable. value A Tensor. the value by which the variable will be decremented. name A name for the operation (optional). Returns The created Operation.
tensorflow.raw_ops.assignsubvariableop
tf.raw_ops.AssignVariableOp Assigns a new value to a variable. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AssignVariableOp tf.raw_ops.AssignVariableOp( resource, value, name=None ) Any ReadVariableOp with a control dependency on this op is guaranteed to return this value or a subsequent newer value of the variable. Args resource A Tensor of type resource. handle to the resource in which to store the variable. value A Tensor. the value to assign to the variable. name A name for the operation (optional). Returns The created Operation.
tensorflow.raw_ops.assignvariableop
tf.raw_ops.AsString Converts each entry in the given tensor to strings. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AsString tf.raw_ops.AsString( input, precision=-1, scientific=False, shortest=False, width=-1, fill='', name=None ) Supports many numeric types and boolean. For Unicode, see the https://www.tensorflow.org/tutorials/representation/unicode tutorial. Examples: tf.strings.as_string([3, 2]) <tf.Tensor: shape=(2,), dtype=string, numpy=array([b'3', b'2'], dtype=object)> tf.strings.as_string([3.1415926, 2.71828], precision=2).numpy() array([b'3.14', b'2.72'], dtype=object) Args input A Tensor. Must be one of the following types: int8, int16, int32, int64, complex64, complex128, float32, float64, bool. precision An optional int. Defaults to -1. The post-decimal precision to use for floating point numbers. Only used if precision > -1. scientific An optional bool. Defaults to False. Use scientific notation for floating point numbers. shortest An optional bool. Defaults to False. Use shortest representation (either scientific or standard) for floating point numbers. width An optional int. Defaults to -1. Pad pre-decimal numbers to this width. Applies to both floating point and integer numbers. Only used if width > -1. fill An optional string. Defaults to "". The value to pad if width > -1. If empty, pads with spaces. Another typical value is '0'. String cannot be longer than 1 character. name A name for the operation (optional). Returns A Tensor of type string.
tensorflow.raw_ops.asstring
tf.raw_ops.Atan Computes the trigonometric inverse tangent of x element-wise. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Atan tf.raw_ops.Atan( x, name=None ) The tf.math.atan operation returns the inverse of tf.math.tan, such that if y = tf.math.tan(x) then x = tf.math.atan(y). Note: The output of tf.math.atan will lie within the invertible range of tan, i.e. (-pi/2, pi/2). For example: # Note: [1.047, 0.785] ~= [(pi/3), (pi/4)] x = tf.constant([1.047, 0.785]) y = tf.math.tan(x) # [1.731261, 0.99920404] tf.math.atan(y) # [1.047, 0.785] = x Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int8, int16, int32, int64, complex64, complex128. name A name for the operation (optional). Returns A Tensor. Has the same type as x.
tensorflow.raw_ops.atan
tf.raw_ops.Atan2 Computes arctangent of y/x element-wise, respecting signs of the arguments. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Atan2 tf.raw_ops.Atan2( y, x, name=None ) This is the angle \( \theta \in [-\pi, \pi] \) such that \( x = r \cos(\theta) \) and \( y = r \sin(\theta) \), where \( r = \sqrt{x^2 + y^2} \). Args y A Tensor. Must be one of the following types: bfloat16, half, float32, float64. x A Tensor. Must have the same type as y. name A name for the operation (optional). Returns A Tensor. Has the same type as y.
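Unlike atan(y/x), the quadrant of (x, y) is preserved; for example:

import tensorflow as tf

y = tf.constant([1.0, -1.0, 0.0])
x = tf.constant([1.0, -1.0, -2.0])
tf.raw_ops.Atan2(y=y, x=x)
# => [ pi/4, -3*pi/4, pi ], whereas atan(y/x) would fold the last two
# results into the right half-plane.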
tensorflow.raw_ops.atan2
tf.raw_ops.Atanh Computes inverse hyperbolic tangent of x element-wise. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Atanh tf.raw_ops.Atanh( x, name=None ) Given an input tensor, this function computes inverse hyperbolic tangent for every element in the tensor. Input range is [-1,1] and output range is [-inf, inf]. If input is -1, output will be -inf and if the input is 1, output will be inf. Values outside the range will have nan as output. x = tf.constant([-float("inf"), -1, -0.5, 1, 0, 0.5, 10, float("inf")]) tf.math.atanh(x) ==> [nan -inf -0.54930615 inf 0. 0.54930615 nan nan] Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128. name A name for the operation (optional). Returns A Tensor. Has the same type as x.
tensorflow.raw_ops.atanh
tf.raw_ops.AudioSpectrogram Produces a visualization of audio data over time. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AudioSpectrogram tf.raw_ops.AudioSpectrogram( input, window_size, stride, magnitude_squared=False, name=None ) Spectrograms are a standard way of representing audio information as a series of slices of frequency information, one slice for each window of time. By joining these together into a sequence, they form a distinctive fingerprint of the sound over time. This op expects to receive audio data as an input, stored as floats in the range -1 to 1, together with a window width in samples, and a stride specifying how far to move the window between slices. From this it generates a three dimensional output. The first dimension is for the channels in the input, so a stereo audio input would have two here for example. The second dimension is time, with successive frequency slices. The third dimension has an amplitude value for each frequency during that time slice. This means the layout when converted and saved as an image is rotated 90 degrees clockwise from a typical spectrogram. Time runs down the Y axis, and frequency decreases from left to right. Each value in the result represents the square root of the sum of the squares of the real and imaginary parts of an FFT on the current window of samples. In this way, the lowest dimension represents the power of each frequency in the current window, and adjacent windows are concatenated in the next dimension. To get a more intuitive and visual look at what this operation does, you can run tensorflow/examples/wav_to_spectrogram to read in an audio file and save out the resulting spectrogram as a PNG image. Args input A Tensor of type float32. Float representation of audio data. window_size An int. How wide the input window is in samples. For the highest efficiency this should be a power of two, but other values are accepted. stride An int. How widely apart the center of adjacent sample windows should be. magnitude_squared An optional bool. Defaults to False. Whether to return the squared magnitude or just the magnitude. Using squared magnitude can avoid extra calculations. name A name for the operation (optional). Returns A Tensor of type float32.
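A minimal sketch on synthetic mono audio; the [samples, channels] float32 layout assumed here matches what DecodeWav emits:

import tensorflow as tf

samples = tf.sin(tf.linspace(0.0, 100.0, 16000))[:, tf.newaxis]  # [16000, 1]
spectrogram = tf.raw_ops.AudioSpectrogram(
    input=samples, window_size=256, stride=128)
print(spectrogram.shape)     # (channels, time_slices, frequency_bins)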
tensorflow.raw_ops.audiospectrogram
tf.raw_ops.AudioSummary Outputs a Summary protocol buffer with audio. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AudioSummary tf.raw_ops.AudioSummary( tag, tensor, sample_rate, max_outputs=3, name=None ) The summary has up to max_outputs summary values containing audio. The audio is built from tensor which must be 3-D with shape [batch_size, frames, channels] or 2-D with shape [batch_size, frames]. The values are assumed to be in the range of [-1.0, 1.0] with a sample rate of sample_rate. The tag argument is a scalar Tensor of type string. It is used to build the tag of the summary values: If max_outputs is 1, the summary value tag is 'tag/audio'. If max_outputs is greater than 1, the summary value tags are generated sequentially as 'tag/audio/0', 'tag/audio/1', etc. Args tag A Tensor of type string. Scalar. Used to build the tag attribute of the summary values. tensor A Tensor of type float32. 2-D of shape [batch_size, frames]. sample_rate A float. The sample rate of the signal in hertz. max_outputs An optional int that is >= 1. Defaults to 3. Max number of batch elements to generate audio for. name A name for the operation (optional). Returns A Tensor of type string.
tensorflow.raw_ops.audiosummary
tf.raw_ops.AudioSummaryV2 Outputs a Summary protocol buffer with audio. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AudioSummaryV2 tf.raw_ops.AudioSummaryV2( tag, tensor, sample_rate, max_outputs=3, name=None ) The summary has up to max_outputs summary values containing audio. The audio is built from tensor which must be 3-D with shape [batch_size, frames, channels] or 2-D with shape [batch_size, frames]. The values are assumed to be in the range of [-1.0, 1.0] with a sample rate of sample_rate. The tag argument is a scalar Tensor of type string. It is used to build the tag of the summary values: If max_outputs is 1, the summary value tag is 'tag/audio'. If max_outputs is greater than 1, the summary value tags are generated sequentially as 'tag/audio/0', 'tag/audio/1', etc. Args tag A Tensor of type string. Scalar. Used to build the tag attribute of the summary values. tensor A Tensor of type float32. 2-D of shape [batch_size, frames]. sample_rate A Tensor of type float32. The sample rate of the signal in hertz. max_outputs An optional int that is >= 1. Defaults to 3. Max number of batch elements to generate audio for. name A name for the operation (optional). Returns A Tensor of type string.
tensorflow.raw_ops.audiosummaryv2
tf.raw_ops.AutoShardDataset Creates a dataset that shards the input dataset. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AutoShardDataset tf.raw_ops.AutoShardDataset( input_dataset, num_workers, index, output_types, output_shapes, auto_shard_policy=0, num_replicas=0, name=None ) Creates a dataset that shards the input dataset by num_workers, returning a sharded dataset for the index-th worker. This attempts to automatically shard a dataset by examining the Dataset graph and inserting a shard op before the inputs to a reader Dataset (e.g. CSVDataset, TFRecordDataset). This dataset will throw a NotFound error if we cannot shard the dataset automatically. Args input_dataset A Tensor of type variant. A variant tensor representing the input dataset. num_workers A Tensor of type int64. A scalar representing the number of workers to distribute this dataset across. index A Tensor of type int64. A scalar representing the index of the current worker out of num_workers. output_types A list of tf.DTypes that has length >= 1. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. auto_shard_policy An optional int. Defaults to 0. num_replicas An optional int. Defaults to 0. name A name for the operation (optional). Returns A Tensor of type variant.
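Auto-sharding is normally requested through dataset options inside a distribution strategy rather than by calling the raw op directly; a hedged sketch (the file name below is a placeholder):

import tensorflow as tf

options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = (
    tf.data.experimental.AutoShardPolicy.DATA)
# "file1.tfrecord" is a placeholder path; nothing is read until iteration.
ds = tf.data.TFRecordDataset(["file1.tfrecord"]).with_options(options)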
tensorflow.raw_ops.autosharddataset
tf.raw_ops.AvgPool Performs average pooling on the input. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AvgPool tf.raw_ops.AvgPool( value, ksize, strides, padding, data_format='NHWC', name=None ) Each entry in output is the mean of the corresponding size ksize window in value. Args value A Tensor. Must be one of the following types: half, bfloat16, float32, float64. 4-D with shape [batch, height, width, channels]. ksize A list of ints that has length >= 4. The size of the sliding window for each dimension of value. strides A list of ints that has length >= 4. The stride of the sliding window for each dimension of value. padding A string from: "SAME", "VALID". The type of padding algorithm to use. data_format An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width]. name A name for the operation (optional). Returns A Tensor. Has the same type as value.
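A minimal eager example; each output entry is the mean of one 2x2 window:

import tensorflow as tf

x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])   # NHWC
tf.raw_ops.AvgPool(value=x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
                   padding="VALID")
# => [[2.5, 4.5], [10.5, 12.5]] (with shape [1, 2, 2, 1])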
tensorflow.raw_ops.avgpool
tf.raw_ops.AvgPool3D Performs 3D average pooling on the input. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AvgPool3D tf.raw_ops.AvgPool3D( input, ksize, strides, padding, data_format='NDHWC', name=None ) Each entry in output is the mean of the corresponding size ksize window in value. Args input A Tensor. Must be one of the following types: half, bfloat16, float32, float64. Shape [batch, depth, rows, cols, channels] tensor to pool over. ksize A list of ints that has length >= 5. 1-D tensor of length 5. The size of the window for each dimension of the input tensor. Must have ksize[0] = ksize[4] = 1. strides A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1. padding A string from: "SAME", "VALID". The type of padding algorithm to use. data_format An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width]. name A name for the operation (optional). Returns A Tensor. Has the same type as input.
tensorflow.raw_ops.avgpool3d
tf.raw_ops.AvgPool3DGrad Computes gradients of average pooling function. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AvgPool3DGrad tf.raw_ops.AvgPool3DGrad( orig_input_shape, grad, ksize, strides, padding, data_format='NDHWC', name=None ) Args orig_input_shape A Tensor of type int32. The original input dimensions. grad A Tensor. Must be one of the following types: half, bfloat16, float32, float64. Output backprop of shape [batch, depth, rows, cols, channels]. ksize A list of ints that has length >= 5. 1-D tensor of length 5. The size of the window for each dimension of the input tensor. Must have ksize[0] = ksize[4] = 1. strides A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1. padding A string from: "SAME", "VALID". The type of padding algorithm to use. data_format An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width]. name A name for the operation (optional). Returns A Tensor. Has the same type as grad.
tensorflow.raw_ops.avgpool3dgrad
tf.raw_ops.AvgPoolGrad Computes gradients of the average pooling function. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.AvgPoolGrad tf.raw_ops.AvgPoolGrad( orig_input_shape, grad, ksize, strides, padding, data_format='NHWC', name=None ) Args orig_input_shape A Tensor of type int32. 1-D. Shape of the original input to avg_pool. grad A Tensor. Must be one of the following types: half, bfloat16, float32, float64. 4-D with shape [batch, height, width, channels]. Gradients w.r.t. the output of avg_pool. ksize A list of ints that has length >= 4. The size of the sliding window for each dimension of the input. strides A list of ints that has length >= 4. The stride of the sliding window for each dimension of the input. padding A string from: "SAME", "VALID". The type of padding algorithm to use. data_format An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width]. name A name for the operation (optional). Returns A Tensor. Has the same type as grad.
tensorflow.raw_ops.avgpoolgrad
tf.raw_ops.BandedTriangularSolve View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.BandedTriangularSolve tf.raw_ops.BandedTriangularSolve( matrix, rhs, lower=True, adjoint=False, name=None ) Args matrix A Tensor. Must be one of the following types: float64, float32, half, complex64, complex128. rhs A Tensor. Must have the same type as matrix. lower An optional bool. Defaults to True. adjoint An optional bool. Defaults to False. name A name for the operation (optional). Returns A Tensor. Has the same type as matrix.
tensorflow.raw_ops.bandedtriangularsolve
tf.raw_ops.Barrier Defines a barrier that persists across different graph executions. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Barrier tf.raw_ops.Barrier( component_types, shapes=[], capacity=-1, container='', shared_name='', name=None ) A barrier represents a key-value map, where each key is a string, and each value is a tuple of tensors. At runtime, the barrier contains 'complete' and 'incomplete' elements. A complete element has defined tensors for all components of its value tuple, and may be accessed using BarrierTakeMany. An incomplete element has some undefined components in its value tuple, and may be updated using BarrierInsertMany. Args component_types A list of tf.DTypes that has length >= 1. The type of each component in a value. shapes An optional list of shapes (each a tf.TensorShape or list of ints). Defaults to []. The shape of each component in a value. Each shape must be 1 in the first dimension. The length of this attr must be the same as the length of component_types. capacity An optional int. Defaults to -1. The capacity of the barrier. The default capacity is MAX_INT32, which is the largest capacity of the underlying queue. container An optional string. Defaults to "". If non-empty, this barrier is placed in the given container. Otherwise, a default container is used. shared_name An optional string. Defaults to "". If non-empty, this barrier will be shared under the given name across multiple sessions. name A name for the operation (optional). Returns A Tensor of type mutable string.
tensorflow.raw_ops.barrier
tf.raw_ops.BarrierClose Closes the given barrier. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.BarrierClose tf.raw_ops.BarrierClose( handle, cancel_pending_enqueues=False, name=None ) This operation signals that no more new elements will be inserted in the given barrier. Subsequent InsertMany that try to introduce a new key will fail. Subsequent InsertMany operations that just add missing components to already existing elements will continue to succeed. Subsequent TakeMany operations will continue to succeed if sufficient completed elements remain in the barrier. Subsequent TakeMany operations that would block will fail immediately. Args handle A Tensor of type mutable string. The handle to a barrier. cancel_pending_enqueues An optional bool. Defaults to False. If true, all pending enqueue requests that are blocked on the barrier's queue will be canceled. InsertMany will fail, even if no new key is introduced. name A name for the operation (optional). Returns The created Operation.
tensorflow.raw_ops.barrierclose
tf.raw_ops.BarrierIncompleteSize Computes the number of incomplete elements in the given barrier. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.BarrierIncompleteSize tf.raw_ops.BarrierIncompleteSize( handle, name=None ) Args handle A Tensor of type mutable string. The handle to a barrier. name A name for the operation (optional). Returns A Tensor of type int32.
tensorflow.raw_ops.barrierincompletesize
tf.raw_ops.BarrierInsertMany For each key, assigns the respective value to the specified component. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.BarrierInsertMany tf.raw_ops.BarrierInsertMany( handle, keys, values, component_index, name=None ) If a key is not found in the barrier, this operation will create a new incomplete element. If a key is found in the barrier, and the element already has a value at component_index, this operation will fail with INVALID_ARGUMENT, and leave the barrier in an undefined state. Args handle A Tensor of type mutable string. The handle to a barrier. keys A Tensor of type string. A one-dimensional tensor of keys, with length n. values A Tensor. An any-dimensional tensor of values, which are associated with the respective keys. The 0th dimension must have length n. component_index An int. The component of the barrier elements that is being assigned. name A name for the operation (optional). Returns The created Operation.
tensorflow.raw_ops.barrierinsertmany
tf.raw_ops.BarrierReadySize Computes the number of complete elements in the given barrier. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.BarrierReadySize tf.raw_ops.BarrierReadySize( handle, name=None ) Args handle A Tensor of type mutable string. The handle to a barrier. name A name for the operation (optional). Returns A Tensor of type int32.
tensorflow.raw_ops.barrierreadysize
tf.raw_ops.BarrierTakeMany Takes the given number of completed elements from a barrier. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.BarrierTakeMany tf.raw_ops.BarrierTakeMany( handle, num_elements, component_types, allow_small_batch=False, wait_for_incomplete=False, timeout_ms=-1, name=None ) This operation concatenates completed-element component tensors along the 0th dimension to make a single component tensor. Elements come out of the barrier when they are complete, and in the order in which they were placed into the barrier. The indices output provides information about the batch in which each element was originally inserted into the barrier. Args handle A Tensor of type mutable string. The handle to a barrier. num_elements A Tensor of type int32. A single-element tensor containing the number of elements to take. component_types A list of tf.DTypes that has length >= 1. The type of each component in a value. allow_small_batch An optional bool. Defaults to False. Allows returning fewer than num_elements items if the barrier is already closed. wait_for_incomplete An optional bool. Defaults to False. timeout_ms An optional int. Defaults to -1. If the queue is empty, this operation will block for up to timeout_ms milliseconds. Note: This option is not supported yet. name A name for the operation (optional). Returns A tuple of Tensor objects (indices, keys, values). indices A Tensor of type int64. keys A Tensor of type string. values A list of Tensor objects of type component_types.
tensorflow.raw_ops.barriertakemany
tf.raw_ops.Batch Batches all input tensors nondeterministically. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.Batch tf.raw_ops.Batch( in_tensors, num_batch_threads, max_batch_size, batch_timeout_micros, grad_timeout_micros, max_enqueued_batches=10, allowed_batch_sizes=[], container='', shared_name='', batching_queue='', name=None ) When many instances of this Op are being run concurrently with the same container/shared_name in the same device, some will output zero-shaped Tensors and others will output Tensors of size up to max_batch_size. All Tensors in in_tensors are batched together (so, for example, labels and features should be batched with a single instance of this operation). Each invocation of batch emits an id scalar which will be used to identify this particular invocation when doing unbatch or its gradient. Each op which emits a non-empty batch will also emit a non-empty batch_index Tensor, which is a [K, 3] matrix where each row contains the invocation's id, start, and length of elements of each set of Tensors present in batched_tensors. Batched tensors are concatenated along the first dimension, and all tensors in in_tensors must have the first dimension of the same size. in_tensors: The tensors to be batched. num_batch_threads: Number of scheduling threads for processing batches of work. Determines the number of batches processed in parallel. max_batch_size: Batch sizes will never be bigger than this. batch_timeout_micros: Maximum number of microseconds to wait before outputting an incomplete batch. allowed_batch_sizes: Optional list of allowed batch sizes. If left empty, does nothing. Otherwise, supplies a list of batch sizes, causing the op to pad batches up to one of those sizes. The entries must increase monotonically, and the final entry must equal max_batch_size. grad_timeout_micros: The timeout to use for the gradient. See Unbatch. batched_tensors: Either empty tensors or a batch of concatenated Tensors. batch_index: If out_tensors is non-empty, has information to invert it. container: Controls the scope of sharing of this batch. id: always contains a scalar with a unique ID for this invocation of Batch. shared_name: Concurrently running instances of batch in the same device with the same container and shared_name will batch their elements together. If left empty, the op name will be used as the shared name. T: the types of tensors to be batched. Args in_tensors A list of Tensor objects. num_batch_threads An int. max_batch_size An int. batch_timeout_micros An int. grad_timeout_micros An int. max_enqueued_batches An optional int. Defaults to 10. allowed_batch_sizes An optional list of ints. Defaults to []. container An optional string. Defaults to "". shared_name An optional string. Defaults to "". batching_queue An optional string. Defaults to "". name A name for the operation (optional). Returns A tuple of Tensor objects (batched_tensors, batch_index, id). batched_tensors A list of Tensor objects. Has the same type as in_tensors. batch_index A Tensor of type int64. id A Tensor of type int64.
tensorflow.raw_ops.batch
tf.raw_ops.BatchCholesky View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.BatchCholesky tf.raw_ops.BatchCholesky( input, name=None ) Args input A Tensor. Must be one of the following types: float64, float32. name A name for the operation (optional). Returns A Tensor. Has the same type as input.
tensorflow.raw_ops.batchcholesky
tf.raw_ops.BatchCholeskyGrad View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.raw_ops.BatchCholeskyGrad tf.raw_ops.BatchCholeskyGrad( l, grad, name=None ) Args l A Tensor. Must be one of the following types: float32, float64. grad A Tensor. Must have the same type as l. name A name for the operation (optional). Returns A Tensor. Has the same type as l.
tensorflow.raw_ops.batchcholeskygrad