tf.raw_ops.ResourceApplyAddSign Update '*var' according to the AddSign update. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceApplyAddSign
tf.raw_ops.ResourceApplyAddSign(
var, m, lr, alpha, sign_decay, beta, grad, use_locking=False, name=None
)
m_t <- beta1 * m_{t-1} + (1 - beta1) * g
update <- (alpha + sign_decay * sign(g) * sign(m_t)) * g
variable <- variable - lr_t * update
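The snippet below is a minimal eager-mode sketch of calling this op directly; the tensor values are purely illustrative, and note that raw ops take resource handles (var.handle), not Variable objects. The op updates var and m in place.
import tensorflow as tf

var = tf.Variable([1.0, 2.0, 3.0])   # illustrative values
m = tf.Variable([0.0, 0.0, 0.0])     # first-moment slot
grad = tf.constant([0.1, -0.2, 0.3])

tf.raw_ops.ResourceApplyAddSign(
    var=var.handle, m=m.handle,
    lr=tf.constant(0.01), alpha=tf.constant(1.0),
    sign_decay=tf.constant(0.9), beta=tf.constant(0.9),
    grad=grad)
print(var.numpy())  # var has been updated in place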
Args
var A Tensor of type resource. Should be from a Variable().
m A Tensor of type resource. Should be from a Variable().
lr A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Scaling factor. Must be a scalar.
alpha A Tensor. Must have the same type as lr. Must be a scalar.
sign_decay A Tensor. Must have the same type as lr. Must be a scalar.
beta A Tensor. Must have the same type as lr. Must be a scalar.
grad A Tensor. Must have the same type as lr. The gradient.
use_locking An optional bool. Defaults to False. If True, updating of the var and m tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourceapplyaddsign |
tf.raw_ops.ResourceApplyCenteredRMSProp Update '*var' according to the centered RMSProp algorithm. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceApplyCenteredRMSProp
tf.raw_ops.ResourceApplyCenteredRMSProp(
var, mg, ms, mom, lr, rho, momentum, epsilon, grad, use_locking=False, name=None
)
The centered RMSProp algorithm uses an estimate of the centered second moment (i.e., the variance) for normalization, as opposed to regular RMSProp, which uses the (uncentered) second moment. This often helps with training, but is slightly more expensive in terms of computation and memory. Note that in dense implementation of this algorithm, mg, ms, and mom will update even if the grad is zero, but in this sparse implementation, mg, ms, and mom will not update in iterations during which the grad is zero.
mean_square = decay * mean_square + (1-decay) * gradient ** 2
mean_grad = decay * mean_grad + (1-decay) * gradient
Delta = learning_rate * gradient / sqrt(mean_square + epsilon - mean_grad ** 2)
mg <- rho * mg_{t-1} + (1-rho) * grad
ms <- rho * ms_{t-1} + (1-rho) * grad * grad
mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms - mg * mg + epsilon)
var <- var - mom
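As a reading aid only, here is the centered-RMSProp step above transcribed into NumPy; the hyperparameter values are arbitrary and this is not the op's actual kernel.
import numpy as np

rho, momentum, lr, epsilon = 0.9, 0.9, 0.01, 1e-7
var = np.array([1.0, 2.0]); grad = np.array([0.1, -0.3])
mg = np.zeros_like(var); ms = np.zeros_like(var); mom = np.zeros_like(var)

mg = rho * mg + (1 - rho) * grad
ms = rho * ms + (1 - rho) * grad * grad
mom = momentum * mom + lr * grad / np.sqrt(ms - mg * mg + epsilon)
var = var - mom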
Args
var A Tensor of type resource. Should be from a Variable().
mg A Tensor of type resource. Should be from a Variable().
ms A Tensor of type resource. Should be from a Variable().
mom A Tensor of type resource. Should be from a Variable().
lr A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Scaling factor. Must be a scalar.
rho A Tensor. Must have the same type as lr. Decay rate. Must be a scalar.
momentum A Tensor. Must have the same type as lr. Momentum Scale. Must be a scalar.
epsilon A Tensor. Must have the same type as lr. Ridge term. Must be a scalar.
grad A Tensor. Must have the same type as lr. The gradient.
use_locking An optional bool. Defaults to False. If True, updating of the var, mg, ms, and mom tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourceapplycenteredrmsprop |
tf.raw_ops.ResourceApplyFtrl Update '*var' according to the Ftrl-proximal scheme. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceApplyFtrl
tf.raw_ops.ResourceApplyFtrl(
var, accum, linear, grad, lr, l1, l2, lr_power, use_locking=False,
multiply_linear_by_lr=False, name=None
)
accum_new = accum + grad * grad
linear += grad - (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var
quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2
var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0
accum = accum_new
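Below is a rough NumPy transcription of the update above, kept only as an illustration; it assumes scalar hyperparameters and elementwise arithmetic.
import numpy as np

lr, l1, l2, lr_power = 0.1, 0.01, 0.01, -0.5
var = np.array([0.5, -0.5]); grad = np.array([0.2, 0.1])
accum = np.full_like(var, 0.1); linear = np.zeros_like(var)

accum_new = accum + grad * grad
linear += grad - (accum_new ** (-lr_power) - accum ** (-lr_power)) / lr * var
quadratic = 1.0 / (accum_new ** lr_power * lr) + 2 * l2
var = np.where(np.abs(linear) > l1,
               (np.sign(linear) * l1 - linear) / quadratic, 0.0)
accum = accum_new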
Args
var A Tensor of type resource. Should be from a Variable().
accum A Tensor of type resource. Should be from a Variable().
linear A Tensor of type resource. Should be from a Variable().
grad A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. The gradient.
lr A Tensor. Must have the same type as grad. Scaling factor. Must be a scalar.
l1 A Tensor. Must have the same type as grad. L1 regularization. Must be a scalar.
l2 A Tensor. Must have the same type as grad. L2 regularization. Must be a scalar.
lr_power A Tensor. Must have the same type as grad. Scaling factor. Must be a scalar.
use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
multiply_linear_by_lr An optional bool. Defaults to False.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourceapplyftrl |
tf.raw_ops.ResourceApplyFtrlV2 Update '*var' according to the Ftrl-proximal scheme. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceApplyFtrlV2
tf.raw_ops.ResourceApplyFtrlV2(
var, accum, linear, grad, lr, l1, l2, l2_shrinkage, lr_power, use_locking=False,
multiply_linear_by_lr=False, name=None
)
grad_with_shrinkage = grad + 2 * l2_shrinkage * var
accum_new = accum + grad_with_shrinkage * grad_with_shrinkage
linear += grad_with_shrinkage + (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var
quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2
var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0
accum = accum_new
Args
var A Tensor of type resource. Should be from a Variable().
accum A Tensor of type resource. Should be from a Variable().
linear A Tensor of type resource. Should be from a Variable().
grad A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. The gradient.
lr A Tensor. Must have the same type as grad. Scaling factor. Must be a scalar.
l1 A Tensor. Must have the same type as grad. L1 regularization. Must be a scalar.
l2 A Tensor. Must have the same type as grad. L2 shrinkage regularization. Must be a scalar.
l2_shrinkage A Tensor. Must have the same type as grad.
lr_power A Tensor. Must have the same type as grad. Scaling factor. Must be a scalar.
use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
multiply_linear_by_lr An optional bool. Defaults to False.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourceapplyftrlv2 |
tf.raw_ops.ResourceApplyGradientDescent Update '*var' by subtracting 'alpha' * 'delta' from it. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceApplyGradientDescent
tf.raw_ops.ResourceApplyGradientDescent(
var, alpha, delta, use_locking=False, name=None
)
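A minimal eager-mode example of this op; the values are arbitrary, and var is modified in place (var <- var - alpha * delta).
import tensorflow as tf

var = tf.Variable([1.0, 2.0, 3.0])
tf.raw_ops.ResourceApplyGradientDescent(
    var=var.handle,
    alpha=tf.constant(0.1),               # scalar step size
    delta=tf.constant([1.0, 1.0, 1.0]))   # the change to scale and subtract
print(var.numpy())  # [0.9 1.9 2.9]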
Args
var A Tensor of type resource. Should be from a Variable().
alpha A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Scaling factor. Must be a scalar.
delta A Tensor. Must have the same type as alpha. The change.
use_locking An optional bool. Defaults to False. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourceapplygradientdescent |
tf.raw_ops.ResourceApplyKerasMomentum Update '*var' according to the momentum scheme. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceApplyKerasMomentum
tf.raw_ops.ResourceApplyKerasMomentum(
var, accum, lr, grad, momentum, use_locking=False, use_nesterov=False, name=None
)
Set use_nesterov = True if you want to use Nesterov momentum.
accum = accum * momentum - lr * grad
var += accum
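For readability, a NumPy sketch of this (Keras-style) momentum step; the values are illustrative. Note that ResourceApplyMomentum below instead folds lr into the variable update rather than the accumulator.
import numpy as np

lr, momentum = 0.1, 0.9
var = np.array([1.0, 2.0]); accum = np.zeros_like(var)
grad = np.array([0.5, -0.5])

accum = accum * momentum - lr * grad   # lr is applied to the accumulator here
var += accum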
Args
var A Tensor of type resource. Should be from a Variable().
accum A Tensor of type resource. Should be from a Variable().
lr A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Scaling factor. Must be a scalar.
grad A Tensor. Must have the same type as lr. The gradient.
momentum A Tensor. Must have the same type as lr. Momentum. Must be a scalar.
use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
use_nesterov An optional bool. Defaults to False. If True, the tensor passed to compute grad will be var + momentum * accum, so in the end, the var you get is actually var + momentum * accum.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourceapplykerasmomentum |
tf.raw_ops.ResourceApplyMomentum Update '*var' according to the momentum scheme. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceApplyMomentum
tf.raw_ops.ResourceApplyMomentum(
var, accum, lr, grad, momentum, use_locking=False, use_nesterov=False, name=None
)
Set use_nesterov = True if you want to use Nesterov momentum.
accum = accum * momentum + grad
var -= lr * accum
Args
var A Tensor of type resource. Should be from a Variable().
accum A Tensor of type resource. Should be from a Variable().
lr A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Scaling factor. Must be a scalar.
grad A Tensor. Must have the same type as lr. The gradient.
momentum A Tensor. Must have the same type as lr. Momentum. Must be a scalar.
use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
use_nesterov An optional bool. Defaults to False. If True, the tensor passed to compute grad will be var - lr * momentum * accum, so in the end, the var you get is actually var - lr * momentum * accum.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourceapplymomentum |
tf.raw_ops.ResourceApplyPowerSign Update '*var' according to the PowerSign update. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceApplyPowerSign
tf.raw_ops.ResourceApplyPowerSign(
var, m, lr, logbase, sign_decay, beta, grad, use_locking=False, name=None
)
m_t <- beta1 * m_{t-1} + (1 - beta1) * g
update <- exp(logbase * sign_decay * sign(g) * sign(m_t)) * g
variable <- variable - lr_t * update
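A hedged NumPy sketch of the PowerSign step above, with arbitrary values; logbase is the natural log of the chosen base (1.0 corresponds to base e).
import numpy as np

lr, logbase, sign_decay, beta = 0.01, 1.0, 0.9, 0.9
var = np.array([1.0, -1.0]); m = np.zeros_like(var)
grad = np.array([0.2, 0.3])

m = beta * m + (1 - beta) * grad
update = np.exp(logbase * sign_decay * np.sign(grad) * np.sign(m)) * grad
var = var - lr * update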
Args
var A Tensor of type resource. Should be from a Variable().
m A Tensor of type resource. Should be from a Variable().
lr A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Scaling factor. Must be a scalar.
logbase A Tensor. Must have the same type as lr. Must be a scalar.
sign_decay A Tensor. Must have the same type as lr. Must be a scalar.
beta A Tensor. Must have the same type as lr. Must be a scalar.
grad A Tensor. Must have the same type as lr. The gradient.
use_locking An optional bool. Defaults to False. If True, updating of the var and m tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourceapplypowersign |
tf.raw_ops.ResourceApplyProximalAdagrad Update 'var' and 'accum' according to FOBOS with Adagrad learning rate. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceApplyProximalAdagrad
tf.raw_ops.ResourceApplyProximalAdagrad(
var, accum, lr, l1, l2, grad, use_locking=False, name=None
)
accum += grad * grad
prox_v = var - lr * grad * (1 / sqrt(accum))
var = sign(prox_v)/(1+lr*l2) * max{|prox_v|-lr*l1,0}
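As an illustration only, the same proximal-Adagrad step in NumPy with made-up values:
import numpy as np

lr, l1, l2 = 0.1, 0.01, 0.01
var = np.array([0.5, -0.5]); accum = np.full_like(var, 0.1)
grad = np.array([0.2, 0.1])

accum += grad * grad
prox_v = var - lr * grad / np.sqrt(accum)
var = np.sign(prox_v) / (1 + lr * l2) * np.maximum(np.abs(prox_v) - lr * l1, 0.0)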
Args
var A Tensor of type resource. Should be from a Variable().
accum A Tensor of type resource. Should be from a Variable().
lr A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Scaling factor. Must be a scalar.
l1 A Tensor. Must have the same type as lr. L1 regularization. Must be a scalar.
l2 A Tensor. Must have the same type as lr. L2 regularization. Must be a scalar.
grad A Tensor. Must have the same type as lr. The gradient.
use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourceapplyproximaladagrad |
tf.raw_ops.ResourceApplyProximalGradientDescent Update '*var' as FOBOS algorithm with fixed learning rate. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceApplyProximalGradientDescent
tf.raw_ops.ResourceApplyProximalGradientDescent(
var, alpha, l1, l2, delta, use_locking=False, name=None
)
prox_v = var - alpha * delta
var = sign(prox_v)/(1+alpha*l2) * max{|prox_v|-alpha*l1,0}
Args
var A Tensor of type resource. Should be from a Variable().
alpha A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Scaling factor. Must be a scalar.
l1 A Tensor. Must have the same type as alpha. L1 regularization. Must be a scalar.
l2 A Tensor. Must have the same type as alpha. L2 regularization. Must be a scalar.
delta A Tensor. Must have the same type as alpha. The change.
use_locking An optional bool. Defaults to False. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourceapplyproximalgradientdescent |
tf.raw_ops.ResourceApplyRMSProp Update '*var' according to the RMSProp algorithm. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceApplyRMSProp
tf.raw_ops.ResourceApplyRMSProp(
var, ms, mom, lr, rho, momentum, epsilon, grad, use_locking=False, name=None
)
Note that in dense implementation of this algorithm, ms and mom will update even if the grad is zero, but in this sparse implementation, ms and mom will not update in iterations during which the grad is zero.
mean_square = decay * mean_square + (1-decay) * gradient ** 2
Delta = learning_rate * gradient / sqrt(mean_square + epsilon)
ms <- rho * ms_{t-1} + (1-rho) * grad * grad
mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)
var <- var - mom
Args
var A Tensor of type resource. Should be from a Variable().
ms A Tensor of type resource. Should be from a Variable().
mom A Tensor of type resource. Should be from a Variable().
lr A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Scaling factor. Must be a scalar.
rho A Tensor. Must have the same type as lr. Decay rate. Must be a scalar.
momentum A Tensor. Must have the same type as lr.
epsilon A Tensor. Must have the same type as lr. Ridge term. Must be a scalar.
grad A Tensor. Must have the same type as lr. The gradient.
use_locking An optional bool. Defaults to False. If True, updating of the var, ms, and mom tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourceapplyrmsprop |
tf.raw_ops.ResourceConditionalAccumulator A conditional accumulator for aggregating gradients. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceConditionalAccumulator
tf.raw_ops.ResourceConditionalAccumulator(
dtype, shape, container='', shared_name='',
reduction_type='MEAN', name=None
)
The accumulator accepts gradients marked with local_step greater or equal to the most recent global_step known to the accumulator. The average can be extracted from the accumulator, provided sufficient gradients have been accumulated. Extracting the average automatically resets the aggregate to 0, and increments the global_step recorded by the accumulator. This is a resource version of ConditionalAccumulator that will work in TF2.0 with tf.cond version 2.
Args
dtype A tf.DType from: tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.complex64, tf.int64, tf.qint8, tf.quint8, tf.qint32, tf.bfloat16, tf.uint16, tf.complex128, tf.half, tf.uint32, tf.uint64. The type of the value being accumulated.
shape A tf.TensorShape or list of ints. The shape of the values, can be [], in which case shape is unknown.
container An optional string. Defaults to "". If non-empty, this accumulator is placed in the given container. Otherwise, a default container is used.
shared_name An optional string. Defaults to "". If non-empty, this accumulator will be shared under the given name across multiple sessions.
reduction_type An optional string from: "MEAN", "SUM". Defaults to "MEAN".
name A name for the operation (optional).
Returns A Tensor of type resource. | tensorflow.raw_ops.resourceconditionalaccumulator |
tf.raw_ops.ResourceCountUpTo Increments variable pointed to by 'resource' until it reaches 'limit'. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceCountUpTo
tf.raw_ops.ResourceCountUpTo(
resource, limit, T, name=None
)
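A small eager example; it assumes the counter is an int64 scalar variable. Each call increments the variable, and an OutOfRange error is raised once incrementing would push it past limit.
import tensorflow as tf

counter = tf.Variable(0, dtype=tf.int64)
for _ in range(3):
    value = tf.raw_ops.ResourceCountUpTo(resource=counter.handle, limit=3, T=tf.int64)
    print(int(value))  # pre-increment values: 0, 1, 2; a fourth call would raise OutOfRange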
Args
resource A Tensor of type resource. Should be from a scalar Variable node.
limit An int. If incrementing ref would bring it above limit, instead generates an 'OutOfRange' error.
T A tf.DType from: tf.int32, tf.int64.
name A name for the operation (optional).
Returns A Tensor of type T. | tensorflow.raw_ops.resourcecountupto |
tf.raw_ops.ResourceGather Gather slices from the variable pointed to by resource according to indices. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceGather
tf.raw_ops.ResourceGather(
resource, indices, dtype, batch_dims=0, validate_indices=True, name=None
)
indices must be an integer tensor of any dimension (usually 0-D or 1-D). Produces an output tensor with shape indices.shape + params.shape[1:] where:
# Scalar indices
output[:, ..., :] = params[indices, :, ... :]
# Vector indices
output[i, :, ..., :] = params[indices[i], :, ... :]
# Higher rank indices
output[i, ..., j, :, ... :] = params[indices[i, ..., j], :, ..., :]
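A brief eager example of gathering rows from a resource variable; the data is illustrative, and dtype must match the variable's dtype.
import tensorflow as tf

params = tf.Variable([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
rows = tf.raw_ops.ResourceGather(
    resource=params.handle,
    indices=tf.constant([2, 0]),
    dtype=tf.float32)
print(rows.numpy())  # [[5. 6.] [1. 2.]]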
Args
resource A Tensor of type resource.
indices A Tensor. Must be one of the following types: int32, int64.
dtype A tf.DType.
batch_dims An optional int. Defaults to 0.
validate_indices An optional bool. Defaults to True.
name A name for the operation (optional).
Returns A Tensor of type dtype. | tensorflow.raw_ops.resourcegather |
tf.raw_ops.ResourceGatherNd View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceGatherNd
tf.raw_ops.ResourceGatherNd(
resource, indices, dtype, name=None
)
Args
resource A Tensor of type resource.
indices A Tensor. Must be one of the following types: int32, int64.
dtype A tf.DType.
name A name for the operation (optional).
Returns A Tensor of type dtype. | tensorflow.raw_ops.resourcegathernd |
tf.raw_ops.ResourceScatterAdd Adds sparse updates to the variable referenced by resource. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceScatterAdd
tf.raw_ops.ResourceScatterAdd(
resource, indices, updates, name=None
)
This operation computes
# Scalar indices
ref[indices, ...] += updates[...]
# Vector indices (for each i)
ref[indices[i], ...] += updates[i, ...]
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] += updates[i, ..., j, ...]
Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions add. Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
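A short eager example with a repeated index, showing that duplicate contributions accumulate; the numbers are arbitrary.
import tensorflow as tf

v = tf.Variable([1.0, 1.0, 1.0, 1.0])
tf.raw_ops.ResourceScatterAdd(
    resource=v.handle,
    indices=tf.constant([0, 0, 3]),          # index 0 appears twice
    updates=tf.constant([1.0, 2.0, 5.0]))
print(v.numpy())  # [4. 1. 1. 6.]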
Args
resource A Tensor of type resource. Should be from a Variable node.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
updates A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. A tensor of updated values to add to ref.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourcescatteradd |
tf.raw_ops.ResourceScatterDiv Divides sparse updates into the variable referenced by resource. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceScatterDiv
tf.raw_ops.ResourceScatterDiv(
resource, indices, updates, name=None
)
This operation computes
# Scalar indices
ref[indices, ...] /= updates[...]
# Vector indices (for each i)
ref[indices[i], ...] /= updates[i, ...]
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] /= updates[i, ..., j, ...]
Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions multiply. Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
Args
resource A Tensor of type resource. Should be from a Variable node.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
updates A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. A tensor of values by which ref is divided.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourcescatterdiv |
tf.raw_ops.ResourceScatterMax Reduces sparse updates into the variable referenced by resource using the max operation. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceScatterMax
tf.raw_ops.ResourceScatterMax(
resource, indices, updates, name=None
)
This operation computes
# Scalar indices
ref[indices, ...] = max(ref[indices, ...], updates[...])
# Vector indices (for each i)
ref[indices[i], ...] = max(ref[indices[i], ...], updates[i, ...])
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] = max(ref[indices[i, ..., j], ...], updates[i, ..., j, ...])
Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions are combined. Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
Args
resource A Tensor of type resource. Should be from a Variable node.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
updates A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. A tensor of updated values to reduce into ref using the max operation.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourcescattermax |
tf.raw_ops.ResourceScatterMin Reduces sparse updates into the variable referenced by resource using the min operation. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceScatterMin
tf.raw_ops.ResourceScatterMin(
resource, indices, updates, name=None
)
This operation computes
# Scalar indices
ref[indices, ...] = min(ref[indices, ...], updates[...])
# Vector indices (for each i)
ref[indices[i], ...] = min(ref[indices[i], ...], updates[i, ...])
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] = min(ref[indices[i, ..., j], ...], updates[i, ..., j, ...])
Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions are combined. Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
Args
resource A Tensor of type resource. Should be from a Variable node.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
updates A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. A tensor of updated values to reduce into ref using the min operation.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourcescattermin |
tf.raw_ops.ResourceScatterMul Multiplies sparse updates into the variable referenced by resource. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceScatterMul
tf.raw_ops.ResourceScatterMul(
resource, indices, updates, name=None
)
This operation computes
# Scalar indices
ref[indices, ...] *= updates[...]
# Vector indices (for each i)
ref[indices[i], ...] *= updates[i, ...]
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] *= updates[i, ..., j, ...]
Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions multiply. Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
Args
resource A Tensor of type resource. Should be from a Variable node.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
updates A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. A tensor of updated values to multiply with ref.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourcescattermul |
tf.raw_ops.ResourceScatterNdAdd Applies sparse addition to individual values or slices in a Variable. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceScatterNdAdd
tf.raw_ops.ResourceScatterNdAdd(
ref, indices, updates, use_locking=True, name=None
)
ref is a Tensor with rank P and indices is a Tensor of rank Q. indices must be an integer tensor, containing indices into ref. It must be of shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P. The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref. updates is a Tensor of rank Q-1+P-K with shape: [d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]
For example, say we want to add 4 scattered elements to a rank-1 tensor with 8 elements. In Python, that addition would look like this:
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8], use_resource=True)
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
add = tf.scatter_nd_add(ref, indices, updates)
with tf.Session() as sess:
  print(sess.run(add))
The resulting update to ref would look like this: [1, 13, 3, 14, 14, 6, 7, 20]
See tf.scatter_nd for more details about how to make updates to slices.
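For completeness, a hedged TF2/eager sketch that calls the raw op on a resource variable handle directly, using the same indices and updates as the example above:
import tensorflow as tf

v = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8], dtype=tf.int32)
tf.raw_ops.ResourceScatterNdAdd(
    ref=v.handle,
    indices=tf.constant([[4], [3], [1], [7]]),
    updates=tf.constant([9, 10, 11, 12], dtype=tf.int32))
print(v.numpy())  # [ 1 13  3 14 14  6  7 20]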
Args
ref A Tensor of type resource. A resource handle. Must be from a VarHandleOp.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into ref.
updates A Tensor. Must have the same type as ref. A tensor of values to add to ref.
use_locking An optional bool. Defaults to True. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourcescatterndadd |
tf.raw_ops.ResourceScatterNdMax View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceScatterNdMax
tf.raw_ops.ResourceScatterNdMax(
ref, indices, updates, use_locking=True, name=None
)
Args
ref A Tensor of type resource. A resource handle. Must be from a VarHandleOp.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into ref.
updates A Tensor. Must have the same type as ref. A tensor of values whose element-wise max is taken with ref.
use_locking An optional bool. Defaults to True. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourcescatterndmax |
tf.raw_ops.ResourceScatterNdMin View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceScatterNdMin
tf.raw_ops.ResourceScatterNdMin(
ref, indices, updates, use_locking=True, name=None
)
Args
ref A Tensor of type resource. A resource handle. Must be from a VarHandleOp.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into ref.
updates A Tensor. Must have the same type as ref. A tensor of values whose element-wise min is taken with ref.
use_locking An optional bool. Defaults to True. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourcescatterndmin |
tf.raw_ops.ResourceScatterNdSub Applies sparse subtraction to individual values or slices in a Variable. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceScatterNdSub
tf.raw_ops.ResourceScatterNdSub(
ref, indices, updates, use_locking=True, name=None
)
ref is a Tensor with rank P and indices is a Tensor of rank Q. indices must be an integer tensor, containing indices into ref. It must be of shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P. The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref. updates is a Tensor of rank Q-1+P-K with shape: [d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]
For example, say we want to subtract 4 scattered elements from a rank-1 tensor with 8 elements. In Python, that subtraction would look like this: ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8], use_resource=True)
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
sub = tf.scatter_nd_sub(ref, indices, updates)
with tf.Session() as sess:
print(sess.run(sub))
The resulting update to ref would look like this: [1, -9, 3, -6, -4, 6, 7, -4]
See tf.scatter_nd for more details about how to make updates to slices.
Args
ref A Tensor of type resource. A resource handle. Must be from a VarHandleOp.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into ref.
updates A Tensor. Must have the same type as ref. A tensor of values to subtract from ref.
use_locking An optional bool. Defaults to True. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourcescatterndsub |
tf.raw_ops.ResourceScatterNdUpdate Applies sparse updates to individual values or slices within a given variable according to indices. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceScatterNdUpdate
tf.raw_ops.ResourceScatterNdUpdate(
ref, indices, updates, use_locking=True, name=None
)
ref is a Tensor with rank P and indices is a Tensor of rank Q. indices must be an integer tensor, containing indices into ref. It must be of shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P. The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref. updates is a Tensor of rank Q-1+P-K with shape: [d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
For example, say we want to update 4 scattered elements in a rank-1 tensor with 8 elements. In Python, that update would look like this:
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
update = tf.scatter_nd_update(ref, indices, updates)
with tf.Session() as sess:
  print(sess.run(update))
The resulting update to ref would look like this: [1, 11, 3, 10, 9, 6, 7, 12]
See tf.scatter_nd for more details about how to make updates to slices.
Args
ref A Tensor of type resource. A resource handle. Must be from a VarHandleOp.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into ref.
updates A Tensor. Must have the same type as ref. A tensor of updated values to assign to ref.
use_locking An optional bool. Defaults to True. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourcescatterndupdate |
tf.raw_ops.ResourceScatterSub Subtracts sparse updates from the variable referenced by resource. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceScatterSub
tf.raw_ops.ResourceScatterSub(
resource, indices, updates, name=None
)
This operation computes
# Scalar indices
ref[indices, ...] -= updates[...]
# Vector indices (for each i)
ref[indices[i], ...] -= updates[i, ...]
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] -= updates[i, ..., j, ...]
Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions add. Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
Args
resource A Tensor of type resource. Should be from a Variable node.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
updates A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. A tensor of updated values to subtract from ref.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourcescattersub |
tf.raw_ops.ResourceScatterUpdate Assigns sparse updates to the variable referenced by resource. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceScatterUpdate
tf.raw_ops.ResourceScatterUpdate(
resource, indices, updates, name=None
)
This operation computes
# Scalar indices
ref[indices, ...] = updates[...]
# Vector indices (for each i)
ref[indices[i], ...] = updates[i, ...]
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] = updates[i, ..., j, ...]
Args
resource A Tensor of type resource. Should be from a Variable node.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
updates A Tensor. A tensor of updated values to assign to ref.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourcescatterupdate |
tf.raw_ops.ResourceSparseApplyAdadelta Update relevant entries in '*var' according to the adadelta scheme. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceSparseApplyAdadelta
tf.raw_ops.ResourceSparseApplyAdadelta(
var, accum, accum_update, lr, rho, epsilon, grad, indices, use_locking=False,
name=None
)
Args
var A Tensor of type resource. Should be from a Variable().
accum A Tensor of type resource. Should be from a Variable().
accum_update A Tensor of type resource. Should be from a Variable().
lr A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Learning rate. Must be a scalar.
rho A Tensor. Must have the same type as lr. Decay factor. Must be a scalar.
epsilon A Tensor. Must have the same type as lr. Constant factor. Must be a scalar.
grad A Tensor. Must have the same type as lr. The gradient.
indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var and accum.
use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourcesparseapplyadadelta |
tf.raw_ops.ResourceSparseApplyAdagrad Update relevant entries in 'var' and 'accum' according to the adagrad scheme. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceSparseApplyAdagrad
tf.raw_ops.ResourceSparseApplyAdagrad(
var, accum, lr, grad, indices, use_locking=False, update_slots=True, name=None
)
That is for rows we have grad for, we update var and accum as follows:
accum += grad * grad
var -= lr * grad * (1 / sqrt(accum))
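As a hedged illustration of the sparse semantics, the NumPy sketch below updates only the rows named in indices and leaves the other rows of var and accum untouched; all values are made up.
import numpy as np

lr = 0.1
var = np.ones((4, 2), dtype=np.float32)
accum = np.full((4, 2), 0.1, dtype=np.float32)
indices = np.array([0, 3])                       # rows that receive an update
grad = np.array([[0.5, -0.5], [1.0, 1.0]], dtype=np.float32)

accum[indices] += grad * grad
var[indices] -= lr * grad / np.sqrt(accum[indices])
# rows 1 and 2 of var and accum are unchanged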
Args
var A Tensor of type resource. Should be from a Variable().
accum A Tensor of type resource. Should be from a Variable().
lr A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Learning rate. Must be a scalar.
grad A Tensor. Must have the same type as lr. The gradient.
indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var and accum.
use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
update_slots An optional bool. Defaults to True.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourcesparseapplyadagrad |
tf.raw_ops.ResourceSparseApplyAdagradDA Update entries in 'var' and 'accum' according to the proximal adagrad scheme. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceSparseApplyAdagradDA
tf.raw_ops.ResourceSparseApplyAdagradDA(
var, gradient_accumulator, gradient_squared_accumulator, grad, indices, lr, l1,
l2, global_step, use_locking=False, name=None
)
Args
var A Tensor of type resource. Should be from a Variable().
gradient_accumulator A Tensor of type resource. Should be from a Variable().
gradient_squared_accumulator A Tensor of type resource. Should be from a Variable().
grad A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. The gradient.
indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var and accum.
lr A Tensor. Must have the same type as grad. Learning rate. Must be a scalar.
l1 A Tensor. Must have the same type as grad. L1 regularization. Must be a scalar.
l2 A Tensor. Must have the same type as grad. L2 regularization. Must be a scalar.
global_step A Tensor of type int64. Training step number. Must be a scalar.
use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourcesparseapplyadagradda |
tf.raw_ops.ResourceSparseApplyAdagradV2 Update relevant entries in 'var' and 'accum' according to the adagrad scheme. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceSparseApplyAdagradV2
tf.raw_ops.ResourceSparseApplyAdagradV2(
var, accum, lr, epsilon, grad, indices, use_locking=False, update_slots=True,
name=None
)
That is for rows we have grad for, we update var and accum as follows:
accum += grad * grad
var -= lr * grad * (1 / sqrt(accum))
Args
var A Tensor of type resource. Should be from a Variable().
accum A Tensor of type resource. Should be from a Variable().
lr A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Learning rate. Must be a scalar.
epsilon A Tensor. Must have the same type as lr. Constant factor. Must be a scalar.
grad A Tensor. Must have the same type as lr. The gradient.
indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var and accum.
use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
update_slots An optional bool. Defaults to True.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourcesparseapplyadagradv2 |
tf.raw_ops.ResourceSparseApplyCenteredRMSProp Update '*var' according to the centered RMSProp algorithm. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceSparseApplyCenteredRMSProp
tf.raw_ops.ResourceSparseApplyCenteredRMSProp(
var, mg, ms, mom, lr, rho, momentum, epsilon, grad, indices, use_locking=False,
name=None
)
The centered RMSProp algorithm uses an estimate of the centered second moment (i.e., the variance) for normalization, as opposed to regular RMSProp, which uses the (uncentered) second moment. This often helps with training, but is slightly more expensive in terms of computation and memory. Note that in dense implementation of this algorithm, mg, ms, and mom will update even if the grad is zero, but in this sparse implementation, mg, ms, and mom will not update in iterations during which the grad is zero.
mean_square = decay * mean_square + (1-decay) * gradient ** 2
mean_grad = decay * mean_grad + (1-decay) * gradient
Delta = learning_rate * gradient / sqrt(mean_square + epsilon - mean_grad ** 2)
ms <- rho * ms_{t-1} + (1-rho) * grad * grad
mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)
var <- var - mom
Args
var A Tensor of type resource. Should be from a Variable().
mg A Tensor of type resource. Should be from a Variable().
ms A Tensor of type resource. Should be from a Variable().
mom A Tensor of type resource. Should be from a Variable().
lr A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Scaling factor. Must be a scalar.
rho A Tensor. Must have the same type as lr. Decay rate. Must be a scalar.
momentum A Tensor. Must have the same type as lr.
epsilon A Tensor. Must have the same type as lr. Ridge term. Must be a scalar.
grad A Tensor. Must have the same type as lr. The gradient.
indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var, ms and mom.
use_locking An optional bool. Defaults to False. If True, updating of the var, mg, ms, and mom tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourcesparseapplycenteredrmsprop |
tf.raw_ops.ResourceSparseApplyFtrl Update relevant entries in '*var' according to the Ftrl-proximal scheme. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceSparseApplyFtrl
tf.raw_ops.ResourceSparseApplyFtrl(
var, accum, linear, grad, indices, lr, l1, l2, lr_power, use_locking=False,
multiply_linear_by_lr=False, name=None
)
That is for rows we have grad for, we update var, accum and linear as follows:
accum_new = accum + grad * grad
linear += grad - (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var
quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2
var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0
accum = accum_new
Args
var A Tensor of type resource. Should be from a Variable().
accum A Tensor of type resource. Should be from a Variable().
linear A Tensor of type resource. Should be from a Variable().
grad A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. The gradient.
indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var and accum.
lr A Tensor. Must have the same type as grad. Scaling factor. Must be a scalar.
l1 A Tensor. Must have the same type as grad. L1 regularization. Must be a scalar.
l2 A Tensor. Must have the same type as grad. L2 regularization. Must be a scalar.
lr_power A Tensor. Must have the same type as grad. Scaling factor. Must be a scalar.
use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
multiply_linear_by_lr An optional bool. Defaults to False.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourcesparseapplyftrl |
tf.raw_ops.ResourceSparseApplyFtrlV2 Update relevant entries in '*var' according to the Ftrl-proximal scheme. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceSparseApplyFtrlV2
tf.raw_ops.ResourceSparseApplyFtrlV2(
var, accum, linear, grad, indices, lr, l1, l2, l2_shrinkage, lr_power,
use_locking=False, multiply_linear_by_lr=False, name=None
)
That is for rows we have grad for, we update var, accum and linear as follows:
grad_with_shrinkage = grad + 2 * l2_shrinkage * var
accum_new = accum + grad_with_shrinkage * grad_with_shrinkage
linear += grad_with_shrinkage + (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var
quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2
var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0
accum = accum_new
Args
var A Tensor of type resource. Should be from a Variable().
accum A Tensor of type resource. Should be from a Variable().
linear A Tensor of type resource. Should be from a Variable().
grad A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. The gradient.
indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var and accum.
lr A Tensor. Must have the same type as grad. Scaling factor. Must be a scalar.
l1 A Tensor. Must have the same type as grad. L1 regularization. Must be a scalar.
l2 A Tensor. Must have the same type as grad. L2 shrinkage regularization. Must be a scalar.
l2_shrinkage A Tensor. Must have the same type as grad.
lr_power A Tensor. Must have the same type as grad. Scaling factor. Must be a scalar.
use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
multiply_linear_by_lr An optional bool. Defaults to False.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourcesparseapplyftrlv2 |
tf.raw_ops.ResourceSparseApplyKerasMomentum Update relevant entries in 'var' and 'accum' according to the momentum scheme. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceSparseApplyKerasMomentum
tf.raw_ops.ResourceSparseApplyKerasMomentum(
var, accum, lr, grad, indices, momentum, use_locking=False, use_nesterov=False,
name=None
)
Set use_nesterov = True if you want to use Nesterov momentum. That is for rows we have grad for, we update var and accum as follows:
accum = accum * momentum - lr * grad
var += accum
Args
var A Tensor of type resource. Should be from a Variable().
accum A Tensor of type resource. Should be from a Variable().
lr A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Learning rate. Must be a scalar.
grad A Tensor. Must have the same type as lr. The gradient.
indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var and accum.
momentum A Tensor. Must have the same type as lr. Momentum. Must be a scalar.
use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
use_nesterov An optional bool. Defaults to False. If True, the tensor passed to compute grad will be var + momentum * accum, so in the end, the var you get is actually var + momentum * accum.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourcesparseapplykerasmomentum |
tf.raw_ops.ResourceSparseApplyMomentum Update relevant entries in 'var' and 'accum' according to the momentum scheme. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceSparseApplyMomentum
tf.raw_ops.ResourceSparseApplyMomentum(
var, accum, lr, grad, indices, momentum, use_locking=False, use_nesterov=False,
name=None
)
Set use_nesterov = True if you want to use Nesterov momentum. That is for rows we have grad for, we update var and accum as follows:
accum = accum * momentum + grad
var -= lr * accum
Args
var A Tensor of type resource. Should be from a Variable().
accum A Tensor of type resource. Should be from a Variable().
lr A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Learning rate. Must be a scalar.
grad A Tensor. Must have the same type as lr. The gradient.
indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var and accum.
momentum A Tensor. Must have the same type as lr. Momentum. Must be a scalar.
use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
use_nesterov An optional bool. Defaults to False. If True, the tensor passed to compute grad will be var - lr * momentum * accum, so in the end, the var you get is actually var - lr * momentum * accum.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourcesparseapplymomentum |
tf.raw_ops.ResourceSparseApplyProximalAdagrad Sparse update entries in 'var' and 'accum' according to FOBOS algorithm. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceSparseApplyProximalAdagrad
tf.raw_ops.ResourceSparseApplyProximalAdagrad(
var, accum, lr, l1, l2, grad, indices, use_locking=False, name=None
)
That is for rows we have grad for, we update var and accum as follows:
accum += grad * grad
prox_v = var
prox_v -= lr * grad * (1 / sqrt(accum))
var = sign(prox_v)/(1+lr*l2) * max{|prox_v|-lr*l1,0}
Args
var A Tensor of type resource. Should be from a Variable().
accum A Tensor of type resource. Should be from a Variable().
lr A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Learning rate. Must be a scalar.
l1 A Tensor. Must have the same type as lr. L1 regularization. Must be a scalar.
l2 A Tensor. Must have the same type as lr. L2 regularization. Must be a scalar.
grad A Tensor. Must have the same type as lr. The gradient.
indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var and accum.
use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourcesparseapplyproximaladagrad |
tf.raw_ops.ResourceSparseApplyProximalGradientDescent Sparse update '*var' as FOBOS algorithm with fixed learning rate. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceSparseApplyProximalGradientDescent
tf.raw_ops.ResourceSparseApplyProximalGradientDescent(
var, alpha, l1, l2, grad, indices, use_locking=False, name=None
)
That is for rows we have grad for, we update var as follows:
prox_v = var - alpha * grad
var = sign(prox_v)/(1+alpha*l2) * max{|prox_v|-alpha*l1,0}
Args
var A Tensor of type resource. Should be from a Variable().
alpha A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Scaling factor. Must be a scalar.
l1 A Tensor. Must have the same type as alpha. L1 regularization. Must be a scalar.
l2 A Tensor. Must have the same type as alpha. L2 regularization. Must be a scalar.
grad A Tensor. Must have the same type as alpha. The gradient.
indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var and accum.
use_locking An optional bool. Defaults to False. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourcesparseapplyproximalgradientdescent |
tf.raw_ops.ResourceSparseApplyRMSProp Update '*var' according to the RMSProp algorithm. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceSparseApplyRMSProp
tf.raw_ops.ResourceSparseApplyRMSProp(
var, ms, mom, lr, rho, momentum, epsilon, grad, indices, use_locking=False,
name=None
)
Note that in dense implementation of this algorithm, ms and mom will update even if the grad is zero, but in this sparse implementation, ms and mom will not update in iterations during which the grad is zero.
mean_square = decay * mean_square + (1-decay) * gradient ** 2
Delta = learning_rate * gradient / sqrt(mean_square + epsilon)
ms <- rho * ms_{t-1} + (1-rho) * grad * grad
mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)
var <- var - mom
Args
var A Tensor of type resource. Should be from a Variable().
ms A Tensor of type resource. Should be from a Variable().
mom A Tensor of type resource. Should be from a Variable().
lr A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Scaling factor. Must be a scalar.
rho A Tensor. Must have the same type as lr. Decay rate. Must be a scalar.
momentum A Tensor. Must have the same type as lr.
epsilon A Tensor. Must have the same type as lr. Ridge term. Must be a scalar.
grad A Tensor. Must have the same type as lr. The gradient.
indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var, ms and mom.
use_locking An optional bool. Defaults to False. If True, updating of the var, ms, and mom tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourcesparseapplyrmsprop |
tf.raw_ops.ResourceStridedSliceAssign Assign value to the sliced l-value reference of ref. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceStridedSliceAssign
tf.raw_ops.ResourceStridedSliceAssign(
ref, begin, end, strides, value, begin_mask=0, end_mask=0, ellipsis_mask=0,
new_axis_mask=0, shrink_axis_mask=0, name=None
)
The values of value are assigned to the positions in the variable ref that are selected by the slice parameters. The slice parameters begin, end, strides, etc. work exactly as in StridedSlice. NOTE this op currently does not support broadcasting and so value's shape must be exactly the shape produced by the slice of ref.
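For example, a minimal eager-mode sketch (illustrative values; the resource handle comes from a tf.Variable, which is an assumption of this sketch):
import tensorflow as tf

v = tf.Variable([0, 1, 2, 3, 4])

# Assign into v[1:4]; value's shape must match the slice exactly (no broadcasting).
tf.raw_ops.ResourceStridedSliceAssign(
    ref=v.handle,
    begin=tf.constant([1]), end=tf.constant([4]), strides=tf.constant([1]),
    value=tf.constant([10, 20, 30]))
# v is now [0, 10, 20, 30, 4]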
Args
ref A Tensor of type resource.
begin A Tensor. Must be one of the following types: int32, int64.
end A Tensor. Must have the same type as begin.
strides A Tensor. Must have the same type as begin.
value A Tensor.
begin_mask An optional int. Defaults to 0.
end_mask An optional int. Defaults to 0.
ellipsis_mask An optional int. Defaults to 0.
new_axis_mask An optional int. Defaults to 0.
shrink_axis_mask An optional int. Defaults to 0.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.resourcestridedsliceassign |
tf.raw_ops.Restore Restores a tensor from checkpoint files. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.Restore
tf.raw_ops.Restore(
file_pattern, tensor_name, dt, preferred_shard=-1, name=None
)
Reads a tensor stored in one or several files. If there are several files (for instance because a tensor was saved as slices), file_pattern may contain wildcard symbols (* and ?) in the filename portion only, not in the directory portion. If a file_pattern matches several files, preferred_shard can be used to hint in which file the requested tensor is likely to be found. This op will first open the file at index preferred_shard in the list of matching files and try to restore tensors from that file. Only if some tensors or tensor slices are not found in that first file, then the Op opens all the files. Setting preferred_shard to match the value passed as the shard input of a matching Save Op may speed up Restore. This attribute only affects performance, not correctness. The default value -1 means files are processed in order. See also RestoreSlice.
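For example, a minimal sketch that writes a tensor with the matching Save op and reads it back (the path is a hypothetical placeholder):
import tensorflow as tf

path = "/tmp/v1_checkpoint_example"   # hypothetical path

tf.raw_ops.Save(filename=path, tensor_names=["w"],
                data=[tf.constant([1.0, 2.0, 3.0])])
w = tf.raw_ops.Restore(file_pattern=path, tensor_name="w", dt=tf.float32)
# w == [1.0, 2.0, 3.0]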
Args
file_pattern A Tensor of type string. Must have a single element. The pattern of the files from which we read the tensor.
tensor_name A Tensor of type string. Must have a single element. The name of the tensor to be restored.
dt A tf.DType. The type of the tensor to be restored.
preferred_shard An optional int. Defaults to -1. Index of file to open first if multiple files match file_pattern.
name A name for the operation (optional).
Returns A Tensor of type dt. | tensorflow.raw_ops.restore |
tf.raw_ops.RestoreSlice Restores a tensor from checkpoint files. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RestoreSlice
tf.raw_ops.RestoreSlice(
file_pattern, tensor_name, shape_and_slice, dt, preferred_shard=-1, name=None
)
This is like Restore except that restored tensor can be listed as filling only a slice of a larger tensor. shape_and_slice specifies the shape of the larger tensor and the slice that the restored tensor covers. The shape_and_slice input has the same format as the elements of the shapes_and_slices input of the SaveSlices op.
Args
file_pattern A Tensor of type string. Must have a single element. The pattern of the files from which we read the tensor.
tensor_name A Tensor of type string. Must have a single element. The name of the tensor to be restored.
shape_and_slice A Tensor of type string. Scalar. The shapes and slice specifications to use when restoring a tensors.
dt A tf.DType. The type of the tensor to be restored.
preferred_shard An optional int. Defaults to -1. Index of file to open first if multiple files match file_pattern. See the documentation for Restore.
name A name for the operation (optional).
Returns A Tensor of type dt. | tensorflow.raw_ops.restoreslice |
tf.raw_ops.RestoreV2 Restores tensors from a V2 checkpoint. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RestoreV2
tf.raw_ops.RestoreV2(
prefix, tensor_names, shape_and_slices, dtypes, name=None
)
For backward compatibility with the V1 format, this Op currently allows restoring from a V1 checkpoint as well: This Op first attempts to find the V2 index file pointed to by "prefix", and if found, proceeds to read it as a V2 checkpoint; otherwise the V1 read path is invoked. Relying on this behavior is not recommended, as the ability to fall back to read V1 might be deprecated and eventually removed. By default, restores the named tensors in full. If the caller wishes to restore specific slices of stored tensors, "shape_and_slices" should be non-empty strings and correspondingly well-formed. Callers must ensure all the named tensors are indeed stored in the checkpoint.
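For example, a minimal sketch that writes a single tensor with SaveV2 and restores it (the prefix is a hypothetical placeholder):
import tensorflow as tf

prefix = "/tmp/v2_checkpoint_example"   # hypothetical prefix

tf.raw_ops.SaveV2(prefix=prefix, tensor_names=["w"], shape_and_slices=[""],
                  tensors=[tf.constant([1.0, 2.0])])
restored = tf.raw_ops.RestoreV2(prefix=prefix, tensor_names=["w"],
                                shape_and_slices=[""], dtypes=[tf.float32])
# restored is a list containing one float32 tensor: [1.0, 2.0]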
Args
prefix A Tensor of type string. Must have a single element. The prefix of a V2 checkpoint.
tensor_names A Tensor of type string. shape {N}. The names of the tensors to be restored.
shape_and_slices A Tensor of type string. shape {N}. The slice specs of the tensors to be restored. Empty strings indicate that they are non-partitioned tensors.
dtypes A list of tf.DTypes that has length >= 1. shape {N}. The list of expected dtype for the tensors. Must match those stored in the checkpoint.
name A name for the operation (optional).
Returns A list of Tensor objects of type dtypes. | tensorflow.raw_ops.restorev2 |
tf.raw_ops.RetrieveTPUEmbeddingAdadeltaParameters Retrieve Adadelta embedding parameters. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RetrieveTPUEmbeddingAdadeltaParameters
tf.raw_ops.RetrieveTPUEmbeddingAdadeltaParameters(
num_shards, shard_id, table_id=-1, table_name='', config='',
name=None
)
An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.
Args
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns A tuple of Tensor objects (parameters, accumulators, updates). parameters A Tensor of type float32.
accumulators A Tensor of type float32.
updates A Tensor of type float32. | tensorflow.raw_ops.retrievetpuembeddingadadeltaparameters |
tf.raw_ops.RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug Retrieve Adadelta embedding parameters with debug support. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug
tf.raw_ops.RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug(
num_shards, shard_id, table_id=-1, table_name='', config='',
name=None
)
An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.
Args
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns A tuple of Tensor objects (parameters, accumulators, updates, gradient_accumulators). parameters A Tensor of type float32.
accumulators A Tensor of type float32.
updates A Tensor of type float32.
gradient_accumulators A Tensor of type float32. | tensorflow.raw_ops.retrievetpuembeddingadadeltaparametersgradaccumdebug |
tf.raw_ops.RetrieveTPUEmbeddingAdagradParameters Retrieve Adagrad embedding parameters. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RetrieveTPUEmbeddingAdagradParameters
tf.raw_ops.RetrieveTPUEmbeddingAdagradParameters(
num_shards, shard_id, table_id=-1, table_name='', config='',
name=None
)
An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.
Args
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns A tuple of Tensor objects (parameters, accumulators). parameters A Tensor of type float32.
accumulators A Tensor of type float32. | tensorflow.raw_ops.retrievetpuembeddingadagradparameters |
tf.raw_ops.RetrieveTPUEmbeddingAdagradParametersGradAccumDebug Retrieve Adagrad embedding parameters with debug support. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RetrieveTPUEmbeddingAdagradParametersGradAccumDebug
tf.raw_ops.RetrieveTPUEmbeddingAdagradParametersGradAccumDebug(
num_shards, shard_id, table_id=-1, table_name='', config='',
name=None
)
An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.
Args
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns A tuple of Tensor objects (parameters, accumulators, gradient_accumulators). parameters A Tensor of type float32.
accumulators A Tensor of type float32.
gradient_accumulators A Tensor of type float32. | tensorflow.raw_ops.retrievetpuembeddingadagradparametersgradaccumdebug |
tf.raw_ops.RetrieveTPUEmbeddingADAMParameters Retrieve ADAM embedding parameters. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RetrieveTPUEmbeddingADAMParameters
tf.raw_ops.RetrieveTPUEmbeddingADAMParameters(
num_shards, shard_id, table_id=-1, table_name='', config='',
name=None
)
An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.
Args
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns A tuple of Tensor objects (parameters, momenta, velocities). parameters A Tensor of type float32.
momenta A Tensor of type float32.
velocities A Tensor of type float32. | tensorflow.raw_ops.retrievetpuembeddingadamparameters |
tf.raw_ops.RetrieveTPUEmbeddingADAMParametersGradAccumDebug Retrieve ADAM embedding parameters with debug support. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RetrieveTPUEmbeddingADAMParametersGradAccumDebug
tf.raw_ops.RetrieveTPUEmbeddingADAMParametersGradAccumDebug(
num_shards, shard_id, table_id=-1, table_name='', config='',
name=None
)
An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.
Args
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns A tuple of Tensor objects (parameters, momenta, velocities, gradient_accumulators). parameters A Tensor of type float32.
momenta A Tensor of type float32.
velocities A Tensor of type float32.
gradient_accumulators A Tensor of type float32. | tensorflow.raw_ops.retrievetpuembeddingadamparametersgradaccumdebug |
tf.raw_ops.RetrieveTPUEmbeddingCenteredRMSPropParameters Retrieve centered RMSProp embedding parameters. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RetrieveTPUEmbeddingCenteredRMSPropParameters
tf.raw_ops.RetrieveTPUEmbeddingCenteredRMSPropParameters(
num_shards, shard_id, table_id=-1, table_name='', config='',
name=None
)
An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.
Args
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns A tuple of Tensor objects (parameters, ms, mom, mg). parameters A Tensor of type float32.
ms A Tensor of type float32.
mom A Tensor of type float32.
mg A Tensor of type float32. | tensorflow.raw_ops.retrievetpuembeddingcenteredrmspropparameters |
tf.raw_ops.RetrieveTPUEmbeddingFTRLParameters Retrieve FTRL embedding parameters. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RetrieveTPUEmbeddingFTRLParameters
tf.raw_ops.RetrieveTPUEmbeddingFTRLParameters(
num_shards, shard_id, table_id=-1, table_name='', config='',
name=None
)
An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.
Args
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns A tuple of Tensor objects (parameters, accumulators, linears). parameters A Tensor of type float32.
accumulators A Tensor of type float32.
linears A Tensor of type float32. | tensorflow.raw_ops.retrievetpuembeddingftrlparameters |
tf.raw_ops.RetrieveTPUEmbeddingFTRLParametersGradAccumDebug Retrieve FTRL embedding parameters with debug support. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RetrieveTPUEmbeddingFTRLParametersGradAccumDebug
tf.raw_ops.RetrieveTPUEmbeddingFTRLParametersGradAccumDebug(
num_shards, shard_id, table_id=-1, table_name='', config='',
name=None
)
An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.
Args
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns A tuple of Tensor objects (parameters, accumulators, linears, gradient_accumulators). parameters A Tensor of type float32.
accumulators A Tensor of type float32.
linears A Tensor of type float32.
gradient_accumulators A Tensor of type float32. | tensorflow.raw_ops.retrievetpuembeddingftrlparametersgradaccumdebug |
tf.raw_ops.RetrieveTPUEmbeddingMDLAdagradLightParameters Retrieve MDL Adagrad Light embedding parameters. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RetrieveTPUEmbeddingMDLAdagradLightParameters
tf.raw_ops.RetrieveTPUEmbeddingMDLAdagradLightParameters(
num_shards, shard_id, table_id=-1, table_name='', config='',
name=None
)
An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.
Args
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns A tuple of Tensor objects (parameters, accumulators, weights, benefits). parameters A Tensor of type float32.
accumulators A Tensor of type float32.
weights A Tensor of type float32.
benefits A Tensor of type float32. | tensorflow.raw_ops.retrievetpuembeddingmdladagradlightparameters |
tf.raw_ops.RetrieveTPUEmbeddingMomentumParameters Retrieve Momentum embedding parameters. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RetrieveTPUEmbeddingMomentumParameters
tf.raw_ops.RetrieveTPUEmbeddingMomentumParameters(
num_shards, shard_id, table_id=-1, table_name='', config='',
name=None
)
An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.
Args
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns A tuple of Tensor objects (parameters, momenta). parameters A Tensor of type float32.
momenta A Tensor of type float32. | tensorflow.raw_ops.retrievetpuembeddingmomentumparameters |
tf.raw_ops.RetrieveTPUEmbeddingMomentumParametersGradAccumDebug Retrieve Momentum embedding parameters with debug support. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RetrieveTPUEmbeddingMomentumParametersGradAccumDebug
tf.raw_ops.RetrieveTPUEmbeddingMomentumParametersGradAccumDebug(
num_shards, shard_id, table_id=-1, table_name='', config='',
name=None
)
An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.
Args
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns A tuple of Tensor objects (parameters, momenta, gradient_accumulators). parameters A Tensor of type float32.
momenta A Tensor of type float32.
gradient_accumulators A Tensor of type float32. | tensorflow.raw_ops.retrievetpuembeddingmomentumparametersgradaccumdebug |
tf.raw_ops.RetrieveTPUEmbeddingProximalAdagradParameters Retrieve proximal Adagrad embedding parameters. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RetrieveTPUEmbeddingProximalAdagradParameters
tf.raw_ops.RetrieveTPUEmbeddingProximalAdagradParameters(
num_shards, shard_id, table_id=-1, table_name='', config='',
name=None
)
An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.
Args
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns A tuple of Tensor objects (parameters, accumulators). parameters A Tensor of type float32.
accumulators A Tensor of type float32. | tensorflow.raw_ops.retrievetpuembeddingproximaladagradparameters |
tf.raw_ops.RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug Retrieve proximal Adagrad embedding parameters with debug support. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug
tf.raw_ops.RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug(
num_shards, shard_id, table_id=-1, table_name='', config='',
name=None
)
An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.
Args
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns A tuple of Tensor objects (parameters, accumulators, gradient_accumulators). parameters A Tensor of type float32.
accumulators A Tensor of type float32.
gradient_accumulators A Tensor of type float32. | tensorflow.raw_ops.retrievetpuembeddingproximaladagradparametersgradaccumdebug |
tf.raw_ops.RetrieveTPUEmbeddingProximalYogiParameters View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RetrieveTPUEmbeddingProximalYogiParameters
tf.raw_ops.RetrieveTPUEmbeddingProximalYogiParameters(
num_shards, shard_id, table_id=-1, table_name='', config='',
name=None
)
Args
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns A tuple of Tensor objects (parameters, v, m). parameters A Tensor of type float32.
v A Tensor of type float32.
m A Tensor of type float32. | tensorflow.raw_ops.retrievetpuembeddingproximalyogiparameters |
tf.raw_ops.RetrieveTPUEmbeddingProximalYogiParametersGradAccumDebug View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RetrieveTPUEmbeddingProximalYogiParametersGradAccumDebug
tf.raw_ops.RetrieveTPUEmbeddingProximalYogiParametersGradAccumDebug(
num_shards, shard_id, table_id=-1, table_name='', config='',
name=None
)
Args
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns A tuple of Tensor objects (parameters, v, m, gradient_accumulators). parameters A Tensor of type float32.
v A Tensor of type float32.
m A Tensor of type float32.
gradient_accumulators A Tensor of type float32. | tensorflow.raw_ops.retrievetpuembeddingproximalyogiparametersgradaccumdebug |
tf.raw_ops.RetrieveTPUEmbeddingRMSPropParameters Retrieve RMSProp embedding parameters. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RetrieveTPUEmbeddingRMSPropParameters
tf.raw_ops.RetrieveTPUEmbeddingRMSPropParameters(
num_shards, shard_id, table_id=-1, table_name='', config='',
name=None
)
An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.
Args
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns A tuple of Tensor objects (parameters, ms, mom). parameters A Tensor of type float32.
ms A Tensor of type float32.
mom A Tensor of type float32. | tensorflow.raw_ops.retrievetpuembeddingrmspropparameters |
tf.raw_ops.RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug Retrieve RMSProp embedding parameters with debug support. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug
tf.raw_ops.RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug(
num_shards, shard_id, table_id=-1, table_name='', config='',
name=None
)
An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.
Args
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns A tuple of Tensor objects (parameters, ms, mom, gradient_accumulators). parameters A Tensor of type float32.
ms A Tensor of type float32.
mom A Tensor of type float32.
gradient_accumulators A Tensor of type float32. | tensorflow.raw_ops.retrievetpuembeddingrmspropparametersgradaccumdebug |
tf.raw_ops.RetrieveTPUEmbeddingStochasticGradientDescentParameters Retrieve SGD embedding parameters. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RetrieveTPUEmbeddingStochasticGradientDescentParameters
tf.raw_ops.RetrieveTPUEmbeddingStochasticGradientDescentParameters(
num_shards, shard_id, table_id=-1, table_name='', config='',
name=None
)
An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.
Args
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns A Tensor of type float32. | tensorflow.raw_ops.retrievetpuembeddingstochasticgradientdescentparameters |
tf.raw_ops.RetrieveTPUEmbeddingStochasticGradientDescentParametersGradAccumDebug Retrieve SGD embedding parameters with debug support. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RetrieveTPUEmbeddingStochasticGradientDescentParametersGradAccumDebug
tf.raw_ops.RetrieveTPUEmbeddingStochasticGradientDescentParametersGradAccumDebug(
num_shards, shard_id, table_id=-1, table_name='', config='',
name=None
)
An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.
Args
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns A tuple of Tensor objects (parameters, gradient_accumulators). parameters A Tensor of type float32.
gradient_accumulators A Tensor of type float32. | tensorflow.raw_ops.retrievetpuembeddingstochasticgradientdescentparametersgradaccumdebug |
tf.raw_ops.Reverse Reverses specific dimensions of a tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.Reverse
tf.raw_ops.Reverse(
tensor, dims, name=None
)
Given a tensor, and a bool tensor dims representing the dimensions of tensor, this operation reverses each dimension i of tensor where dims[i] is True. tensor can have up to 8 dimensions. The number of dimensions of tensor must equal the number of elements in dims. In other words: rank(tensor) = size(dims). For example:
# tensor 't' is [[[[ 0, 1, 2, 3],
# [ 4, 5, 6, 7],
# [ 8, 9, 10, 11]],
# [[12, 13, 14, 15],
# [16, 17, 18, 19],
# [20, 21, 22, 23]]]]
# tensor 't' shape is [1, 2, 3, 4]
# 'dims' is [False, False, False, True]
reverse(t, dims) ==> [[[[ 3, 2, 1, 0],
[ 7, 6, 5, 4],
[ 11, 10, 9, 8]],
[[15, 14, 13, 12],
[19, 18, 17, 16],
[23, 22, 21, 20]]]]
# 'dims' is [False, True, False, False]
reverse(t, dims) ==> [[[[12, 13, 14, 15],
[16, 17, 18, 19],
[20, 21, 22, 23]
[[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]]]]
# 'dims' is [False, False, True, False]
reverse(t, dims) ==> [[[[8, 9, 10, 11],
[4, 5, 6, 7],
[0, 1, 2, 3]]
[[20, 21, 22, 23],
[16, 17, 18, 19],
[12, 13, 14, 15]]]]
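A small runnable sketch of the same idea (illustrative values):
import tensorflow as tf

t = tf.constant([[1, 2, 3], [4, 5, 6]])
tf.raw_ops.Reverse(tensor=t, dims=tf.constant([False, True]))
# ==> [[3, 2, 1], [6, 5, 4]]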
Args
tensor A Tensor. Must be one of the following types: uint8, int8, uint16, int16, int32, int64, bool, half, float32, float64, complex64, complex128, string. Up to 8-D.
dims A Tensor of type bool. 1-D. The dimensions to reverse.
name A name for the operation (optional).
Returns A Tensor. Has the same type as tensor. | tensorflow.raw_ops.reverse |
tf.raw_ops.ReverseSequence Reverses variable length slices. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ReverseSequence
tf.raw_ops.ReverseSequence(
input, seq_lengths, seq_dim, batch_dim=0, name=None
)
This op first slices input along the dimension batch_dim, and for each slice i, reverses the first seq_lengths[i] elements along the dimension seq_dim. The elements of seq_lengths must obey seq_lengths[i] <= input.dims[seq_dim], and seq_lengths must be a vector of length input.dims[batch_dim]. The output slice i along dimension batch_dim is then given by input slice i, with the first seq_lengths[i] slices along dimension seq_dim reversed. For example:
# Given this:
batch_dim = 0
seq_dim = 1
input.dims = (4, 8, ...)
seq_lengths = [7, 2, 3, 5]
# then slices of input are reversed on seq_dim, but only up to seq_lengths:
output[0, 0:7, :, ...] = input[0, 7:0:-1, :, ...]
output[1, 0:2, :, ...] = input[1, 2:0:-1, :, ...]
output[2, 0:3, :, ...] = input[2, 3:0:-1, :, ...]
output[3, 0:5, :, ...] = input[3, 5:0:-1, :, ...]
# while entries past seq_lens are copied through:
output[0, 7:, :, ...] = input[0, 7:, :, ...]
output[1, 2:, :, ...] = input[1, 2:, :, ...]
output[2, 3:, :, ...] = input[2, 3:, :, ...]
output[3, 5:, :, ...] = input[3, 5:, :, ...]
In contrast, if:
# Given this:
batch_dim = 2
seq_dim = 0
input.dims = (8, ?, 4, ...)
seq_lengths = [7, 2, 3, 5]
# then slices of input are reversed on seq_dim, but only up to seq_lengths:
output[0:7, :, 0, :, ...] = input[7:0:-1, :, 0, :, ...]
output[0:2, :, 1, :, ...] = input[2:0:-1, :, 1, :, ...]
output[0:3, :, 2, :, ...] = input[3:0:-1, :, 2, :, ...]
output[0:5, :, 3, :, ...] = input[5:0:-1, :, 3, :, ...]
# while entries past seq_lens are copied through:
output[7:, :, 0, :, ...] = input[7:, :, 0, :, ...]
output[2:, :, 1, :, ...] = input[2:, :, 1, :, ...]
output[3:, :, 2, :, ...] = input[3:, :, 2, :, ...]
output[5:, :, 3, :, ...] = input[5:, :, 3, :, ...]
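A small runnable sketch (illustrative values):
import tensorflow as tf

x = tf.constant([[1, 2, 3, 4],
                 [5, 6, 7, 8]])
tf.raw_ops.ReverseSequence(input=x,
                           seq_lengths=tf.constant([3, 2], dtype=tf.int64),
                           seq_dim=1, batch_dim=0)
# ==> [[3, 2, 1, 4],
#      [6, 5, 7, 8]]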
Args
input A Tensor. The input to reverse.
seq_lengths A Tensor. Must be one of the following types: int32, int64. 1-D with length input.dims(batch_dim) and max(seq_lengths) <= input.dims(seq_dim)
seq_dim An int. The dimension which is partially reversed.
batch_dim An optional int. Defaults to 0. The dimension along which reversal is performed.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | tensorflow.raw_ops.reversesequence |
tf.raw_ops.ReverseV2 Reverses specific dimensions of a tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ReverseV2
tf.raw_ops.ReverseV2(
tensor, axis, name=None
)
NOTE tf.reverse has now changed behavior in preparation for 1.0. tf.reverse_v2 is currently an alias that will be deprecated before TF 1.0. Given a tensor and an int32 tensor axis representing the set of dimensions of tensor to reverse, this operation reverses each dimension i for which there exists j s.t. axis[j] == i. tensor can have up to 8 dimensions. The number of dimensions specified in axis may be 0 or more entries. If an index is specified more than once, an InvalidArgument error is raised. For example:
# tensor 't' is [[[[ 0, 1, 2, 3],
# [ 4, 5, 6, 7],
# [ 8, 9, 10, 11]],
# [[12, 13, 14, 15],
# [16, 17, 18, 19],
# [20, 21, 22, 23]]]]
# tensor 't' shape is [1, 2, 3, 4]
# 'dims' is [3] or 'dims' is [-1]
reverse(t, dims) ==> [[[[ 3, 2, 1, 0],
[ 7, 6, 5, 4],
[ 11, 10, 9, 8]],
[[15, 14, 13, 12],
[19, 18, 17, 16],
[23, 22, 21, 20]]]]
# 'dims' is '[1]' (or 'dims' is '[-3]')
reverse(t, dims) ==> [[[[12, 13, 14, 15],
[16, 17, 18, 19],
[20, 21, 22, 23]
[[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]]]]
# 'dims' is '[2]' (or 'dims' is '[-2]')
reverse(t, dims) ==> [[[[8, 9, 10, 11],
[4, 5, 6, 7],
[0, 1, 2, 3]]
[[20, 21, 22, 23],
[16, 17, 18, 19],
[12, 13, 14, 15]]]]
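A small runnable sketch of the same idea (illustrative values):
import tensorflow as tf

t = tf.constant([[1, 2, 3], [4, 5, 6]])
tf.raw_ops.ReverseV2(tensor=t, axis=tf.constant([1]))
# ==> [[3, 2, 1], [6, 5, 4]]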
Args
tensor A Tensor. Must be one of the following types: uint8, int8, uint16, int16, int32, int64, bool, bfloat16, half, float32, float64, complex64, complex128, string. Up to 8-D.
axis A Tensor. Must be one of the following types: int32, int64. 1-D. The indices of the dimensions to reverse. Must be in the range [-rank(tensor), rank(tensor)).
name A name for the operation (optional).
Returns A Tensor. Has the same type as tensor. | tensorflow.raw_ops.reversev2 |
tf.raw_ops.RFFT Real-valued fast Fourier transform. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RFFT
tf.raw_ops.RFFT(
input, fft_length, Tcomplex=tf.dtypes.complex64, name=None
)
Computes the 1-dimensional discrete Fourier transform of a real-valued signal over the inner-most dimension of input. Since the DFT of a real signal is Hermitian-symmetric, RFFT only returns the fft_length / 2 + 1 unique components of the FFT: the zero-frequency term, followed by the fft_length / 2 positive-frequency terms. Along the axis RFFT is computed on, if fft_length is smaller than the corresponding dimension of input, the dimension is cropped. If it is larger, the dimension is padded with zeros.
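For example, a minimal sketch (illustrative signal):
import tensorflow as tf

signal = tf.constant([1.0, 2.0, 3.0, 4.0])
spectrum = tf.raw_ops.RFFT(input=signal, fft_length=tf.constant([4]))
# spectrum has shape [3] (= fft_length / 2 + 1) and dtype complex64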
Args
input A Tensor. Must be one of the following types: float32, float64. A float32 tensor.
fft_length A Tensor of type int32. An int32 tensor of shape [1]. The FFT length.
Tcomplex An optional tf.DType from: tf.complex64, tf.complex128. Defaults to tf.complex64.
name A name for the operation (optional).
Returns A Tensor of type Tcomplex. | tensorflow.raw_ops.rfft |
tf.raw_ops.RFFT2D 2D real-valued fast Fourier transform. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RFFT2D
tf.raw_ops.RFFT2D(
input, fft_length, Tcomplex=tf.dtypes.complex64, name=None
)
Computes the 2-dimensional discrete Fourier transform of a real-valued signal over the inner-most 2 dimensions of input. Since the DFT of a real signal is Hermitian-symmetric, RFFT2D only returns the fft_length / 2 + 1 unique components of the FFT for the inner-most dimension of output: the zero-frequency term, followed by the fft_length / 2 positive-frequency terms. Along each axis RFFT2D is computed on, if fft_length is smaller than the corresponding dimension of input, the dimension is cropped. If it is larger, the dimension is padded with zeros.
Args
input A Tensor. Must be one of the following types: float32, float64. A float32 tensor.
fft_length A Tensor of type int32. An int32 tensor of shape [2]. The FFT length for each dimension.
Tcomplex An optional tf.DType from: tf.complex64, tf.complex128. Defaults to tf.complex64.
name A name for the operation (optional).
Returns A Tensor of type Tcomplex. | tensorflow.raw_ops.rfft2d |
tf.raw_ops.RFFT3D 3D real-valued fast Fourier transform. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RFFT3D
tf.raw_ops.RFFT3D(
input, fft_length, Tcomplex=tf.dtypes.complex64, name=None
)
Computes the 3-dimensional discrete Fourier transform of a real-valued signal over the inner-most 3 dimensions of input. Since the DFT of a real signal is Hermitian-symmetric, RFFT3D only returns the fft_length / 2 + 1 unique components of the FFT for the inner-most dimension of output: the zero-frequency term, followed by the fft_length / 2 positive-frequency terms. Along each axis RFFT3D is computed on, if fft_length is smaller than the corresponding dimension of input, the dimension is cropped. If it is larger, the dimension is padded with zeros.
Args
input A Tensor. Must be one of the following types: float32, float64. A float32 tensor.
fft_length A Tensor of type int32. An int32 tensor of shape [3]. The FFT length for each dimension.
Tcomplex An optional tf.DType from: tf.complex64, tf.complex128. Defaults to tf.complex64.
name A name for the operation (optional).
Returns A Tensor of type Tcomplex. | tensorflow.raw_ops.rfft3d |
tf.raw_ops.RGBToHSV Converts one or more images from RGB to HSV. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RGBToHSV
tf.raw_ops.RGBToHSV(
images, name=None
)
Outputs a tensor of the same shape as the images tensor, containing the HSV value of the pixels. The output is only well defined if the values in images are in [0,1]. output[..., 0] contains hue, output[..., 1] contains saturation, and output[..., 2] contains value. All HSV values are in [0,1]. A hue of 0 corresponds to pure red, hue 1/3 is pure green, and 2/3 is pure blue. Usage Example:
blue_image = tf.stack([
tf.zeros([5,5]),
tf.zeros([5,5]),
tf.ones([5,5])],
axis=-1)
blue_hsv_image = tf.image.rgb_to_hsv(blue_image)
blue_hsv_image[0,0].numpy()
array([0.6666667, 1. , 1. ], dtype=float32)
Args
images A Tensor. Must be one of the following types: half, bfloat16, float32, float64. 1-D or higher rank. RGB data to convert. Last dimension must be size 3.
name A name for the operation (optional).
Returns A Tensor. Has the same type as images. | tensorflow.raw_ops.rgbtohsv |
tf.raw_ops.RightShift Elementwise computes the bitwise right-shift of x and y. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RightShift
tf.raw_ops.RightShift(
x, y, name=None
)
Performs a logical shift for unsigned integer types, and an arithmetic shift for signed integer types. If y is negative, or greater than or equal to the width of x in bits, the result is implementation defined. Example:
import tensorflow as tf
from tensorflow.python.ops import bitwise_ops
import numpy as np
dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64]
for dtype in dtype_list:
lhs = tf.constant([-1, -5, -3, -14], dtype=dtype)
rhs = tf.constant([5, 0, 7, 11], dtype=dtype)
right_shift_result = bitwise_ops.right_shift(lhs, rhs)
print(right_shift_result)
# This will print:
# tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int8)
# tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int16)
# tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int32)
# tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int64)
lhs = np.array([-2, 64, 101, 32], dtype=np.int8)
rhs = np.array([-1, -5, -3, -14], dtype=np.int8)
bitwise_ops.right_shift(lhs, rhs)
# <tf.Tensor: shape=(4,), dtype=int8, numpy=array([ -2, 64, 101, 32], dtype=int8)>
Args
x A Tensor. Must be one of the following types: int8, int16, int32, int64, uint8, uint16, uint32, uint64.
y A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | tensorflow.raw_ops.rightshift |
tf.raw_ops.Rint Returns element-wise integer closest to x. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.Rint
tf.raw_ops.Rint(
x, name=None
)
If the result is midway between two representable values, the even representable value is chosen. For example:
rint(-1.5) ==> -2.0
rint(0.5000001) ==> 1.0
rint([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) ==> [-2., -2., -0., 0., 2., 2., 2.]
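A small runnable sketch of the same values:
import tensorflow as tf

tf.raw_ops.Rint(x=tf.constant([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]))
# ==> [-2., -2., -0., 0., 2., 2., 2.]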
Args
x A Tensor. Must be one of the following types: bfloat16, half, float32, float64.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | tensorflow.raw_ops.rint |
tf.raw_ops.RngReadAndSkip Advance the counter of a counter-based RNG. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RngReadAndSkip
tf.raw_ops.RngReadAndSkip(
resource, alg, delta, name=None
)
The state of the RNG after rng_read_and_skip(n) will be the same as that after uniform([n]) (or any other distribution). The actual increment added to the counter is an unspecified implementation choice.
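A rough sketch using the state variable of a tf.random.Generator (the algorithm code 1 for Philox is an assumption of this sketch):
import tensorflow as tf

gen = tf.random.Generator.from_seed(42)        # counter-based (Philox) generator
old_state = tf.raw_ops.RngReadAndSkip(
    resource=gen.state.handle,
    alg=tf.constant(1, dtype=tf.int32),        # assumption: 1 denotes the Philox algorithm
    delta=tf.constant(5, dtype=tf.uint64))     # advance as if 5 outputs had been drawn
# old_state holds the RNG state before the advancement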
Args
resource A Tensor of type resource. The handle of the resource variable that stores the state of the RNG.
alg A Tensor of type int32. The RNG algorithm.
delta A Tensor of type uint64. The amount of advancement.
name A name for the operation (optional).
Returns A Tensor of type int64. | tensorflow.raw_ops.rngreadandskip |
tf.raw_ops.RngSkip Advance the counter of a counter-based RNG. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RngSkip
tf.raw_ops.RngSkip(
resource, algorithm, delta, name=None
)
The state of the RNG after rng_skip(n) will be the same as that after stateful_uniform([n]) (or any other distribution). The actual increment added to the counter is an unspecified implementation detail.
Args
resource A Tensor of type resource. The handle of the resource variable that stores the state of the RNG.
algorithm A Tensor of type int64. The RNG algorithm.
delta A Tensor of type int64. The amount of advancement.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.rngskip |
tf.raw_ops.Roll Rolls the elements of a tensor along an axis. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.Roll
tf.raw_ops.Roll(
input, shift, axis, name=None
)
The elements are shifted positively (towards larger indices) by the offset of shift along the dimension of axis. Negative shift values will shift elements in the opposite direction. Elements that roll past the last position will wrap around to the first and vice versa. Multiple shifts along multiple axes may be specified. For example:
# 't' is [0, 1, 2, 3, 4]
roll(t, shift=2, axis=0) ==> [3, 4, 0, 1, 2]
# shifting along multiple dimensions
# 't' is [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
roll(t, shift=[1, -2], axis=[0, 1]) ==> [[7, 8, 9, 5, 6], [2, 3, 4, 0, 1]]
# shifting along the same axis multiple times
# 't' is [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
roll(t, shift=[2, -3], axis=[1, 1]) ==> [[1, 2, 3, 4, 0], [6, 7, 8, 9, 5]]
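A small runnable sketch of the first case above:
import tensorflow as tf

t = tf.constant([0, 1, 2, 3, 4])
tf.raw_ops.Roll(input=t, shift=tf.constant(2), axis=tf.constant(0))
# ==> [3, 4, 0, 1, 2]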
Args
input A Tensor.
shift A Tensor. Must be one of the following types: int32, int64. Dimension must be 0-D or 1-D. shift[i] specifies the number of places by which elements are shifted positively (towards larger indices) along the dimension specified by axis[i]. Negative shifts will roll the elements in the opposite direction.
axis A Tensor. Must be one of the following types: int32, int64. Dimension must be 0-D or 1-D. axis[i] specifies the dimension that the shift shift[i] should occur. If the same axis is referenced more than once, the total shift for that axis will be the sum of all the shifts that belong to that axis.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | tensorflow.raw_ops.roll |
tf.raw_ops.Round Rounds the values of a tensor to the nearest integer, element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.Round
tf.raw_ops.Round(
x, name=None
)
Rounds half to even, also known as banker's rounding. If you want to round according to the current system rounding mode, use std::rint.
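A small runnable sketch (illustrative values):
import tensorflow as tf

tf.raw_ops.Round(x=tf.constant([0.5, 1.5, 2.5, -0.5]))
# ==> [0., 2., 2., -0.]   (ties are rounded to the nearest even value)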
Args
x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int8, int16, int32, int64, complex64, complex128.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | tensorflow.raw_ops.round |
tf.raw_ops.Rsqrt Computes reciprocal of square root of x element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.Rsqrt
tf.raw_ops.Rsqrt(
x, name=None
)
I.e., \(y = 1 / \sqrt{x}\).
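A small runnable sketch (illustrative values):
import tensorflow as tf

tf.raw_ops.Rsqrt(x=tf.constant([1.0, 4.0, 16.0]))
# ==> [1., 0.5, 0.25]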
Args
x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | tensorflow.raw_ops.rsqrt |
tf.raw_ops.RsqrtGrad Computes the gradient for the rsqrt of x wrt its input. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.RsqrtGrad
tf.raw_ops.RsqrtGrad(
y, dy, name=None
)
Specifically, grad = dy * -0.5 * y^3, where y = rsqrt(x), and dy is the corresponding input gradient.
Args
y A Tensor. Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128.
dy A Tensor. Must have the same type as y.
name A name for the operation (optional).
Returns A Tensor. Has the same type as y. | tensorflow.raw_ops.rsqrtgrad |
tf.raw_ops.SampleDistortedBoundingBox Generate a single randomly distorted bounding box for an image. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.SampleDistortedBoundingBox
tf.raw_ops.SampleDistortedBoundingBox(
image_size, bounding_boxes, seed=0, seed2=0, min_object_covered=0.1,
aspect_ratio_range=[0.75, 1.33], area_range=[0.05, 1], max_attempts=100,
use_image_if_no_bounding_boxes=False, name=None
)
Bounding box annotations are often supplied in addition to ground-truth labels in image recognition or object localization tasks. A common technique for training such a system is to randomly distort an image while preserving its content, i.e. data augmentation. This Op outputs a randomly distorted localization of an object, i.e. bounding box, given an image_size, bounding_boxes and a series of constraints. The output of this Op is a single bounding box that may be used to crop the original image. The output is returned as 3 tensors: begin, size and bboxes. The first 2 tensors can be fed directly into tf.slice to crop the image. The latter may be supplied to tf.image.draw_bounding_boxes to visualize what the bounding box looks like. Bounding boxes are supplied and returned as [y_min, x_min, y_max, x_max]. The bounding box coordinates are floats in [0.0, 1.0] relative to the width and height of the underlying image. For example,
# Generate a single distorted bounding box.
begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box(
tf.shape(image),
bounding_boxes=bounding_boxes)
# Draw the bounding box in an image summary.
image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0),
bbox_for_draw)
tf.summary.image('images_with_box', image_with_box)
# Employ the bounding box to distort the image.
distorted_image = tf.slice(image, begin, size)
Note that if no bounding box information is available, setting use_image_if_no_bounding_boxes = true will assume there is a single implicit bounding box covering the whole image. If use_image_if_no_bounding_boxes is false and no bounding boxes are supplied, an error is raised.
Args
image_size A Tensor. Must be one of the following types: uint8, int8, int16, int32, int64. 1-D, containing [height, width, channels].
bounding_boxes A Tensor of type float32. 3-D with shape [batch, N, 4] describing the N bounding boxes associated with the image.
seed An optional int. Defaults to 0. If either seed or seed2 are set to non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.
seed2 An optional int. Defaults to 0. A second seed to avoid seed collision.
min_object_covered An optional float. Defaults to 0.1. The cropped area of the image must contain at least this fraction of any bounding box supplied. The value of this parameter should be non-negative. In the case of 0, the cropped area does not need to overlap any of the bounding boxes supplied.
aspect_ratio_range An optional list of floats. Defaults to [0.75, 1.33]. The cropped area of the image must have an aspect ratio = width / height within this range.
area_range An optional list of floats. Defaults to [0.05, 1]. The cropped area of the image must contain a fraction of the supplied image within this range.
max_attempts An optional int. Defaults to 100. Number of attempts at generating a cropped region of the image of the specified constraints. After max_attempts failures, return the entire image.
use_image_if_no_bounding_boxes An optional bool. Defaults to False. Controls behavior if no bounding boxes supplied. If true, assume an implicit bounding box covering the whole input. If false, raise an error.
name A name for the operation (optional).
Returns A tuple of Tensor objects (begin, size, bboxes). begin A Tensor. Has the same type as image_size.
size A Tensor. Has the same type as image_size.
bboxes A Tensor of type float32. | tensorflow.raw_ops.sampledistortedboundingbox |
tf.raw_ops.SampleDistortedBoundingBoxV2 Generate a single randomly distorted bounding box for an image. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.SampleDistortedBoundingBoxV2
tf.raw_ops.SampleDistortedBoundingBoxV2(
image_size, bounding_boxes, min_object_covered, seed=0, seed2=0,
aspect_ratio_range=[0.75, 1.33], area_range=[0.05, 1], max_attempts=100,
use_image_if_no_bounding_boxes=False, name=None
)
Bounding box annotations are often supplied in addition to ground-truth labels in image recognition or object localization tasks. A common technique for training such a system is to randomly distort an image while preserving its content, i.e. data augmentation. This Op outputs a randomly distorted localization of an object, i.e. bounding box, given an image_size, bounding_boxes and a series of constraints. The output of this Op is a single bounding box that may be used to crop the original image. The output is returned as 3 tensors: begin, size and bboxes. The first 2 tensors can be fed directly into tf.slice to crop the image. The latter may be supplied to tf.image.draw_bounding_boxes to visualize what the bounding box looks like. Bounding boxes are supplied and returned as [y_min, x_min, y_max, x_max]. The bounding box coordinates are floats in [0.0, 1.0] relative to the width and height of the underlying image. For example,
# Generate a single distorted bounding box.
begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box(
tf.shape(image),
bounding_boxes=bounding_boxes)
# Draw the bounding box in an image summary.
image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0),
bbox_for_draw)
tf.summary.image('images_with_box', image_with_box)
# Employ the bounding box to distort the image.
distorted_image = tf.slice(image, begin, size)
Note that if no bounding box information is available, setting use_image_if_no_bounding_boxes = true will assume there is a single implicit bounding box covering the whole image. If use_image_if_no_bounding_boxes is false and no bounding boxes are supplied, an error is raised.
Args
image_size A Tensor. Must be one of the following types: uint8, int8, int16, int32, int64. 1-D, containing [height, width, channels].
bounding_boxes A Tensor of type float32. 3-D with shape [batch, N, 4] describing the N bounding boxes associated with the image.
min_object_covered A Tensor of type float32. The cropped area of the image must contain at least this fraction of any bounding box supplied. The value of this parameter should be non-negative. In the case of 0, the cropped area does not need to overlap any of the bounding boxes supplied.
seed An optional int. Defaults to 0. If either seed or seed2 are set to non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.
seed2 An optional int. Defaults to 0. A second seed to avoid seed collision.
aspect_ratio_range An optional list of floats. Defaults to [0.75, 1.33]. The cropped area of the image must have an aspect ratio = width / height within this range.
area_range An optional list of floats. Defaults to [0.05, 1]. The cropped area of the image must contain a fraction of the supplied image within this range.
max_attempts An optional int. Defaults to 100. Number of attempts at generating a cropped region of the image of the specified constraints. After max_attempts failures, return the entire image.
use_image_if_no_bounding_boxes An optional bool. Defaults to False. Controls behavior if no bounding boxes supplied. If true, assume an implicit bounding box covering the whole input. If false, raise an error.
name A name for the operation (optional).
Returns A tuple of Tensor objects (begin, size, bboxes). begin A Tensor. Has the same type as image_size.
size A Tensor. Has the same type as image_size.
bboxes A Tensor of type float32. | tensorflow.raw_ops.sampledistortedboundingboxv2 |
tf.raw_ops.SamplingDataset Creates a dataset that takes a Bernoulli sample of the contents of another dataset. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.SamplingDataset
tf.raw_ops.SamplingDataset(
input_dataset, rate, seed, seed2, output_types, output_shapes, name=None
)
There is no transformation in the tf.data Python API for creating this dataset. Instead, it is created as a result of the filter_with_random_uniform_fusion static optimization. Whether this optimization is performed is determined by the experimental_optimization.filter_with_random_uniform_fusion option of tf.data.Options.
Args
input_dataset A Tensor of type variant.
rate A Tensor of type float32. A scalar representing the sample rate. Each element of input_dataset is retained with this probability, independent of all other elements.
seed A Tensor of type int64. A scalar representing seed of random number generator.
seed2 A Tensor of type int64. A scalar representing seed2 of random number generator.
output_types A list of tf.DTypes that has length >= 1.
output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1.
name A name for the operation (optional).
Returns A Tensor of type variant. | tensorflow.raw_ops.samplingdataset |
tf.raw_ops.Save Saves the input tensors to disk. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.Save
tf.raw_ops.Save(
filename, tensor_names, data, name=None
)
The size of tensor_names must match the number of tensors in data. data[i] is written to filename with name tensor_names[i]. See also SaveSlices.
Args
filename A Tensor of type string. Must have a single element. The name of the file to which we write the tensor.
tensor_names A Tensor of type string. Shape [N]. The names of the tensors to be saved.
data A list of Tensor objects. N tensors to save.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.save |
tf.raw_ops.SaveDataset View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.SaveDataset
tf.raw_ops.SaveDataset(
input_dataset, path, shard_func_other_args, shard_func,
compression='', use_shard_func=True, name=None
)
Args
input_dataset A Tensor of type variant.
path A Tensor of type string.
shard_func_other_args A list of Tensor objects.
shard_func A function decorated with @Defun.
compression An optional string. Defaults to "".
use_shard_func An optional bool. Defaults to True.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.savedataset |
tf.raw_ops.SaveSlices Saves input tensors slices to disk. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.SaveSlices
tf.raw_ops.SaveSlices(
filename, tensor_names, shapes_and_slices, data, name=None
)
This is like Save except that tensors can be listed in the saved file as being a slice of a larger tensor. shapes_and_slices specifies the shape of the larger tensor and the slice that this tensor covers. shapes_and_slices must have as many elements as tensor_names. Elements of the shapes_and_slices input must be either: the empty string, in which case the corresponding tensor is saved normally; or a string of the form dim0 dim1 ... dimN-1 slice-spec, where the dimI are the dimensions of the larger tensor and slice-spec specifies what part is covered by the tensor to save. slice-spec itself is a :-separated list slice0:slice1:...:sliceN-1, where each sliceI is either: the string -, meaning that the slice covers all indices of this dimension; or start,length, where start and length are integers, in which case the slice covers length indices starting at start. See also Save.
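A small eager-mode sketch (the path is hypothetical); the second entry saves a 2x4 slice of a larger 4x4 tensor, covering rows 0..1 and all columns:
import tensorflow as tf

tf.raw_ops.SaveSlices(
    filename=tf.constant("/tmp/slices.ckpt"),             # hypothetical path
    tensor_names=tf.constant(["full", "part"]),
    shapes_and_slices=tf.constant(["", "4 4 0,2:-"]),      # save in full; save a slice
    data=[tf.constant([1, 2, 3]), tf.zeros([2, 4])],
)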
Args
filename A Tensor of type string. Must have a single element. The name of the file to which we write the tensor.
tensor_names A Tensor of type string. Shape [N]. The names of the tensors to be saved.
shapes_and_slices A Tensor of type string. Shape [N]. The shapes and slice specifications to use when saving the tensors.
data A list of Tensor objects. N tensors to save.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.saveslices |
tf.raw_ops.SaveV2 Saves tensors in V2 checkpoint format. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.SaveV2
tf.raw_ops.SaveV2(
prefix, tensor_names, shape_and_slices, tensors, name=None
)
By default, saves the named tensors in full. If the caller wishes to save specific slices of full tensors, "shape_and_slices" should be non-empty strings and correspondingly well-formed.
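A brief eager-mode sketch (the checkpoint prefix is hypothetical); the empty spec saves "w" in full, while "4 0,2" declares that "b" is the first two elements of a length-4 tensor:
import tensorflow as tf

tf.raw_ops.SaveV2(
    prefix=tf.constant("/tmp/ckpt/model"),                 # hypothetical checkpoint prefix
    tensor_names=tf.constant(["w", "b"]),
    shape_and_slices=tf.constant(["", "4 0,2"]),
    tensors=[tf.random.normal([3, 3]), tf.constant([1.0, 2.0])],
)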
Args
prefix A Tensor of type string. Must have a single element. The prefix of the V2 checkpoint to which we write the tensors.
tensor_names A Tensor of type string. shape {N}. The names of the tensors to be saved.
shape_and_slices A Tensor of type string. shape {N}. The slice specs of the tensors to be saved. Empty strings indicate that they are non-partitioned tensors.
tensors A list of Tensor objects. N tensors to save.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.savev2 |
tf.raw_ops.ScalarSummary Outputs a Summary protocol buffer with scalar values. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ScalarSummary
tf.raw_ops.ScalarSummary(
tags, values, name=None
)
The input tags and values must have the same shape. The generated summary has a summary value for each tag-value pair in tags and values.
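A small illustration of the calling convention (eager mode is assumed):
import tensorflow as tf

summary_proto = tf.raw_ops.ScalarSummary(
    tags=tf.constant(["loss", "accuracy"]),
    values=tf.constant([0.25, 0.9]),
)
# summary_proto is a scalar string tensor holding the serialized Summary protocol buffer,
# with one value per (tag, value) pair.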
Args
tags A Tensor of type string. Tags for the summary.
values A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. Same shape as tags. Values for the summary.
name A name for the operation (optional).
Returns A Tensor of type string. | tensorflow.raw_ops.scalarsummary |
tf.raw_ops.ScaleAndTranslate View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ScaleAndTranslate
tf.raw_ops.ScaleAndTranslate(
images, size, scale, translation, kernel_type='lanczos3',
antialias=True, name=None
)
Args
images A Tensor. Must be one of the following types: int8, uint8, int16, uint16, int32, int64, bfloat16, half, float32, float64.
size A Tensor of type int32.
scale A Tensor of type float32.
translation A Tensor of type float32.
kernel_type An optional string. Defaults to "lanczos3".
antialias An optional bool. Defaults to True.
name A name for the operation (optional).
Returns A Tensor of type float32. | tensorflow.raw_ops.scaleandtranslate |
tf.raw_ops.ScaleAndTranslateGrad View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ScaleAndTranslateGrad
tf.raw_ops.ScaleAndTranslateGrad(
grads, original_image, scale, translation, kernel_type='lanczos3',
antialias=True, name=None
)
Args
grads A Tensor. Must be one of the following types: float32.
original_image A Tensor. Must have the same type as grads.
scale A Tensor of type float32.
translation A Tensor of type float32.
kernel_type An optional string. Defaults to "lanczos3".
antialias An optional bool. Defaults to True.
name A name for the operation (optional).
Returns A Tensor. Has the same type as grads. | tensorflow.raw_ops.scaleandtranslategrad |
tf.raw_ops.ScanDataset Creates a dataset that successively reduces f over the elements of input_dataset. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ScanDataset
tf.raw_ops.ScanDataset(
input_dataset, initial_state, other_arguments, f, output_types, output_shapes,
preserve_cardinality=False, use_default_device=True, name=None
)
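A Python-level sketch of the same successive-reduction idea (assuming tf.data.experimental.scan is available in the installed version), computing a running sum:
import tensorflow as tf

ds = tf.data.Dataset.range(5)
ds = ds.apply(tf.data.experimental.scan(
    initial_state=tf.constant(0, tf.int64),
    scan_func=lambda state, x: (state + x, state + x)))   # returns (new_state, output)
print(list(ds.as_numpy_iterator()))  # [0, 1, 3, 6, 10]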
Args
input_dataset A Tensor of type variant.
initial_state A list of Tensor objects.
other_arguments A list of Tensor objects.
f A function decorated with @Defun.
output_types A list of tf.DTypes that has length >= 1.
output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1.
preserve_cardinality An optional bool. Defaults to False.
use_default_device An optional bool. Defaults to True.
name A name for the operation (optional).
Returns A Tensor of type variant. | tensorflow.raw_ops.scandataset |
tf.raw_ops.ScatterAdd Adds sparse updates to a variable reference. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ScatterAdd
tf.raw_ops.ScatterAdd(
ref, indices, updates, use_locking=False, name=None
)
This operation computes # Scalar indices
ref[indices, ...] += updates[...]
# Vector indices (for each i)
ref[indices[i], ...] += updates[i, ...]
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] += updates[i, ..., j, ...]
This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value. Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions add. Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
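The raw op targets TF1-style reference variables; as a behavioral sketch, the resource-variable wrapper below shows the same accumulation semantics (assuming TF 2.x eager execution):
import tensorflow as tf

v = tf.Variable([1.0, 2.0, 3.0, 4.0])
delta = tf.IndexedSlices(values=tf.constant([10.0, 20.0, 30.0]),
                         indices=tf.constant([0, 0, 2]))
v.scatter_add(delta)      # duplicate indices accumulate
print(v.numpy())          # [31.  2. 33.  4.] -- index 0 receives 10 + 20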
Args
ref A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable node.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
updates A Tensor. Must have the same type as ref. A tensor of updated values to add to ref.
use_locking An optional bool. Defaults to False. If True, the addition will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns A mutable Tensor. Has the same type as ref. | tensorflow.raw_ops.scatteradd |
tf.raw_ops.ScatterDiv Divides a variable reference by sparse updates. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ScatterDiv
tf.raw_ops.ScatterDiv(
ref, indices, updates, use_locking=False, name=None
)
This operation computes # Scalar indices
ref[indices, ...] /= updates[...]
# Vector indices (for each i)
ref[indices[i], ...] /= updates[i, ...]
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] /= updates[i, ..., j, ...]
This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value. Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions divide. Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
Args
ref A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable node.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
updates A Tensor. Must have the same type as ref. A tensor of values that ref is divided by.
use_locking An optional bool. Defaults to False. If True, the operation will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns A mutable Tensor. Has the same type as ref. | tensorflow.raw_ops.scatterdiv |
tf.raw_ops.ScatterMax Reduces sparse updates into a variable reference using the max operation. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ScatterMax
tf.raw_ops.ScatterMax(
ref, indices, updates, use_locking=False, name=None
)
This operation computes # Scalar indices
ref[indices, ...] = max(ref[indices, ...], updates[...])
# Vector indices (for each i)
ref[indices[i], ...] = max(ref[indices[i], ...], updates[i, ...])
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] = max(ref[indices[i, ..., j], ...], updates[i, ..., j, ...])
This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value. Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions combine. Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
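As a behavioral sketch with the resource-variable wrapper (assuming TF 2.x eager execution), duplicate indices combine by taking the element-wise maximum:
import tensorflow as tf

v = tf.Variable([1.0, 5.0, 3.0])
delta = tf.IndexedSlices(values=tf.constant([4.0, 2.0, 7.0]),
                         indices=tf.constant([0, 0, 2]))
v.scatter_max(delta)
print(v.numpy())          # [4. 5. 7.] -- index 0 keeps max(1, 4, 2); index 1 unchanged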
Args
ref A mutable Tensor. Must be one of the following types: half, bfloat16, float32, float64, int32, int64. Should be from a Variable node.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
updates A Tensor. Must have the same type as ref. A tensor of updated values to reduce into ref.
use_locking An optional bool. Defaults to False. If True, the update will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns A mutable Tensor. Has the same type as ref. | tensorflow.raw_ops.scattermax |
tf.raw_ops.ScatterMin Reduces sparse updates into a variable reference using the min operation. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ScatterMin
tf.raw_ops.ScatterMin(
ref, indices, updates, use_locking=False, name=None
)
This operation computes # Scalar indices
ref[indices, ...] = min(ref[indices, ...], updates[...])
# Vector indices (for each i)
ref[indices[i], ...] = min(ref[indices[i], ...], updates[i, ...])
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] = min(ref[indices[i, ..., j], ...], updates[i, ..., j, ...])
This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value. Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions combine. Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
Args
ref A mutable Tensor. Must be one of the following types: half, bfloat16, float32, float64, int32, int64. Should be from a Variable node.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
updates A Tensor. Must have the same type as ref. A tensor of updated values to reduce into ref.
use_locking An optional bool. Defaults to False. If True, the update will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns A mutable Tensor. Has the same type as ref. | tensorflow.raw_ops.scattermin |
tf.raw_ops.ScatterMul Multiplies sparse updates into a variable reference. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ScatterMul
tf.raw_ops.ScatterMul(
ref, indices, updates, use_locking=False, name=None
)
This operation computes # Scalar indices
ref[indices, ...] *= updates[...]
# Vector indices (for each i)
ref[indices[i], ...] *= updates[i, ...]
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] *= updates[i, ..., j, ...]
This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value. Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions multiply. Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
Args
ref A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable node.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
updates A Tensor. Must have the same type as ref. A tensor of updated values to multiply to ref.
use_locking An optional bool. Defaults to False. If True, the operation will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns A mutable Tensor. Has the same type as ref. | tensorflow.raw_ops.scattermul |
tf.raw_ops.ScatterNd Scatter updates into a new tensor according to indices. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ScatterNd
tf.raw_ops.ScatterNd(
indices, updates, shape, name=None
)
Creates a new tensor by applying sparse updates to individual values or slices within a tensor (initially zero for numeric, empty for string) of the given shape according to indices. This operator is the inverse of the tf.gather_nd operator which extracts values or slices from a given tensor. This operation is similar to tensor_scatter_add, except that the tensor is zero-initialized. Calling tf.scatter_nd(indices, values, shape) is identical to tensor_scatter_add(tf.zeros(shape, values.dtype), indices, values) If indices contains duplicates, then their updates are accumulated (summed). Warning: The order in which updates are applied is nondeterministic, so the output will be nondeterministic if indices contains duplicates -- because of some numerical approximation issues, numbers summed in different order may yield different results. indices is an integer tensor containing indices into a new tensor of shape shape. The last dimension of indices can be at most the rank of shape: indices.shape[-1] <= shape.rank
The last dimension of indices corresponds to indices into elements (if indices.shape[-1] = shape.rank) or slices (if indices.shape[-1] < shape.rank) along dimension indices.shape[-1] of shape. updates is a tensor with shape indices.shape[:-1] + shape[indices.shape[-1]:]
The simplest form of scatter is to insert individual elements in a tensor by index. For example, say we want to insert 4 scattered elements in a rank-1 tensor with 8 elements. In Python, this scatter operation would look like this: indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
shape = tf.constant([8])
scatter = tf.scatter_nd(indices, updates, shape)
print(scatter)
The resulting tensor would look like this: [0, 11, 0, 10, 9, 0, 0, 12]
We can also insert entire slices of a higher rank tensor all at once. For example, suppose we want to insert two slices in the first dimension of a rank-3 tensor with two matrices of new values. In Python, this scatter operation would look like this: indices = tf.constant([[0], [2]])
updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],
[7, 7, 7, 7], [8, 8, 8, 8]],
[[5, 5, 5, 5], [6, 6, 6, 6],
[7, 7, 7, 7], [8, 8, 8, 8]]])
shape = tf.constant([4, 4, 4])
scatter = tf.scatter_nd(indices, updates, shape)
print(scatter)
The resulting tensor would look like this: [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],
[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]]
Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, the index is ignored.
Args
indices A Tensor. Must be one of the following types: int32, int64. Index tensor.
updates A Tensor. Updates to scatter into output.
shape A Tensor. Must have the same type as indices. 1-D. The shape of the resulting tensor.
name A name for the operation (optional).
Returns A Tensor. Has the same type as updates. | tensorflow.raw_ops.scatternd |
tf.raw_ops.ScatterNdAdd Applies sparse addition to individual values or slices in a Variable. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ScatterNdAdd
tf.raw_ops.ScatterNdAdd(
ref, indices, updates, use_locking=False, name=None
)
ref is a Tensor with rank P and indices is a Tensor of rank Q. indices must be an integer tensor, containing indices into ref. It must be of shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P. The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref. updates is a Tensor of rank Q-1+P-K with shape: [d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]
For example, say we want to add 4 scattered elements to a rank-1 tensor with 8 elements. In Python (TF1-style graph mode), that addition would look like this: ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
add = tf.scatter_nd_add(ref, indices, updates)
with tf.Session() as sess:
  sess.run(ref.initializer)
  print(sess.run(add))
The resulting update to ref would look like this: [1, 13, 3, 14, 14, 6, 7, 20]
See tf.scatter_nd for more details about how to make updates to slices.
Args
ref A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable node.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into ref.
updates A Tensor. Must have the same type as ref. A tensor of updated values to add to ref.
use_locking An optional bool. Defaults to False. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns A mutable Tensor. Has the same type as ref. | tensorflow.raw_ops.scatterndadd |
tf.raw_ops.ScatterNdMax Computes element-wise maximum. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ScatterNdMax
tf.raw_ops.ScatterNdMax(
ref, indices, updates, use_locking=False, name=None
)
Args
ref A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable node.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into ref.
updates A Tensor. Must have the same type as ref. A tensor of updated values to reduce into ref.
use_locking An optional bool. Defaults to False. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns A mutable Tensor. Has the same type as ref. | tensorflow.raw_ops.scatterndmax |
tf.raw_ops.ScatterNdMin Computes element-wise minimum. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ScatterNdMin
tf.raw_ops.ScatterNdMin(
ref, indices, updates, use_locking=False, name=None
)
Args
ref A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable node.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into ref.
updates A Tensor. Must have the same type as ref. A tensor of updated values to reduce into ref.
use_locking An optional bool. Defaults to False. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns A mutable Tensor. Has the same type as ref. | tensorflow.raw_ops.scatterndmin |
tf.raw_ops.ScatterNdNonAliasingAdd Applies sparse addition to input using individual values or slices View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ScatterNdNonAliasingAdd
tf.raw_ops.ScatterNdNonAliasingAdd(
input, indices, updates, name=None
)
from updates according to indices. The updates are non-aliasing: input is only modified in-place if no other operations will use it. Otherwise, a copy of input is made. This operation has a gradient with respect to both input and updates. input is a Tensor with rank P and indices is a Tensor of rank Q. indices must be an integer tensor, containing indices into input. It must be of shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P. The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or (P-K)-dimensional slices (if K < P) along the Kth dimension of input. updates is a Tensor of rank Q-1+P-K with shape: [d_0, ..., d_{Q-2}, input.shape[K], ..., input.shape[P-1]]. For example, say we want to add 4 scattered elements to a rank-1 tensor with 8 elements. In Python, that addition would look like this: input = tf.constant([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
output = tf.scatter_nd_non_aliasing_add(input, indices, updates)
with tf.Session() as sess:
print(sess.run(output))
The resulting value output would look like this: [1, 13, 3, 14, 14, 6, 7, 20]
See tf.scatter_nd for more details about how to make updates to slices.
Args
input A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64, bool.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into input.
updates A Tensor. Must have the same type as input. A tensor of updated values to add to input.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | tensorflow.raw_ops.scatterndnonaliasingadd |
tf.raw_ops.ScatterNdSub Applies sparse subtraction to individual values or slices in a Variable. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ScatterNdSub
tf.raw_ops.ScatterNdSub(
ref, indices, updates, use_locking=False, name=None
)
within a given variable according to indices. ref is a Tensor with rank P and indices is a Tensor of rank Q. indices must be an integer tensor, containing indices into ref. It must be of shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P. The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref. updates is a Tensor of rank Q-1+P-K with shape: [d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]
For example, say we want to subtract 4 scattered elements from a rank-1 tensor with 8 elements. In Python, that subtraction would look like this: ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
sub = tf.scatter_nd_sub(ref, indices, updates)
with tf.Session() as sess:
  sess.run(ref.initializer)
  print(sess.run(sub))
The resulting update to ref would look like this: [1, -9, 3, -6, -4, 6, 7, -4]
See tf.scatter_nd for more details about how to make updates to slices.
Args
ref A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable node.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into ref.
updates A Tensor. Must have the same type as ref. A tensor of updated values to subtract from ref.
use_locking An optional bool. Defaults to False. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns A mutable Tensor. Has the same type as ref. | tensorflow.raw_ops.scatterndsub |