tf.raw_ops.TruncatedNormal Outputs random values from a truncated normal distribution. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.TruncatedNormal
tf.raw_ops.TruncatedNormal(
shape, dtype, seed=0, seed2=0, name=None
)
The generated values follow a normal distribution with mean 0 and standard deviation 1, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.
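For illustration, a minimal sketch of calling the op directly in eager mode (the shape, dtype, and seed below are arbitrary choices, not values from the original docs):
import tensorflow as tf
# Draw a (2, 3) tensor of standard-normal samples; any draw whose magnitude
# exceeds 2 standard deviations is discarded and re-drawn by the op.
samples = tf.raw_ops.TruncatedNormal(
    shape=tf.constant([2, 3], dtype=tf.int32),
    dtype=tf.float32,
    seed=42)  # non-zero seed -> deterministic seeding of the generator
print(samples.shape)  # (2, 3)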
Args
shape A Tensor. Must be one of the following types: int32, int64. The shape of the output tensor.
dtype A tf.DType from: tf.half, tf.bfloat16, tf.float32, tf.float64. The type of the output.
seed An optional int. Defaults to 0. If either seed or seed2 are set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.
seed2 An optional int. Defaults to 0. A second seed to avoid seed collision.
name A name for the operation (optional).
Returns A Tensor of type dtype. | tensorflow.raw_ops.truncatednormal |
tf.raw_ops.TruncateMod Returns element-wise remainder of division. This emulates C semantics in that View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.TruncateMod
tf.raw_ops.TruncateMod(
x, y, name=None
)
the result here is consistent with a truncating divide. E.g. truncate(x / y) * y + truncate_mod(x, y) = x.
Note: truncatemod supports broadcasting. More about broadcasting here
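A minimal runnable sketch with illustrative values, showing the C-style (truncating) remainder:
import tensorflow as tf
x = tf.constant([7, -7, 7, -7])
y = tf.constant([3, 3, -3, -3])
# truncate(x / y) * y + truncate_mod(x, y) == x, so the result keeps the sign of x.
print(tf.raw_ops.TruncateMod(x=x, y=y).numpy())  # [ 1 -1  1 -1]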
Args
x A Tensor. Must be one of the following types: int32, int64, bfloat16, half, float32, float64.
y A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | tensorflow.raw_ops.truncatemod |
tf.raw_ops.Unbatch Reverses the operation of Batch for a single output Tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.Unbatch
tf.raw_ops.Unbatch(
batched_tensor, batch_index, id, timeout_micros, container='',
shared_name='', name=None
)
An instance of Unbatch either receives an empty batched_tensor, in which case it asynchronously waits until the values become available from a concurrently running instance of Unbatch with the same container and shared_name, or receives a non-empty batched_tensor in which case it finalizes all other concurrently running instances and outputs its own element from the batch. batched_tensor: The possibly transformed output of Batch. The size of the first dimension should remain unchanged by the transformations for the operation to work. batch_index: The matching batch_index obtained from Batch. id: The id scalar emitted by Batch. unbatched_tensor: The Tensor corresponding to this execution. timeout_micros: Maximum amount of time (in microseconds) to wait to receive the batched input tensor associated with a given invocation of the op. container: Container to control resource sharing. shared_name: Instances of Unbatch with the same container and shared_name are assumed to possibly belong to the same batch. If left empty, the op name will be used as the shared name.
Args
batched_tensor A Tensor.
batch_index A Tensor of type int64.
id A Tensor of type int64.
timeout_micros An int.
container An optional string. Defaults to "".
shared_name An optional string. Defaults to "".
name A name for the operation (optional).
Returns A Tensor. Has the same type as batched_tensor. | tensorflow.raw_ops.unbatch |
tf.raw_ops.UnbatchDataset A dataset that splits the elements of its input into multiple elements. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.UnbatchDataset
tf.raw_ops.UnbatchDataset(
input_dataset, output_types, output_shapes, name=None
)
Args
input_dataset A Tensor of type variant.
output_types A list of tf.DTypes that has length >= 1.
output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1.
name A name for the operation (optional).
Returns A Tensor of type variant. | tensorflow.raw_ops.unbatchdataset |
tf.raw_ops.UnbatchGrad Gradient of Unbatch. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.UnbatchGrad
tf.raw_ops.UnbatchGrad(
original_input, batch_index, grad, id, container='',
shared_name='', name=None
)
Acts like Batch but using the given batch_index index of batching things as they become available. This ensures that the gradients are propagated back in the same session which did the forward pass. original_input: The input to the Unbatch operation this is the gradient of. batch_index: The batch_index given to the Unbatch operation this is the gradient of. grad: The downstream gradient. id: The id scalar emitted by Batch. batched_grad: The return value, either an empty tensor or the batched gradient. container: Container to control resource sharing. shared_name: Instances of UnbatchGrad with the same container and shared_name are assumed to possibly belong to the same batch. If left empty, the op name will be used as the shared name.
Args
original_input A Tensor.
batch_index A Tensor of type int64.
grad A Tensor. Must have the same type as original_input.
id A Tensor of type int64.
container An optional string. Defaults to "".
shared_name An optional string. Defaults to "".
name A name for the operation (optional).
Returns A Tensor. Has the same type as original_input. | tensorflow.raw_ops.unbatchgrad |
tf.raw_ops.UncompressElement Uncompresses a compressed dataset element. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.UncompressElement
tf.raw_ops.UncompressElement(
compressed, output_types, output_shapes, name=None
)
Args
compressed A Tensor of type variant.
output_types A list of tf.DTypes that has length >= 1.
output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1.
name A name for the operation (optional).
Returns A list of Tensor objects of type output_types. | tensorflow.raw_ops.uncompresselement |
tf.raw_ops.UnicodeDecode Decodes each string in input into a sequence of Unicode code points. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.UnicodeDecode
tf.raw_ops.UnicodeDecode(
input, input_encoding, errors='replace', replacement_char=65533,
replace_control_characters=False, Tsplits=tf.dtypes.int64, name=None
)
The character codepoints for all strings are returned using a single vector char_values, with strings expanded to characters in row-major order. The row_splits tensor indicates where the codepoints for each input string begin and end within the char_values tensor. In particular, the values for the ith string (in row-major order) are stored in the slice [row_splits[i]:row_splits[i+1]]. Thus:
char_values[row_splits[i]+j] is the Unicode codepoint for the jth character in the ith string (in row-major order).
row_splits[i+1] - row_splits[i] is the number of characters in the ith string (in row-major order).
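A minimal runnable sketch (the input strings are illustrative), showing how row_splits delimits each string's codepoints within char_values:
import tensorflow as tf
result = tf.raw_ops.UnicodeDecode(
    input=tf.constant(["Hi", "héllo"]), input_encoding="UTF-8")
print(result.row_splits.numpy())   # [0 2 7] -> "Hi" owns slice [0:2], "héllo" owns [2:7]
print(result.char_values.numpy())  # [ 72 105 104 233 108 108 111]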
Args
input A Tensor of type string. The text to be decoded. Can have any shape. Note that the output is flattened to a vector of char values.
input_encoding A string. Text encoding of the input strings. This is any of the encodings supported by ICU ucnv algorithmic converters. Examples: "UTF-16", "US ASCII", "UTF-8".
errors An optional string from: "strict", "replace", "ignore". Defaults to "replace". Error handling policy when there is invalid formatting found in the input. The value of 'strict' will cause the operation to produce an InvalidArgument error on any invalid input formatting. A value of 'replace' (the default) will cause the operation to replace any invalid formatting in the input with the replacement_char codepoint. A value of 'ignore' will cause the operation to skip any invalid formatting in the input and produce no corresponding output character.
replacement_char An optional int. Defaults to 65533. The replacement character codepoint to be used in place of any invalid formatting in the input when errors='replace'. Any valid Unicode codepoint may be used. The default is the Unicode replacement character, 0xFFFD (decimal 65533).
replace_control_characters An optional bool. Defaults to False. Whether to replace the C0 control characters (00-1F) with the replacement_char. Default is false.
Tsplits An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int64.
name A name for the operation (optional).
Returns A tuple of Tensor objects (row_splits, char_values). row_splits A Tensor of type Tsplits.
char_values A Tensor of type int32. | tensorflow.raw_ops.unicodedecode |
tf.raw_ops.UnicodeDecodeWithOffsets Decodes each string in input into a sequence of Unicode code points. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.UnicodeDecodeWithOffsets
tf.raw_ops.UnicodeDecodeWithOffsets(
input, input_encoding, errors='replace', replacement_char=65533,
replace_control_characters=False, Tsplits=tf.dtypes.int64, name=None
)
The character codepoints for all strings are returned using a single vector char_values, with strings expanded to characters in row-major order. Similarly, the character start byte offsets are returned using a single vector char_to_byte_starts, with strings expanded in row-major order. The row_splits tensor indicates where the codepoints and start offsets for each input string begin and end within the char_values and char_to_byte_starts tensors. In particular, the values for the ith string (in row-major order) are stored in the slice [row_splits[i]:row_splits[i+1]]. Thus:
char_values[row_splits[i]+j] is the Unicode codepoint for the jth character in the ith string (in row-major order).
char_to_bytes_starts[row_splits[i]+j] is the start byte offset for the jth character in the ith string (in row-major order).
row_splits[i+1] - row_splits[i] is the number of characters in the ith string (in row-major order).
Args
input A Tensor of type string. The text to be decoded. Can have any shape. Note that the output is flattened to a vector of char values.
input_encoding A string. Text encoding of the input strings. This is any of the encodings supported by ICU ucnv algorithmic converters. Examples: "UTF-16", "US ASCII", "UTF-8".
errors An optional string from: "strict", "replace", "ignore". Defaults to "replace". Error handling policy when there is invalid formatting found in the input. The value of 'strict' will cause the operation to produce an InvalidArgument error on any invalid input formatting. A value of 'replace' (the default) will cause the operation to replace any invalid formatting in the input with the replacement_char codepoint. A value of 'ignore' will cause the operation to skip any invalid formatting in the input and produce no corresponding output character.
replacement_char An optional int. Defaults to 65533. The replacement character codepoint to be used in place of any invalid formatting in the input when errors='replace'. Any valid Unicode codepoint may be used. The default is the Unicode replacement character, 0xFFFD (decimal 65533).
replace_control_characters An optional bool. Defaults to False. Whether to replace the C0 control characters (00-1F) with the replacement_char. Default is false.
Tsplits An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int64.
name A name for the operation (optional).
Returns A tuple of Tensor objects (row_splits, char_values, char_to_byte_starts). row_splits A Tensor of type Tsplits.
char_values A Tensor of type int32.
char_to_byte_starts A Tensor of type int64. | tensorflow.raw_ops.unicodedecodewithoffsets |
tf.raw_ops.UnicodeEncode Encode a tensor of ints into unicode strings. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.UnicodeEncode
tf.raw_ops.UnicodeEncode(
input_values, input_splits, output_encoding, errors='replace',
replacement_char=65533, name=None
)
Returns a vector of strings, where output[i] is constructed by encoding the Unicode codepoints in input_values[input_splits[i]:input_splits[i+1]] using output_encoding. Example: input_values = [72, 101, 108, 108, 111, 87, 111, 114, 108, 100]
input_splits = [0, 5, 10]
output_encoding = 'UTF-8'
output = ['Hello', 'World']
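The same example expressed as a runnable call to the raw op (a minimal sketch; the dtypes below are chosen to satisfy the op's signature):
import tensorflow as tf
output = tf.raw_ops.UnicodeEncode(
    input_values=tf.constant([72, 101, 108, 108, 111, 87, 111, 114, 108, 100], tf.int32),
    input_splits=tf.constant([0, 5, 10], tf.int64),
    output_encoding="UTF-8")
print(output.numpy())  # [b'Hello' b'World']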
Args
input_values A Tensor of type int32. A 1D tensor containing the unicode codepoints that should be encoded.
input_splits A Tensor. Must be one of the following types: int32, int64. A 1D tensor specifying how the unicode codepoints should be split into strings. In particular, output[i] is constructed by encoding the codepoints in the slice input_values[input_splits[i]:input_splits[i+1]].
output_encoding A string from: "UTF-8", "UTF-16-BE", "UTF-32-BE". Unicode encoding of the output strings. Valid encodings are: "UTF-8", "UTF-16-BE", and "UTF-32-BE".
errors An optional string from: "ignore", "replace", "strict". Defaults to "replace". Error handling policy when there is invalid formatting found in the input. The value of 'strict' will cause the operation to produce an InvalidArgument error on any invalid input formatting. A value of 'replace' (the default) will cause the operation to replace any invalid formatting in the input with the replacement_char codepoint. A value of 'ignore' will cause the operation to skip any invalid formatting in the input and produce no corresponding output character.
replacement_char An optional int. Defaults to 65533. The replacement character codepoint to be used in place of any invalid formatting in the input when errors='replace'. Any valid Unicode codepoint may be used. The default is the Unicode replacement character, 0xFFFD (decimal 65533).
name A name for the operation (optional).
Returns A Tensor of type string. | tensorflow.raw_ops.unicodeencode |
tf.raw_ops.UnicodeScript Determine the script codes of a given tensor of Unicode integer code points. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.UnicodeScript
tf.raw_ops.UnicodeScript(
input, name=None
)
This operation converts Unicode code points to script codes corresponding to each code point. Script codes correspond to International Components for Unicode (ICU) UScriptCode values. See ICU project docs for more details on script codes. For an example, see the unicode strings guide on unicode scripts. Returns -1 (USCRIPT_INVALID_CODE) for invalid codepoints. Output shape will match input shape. Examples:
tf.strings.unicode_script([1, 31, 38])
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([0, 0, 0], dtype=int32)>
Args
input A Tensor of type int32. A Tensor of int32 Unicode code points.
name A name for the operation (optional).
Returns A Tensor of type int32. | tensorflow.raw_ops.unicodescript |
tf.raw_ops.UnicodeTranscode Transcode the input text from a source encoding to a destination encoding. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.UnicodeTranscode
tf.raw_ops.UnicodeTranscode(
input, input_encoding, output_encoding, errors='replace',
replacement_char=65533, replace_control_characters=False, name=None
)
The input is a string tensor of any shape. The output is a string tensor of the same shape containing the transcoded strings. Output strings are always valid unicode. If the input contains invalid encoding positions, the errors attribute sets the policy for how to deal with them. If the default error-handling policy is used, invalid formatting will be substituted in the output by the replacement_char. If the errors policy is to ignore, any invalid encoding positions in the input are skipped and not included in the output. If it is set to strict then any invalid formatting will result in an InvalidArgument error. This operation can be used with output_encoding = input_encoding to enforce correct formatting for inputs even if they are already in the desired encoding. If the input is prefixed by a Byte Order Mark needed to determine encoding (e.g. if the encoding is UTF-16 and the BOM indicates big-endian), then that BOM will be consumed and not emitted into the output. If the input encoding is marked with an explicit endianness (e.g. UTF-16-BE), then the BOM is interpreted as a non-breaking-space and is preserved in the output (including always for UTF-8). The end result is that if the input is marked as an explicit endianness the transcoding is faithful to all codepoints in the source. If it is not marked with an explicit endianness, the BOM is not considered part of the string itself but as metadata, and so is not preserved in the output. Examples:
tf.strings.unicode_transcode(["Hello", "TensorFlow", "2.x"], "UTF-8", "UTF-16-BE")
<tf.Tensor: shape=(3,), dtype=string, numpy=
array([b'\x00H\x00e\x00l\x00l\x00o',
b'\x00T\x00e\x00n\x00s\x00o\x00r\x00F\x00l\x00o\x00w',
b'\x002\x00.\x00x'], dtype=object)>
tf.strings.unicode_transcode(["A", "B", "C"], "US ASCII", "UTF-8").numpy()
array([b'A', b'B', b'C'], dtype=object)
Args
input A Tensor of type string. The text to be processed. Can have any shape.
input_encoding A string. Text encoding of the input strings. This is any of the encodings supported by ICU ucnv algorithmic converters. Examples: "UTF-16", "US ASCII", "UTF-8".
output_encoding A string from: "UTF-8", "UTF-16-BE", "UTF-32-BE". The unicode encoding to use in the output. Must be one of "UTF-8", "UTF-16-BE", "UTF-32-BE". Multi-byte encodings will be big-endian.
errors An optional string from: "strict", "replace", "ignore". Defaults to "replace". Error handling policy when there is invalid formatting found in the input. The value of 'strict' will cause the operation to produce an InvalidArgument error on any invalid input formatting. A value of 'replace' (the default) will cause the operation to replace any invalid formatting in the input with the replacement_char codepoint. A value of 'ignore' will cause the operation to skip any invalid formatting in the input and produce no corresponding output character.
replacement_char An optional int. Defaults to 65533. The replacement character codepoint to be used in place of any invalid formatting in the input when errors='replace'. Any valid Unicode codepoint may be used. The default is the Unicode replacement character, 0xFFFD (decimal 65533). Note that for UTF-8, passing a replacement character expressible in 1 byte, such as ' ', will preserve string alignment to the source since invalid bytes will be replaced with a 1-byte replacement. For UTF-16-BE and UTF-16-LE, any 1 or 2 byte replacement character will preserve byte alignment to the source.
replace_control_characters An optional bool. Defaults to False. Whether to replace the C0 control characters (00-1F) with the replacement_char. Default is false.
name A name for the operation (optional).
Returns A Tensor of type string. | tensorflow.raw_ops.unicodetranscode |
tf.raw_ops.UniformCandidateSampler Generates labels for candidate sampling with a uniform distribution. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.UniformCandidateSampler
tf.raw_ops.UniformCandidateSampler(
true_classes, num_true, num_sampled, unique, range_max, seed=0, seed2=0,
name=None
)
See explanations of candidate sampling and the data formats at go/candidate-sampling. For each batch, this op picks a single set of sampled candidate labels. The advantages of sampling candidates per-batch are simplicity and the possibility of efficient dense matrix multiplication. The disadvantage is that the sampled candidates must be chosen independently of the context and of the true labels.
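A minimal sketch (a batch of two examples with one true label each; all numbers are arbitrary choices for illustration):
import tensorflow as tf
out = tf.raw_ops.UniformCandidateSampler(
    true_classes=tf.constant([[1], [7]], dtype=tf.int64),
    num_true=1, num_sampled=3, unique=True, range_max=10, seed=2)
print(out.sampled_candidates.numpy())      # three distinct ids drawn uniformly from [0, 10)
print(out.sampled_expected_count.numpy())  # expected count of each sampled id under this sampler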
Args
true_classes A Tensor of type int64. A batch_size * num_true matrix, in which each row contains the IDs of the num_true target_classes in the corresponding original label.
num_true An int that is >= 1. Number of true labels per context.
num_sampled An int that is >= 1. Number of candidates to randomly sample.
unique A bool. If unique is true, we sample with rejection, so that all sampled candidates in a batch are unique. This requires some approximation to estimate the post-rejection sampling probabilities.
range_max An int that is >= 1. The sampler will sample integers from the interval [0, range_max).
seed An optional int. Defaults to 0. If either seed or seed2 are set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.
seed2 An optional int. Defaults to 0. A second seed to avoid seed collision.
name A name for the operation (optional).
Returns A tuple of Tensor objects (sampled_candidates, true_expected_count, sampled_expected_count). sampled_candidates A Tensor of type int64.
true_expected_count A Tensor of type float32.
sampled_expected_count A Tensor of type float32. | tensorflow.raw_ops.uniformcandidatesampler |
tf.raw_ops.Unique Finds unique elements in a 1-D tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.Unique
tf.raw_ops.Unique(
x, out_idx=tf.dtypes.int32, name=None
)
This operation returns a tensor y containing all of the unique elements of x sorted in the same order that they occur in x; x does not need to be sorted. This operation also returns a tensor idx the same size as x that contains the index of each value of x in the unique output y. In other words: y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1] Examples: # tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
y, idx = unique(x)
y ==> [1, 2, 4, 7, 8]
idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
# tensor 'x' is [4, 5, 1, 2, 3, 3, 4, 5]
y, idx = unique(x)
y ==> [4, 5, 1, 2, 3]
idx ==> [0, 1, 2, 3, 4, 4, 0, 1]
Args
x A Tensor. 1-D.
out_idx An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int32.
name A name for the operation (optional).
Returns A tuple of Tensor objects (y, idx). y A Tensor. Has the same type as x.
idx A Tensor of type out_idx. | tensorflow.raw_ops.unique |
tf.raw_ops.UniqueDataset Creates a dataset that contains the unique elements of input_dataset. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.UniqueDataset
tf.raw_ops.UniqueDataset(
input_dataset, output_types, output_shapes, name=None
)
Args
input_dataset A Tensor of type variant.
output_types A list of tf.DTypes that has length >= 1.
output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1.
name A name for the operation (optional).
Returns A Tensor of type variant. | tensorflow.raw_ops.uniquedataset |
tf.raw_ops.UniqueV2 Finds unique elements along an axis of a tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.UniqueV2
tf.raw_ops.UniqueV2(
x, axis, out_idx=tf.dtypes.int32, name=None
)
This operation returns a tensor y containing the unique elements of x along the given axis. The returned unique elements are sorted in the same order as they occur along axis in x. This operation also returns a tensor idx that is the same size as the number of elements in x along the axis dimension; it contains the index into the unique output y. In other words, for a 1-D tensor x with axis = None: y[idx[i]] = x[i] for i in [0, 1, ..., rank(x) - 1] For example: # tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
y, idx = unique(x)
y ==> [1, 2, 4, 7, 8]
idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
For a 2-D tensor x with axis = 0: # tensor 'x' is [[1, 0, 0],
# [1, 0, 0],
# [2, 0, 0]]
y, idx = unique(x, axis=0)
y ==> [[1, 0, 0],
[2, 0, 0]]
idx ==> [0, 0, 1]
For a 2-D tensor x with axis = 1: # tensor 'x' is [[1, 0, 0],
# [1, 0, 0],
# [2, 0, 0]]
y, idx = unique(x, axis=1)
y ==> [[1, 0],
[1, 0],
[2, 0]]
idx ==> [0, 1, 1]
Args
x A Tensor. A Tensor.
axis A Tensor. Must be one of the following types: int32, int64. A Tensor of type int32 (default: None). The axis of the Tensor to find the unique elements.
out_idx An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int32.
name A name for the operation (optional).
Returns A tuple of Tensor objects (y, idx). y A Tensor. Has the same type as x.
idx A Tensor of type out_idx. | tensorflow.raw_ops.uniquev2 |
tf.raw_ops.UniqueWithCounts Finds unique elements in a 1-D tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.UniqueWithCounts
tf.raw_ops.UniqueWithCounts(
x, out_idx=tf.dtypes.int32, name=None
)
This operation returns a tensor y containing all of the unique elements of x sorted in the same order that they occur in x. This operation also returns a tensor idx the same size as x that contains the index of each value of x in the unique output y. Finally, it returns a third tensor count that contains the count of each element of y in x. In other words: y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1] For example: # tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
y, idx, count = unique_with_counts(x)
y ==> [1, 2, 4, 7, 8]
idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
count ==> [2, 1, 3, 1, 2]
Args
x A Tensor. 1-D.
out_idx An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int32.
name A name for the operation (optional).
Returns A tuple of Tensor objects (y, idx, count). y A Tensor. Has the same type as x.
idx A Tensor of type out_idx.
count A Tensor of type out_idx. | tensorflow.raw_ops.uniquewithcounts |
tf.raw_ops.UniqueWithCountsV2 Finds unique elements along an axis of a tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.UniqueWithCountsV2
tf.raw_ops.UniqueWithCountsV2(
x, axis, out_idx=tf.dtypes.int32, name=None
)
This operation returns a tensor y containing the unique elements of x along the given axis. The returned unique elements are sorted in the same order as they occur along axis in x. This operation also returns a tensor idx and a tensor count that are the same size as the number of elements in x along the axis dimension. The idx tensor contains the index into the unique output y, and count contains the number of occurrences of each unique element. In other words, for a 1-D tensor x with axis = None: y[idx[i]] = x[i] for i in [0, 1, ..., rank(x) - 1] For example: # tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
y, idx, count = unique_with_counts(x)
y ==> [1, 2, 4, 7, 8]
idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
count ==> [2, 1, 3, 1, 2]
For a 2-D tensor x with axis = 0: # tensor 'x' is [[1, 0, 0],
# [1, 0, 0],
# [2, 0, 0]]
y, idx, count = unique_with_counts(x, axis=0)
y ==> [[1, 0, 0],
[2, 0, 0]]
idx ==> [0, 0, 1]
count ==> [2, 1]
For a 2-D tensor x with axis = 1: # tensor 'x' is [[1, 0, 0],
# [1, 0, 0],
# [2, 0, 0]]
y, idx, count = unique_with_counts(x, axis=1)
y ==> [[1, 0],
[1, 0],
[2, 0]]
idx ==> [0, 1, 1]
count ==> [1, 2]
Args
x A Tensor. A Tensor.
axis A Tensor. Must be one of the following types: int32, int64. A Tensor of type int32 (default: None). The axis of the Tensor to find the unique elements.
out_idx An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int32.
name A name for the operation (optional).
Returns A tuple of Tensor objects (y, idx, count). y A Tensor. Has the same type as x.
idx A Tensor of type out_idx.
count A Tensor of type out_idx. | tensorflow.raw_ops.uniquewithcountsv2 |
tf.raw_ops.Unpack Unpacks a given dimension of a rank-R tensor into num rank-(R-1) tensors. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.Unpack
tf.raw_ops.Unpack(
value, num, axis=0, name=None
)
Unpacks num tensors from value by chipping it along the axis dimension. For example, given a tensor of shape (A, B, C, D); If axis == 0 then the i'th tensor in output is the slice value[i, :, :, :] and each tensor in output will have shape (B, C, D). (Note that the dimension unpacked along is gone, unlike split). If axis == 1 then the i'th tensor in output is the slice value[:, i, :, :] and each tensor in output will have shape (A, C, D). Etc. This is the opposite of pack.
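A minimal runnable sketch (values are illustrative), unpacking a (2, 3) tensor along axis 0:
import tensorflow as tf
value = tf.constant([[1, 2, 3], [4, 5, 6]])
parts = tf.raw_ops.Unpack(value=value, num=2, axis=0)  # num must equal value.shape[axis]
print([p.numpy().tolist() for p in parts])  # [[1, 2, 3], [4, 5, 6]]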
Args
value A Tensor. 1-D or higher, with axis dimension size equal to num.
num An int that is >= 0.
axis An optional int. Defaults to 0. Dimension along which to unpack. Negative values wrap around, so the valid range is [-R, R).
name A name for the operation (optional).
Returns A list of num Tensor objects with the same type as value. | tensorflow.raw_ops.unpack |
tf.raw_ops.UnravelIndex Converts an array of flat indices into a tuple of coordinate arrays. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.UnravelIndex
tf.raw_ops.UnravelIndex(
indices, dims, name=None
)
Example: y = tf.unravel_index(indices=[2, 5, 7], dims=[3, 3])
# 'dims' represent a hypothetical (3, 3) tensor of indices:
# [[0, 1, *2*],
# [3, 4, *5*],
# [6, *7*, 8]]
# For each entry from 'indices', this operation returns
# its coordinates (marked with '*'), such as
# 2 ==> (0, 2)
# 5 ==> (1, 2)
# 7 ==> (2, 1)
y ==> [[0, 1, 2], [2, 2, 1]]
Args
indices A Tensor. Must be one of the following types: int32, int64. A 0-D or 1-D int Tensor whose elements are indices into the flattened version of an array of dimensions dims.
dims A Tensor. Must have the same type as indices. A 1-D int Tensor. The shape of the array to use for unraveling indices.
name A name for the operation (optional).
Returns A Tensor. Has the same type as indices.
Numpy Compatibility Equivalent to np.unravel_index | tensorflow.raw_ops.unravelindex |
tf.raw_ops.UnsortedSegmentJoin Joins the elements of inputs based on segment_ids. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.UnsortedSegmentJoin
tf.raw_ops.UnsortedSegmentJoin(
inputs, segment_ids, num_segments, separator='', name=None
)
Computes the string join along segments of a tensor. Given segment_ids with rank N and data with rank N+M: `output[i, k1...kM] = strings.join([data[j1...jN, k1...kM]])`
where the join is over all [j1...jN] such that segment_ids[j1...jN] = i. Strings are joined in row-major order. For example: inputs = [['Y', 'q', 'c'], ['Y', '6', '6'], ['p', 'G', 'a']]
output_array = string_ops.unsorted_segment_join(inputs=inputs,
segment_ids=[1, 0, 1],
num_segments=2,
separator=':')
# output_array ==> [['Y', '6', '6'], ['Y:p', 'q:G', 'c:a']]
inputs = ['this', 'is', 'a', 'test']
output_array = string_ops.unsorted_segment_join(inputs=inputs,
segment_ids=[0, 0, 0, 0],
num_segments=1,
separator=':')
# output_array ==> ['this:is:a:test']
Args
inputs A Tensor of type string. The input to be joined.
segment_ids A Tensor. Must be one of the following types: int32, int64. A tensor whose shape is a prefix of data.shape. Negative segment ids are not supported.
num_segments A Tensor. Must be one of the following types: int32, int64. A scalar.
separator An optional string. Defaults to "". The separator to use when joining.
name A name for the operation (optional).
Returns A Tensor of type string. | tensorflow.raw_ops.unsortedsegmentjoin |
tf.raw_ops.UnsortedSegmentMax Computes the maximum along segments of a tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.UnsortedSegmentMax
tf.raw_ops.UnsortedSegmentMax(
data, segment_ids, num_segments, name=None
)
Read the section on segmentation for an explanation of segments. This operator is similar to the unsorted segment sum operator found (here). Instead of computing the sum over segments, it computes the maximum such that: \(output_i = \max_{j...} data[j...]\) where max is over tuples j... such that segment_ids[j...] == i. If the maximum is empty for a given segment ID i, it outputs the smallest possible value for the specific numeric type, output[i] = numeric_limits<T>::lowest(). If the given segment ID i is negative, then the corresponding value is dropped, and will not be included in the result. For example: c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])
tf.unsorted_segment_max(c, tf.constant([0, 1, 0]), num_segments=2)
# ==> [[ 4, 3, 3, 4],
# [5, 6, 7, 8]]
Args
data A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
segment_ids A Tensor. Must be one of the following types: int32, int64. A tensor whose shape is a prefix of data.shape.
num_segments A Tensor. Must be one of the following types: int32, int64.
name A name for the operation (optional).
Returns A Tensor. Has the same type as data. | tensorflow.raw_ops.unsortedsegmentmax |
tf.raw_ops.UnsortedSegmentMin Computes the minimum along segments of a tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.UnsortedSegmentMin
tf.raw_ops.UnsortedSegmentMin(
data, segment_ids, num_segments, name=None
)
Read the section on segmentation for an explanation of segments. This operator is similar to the unsorted segment sum operator found (here). Instead of computing the sum over segments, it computes the minimum such that: \(output_i = \min_{j...} data[j...]\) where min is over tuples j... such that segment_ids[j...] == i. If the minimum is empty for a given segment ID i, it outputs the largest possible value for the specific numeric type, output[i] = numeric_limits<T>::max(). For example: c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])
tf.unsorted_segment_min(c, tf.constant([0, 1, 0]), num_segments=2)
# ==> [[ 1, 2, 2, 1],
# [5, 6, 7, 8]]
If the given segment ID i is negative, then the corresponding value is dropped, and will not be included in the result.
Args
data A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
segment_ids A Tensor. Must be one of the following types: int32, int64. A tensor whose shape is a prefix of data.shape.
num_segments A Tensor. Must be one of the following types: int32, int64.
name A name for the operation (optional).
Returns A Tensor. Has the same type as data. | tensorflow.raw_ops.unsortedsegmentmin |
tf.raw_ops.UnsortedSegmentProd Computes the product along segments of a tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.UnsortedSegmentProd
tf.raw_ops.UnsortedSegmentProd(
data, segment_ids, num_segments, name=None
)
Read the section on segmentation for an explanation of segments. This operator is similar to the unsorted segment sum operator found (here). Instead of computing the sum over segments, it computes the product of all entries belonging to a segment such that: \(output_i = \prod_{j...} data[j...]\) where the product is over tuples j... such that segment_ids[j...] == i. For example: c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])
tf.unsorted_segment_prod(c, tf.constant([0, 1, 0]), num_segments=2)
# ==> [[ 4, 6, 6, 4],
# [5, 6, 7, 8]]
If there is no entry for a given segment ID i, it outputs 1. If the given segment ID i is negative, then the corresponding value is dropped, and will not be included in the result.
Args
data A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64.
segment_ids A Tensor. Must be one of the following types: int32, int64. A tensor whose shape is a prefix of data.shape.
num_segments A Tensor. Must be one of the following types: int32, int64.
name A name for the operation (optional).
Returns A Tensor. Has the same type as data. | tensorflow.raw_ops.unsortedsegmentprod |
tf.raw_ops.UnsortedSegmentSum Computes the sum along segments of a tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.UnsortedSegmentSum
tf.raw_ops.UnsortedSegmentSum(
data, segment_ids, num_segments, name=None
)
Read the section on segmentation for an explanation of segments. Computes a tensor such that \(output[i] = \sum_{j...} data[j...]\) where the sum is over tuples j... such that segment_ids[j...] == i. Unlike SegmentSum, segment_ids need not be sorted and need not cover all values in the full range of valid values. If the sum is empty for a given segment ID i, output[i] = 0. If the given segment ID i is negative, the value is dropped and will not be added to the sum of the segment. num_segments should equal the number of distinct segment IDs. c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])
tf.unsorted_segment_sum(c, tf.constant([0, 1, 0]), num_segments=2)
# ==> [[ 5, 5, 5, 5],
# [5, 6, 7, 8]]
Args
data A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64.
segment_ids A Tensor. Must be one of the following types: int32, int64. A tensor whose shape is a prefix of data.shape.
num_segments A Tensor. Must be one of the following types: int32, int64.
name A name for the operation (optional).
Returns A Tensor. Has the same type as data. | tensorflow.raw_ops.unsortedsegmentsum |
tf.raw_ops.Unstage Op is similar to a lightweight Dequeue. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.Unstage
tf.raw_ops.Unstage(
dtypes, capacity=0, memory_limit=0, container='',
shared_name='', name=None
)
The basic functionality is similar to dequeue with many fewer capabilities and options. This Op is optimized for performance.
Args
dtypes A list of tf.DTypes that has length >= 1.
capacity An optional int that is >= 0. Defaults to 0.
memory_limit An optional int that is >= 0. Defaults to 0.
container An optional string. Defaults to "".
shared_name An optional string. Defaults to "".
name A name for the operation (optional).
Returns A list of Tensor objects of type dtypes. | tensorflow.raw_ops.unstage |
tf.raw_ops.UnwrapDatasetVariant View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.UnwrapDatasetVariant
tf.raw_ops.UnwrapDatasetVariant(
input_handle, name=None
)
Args
input_handle A Tensor of type variant.
name A name for the operation (optional).
Returns A Tensor of type variant. | tensorflow.raw_ops.unwrapdatasetvariant |
tf.raw_ops.UpperBound Applies upper_bound(sorted_search_values, values) along each row. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.UpperBound
tf.raw_ops.UpperBound(
sorted_inputs, values, out_type=tf.dtypes.int32, name=None
)
Each set of rows with the same index in (sorted_inputs, values) is treated independently. The resulting row is the equivalent of calling np.searchsorted(sorted_inputs, values, side='right'). The result is not a global index to the entire Tensor, but rather just the index in the last dimension. A 2-D example:
sorted_sequence = [[0, 3, 9, 9, 10], [1, 2, 3, 4, 5]]
values = [[2, 4, 9], [0, 2, 6]]
result = UpperBound(sorted_sequence, values)
result == [[1, 2, 4], [0, 2, 5]]
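The same example as a runnable call (a minimal sketch):
import tensorflow as tf
result = tf.raw_ops.UpperBound(
    sorted_inputs=tf.constant([[0, 3, 9, 9, 10], [1, 2, 3, 4, 5]]),
    values=tf.constant([[2, 4, 9], [0, 2, 6]]))
print(result.numpy())  # [[1 2 4]
                       #  [0 2 5]]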
Args
sorted_inputs A Tensor. 2-D Tensor where each row is ordered.
values A Tensor. Must have the same type as sorted_inputs. 2-D Tensor with the same numbers of rows as sorted_search_values. Contains the values that will be searched for in sorted_search_values.
out_type An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int32.
name A name for the operation (optional).
Returns A Tensor of type out_type. | tensorflow.raw_ops.upperbound |
tf.raw_ops.VarHandleOp Creates a handle to a Variable resource. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.VarHandleOp
tf.raw_ops.VarHandleOp(
dtype, shape, container='', shared_name='',
allowed_devices=[], name=None
)
Args
dtype A tf.DType. the type of this variable. Must agree with the dtypes of all ops using this variable.
shape A tf.TensorShape or list of ints. The (possibly partially specified) shape of this variable.
container An optional string. Defaults to "". the container this variable is placed in.
shared_name An optional string. Defaults to "". the name by which this variable is referred to.
allowed_devices An optional list of strings. Defaults to []. DEPRECATED. The allowed devices containing the resource variable. Set when the output ResourceHandle represents a per-replica/partitioned resource variable.
name A name for the operation (optional).
Returns A Tensor of type resource. | tensorflow.raw_ops.varhandleop |
tf.raw_ops.Variable Use VariableV2 instead. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.Variable
tf.raw_ops.Variable(
shape, dtype, container='', shared_name='', name=None
)
Args
shape A tf.TensorShape or list of ints.
dtype A tf.DType.
container An optional string. Defaults to "".
shared_name An optional string. Defaults to "".
name A name for the operation (optional).
Returns A mutable Tensor of type dtype. | tensorflow.raw_ops.variable |
tf.raw_ops.VariableShape Returns the shape of the variable pointed to by resource. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.VariableShape
tf.raw_ops.VariableShape(
input, out_type=tf.dtypes.int32, name=None
)
This operation returns a 1-D integer tensor representing the shape of input. For example: # 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
shape(t) ==> [2, 2, 3]
Args
input A Tensor of type resource.
out_type An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int32.
name A name for the operation (optional).
Returns A Tensor of type out_type. | tensorflow.raw_ops.variableshape |
tf.raw_ops.VariableV2 Holds state in the form of a tensor that persists across steps. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.VariableV2
tf.raw_ops.VariableV2(
shape, dtype, container='', shared_name='', name=None
)
Outputs a ref to the tensor state so it may be read or modified.
Args
shape A tf.TensorShape or list of ints. The shape of the variable tensor.
dtype A tf.DType. The type of elements in the variable tensor.
container An optional string. Defaults to "". If non-empty, this variable is placed in the given container. Otherwise, a default container is used.
shared_name An optional string. Defaults to "". If non-empty, this variable is named in the given bucket with this shared_name. Otherwise, the node name is used instead.
name A name for the operation (optional).
Returns A mutable Tensor of type dtype. | tensorflow.raw_ops.variablev2 |
tf.raw_ops.VarIsInitializedOp Checks whether a resource handle-based variable has been initialized. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.VarIsInitializedOp
tf.raw_ops.VarIsInitializedOp(
resource, name=None
)
Args
resource A Tensor of type resource. the input resource handle.
name A name for the operation (optional).
Returns A Tensor of type bool. | tensorflow.raw_ops.varisinitializedop |
tf.raw_ops.Where Returns locations of nonzero / true values in a tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.Where
tf.raw_ops.Where(
condition, name=None
)
This operation returns the coordinates of true elements in condition. The coordinates are returned in a 2-D tensor where the first dimension (rows) represents the number of true elements, and the second dimension (columns) represents the coordinates of the true elements. Keep in mind, the shape of the output tensor can vary depending on how many true values there are in condition. Indices are output in row-major order. For example: # 'input' tensor is [[True, False]
# [True, False]]
# 'input' has two true values, so output has two coordinates.
# 'input' has rank of 2, so coordinates have two indices.
where(input) ==> [[0, 0],
[1, 0]]
# `condition` tensor is [[[True, False]
# [True, False]]
# [[False, True]
# [False, True]]
# [[False, False]
# [False, True]]]
# 'input' has 5 true values, so output has 5 coordinates.
# 'input' has rank of 3, so coordinates have three indices.
where(input) ==> [[0, 0, 0],
[0, 1, 0],
[1, 0, 1],
[1, 1, 1],
[2, 1, 1]]
# `condition` tensor is [[[1.5, 0.0]
# [-0.5, 0.0]]
# [[0.0, 0.25]
# [0.0, 0.75]]
# [[0.0, 0.0]
# [0.0, 0.01]]]
# 'input' has 5 nonzero values, so output has 5 coordinates.
# 'input' has rank of 3, so coordinates have three indices.
where(input) ==> [[0, 0, 0],
[0, 1, 0],
[1, 0, 1],
[1, 1, 1],
[2, 1, 1]]
# `condition` tensor is [[[1.5 + 0.0j, 0.0 + 0.0j]
# [0.0 + 0.5j, 0.0 + 0.0j]]
# [[0.0 + 0.0j, 0.25 + 1.5j]
# [0.0 + 0.0j, 0.75 + 0.0j]]
# [[0.0 + 0.0j, 0.0 + 0.0j]
# [0.0 + 0.0j, 0.01 + 0.0j]]]
# 'input' has 5 nonzero magnitude values, so output has 5 coordinates.
# 'input' has rank of 3, so coordinates have three indices.
where(input) ==> [[0, 0, 0],
[0, 1, 0],
[1, 0, 1],
[1, 1, 1],
[2, 1, 1]]
Args
condition A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64, bool.
name A name for the operation (optional).
Returns A Tensor of type int64. | tensorflow.raw_ops.where |
tf.raw_ops.While output = input; While (Cond(output)) { output = Body(output) } View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.While
tf.raw_ops.While(
input, cond, body, output_shapes=[], parallel_iterations=10, name=None
)
Args
input A list of Tensor objects. A list of input tensors whose types are T.
cond A function decorated with @Defun. A function that takes 'input' and returns a tensor. If the tensor is a non-boolean scalar, it is converted to a boolean according to the following rule: if the scalar is a numerical value, non-zero means True and zero means False; if the scalar is a string, non-empty means True and empty means False. If the tensor is not a scalar, non-emptiness means True and emptiness means False.
body A function decorated with @Defun. A function that takes a list of tensors and returns another list of tensors. Both lists have the same types as specified by T.
output_shapes An optional list of shapes (each a tf.TensorShape or list of ints). Defaults to [].
parallel_iterations An optional int. Defaults to 10.
name A name for the operation (optional).
Returns A list of Tensor objects. Has the same type as input. | tensorflow.raw_ops.while |
tf.raw_ops.WholeFileReader A Reader that outputs the entire contents of a file as a value. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.WholeFileReader
tf.raw_ops.WholeFileReader(
container='', shared_name='', name=None
)
To use, enqueue filenames in a Queue. The output of ReaderRead will be a filename (key) and the contents of that file (value).
Args
container An optional string. Defaults to "". If non-empty, this reader is placed in the given container. Otherwise, a default container is used.
shared_name An optional string. Defaults to "". If non-empty, this reader is named in the given bucket with this shared_name. Otherwise, the node name is used instead.
name A name for the operation (optional).
Returns A Tensor of type mutable string. | tensorflow.raw_ops.wholefilereader |
tf.raw_ops.WholeFileReaderV2 A Reader that outputs the entire contents of a file as a value. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.WholeFileReaderV2
tf.raw_ops.WholeFileReaderV2(
container='', shared_name='', name=None
)
To use, enqueue filenames in a Queue. The output of ReaderRead will be a filename (key) and the contents of that file (value).
Args
container An optional string. Defaults to "". If non-empty, this reader is placed in the given container. Otherwise, a default container is used.
shared_name An optional string. Defaults to "". If non-empty, this reader is named in the given bucket with this shared_name. Otherwise, the node name is used instead.
name A name for the operation (optional).
Returns A Tensor of type resource. | tensorflow.raw_ops.wholefilereaderv2 |
tf.raw_ops.WindowDataset Combines (nests of) input elements into a dataset of (nests of) windows. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.WindowDataset
tf.raw_ops.WindowDataset(
input_dataset, size, shift, stride, drop_remainder, output_types, output_shapes,
name=None
)
A "window" is a finite dataset of flat elements of size size (or possibly fewer if there are not enough input elements to fill the window and drop_remainder evaluates to false). The shift argument determines the number of input elements by which the window moves on each iteration. The first element in the kth window will be element 1 + (k-1) * shift
of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset. If the stride parameter is greater than 1, then each window will skip (stride - 1) input elements between each element that appears in the window. Output windows will still contain size elements regardless of the value of stride. The stride argument determines the stride of the input elements, and the shift argument determines the shift of the window. For example, letting {...} represent a Dataset:
tf.data.Dataset.range(7).window(2) produces { {0, 1}, {2, 3}, {4, 5}, {6} }
tf.data.Dataset.range(7).window(3, 2, 1, True) produces { {0, 1, 2}, {2, 3, 4}, {4, 5, 6} }
tf.data.Dataset.range(7).window(3, 1, 2, True) produces { {0, 2, 4}, {1, 3, 5}, {2, 4, 6} }
Note that when the window transformation is applied to a dataset of nested elements, it produces a dataset of nested windows. For example:
tf.data.Dataset.from_tensor_slices((range(4), range(4))).window(2) produces {({0, 1}, {0, 1}), ({2, 3}, {2, 3})}
tf.data.Dataset.from_tensor_slices({"a": range(4)}).window(2) produces { {"a": {0, 1} }, {"a": {2, 3} } }
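A minimal runnable sketch using the public tf.data API that wraps this op (the range and window parameters are arbitrary):
import tensorflow as tf
ds = tf.data.Dataset.range(7).window(3, shift=2, drop_remainder=True)
for window in ds:  # each element is itself a small dataset
    print(list(window.as_numpy_iterator()))
# [0, 1, 2]
# [2, 3, 4]
# [4, 5, 6]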
Args
input_dataset A Tensor of type variant.
size A Tensor of type int64. An integer scalar, representing the number of elements of the input dataset to combine into a window. Must be positive.
shift A Tensor of type int64. An integer scalar, representing the number of input elements by which the window moves in each iteration. Defaults to size. Must be positive.
stride A Tensor of type int64. An integer scalar, representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element".
drop_remainder A Tensor of type bool. A Boolean scalar, representing whether the last window should be dropped if its size is smaller than window_size.
output_types A list of tf.DTypes that has length >= 1.
output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1.
name A name for the operation (optional).
Returns A Tensor of type variant. | tensorflow.raw_ops.windowdataset |
tf.raw_ops.WorkerHeartbeat Worker heartbeat op. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.WorkerHeartbeat
tf.raw_ops.WorkerHeartbeat(
request, name=None
)
Heartbeats may be sent periodically to indicate the coordinator is still active, to retrieve the current worker status and to expedite shutdown when necessary.
Args
request A Tensor of type string. A string tensor containing a serialized WorkerHeartbeatRequest
name A name for the operation (optional).
Returns A Tensor of type string. | tensorflow.raw_ops.workerheartbeat |
tf.raw_ops.WrapDatasetVariant View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.WrapDatasetVariant
tf.raw_ops.WrapDatasetVariant(
input_handle, name=None
)
Args
input_handle A Tensor of type variant.
name A name for the operation (optional).
Returns A Tensor of type variant. | tensorflow.raw_ops.wrapdatasetvariant |
tf.raw_ops.WriteAudioSummary Writes an audio summary. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.WriteAudioSummary
tf.raw_ops.WriteAudioSummary(
writer, step, tag, tensor, sample_rate, max_outputs=3, name=None
)
Writes encoded audio summary tensor at step with tag using summary writer. sample_rate is the audio sample rate in Hz.
Args
writer A Tensor of type resource.
step A Tensor of type int64.
tag A Tensor of type string.
tensor A Tensor of type float32.
sample_rate A Tensor of type float32.
max_outputs An optional int that is >= 1. Defaults to 3.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.writeaudiosummary |
tf.raw_ops.WriteFile Writes contents to the file at input filename. Creates file and recursively View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.WriteFile
tf.raw_ops.WriteFile(
filename, contents, name=None
)
creates the directory if it does not exist.
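A minimal sketch (the file path is a hypothetical example, not taken from the docs):
import tensorflow as tf
tf.raw_ops.WriteFile(
    filename=tf.constant("/tmp/write_file_example.txt"),  # hypothetical path
    contents=tf.constant("hello, world"))
print(tf.io.read_file("/tmp/write_file_example.txt").numpy())  # b'hello, world'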
Args
filename A Tensor of type string. scalar. The name of the file to which we write the contents.
contents A Tensor of type string. scalar. The content to be written to the output file.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.writefile |
tf.raw_ops.WriteGraphSummary Writes a graph summary. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.WriteGraphSummary
tf.raw_ops.WriteGraphSummary(
writer, step, tensor, name=None
)
Writes TensorFlow graph tensor at step using summary writer.
Args
writer A Tensor of type resource.
step A Tensor of type int64.
tensor A Tensor of type string.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.writegraphsummary |
tf.raw_ops.WriteHistogramSummary Writes a histogram summary. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.WriteHistogramSummary
tf.raw_ops.WriteHistogramSummary(
writer, step, tag, values, name=None
)
Writes histogram values at step with tag using summary writer.
Args
writer A Tensor of type resource.
step A Tensor of type int64.
tag A Tensor of type string.
values A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.writehistogramsummary |
tf.raw_ops.WriteImageSummary Writes an image summary. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.WriteImageSummary
tf.raw_ops.WriteImageSummary(
writer, step, tag, tensor, bad_color, max_images=3, name=None
)
Writes image tensor at step with tag using summary writer. tensor is an image with shape [height, width, channels].
Args
writer A Tensor of type resource.
step A Tensor of type int64.
tag A Tensor of type string.
tensor A Tensor. Must be one of the following types: uint8, float32, half.
bad_color A Tensor of type uint8.
max_images An optional int that is >= 1. Defaults to 3.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.writeimagesummary |
tf.raw_ops.WriteRawProtoSummary Writes a serialized proto summary. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.WriteRawProtoSummary
tf.raw_ops.WriteRawProtoSummary(
writer, step, tensor, name=None
)
Writes tensor, a serialized proto at step using summary writer.
Args
writer A Tensor of type resource.
step A Tensor of type int64.
tensor A Tensor of type string.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.writerawprotosummary |
tf.raw_ops.WriteScalarSummary Writes a scalar summary. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.WriteScalarSummary
tf.raw_ops.WriteScalarSummary(
writer, step, tag, value, name=None
)
Writes scalar value at step with tag using summary writer.
Args
writer A Tensor of type resource.
step A Tensor of type int64.
tag A Tensor of type string.
value A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.writescalarsummary |
tf.raw_ops.WriteSummary Writes a tensor summary. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.WriteSummary
tf.raw_ops.WriteSummary(
writer, step, tensor, tag, summary_metadata, name=None
)
Writes tensor at step with tag using summary writer.
Args
writer A Tensor of type resource.
step A Tensor of type int64.
tensor A Tensor.
tag A Tensor of type string.
summary_metadata A Tensor of type string.
name A name for the operation (optional).
Returns The created Operation. | tensorflow.raw_ops.writesummary |
tf.raw_ops.Xdivy Returns 0 if x == 0, and x / y otherwise, elementwise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.Xdivy
tf.raw_ops.Xdivy(
x, y, name=None
)
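For example (an illustrative snippet; tf.raw_ops functions take keyword arguments only):
import tensorflow as tf

x = tf.constant([0.0, 4.0])
y = tf.constant([0.0, 2.0])
# The x == 0 entry yields 0 rather than the NaN that 0 / 0 would produce.
print(tf.raw_ops.Xdivy(x=x, y=y))  # [0., 2.]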
Args
x A Tensor. Must be one of the following types: half, float32, float64, complex64, complex128.
y A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | tensorflow.raw_ops.xdivy |
tf.raw_ops.Xlog1py Returns 0 if x == 0, and x * log1p(y) otherwise, elementwise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.Xlog1py
tf.raw_ops.Xlog1py(
x, y, name=None
)
Args
x A Tensor. Must be one of the following types: half, float32, float64, complex64, complex128.
y A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | tensorflow.raw_ops.xlog1py |
tf.raw_ops.Xlogy Returns 0 if x == 0, and x * log(y) otherwise, elementwise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.Xlogy
tf.raw_ops.Xlogy(
x, y, name=None
)
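For example (illustrative values only):
import tensorflow as tf

x = tf.constant([0.0, 3.0])
y = tf.constant([0.0, 10.0])
# No NaN from log(0) in the first entry, because x == 0 there.
print(tf.raw_ops.Xlogy(x=x, y=y))  # [0., 3 * log(10)] ~= [0., 6.908]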
Args
x A Tensor. Must be one of the following types: half, float32, float64, complex64, complex128.
y A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | tensorflow.raw_ops.xlogy |
tf.raw_ops.ZerosLike Returns a tensor of zeros with the same shape and type as x. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ZerosLike
tf.raw_ops.ZerosLike(
x, name=None
)
Args
x A Tensor. a tensor of type T.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | tensorflow.raw_ops.zeroslike |
tf.raw_ops.Zeta Compute the Hurwitz zeta function \(\zeta(x, q)\). View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.Zeta
tf.raw_ops.Zeta(
x, q, name=None
)
The Hurwitz zeta function is defined as: \(\zeta(x, q) = \sum_{n=0}^{\infty} (q + n)^{-x}\)
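For instance, zeta(2, 1) is the ordinary Riemann zeta value pi^2 / 6 (illustrative snippet):
import numpy as np
import tensorflow as tf

z = tf.raw_ops.Zeta(x=tf.constant(2.0), q=tf.constant(1.0))
print(z.numpy(), np.pi ** 2 / 6)  # both ~1.6449341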
Args
x A Tensor. Must be one of the following types: float32, float64.
q A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | tensorflow.raw_ops.zeta |
tf.raw_ops.ZipDataset Creates a dataset that zips together input_datasets. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.raw_ops.ZipDataset
tf.raw_ops.ZipDataset(
input_datasets, output_types, output_shapes, name=None
)
The elements of the resulting dataset are created by zipping corresponding elements from each of the input datasets. The size of the resulting dataset will match the size of the smallest input dataset, and no error will be raised if input datasets have different sizes.
Args
input_datasets A list of at least 1 Tensor objects with type variant. List of N variant Tensors representing datasets to be zipped together.
output_types A list of tf.DTypes that has length >= 1.
output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1.
name A name for the operation (optional).
Returns A Tensor of type variant. | tensorflow.raw_ops.zipdataset |
tf.realdiv Returns x / y element-wise for real types. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.realdiv
tf.realdiv(
x, y, name=None
)
If x and y are reals, this will return the floating-point division.
Note: Div supports broadcasting. More about broadcasting here
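For example:
import tensorflow as tf

print(tf.realdiv(tf.constant([4.0, 3.0]), tf.constant([2.0, 2.0])))  # [2., 1.5]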
Args
x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128.
y A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | tensorflow.realdiv |
tf.recompute_grad View source on GitHub An eager-compatible version of recompute_grad. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.recompute_grad
tf.recompute_grad(
f
)
For f(*args, **kwargs), this supports gradients with respect to args or kwargs, but kwargs are currently only supported in eager-mode. Note that for keras layer and model objects, this is handled automatically. Warning: If f was originally a tf.keras Model or Layer object, g will not be able to access the member variables of that object, because g returns through the wrapper function inner. When recomputing gradients through objects that inherit from keras, we suggest keeping a reference to the underlying object around for the purpose of accessing these variables.
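A minimal usage sketch (the function and values are made up for illustration): wrapping f trades compute for memory, since the activations of f are recomputed on the backward pass instead of being kept.
import tensorflow as tf

def f(x):
  return tf.reduce_sum(tf.square(x))

g = tf.recompute_grad(f)

x = tf.constant([1.0, 2.0, 3.0])
with tf.GradientTape() as tape:
  tape.watch(x)
  y = g(x)
print(tape.gradient(y, x))  # [2., 4., 6.], with f re-run during backprop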
Args
f function f(*x) that returns a Tensor or sequence of Tensor outputs.
Returns A function g that wraps f, but which recomputes f on the backwards pass of a gradient call. | tensorflow.recompute_grad |
tf.RegisterGradient View source on GitHub A decorator for registering the gradient function for an op type. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.RegisterGradient
tf.RegisterGradient(
op_type
)
This decorator is only used when defining a new op type. For an op with m inputs and n outputs, the gradient function is a function that takes the original Operation and n Tensor objects (representing the gradients with respect to each output of the op), and returns m Tensor objects (representing the partial gradients with respect to each input of the op). For example, assuming that operations of type "Sub" take two inputs x and y, and return a single output x - y, the following gradient function would be registered: @tf.RegisterGradient("Sub")
def _sub_grad(unused_op, grad):
return grad, tf.negative(grad)
The decorator argument op_type is the string type of an operation. This corresponds to the OpDef.name field for the proto that defines the operation.
Args
op_type The string type of an operation. This corresponds to the OpDef.name field for the proto that defines the operation.
Raises
TypeError If op_type is not string. Methods __call__ View source
__call__(
f
)
Registers the function f as gradient function for op_type. | tensorflow.registergradient |
tf.register_tensor_conversion_function View source on GitHub Registers a function for converting objects of base_type to Tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.register_tensor_conversion_function
tf.register_tensor_conversion_function(
base_type, conversion_func, priority=100
)
The conversion function must have the following signature: def conversion_func(value, dtype=None, name=None, as_ref=False):
# ...
It must return a Tensor with the given dtype if specified. If the conversion function creates a new Tensor, it should use the given name if specified. All exceptions will be propagated to the caller. The conversion function may return NotImplemented for some inputs. In this case, the conversion process will continue to try subsequent conversion functions. If as_ref is true, the function must return a Tensor reference, such as a Variable.
Note: The conversion functions will execute in order of priority, followed by order of registration. To ensure that a conversion function F runs before another conversion function G, ensure that F is registered with a smaller priority than G.
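A small sketch of registering a conversion function for a made-up wrapper class (the Wrapper name and its behavior are illustrative, not part of the API):
import tensorflow as tf

class Wrapper:
  def __init__(self, value):
    self.value = value

def wrapper_to_tensor(value, dtype=None, name=None, as_ref=False):
  # as_ref is ignored here; a plain Tensor is returned.
  return tf.convert_to_tensor(value.value, dtype=dtype, name=name)

tf.register_tensor_conversion_function(Wrapper, wrapper_to_tensor)
print(tf.add(Wrapper([1.0, 2.0]), 3.0))  # [4., 5.]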
Args
base_type The base type or tuple of base types for all objects that conversion_func accepts.
conversion_func A function that converts instances of base_type to Tensor.
priority Optional integer that indicates the priority for applying this conversion function. Conversion functions with smaller priority values run earlier than conversion functions with larger priority values. Defaults to 100.
Raises
TypeError If the arguments do not have the appropriate type. | tensorflow.register_tensor_conversion_function |
tf.repeat View source on GitHub Repeat elements of input. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.repeat
tf.repeat(
input, repeats, axis=None, name=None
)
See also tf.concat, tf.stack, tf.tile.
Args
input An N-dimensional Tensor.
repeats An 1-D int Tensor. The number of repetitions for each element. repeats is broadcasted to fit the shape of the given axis. len(repeats) must equal input.shape[axis] if axis is not None.
axis An int. The axis along which to repeat values. By default (axis=None), use the flattened input array, and return a flat output array.
name A name for the operation.
Returns A Tensor which has the same shape as input, except along the given axis. If axis is None then the output array is flattened to match the flattened input array.
Example usage:
repeat(['a', 'b', 'c'], repeats=[3, 0, 2], axis=0)
<tf.Tensor: shape=(5,), dtype=string,
numpy=array([b'a', b'a', b'a', b'c', b'c'], dtype=object)>
repeat([[1, 2], [3, 4]], repeats=[2, 3], axis=0)
<tf.Tensor: shape=(5, 2), dtype=int32, numpy=
array([[1, 2],
[1, 2],
[3, 4],
[3, 4],
[3, 4]], dtype=int32)>
repeat([[1, 2], [3, 4]], repeats=[2, 3], axis=1)
<tf.Tensor: shape=(2, 5), dtype=int32, numpy=
array([[1, 1, 2, 2, 2],
[3, 3, 4, 4, 4]], dtype=int32)>
repeat(3, repeats=4)
<tf.Tensor: shape=(4,), dtype=int32, numpy=array([3, 3, 3, 3], dtype=int32)>
repeat([[1,2], [3,4]], repeats=2)
<tf.Tensor: shape=(8,), dtype=int32,
numpy=array([1, 1, 2, 2, 3, 3, 4, 4], dtype=int32)> | tensorflow.repeat |
tf.required_space_to_batch_paddings View source on GitHub Calculate padding required to make block_shape divide input_shape. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.required_space_to_batch_paddings
tf.required_space_to_batch_paddings(
input_shape, block_shape, base_paddings=None, name=None
)
This function can be used to calculate a suitable paddings argument for use with space_to_batch_nd and batch_to_space_nd.
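For example, a 5x7 space padded for 2x3 blocks (illustrative values):
import tensorflow as tf

paddings, crops = tf.required_space_to_batch_paddings(
    input_shape=tf.constant([5, 7], dtype=tf.int32),
    block_shape=tf.constant([2, 3], dtype=tf.int32))
print(paddings.numpy())  # [[0 1] [0 2]] -> padded sizes 6 and 9 divide evenly
print(crops.numpy())     # [[0 1] [0 2]]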
Args
input_shape int32 Tensor of shape [N].
block_shape int32 Tensor of shape [N].
base_paddings Optional int32 Tensor of shape [N, 2]. Specifies the minimum amount of padding to use. All elements must be >= 0. If not specified, defaults to 0.
name string. Optional name prefix.
Returns (paddings, crops), where: paddings and crops are int32 Tensors of rank 2 and shape [N, 2]
satisfying:
paddings[i, 0] = base_paddings[i, 0]
0 <= paddings[i, 1] - base_paddings[i, 1] <= block_shape[i] - 1
(input_shape[i] + paddings[i, 0] + paddings[i, 1]) % block_shape[i] == 0
crops[i, 0] = 0
crops[i, 1] = paddings[i, 1] - base_paddings[i, 1]
Raises: ValueError if called with incompatible shapes. | tensorflow.required_space_to_batch_paddings |
tf.reshape View source on GitHub Reshapes a tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.manip.reshape, tf.compat.v1.reshape
tf.reshape(
tensor, shape, name=None
)
Given tensor, this operation returns a new tf.Tensor that has the same values as tensor in the same order, except with a new shape given by shape.
t1 = [[1, 2, 3],
[4, 5, 6]]
print(tf.shape(t1).numpy())
[2 3]
t2 = tf.reshape(t1, [6])
t2
<tf.Tensor: shape=(6,), dtype=int32,
numpy=array([1, 2, 3, 4, 5, 6], dtype=int32)>
tf.reshape(t2, [3, 2])
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[1, 2],
[3, 4],
[5, 6]], dtype=int32)>
The tf.reshape does not change the order of or the total number of elements in the tensor, and so it can reuse the underlying data buffer. This makes it a fast operation independent of how big of a tensor it is operating on.
tf.reshape([1, 2, 3], [2, 2])
Traceback (most recent call last):
InvalidArgumentError: Input to reshape is a tensor with 3 values, but the
requested shape has 4
To instead reorder the data to rearrange the dimensions of a tensor, see tf.transpose.
t = [[1, 2, 3],
[4, 5, 6]]
tf.reshape(t, [3, 2]).numpy()
array([[1, 2],
[3, 4],
[5, 6]], dtype=int32)
tf.transpose(t, perm=[1, 0]).numpy()
array([[1, 4],
[2, 5],
[3, 6]], dtype=int32)
If one component of shape is the special value -1, the size of that dimension is computed so that the total size remains constant. In particular, a shape of [-1] flattens into 1-D. At most one component of shape can be -1.
t = [[1, 2, 3],
[4, 5, 6]]
tf.reshape(t, [-1])
<tf.Tensor: shape=(6,), dtype=int32,
numpy=array([1, 2, 3, 4, 5, 6], dtype=int32)>
tf.reshape(t, [3, -1])
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[1, 2],
[3, 4],
[5, 6]], dtype=int32)>
tf.reshape(t, [-1, 2])
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[1, 2],
[3, 4],
[5, 6]], dtype=int32)>
tf.reshape(t, []) reshapes a tensor t with one element to a scalar.
tf.reshape([7], []).numpy()
7
More examples:
t = [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(tf.shape(t).numpy())
[9]
tf.reshape(t, [3, 3])
<tf.Tensor: shape=(3, 3), dtype=int32, numpy=
array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]], dtype=int32)>
t = [[[1, 1], [2, 2]],
[[3, 3], [4, 4]]]
print(tf.shape(t).numpy())
[2 2 2]
tf.reshape(t, [2, 4])
<tf.Tensor: shape=(2, 4), dtype=int32, numpy=
array([[1, 1, 2, 2],
[3, 3, 4, 4]], dtype=int32)>
t = [[[1, 1, 1],
[2, 2, 2]],
[[3, 3, 3],
[4, 4, 4]],
[[5, 5, 5],
[6, 6, 6]]]
print(tf.shape(t).numpy())
[3 2 3]
# Pass '[-1]' to flatten 't'.
tf.reshape(t, [-1])
<tf.Tensor: shape=(18,), dtype=int32,
numpy=array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6],
dtype=int32)>
# -- Using -1 to infer the shape --
# Here -1 is inferred to be 9:
tf.reshape(t, [2, -1])
<tf.Tensor: shape=(2, 9), dtype=int32, numpy=
array([[1, 1, 1, 2, 2, 2, 3, 3, 3],
[4, 4, 4, 5, 5, 5, 6, 6, 6]], dtype=int32)>
# -1 is inferred to be 2:
tf.reshape(t, [-1, 9])
<tf.Tensor: shape=(2, 9), dtype=int32, numpy=
array([[1, 1, 1, 2, 2, 2, 3, 3, 3],
[4, 4, 4, 5, 5, 5, 6, 6, 6]], dtype=int32)>
# -1 is inferred to be 3:
tf.reshape(t, [ 2, -1, 3])
<tf.Tensor: shape=(2, 3, 3), dtype=int32, numpy=
array([[[1, 1, 1],
[2, 2, 2],
[3, 3, 3]],
[[4, 4, 4],
[5, 5, 5],
[6, 6, 6]]], dtype=int32)>
Args
tensor A Tensor.
shape A Tensor. Must be one of the following types: int32, int64. Defines the shape of the output tensor.
name Optional string. A name for the operation.
Returns A Tensor. Has the same type as tensor. | tensorflow.reshape |
tf.reverse Reverses specific dimensions of a tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.manip.reverse, tf.compat.v1.reverse, tf.compat.v1.reverse_v2
tf.reverse(
tensor, axis, name=None
)
Note: tf.reverse has now changed behavior in preparation for 1.0; tf.reverse_v2 is currently an alias that will be deprecated before TF 1.0. Given a tensor and an int32 tensor axis representing the set of dimensions of tensor to reverse, this operation reverses each dimension i for which there exists j such that axis[j] == i. tensor can have up to 8 dimensions, and axis may specify 0 or more of them. If an index is specified more than once, an InvalidArgument error is raised. For example: # tensor 't' is [[[[ 0, 1, 2, 3],
# [ 4, 5, 6, 7],
# [ 8, 9, 10, 11]],
# [[12, 13, 14, 15],
# [16, 17, 18, 19],
# [20, 21, 22, 23]]]]
# tensor 't' shape is [1, 2, 3, 4]
# 'dims' is [3] or 'dims' is [-1]
reverse(t, dims) ==> [[[[ 3, 2, 1, 0],
[ 7, 6, 5, 4],
[ 11, 10, 9, 8]],
[[15, 14, 13, 12],
[19, 18, 17, 16],
[23, 22, 21, 20]]]]
# 'dims' is '[1]' (or 'dims' is '[-3]')
reverse(t, dims) ==> [[[[12, 13, 14, 15],
[16, 17, 18, 19],
[20, 21, 22, 23]],
[[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]]]]
# 'dims' is '[2]' (or 'dims' is '[-2]')
reverse(t, dims) ==> [[[[8, 9, 10, 11],
[4, 5, 6, 7],
[0, 1, 2, 3]],
[[20, 21, 22, 23],
[16, 17, 18, 19],
[12, 13, 14, 15]]]]
Args
tensor A Tensor. Must be one of the following types: uint8, int8, uint16, int16, int32, int64, bool, bfloat16, half, float32, float64, complex64, complex128, string. Up to 8-D.
axis A Tensor. Must be one of the following types: int32, int64. 1-D. The indices of the dimensions to reverse. Must be in the range [-rank(tensor), rank(tensor)).
name A name for the operation (optional).
Returns A Tensor. Has the same type as tensor. | tensorflow.reverse |
tf.reverse_sequence View source on GitHub Reverses variable length slices.
tf.reverse_sequence(
input, seq_lengths, seq_axis=None, batch_axis=None, name=None
)
This op first slices input along the dimension batch_axis, and for each slice i, reverses the first seq_lengths[i] elements along the dimension seq_axis. The elements of seq_lengths must obey seq_lengths[i] <= input.dims[seq_axis], and seq_lengths must be a vector of length input.dims[batch_axis]. The output slice i along dimension batch_axis is then given by input slice i, with the first seq_lengths[i] slices along dimension seq_axis reversed. Example usage:
seq_lengths = [7, 2, 3, 5]
input = [[1, 2, 3, 4, 5, 0, 0, 0], [1, 2, 0, 0, 0, 0, 0, 0],
[1, 2, 3, 4, 0, 0, 0, 0], [1, 2, 3, 4, 5, 6, 7, 8]]
output = tf.reverse_sequence(input, seq_lengths, seq_axis=1, batch_axis=0)
output
<tf.Tensor: shape=(4, 8), dtype=int32, numpy=
array([[0, 0, 5, 4, 3, 2, 1, 0],
[2, 1, 0, 0, 0, 0, 0, 0],
[3, 2, 1, 4, 0, 0, 0, 0],
[5, 4, 3, 2, 1, 6, 7, 8]], dtype=int32)>
Args
input A Tensor. The input to reverse.
seq_lengths A Tensor. Must be one of the following types: int32, int64. 1-D with length input.dims(batch_axis) and max(seq_lengths) <= input.dims(seq_axis)
seq_axis An int. The dimension which is partially reversed.
batch_axis An optional int. Defaults to 0. The dimension along which reversal is performed.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | tensorflow.reverse_sequence |
tf.roll View source on GitHub Rolls the elements of a tensor along an axis. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.manip.roll, tf.compat.v1.roll
tf.roll(
input, shift, axis, name=None
)
The elements are shifted positively (towards larger indices) by the offset of shift along the dimension of axis. Negative shift values will shift elements in the opposite direction. Elements that roll passed the last position will wrap around to the first and vice versa. Multiple shifts along multiple axes may be specified. For example: # 't' is [0, 1, 2, 3, 4]
roll(t, shift=2, axis=0) ==> [3, 4, 0, 1, 2]
# shifting along multiple dimensions
# 't' is [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
roll(t, shift=[1, -2], axis=[0, 1]) ==> [[7, 8, 9, 5, 6], [2, 3, 4, 0, 1]]
# shifting along the same axis multiple times
# 't' is [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
roll(t, shift=[2, -3], axis=[1, 1]) ==> [[1, 2, 3, 4, 0], [6, 7, 8, 9, 5]]
Args
input A Tensor.
shift A Tensor. Must be one of the following types: int32, int64. Dimension must be 0-D or 1-D. shift[i] specifies the number of places by which elements are shifted positively (towards larger indices) along the dimension specified by axis[i]. Negative shifts will roll the elements in the opposite direction.
axis A Tensor. Must be one of the following types: int32, int64. Dimension must be 0-D or 1-D. axis[i] specifies the dimension that the shift shift[i] should occur. If the same axis is referenced more than once, the total shift for that axis will be the sum of all the shifts that belong to that axis.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | tensorflow.roll |
Module: tf.saved_model Public API for tf.saved_model namespace. Modules experimental module: Public API for tf.saved_model.experimental namespace. Classes class Asset: Represents a file asset to hermetically include in a SavedModel. class LoadOptions: Options for loading a SavedModel. class SaveOptions: Options for saving to SavedModel. Functions contains_saved_model(...): Checks whether the provided export directory could contain a SavedModel. load(...): Load a SavedModel from export_dir. save(...): Exports the Trackable object obj to SavedModel format.
Other Members
ASSETS_DIRECTORY 'assets'
ASSETS_KEY 'saved_model_assets'
CLASSIFY_INPUTS 'inputs'
CLASSIFY_METHOD_NAME 'tensorflow/serving/classify'
CLASSIFY_OUTPUT_CLASSES 'classes'
CLASSIFY_OUTPUT_SCORES 'scores'
DEBUG_DIRECTORY 'debug'
DEBUG_INFO_FILENAME_PB 'saved_model_debug_info.pb'
DEFAULT_SERVING_SIGNATURE_DEF_KEY 'serving_default'
GPU 'gpu'
PREDICT_INPUTS 'inputs'
PREDICT_METHOD_NAME 'tensorflow/serving/predict'
PREDICT_OUTPUTS 'outputs'
REGRESS_INPUTS 'inputs'
REGRESS_METHOD_NAME 'tensorflow/serving/regress'
REGRESS_OUTPUTS 'outputs'
SAVED_MODEL_FILENAME_PB 'saved_model.pb'
SAVED_MODEL_FILENAME_PBTXT 'saved_model.pbtxt'
SAVED_MODEL_SCHEMA_VERSION 1
SERVING 'serve'
TPU 'tpu'
TRAINING 'train'
VARIABLES_DIRECTORY 'variables'
VARIABLES_FILENAME 'variables' | tensorflow.saved_model |
tf.saved_model.Asset Represents a file asset to hermetically include in a SavedModel. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.saved_model.Asset
tf.saved_model.Asset(
path
)
A SavedModel can include arbitrary files, called assets, that are needed for its use. For example, a vocabulary file used to initialize a lookup table. When a trackable object is exported via tf.saved_model.save(), all the Assets reachable from it are copied into the SavedModel assets directory. Upon loading, the assets and the serialized functions that depend on them will refer to the correct filepaths inside the SavedModel directory. Example: filename = tf.saved_model.Asset("file.txt")
@tf.function(input_signature=[])
def func():
return tf.io.read_file(filename)
trackable_obj = tf.train.Checkpoint()
trackable_obj.func = func
trackable_obj.filename = filename
tf.saved_model.save(trackable_obj, "/tmp/saved_model")
# The created SavedModel is hermetic, it does not depend on
# the original file and can be moved to another path.
tf.io.gfile.remove("file.txt")
tf.io.gfile.rename("/tmp/saved_model", "/tmp/new_location")
reloaded_obj = tf.saved_model.load("/tmp/new_location")
print(reloaded_obj.func())
Attributes
asset_path A 0-D tf.string tensor with path to the asset. | tensorflow.saved_model.asset |
tf.saved_model.contains_saved_model View source on GitHub Checks whether the provided export directory could contain a SavedModel.
tf.saved_model.contains_saved_model(
export_dir
)
Note that the method does not load any data by itself. If the method returns false, the export directory definitely does not contain a SavedModel. If the method returns true, the export directory may contain a SavedModel but provides no guarantee that it can be loaded.
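For example (paths are hypothetical):
import tensorflow as tf

export_dir = "/tmp/contains_example"
tf.saved_model.save(tf.train.Checkpoint(v=tf.Variable(1.0)), export_dir)
print(tf.saved_model.contains_saved_model(export_dir))  # True
print(tf.saved_model.contains_saved_model("/tmp"))      # False, unless /tmp itself holds a SavedModel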
Args
export_dir Absolute string path to possible export location. For example, '/my/foo/model'.
Returns True if the export directory contains SavedModel files, False otherwise. | tensorflow.saved_model.contains_saved_model |
Module: tf.saved_model.experimental Public API for tf.saved_model.experimental namespace. Classes class VariablePolicy: Enum defining options for variable handling when saving. | tensorflow.saved_model.experimental |
tf.saved_model.experimental.VariablePolicy Enum defining options for variable handling when saving. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.saved_model.experimental.VariablePolicy NONE No policy applied: Distributed variables are saved as one variable, with no device attached. SAVE_VARIABLE_DEVICES When saving variables, also save their device assignment. This is useful if one wants to hardcode devices in saved models, but it also makes them non-portable if soft device placement is disabled (more details in tf.config.set_soft_device_placement). This is currently not fully supported by saved_model.load, and is mainly intended to be used when one will be reading the saved model at a lower API level. In the example below, the graph saved by the call to saved_model.save will have the variable devices correctly specified: exported = tf.train.Checkpoint()
with tf.device('/GPU:0'):
exported.x_gpu = tf.Variable(1.0)
with tf.device('/CPU:0'):
exported.x_cpu = tf.Variable(1.0)
tf.saved_model.save(exported, export_dir,
options = tf.saved_model.SaveOptions(
experimental_variable_policy=
tf.saved_model.experimental.VariablePolicy.SAVE_VARIABLE_DEVICES))
Distributed variables are still saved as one variable under this policy. EXPAND_DISTRIBUTED_VARIABLES Distributed variables will be saved with information about their components, allowing for their restoration on load. Also, the saved graph will contain references to those variables. This is useful when one wants to use the model for training in environments where the original distribution strategy is not available.
Class Variables
EXPAND_DISTRIBUTED_VARIABLES tf.saved_model.experimental.VariablePolicy
NONE tf.saved_model.experimental.VariablePolicy
SAVE_VARIABLE_DEVICES tf.saved_model.experimental.VariablePolicy | tensorflow.saved_model.experimental.variablepolicy |
tf.saved_model.load View source on GitHub Load a SavedModel from export_dir. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.saved_model.load_v2
tf.saved_model.load(
export_dir, tags=None, options=None
)
Signatures associated with the SavedModel are available as functions: imported = tf.saved_model.load(path)
f = imported.signatures["serving_default"]
print(f(x=tf.constant([[1.]])))
Objects exported with tf.saved_model.save additionally have trackable objects and functions assigned to attributes: exported = tf.train.Checkpoint(v=tf.Variable(3.))
exported.f = tf.function(
lambda x: exported.v * x,
input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)])
tf.saved_model.save(exported, path)
imported = tf.saved_model.load(path)
assert 3. == imported.v.numpy()
assert 6. == imported.f(x=tf.constant(2.)).numpy()
Loading Keras models Keras models are trackable, so they can be saved to SavedModel. The object returned by tf.saved_model.load is not a Keras object (i.e. doesn't have .fit, .predict, etc. methods). A few attributes and functions are still available: .variables, .trainable_variables and .__call__. model = tf.keras.Model(...)
tf.saved_model.save(model, path)
imported = tf.saved_model.load(path)
outputs = imported(inputs)
Use tf.keras.models.load_model to restore the Keras model. Importing SavedModels from TensorFlow 1.x SavedModels from tf.estimator.Estimator or 1.x SavedModel APIs have a flat graph instead of tf.function objects. These SavedModels will be loaded with the following attributes:
.signatures: A dictionary mapping signature names to functions.
.prune(feeds, fetches): A method which allows you to extract functions for new subgraphs. This is equivalent to importing the SavedModel and naming feeds and fetches in a Session from TensorFlow 1.x. imported = tf.saved_model.load(path_to_v1_saved_model)
pruned = imported.prune("x:0", "out:0")
pruned(tf.ones([]))
See tf.compat.v1.wrap_function for details.
.variables: A list of imported variables. .graph: The whole imported graph. .restore(save_path): A function that restores variables from a checkpoint saved from tf.compat.v1.Saver. Consuming SavedModels asynchronously When consuming SavedModels asynchronously (the producer is a separate process), the SavedModel directory will appear before all files have been written, and tf.saved_model.load will fail if pointed at an incomplete SavedModel. Rather than checking for the directory, check for "saved_model_dir/saved_model.pb". This file is written atomically as the last tf.saved_model.save file operation.
Args
export_dir The SavedModel directory to load from.
tags A tag or sequence of tags identifying the MetaGraph to load. Optional if the SavedModel contains a single MetaGraph, as for those exported from tf.saved_model.save.
options tf.saved_model.LoadOptions object that specifies options for loading.
Returns A trackable object with a signatures attribute mapping from signature keys to functions. If the SavedModel was exported by tf.saved_model.save, it also points to the trackable objects, functions, and debug info with which it was saved.
Raises
ValueError If tags don't match a MetaGraph in the SavedModel. | tensorflow.saved_model.load |
tf.saved_model.LoadOptions Options for loading a SavedModel.
tf.saved_model.LoadOptions(
experimental_io_device=None
)
This function may be used in the options argument in functions that load a SavedModel (tf.saved_model.load, tf.keras.models.load_model).
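A small usage sketch (the directory and device strings are placeholders):
import tensorflow as tf

load_options = tf.saved_model.LoadOptions(experimental_io_device='/job:localhost')
restored = tf.saved_model.load('/tmp/saved_model_dir', options=load_options)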
Args
experimental_io_device string. Applies in a distributed setting. Tensorflow device to use to access the filesystem. If None (default) then for each variable the filesystem is accessed from the CPU:0 device of the host where that variable is assigned. If specified, the filesystem is instead accessed from that device for all variables. This is for example useful if you want to load from a local directory, such as "/tmp" when running in a distributed setting. In that case pass a device for the host where the "/tmp" directory is accessible.
Class Variables
experimental_io_device | tensorflow.saved_model.loadoptions |
tf.saved_model.save View source on GitHub Exports the Trackable object obj to SavedModel format. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.saved_model.experimental.save, tf.compat.v1.saved_model.save
tf.saved_model.save(
obj, export_dir, signatures=None, options=None
)
Example usage: class Adder(tf.Module):
@tf.function(input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)])
def add(self, x):
return x + x + 1.
to_export = Adder()
tf.saved_model.save(to_export, '/tmp/adder')
The resulting SavedModel is then servable with an input named "x", its value having any shape and dtype float32. The optional signatures argument controls which methods in obj will be available to programs which consume SavedModels, for example, serving APIs. Python functions may be decorated with @tf.function(input_signature=...) and passed as signatures directly, or lazily with a call to get_concrete_function on the method decorated with @tf.function. If the signatures argument is omitted, obj will be searched for @tf.function-decorated methods. If exactly one @tf.function is found, that method will be used as the default signature for the SavedModel. This behavior is expected to change in the future, when a corresponding tf.saved_model.load symbol is added. At that point signatures will be completely optional, and any @tf.function attached to obj or its dependencies will be exported for use with load. When invoking a signature in an exported SavedModel, Tensor arguments are identified by name. These names will come from the Python function's argument names by default. They may be overridden by specifying a name=... argument in the corresponding tf.TensorSpec object. Explicit naming is required if multiple Tensors are passed through a single argument to the Python function. The outputs of functions used as signatures must either be flat lists, in which case outputs will be numbered, or a dictionary mapping string keys to Tensor, in which case the keys will be used to name outputs. Signatures are available in objects returned by tf.saved_model.load as a .signatures attribute. This is a reserved attribute: tf.saved_model.save on an object with a custom .signatures attribute will raise an exception. Since tf.keras.Model objects are also Trackable, this function can be used to export Keras models. For example, exporting with a signature specified: class Model(tf.keras.Model):
@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)])
def serve(self, serialized):
...
m = Model()
tf.saved_model.save(m, '/tmp/saved_model/')
Exporting from a function without a fixed signature: class Model(tf.keras.Model):
@tf.function
def call(self, x):
...
m = Model()
tf.saved_model.save(
m, '/tmp/saved_model/',
signatures=m.call.get_concrete_function(
tf.TensorSpec(shape=[None, 3], dtype=tf.float32, name="inp")))
tf.keras.Model instances constructed from inputs and outputs already have a signature and so do not require a @tf.function decorator or a signatures argument. If neither are specified, the model's forward pass is exported. x = input_layer.Input((4,), name="x")
y = core.Dense(5, name="out")(x)
model = training.Model(x, y)
tf.saved_model.save(model, '/tmp/saved_model/')
# The exported SavedModel takes "x" with shape [None, 4] and returns "out"
# with shape [None, 5]
Variables must be tracked by assigning them to an attribute of a tracked object or to an attribute of obj directly. TensorFlow objects (e.g. layers from tf.keras.layers, optimizers from tf.train) track their variables automatically. This is the same tracking scheme that tf.train.Checkpoint uses, and an exported Checkpoint object may be restored as a training checkpoint by pointing tf.train.Checkpoint.restore to the SavedModel's "variables/" subdirectory. Currently, variables are the only stateful objects supported by tf.saved_model.save, but others (e.g. tables) will be supported in the future. tf.function does not hard-code device annotations from outside the function body, instead using the calling context's device. This means for example that exporting a model that runs on a GPU and serving it on a CPU will generally work, with some exceptions. tf.device annotations inside the body of the function will be hard-coded in the exported model; this type of annotation is discouraged. Device-specific operations, e.g. with "cuDNN" in the name or with device-specific layouts, may cause issues. Currently a DistributionStrategy is another exception: active distribution strategies will cause device placements to be hard-coded in a function. Exporting a single-device computation and importing under a DistributionStrategy is not currently supported, but may be in the future. SavedModels exported with tf.saved_model.save strip default-valued attributes automatically, which removes one source of incompatibilities when the consumer of a SavedModel is running an older TensorFlow version than the producer. There are however other sources of incompatibilities which are not handled automatically, such as when the exported model contains operations which the consumer does not have definitions for. A single tf.function can generate many ConcreteFunctions. If a downstream tool wants to refer to all concrete functions generated by a single tf.function you can use the function_aliases argument to store a map from the alias name to all concrete function names. E.g. class MyModel:
@tf.function
def func():
...
@tf.function
def serve():
...
func()
model = MyModel()
signatures = {
'serving_default': model.serve.get_concrete_function(),
}
options = tf.saved_model.SaveOptions(function_aliases={
'my_func': func,
})
tf.saved_model.save(model, export_dir, signatures, options)
Args
obj A trackable object to export.
export_dir A directory in which to write the SavedModel.
signatures Optional, one of three types: a tf.function with an input signature specified, which will use the default serving signature key, the result of f.get_concrete_function on a @tf.function-decorated function f, in which case f will be used to generate a signature for the SavedModel under the default serving signature key, a dictionary, which maps signature keys to either tf.function instances with input signatures or concrete functions. Keys of such a dictionary may be arbitrary strings, but will typically be from the tf.saved_model.signature_constants module.
options Optional, tf.saved_model.SaveOptions object that specifies options for saving.
Raises
ValueError If obj is not trackable. Eager Compatibility Not well supported when graph building. From TensorFlow 1.x, tf.compat.v1.enable_eager_execution() should run first. Calling tf.saved_model.save in a loop when graph building from TensorFlow 1.x will add new save operations to the default graph each iteration. May not be called from within a function body. | tensorflow.saved_model.save |
tf.saved_model.SaveOptions Options for saving to SavedModel. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.saved_model.SaveOptions
tf.saved_model.SaveOptions(
namespace_whitelist=None, save_debug_info=False, function_aliases=None,
experimental_io_device=None, experimental_variable_policy=None
)
This function may be used in the options argument in functions that save a SavedModel (tf.saved_model.save, tf.keras.models.save_model).
Args
namespace_whitelist List of strings containing op namespaces to whitelist when saving a model. Saving an object that uses namespaced ops must explicitly add all namespaces to the whitelist. The namespaced ops must be registered into the framework when loading the SavedModel.
save_debug_info Boolean indicating whether debug information is saved. If True, then a debug/saved_model_debug_info.pb file will be written with the contents of a GraphDebugInfo binary protocol buffer containing stack trace information for all ops and functions that are saved.
function_aliases Python dict. Mapping from string to object returned by @tf.function. A single tf.function can generate many ConcreteFunctions. If a downstream tool wants to refer to all concrete functions generated by a single tf.function you can use the function_aliases argument to store a map from the alias name to all concrete function names. E.g. class MyModel:
@tf.function
def func():
...
@tf.function
def serve():
...
func()
model = MyModel()
signatures = {
'serving_default': model.serve.get_concrete_function(),
}
options = tf.saved_model.SaveOptions(function_aliases={
'my_func': func,
})
tf.saved_model.save(model, export_dir, signatures, options)
experimental_io_device string. Applies in a distributed setting. Tensorflow device to use to access the filesystem. If None (default) then for each variable the filesystem is accessed from the CPU:0 device of the host where that variable is assigned. If specified, the filesystem is instead accessed from that device for all variables. This is for example useful if you want to save to a local directory, such as "/tmp" when running in a distributed setting. In that case pass a device for the host where the "/tmp" directory is accessible.
experimental_variable_policy The policy to apply to variables when saving. This is either a saved_model.experimental.VariablePolicy enum instance or one of its value strings (case is not important). See that enum documentation for details. A value of None corresponds to the default policy.
Class Variables
experimental_io_device
experimental_variable_policy
function_aliases
namespace_whitelist
save_debug_info | tensorflow.saved_model.saveoptions |
tf.scan View source on GitHub scan on the list of tensors unpacked from elems on dimension 0. (deprecated argument values)
tf.scan(
fn, elems, initializer=None, parallel_iterations=10, back_prop=True,
swap_memory=False, infer_shape=True, reverse=False, name=None
)
Warning: SOME ARGUMENT VALUES ARE DEPRECATED: (back_prop=False). They will be removed in a future version. Instructions for updating: back_prop=False is deprecated. Consider using tf.stop_gradient instead. Instead of: results = tf.scan(fn, elems, back_prop=False) Use: results = tf.nest.map_structure(tf.stop_gradient, tf.scan(fn, elems)) The simplest version of scan repeatedly applies the callable fn to a sequence of elements from first to last. The elements are made of the tensors unpacked from elems on dimension 0. The callable fn takes two tensors as arguments. The first argument is the accumulated value computed from the preceding invocation of fn, and the second is the value at the current position of elems. If initializer is None, elems must contain at least one element, and its first element is used as the initializer. Suppose that elems is unpacked into values, a list of tensors. The shape of the result tensor is [len(values)] + fn(initializer, values[0]).shape. If reverse=True, it's fn(initializer, values[-1]).shape. This method also allows multi-arity elems and accumulator. If elems is a (possibly nested) list or tuple of tensors, then each of these tensors must have a matching first (unpack) dimension. The second argument of fn must match the structure of elems. If no initializer is provided, the output structure and dtypes of fn are assumed to be the same as its input; and in this case, the first argument of fn must match the structure of elems. If an initializer is provided, then the output of fn must have the same structure as initializer; and the first argument of fn must match this structure. For example, if elems is (t1, [t2, t3]) and initializer is [i1, i2] then an appropriate signature for fn in python2 is: fn = lambda (acc_p1, acc_p2), (t1, [t2, t3]): and fn must return a list, [acc_n1, acc_n2]. An alternative correct signature for fn, and the one that works in python3, is: fn = lambda a, t:, where a and t correspond to the input tuples.
Args
fn The callable to be performed. It accepts two arguments. The first will have the same structure as initializer if one is provided, otherwise it will have the same structure as elems. The second will have the same (possibly nested) structure as elems. Its output must have the same structure as initializer if one is provided, otherwise it must have the same structure as elems.
elems A tensor or (possibly nested) sequence of tensors, each of which will be unpacked along their first dimension. The nested sequence of the resulting slices will be the first argument to fn.
initializer (optional) A tensor or (possibly nested) sequence of tensors, initial value for the accumulator, and the expected output type of fn.
parallel_iterations (optional) The number of iterations allowed to run in parallel.
back_prop (optional) Deprecated. False disables support for back propagation. Prefer using tf.stop_gradient instead.
swap_memory (optional) True enables GPU-CPU memory swapping.
infer_shape (optional) False disables tests for consistent output shapes.
reverse (optional) True scans the tensor last to first (instead of first to last).
name (optional) Name prefix for the returned tensors.
Returns A tensor or (possibly nested) sequence of tensors. Each tensor packs the results of applying fn to tensors unpacked from elems along the first dimension, and the previous accumulator value(s), from first to last (or last to first, if reverse=True).
Raises
TypeError if fn is not callable or the structure of the output of fn and initializer do not match.
ValueError if the lengths of the output of fn and initializer do not match. Examples: elems = np.array([1, 2, 3, 4, 5, 6])
sum = scan(lambda a, x: a + x, elems)
# sum == [1, 3, 6, 10, 15, 21]
sum = scan(lambda a, x: a + x, elems, reverse=True)
# sum == [21, 20, 18, 15, 11, 6]
elems = np.array([1, 2, 3, 4, 5, 6])
initializer = np.array(0)
sum_one = scan(
lambda a, x: x[0] - x[1] + a, (elems + 1, elems), initializer)
# sum_one == [1, 2, 3, 4, 5, 6]
elems = np.array([1, 0, 0, 0, 0, 0])
initializer = (np.array(0), np.array(1))
fibonaccis = scan(lambda a, _: (a[1], a[0] + a[1]), elems, initializer)
# fibonaccis == ([1, 1, 2, 3, 5, 8], [1, 2, 3, 5, 8, 13]) | tensorflow.scan |
tf.scatter_nd Scatter updates into a new tensor according to indices. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.manip.scatter_nd, tf.compat.v1.scatter_nd
tf.scatter_nd(
indices, updates, shape, name=None
)
Creates a new tensor by applying sparse updates to individual values or slices within a tensor (initially zero for numeric, empty for string) of the given shape according to indices. This operator is the inverse of the tf.gather_nd operator which extracts values or slices from a given tensor. This operation is similar to tensor_scatter_add, except that the tensor is zero-initialized. Calling tf.scatter_nd(indices, values, shape) is identical to tensor_scatter_add(tf.zeros(shape, values.dtype), indices, values) If indices contains duplicates, then their updates are accumulated (summed). Warning: The order in which updates are applied is nondeterministic, so the output will be nondeterministic if indices contains duplicates -- because of some numerical approximation issues, numbers summed in different order may yield different results. indices is an integer tensor containing indices into a new tensor of shape shape. The last dimension of indices can be at most the rank of shape: indices.shape[-1] <= shape.rank
The last dimension of indices corresponds to indices into elements (if indices.shape[-1] = shape.rank) or slices (if indices.shape[-1] < shape.rank) along dimension indices.shape[-1] of shape. updates is a tensor with shape indices.shape[:-1] + shape[indices.shape[-1]:]
The simplest form of scatter is to insert individual elements in a tensor by index. For example, say we want to insert 4 scattered elements in a rank-1 tensor with 8 elements. In Python, this scatter operation would look like this: indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
shape = tf.constant([8])
scatter = tf.scatter_nd(indices, updates, shape)
print(scatter)
The resulting tensor would look like this: [0, 11, 0, 10, 9, 0, 0, 12]
We can also, insert entire slices of a higher rank tensor all at once. For example, if we wanted to insert two slices in the first dimension of a rank-3 tensor with two matrices of new values. In Python, this scatter operation would look like this: indices = tf.constant([[0], [2]])
updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],
[7, 7, 7, 7], [8, 8, 8, 8]],
[[5, 5, 5, 5], [6, 6, 6, 6],
[7, 7, 7, 7], [8, 8, 8, 8]]])
shape = tf.constant([4, 4, 4])
scatter = tf.scatter_nd(indices, updates, shape)
print(scatter)
The resulting tensor would look like this: [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],
[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]]
Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, the index is ignored.
Args
indices A Tensor. Must be one of the following types: int32, int64. Index tensor.
updates A Tensor. Updates to scatter into output.
shape A Tensor. Must have the same type as indices. 1-D. The shape of the resulting tensor.
name A name for the operation (optional).
Returns A Tensor. Has the same type as updates. | tensorflow.scatter_nd |
tf.searchsorted View source on GitHub Searches input tensor for values on the innermost dimension. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.searchsorted
tf.searchsorted(
sorted_sequence, values, side='left', out_type=tf.dtypes.int32,
name=None
)
A 2-D example: sorted_sequence = [[0, 3, 9, 9, 10],
[1, 2, 3, 4, 5]]
values = [[2, 4, 9],
[0, 2, 6]]
result = searchsorted(sorted_sequence, values, side="left")
result == [[1, 2, 2],
[0, 1, 5]]
result = searchsorted(sorted_sequence, values, side="right")
result == [[1, 2, 4],
[0, 2, 5]]
Args
sorted_sequence N-D Tensor containing a sorted sequence.
values N-D Tensor containing the search values.
side 'left' or 'right'; 'left' corresponds to lower_bound and 'right' to upper_bound.
out_type The output type (int32 or int64). Default is tf.int32.
name Optional name for the operation.
Returns An N-D Tensor the size of values containing the result of applying either lower_bound or upper_bound (depending on side) to each value. The result is not a global index to the entire Tensor, but the index in the last dimension.
Raises
ValueError If the last dimension of sorted_sequence >= 2^31-1 elements. If the total size of values exceeds 2^31 - 1 elements. If the first N-1 dimensions of the two tensors don't match. | tensorflow.searchsorted |
tf.sequence_mask View source on GitHub Returns a mask tensor representing the first N positions of each cell. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.sequence_mask
tf.sequence_mask(
lengths, maxlen=None, dtype=tf.dtypes.bool, name=None
)
If lengths has shape [d_1, d_2, ..., d_n] the resulting tensor mask has dtype dtype and shape [d_1, d_2, ..., d_n, maxlen], with mask[i_1, i_2, ..., i_n, j] = (j < lengths[i_1, i_2, ..., i_n])
Examples: tf.sequence_mask([1, 3, 2], 5) # [[True, False, False, False, False],
# [True, True, True, False, False],
# [True, True, False, False, False]]
tf.sequence_mask([[1, 3],[2,0]]) # [[[True, False, False],
# [True, True, True]],
# [[True, True, False],
# [False, False, False]]]
Args
lengths integer tensor, all its values <= maxlen.
maxlen scalar integer tensor, size of last dimension of returned tensor. Default is the maximum value in lengths.
dtype output type of the resulting tensor.
name name of the op.
Returns A mask tensor of shape lengths.shape + (maxlen,), cast to specified dtype.
Raises
ValueError if maxlen is not a scalar. | tensorflow.sequence_mask |
Module: tf.sets Tensorflow set operations. Functions difference(...): Compute set difference of elements in last dimension of a and b. intersection(...): Compute set intersection of elements in last dimension of a and b. size(...): Compute number of unique elements along last dimension of a. union(...): Compute set union of elements in last dimension of a and b. | tensorflow.sets |
tf.sets.difference View source on GitHub Compute set difference of elements in last dimension of a and b. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.sets.difference, tf.compat.v1.sets.set_difference
tf.sets.difference(
a, b, aminusb=True, validate_indices=True
)
All but the last dimension of a and b must match. Example: import tensorflow as tf
import collections
# Represent the following array of sets as a sparse tensor:
# a = np.array([[{1, 2}, {3}], [{4}, {5, 6}]])
a = collections.OrderedDict([
((0, 0, 0), 1),
((0, 0, 1), 2),
((0, 1, 0), 3),
((1, 0, 0), 4),
((1, 1, 0), 5),
((1, 1, 1), 6),
])
a = tf.sparse.SparseTensor(list(a.keys()), list(a.values()),
dense_shape=[2, 2, 2])
# np.array([[{1, 3}, {2}], [{4, 5}, {5, 6, 7, 8}]])
b = collections.OrderedDict([
((0, 0, 0), 1),
((0, 0, 1), 3),
((0, 1, 0), 2),
((1, 0, 0), 4),
((1, 0, 1), 5),
((1, 1, 0), 5),
((1, 1, 1), 6),
((1, 1, 2), 7),
((1, 1, 3), 8),
])
b = tf.sparse.SparseTensor(list(b.keys()), list(b.values()),
dense_shape=[2, 2, 4])
# `set_difference` is applied to each aligned pair of sets.
tf.sets.difference(a, b)
# The result will be equivalent to either of:
#
# np.array([[{2}, {3}], [{}, {}]])
#
# collections.OrderedDict([
# ((0, 0, 0), 2),
# ((0, 1, 0), 3),
# ])
Args
a Tensor or SparseTensor of the same type as b. If sparse, indices must be sorted in row-major order.
b Tensor or SparseTensor of the same type as a. If sparse, indices must be sorted in row-major order.
aminusb Whether to subtract b from a, vs vice versa.
validate_indices Whether to validate the order and range of sparse indices in a and b.
Returns A SparseTensor whose shape is the same rank as a and b, and all but the last dimension the same. Elements along the last dimension contain the differences.
Raises
TypeError If inputs are invalid types, or if a and b have different types.
ValueError If a is sparse and b is dense.
errors_impl.InvalidArgumentError If the shapes of a and b do not match in any dimension other than the last dimension. | tensorflow.sets.difference |
tf.sets.intersection View source on GitHub Compute set intersection of elements in last dimension of a and b. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.sets.intersection, tf.compat.v1.sets.set_intersection
tf.sets.intersection(
a, b, validate_indices=True
)
All but the last dimension of a and b must match. Example: import tensorflow as tf
import collections
# Represent the following array of sets as a sparse tensor:
# a = np.array([[{1, 2}, {3}], [{4}, {5, 6}]])
a = collections.OrderedDict([
((0, 0, 0), 1),
((0, 0, 1), 2),
((0, 1, 0), 3),
((1, 0, 0), 4),
((1, 1, 0), 5),
((1, 1, 1), 6),
])
a = tf.sparse.SparseTensor(list(a.keys()), list(a.values()),
dense_shape=[2,2,2])
# b = np.array([[{1}, {}], [{4}, {5, 6, 7, 8}]])
b = collections.OrderedDict([
((0, 0, 0), 1),
((1, 0, 0), 4),
((1, 1, 0), 5),
((1, 1, 1), 6),
((1, 1, 2), 7),
((1, 1, 3), 8),
])
b = tf.sparse.SparseTensor(list(b.keys()), list(b.values()),
dense_shape=[2, 2, 4])
# `tf.sets.intersection` is applied to each aligned pair of sets.
tf.sets.intersection(a, b)
# The result will be equivalent to either of:
#
# np.array([[{1}, {}], [{4}, {5, 6}]])
#
# collections.OrderedDict([
# ((0, 0, 0), 1),
# ((1, 0, 0), 4),
# ((1, 1, 0), 5),
# ((1, 1, 1), 6),
# ])
Args
a Tensor or SparseTensor of the same type as b. If sparse, indices must be sorted in row-major order.
b Tensor or SparseTensor of the same type as a. If sparse, indices must be sorted in row-major order.
validate_indices Whether to validate the order and range of sparse indices in a and b.
Returns A SparseTensor whose shape is the same rank as a and b, and all but the last dimension the same. Elements along the last dimension contain the intersections. | tensorflow.sets.intersection |
tf.sets.size View source on GitHub Compute number of unique elements along last dimension of a. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.sets.set_size, tf.compat.v1.sets.size
tf.sets.size(
a, validate_indices=True
)
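Example (following the same sparse-set representation used by the other tf.sets functions):
import collections
import tensorflow as tf

# a = [[{1, 2}, {3}]] represented as a sparse tensor.
a = collections.OrderedDict([
    ((0, 0, 0), 1),
    ((0, 0, 1), 2),
    ((0, 1, 0), 3),
])
a = tf.sparse.SparseTensor(list(a.keys()), list(a.values()), dense_shape=[1, 2, 2])
print(tf.sets.size(a))  # [[2, 1]]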
Args
a SparseTensor, with indices sorted in row-major order.
validate_indices Whether to validate the order and range of sparse indices in a.
Returns An int32 Tensor of set sizes. For an input a of rank n, this is a Tensor with rank n-1 and the same first n-1 dimensions as a. Each value is the number of unique elements in the corresponding [0...n-1] dimension of a.
Raises
TypeError If a is an invalid type. | tensorflow.sets.size
tf.sets.union View source on GitHub Compute set union of elements in last dimension of a and b. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.sets.set_union, tf.compat.v1.sets.union
tf.sets.union(
a, b, validate_indices=True
)
All but the last dimension of a and b must match. Example: import tensorflow as tf
import collections
# [[{1, 2}, {3}], [{4}, {5, 6}]]
a = collections.OrderedDict([
((0, 0, 0), 1),
((0, 0, 1), 2),
((0, 1, 0), 3),
((1, 0, 0), 4),
((1, 1, 0), 5),
((1, 1, 1), 6),
])
a = tf.sparse.SparseTensor(list(a.keys()), list(a.values()),
dense_shape=[2, 2, 2])
# [[{1, 3}, {2}], [{4, 5}, {5, 6, 7, 8}]]
b = collections.OrderedDict([
((0, 0, 0), 1),
((0, 0, 1), 3),
((0, 1, 0), 2),
((1, 0, 0), 4),
((1, 0, 1), 5),
((1, 1, 0), 5),
((1, 1, 1), 6),
((1, 1, 2), 7),
((1, 1, 3), 8),
])
b = tf.sparse.SparseTensor(list(b.keys()), list(b.values()),
dense_shape=[2, 2, 4])
# `set_union` is applied to each aligned pair of sets.
tf.sets.union(a, b)
# The result will be equivalent to either of:
#
# np.array([[{1, 2, 3}, {2, 3}], [{4, 5}, {5, 6, 7, 8}]])
#
# collections.OrderedDict([
# ((0, 0, 0), 1),
# ((0, 0, 1), 2),
# ((0, 0, 2), 3),
# ((0, 1, 0), 2),
# ((0, 1, 1), 3),
# ((1, 0, 0), 4),
# ((1, 0, 1), 5),
# ((1, 1, 0), 5),
# ((1, 1, 1), 6),
# ((1, 1, 2), 7),
# ((1, 1, 3), 8),
# ])
Args
a Tensor or SparseTensor of the same type as b. If sparse, indices must be sorted in row-major order.
b Tensor or SparseTensor of the same type as a. If sparse, indices must be sorted in row-major order.
validate_indices Whether to validate the order and range of sparse indices in a and b.
Returns A SparseTensor whose shape is the same rank as a and b, and all but the last dimension the same. Elements along the last dimension contain the unions. | tensorflow.sets.union |
tf.shape View source on GitHub Returns a tensor containing the shape of the input tensor.
tf.shape(
input, out_type=tf.dtypes.int32, name=None
)
See also tf.size, tf.rank. tf.shape returns a 1-D integer tensor representing the shape of input. For a scalar input, the tensor returned has a shape of (0,) and its value is the empty vector (i.e. []). For example:
tf.shape(1.)
<tf.Tensor: shape=(0,), dtype=int32, numpy=array([], dtype=int32)>
t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])
tf.shape(t)
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([2, 2, 3], dtype=int32)>
Note: When using symbolic tensors, such as when using the Keras API, tf.shape() will return the shape of the symbolic tensor.
a = tf.keras.layers.Input((None, 10))
tf.shape(a)
<... shape=(3,) dtype=int32...>
In these cases, using tf.Tensor.shape will return more informative results.
a.shape
TensorShape([None, None, 10])
(The first None represents the as yet unknown batch size.) tf.shape and Tensor.shape should be identical in eager mode. Within tf.function or within a compat.v1 context, not all dimensions may be known until execution time. Hence when defining custom layers and models for graph mode, prefer the dynamic tf.shape(x) over the static x.shape.
Args
input A Tensor or SparseTensor.
out_type (Optional) The specified output type of the operation (int32 or int64). Defaults to tf.int32.
name A name for the operation (optional).
Returns A Tensor of type out_type. | tensorflow.shape |
tf.shape_n View source on GitHub Returns shape of tensors. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.shape_n
tf.shape_n(
input, out_type=tf.dtypes.int32, name=None
)
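For example, a minimal sketch (the tensors here are illustrative, not from the source) that returns the shapes of several tensors in a single op:
x = tf.zeros([2, 3])
y = tf.zeros([4, 5, 6])
shapes = tf.shape_n([x, y])
# shapes[0].numpy() -> array([2, 3], dtype=int32)
# shapes[1].numpy() -> array([4, 5, 6], dtype=int32)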
Args
input A list of at least 1 Tensor object with the same type.
out_type (Optional) The specified output type of the operation (int32 or int64). Defaults to tf.int32.
name A name for the operation (optional).
Returns A list with the same length as input of Tensor objects with type out_type. | tensorflow.shape_n |
Module: tf.signal Signal processing operations. See the tf.signal guide. Functions dct(...): Computes the 1D [Discrete Cosine Transform (DCT)][dct] of input. fft(...): Fast Fourier transform. fft2d(...): 2D fast Fourier transform. fft3d(...): 3D fast Fourier transform. fftshift(...): Shift the zero-frequency component to the center of the spectrum. frame(...): Expands signal's axis dimension into frames of frame_length. hamming_window(...): Generate a Hamming window. hann_window(...): Generate a Hann window. idct(...): Computes the 1D [Inverse Discrete Cosine Transform (DCT)][idct] of input. ifft(...): Inverse fast Fourier transform. ifft2d(...): Inverse 2D fast Fourier transform. ifft3d(...): Inverse 3D fast Fourier transform. ifftshift(...): The inverse of fftshift. inverse_mdct(...): Computes the inverse modified DCT of mdcts. inverse_stft(...): Computes the inverse Short-time Fourier Transform of stfts. inverse_stft_window_fn(...): Generates a window function that can be used in inverse_stft. irfft(...): Inverse real-valued fast Fourier transform. irfft2d(...): Inverse 2D real-valued fast Fourier transform. irfft3d(...): Inverse 3D real-valued fast Fourier transform. kaiser_bessel_derived_window(...): Generate a [Kaiser Bessel derived window][kbd]. kaiser_window(...): Generate a [Kaiser window][kaiser]. linear_to_mel_weight_matrix(...): Returns a matrix to warp linear scale spectrograms to the mel scale. mdct(...): Computes the [Modified Discrete Cosine Transform][mdct] of signals. mfccs_from_log_mel_spectrograms(...): Computes MFCCs of log_mel_spectrograms. overlap_and_add(...): Reconstructs a signal from a framed representation. rfft(...): Real-valued fast Fourier transform. rfft2d(...): 2D real-valued fast Fourier transform. rfft3d(...): 3D real-valued fast Fourier transform. stft(...): Computes the Short-time Fourier Transform of signals. vorbis_window(...): Generate a [Vorbis power complementary window][vorbis]. | tensorflow.signal |
tf.signal.dct View source on GitHub Computes the 1D Discrete Cosine Transform (DCT) of input. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.signal.dct, tf.compat.v1.spectral.dct
tf.signal.dct(
input, type=2, n=None, axis=-1, norm=None, name=None
)
Types I, II, III and IV are supported. Type I is implemented using a length 2N padded tf.signal.rfft. Type II is implemented using a length 2N padded tf.signal.rfft, as described here: Type 2 DCT using 2N FFT padded (Makhoul). Type III is a fairly straightforward inverse of Type II (i.e. using a length 2N padded tf.signal.irfft). Type IV is calculated through 2N length DCT2 of padded signal and picking the odd indices.
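For instance, a minimal sketch (the input values are illustrative, not from the source) of a Type-II DCT with orthonormal normalization:
signals = tf.constant([[1., 2., 3., 4.]])  # shape [1, 4]
dct2 = tf.signal.dct(signals, type=2, norm='ortho')
# dct2 has shape [1, 4] and matches scipy.fftpack.dct(..., type=2, norm='ortho').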
Args
input A [..., samples] float32/float64 Tensor containing the signals to take the DCT of.
type The DCT type to perform. Must be 1, 2, 3 or 4.
n The length of the transform. If length is less than sequence length, only the first n elements of the sequence are considered for the DCT. If n is greater than the sequence length, zeros are padded and then the DCT is computed as usual.
axis For future expansion. The axis to compute the DCT along. Must be -1.
norm The normalization to apply. None for no normalization or 'ortho' for orthonormal normalization.
name An optional name for the operation.
Returns A [..., samples] float32/float64 Tensor containing the DCT of input.
Raises
ValueError If type is not 1, 2, 3 or 4; axis is not -1; n is neither None nor greater than 0; or norm is neither None nor 'ortho'.
ValueError If type is 1 and norm is ortho. Scipy Compatibility Equivalent to scipy.fftpack.dct for Type-I, Type-II, Type-III and Type-IV DCT. | tensorflow.signal.dct |
tf.signal.fft Fast Fourier transform. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.fft, tf.compat.v1.signal.fft, tf.compat.v1.spectral.fft
tf.signal.fft(
input, name=None
)
Computes the 1-dimensional discrete Fourier transform over the inner-most dimension of input.
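A minimal sketch, assuming a complex64 input built with tf.complex (the values are illustrative, not from the source):
x = tf.complex([1., 0., 0., 0.], [0., 0., 0., 0.])  # unit impulse
tf.signal.fft(x)
# -> a length-4 complex64 tensor; for an impulse input, every bin is 1+0j.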
Args
input A Tensor. Must be one of the following types: complex64, complex128. A complex tensor.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | tensorflow.signal.fft |
tf.signal.fft2d 2D fast Fourier transform. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.fft2d, tf.compat.v1.signal.fft2d, tf.compat.v1.spectral.fft2d
tf.signal.fft2d(
input, name=None
)
Computes the 2-dimensional discrete Fourier transform over the inner-most 2 dimensions of input.
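A minimal sketch, assuming a 2x2 complex64 input built with tf.complex (the values are illustrative, not from the source):
x = tf.complex([[1., 0.], [0., 0.]], [[0., 0.], [0., 0.]])  # 2-D unit impulse
tf.signal.fft2d(x)
# -> a 2x2 complex64 tensor; for an impulse at the origin, every bin is 1+0j.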
Args
input A Tensor. Must be one of the following types: complex64, complex128. A complex tensor.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | tensorflow.signal.fft2d |
tf.signal.fft3d 3D fast Fourier transform. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.fft3d, tf.compat.v1.signal.fft3d, tf.compat.v1.spectral.fft3d
tf.signal.fft3d(
input, name=None
)
Computes the 3-dimensional discrete Fourier transform over the inner-most 3 dimensions of input.
Args
input A Tensor. Must be one of the following types: complex64, complex128. A complex tensor.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | tensorflow.signal.fft3d |
tf.signal.fftshift View source on GitHub Shift the zero-frequency component to the center of the spectrum. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.signal.fftshift
tf.signal.fftshift(
x, axes=None, name=None
)
This function swaps half-spaces for all axes listed (defaults to all). Note that y[0] is the Nyquist component only if len(x) is even. For example: x = tf.signal.fftshift([ 0., 1., 2., 3., 4., -5., -4., -3., -2., -1.])
x.numpy() # array([-5., -4., -3., -2., -1., 0., 1., 2., 3., 4.])
Args
x Tensor, input tensor.
axes int or shape tuple, optional Axes over which to shift. Default is None, which shifts all axes.
name An optional name for the operation.
Returns A Tensor, The shifted tensor.
Numpy Compatibility Equivalent to numpy.fft.fftshift. https://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.fftshift.html | tensorflow.signal.fftshift |
tf.signal.frame View source on GitHub Expands signal's axis dimension into frames of frame_length. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.signal.frame
tf.signal.frame(
signal, frame_length, frame_step, pad_end=False, pad_value=0, axis=-1, name=None
)
Slides a window of size frame_length over signal's axis dimension with a stride of frame_step, replacing the axis dimension with [frames, frame_length] frames. If pad_end is True, window positions that are past the end of the axis dimension are padded with pad_value until the window moves fully past the end of the dimension. Otherwise, only window positions that fully overlap the axis dimension are produced. For example:
# A batch size 3 tensor of 9152 audio samples.
audio = tf.random.normal([3, 9152])
# Compute overlapping frames of length 512 with a step of 180 (frames overlap
# by 332 samples). By default, only 49 frames are generated since a frame
# with start position j*180 for j > 48 would overhang the end.
frames = tf.signal.frame(audio, 512, 180)
frames.shape.assert_is_compatible_with([3, 49, 512])
# When pad_end is enabled, the final two frames are kept (padded with zeros).
frames = tf.signal.frame(audio, 512, 180, pad_end=True)
frames.shape.assert_is_compatible_with([3, 51, 512])
If the dimension along axis is N, and pad_end=False, the number of frames can be computed by: num_frames = 1 + (N - frame_length) // frame_step
If pad_end=True, the number of frames can be computed by: num_frames = -(-N // frame_step) # ceiling division
Args
signal A [..., samples, ...] Tensor. The rank and dimensions may be unknown. Rank must be at least 1.
frame_length The frame length in samples. An integer or scalar Tensor.
frame_step The frame hop size in samples. An integer or scalar Tensor.
pad_end Whether to pad the end of signal with pad_value.
pad_value An optional scalar Tensor to use where the input signal does not exist when pad_end is True.
axis A scalar integer Tensor indicating the axis to frame. Defaults to the last axis. Supports negative values for indexing from the end.
name An optional name for the operation.
Returns A Tensor of frames with shape [..., num_frames, frame_length, ...].
Raises
ValueError If frame_length, frame_step, pad_value, or axis are not scalar. | tensorflow.signal.frame |
tf.signal.hamming_window View source on GitHub Generate a Hamming window. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.signal.hamming_window
tf.signal.hamming_window(
window_length, periodic=True, dtype=tf.dtypes.float32, name=None
)
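For example, a minimal sketch (the window length is illustrative, not from the source):
w = tf.signal.hamming_window(8, periodic=True)
# w is a float32 Tensor of shape [8]; pass periodic=False to obtain the
# symmetric variant typically used for filter design.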
Args
window_length A scalar Tensor indicating the window length to generate.
periodic A bool Tensor indicating whether to generate a periodic or symmetric window. Periodic windows are typically used for spectral analysis while symmetric windows are typically used for digital filter design.
dtype The data type to produce. Must be a floating point type.
name An optional name for the operation.
Returns A Tensor of shape [window_length] of type dtype.
Raises
ValueError If dtype is not a floating point type. | tensorflow.signal.hamming_window |
tf.signal.hann_window View source on GitHub Generate a Hann window. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.signal.hann_window
tf.signal.hann_window(
window_length, periodic=True, dtype=tf.dtypes.float32, name=None
)
Args
window_length A scalar Tensor indicating the window length to generate.
periodic A bool Tensor indicating whether to generate a periodic or symmetric window. Periodic windows are typically used for spectral analysis while symmetric windows are typically used for digital filter design.
dtype The data type to produce. Must be a floating point type.
name An optional name for the operation.
Returns A Tensor of shape [window_length] of type dtype.
Raises
ValueError If dtype is not a floating point type. | tensorflow.signal.hann_window |
tf.signal.idct View source on GitHub Computes the 1D Inverse Discrete Cosine Transform (DCT) of input. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.signal.idct, tf.compat.v1.spectral.idct
tf.signal.idct(
input, type=2, n=None, axis=-1, norm=None, name=None
)
Currently Types I, II, III, IV are supported. Type III is the inverse of Type II, and vice versa. Note that you must re-normalize by 1/(2n) to obtain an inverse if norm is not 'ortho'. That is: signal == idct(dct(signal)) * 0.5 / signal.shape[-1]. When norm='ortho', we have: signal == idct(dct(signal, norm='ortho'), norm='ortho').
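A minimal sketch of the round trips described above (the input is illustrative, not from the source):
signal = tf.constant([[1., 2., 3., 4.]])  # last dimension has length 4
# Without normalization, rescale by 0.5 / signal.shape[-1] to invert.
approx = tf.signal.idct(tf.signal.dct(signal)) * 0.5 / 4.
# With norm='ortho', the inverse is direct.
approx_ortho = tf.signal.idct(tf.signal.dct(signal, norm='ortho'), norm='ortho')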
Args
input A [..., samples] float32/float64 Tensor containing the signals to take the DCT of.
type The IDCT type to perform. Must be 1, 2, 3 or 4.
n For future expansion. The length of the transform. Must be None.
axis For future expansion. The axis to compute the DCT along. Must be -1.
norm The normalization to apply. None for no normalization or 'ortho' for orthonormal normalization.
name An optional name for the operation.
Returns A [..., samples] float32/float64 Tensor containing the IDCT of input.
Raises
ValueError If type is not 1, 2, 3 or 4, n is not None, axis is not -1, or norm is not None or 'ortho'. Scipy Compatibility Equivalent to scipy.fftpack.idct for Type-I, Type-II, Type-III and Type-IV DCT. | tensorflow.signal.idct
tf.signal.ifft Inverse fast Fourier transform. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.ifft, tf.compat.v1.signal.ifft, tf.compat.v1.spectral.ifft
tf.signal.ifft(
input, name=None
)
Computes the inverse 1-dimensional discrete Fourier transform over the inner-most dimension of input.
Args
input A Tensor. Must be one of the following types: complex64, complex128. A complex tensor.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | tensorflow.signal.ifft |
tf.signal.ifft2d Inverse 2D fast Fourier transform. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.ifft2d, tf.compat.v1.signal.ifft2d, tf.compat.v1.spectral.ifft2d
tf.signal.ifft2d(
input, name=None
)
Computes the inverse 2-dimensional discrete Fourier transform over the inner-most 2 dimensions of input.
Args
input A Tensor. Must be one of the following types: complex64, complex128. A complex tensor.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | tensorflow.signal.ifft2d |
tf.signal.ifft3d Inverse 3D fast Fourier transform. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.ifft3d, tf.compat.v1.signal.ifft3d, tf.compat.v1.spectral.ifft3d
tf.signal.ifft3d(
input, name=None
)
Computes the inverse 3-dimensional discrete Fourier transform over the inner-most 3 dimensions of input.
Args
input A Tensor. Must be one of the following types: complex64, complex128. A complex tensor.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | tensorflow.signal.ifft3d |
tf.signal.ifftshift View source on GitHub The inverse of fftshift. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.signal.ifftshift
tf.signal.ifftshift(
x, axes=None, name=None
)
Although identical for even-length x, the functions differ by one sample for odd-length x. For example: x = tf.signal.ifftshift([[ 0., 1., 2.],[ 3., 4., -4.],[-3., -2., -1.]])
x.numpy() # array([[ 4., -4., 3.],[-2., -1., -3.],[ 1., 2., 0.]])
Args
x Tensor, input tensor.
axes int or shape tuple Axes over which to calculate. Defaults to None, which shifts all axes.
name An optional name for the operation.
Returns A Tensor, The shifted tensor.
Numpy Compatibility Equivalent to numpy.fft.ifftshift. https://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.ifftshift.html | tensorflow.signal.ifftshift |
tf.signal.inverse_mdct Computes the inverse modified DCT of mdcts. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.signal.inverse_mdct
tf.signal.inverse_mdct(
mdcts, window_fn=tf.signal.vorbis_window, norm=None, name=None
)
To reconstruct an original waveform, the same window function should be used with mdct and inverse_mdct. Example usage:
import numpy as np
@tf.function
def compare_round_trip():
samples = 1000
frame_length = 400
halflen = frame_length // 2
waveform = tf.random.normal(dtype=tf.float32, shape=[samples])
waveform_pad = tf.pad(waveform, [[halflen, 0],])
mdct = tf.signal.mdct(waveform_pad, frame_length, pad_end=True,
window_fn=tf.signal.vorbis_window)
inverse_mdct = tf.signal.inverse_mdct(mdct,
window_fn=tf.signal.vorbis_window)
inverse_mdct = inverse_mdct[halflen: halflen + samples]
return waveform, inverse_mdct
waveform, inverse_mdct = compare_round_trip()
np.allclose(waveform.numpy(), inverse_mdct.numpy(), rtol=1e-3, atol=1e-4)
True
Implemented with TPU/GPU-compatible ops and supports gradients.
Args
mdcts A float32/float64 [..., frames, frame_length // 2] Tensor of MDCT bins representing a batch of frame_length // 2-point MDCTs.
window_fn A callable that takes a frame_length and a dtype keyword argument and returns a [frame_length] Tensor of samples in the provided datatype. If set to None, a rectangular window with a scale of 1/sqrt(2) is used. For perfect reconstruction of a signal from mdct followed by inverse_mdct, please use tf.signal.vorbis_window, tf.signal.kaiser_bessel_derived_window or None. If using another window function, make sure that w[n]^2 + w[n + frame_length // 2]^2 = 1 and w[n] = w[frame_length - n - 1] for n = 0,...,frame_length // 2 - 1 to achieve perfect reconstruction.
norm If "ortho", orthonormal inverse DCT4 is performed, if it is None, a regular dct4 followed by scaling of 1/frame_length is performed.
name An optional name for the operation.
Returns A [..., samples] Tensor of float32/float64 signals representing the inverse MDCT for each input MDCT in mdcts where samples is (frames - 1) * (frame_length // 2) + frame_length.
Raises
ValueError If mdcts is not at least rank 2. | tensorflow.signal.inverse_mdct |
tf.signal.inverse_stft View source on GitHub Computes the inverse Short-time Fourier Transform of stfts. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.signal.inverse_stft
tf.signal.inverse_stft(
stfts, frame_length, frame_step, fft_length=None,
window_fn=tf.signal.hann_window, name=None
)
To reconstruct an original waveform, a complementary window function should be used with inverse_stft. Such a window function can be constructed with tf.signal.inverse_stft_window_fn. Example: frame_length = 400
frame_step = 160
waveform = tf.random.normal(dtype=tf.float32, shape=[1000])
stft = tf.signal.stft(waveform, frame_length, frame_step)
inverse_stft = tf.signal.inverse_stft(
stft, frame_length, frame_step,
window_fn=tf.signal.inverse_stft_window_fn(frame_step))
If a custom window_fn is used with tf.signal.stft, it must be passed to tf.signal.inverse_stft_window_fn: frame_length = 400
frame_step = 160
window_fn = tf.signal.hamming_window
waveform = tf.random.normal(dtype=tf.float32, shape=[1000])
stft = tf.signal.stft(
waveform, frame_length, frame_step, window_fn=window_fn)
inverse_stft = tf.signal.inverse_stft(
stft, frame_length, frame_step,
window_fn=tf.signal.inverse_stft_window_fn(
frame_step, forward_window_fn=window_fn))
Implemented with TPU/GPU-compatible ops and supports gradients.
Args
stfts A complex64/complex128 [..., frames, fft_unique_bins] Tensor of STFT bins representing a batch of fft_length-point STFTs where fft_unique_bins is fft_length // 2 + 1
frame_length An integer scalar Tensor. The window length in samples.
frame_step An integer scalar Tensor. The number of samples to step.
fft_length An integer scalar Tensor. The size of the FFT that produced stfts. If not provided, uses the smallest power of 2 enclosing frame_length.
window_fn A callable that takes a window length and a dtype keyword argument and returns a [window_length] Tensor of samples in the provided datatype. If set to None, no windowing is used.
name An optional name for the operation.
Returns A [..., samples] Tensor of float32/float64 signals representing the inverse STFT for each input STFT in stfts.
Raises
ValueError If stfts is not at least rank 2, frame_length is not scalar, frame_step is not scalar, or fft_length is not scalar. | tensorflow.signal.inverse_stft |
tf.signal.inverse_stft_window_fn View source on GitHub Generates a window function that can be used in inverse_stft. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.signal.inverse_stft_window_fn
tf.signal.inverse_stft_window_fn(
frame_step, forward_window_fn=tf.signal.hann_window, name=None
)
Constructs a window that is equal to the forward window with a further pointwise amplitude correction. inverse_stft_window_fn is equivalent to forward_window_fn in the case where it would produce an exact inverse. See examples in inverse_stft documentation for usage.
Args
frame_step An integer scalar Tensor. The number of samples to step.
forward_window_fn window_fn used in the forward transform, stft.
name An optional name for the operation.
Returns A callable that takes a window length and a dtype keyword argument and returns a [window_length] Tensor of samples in the provided datatype. The returned window is suitable for reconstructing original waveform in inverse_stft. | tensorflow.signal.inverse_stft_window_fn |