tf.experimental.numpy.random.uniform TensorFlow variant of NumPy's random.uniform.
tf.experimental.numpy.random.uniform(
low=0.0, high=1.0, size=None
)
See the NumPy documentation for numpy.random.uniform. | tensorflow.experimental.numpy.random.uniform |
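The TF variant exposes the same low/high/size parameters as NumPy. A minimal sketch using plain NumPy (assumed installed), since the entry defers to NumPy's semantics; seeding APIs differ between the two libraries:

```python
import numpy as np

# Draw 4 samples from the half-open interval [low, high);
# size controls the output shape, as in the TF variant.
samples = np.random.uniform(low=-1.0, high=1.0, size=4)
print(samples.shape)  # (4,)
```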
tf.experimental.numpy.ravel TensorFlow variant of NumPy's ravel. View aliases Main aliases
tf.experimental.numpy.ndarray.ravel
tf.experimental.numpy.ravel(
a
)
Unsupported arguments: order. See the NumPy documentation for numpy.ravel. | tensorflow.experimental.numpy.ravel |
tf.experimental.numpy.real TensorFlow variant of NumPy's real.
tf.experimental.numpy.real(
val
)
See the NumPy documentation for numpy.real. | tensorflow.experimental.numpy.real |
tf.experimental.numpy.reciprocal TensorFlow variant of NumPy's reciprocal.
tf.experimental.numpy.reciprocal(
x
)
Unsupported arguments: out, where, casting, order, dtype, subok, signature, extobj. See the NumPy documentation for numpy.reciprocal. | tensorflow.experimental.numpy.reciprocal |
tf.experimental.numpy.remainder TensorFlow variant of NumPy's remainder.
tf.experimental.numpy.remainder(
x1, x2
)
Unsupported arguments: out, where, casting, order, dtype, subok, signature, extobj. See the NumPy documentation for numpy.remainder. | tensorflow.experimental.numpy.remainder |
tf.experimental.numpy.repeat TensorFlow variant of NumPy's repeat.
tf.experimental.numpy.repeat(
a, repeats, axis=None
)
See the NumPy documentation for numpy.repeat. | tensorflow.experimental.numpy.repeat |
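A quick sketch of the documented repeats/axis behaviour, shown with plain NumPy (whose semantics the TF variant follows):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
flat = np.repeat(a, 2)               # axis=None flattens first: [1 1 2 2 3 3 4 4]
rows = np.repeat(a, 2, axis=0)       # each row appears twice
cols = np.repeat(a, [1, 3], axis=1)  # per-column repeat counts
print(flat, rows, cols, sep="\n")
```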
tf.experimental.numpy.reshape TensorFlow variant of NumPy's reshape.
tf.experimental.numpy.reshape(
a, newshape, order='C'
)
The order argument can only be 'C' or 'F'. See the NumPy documentation for numpy.reshape. | tensorflow.experimental.numpy.reshape |
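The two accepted order values differ in traversal order; a sketch in plain NumPy (the TF variant restricts order to these two, excluding NumPy's 'A'):

```python
import numpy as np

a = np.arange(6)
c = np.reshape(a, (2, 3), order='C')  # row-major (the default)
f = np.reshape(a, (2, 3), order='F')  # column-major
print(c)  # [[0 1 2] [3 4 5]]
print(f)  # [[0 2 4] [1 3 5]]
```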
tf.experimental.numpy.result_type TensorFlow variant of NumPy's result_type.
tf.experimental.numpy.result_type(
*arrays_and_dtypes
)
See the NumPy documentation for numpy.result_type. | tensorflow.experimental.numpy.result_type |
tf.experimental.numpy.roll TensorFlow variant of NumPy's roll.
tf.experimental.numpy.roll(
a, shift, axis=None
)
See the NumPy documentation for numpy.roll. | tensorflow.experimental.numpy.roll |
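A short illustration of shift/axis, using plain NumPy since the entry defers to numpy.roll:

```python
import numpy as np

a = np.arange(4)
r1 = np.roll(a, 1)            # elements wrap around: [3 0 1 2]
m = np.arange(6).reshape(2, 3)
r2 = np.roll(m, 1, axis=1)    # each row shifted right by one
print(r1, r2, sep="\n")
```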
tf.experimental.numpy.rot90 TensorFlow variant of NumPy's rot90.
tf.experimental.numpy.rot90(
m, k=1, axes=(0, 1)
)
See the NumPy documentation for numpy.rot90. | tensorflow.experimental.numpy.rot90 |
tf.experimental.numpy.round TensorFlow variant of NumPy's round.
tf.experimental.numpy.round(
a, decimals=0
)
Unsupported arguments: out. See the NumPy documentation for numpy.round. | tensorflow.experimental.numpy.round |
tf.experimental.numpy.select TensorFlow variant of NumPy's select.
tf.experimental.numpy.select(
condlist, choicelist, default=0
)
See the NumPy documentation for numpy.select. | tensorflow.experimental.numpy.select |
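The condlist/choicelist/default triple can be opaque on first read; a sketch in plain NumPy (which the TF variant mirrors). The first matching condition wins, and default fills positions where no condition matches:

```python
import numpy as np

x = np.arange(6)
out = np.select([x < 2, x > 3], [x, x ** 2], default=0)
print(out)  # [ 0  1  0  0 16 25]
```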
tf.experimental.numpy.shape TensorFlow variant of NumPy's shape.
tf.experimental.numpy.shape(
a
)
See the NumPy documentation for numpy.shape. | tensorflow.experimental.numpy.shape |
tf.experimental.numpy.sign TensorFlow variant of NumPy's sign.
tf.experimental.numpy.sign(
x, out=None, where=None, **kwargs
)
See the NumPy documentation for numpy.sign. | tensorflow.experimental.numpy.sign |
tf.experimental.numpy.signbit TensorFlow variant of NumPy's signbit.
tf.experimental.numpy.signbit(
x
)
Unsupported arguments: out, where, casting, order, dtype, subok, signature, extobj. See the NumPy documentation for numpy.signbit. | tensorflow.experimental.numpy.signbit |
tf.experimental.numpy.sin TensorFlow variant of NumPy's sin.
tf.experimental.numpy.sin(
x
)
Unsupported arguments: out, where, casting, order, dtype, subok, signature, extobj. See the NumPy documentation for numpy.sin. | tensorflow.experimental.numpy.sin |
tf.experimental.numpy.sinc TensorFlow variant of NumPy's sinc.
tf.experimental.numpy.sinc(
x
)
See the NumPy documentation for numpy.sinc. | tensorflow.experimental.numpy.sinc |
tf.experimental.numpy.sinh TensorFlow variant of NumPy's sinh.
tf.experimental.numpy.sinh(
x
)
Unsupported arguments: out, where, casting, order, dtype, subok, signature, extobj. See the NumPy documentation for numpy.sinh. | tensorflow.experimental.numpy.sinh |
tf.experimental.numpy.size TensorFlow variant of NumPy's size.
tf.experimental.numpy.size(
x, axis=None
)
Unsupported arguments: a. See the NumPy documentation for numpy.size. | tensorflow.experimental.numpy.size |
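Note that the TF variant names its first parameter x where NumPy uses a (which is what the "Unsupported arguments: a" note reflects). The behaviour itself matches NumPy's:

```python
import numpy as np

arr = np.zeros((2, 3))
n = np.size(arr)           # total element count
n0 = np.size(arr, axis=0)  # length along axis 0
print(n, n0)  # 6 2
```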
tf.experimental.numpy.sort TensorFlow variant of NumPy's sort.
tf.experimental.numpy.sort(
a, axis=-1, kind='quicksort', order=None
)
See the NumPy documentation for numpy.sort. | tensorflow.experimental.numpy.sort |
tf.experimental.numpy.split TensorFlow variant of NumPy's split.
tf.experimental.numpy.split(
ary, indices_or_sections, axis=0
)
See the NumPy documentation for numpy.split. | tensorflow.experimental.numpy.split |
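indices_or_sections accepts either an integer (equal sections) or a list of split points; a plain-NumPy sketch of both forms:

```python
import numpy as np

a = np.arange(6)
parts = np.split(a, 3)        # 3 equal pieces
chunks = np.split(a, [1, 4])  # split before indices 1 and 4
print(parts, chunks, sep="\n")
```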
tf.experimental.numpy.sqrt TensorFlow variant of NumPy's sqrt.
tf.experimental.numpy.sqrt(
x
)
Unsupported arguments: out, where, casting, order, dtype, subok, signature, extobj. See the NumPy documentation for numpy.sqrt. | tensorflow.experimental.numpy.sqrt |
tf.experimental.numpy.square TensorFlow variant of NumPy's square.
tf.experimental.numpy.square(
x
)
Unsupported arguments: out, where, casting, order, dtype, subok, signature, extobj. See the NumPy documentation for numpy.square. | tensorflow.experimental.numpy.square |
tf.experimental.numpy.squeeze TensorFlow variant of NumPy's squeeze.
tf.experimental.numpy.squeeze(
a, axis=None
)
See the NumPy documentation for numpy.squeeze. | tensorflow.experimental.numpy.squeeze |
tf.experimental.numpy.stack TensorFlow variant of NumPy's stack.
tf.experimental.numpy.stack(
arrays, axis=0
)
Unsupported arguments: out. See the NumPy documentation for numpy.stack. | tensorflow.experimental.numpy.stack |
tf.experimental.numpy.std TensorFlow variant of NumPy's std.
tf.experimental.numpy.std(
a, axis=None, keepdims=None
)
Unsupported arguments: dtype, out, ddof. See the NumPy documentation for numpy.std. | tensorflow.experimental.numpy.std |
tf.experimental.numpy.string_ bytes(iterable_of_ints) -> bytes
tf.experimental.numpy.string_(
*args, **kwargs
)
bytes(string, encoding[, errors]) -> bytes
bytes(bytes_or_buffer) -> immutable copy of bytes_or_buffer
bytes(int) -> bytes object of size given by the parameter initialized with null bytes
bytes() -> empty bytes object
Construct an immutable array of bytes from: an iterable yielding integers in range(256), a text string encoded using the specified encoding, any object implementing the buffer API, or an integer. Methods all
all()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. any
any()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. argmax
argmax()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. argmin
argmin()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. argsort
argsort()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. astype
astype()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. byteswap
byteswap()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. capitalize
capitalize()
B.capitalize() -> copy of B Return a copy of B with only its first character capitalized (ASCII) and the rest lower-cased. center
center()
B.center(width[, fillchar]) -> copy of B Return B centered in a string of length width. Padding is done using the specified fill character (default is a space). choose
choose()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. clip
clip()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. compress
compress()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. conj
conj()
conjugate
conjugate()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. copy
copy()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. count
count()
B.count(sub[, start[, end]]) -> int Return the number of non-overlapping occurrences of subsection sub in bytes B[start:end]. Optional arguments start and end are interpreted as in slice notation. cumprod
cumprod()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. cumsum
cumsum()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. decode
decode(
encoding='utf-8', errors='strict'
)
Decode the bytes using the codec registered for encoding. encoding The encoding with which to decode the bytes. errors The error handling scheme to use for the handling of decoding errors. The default is 'strict' meaning that decoding errors raise a UnicodeDecodeError. Other possible values are 'ignore' and 'replace' as well as any other name registered with codecs.register_error that can handle UnicodeDecodeErrors. diagonal
diagonal()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. dump
dump()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. dumps
dumps()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. endswith
endswith()
B.endswith(suffix[, start[, end]]) -> bool Return True if B ends with the specified suffix, False otherwise. With optional start, test B beginning at that position. With optional end, stop comparing B at that position. suffix can also be a tuple of bytes to try. expandtabs
expandtabs()
B.expandtabs(tabsize=8) -> copy of B Return a copy of B where all tab characters are expanded using spaces. If tabsize is not given, a tab size of 8 characters is assumed. fill
fill()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. find
find()
B.find(sub[, start[, end]]) -> int Return the lowest index in B where subsection sub is found, such that sub is contained within B[start,end]. Optional arguments start and end are interpreted as in slice notation. Return -1 on failure. flatten
flatten()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. fromhex
fromhex(
string, /
)
Create a bytes object from a string of hexadecimal numbers. Spaces between two numbers are accepted. Example: bytes.fromhex('B9 01EF') -> b'\xb9\x01\xef'. getfield
getfield()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. hex
hex()
B.hex() -> string Create a string of hexadecimal numbers from a bytes object. Example: b'\xb9\x01\xef'.hex() -> 'b901ef'. index
index()
B.index(sub[, start[, end]]) -> int Return the lowest index in B where subsection sub is found, such that sub is contained within B[start,end]. Optional arguments start and end are interpreted as in slice notation. Raises ValueError when the subsection is not found. isalnum
isalnum()
B.isalnum() -> bool Return True if all characters in B are alphanumeric and there is at least one character in B, False otherwise. isalpha
isalpha()
B.isalpha() -> bool Return True if all characters in B are alphabetic and there is at least one character in B, False otherwise. isascii
isascii()
B.isascii() -> bool Return True if B is empty or all characters in B are ASCII, False otherwise. isdigit
isdigit()
B.isdigit() -> bool Return True if all characters in B are digits and there is at least one character in B, False otherwise. islower
islower()
B.islower() -> bool Return True if all cased characters in B are lowercase and there is at least one cased character in B, False otherwise. isspace
isspace()
B.isspace() -> bool Return True if all characters in B are whitespace and there is at least one character in B, False otherwise. istitle
istitle()
B.istitle() -> bool Return True if B is a titlecased string and there is at least one character in B, i.e. uppercase characters may only follow uncased characters and lowercase characters only cased ones. Return False otherwise. isupper
isupper()
B.isupper() -> bool Return True if all cased characters in B are uppercase and there is at least one cased character in B, False otherwise. item
item()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. itemset
itemset()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. join
join(
iterable_of_bytes, /
)
Concatenate any number of bytes objects. The bytes whose method is called is inserted in between each pair. The result is returned as a new bytes object. Example: b'.'.join([b'ab', b'pq', b'rs']) -> b'ab.pq.rs'. ljust
ljust()
B.ljust(width[, fillchar]) -> copy of B Return B left justified in a string of length width. Padding is done using the specified fill character (default is a space). lower
lower()
B.lower() -> copy of B Return a copy of B with all ASCII characters converted to lowercase. lstrip
lstrip(
bytes, /
)
Strip leading bytes contained in the argument. If the argument is omitted or None, strip leading ASCII whitespace. maketrans
maketrans(
frm, to, /
)
Return a translation table useable for the bytes or bytearray translate method. The returned table will be one where each byte in frm is mapped to the byte at the same position in to. The bytes objects frm and to must be of the same length. max
max()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. mean
mean()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. min
min()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. newbyteorder
newbyteorder()
newbyteorder(new_order='S') Return a new dtype with a different byte order. Changes are also made in all fields and sub-arrays of the data type. The new_order code can be any of the following:
'S' - swap dtype from current to opposite endian
'<', 'L' - little endian
'>', 'B' - big endian
'=', 'N' - native order
'|', 'I' - ignore (no change to byte order)
Parameters new_order : str, optional Byte order to force; a value from the byte order specifications above. The default value ('S') results in swapping the current byte order. The code does a case-insensitive check on the first letter of new_order for the alternatives above. For example, any of 'B' or 'b' or 'biggish' are valid to specify big-endian. Returns new_dtype : dtype New dtype object with the given change to the byte order. nonzero
nonzero()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. partition
partition(
sep, /
)
Partition the bytes into three parts using the given separator. This will search for the separator sep in the bytes. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it. If the separator is not found, returns a 3-tuple containing the original bytes object and two empty bytes objects. prod
prod()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. ptp
ptp()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. put
put()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. ravel
ravel()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. repeat
repeat()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. replace
replace(
old, new, count, /
)
Return a copy with all occurrences of substring old replaced by new. count Maximum number of occurrences to replace. -1 (the default value) means replace all occurrences. If the optional argument count is given, only the first count occurrences are replaced. reshape
reshape()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. resize
resize()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. rfind
rfind()
B.rfind(sub[, start[, end]]) -> int Return the highest index in B where subsection sub is found, such that sub is contained within B[start,end]. Optional arguments start and end are interpreted as in slice notation. Return -1 on failure. rindex
rindex()
B.rindex(sub[, start[, end]]) -> int Return the highest index in B where subsection sub is found, such that sub is contained within B[start,end]. Optional arguments start and end are interpreted as in slice notation. Raise ValueError when the subsection is not found. rjust
rjust()
B.rjust(width[, fillchar]) -> copy of B Return B right justified in a string of length width. Padding is done using the specified fill character (default is a space). round
round()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. rpartition
rpartition(
sep, /
)
Partition the bytes into three parts using the given separator. This will search for the separator sep in the bytes, starting at the end. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it. If the separator is not found, returns a 3-tuple containing two empty bytes objects and the original bytes object. rsplit
rsplit(
sep=None, maxsplit=-1
)
Return a list of the sections in the bytes, using sep as the delimiter. sep The delimiter according to which to split the bytes. None (the default value) means split on ASCII whitespace characters (space, tab, return, newline, formfeed, vertical tab). maxsplit Maximum number of splits to do. -1 (the default value) means no limit. Splitting is done starting at the end of the bytes and working to the front. rstrip
rstrip(
bytes, /
)
Strip trailing bytes contained in the argument. If the argument is omitted or None, strip trailing ASCII whitespace. searchsorted
searchsorted()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. setfield
setfield()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. setflags
setflags()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. sort
sort()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. split
split(
sep=None, maxsplit=-1
)
Return a list of the sections in the bytes, using sep as the delimiter. sep The delimiter according to which to split the bytes. None (the default value) means split on ASCII whitespace characters (space, tab, return, newline, formfeed, vertical tab). maxsplit Maximum number of splits to do. -1 (the default value) means no limit. splitlines
splitlines(
keepends=False
)
Return a list of the lines in the bytes, breaking at line boundaries. Line breaks are not included in the resulting list unless keepends is given and true. squeeze
squeeze()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. startswith
startswith()
B.startswith(prefix[, start[, end]]) -> bool Return True if B starts with the specified prefix, False otherwise. With optional start, test B beginning at that position. With optional end, stop comparing B at that position. prefix can also be a tuple of bytes to try. std
std()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. strip
strip(
bytes, /
)
Strip leading and trailing bytes contained in the argument. If the argument is omitted or None, strip leading and trailing ASCII whitespace. sum
sum()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. swapaxes
swapaxes()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. swapcase
swapcase()
B.swapcase() -> copy of B Return a copy of B with uppercase ASCII characters converted to lowercase ASCII and vice versa. take
take()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. title
title()
B.title() -> copy of B Return a titlecased version of B, i.e. ASCII words start with uppercase characters, all remaining cased characters have lowercase. tobytes
tobytes()
tofile
tofile()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. tolist
tolist()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. tostring
tostring()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. trace
trace()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. translate
translate(
table, /, delete=b''
)
Return a copy with each character mapped by the given translation table. table Translation table, which must be a bytes object of length 256. All characters occurring in the optional argument delete are removed. The remaining characters are mapped through the given translation table. transpose
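The maketrans/translate pair documented above works together; a short standard-Python sketch (these are the plain bytes methods the string_ scalar inherits):

```python
# Build a 256-byte translation table mapping a->x, b->y, c->z.
table = bytes.maketrans(b'abc', b'xyz')
mapped = b'aabbcc-dd'.translate(table)
stripped = b'aabbcc-dd'.translate(table, delete=b'-')  # '-' removed first
print(mapped)    # b'xxyyzz-dd'
print(stripped)  # b'xxyyzzdd'
```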
transpose()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. upper
upper()
B.upper() -> copy of B Return a copy of B with all ASCII characters converted to uppercase. var
var()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. view
view()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. zfill
zfill()
B.zfill(width) -> copy of B Pad a numeric string B with zeros on the left, to fill a field of the specified width. B is never truncated. __abs__
__abs__()
abs(self) __add__
__add__(
value, /
)
Return self+value. __and__
__and__(
value, /
)
Return self&value. __bool__
__bool__()
self != 0 __contains__
__contains__(
key, /
)
Return key in self. __eq__
__eq__(
value, /
)
Return self==value. __floordiv__
__floordiv__(
value, /
)
Return self//value. __ge__
__ge__(
value, /
)
Return self>=value. __getitem__
__getitem__(
key, /
)
Return self[key]. __gt__
__gt__(
value, /
)
Return self>value. __invert__
__invert__()
~self __iter__
__iter__()
Implement iter(self). __le__
__le__(
value, /
)
Return self<=value. __len__
__len__()
Return len(self). __lt__
__lt__(
value, /
)
Return self<value. __mod__
__mod__(
value, /
)
Return self%value. __mul__
__mul__(
value, /
)
Return self*value. __ne__
__ne__(
value, /
)
Return self!=value. __neg__
__neg__()
-self __or__
__or__(
value, /
)
Return self|value. __pos__
__pos__()
+self __pow__
__pow__(
value, mod, /
)
Return pow(self, value, mod). __radd__
__radd__(
value, /
)
Return value+self. __rand__
__rand__(
value, /
)
Return value&self. __rfloordiv__
__rfloordiv__(
value, /
)
Return value//self. __rmod__
__rmod__(
value, /
)
Return value%self. __rmul__
__rmul__(
value, /
)
Return value*self. __ror__
__ror__(
value, /
)
Return value|self. __rpow__
__rpow__(
value, mod, /
)
Return pow(value, self, mod). __rsub__
__rsub__(
value, /
)
Return value-self. __rtruediv__
__rtruediv__(
value, /
)
Return value/self. __rxor__
__rxor__(
value, /
)
Return value^self. __sub__
__sub__(
value, /
)
Return self-value. __truediv__
__truediv__(
value, /
)
Return self/value. __xor__
__xor__(
value, /
)
Return self^value.
Class Variables
T
base
data
dtype
flags
flat
imag
itemsize
nbytes
ndim
real
shape
size
strides | tensorflow.experimental.numpy.string_ |
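The `upper` and `zfill` methods described above mirror the corresponding Python `bytes` methods, since NumPy byte-string scalars subclass `bytes`. A minimal sketch using plain Python `bytes` (the behavior the docstrings above describe):

```python
# B.upper(): ASCII uppercase copy; B.zfill(width): left-pad with zeros.
b = b"abc"
upper = b.upper()     # b"ABC"
padded = b"42".zfill(5)  # b"00042"; B is never truncated
not_truncated = b"123456".zfill(3)  # width smaller than len(B): unchanged
```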
tf.experimental.numpy.subtract TensorFlow variant of NumPy's subtract.
tf.experimental.numpy.subtract(
x1, x2
)
Unsupported arguments: out, where, casting, order, dtype, subok, signature, extobj. See the NumPy documentation for numpy.subtract. | tensorflow.experimental.numpy.subtract |
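A minimal sketch of the subtraction semantics using plain NumPy, which this variant mirrors (element-wise with broadcasting):

```python
import numpy as np

# Element-wise subtraction; shapes broadcast as usual.
diff = np.subtract([5, 7], [2, 3])             # array([3, 4])
broadcast = np.subtract([[1], [2]], [10, 20])  # (2,1) - (2,) -> (2,2)
```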
tf.experimental.numpy.sum TensorFlow variant of NumPy's sum.
tf.experimental.numpy.sum(
a, axis=None, dtype=None, keepdims=None
)
Unsupported arguments: out, initial, where. See the NumPy documentation for numpy.sum. | tensorflow.experimental.numpy.sum |
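The supported `axis`, `dtype`, and `keepdims` arguments behave as in NumPy; a minimal NumPy sketch:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
total = np.sum(a)                        # sum over all elements: 10
cols = np.sum(a, axis=0)                 # per-column sums: array([4, 6])
kept = np.sum(a, axis=1, keepdims=True)  # keeps the reduced axis: shape (2, 1)
```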
tf.experimental.numpy.swapaxes TensorFlow variant of NumPy's swapaxes.
tf.experimental.numpy.swapaxes(
a, axis1, axis2
)
See the NumPy documentation for numpy.swapaxes. | tensorflow.experimental.numpy.swapaxes |
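Swapping two axes permutes the array's shape without touching the data; a NumPy sketch of the same semantics:

```python
import numpy as np

a = np.zeros((2, 3, 4))
swapped = np.swapaxes(a, 0, 2)  # axes 0 and 2 exchanged: shape (4, 3, 2)
```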
tf.experimental.numpy.take TensorFlow variant of NumPy's take.
tf.experimental.numpy.take(
a, indices, axis=None, out=None, mode='clip'
)
The out argument is not supported, and the default mode is 'clip' (NumPy's default is 'raise'). See the NumPy documentation for numpy.take. | tensorflow.experimental.numpy.take
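With mode='clip', out-of-range indices are clipped into the valid range instead of raising. A NumPy sketch of that clipping behavior (the default for this variant):

```python
import numpy as np

a = np.array([10, 20, 30])
# Index 5 is out of range; 'clip' mode replaces it with the last valid index 2.
taken = np.take(a, [0, 1, 5], mode="clip")  # array([10, 20, 30])
```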
tf.experimental.numpy.take_along_axis TensorFlow variant of NumPy's take_along_axis.
tf.experimental.numpy.take_along_axis(
arr, indices, axis
)
See the NumPy documentation for numpy.take_along_axis. | tensorflow.experimental.numpy.take_along_axis |
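A common use is pairing with argsort-style index arrays; a NumPy sketch of the semantics this variant follows:

```python
import numpy as np

a = np.array([[3, 1, 2],
              [6, 5, 4]])
idx = np.argsort(a, axis=1)                 # per-row sorting indices
sorted_rows = np.take_along_axis(a, idx, axis=1)  # each row sorted
```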
tf.experimental.numpy.tan TensorFlow variant of NumPy's tan.
tf.experimental.numpy.tan(
x
)
Unsupported arguments: out, where, casting, order, dtype, subok, signature, extobj. See the NumPy documentation for numpy.tan. | tensorflow.experimental.numpy.tan |
tf.experimental.numpy.tanh TensorFlow variant of NumPy's tanh.
tf.experimental.numpy.tanh(
x
)
Unsupported arguments: out, where, casting, order, dtype, subok, signature, extobj. See the NumPy documentation for numpy.tanh. | tensorflow.experimental.numpy.tanh |
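Both tan and tanh are element-wise ufuncs with the usual mathematical behavior; a NumPy sketch:

```python
import numpy as np

# tan(pi/4) is 1; tanh is bounded in (-1, 1) and odd around 0.
t = np.tan(np.pi / 4)
th0 = np.tanh(0.0)
th_big = np.tanh(50.0)  # saturates toward 1.0
```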
tf.experimental.numpy.tensordot TensorFlow variant of NumPy's tensordot.
tf.experimental.numpy.tensordot(
a, b, axes=2
)
See the NumPy documentation for numpy.tensordot. | tensorflow.experimental.numpy.tensordot |
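The `axes` argument controls how many trailing/leading axes are contracted: `axes=1` is a matrix product, and the default `axes=2` is a double contraction. A NumPy sketch:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(6).reshape(3, 2)
mat = np.tensordot(a, b, axes=1)     # equivalent to a @ b
full = np.tensordot(a, a, axes=2)    # sum of element-wise products: 55
```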
tf.experimental.numpy.tile TensorFlow variant of NumPy's tile.
tf.experimental.numpy.tile(
a, reps
)
See the NumPy documentation for numpy.tile. | tensorflow.experimental.numpy.tile |
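`reps` may be a scalar or a per-axis tuple; a NumPy sketch of the tiling semantics:

```python
import numpy as np

flat = np.tile([1, 2], 3)          # repeat along the last axis
grid = np.tile([[1, 2]], (2, 1))   # 2 copies along axis 0, 1 along axis 1
```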
tf.experimental.numpy.trace TensorFlow variant of NumPy's trace.
tf.experimental.numpy.trace(
a, offset=0, axis1=0, axis2=1, dtype=None
)
Unsupported arguments: out. See the NumPy documentation for numpy.trace. | tensorflow.experimental.numpy.trace |
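`offset` shifts which diagonal is summed (positive above the main diagonal, negative below); a NumPy sketch:

```python
import numpy as np

m = np.array([[1, 2],
              [3, 4]])
main = np.trace(m)             # 1 + 4 = 5
upper = np.trace(m, offset=1)  # just the 2
lower = np.trace(m, offset=-1) # just the 3
```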
tf.experimental.numpy.transpose TensorFlow variant of NumPy's transpose. View aliases Main aliases
tf.experimental.numpy.ndarray.transpose
tf.experimental.numpy.transpose(
a, axes=None
)
See the NumPy documentation for numpy.transpose. | tensorflow.experimental.numpy.transpose |
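With `axes=None` the axis order is reversed; an explicit permutation can be given instead. A NumPy sketch:

```python
import numpy as np

a = np.zeros((2, 3, 4))
rev = np.transpose(a)             # axes reversed: shape (4, 3, 2)
perm = np.transpose(a, (1, 0, 2)) # explicit permutation: shape (3, 2, 4)
```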
tf.experimental.numpy.tri TensorFlow variant of NumPy's tri.
tf.experimental.numpy.tri(
N, M=None, k=0, dtype=None
)
See the NumPy documentation for numpy.tri. | tensorflow.experimental.numpy.tri |
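`tri` builds a matrix of ones at and below the k-th diagonal; a NumPy sketch:

```python
import numpy as np

lower = np.tri(3)         # ones on and below the main diagonal
strict = np.tri(3, k=-1)  # strictly below the main diagonal
```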
tf.experimental.numpy.tril TensorFlow variant of NumPy's tril.
tf.experimental.numpy.tril(
m, k=0
)
See the NumPy documentation for numpy.tril. | tensorflow.experimental.numpy.tril |
tf.experimental.numpy.triu TensorFlow variant of NumPy's triu.
tf.experimental.numpy.triu(
m, k=0
)
See the NumPy documentation for numpy.triu. | tensorflow.experimental.numpy.triu |
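`tril` and `triu` zero out the elements above or below the k-th diagonal, respectively; a NumPy sketch:

```python
import numpy as np

m = np.arange(1, 10).reshape(3, 3)
lower = np.tril(m)        # zero above the main diagonal
upper = np.triu(m, k=1)   # keep only strictly above the main diagonal
```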
tf.experimental.numpy.true_divide TensorFlow variant of NumPy's true_divide.
tf.experimental.numpy.true_divide(
x1, x2
)
Unsupported arguments: out, where, casting, order, dtype, subok, signature, extobj. See the NumPy documentation for numpy.true_divide. | tensorflow.experimental.numpy.true_divide |
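true_divide always performs floating-point division, even for integer inputs; a NumPy sketch:

```python
import numpy as np

q = np.true_divide(7, 2)  # 3.5, not the floor-division result 3
```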
tf.experimental.numpy.uint16 Unsigned integer type, compatible with C unsigned short.
tf.experimental.numpy.uint16(
*args, **kwargs
)
Character code: 'H'. Canonical name: np.ushort. Alias on this platform: np.uint16: 16-bit unsigned integer (0 to 65535). Methods all
all()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. any
any()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. argmax
argmax()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. argmin
argmin()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. argsort
argsort()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. astype
astype()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. byteswap
byteswap()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. choose
choose()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. clip
clip()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. compress
compress()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. conj
conj()
conjugate
conjugate()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. copy
copy()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. cumprod
cumprod()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. cumsum
cumsum()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. diagonal
diagonal()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. dump
dump()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. dumps
dumps()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. fill
fill()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. flatten
flatten()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. getfield
getfield()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. item
item()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. itemset
itemset()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. max
max()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. mean
mean()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. min
min()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. newbyteorder
newbyteorder()
newbyteorder(new_order='S') Return a new dtype with a different byte order. Changes are also made in all fields and sub-arrays of the data type. The new_order code can be any of the following:
'S' - swap dtype from current to opposite endian
'<', 'L' - little endian
'>', 'B' - big endian
'=', 'N' - native order
'|', 'I' - ignore (no change to byte order)
Parameters new_order : str, optional Byte order to force; a value from the byte order specifications above. The default value ('S') results in swapping the current byte order. The code does a case-insensitive check on the first letter of new_order for the alternatives above. For example, any of 'B' or 'b' or 'biggish' are valid to specify big-endian. Returns new_dtype : dtype New dtype object with the given change to the byte order. nonzero
nonzero()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. prod
prod()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. ptp
ptp()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. put
put()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. ravel
ravel()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. repeat
repeat()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. reshape
reshape()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. resize
resize()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. round
round()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. searchsorted
searchsorted()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. setfield
setfield()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. setflags
setflags()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. sort
sort()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. squeeze
squeeze()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. std
std()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. sum
sum()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. swapaxes
swapaxes()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. take
take()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. tobytes
tobytes()
tofile
tofile()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. tolist
tolist()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. tostring
tostring()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. trace
trace()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. transpose
transpose()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. var
var()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. view
view()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. __abs__
__abs__()
abs(self) __add__
__add__(
value, /
)
Return self+value. __and__
__and__(
value, /
)
Return self&value. __bool__
__bool__()
self != 0 __eq__
__eq__(
value, /
)
Return self==value. __floordiv__
__floordiv__(
value, /
)
Return self//value. __ge__
__ge__(
value, /
)
Return self>=value. __getitem__
__getitem__(
key, /
)
Return self[key]. __gt__
__gt__(
value, /
)
Return self>value. __invert__
__invert__()
~self __le__
__le__(
value, /
)
Return self<=value. __lt__
__lt__(
value, /
)
Return self<value. __mod__
__mod__(
value, /
)
Return self%value. __mul__
__mul__(
value, /
)
Return self*value. __ne__
__ne__(
value, /
)
Return self!=value. __neg__
__neg__()
-self __or__
__or__(
value, /
)
Return self|value. __pos__
__pos__()
+self __pow__
__pow__(
value, mod, /
)
Return pow(self, value, mod). __radd__
__radd__(
value, /
)
Return value+self. __rand__
__rand__(
value, /
)
Return value&self. __rfloordiv__
__rfloordiv__(
value, /
)
Return value//self. __rmod__
__rmod__(
value, /
)
Return value%self. __rmul__
__rmul__(
value, /
)
Return value*self. __ror__
__ror__(
value, /
)
Return value|self. __rpow__
__rpow__(
value, mod, /
)
Return pow(value, self, mod). __rsub__
__rsub__(
value, /
)
Return value-self. __rtruediv__
__rtruediv__(
value, /
)
Return value/self. __rxor__
__rxor__(
value, /
)
Return value^self. __sub__
__sub__(
value, /
)
Return self-value. __truediv__
__truediv__(
value, /
)
Return self/value. __xor__
__xor__(
value, /
)
Return self^value.
Class Variables
T
base
data
denominator
dtype
flags
flat
imag
itemsize
nbytes
ndim
numerator
real
shape
size
strides | tensorflow.experimental.numpy.uint16 |
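A minimal NumPy sketch of the uint16 facts stated above (character code 'H', 16-bit range 0 to 65535) and of the newbyteorder method documented in the method list:

```python
import numpy as np

info = np.iinfo(np.uint16)            # integer-type limits
char = np.dtype(np.uint16).char       # character code 'H'
# newbyteorder with the default 'S' swaps endianness of the dtype.
swapped = np.dtype("<u2").newbyteorder()  # little-endian -> big-endian
```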
tf.experimental.numpy.uint32 Unsigned integer type, compatible with C unsigned int.
tf.experimental.numpy.uint32(
*args, **kwargs
)
Character code: 'I'. Canonical name: np.uintc. Alias on this platform: np.uint32: 32-bit unsigned integer (0 to 4294967295). Methods all
all()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. any
any()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. argmax
argmax()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. argmin
argmin()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. argsort
argsort()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. astype
astype()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. byteswap
byteswap()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. choose
choose()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. clip
clip()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. compress
compress()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. conj
conj()
conjugate
conjugate()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. copy
copy()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. cumprod
cumprod()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. cumsum
cumsum()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. diagonal
diagonal()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. dump
dump()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. dumps
dumps()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. fill
fill()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. flatten
flatten()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. getfield
getfield()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. item
item()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. itemset
itemset()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. max
max()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. mean
mean()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. min
min()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. newbyteorder
newbyteorder()
newbyteorder(new_order='S') Return a new dtype with a different byte order. Changes are also made in all fields and sub-arrays of the data type. The new_order code can be any of the following:
'S' - swap dtype from current to opposite endian
'<', 'L' - little endian
'>', 'B' - big endian
'=', 'N' - native order
'|', 'I' - ignore (no change to byte order)
Parameters new_order : str, optional Byte order to force; a value from the byte order specifications above. The default value ('S') results in swapping the current byte order. The code does a case-insensitive check on the first letter of new_order for the alternatives above. For example, any of 'B' or 'b' or 'biggish' are valid to specify big-endian. Returns new_dtype : dtype New dtype object with the given change to the byte order. nonzero
nonzero()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. prod
prod()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. ptp
ptp()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. put
put()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. ravel
ravel()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. repeat
repeat()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. reshape
reshape()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. resize
resize()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. round
round()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. searchsorted
searchsorted()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. setfield
setfield()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. setflags
setflags()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. sort
sort()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. squeeze
squeeze()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. std
std()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. sum
sum()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. swapaxes
swapaxes()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. take
take()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. tobytes
tobytes()
tofile
tofile()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. tolist
tolist()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. tostring
tostring()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. trace
trace()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. transpose
transpose()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. var
var()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. view
view()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. __abs__
__abs__()
abs(self) __add__
__add__(
value, /
)
Return self+value. __and__
__and__(
value, /
)
Return self&value. __bool__
__bool__()
self != 0 __eq__
__eq__(
value, /
)
Return self==value. __floordiv__
__floordiv__(
value, /
)
Return self//value. __ge__
__ge__(
value, /
)
Return self>=value. __getitem__
__getitem__(
key, /
)
Return self[key]. __gt__
__gt__(
value, /
)
Return self>value. __invert__
__invert__()
~self __le__
__le__(
value, /
)
Return self<=value. __lt__
__lt__(
value, /
)
Return self<value. __mod__
__mod__(
value, /
)
Return self%value. __mul__
__mul__(
value, /
)
Return self*value. __ne__
__ne__(
value, /
)
Return self!=value. __neg__
__neg__()
-self __or__
__or__(
value, /
)
Return self|value. __pos__
__pos__()
+self __pow__
__pow__(
value, mod, /
)
Return pow(self, value, mod). __radd__
__radd__(
value, /
)
Return value+self. __rand__
__rand__(
value, /
)
Return value&self. __rfloordiv__
__rfloordiv__(
value, /
)
Return value//self. __rmod__
__rmod__(
value, /
)
Return value%self. __rmul__
__rmul__(
value, /
)
Return value*self. __ror__
__ror__(
value, /
)
Return value|self. __rpow__
__rpow__(
value, mod, /
)
Return pow(value, self, mod). __rsub__
__rsub__(
value, /
)
Return value-self. __rtruediv__
__rtruediv__(
value, /
)
Return value/self. __rxor__
__rxor__(
value, /
)
Return value^self. __sub__
__sub__(
value, /
)
Return self-value. __truediv__
__truediv__(
value, /
)
Return self/value. __xor__
__xor__(
value, /
)
Return self^value.
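The operator methods listed above (`__floordiv__`, `__mod__`, `__xor__`, `__invert__`, `__pow__`, and friends) behave like NumPy's fixed-width unsigned arithmetic. A minimal sketch, assuming NumPy is installed; `tf.experimental.numpy`'s scalar types mirror NumPy's, so plain `numpy` is used for illustration:

```python
import numpy as np

# tf.experimental.numpy.uint32 mirrors np.uint32; plain NumPy shown here.
a = np.uint32(7)
b = np.uint32(2)

floordiv = a // b                          # __floordiv__: 7 // 2 -> 3
mod = a % b                                # __mod__: 7 % 2 -> 1
xor = np.uint32(5) ^ np.uint32(3)          # __xor__: 0b101 ^ 0b011 -> 6
inverted = ~np.uint32(0)                   # __invert__: all 32 bits set
power = pow(np.uint32(2), np.uint32(10))   # __pow__: 2 ** 10 -> 1024
```

Comparisons such as `__le__` and `__eq__` return `np.bool_` values, so the results can be used directly in assertions and conditionals.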
Class Variables
T
base
data
denominator
dtype
flags
flat
imag
itemsize
nbytes
ndim
numerator
real
shape
size
strides
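The class variables above (`dtype`, `itemsize`, `ndim`, `shape`, `numerator`, `denominator`, …) are readable on scalar instances. A minimal sketch, again using plain NumPy on the assumption that the `tf.experimental.numpy` scalar types alias NumPy's:

```python
import numpy as np

x = np.uint32(42)

dtype = x.dtype          # dtype('uint32')
itemsize = x.itemsize    # 4 bytes for a 32-bit integer
nbytes = x.nbytes        # total bytes consumed: also 4
ndim = x.ndim            # scalars are zero-dimensional
shape = x.shape          # so the shape is the empty tuple ()
num = x.numerator        # integer scalars expose numerator/denominator,
den = x.denominator      # making them usable where a Rational is expected
```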
tf.experimental.numpy.uint64 Unsigned integer type, compatible with C unsigned long.
tf.experimental.numpy.uint64(
*args, **kwargs
)
Character code: 'L'. Canonical name: np.uint. Alias on this platform: np.uint64: 64-bit unsigned integer (0 to 18446744073709551615). Alias on this platform: np.uintp: Unsigned integer large enough to fit pointer, compatible with C uintptr_t. Methods all
all()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. any
any()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. argmax
argmax()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. argmin
argmin()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. argsort
argsort()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. astype
astype()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. byteswap
byteswap()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. choose
choose()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. clip
clip()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. compress
compress()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. conj
conj()
conjugate
conjugate()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. copy
copy()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. cumprod
cumprod()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. cumsum
cumsum()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. diagonal
diagonal()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. dump
dump()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. dumps
dumps()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. fill
fill()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. flatten
flatten()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. getfield
getfield()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. item
item()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. itemset
itemset()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. max
max()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. mean
mean()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. min
min()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. newbyteorder
newbyteorder()
newbyteorder(new_order='S') Return a new dtype with a different byte order. Changes are also made in all fields and sub-arrays of the data type. The new_order code can be any of the following:
'S' - swap dtype from current to opposite endian
'<', 'L' - little endian
'>', 'B' - big endian
'=', 'N' - native order
'|', 'I' - ignore (no change to byte order)
Parameters: new_order : str, optional. Byte order to force; a value from the byte order specifications above. The default value ('S') results in swapping the current byte order. The code does a case-insensitive check on the first letter of new_order, so any of 'B', 'b', or 'biggish' are valid to specify big-endian.
Returns: new_dtype : dtype. New dtype object with the given change to the byte order. nonzero
nonzero()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. prod
prod()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. ptp
ptp()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. put
put()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. ravel
ravel()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. repeat
repeat()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. reshape
reshape()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. resize
resize()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. round
round()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. searchsorted
searchsorted()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. setfield
setfield()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. setflags
setflags()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. sort
sort()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. squeeze
squeeze()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. std
std()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. sum
sum()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. swapaxes
swapaxes()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. take
take()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. tobytes
tobytes()
tofile
tofile()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. tolist
tolist()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. tostring
tostring()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. trace
trace()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. transpose
transpose()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. var
var()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. view
view()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. __abs__
__abs__()
abs(self) __add__
__add__(
value, /
)
Return self+value. __and__
__and__(
value, /
)
Return self&value. __bool__
__bool__()
self != 0 __eq__
__eq__(
value, /
)
Return self==value. __floordiv__
__floordiv__(
value, /
)
Return self//value. __ge__
__ge__(
value, /
)
Return self>=value. __getitem__
__getitem__(
key, /
)
Return self[key]. __gt__
__gt__(
value, /
)
Return self>value. __invert__
__invert__()
~self __le__
__le__(
value, /
)
Return self<=value. __lt__
__lt__(
value, /
)
Return self<value. __mod__
__mod__(
value, /
)
Return self%value. __mul__
__mul__(
value, /
)
Return self*value. __ne__
__ne__(
value, /
)
Return self!=value. __neg__
__neg__()
-self __or__
__or__(
value, /
)
Return self|value. __pos__
__pos__()
+self __pow__
__pow__(
value, mod, /
)
Return pow(self, value, mod). __radd__
__radd__(
value, /
)
Return value+self. __rand__
__rand__(
value, /
)
Return value&self. __rfloordiv__
__rfloordiv__(
value, /
)
Return value//self. __rmod__
__rmod__(
value, /
)
Return value%self. __rmul__
__rmul__(
value, /
)
Return value*self. __ror__
__ror__(
value, /
)
Return value|self. __rpow__
__rpow__(
value, mod, /
)
Return pow(value, self, mod). __rsub__
__rsub__(
value, /
)
Return value-self. __rtruediv__
__rtruediv__(
value, /
)
Return value/self. __rxor__
__rxor__(
value, /
)
Return value^self. __sub__
__sub__(
value, /
)
Return self-value. __truediv__
__truediv__(
value, /
)
Return self/value. __xor__
__xor__(
value, /
)
Return self^value.
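As stated above, uint64 covers 0 to 18446744073709551615, and unsigned arithmetic on arrays of this type wraps modulo 2**64 rather than raising. A minimal sketch, assuming NumPy is installed (the `tf.experimental.numpy` scalar types mirror NumPy's):

```python
import numpy as np

# Maximum value of a 64-bit unsigned integer, per the range quoted above.
max_u64 = np.iinfo(np.uint64).max          # 18446744073709551615 == 2**64 - 1

# Array arithmetic wraps silently on overflow (scalar ops may warn first).
arr = np.array([max_u64], dtype=np.uint64)
wrapped = (arr + np.uint64(1))[0]          # wraps around to 0
```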
Class Variables
T
base
data
denominator
dtype
flags
flat
imag
itemsize
nbytes
ndim
numerator
real
shape
size
strides
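The byte-order codes documented for `newbyteorder` above ('S', '<'/'L', '>'/'B', '='/'N', '|'/'I') are the same ones `np.dtype.newbyteorder` accepts, which is the most portable way to exercise them (newer NumPy releases dropped the scalar-level method). A minimal sketch, assuming NumPy is installed:

```python
import numpy as np

# Force big-endian regardless of the machine's native order.
big = np.dtype(np.uint64).newbyteorder('>')

# 'S' (the default) swaps whatever the current byte order is.
swapped = np.dtype('<u8').newbyteorder('S')

# The check is case-insensitive on the first letter: 'b' also means big-endian.
also_big = np.dtype('<u8').newbyteorder('b')
```

`dtype.str` always reports an explicit '<' or '>' prefix, so it is a convenient way to inspect the result.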
tf.experimental.numpy.uint8 Unsigned integer type, compatible with C unsigned char.
tf.experimental.numpy.uint8(
*args, **kwargs
)
Character code: 'B'. Canonical name: np.ubyte. Alias on this platform: np.uint8: 8-bit unsigned integer (0 to 255). Methods all
all()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. any
any()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. argmax
argmax()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. argmin
argmin()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. argsort
argsort()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. astype
astype()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. byteswap
byteswap()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. choose
choose()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. clip
clip()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. compress
compress()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. conj
conj()
conjugate
conjugate()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. copy
copy()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. cumprod
cumprod()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. cumsum
cumsum()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. diagonal
diagonal()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. dump
dump()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. dumps
dumps()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. fill
fill()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. flatten
flatten()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. getfield
getfield()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. item
item()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. itemset
itemset()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. max
max()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. mean
mean()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. min
min()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. newbyteorder
newbyteorder()
newbyteorder(new_order='S') Return a new dtype with a different byte order. Changes are also made in all fields and sub-arrays of the data type. The new_order code can be any of the following:
'S' - swap dtype from current to opposite endian
'<', 'L' - little endian
'>', 'B' - big endian
'=', 'N' - native order
'|', 'I' - ignore (no change to byte order)
Parameters: new_order : str, optional. Byte order to force; a value from the byte order specifications above. The default value ('S') results in swapping the current byte order. The code does a case-insensitive check on the first letter of new_order, so any of 'B', 'b', or 'biggish' are valid to specify big-endian.
Returns: new_dtype : dtype. New dtype object with the given change to the byte order. nonzero
nonzero()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. prod
prod()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. ptp
ptp()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. put
put()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. ravel
ravel()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. repeat
repeat()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. reshape
reshape()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. resize
resize()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. round
round()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. searchsorted
searchsorted()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. setfield
setfield()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. setflags
setflags()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. sort
sort()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. squeeze
squeeze()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. std
std()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. sum
sum()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. swapaxes
swapaxes()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. take
take()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. tobytes
tobytes()
tofile
tofile()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. tolist
tolist()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. tostring
tostring()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. trace
trace()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. transpose
transpose()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. var
var()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. view
view()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. __abs__
__abs__()
abs(self) __add__
__add__(
value, /
)
Return self+value. __and__
__and__(
value, /
)
Return self&value. __bool__
__bool__()
self != 0 __eq__
__eq__(
value, /
)
Return self==value. __floordiv__
__floordiv__(
value, /
)
Return self//value. __ge__
__ge__(
value, /
)
Return self>=value. __getitem__
__getitem__(
key, /
)
Return self[key]. __gt__
__gt__(
value, /
)
Return self>value. __invert__
__invert__()
~self __le__
__le__(
value, /
)
Return self<=value. __lt__
__lt__(
value, /
)
Return self<value. __mod__
__mod__(
value, /
)
Return self%value. __mul__
__mul__(
value, /
)
Return self*value. __ne__
__ne__(
value, /
)
Return self!=value. __neg__
__neg__()
-self __or__
__or__(
value, /
)
Return self|value. __pos__
__pos__()
+self __pow__
__pow__(
value, mod, /
)
Return pow(self, value, mod). __radd__
__radd__(
value, /
)
Return value+self. __rand__
__rand__(
value, /
)
Return value&self. __rfloordiv__
__rfloordiv__(
value, /
)
Return value//self. __rmod__
__rmod__(
value, /
)
Return value%self. __rmul__
__rmul__(
value, /
)
Return value*self. __ror__
__ror__(
value, /
)
Return value|self. __rpow__
__rpow__(
value, mod, /
)
Return pow(value, self, mod). __rsub__
__rsub__(
value, /
)
Return value-self. __rtruediv__
__rtruediv__(
value, /
)
Return value/self. __rxor__
__rxor__(
value, /
)
Return value^self. __sub__
__sub__(
value, /
)
Return self-value. __truediv__
__truediv__(
value, /
)
Return self/value. __xor__
__xor__(
value, /
)
Return self^value.
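The operator methods above implement fixed-width unsigned 8-bit arithmetic. A minimal sketch with plain NumPy, assuming tf.experimental.numpy's uint8 follows the same modular semantics:

```python
import numpy as np

x = np.array(250, dtype=np.uint8)
y = np.array(10, dtype=np.uint8)
assert int(x + y) == 4                                # __add__ wraps modulo 256: (250 + 10) % 256
assert int(~np.array(0, dtype=np.uint8)) == 255       # __invert__ flips all 8 bits
assert int(x // np.array(16, dtype=np.uint8)) == 15   # __floordiv__ is integer division
```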
Class Variables
T
base
data
denominator
dtype
flags
flat
imag
itemsize
nbytes
ndim
numerator
real
shape
size
strides | tensorflow.experimental.numpy.uint8 |
tf.experimental.numpy.unicode_ str(object='') -> str
tf.experimental.numpy.unicode_(
*args, **kwargs
)
str(bytes_or_buffer[, encoding[, errors]]) -> str Create a new string object from the given object. If encoding or errors is specified, then the object must expose a data buffer that will be decoded using the given encoding and error handler. Otherwise, returns the result of object.__str__() (if defined) or repr(object). encoding defaults to sys.getdefaultencoding(). errors defaults to 'strict'. Methods all
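Since unicode_ subclasses the built-in str, the constructor behaves like str() itself; a quick sketch of the two modes described above:

```python
# With an encoding, a bytes buffer is decoded:
assert str(b'caf\xc3\xa9', 'utf-8') == 'café'
# Without encoding/errors, str() falls back to __str__()/repr():
assert str(42) == '42'
assert str(b'ab') == "b'ab'"   # the repr of the bytes object, not a decode
```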
all()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. any
any()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. argmax
argmax()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. argmin
argmin()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. argsort
argsort()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. astype
astype()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. byteswap
byteswap()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. capitalize
capitalize()
Return a capitalized version of the string. More specifically, make the first character have upper case and the rest lower case. casefold
casefold()
Return a version of the string suitable for caseless comparisons. center
center(
width, fillchar, /
)
Return a centered string of length width. Padding is done using the specified fill character (default is a space). choose
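For illustration:

```python
assert 'ab'.center(6, '*') == '**ab**'
assert 'abc'.center(7, '-') == '--abc--'
assert 'abc'.center(2) == 'abc'   # width smaller than the string: returned unchanged
```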
choose()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. clip
clip()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. compress
compress()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. conj
conj()
conjugate
conjugate()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. copy
copy()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. count
count()
S.count(sub[, start[, end]]) -> int Return the number of non-overlapping occurrences of substring sub in string S[start:end]. Optional arguments start and end are interpreted as in slice notation. cumprod
cumprod()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. cumsum
cumsum()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. diagonal
diagonal()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. dump
dump()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. dumps
dumps()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. encode
encode(
encoding='utf-8', errors='strict'
)
Encode the string using the codec registered for encoding. encoding The encoding in which to encode the string. errors The error handling scheme to use for encoding errors. The default is 'strict' meaning that encoding errors raise a UnicodeEncodeError. Other possible values are 'ignore', 'replace' and 'xmlcharrefreplace' as well as any other name registered with codecs.register_error that can handle UnicodeEncodeErrors. endswith
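A short sketch of the error handlers described above:

```python
s = 'café'
assert s.encode() == b'caf\xc3\xa9'                        # default encoding is utf-8
assert s.encode('ascii', 'ignore') == b'caf'               # drop unencodable characters
assert s.encode('ascii', 'xmlcharrefreplace') == b'caf&#233;'
```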
endswith()
S.endswith(suffix[, start[, end]]) -> bool Return True if S ends with the specified suffix, False otherwise. With optional start, test S beginning at that position. With optional end, stop comparing S at that position. suffix can also be a tuple of strings to try. expandtabs
expandtabs(
tabsize=8
)
Return a copy where all tab characters are expanded using spaces. If tabsize is not given, a tab size of 8 characters is assumed. fill
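For illustration:

```python
assert 'a\tb'.expandtabs(4) == 'a   b'   # the tab advances to the next stop at column 4
assert '\tx'.expandtabs(2) == '  x'
```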
fill()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. find
find()
S.find(sub[, start[, end]]) -> int Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation. Return -1 on failure. flatten
flatten()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. format
format()
S.format(*args, **kwargs) -> str Return a formatted version of S, using substitutions from args and kwargs. The substitutions are identified by braces ('{' and '}'). format_map
format_map()
S.format_map(mapping) -> str Return a formatted version of S, using substitutions from mapping. The substitutions are identified by braces ('{' and '}'). getfield
getfield()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. index
index()
S.index(sub[, start[, end]]) -> int Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation. Raises ValueError when the substring is not found. isalnum
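A quick sketch of the difference between find and index on failure:

```python
assert 'banana'.find('na') == 2
assert 'banana'.find('zz') == -1   # find signals failure with -1
try:
    'banana'.index('zz')           # index raises instead
    assert False
except ValueError:
    pass
```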
isalnum()
Return True if the string is an alpha-numeric string, False otherwise. A string is alpha-numeric if all characters in the string are alpha-numeric and there is at least one character in the string. isalpha
isalpha()
Return True if the string is an alphabetic string, False otherwise. A string is alphabetic if all characters in the string are alphabetic and there is at least one character in the string. isascii
isascii()
Return True if all characters in the string are ASCII, False otherwise. ASCII characters have code points in the range U+0000-U+007F. Empty string is ASCII too. isdecimal
isdecimal()
Return True if the string is a decimal string, False otherwise. A string is a decimal string if all characters in the string are decimal and there is at least one character in the string. isdigit
isdigit()
Return True if the string is a digit string, False otherwise. A string is a digit string if all characters in the string are digits and there is at least one character in the string. isidentifier
isidentifier()
Return True if the string is a valid Python identifier, False otherwise. Use keyword.iskeyword() to test for reserved identifiers such as "def" and "class". islower
islower()
Return True if the string is a lowercase string, False otherwise. A string is lowercase if all cased characters in the string are lowercase and there is at least one cased character in the string. isnumeric
isnumeric()
Return True if the string is a numeric string, False otherwise. A string is numeric if all characters in the string are numeric and there is at least one character in the string. isprintable
isprintable()
Return True if the string is printable, False otherwise. A string is printable if all of its characters are considered printable in repr() or if it is empty. isspace
isspace()
Return True if the string is a whitespace string, False otherwise. A string is whitespace if all characters in the string are whitespace and there is at least one character in the string. istitle
istitle()
Return True if the string is a title-cased string, False otherwise. In a title-cased string, upper- and title-case characters may only follow uncased characters and lowercase characters only cased ones. isupper
isupper()
Return True if the string is an uppercase string, False otherwise. A string is uppercase if all cased characters in the string are uppercase and there is at least one cased character in the string. item
item()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. itemset
itemset()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. join
join(
iterable, /
)
Concatenate any number of strings. The string whose method is called is inserted in between each given string. The result is returned as a new string. Example: '.'.join(['ab', 'pq', 'rs']) -> 'ab.pq.rs' ljust
ljust(
width, fillchar, /
)
Return a left-justified string of length width. Padding is done using the specified fill character (default is a space). lower
lower()
Return a copy of the string converted to lowercase. lstrip
lstrip(
chars, /
)
Return a copy of the string with leading whitespace removed. If chars is given and not None, remove characters in chars instead. maketrans
maketrans(
x, y, z, /
)
Return a translation table usable for str.translate(). If there is only one argument, it must be a dictionary mapping Unicode ordinals (integers) or characters to Unicode ordinals, strings or None. Character keys will be then converted to ordinals. If there are two arguments, they must be strings of equal length, and in the resulting dictionary, each character in x will be mapped to the character at the same position in y. If there is a third argument, it must be a string, whose characters will be mapped to None in the result. max
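A minimal sketch of the maketrans argument forms described above:

```python
# Two-string form: each char of x maps to the char at the same position in y;
# the third argument lists characters to delete.
table = str.maketrans('ab', 'xy', 'z')
assert 'abcz'.translate(table) == 'xyc'
# One-argument form: a mapping of characters (or ordinals).
assert 'aaa'.translate(str.maketrans({'a': '1'})) == '111'
```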
max()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. mean
mean()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. min
min()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. newbyteorder
newbyteorder()
newbyteorder(new_order='S') Return a new dtype with a different byte order. Changes are also made in all fields and sub-arrays of the data type. The new_order code can be any of the following: 'S' - swap dtype from current to opposite endian; '<', 'L' - little endian; '>', 'B' - big endian; '=', 'N' - native order; '|', 'I' - ignore (no change to byte order). Parameters new_order : str, optional Byte order to force; a value from the byte order specifications above. The default value ('S') results in swapping the current byte order. The code does a case-insensitive check on the first letter of new_order for the alternatives above. For example, any of 'B' or 'b' or 'biggish' are valid to specify big-endian. Returns new_dtype : dtype New dtype object with the given change to the byte order. nonzero
nonzero()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. partition
partition(
sep, /
)
Partition the string into three parts using the given separator. This will search for the separator in the string. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it. If the separator is not found, returns a 3-tuple containing the original string and two empty strings. prod
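For illustration:

```python
assert 'key=value'.partition('=') == ('key', '=', 'value')
assert 'no-sep'.partition('=') == ('no-sep', '', '')   # separator absent
```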
prod()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. ptp
ptp()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. put
put()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. ravel
ravel()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. repeat
repeat()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. replace
replace(
old, new, count, /
)
Return a copy with all occurrences of substring old replaced by new. count Maximum number of occurrences to replace. -1 (the default value) means replace all occurrences. If the optional argument count is given, only the first count occurrences are replaced. reshape
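For illustration:

```python
assert 'aaaa'.replace('a', 'b', 2) == 'bbaa'   # only the first count occurrences
assert 'aaaa'.replace('a', 'b') == 'bbbb'      # count defaults to -1: replace all
```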
reshape()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. resize
resize()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. rfind
rfind()
S.rfind(sub[, start[, end]]) -> int Return the highest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation. Return -1 on failure. rindex
rindex()
S.rindex(sub[, start[, end]]) -> int Return the highest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation. Raises ValueError when the substring is not found. rjust
rjust(
width, fillchar, /
)
Return a right-justified string of length width. Padding is done using the specified fill character (default is a space). round
round()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. rpartition
rpartition(
sep, /
)
Partition the string into three parts using the given separator. This will search for the separator in the string, starting at the end. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it. If the separator is not found, returns a 3-tuple containing two empty strings and the original string. rsplit
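For illustration:

```python
assert 'a/b/c'.rpartition('/') == ('a/b', '/', 'c')   # splits at the last separator
assert 'abc'.rpartition('/') == ('', '', 'abc')       # not found: original string lands last
```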
rsplit(
sep=None, maxsplit=-1
)
Return a list of the words in the string, using sep as the delimiter string. sep The delimiter according to which to split the string. None (the default value) means split according to any whitespace, and discard empty strings from the result. maxsplit Maximum number of splits to do. -1 (the default value) means no limit. Splits are done starting at the end of the string and working to the front. rstrip
rstrip(
chars, /
)
Return a copy of the string with trailing whitespace removed. If chars is given and not None, remove characters in chars instead. searchsorted
searchsorted()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. setfield
setfield()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. setflags
setflags()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. sort
sort()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. split
split(
sep=None, maxsplit=-1
)
Return a list of the words in the string, using sep as the delimiter string. sep The delimiter according to which to split the string. None (the default value) means split according to any whitespace, and discard empty strings from the result. maxsplit Maximum number of splits to do. -1 (the default value) means no limit. splitlines
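A quick sketch contrasting split with rsplit:

```python
assert 'a b  c'.split() == ['a', 'b', 'c']     # sep=None: any run of whitespace
assert 'a,b,c'.split(',', 1) == ['a', 'b,c']   # maxsplit counts from the left
assert 'a,b,c'.rsplit(',', 1) == ['a,b', 'c']  # rsplit counts from the right
```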
splitlines(
keepends=False
)
Return a list of the lines in the string, breaking at line boundaries. Line breaks are not included in the resulting list unless keepends is given and true. squeeze
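For illustration:

```python
assert 'a\nb\r\nc'.splitlines() == ['a', 'b', 'c']
assert 'a\nb'.splitlines(keepends=True) == ['a\n', 'b']
```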
squeeze()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. startswith
startswith()
S.startswith(prefix[, start[, end]]) -> bool Return True if S starts with the specified prefix, False otherwise. With optional start, test S beginning at that position. With optional end, stop comparing S at that position. prefix can also be a tuple of strings to try. std
std()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. strip
strip(
chars, /
)
Return a copy of the string with leading and trailing whitespace removed. If chars is given and not None, remove characters in chars instead. sum
sum()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. swapaxes
swapaxes()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. swapcase
swapcase()
Convert uppercase characters to lowercase and lowercase characters to uppercase. take
take()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. title
title()
Return a version of the string where each word is titlecased. More specifically, words start with uppercased characters and all remaining cased characters have lower case. tobytes
tobytes()
tofile
tofile()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. tolist
tolist()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. tostring
tostring()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. trace
trace()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. translate
translate(
table, /
)
Replace each character in the string using the given translation table. table Translation table, which must be a mapping of Unicode ordinals to Unicode ordinals, strings, or None. The table must implement lookup/indexing via getitem, for instance a dictionary or list. If this operation raises LookupError, the character is left untouched. Characters mapped to None are deleted. transpose
transpose()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. upper
upper()
Return a copy of the string converted to uppercase. var
var()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. view
view()
Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. zfill
zfill(
width, /
)
Pad a numeric string with zeros on the left, to fill a field of the given width. The string is never truncated. __abs__
__abs__()
abs(self) __add__
__add__(
value, /
)
Return self+value. __and__
__and__(
value, /
)
Return self&value. __bool__
__bool__()
self != 0 __contains__
__contains__(
key, /
)
Return key in self. __eq__
__eq__(
value, /
)
Return self==value. __floordiv__
__floordiv__(
value, /
)
Return self//value. __ge__
__ge__(
value, /
)
Return self>=value. __getitem__
__getitem__(
key, /
)
Return self[key]. __gt__
__gt__(
value, /
)
Return self>value. __invert__
__invert__()
~self __iter__
__iter__()
Implement iter(self). __le__
__le__(
value, /
)
Return self<=value. __len__
__len__()
Return len(self). __lt__
__lt__(
value, /
)
Return self<value. __mod__
__mod__(
value, /
)
Return self%value. __mul__
__mul__(
value, /
)
Return self*value. __ne__
__ne__(
value, /
)
Return self!=value. __neg__
__neg__()
-self __or__
__or__(
value, /
)
Return self|value. __pos__
__pos__()
+self __pow__
__pow__(
value, mod, /
)
Return pow(self, value, mod). __radd__
__radd__(
value, /
)
Return value+self. __rand__
__rand__(
value, /
)
Return value&self. __rfloordiv__
__rfloordiv__(
value, /
)
Return value//self. __rmod__
__rmod__(
value, /
)
Return value%self. __rmul__
__rmul__(
value, /
)
Return value*self. __ror__
__ror__(
value, /
)
Return value|self. __rpow__
__rpow__(
value, mod, /
)
Return pow(value, self, mod). __rsub__
__rsub__(
value, /
)
Return value-self. __rtruediv__
__rtruediv__(
value, /
)
Return value/self. __rxor__
__rxor__(
value, /
)
Return value^self. __sub__
__sub__(
value, /
)
Return self-value. __truediv__
__truediv__(
value, /
)
Return self/value. __xor__
__xor__(
value, /
)
Return self^value.
Class Variables
T
base
data
dtype
flags
flat
imag
itemsize
nbytes
ndim
real
shape
size
strides | tensorflow.experimental.numpy.unicode_ |
tf.experimental.numpy.vander TensorFlow variant of NumPy's vander.
tf.experimental.numpy.vander(
x, N=None, increasing=False
)
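As a quick illustration of the Vandermonde layout (a plain-NumPy sketch; tf.experimental.numpy.vander follows the same semantics):

```python
import numpy as np

x = np.array([1, 2, 3])
# With increasing=False (the default), column j holds x**(N-1-j),
# so the last column is all ones and powers decrease left to right.
print(np.vander(x, N=3))
# [[1 1 1]
#  [4 2 1]
#  [9 3 1]]
```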
See the NumPy documentation for numpy.vander. | tensorflow.experimental.numpy.vander |
tf.experimental.numpy.var TensorFlow variant of NumPy's var.
tf.experimental.numpy.var(
a, axis=None, dtype=None, out=None, ddof=0, keepdims=None
)
See the NumPy documentation for numpy.var. | tensorflow.experimental.numpy.var |
tf.experimental.numpy.vdot TensorFlow variant of NumPy's vdot.
tf.experimental.numpy.vdot(
a, b
)
See the NumPy documentation for numpy.vdot. | tensorflow.experimental.numpy.vdot |
tf.experimental.numpy.vsplit TensorFlow variant of NumPy's vsplit.
tf.experimental.numpy.vsplit(
ary, indices_or_sections
)
See the NumPy documentation for numpy.vsplit. | tensorflow.experimental.numpy.vsplit |
tf.experimental.numpy.vstack TensorFlow variant of NumPy's vstack.
tf.experimental.numpy.vstack(
tup
)
See the NumPy documentation for numpy.vstack. | tensorflow.experimental.numpy.vstack |
tf.experimental.numpy.where TensorFlow variant of NumPy's where.
tf.experimental.numpy.where(
condition, x=None, y=None
)
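The two supported call forms mirror NumPy's; passing exactly one of x or y raises ValueError. A plain-NumPy sketch of both forms:

```python
import numpy as np

cond = np.array([True, False, True])
# Three-argument form: elementwise select from x where cond is True, else y.
print(np.where(cond, np.array([1, 2, 3]), np.array([10, 20, 30])))  # [ 1 20  3]
# One-argument form: the indices where cond is True.
print(np.where(cond))  # (array([0, 2]),)
```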
Raises ValueError if exactly one of x or y is not None. See the NumPy documentation for numpy.where. | tensorflow.experimental.numpy.where |
tf.experimental.numpy.zeros TensorFlow variant of NumPy's zeros.
tf.experimental.numpy.zeros(
shape, dtype=float
)
See the NumPy documentation for numpy.zeros. | tensorflow.experimental.numpy.zeros |
tf.experimental.numpy.zeros_like TensorFlow variant of NumPy's zeros_like.
tf.experimental.numpy.zeros_like(
a, dtype=None
)
Unsupported arguments: order, subok, shape. See the NumPy documentation for numpy.zeros_like. | tensorflow.experimental.numpy.zeros_like |
tf.experimental.Optional Represents a value that may or may not be present. View aliases Main aliases
tf.data.experimental.Optional Compat aliases for migration See Migration guide for more details. tf.compat.v1.data.experimental.Optional, tf.compat.v1.experimental.Optional A tf.experimental.Optional can represent the result of an operation that may fail as a value, rather than raising an exception and halting execution. For example, tf.data.Iterator.get_next_as_optional() returns a tf.experimental.Optional that either contains the next element of an iterator if one exists, or an "empty" value that indicates the end of the sequence has been reached. tf.experimental.Optional can only be used with values that are convertible to tf.Tensor or tf.CompositeTensor. One can create a tf.experimental.Optional from a value using the from_value() method:
optional = tf.experimental.Optional.from_value(42)
print(optional.has_value())
tf.Tensor(True, shape=(), dtype=bool)
print(optional.get_value())
tf.Tensor(42, shape=(), dtype=int32)
or without a value using the empty() method:
optional = tf.experimental.Optional.empty(
tf.TensorSpec(shape=(), dtype=tf.int32, name=None))
print(optional.has_value())
tf.Tensor(False, shape=(), dtype=bool)
Attributes
element_spec The type specification of an element of this optional.
optional = tf.experimental.Optional.from_value(42)
print(optional.element_spec)
tf.TensorSpec(shape=(), dtype=tf.int32, name=None)
Methods empty View source
@staticmethod
empty(
element_spec
)
Returns an Optional that has no value.
Note: This method takes an argument that defines the structure of the value that would be contained in the returned Optional if it had a value.
optional = tf.experimental.Optional.empty(
tf.TensorSpec(shape=(), dtype=tf.int32, name=None))
print(optional.has_value())
tf.Tensor(False, shape=(), dtype=bool)
Args
element_spec A nested structure of tf.TypeSpec objects matching the structure of an element of this optional.
Returns A tf.experimental.Optional with no value.
from_value View source
@staticmethod
from_value(
value
)
Returns a tf.experimental.Optional that wraps the given value.
optional = tf.experimental.Optional.from_value(42)
print(optional.has_value())
tf.Tensor(True, shape=(), dtype=bool)
print(optional.get_value())
tf.Tensor(42, shape=(), dtype=int32)
Args
value A value to wrap. The value must be convertible to tf.Tensor or tf.CompositeTensor.
Returns A tf.experimental.Optional that wraps value.
get_value View source
@abc.abstractmethod
get_value(
name=None
)
Returns the value wrapped by this optional. If this optional does not have a value (i.e. self.has_value() evaluates to False), this operation will raise tf.errors.InvalidArgumentError at runtime.
optional = tf.experimental.Optional.from_value(42)
print(optional.get_value())
tf.Tensor(42, shape=(), dtype=int32)
Args
name (Optional.) A name for the created operation.
Returns The wrapped value.
has_value View source
@abc.abstractmethod
has_value(
name=None
)
Returns a tensor that evaluates to True if this optional has a value.
optional = tf.experimental.Optional.from_value(42)
print(optional.has_value())
tf.Tensor(True, shape=(), dtype=bool)
Args
name (Optional.) A name for the created operation.
Returns A scalar tf.Tensor of type tf.bool. | tensorflow.experimental.optional |
tf.experimental.register_filesystem_plugin Loads a TensorFlow FileSystem plugin. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.experimental.register_filesystem_plugin
tf.experimental.register_filesystem_plugin(
plugin_location
)
Args
plugin_location Path to the plugin. Relative or absolute filesystem plugin path to a dynamic library file.
Returns None
Raises
OSError When the file to be loaded is not found.
RuntimeError when unable to load the library. | tensorflow.experimental.register_filesystem_plugin |
Module: tf.experimental.tensorrt Public API for tf.experimental.tensorrt namespace. Classes class ConversionParams: Parameters that are used for TF-TRT conversion. class Converter: An offline converter for TF-TRT transformation for TF 2.0 SavedModels. | tensorflow.experimental.tensorrt |
tf.experimental.tensorrt.ConversionParams Parameters that are used for TF-TRT conversion.
tf.experimental.tensorrt.ConversionParams(
rewriter_config_template=None,
max_workspace_size_bytes=DEFAULT_TRT_MAX_WORKSPACE_SIZE_BYTES,
precision_mode=TrtPrecisionMode.FP32, minimum_segment_size=3,
is_dynamic_op=True, maximum_cached_engines=1, use_calibration=True,
max_batch_size=1, allow_build_at_runtime=True
)
Fields:
rewriter_config_template: a template RewriterConfig proto used to create a TRT-enabled RewriterConfig. If None, it will use a default one.
max_workspace_size_bytes: the maximum GPU temporary memory which the TRT engine can use at execution time. This corresponds to the 'workspaceSize' parameter of nvinfer1::IBuilder::setMaxWorkspaceSize().
precision_mode: one of the strings in TrtPrecisionMode.supported_precision_modes().
minimum_segment_size: the minimum number of nodes required for a subgraph to be replaced by TRTEngineOp.
is_dynamic_op: whether to generate dynamic TRT ops which will build the TRT network and engine at run time. Since TensorRT versions < 6.0 do not support dynamic dimensions other than the batch dimension, this option must be enabled when the TensorFlow graph has a non-batch dimension of dynamic size. This option should be set to True in TF 2.0.
maximum_cached_engines: max number of cached TRT engines for dynamic TRT ops. Created TRT engines for a dynamic dimension are cached. This is the maximum number of engines that can be cached. If the number of cached engines is already at max but none of them supports the input shapes, the TRTEngineOp will fall back to run the original TF subgraph that corresponds to the TRTEngineOp.
use_calibration: this argument is ignored if precision_mode is not INT8. If set to True, a calibration graph will be created to calibrate the missing ranges. The calibration graph must be converted to an inference graph by running calibration with calibrate(). If set to False, quantization nodes will be expected for every tensor in the graph (excluding those which will be fused). If a range is missing, an error will occur. Please note that accuracy may be negatively affected if there is a mismatch between which tensors TRT quantizes and which tensors were trained with fake quantization.
max_batch_size: max size for the input batch. This parameter is only effective when use_implicit_batch is true.
allow_build_at_runtime: whether to build TensorRT engines during runtime. If no TensorRT engine can be found in cache that can handle the given inputs during runtime, then a new TensorRT engine is built at runtime if allow_build_at_runtime=True, and otherwise native TF is used. This argument is only effective if is_dynamic_op=True.
Attributes
rewriter_config_template
max_workspace_size_bytes
precision_mode
minimum_segment_size
is_dynamic_op
maximum_cached_engines
use_calibration
max_batch_size
allow_build_at_runtime | tensorflow.experimental.tensorrt.conversionparams |
tf.experimental.tensorrt.Converter An offline converter for TF-TRT transformation for TF 2.0 SavedModels.
tf.experimental.tensorrt.Converter(
input_saved_model_dir=None, input_saved_model_tags=None,
input_saved_model_signature_key=None, conversion_params=None
)
Currently this is not available on Windows platform. Note that in V2, is_dynamic_op=False is not supported, meaning TRT engines will be built only when the corresponding TRTEngineOp is executed. But we still provide a way to avoid the cost of building TRT engines during inference (see more below). There are several ways to run the conversion:
FP32/FP16 precision params = tf.experimental.tensorrt.ConversionParams(
precision_mode='FP16')
converter = tf.experimental.tensorrt.Converter(
input_saved_model_dir="my_dir", conversion_params=params)
converter.convert()
converter.save(output_saved_model_dir)
In this case, no TRT engines will be built or saved in the converted SavedModel. But if input data is available during conversion, we can still build and save the TRT engines to reduce the cost during inference (see option 2 below).
FP32/FP16 precision with pre-built engines params = tf.experimental.tensorrt.ConversionParams(
precision_mode='FP16',
# Set this to a large enough number so it can cache all the engines.
maximum_cached_engines=16)
converter = tf.experimental.tensorrt.Converter(
input_saved_model_dir="my_dir", conversion_params=params)
converter.convert()
# Define a generator function that yields input data, and use it to execute
# the graph to build TRT engines.
# With TensorRT 5.1, different engines will be built (and saved later) for
# different input shapes to the TRTEngineOp.
def my_input_fn():
for _ in range(num_runs):
inp1, inp2 = ...
yield inp1, inp2
converter.build(input_fn=my_input_fn) # Generate corresponding TRT engines
converter.save(output_saved_model_dir) # Generated engines will be saved.
In this way, one engine will be built/saved for each unique input shape of the TRTEngineOp. This is good for applications that cannot afford building engines during inference but have access to input data that is similar to the one used in production (for example, data that has the same input shapes). Also, the generated TRT engines are platform dependent, so we need to run build() in an environment that is similar to production (e.g. with the same type of GPU).
INT8 precision and calibration with pre-built engines params = tf.experimental.tensorrt.ConversionParams(
precision_mode='INT8',
# Currently only one INT8 engine is supported in this mode.
maximum_cached_engines=1,
use_calibration=True)
converter = tf.experimental.tensorrt.Converter(
input_saved_model_dir="my_dir", conversion_params=params)
# Define a generator function that yields input data, and run INT8
# calibration with the data. All input data should have the same shape.
# At the end of convert(), the calibration stats (e.g. range information)
# will be saved and can be used to generate more TRT engines with different
# shapes. Also, one TRT engine will be generated (with the same shape as
# the calibration data) for saving later.
def my_calibration_input_fn():
for _ in range(num_runs):
inp1, inp2 = ...
yield inp1, inp2
converter.convert(calibration_input_fn=my_calibration_input_fn)
# (Optional) Generate more TRT engines offline (same as the previous
# option), to avoid the cost of generating them during inference.
def my_input_fn():
for _ in range(num_runs):
inp1, inp2 = ...
yield inp1, inp2
converter.build(input_fn=my_input_fn)
# Save the converted SavedModel along with the generated engines.
converter.save(output_saved_model_dir)
Args
input_saved_model_dir the directory to load the SavedModel which contains the input graph to transform. Used only when input_graph_def is None.
input_saved_model_tags list of tags to load the SavedModel.
input_saved_model_signature_key the key of the signature to optimize the graph for.
conversion_params a TrtConversionParams instance.
Raises
ValueError if the combination of the parameters is invalid. Methods build View source
build(
input_fn
)
Run inference with converted graph in order to build TensorRT engines.
Args
input_fn a generator function that yields input data as a list or tuple, which will be used to execute the converted signature to generate TRT engines. Example:
def input_fn():
  # Let's assume a network with 2 input tensors. We generate 3 sets of
  # dummy input data:
  input_shapes = [[(1, 16), (2, 16)],  # 1st input list
                  [(2, 32), (4, 32)],  # 2nd list of two tensors
                  [(4, 32), (8, 32)]]  # 3rd input list
  for shapes in input_shapes:
    # return a list of input tensors
    yield [np.zeros(x).astype(np.float32) for x in shapes]
Raises
NotImplementedError if build() has already been called.
RuntimeError if the input_fn is None. convert View source
convert(
calibration_input_fn=None
)
Convert the input SavedModel in 2.0 format.
Args
calibration_input_fn a generator function that yields input data as a list or tuple, which will be used to execute the converted signature for calibration. All the returned input data should have the same shape. Example: def input_fn(): yield input1, input2, input3
Raises
ValueError if the input combination is invalid.
Returns The TF-TRT converted Function.
save View source
save(
output_saved_model_dir
)
Save the converted SavedModel.
Args
output_saved_model_dir directory to save the converted SavedModel. | tensorflow.experimental.tensorrt.converter |
tf.extract_volume_patches Extract patches from input and put them in the "depth" output dimension. 3D extension of extract_image_patches. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.extract_volume_patches
tf.extract_volume_patches(
input, ksizes, strides, padding, name=None
)
Args
input A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. 5-D Tensor with shape [batch, in_planes, in_rows, in_cols, depth].
ksizes A list of ints that has length >= 5. The size of the sliding window for each dimension of input.
strides A list of ints that has length >= 5. 1-D of length 5. How far the centers of two consecutive patches are in input. Must be: [1, stride_planes, stride_rows, stride_cols, 1].
padding A string from: "SAME", "VALID". The type of padding algorithm to use. The size-related attributes are specified as follows: ksizes = [1, ksize_planes, ksize_rows, ksize_cols, 1]
strides = [1, stride_planes, strides_rows, strides_cols, 1]
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | tensorflow.extract_volume_patches |
tf.eye View source on GitHub Construct an identity matrix, or a batch of matrices. View aliases Main aliases
tf.linalg.eye Compat aliases for migration See Migration guide for more details. tf.compat.v1.eye, tf.compat.v1.linalg.eye
tf.eye(
num_rows, num_columns=None, batch_shape=None, dtype=tf.dtypes.float32, name=None
)
See also tf.ones, tf.zeros, tf.fill, tf.one_hot. # Construct one identity matrix.
tf.eye(2)
==> [[1., 0.],
[0., 1.]]
# Construct a batch of 3 identity matrices, each 2 x 2.
# batch_identity[i, :, :] is a 2 x 2 identity matrix, i = 0, 1, 2.
batch_identity = tf.eye(2, batch_shape=[3])
# Construct one 2 x 3 "identity" matrix
tf.eye(2, num_columns=3)
==> [[ 1., 0., 0.],
[ 0., 1., 0.]]
Args
num_rows Non-negative int32 scalar Tensor giving the number of rows in each batch matrix.
num_columns Optional non-negative int32 scalar Tensor giving the number of columns in each batch matrix. Defaults to num_rows.
batch_shape A list or tuple of Python integers or a 1-D int32 Tensor. If provided, the returned Tensor will have leading batch dimensions of this shape.
dtype The type of an element in the resulting Tensor
name A name for this Op. Defaults to "eye".
Returns A Tensor of shape batch_shape + [num_rows, num_columns] | tensorflow.eye |
Module: tf.feature_column Public API for tf.feature_column namespace. Functions bucketized_column(...): Represents discretized dense input bucketed by boundaries. categorical_column_with_hash_bucket(...): Represents sparse feature where ids are set by hashing. categorical_column_with_identity(...): A CategoricalColumn that returns identity values. categorical_column_with_vocabulary_file(...): A CategoricalColumn with a vocabulary file. categorical_column_with_vocabulary_list(...): A CategoricalColumn with in-memory vocabulary. crossed_column(...): Returns a column for performing crosses of categorical features. embedding_column(...): DenseColumn that converts from sparse, categorical input. indicator_column(...): Represents multi-hot representation of given categorical column. make_parse_example_spec(...): Creates parsing spec dictionary from input feature_columns. numeric_column(...): Represents real valued or numerical features. sequence_categorical_column_with_hash_bucket(...): A sequence of categorical terms where ids are set by hashing. sequence_categorical_column_with_identity(...): Returns a feature column that represents sequences of integers. sequence_categorical_column_with_vocabulary_file(...): A sequence of categorical terms where ids use a vocabulary file. sequence_categorical_column_with_vocabulary_list(...): A sequence of categorical terms where ids use an in-memory list. sequence_numeric_column(...): Returns a feature column that represents sequences of numeric data. shared_embeddings(...): List of dense columns that convert from sparse, categorical input. weighted_categorical_column(...): Applies weight values to a CategoricalColumn. | tensorflow.feature_column |
tf.feature_column.bucketized_column View source on GitHub Represents discretized dense input bucketed by boundaries. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.feature_column.bucketized_column
tf.feature_column.bucketized_column(
source_column, boundaries
)
Buckets include the left boundary, and exclude the right boundary. Namely, boundaries=[0., 1., 2.] generates buckets (-inf, 0.), [0., 1.), [1., 2.), and [2., +inf). For example, if the inputs are boundaries = [0, 10, 100]
input tensor = [[-5, 10000]
[150, 10]
[5, 100]]
then the output will be output = [[0, 3]
[3, 2]
[1, 3]]
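The mapping above can be reproduced with plain NumPy, whose np.digitize uses the same left-inclusive boundary convention (a sketch of the semantics, not how the column is implemented):

```python
import numpy as np

boundaries = [0, 10, 100]
inputs = np.array([[-5, 10000], [150, 10], [5, 100]])
# Buckets include the left boundary and exclude the right one,
# so -5 falls in bucket 0 ((-inf, 0)) and 10 falls in bucket 2 ([10, 100)).
print(np.digitize(inputs, boundaries))
# [[0 3]
#  [3 2]
#  [1 3]]
```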
Example: price = tf.feature_column.numeric_column('price')
bucketized_price = tf.feature_column.bucketized_column(
price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(
..., features=tf.feature_column.make_parse_example_spec(columns))
dense_tensor = tf.keras.layers.DenseFeatures(columns)(features)
A bucketized_column can also be crossed with another categorical column using crossed_column: price = tf.feature_column.numeric_column('price')
# bucketized_column converts numerical feature to a categorical one.
bucketized_price = tf.feature_column.bucketized_column(
price, boundaries=[...])
# 'keywords' is a string feature.
price_x_keywords = tf.feature_column.crossed_column(
[bucketized_price, 'keywords'], 50K)
columns = [price_x_keywords, ...]
features = tf.io.parse_example(
..., features=tf.feature_column.make_parse_example_spec(columns))
dense_tensor = tf.keras.layers.DenseFeatures(columns)(features)
linear_model = tf.keras.experimental.LinearModel(units=...)(dense_tensor)
Args
source_column A one-dimensional dense column which is generated with numeric_column.
boundaries A sorted list or tuple of floats specifying the boundaries.
Returns A BucketizedColumn.
Raises
ValueError If source_column is not a numeric column, or if it is not one-dimensional.
ValueError If boundaries is not a sorted list or tuple. | tensorflow.feature_column.bucketized_column |
tf.feature_column.categorical_column_with_hash_bucket View source on GitHub Represents sparse feature where ids are set by hashing. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.feature_column.categorical_column_with_hash_bucket
tf.feature_column.categorical_column_with_hash_bucket(
key, hash_bucket_size, dtype=tf.dtypes.string
)
Use this when your sparse features are in string or integer format, and you want to distribute your inputs into a finite number of buckets by hashing. output_id = Hash(input_feature_string) % bucket_size for string type input. For int type input, the value is converted to its string representation first and then hashed by the same formula. For input dictionary features, features[key] is either Tensor or SparseTensor. If Tensor, missing values can be represented by -1 for int and '' for string, which will be dropped by this feature column. Example: keywords = categorical_column_with_hash_bucket("keywords", 10K)
columns = [keywords, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)
# or
keywords_embedded = embedding_column(keywords, 16)
columns = [keywords_embedded, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)
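The Hash(input) % bucket_size scheme can be sketched as follows. Note hash_bucket and its MD5-based hash are illustrative stand-ins: TensorFlow uses its own Fingerprint64 hash internally, so the actual bucket IDs differ, but the bucketing scheme is the same.

```python
import hashlib

def hash_bucket(value, bucket_size):
    # Integer inputs are converted to their string representation first,
    # then hashed by the same formula as strings.
    text = str(value)
    h = int(hashlib.md5(text.encode("utf-8")).hexdigest(), 16)
    return h % bucket_size

bucket = hash_bucket("sports", 10000)
assert 0 <= bucket < 10000
# The same input always lands in the same bucket.
assert bucket == hash_bucket("sports", 10000)
```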
Args
key A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature Tensor objects, and feature columns.
hash_bucket_size An int > 1. The number of buckets.
dtype The type of features. Only string and integer types are supported.
Returns A HashedCategoricalColumn.
Raises
ValueError hash_bucket_size is not greater than 1.
ValueError dtype is neither string nor integer. | tensorflow.feature_column.categorical_column_with_hash_bucket |
tf.feature_column.categorical_column_with_identity View source on GitHub A CategoricalColumn that returns identity values. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.feature_column.categorical_column_with_identity
tf.feature_column.categorical_column_with_identity(
key, num_buckets, default_value=None
)
Use this when your inputs are integers in the range [0, num_buckets), and you want to use the input value itself as the categorical ID. Values outside this range will result in default_value if specified, otherwise it will fail. Typically, this is used for contiguous ranges of integer indexes, but it doesn't have to be. This might be inefficient, however, if many of IDs are unused. Consider categorical_column_with_hash_bucket in that case. For input dictionary features, features[key] is either Tensor or SparseTensor. If Tensor, missing values can be represented by -1 for int and '' for string, which will be dropped by this feature column. In the following examples, each input in the range [0, 1000000) is assigned the same value. All other inputs are assigned default_value 0. Note that a literal 0 in inputs will result in the same default ID. Linear model: video_id = categorical_column_with_identity(
key='video_id', num_buckets=1000000, default_value=0)
columns = [video_id, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction, _, _ = linear_model(features, columns)
Embedding for a DNN model: columns = [embedding_column(video_id, 9),...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)
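The identity lookup with a default for out-of-range values can be sketched in plain NumPy (identity_lookup is a hypothetical helper mirroring the column's semantics, not TensorFlow code):

```python
import numpy as np

def identity_lookup(values, num_buckets, default_value=0):
    # Values in [0, num_buckets) are their own IDs; everything else falls
    # back to default_value (without a default, such values would fail).
    values = np.asarray(values)
    in_range = (values >= 0) & (values < num_buckets)
    return np.where(in_range, values, default_value)

print(identity_lookup([3, 1000000, -1], num_buckets=1000000))  # [3 0 0]
```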
Args
key A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature Tensor objects, and feature columns.
num_buckets Range of inputs and outputs is [0, num_buckets).
default_value If set, values outside of range [0, num_buckets) will be replaced with this value. If not set, values >= num_buckets will cause a failure while values < 0 will be dropped.
Returns A CategoricalColumn that returns identity values.
Raises
ValueError if num_buckets is less than one.
ValueError if default_value is not in range [0, num_buckets). | tensorflow.feature_column.categorical_column_with_identity |
tf.feature_column.categorical_column_with_vocabulary_file View source on GitHub A CategoricalColumn with a vocabulary file.
tf.feature_column.categorical_column_with_vocabulary_file(
key, vocabulary_file, vocabulary_size=None, dtype=tf.dtypes.string,
default_value=None, num_oov_buckets=0
)
Use this when your inputs are in string or integer format, and you have a vocabulary file that maps each value to an integer ID. By default, out-of-vocabulary values are ignored. Use either (but not both) of num_oov_buckets and default_value to specify how to include out-of-vocabulary values. For input dictionary features, features[key] is either Tensor or SparseTensor. If Tensor, missing values can be represented by -1 for int and '' for string, which will be dropped by this feature column. Example with num_oov_buckets: File '/us/states.txt' contains 50 lines, each with a 2-character U.S. state abbreviation. All inputs with values in that file are assigned an ID 0-49, corresponding to its line number. All other values are hashed and assigned an ID 50-54. states = categorical_column_with_vocabulary_file(
key='states', vocabulary_file='/us/states.txt', vocabulary_size=50,
num_oov_buckets=5)
columns = [states, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)
Example with default_value: File '/us/states.txt' contains 51 lines - the first line is 'XX', and the other 50 each have a 2-character U.S. state abbreviation. Both a literal 'XX' in input, and other values missing from the file, will be assigned ID 0. All others are assigned the corresponding line number 1-50. states = categorical_column_with_vocabulary_file(
key='states', vocabulary_file='/us/states.txt', vocabulary_size=51,
default_value=0)
columns = [states, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction, _, _ = linear_model(features, columns)
And to make an embedding with either: columns = [embedding_column(states, 3),...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)
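The lookup semantics can be sketched in plain Python (vocab_lookup and its MD5-based OOV hash are hypothetical stand-ins; TensorFlow's actual OOV hash function differs, but the ID ranges are the same):

```python
import hashlib

def vocab_lookup(value, vocab, num_oov_buckets=0, default_value=-1):
    # In-vocabulary values get their line number (0-based index).
    if value in vocab:
        return vocab.index(value)
    # With OOV buckets, unknown values hash into
    # [len(vocab), len(vocab) + num_oov_buckets); otherwise default_value.
    if num_oov_buckets > 0:
        h = int(hashlib.md5(str(value).encode("utf-8")).hexdigest(), 16)
        return len(vocab) + h % num_oov_buckets
    return default_value

states = ["CA", "NY", "TX"]
print(vocab_lookup("NY", states))                     # 1
print(vocab_lookup("ZZ", states, num_oov_buckets=5))  # an ID in [3, 8)
print(vocab_lookup("ZZ", states))                     # -1
```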
Args
key A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature Tensor objects, and feature columns.
vocabulary_file The vocabulary file name.
vocabulary_size Number of the elements in the vocabulary. This must be no greater than the length of vocabulary_file; if less, later values are ignored. If None, it is set to the length of vocabulary_file.
dtype The type of features. Only string and integer types are supported.
default_value The integer ID value to return for out-of-vocabulary feature values, defaults to -1. This can not be specified with a positive num_oov_buckets.
num_oov_buckets Non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs will be assigned IDs in the range [vocabulary_size, vocabulary_size+num_oov_buckets) based on a hash of the input value. A positive num_oov_buckets can not be specified with default_value.
Returns A CategoricalColumn with a vocabulary file.
Raises
ValueError vocabulary_file is missing or cannot be opened.
ValueError vocabulary_size is missing or < 1.
ValueError num_oov_buckets is a negative integer.
ValueError num_oov_buckets and default_value are both specified.
ValueError dtype is neither string nor integer. | tensorflow.feature_column.categorical_column_with_vocabulary_file |
tf.feature_column.categorical_column_with_vocabulary_list View source on GitHub A CategoricalColumn with in-memory vocabulary. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.feature_column.categorical_column_with_vocabulary_list
tf.feature_column.categorical_column_with_vocabulary_list(
key, vocabulary_list, dtype=None, default_value=-1, num_oov_buckets=0
)
Use this when your inputs are in string or integer format, and you have an in-memory vocabulary mapping each value to an integer ID. By default, out-of-vocabulary values are ignored. Use either (but not both) of num_oov_buckets and default_value to specify how to include out-of-vocabulary values. For input dictionary features, features[key] is either Tensor or SparseTensor. If Tensor, missing values can be represented by -1 for int and '' for string, which will be dropped by this feature column. Example with num_oov_buckets: In the following example, each input in vocabulary_list is assigned an ID 0-3 corresponding to its index (e.g., input 'B' produces output 2). All other inputs are hashed and assigned an ID 4-5. colors = categorical_column_with_vocabulary_list(
key='colors', vocabulary_list=('R', 'G', 'B', 'Y'),
num_oov_buckets=2)
columns = [colors, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction, _, _ = linear_model(features, columns)
Example with default_value: In the following example, each input in vocabulary_list is assigned an ID 0-4 corresponding to its index (e.g., input 'B' produces output 3). All other inputs are assigned default_value 0. colors = categorical_column_with_vocabulary_list(
key='colors', vocabulary_list=('X', 'R', 'G', 'B', 'Y'), default_value=0)
columns = [colors, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction, _, _ = linear_model(features, columns)
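The default_value behavior can be sketched in plain Python; a hypothetical helper, not TF's implementation, using the same vocabulary_list as the example above:

```python
def vocab_list_lookup(value, vocabulary_list, default_value=-1):
    """Map a value to its index in vocabulary_list, or to default_value."""
    try:
        return vocabulary_list.index(value)
    except ValueError:
        return default_value  # out-of-vocabulary

colors = ('X', 'R', 'G', 'B', 'Y')
```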
And to make an embedding with either: columns = [embedding_column(colors, 3),...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)
Args
key A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature Tensor objects, and feature columns.
vocabulary_list An ordered iterable defining the vocabulary. Each feature is mapped to the index of its value (if present) in vocabulary_list. Must be castable to dtype.
dtype The type of features. Only string and integer types are supported. If None, it will be inferred from vocabulary_list.
default_value The integer ID value to return for out-of-vocabulary feature values, defaults to -1. This can not be specified with a positive num_oov_buckets.
num_oov_buckets Non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs will be assigned IDs in the range [len(vocabulary_list), len(vocabulary_list)+num_oov_buckets) based on a hash of the input value. A positive num_oov_buckets can not be specified with default_value.
Returns A CategoricalColumn with in-memory vocabulary.
Raises
ValueError if vocabulary_list is empty, or contains duplicate keys.
ValueError num_oov_buckets is a negative integer.
ValueError num_oov_buckets and default_value are both specified.
ValueError if dtype is not integer or string. | tensorflow.feature_column.categorical_column_with_vocabulary_list |
tf.feature_column.crossed_column View source on GitHub Returns a column for performing crosses of categorical features. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.feature_column.crossed_column
tf.feature_column.crossed_column(
keys, hash_bucket_size, hash_key=None
)
Crossed features will be hashed according to hash_bucket_size. Conceptually, the transformation can be thought of as: Hash(cartesian product of features) % hash_bucket_size For example, if the input features are:
SparseTensor referred by first key: shape = [2, 2]
{
[0, 0]: "a"
[1, 0]: "b"
[1, 1]: "c"
}
SparseTensor referred by second key: shape = [2, 1]
{
[0, 0]: "d"
[1, 0]: "e"
}
then crossed feature will look like: shape = [2, 2]
{
[0, 0]: Hash64("d", Hash64("a")) % hash_bucket_size
[1, 0]: Hash64("e", Hash64("b")) % hash_bucket_size
[1, 1]: Hash64("e", Hash64("c")) % hash_bucket_size
}
Here is an example to create a linear model with crosses of string features:
keywords_x_doc_terms = crossed_column(['keywords', 'doc_terms'], 50000)
columns = [keywords_x_doc_terms, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)
You could also use vocabulary lookup before crossing: keywords = categorical_column_with_vocabulary_file(
  'keywords', '/path/to/vocabulary/file', vocabulary_size=1000)
keywords_x_doc_terms = crossed_column([keywords, 'doc_terms'], 50000)
columns = [keywords_x_doc_terms, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)
If an input feature is of numeric type, you can use categorical_column_with_identity, or bucketized_column, as in the example: # vertical_id is an integer categorical feature.
vertical_id = categorical_column_with_identity('vertical_id', 10000)
price = numeric_column('price')
# bucketized_column converts numerical feature to a categorical one.
bucketized_price = bucketized_column(price, boundaries=[...])
vertical_id_x_price = crossed_column([vertical_id, bucketized_price], 50000)
columns = [vertical_id_x_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)
To use a crossed column in a DNN model, you need to wrap it in an embedding column, as in this example:
vertical_id_x_price = crossed_column([vertical_id, bucketized_price], 50000)
vertical_id_x_price_embedded = embedding_column(vertical_id_x_price, 10)
dense_tensor = input_layer(features, [vertical_id_x_price_embedded, ...])
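The transformation Hash(cartesian product of features) % hash_bucket_size can be sketched in plain Python. This is a conceptual stand-in, not TF's implementation: Python's hash() replaces Hash64/FingerprintCat64, and per-row sparse-tensor crossing is simplified to a flat cartesian product.

```python
from itertools import product

def cross_ids(features_per_key, hash_bucket_size):
    """Hash every combination of one value per key into a bucket ID."""
    ids = []
    for combo in product(*features_per_key):
        h = 0
        for value in combo:
            h = hash((h, value))  # fold each key's value into the running hash
        ids.append(h % hash_bucket_size)
    return ids
```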
Args
keys An iterable identifying the features to be crossed. Each element can be either: string: Will use the corresponding feature which must be of string type.
CategoricalColumn: Will use the transformed tensor produced by this column. Does not support hashed categorical column.
hash_bucket_size An int > 1. The number of buckets.
hash_key Specify the hash_key that will be used by the FingerprintCat64 function to combine the crosses fingerprints on SparseCrossOp (optional).
Returns A CrossedColumn.
Raises
ValueError If len(keys) < 2.
ValueError If any of the keys is neither a string nor CategoricalColumn.
ValueError If any of the keys is HashedCategoricalColumn.
ValueError If hash_bucket_size < 1. | tensorflow.feature_column.crossed_column |
tf.feature_column.embedding_column View source on GitHub DenseColumn that converts from sparse, categorical input. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.feature_column.embedding_column
tf.feature_column.embedding_column(
categorical_column, dimension, combiner='mean', initializer=None,
ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True,
use_safe_embedding_lookup=True
)
Use this when your inputs are sparse, but you want to convert them to a dense representation (e.g., to feed to a DNN). Inputs must be a CategoricalColumn created by any of the categorical_column_* functions. Here is an example of using embedding_column with DNNClassifier: video_id = categorical_column_with_identity(
key='video_id', num_buckets=1000000, default_value=0)
columns = [embedding_column(video_id, 9),...]
estimator = tf.estimator.DNNClassifier(feature_columns=columns, ...)
label_column = ...
def input_fn():
features = tf.io.parse_example(
..., features=make_parse_example_spec(columns + [label_column]))
labels = features.pop(label_column.name)
return features, labels
estimator.train(input_fn=input_fn, steps=100)
Here is an example using embedding_column with model_fn: def model_fn(features, ...):
video_id = categorical_column_with_identity(
key='video_id', num_buckets=1000000, default_value=0)
columns = [embedding_column(video_id, 9),...]
dense_tensor = input_layer(features, columns)
# Form DNN layers, calculate loss, and return EstimatorSpec.
...
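The core of an embedding column — gather rows of a weight table by sparse ID, then combine them — can be sketched in plain Python. This is an illustrative sketch, not TF's implementation; real lookups use tf.nn.embedding_lookup_sparse over trainable variables.

```python
def embed(ids, table, combiner='mean'):
    """Gather table rows by ID and reduce them with the given combiner."""
    rows = [table[i] for i in ids]
    summed = [sum(col) for col in zip(*rows)]  # elementwise sum over rows
    if combiner == 'sum':
        return summed
    return [s / len(rows) for s in summed]  # 'mean' combiner
```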
Args
categorical_column A CategoricalColumn created by a categorical_column_with_* function. This column produces the sparse IDs that are inputs to the embedding lookup.
dimension An integer specifying dimension of the embedding, must be > 0.
combiner A string specifying how to reduce if there are multiple entries in a single row. Currently 'mean', 'sqrtn' and 'sum' are supported, with 'mean' the default. 'sqrtn' often achieves good accuracy, in particular with bag-of-words columns. Each of these can be thought of as an example-level normalization on the column. For more information, see tf.embedding_lookup_sparse.
initializer A variable initializer function to be used in embedding variable initialization. If not specified, defaults to truncated_normal_initializer with mean 0.0 and standard deviation 1/sqrt(dimension).
ckpt_to_load_from String representing checkpoint name/pattern from which to restore column weights. Required if tensor_name_in_ckpt is not None.
tensor_name_in_ckpt Name of the Tensor in ckpt_to_load_from from which to restore the column weights. Required if ckpt_to_load_from is not None.
max_norm If not None, embedding values are l2-normalized to this value.
trainable Whether or not the embedding is trainable. Default is True.
use_safe_embedding_lookup If true, uses safe_embedding_lookup_sparse instead of embedding_lookup_sparse. safe_embedding_lookup_sparse ensures there are no empty rows and all weights and ids are positive at the expense of extra compute cost. This only applies to rank 2 (NxM) shaped input tensors. Defaults to true; consider turning it off if the above checks are not needed. Note that having empty rows will not trigger any error, though the output result might be 0 or omitted.
Returns DenseColumn that converts from sparse input.
Raises
ValueError if dimension not > 0.
ValueError if exactly one of ckpt_to_load_from and tensor_name_in_ckpt is specified.
ValueError if initializer is specified and is not callable.
RuntimeError If eager execution is enabled. | tensorflow.feature_column.embedding_column |
tf.feature_column.indicator_column View source on GitHub Represents multi-hot representation of given categorical column. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.feature_column.indicator_column
tf.feature_column.indicator_column(
categorical_column
)
For a DNN model, indicator_column can be used to wrap any categorical_column_* (e.g., to feed to a DNN). Consider using embedding_column if the number of buckets/unique values is large. For a wide (aka linear) model, indicator_column is the internal representation for a categorical column when the categorical column is passed directly (as any element in feature_columns) to linear_model. See linear_model for details.
name = indicator_column(categorical_column_with_vocabulary_list(
'name', ['bob', 'george', 'wanda']))
columns = [name, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)
dense_tensor == [[1, 0, 0]] # If "name" bytes_list is ["bob"]
dense_tensor == [[1, 0, 1]] # If "name" bytes_list is ["bob", "wanda"]
dense_tensor == [[2, 0, 0]] # If "name" bytes_list is ["bob", "bob"]
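The multi-hot outputs shown above can be reproduced with a plain-Python sketch (not TF's implementation): each in-vocabulary value increments its slot, so repeated values are counted.

```python
def multi_hot(values, vocabulary_list):
    """Count occurrences of each vocabulary entry in values."""
    index = {v: i for i, v in enumerate(vocabulary_list)}
    out = [0] * len(vocabulary_list)
    for v in values:
        if v in index:  # out-of-vocabulary values are dropped
            out[index[v]] += 1
    return out
```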
Args
categorical_column A CategoricalColumn which is created by categorical_column_with_* or crossed_column functions.
Returns An IndicatorColumn.
Raises
ValueError If categorical_column is not CategoricalColumn type. | tensorflow.feature_column.indicator_column |
tf.feature_column.make_parse_example_spec View source on GitHub Creates parsing spec dictionary from input feature_columns.
tf.feature_column.make_parse_example_spec(
feature_columns
)
The returned dictionary can be used as arg 'features' in tf.io.parse_example. Typical usage example: # Define features and transformations
feature_a = tf.feature_column.categorical_column_with_vocabulary_file(...)
feature_b = tf.feature_column.numeric_column(...)
feature_c_bucketized = tf.feature_column.bucketized_column(
tf.feature_column.numeric_column("feature_c"), ...)
feature_a_x_feature_c = tf.feature_column.crossed_column(
columns=["feature_a", feature_c_bucketized], ...)
feature_columns = set(
[feature_b, feature_c_bucketized, feature_a_x_feature_c])
features = tf.io.parse_example(
serialized=serialized_examples,
features=tf.feature_column.make_parse_example_spec(feature_columns))
For the above example, make_parse_example_spec would return the dict: {
"feature_a": parsing_ops.VarLenFeature(tf.string),
"feature_b": parsing_ops.FixedLenFeature([1], dtype=tf.float32),
"feature_c": parsing_ops.FixedLenFeature([1], dtype=tf.float32)
}
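The aggregation make_parse_example_spec performs can be sketched in plain Python: each feature column contributes a parsing config keyed by feature name, and configs for the same key must agree. This is an illustrative sketch with strings standing in for VarLenFeature/FixedLenFeature objects, not the real implementation.

```python
def merge_specs(column_specs):
    """Merge per-column parsing configs into one spec dict."""
    spec = {}
    for column_spec in column_specs:
        for key, config in column_spec.items():
            if key in spec and spec[key] != config:
                raise ValueError('conflicting parsing configs for %r' % key)
            spec[key] = config
    return spec
```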
Args
feature_columns An iterable containing all feature columns. All items should be instances of classes derived from FeatureColumn.
Returns A dict mapping each feature key to a FixedLenFeature or VarLenFeature value.
Raises
ValueError If any of the given feature_columns is not a FeatureColumn instance. | tensorflow.feature_column.make_parse_example_spec |
tf.feature_column.numeric_column View source on GitHub Represents real valued or numerical features. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.feature_column.numeric_column
tf.feature_column.numeric_column(
key, shape=(1,), default_value=None, dtype=tf.dtypes.float32, normalizer_fn=None
)
Example: price = numeric_column('price')
columns = [price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)
# or
bucketized_price = bucketized_column(price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)
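A typical normalizer_fn for a numeric column is a standardization function applied after parsing; the mean (3.0) and scale (4.2) below are made-up constants echoing the docstring's own example, not values from any real dataset.

```python
# Runs on the parsed tensor after default_value substitution.
normalizer_fn = lambda x: (x - 3.0) / 4.2
```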
Args
key A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature Tensor objects, and feature columns.
shape An iterable of integers specifies the shape of the Tensor. An integer can be given which means a single dimension Tensor with given width. The Tensor representing the column will have the shape of [batch_size] + shape.
default_value A single value compatible with dtype or an iterable of values compatible with dtype which the column takes on during tf.Example parsing if data is missing. A default value of None will cause tf.io.parse_example to fail if an example does not contain this column. If a single value is provided, the same value will be applied as the default value for every item. If an iterable of values is provided, the shape of the default_value should be equal to the given shape.
dtype defines the type of values. Default value is tf.float32. Must be a non-quantized, real integer or floating point type.
normalizer_fn If not None, a function that can be used to normalize the value of the tensor after default_value is applied for parsing. The normalizer function takes the input Tensor as its argument, and returns the output Tensor (e.g. lambda x: (x - 3.0) / 4.2). Please note that even though the most common use case of this function is normalization, it can be used for any kind of TensorFlow transformation.
Returns A NumericColumn.
Raises
TypeError if any dimension in shape is not an int
ValueError if any dimension in shape is not a positive integer
TypeError if default_value is an iterable but not compatible with shape
TypeError if default_value is not compatible with dtype.
ValueError if dtype is not convertible to tf.float32. | tensorflow.feature_column.numeric_column |
tf.feature_column.sequence_categorical_column_with_hash_bucket View source on GitHub A sequence of categorical terms where ids are set by hashing. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.feature_column.sequence_categorical_column_with_hash_bucket
tf.feature_column.sequence_categorical_column_with_hash_bucket(
key, hash_bucket_size, dtype=tf.dtypes.string
)
Pass this to embedding_column or indicator_column to convert sequence categorical data into a dense representation for input to a sequence NN, such as an RNN. Example:
tokens = sequence_categorical_column_with_hash_bucket(
'tokens', hash_bucket_size=1000)
tokens_embedding = embedding_column(tokens, dimension=10)
columns = [tokens_embedding]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
sequence_feature_layer = SequenceFeatures(columns)
sequence_input, sequence_length = sequence_feature_layer(features)
sequence_length_mask = tf.sequence_mask(sequence_length)
rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)
rnn_layer = tf.keras.layers.RNN(rnn_cell)
outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)
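What tf.sequence_mask computes from the sequence_length tensor above can be sketched in plain Python (an illustrative stand-in, not the TF op): True for valid timesteps, False for padding.

```python
def sequence_mask(lengths, maxlen=None):
    """Boolean mask per sequence: timestep t is valid iff t < length."""
    if maxlen is None:
        maxlen = max(lengths)
    return [[t < n for t in range(maxlen)] for n in lengths]
```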
Args
key A unique string identifying the input feature.
hash_bucket_size An int > 1. The number of buckets.
dtype The type of features. Only string and integer types are supported.
Returns A SequenceCategoricalColumn.
Raises
ValueError hash_bucket_size is not greater than 1.
ValueError dtype is neither string nor integer. | tensorflow.feature_column.sequence_categorical_column_with_hash_bucket |
tf.feature_column.sequence_categorical_column_with_identity View source on GitHub Returns a feature column that represents sequences of integers. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.feature_column.sequence_categorical_column_with_identity
tf.feature_column.sequence_categorical_column_with_identity(
key, num_buckets, default_value=None
)
Pass this to embedding_column or indicator_column to convert sequence categorical data into a dense representation for input to a sequence NN, such as an RNN. Example:
watches = sequence_categorical_column_with_identity(
'watches', num_buckets=1000)
watches_embedding = embedding_column(watches, dimension=10)
columns = [watches_embedding]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
sequence_feature_layer = SequenceFeatures(columns)
sequence_input, sequence_length = sequence_feature_layer(features)
sequence_length_mask = tf.sequence_mask(sequence_length)
rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)
rnn_layer = tf.keras.layers.RNN(rnn_cell)
outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)
Args
key A unique string identifying the input feature.
num_buckets Range of inputs. Namely, inputs are expected to be in the range [0, num_buckets).
default_value If None, this column's graph operations will fail for out-of-range inputs. Otherwise, this value must be in the range [0, num_buckets), and will replace out-of-range inputs.
Returns A SequenceCategoricalColumn.
Raises
ValueError if num_buckets is less than one.
ValueError if default_value is not in range [0, num_buckets). | tensorflow.feature_column.sequence_categorical_column_with_identity |
tf.feature_column.sequence_categorical_column_with_vocabulary_file View source on GitHub A sequence of categorical terms where ids use a vocabulary file. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.feature_column.sequence_categorical_column_with_vocabulary_file
tf.feature_column.sequence_categorical_column_with_vocabulary_file(
key, vocabulary_file, vocabulary_size=None, num_oov_buckets=0,
default_value=None, dtype=tf.dtypes.string
)
Pass this to embedding_column or indicator_column to convert sequence categorical data into a dense representation for input to a sequence NN, such as an RNN. Example:
states = sequence_categorical_column_with_vocabulary_file(
key='states', vocabulary_file='/us/states.txt', vocabulary_size=50,
num_oov_buckets=5)
states_embedding = embedding_column(states, dimension=10)
columns = [states_embedding]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
sequence_feature_layer = SequenceFeatures(columns)
sequence_input, sequence_length = sequence_feature_layer(features)
sequence_length_mask = tf.sequence_mask(sequence_length)
rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)
rnn_layer = tf.keras.layers.RNN(rnn_cell)
outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)
Args
key A unique string identifying the input feature.
vocabulary_file The vocabulary file name.
vocabulary_size Number of elements in the vocabulary. This must be no greater than the length of vocabulary_file; if it is less, later values are ignored. If None, it is set to the length of vocabulary_file.
num_oov_buckets Non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs will be assigned IDs in the range [vocabulary_size, vocabulary_size+num_oov_buckets) based on a hash of the input value. A positive num_oov_buckets can not be specified with default_value.
default_value The integer ID value to return for out-of-vocabulary feature values, defaults to -1. This can not be specified with a positive num_oov_buckets.
dtype The type of features. Only string and integer types are supported.
Returns A SequenceCategoricalColumn.
Raises
ValueError vocabulary_file is missing or cannot be opened.
ValueError vocabulary_size is missing or < 1.
ValueError num_oov_buckets is a negative integer.
ValueError num_oov_buckets and default_value are both specified.
ValueError dtype is neither string nor integer. | tensorflow.feature_column.sequence_categorical_column_with_vocabulary_file |
tf.feature_column.sequence_categorical_column_with_vocabulary_list View source on GitHub A sequence of categorical terms where ids use an in-memory list. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.feature_column.sequence_categorical_column_with_vocabulary_list
tf.feature_column.sequence_categorical_column_with_vocabulary_list(
key, vocabulary_list, dtype=None, default_value=-1, num_oov_buckets=0
)
Pass this to embedding_column or indicator_column to convert sequence categorical data into a dense representation for input to a sequence NN, such as an RNN. Example:
colors = sequence_categorical_column_with_vocabulary_list(
key='colors', vocabulary_list=('R', 'G', 'B', 'Y'),
num_oov_buckets=2)
colors_embedding = embedding_column(colors, dimension=3)
columns = [colors_embedding]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
sequence_feature_layer = SequenceFeatures(columns)
sequence_input, sequence_length = sequence_feature_layer(features)
sequence_length_mask = tf.sequence_mask(sequence_length)
rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)
rnn_layer = tf.keras.layers.RNN(rnn_cell)
outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)
Args
key A unique string identifying the input feature.
vocabulary_list An ordered iterable defining the vocabulary. Each feature is mapped to the index of its value (if present) in vocabulary_list. Must be castable to dtype.
dtype The type of features. Only string and integer types are supported. If None, it will be inferred from vocabulary_list.
default_value The integer ID value to return for out-of-vocabulary feature values, defaults to -1. This can not be specified with a positive num_oov_buckets.
num_oov_buckets Non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs will be assigned IDs in the range [len(vocabulary_list), len(vocabulary_list)+num_oov_buckets) based on a hash of the input value. A positive num_oov_buckets can not be specified with default_value.
Returns A SequenceCategoricalColumn.
Raises
ValueError if vocabulary_list is empty, or contains duplicate keys.
ValueError num_oov_buckets is a negative integer.
ValueError num_oov_buckets and default_value are both specified.
ValueError if dtype is not integer or string. | tensorflow.feature_column.sequence_categorical_column_with_vocabulary_list |
tf.feature_column.sequence_numeric_column View source on GitHub Returns a feature column that represents sequences of numeric data. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.feature_column.sequence_numeric_column
tf.feature_column.sequence_numeric_column(
key, shape=(1,), default_value=0.0, dtype=tf.dtypes.float32, normalizer_fn=None
)
Example: temperature = sequence_numeric_column('temperature')
columns = [temperature]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
sequence_feature_layer = SequenceFeatures(columns)
sequence_input, sequence_length = sequence_feature_layer(features)
sequence_length_mask = tf.sequence_mask(sequence_length)
rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)
rnn_layer = tf.keras.layers.RNN(rnn_cell)
outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)
Args
key A unique string identifying the input features.
shape The shape of the input data per sequence id. E.g. if shape=(2,), each example must contain 2 * sequence_length values.
default_value A single value compatible with dtype that is used for padding the sparse data into a dense Tensor.
dtype The type of values.
normalizer_fn If not None, a function that can be used to normalize the value of the tensor after default_value is applied for parsing. The normalizer function takes the input Tensor as its argument, and returns the output Tensor (e.g. lambda x: (x - 3.0) / 4.2). Please note that even though the most common use case of this function is normalization, it can be used for any kind of TensorFlow transformation.
Returns A SequenceNumericColumn.
Raises
TypeError if any dimension in shape is not an int.
ValueError if any dimension in shape is not a positive integer.
ValueError if dtype is not convertible to tf.float32. | tensorflow.feature_column.sequence_numeric_column |
tf.feature_column.shared_embeddings List of dense columns that convert from sparse, categorical input.
tf.feature_column.shared_embeddings(
categorical_columns, dimension, combiner='mean', initializer=None,
shared_embedding_collection_name=None, ckpt_to_load_from=None,
tensor_name_in_ckpt=None, max_norm=None, trainable=True,
use_safe_embedding_lookup=True
)
This is similar to embedding_column, except that it produces a list of embedding columns that share the same embedding weights. Use this when your inputs are sparse and of the same type (e.g. watched and impression video IDs that share the same vocabulary), and you want to convert them to a dense representation (e.g., to feed to a DNN). Inputs must be a list of categorical columns created by any of the categorical_column_* functions. They must all be of the same type and have the same arguments except key. E.g. they can be categorical_column_with_vocabulary_file with the same vocabulary_file. Some or all columns could also be weighted_categorical_column. Here is an example embedding of two features for a DNNClassifier model: watched_video_id = categorical_column_with_vocabulary_file(
'watched_video_id', video_vocabulary_file, video_vocabulary_size)
impression_video_id = categorical_column_with_vocabulary_file(
'impression_video_id', video_vocabulary_file, video_vocabulary_size)
columns = shared_embedding_columns(
[watched_video_id, impression_video_id], dimension=10)
estimator = tf.estimator.DNNClassifier(feature_columns=columns, ...)
label_column = ...
def input_fn():
features = tf.io.parse_example(
..., features=make_parse_example_spec(columns + [label_column]))
labels = features.pop(label_column.name)
return features, labels
estimator.train(input_fn=input_fn, steps=100)
Here is an example using shared_embedding_columns with model_fn: def model_fn(features, ...):
watched_video_id = categorical_column_with_vocabulary_file(
'watched_video_id', video_vocabulary_file, video_vocabulary_size)
impression_video_id = categorical_column_with_vocabulary_file(
'impression_video_id', video_vocabulary_file, video_vocabulary_size)
columns = shared_embedding_columns(
[watched_video_id, impression_video_id], dimension=10)
dense_tensor = input_layer(features, columns)
# Form DNN layers, calculate loss, and return EstimatorSpec.
...
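The key property of shared embeddings — both columns index one weight table, so a single set of parameters serves both features — can be sketched in plain Python (an illustrative stand-in with made-up values, not TF's variable machinery):

```python
# One table shared by both "columns"; gradients would flow into these
# same weights from both features during training.
shared_table = {0: [0.1, 0.2], 1: [0.3, 0.4]}

def embed_shared(ids):
    """Look up IDs in the single shared weight table."""
    return [shared_table[i] for i in ids]

watched = embed_shared([1])     # e.g. watched_video_id
impression = embed_shared([1])  # e.g. impression_video_id
```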
Args
categorical_columns List of categorical columns created by a categorical_column_with_* function. These columns produce the sparse IDs that are inputs to the embedding lookup. All columns must be of the same type and have the same arguments except key. E.g. they can be categorical_column_with_vocabulary_file with the same vocabulary_file. Some or all columns could also be weighted_categorical_column.
dimension An integer specifying dimension of the embedding, must be > 0.
combiner A string specifying how to reduce if there are multiple entries in a single row. Currently 'mean', 'sqrtn' and 'sum' are supported, with 'mean' the default. 'sqrtn' often achieves good accuracy, in particular with bag-of-words columns. Each of these can be thought of as an example-level normalization on the column. For more information, see tf.embedding_lookup_sparse.
initializer A variable initializer function to be used in embedding variable initialization. If not specified, defaults to truncated_normal_initializer with mean 0.0 and standard deviation 1/sqrt(dimension).
shared_embedding_collection_name Optional collective name of these columns. If not given, a reasonable name will be chosen based on the names of categorical_columns.
ckpt_to_load_from String representing checkpoint name/pattern from which to restore column weights. Required if tensor_name_in_ckpt is not None.
tensor_name_in_ckpt Name of the Tensor in ckpt_to_load_from from which to restore the column weights. Required if ckpt_to_load_from is not None.
max_norm If not None, each embedding is clipped if its l2-norm is larger than this value, before combining.
trainable Whether or not the embedding is trainable. Default is True.
use_safe_embedding_lookup If true, uses safe_embedding_lookup_sparse instead of embedding_lookup_sparse. safe_embedding_lookup_sparse ensures there are no empty rows and all weights and ids are positive at the expense of extra compute cost. This only applies to rank 2 (NxM) shaped input tensors. Defaults to true; consider turning it off if the above checks are not needed. Note that having empty rows will not trigger any error, though the output result might be 0 or omitted.
Returns A list of dense columns that converts from sparse input. The order of results follows the ordering of categorical_columns.
Raises
ValueError if dimension not > 0.
ValueError if any of the given categorical_columns is of different type or has different arguments than the others.
ValueError if exactly one of ckpt_to_load_from and tensor_name_in_ckpt is specified.
ValueError if initializer is specified and is not callable.
RuntimeError if eager execution is enabled. | tensorflow.feature_column.shared_embeddings |
tf.feature_column.weighted_categorical_column View source on GitHub Applies weight values to a CategoricalColumn. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.feature_column.weighted_categorical_column
tf.feature_column.weighted_categorical_column(
categorical_column, weight_feature_key, dtype=tf.dtypes.float32
)
Use this when each of your sparse inputs has both an ID and a value. For example, if you're representing text documents as a collection of word frequencies, you can provide 2 parallel sparse input features ('terms' and 'frequencies' below). Example: Input tf.Example objects: [
features {
feature {
key: "terms"
value {bytes_list {value: "very" value: "model"} }
}
feature {
key: "frequencies"
value {float_list {value: 0.3 value: 0.1} }
}
},
features {
feature {
key: "terms"
value {bytes_list {value: "when" value: "course" value: "human"} }
}
feature {
key: "frequencies"
value {float_list {value: 0.4 value: 0.1 value: 0.2} }
}
}
]
categorical_column = categorical_column_with_hash_bucket(
column_name='terms', hash_bucket_size=1000)
weighted_column = weighted_categorical_column(
categorical_column=categorical_column, weight_feature_key='frequencies')
columns = [weighted_column, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction, _, _ = linear_model(features, columns)
This assumes the input dictionary contains a SparseTensor for key 'terms', and a SparseTensor for key 'frequencies'. These 2 tensors must have the same indices and dense shape.
Args
categorical_column A CategoricalColumn created by categorical_column_with_* functions.
weight_feature_key String key for weight values.
dtype Type of weights, such as tf.float32. Only float and integer weights are supported.
Returns A CategoricalColumn composed of two sparse features: one represents id, the other represents weight (value) of the id feature in that example.
Raises
ValueError if dtype is not convertible to float. | tensorflow.feature_column.weighted_categorical_column |
tf.fill View source on GitHub Creates a tensor filled with a scalar value. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.fill
tf.fill(
dims, value, name=None
)
See also tf.ones, tf.zeros, tf.one_hot, tf.eye. This operation creates a tensor of shape dims and fills it with value. For example:
tf.fill([2, 3], 9)
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
array([[9, 9, 9],
[9, 9, 9]], dtype=int32)>
tf.fill evaluates at graph runtime and supports dynamic shapes based on other runtime tf.Tensors, unlike tf.constant(value, shape=dims), which embeds the value as a Const node.
Args
dims A 1-D sequence of non-negative numbers. Represents the shape of the output tf.Tensor. Entries should be of type: int32, int64.
value A value to fill the returned tf.Tensor.
name Optional string. The name of the output tf.Tensor.
Returns A tf.Tensor with shape dims and the same dtype as value.
Raises
InvalidArgumentError dims contains negative entries.
NotFoundError dims contains non-integer entries. Numpy Compatibility Similar to np.full. In numpy, more parameters are supported. Passing a number argument as the shape (np.full(5, value)) is valid in numpy for specifying a 1-D shaped result, while TensorFlow does not support this syntax. | tensorflow.fill |
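The fill semantics above can be sketched in plain Python (a hypothetical helper, not TensorFlow's implementation): a nested list of shape dims in which every leaf is value.

```python
def fill(dims, value):
    """Pure-Python sketch of tf.fill: nested list of shape `dims` filled with `value`."""
    if not dims:
        return value  # empty shape: the result is the scalar value itself
    # Build one fresh copy per entry of the leading dimension so rows
    # do not alias each other.
    return [fill(dims[1:], value) for _ in range(dims[0])]

print(fill([2, 3], 9))  # mirrors tf.fill([2, 3], 9)
```

This also shows why dims must contain non-negative integers: each entry drives a range() over one output dimension.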
tf.fingerprint View source on GitHub Generates fingerprint values. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.fingerprint
tf.fingerprint(
data, method='farmhash64', name=None
)
Generates fingerprint values of data. Fingerprint op considers the first dimension of data as the batch dimension, and output[i] contains the fingerprint value generated from contents in data[i, ...] for all i. Fingerprint op writes fingerprint values as byte arrays. For example, the default method farmhash64 generates a 64-bit fingerprint value at a time. This 8-byte value is written out as a tf.uint8 array of size 8, in little-endian order. For example, suppose that data has data type tf.int32 and shape (2, 3, 4), and that the fingerprint method is farmhash64. In this case, the output shape is (2, 8), where 2 is the batch dimension size of data, and 8 is the size of each fingerprint value in bytes. output[0, :] is generated from 12 integers in data[0, :, :] and similarly output[1, :] is generated from the other 12 integers in data[1, :, :]. Note that this op fingerprints the raw underlying buffer, and it does not fingerprint Tensor's metadata such as data type and/or shape. For example, the fingerprint values are invariant under reshapes and bitcasts as long as the batch dimension remains the same: tf.fingerprint(data) == tf.fingerprint(tf.reshape(data, ...))
tf.fingerprint(data) == tf.fingerprint(tf.bitcast(data, ...))
For string data, one should expect tf.fingerprint(data) != tf.fingerprint(tf.strings.reduce_join(data)) in general.
Args
data A Tensor. Must have rank 1 or higher.
method A Tensor of type tf.string. Fingerprint method used by this op. Currently available method is farmhash64.
name A name for the operation (optional).
Returns A two-dimensional Tensor of type tf.uint8. The first dimension equals to data's first dimension, and the second dimension size depends on the fingerprint algorithm. | tensorflow.fingerprint |
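The little-endian byte layout described above can be illustrated in plain Python. The 64-bit value below is a stand-in, not an actual farmhash64 output; the point is only how one 8-byte fingerprint becomes a row of 8 uint8 entries.

```python
# Stand-in for one 64-bit fingerprint value (NOT a real farmhash64 result).
value = 0x0123456789ABCDEF

# tf.fingerprint writes each 64-bit value as 8 uint8 bytes, least-significant
# byte first (little-endian), so this is the shape of one output row.
row = list(value.to_bytes(8, byteorder="little"))
print(row)  # [0xEF, 0xCD, 0xAB, 0x89, 0x67, 0x45, 0x23, 0x01]
```

For a batch of N inputs, stacking N such rows gives the documented (N, 8) output shape.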
tf.foldl View source on GitHub foldl on the list of tensors unpacked from elems on dimension 0. (deprecated argument values)
tf.foldl(
fn, elems, initializer=None, parallel_iterations=10, back_prop=True,
swap_memory=False, name=None
)
Warning: SOME ARGUMENT VALUES ARE DEPRECATED: (back_prop=False). They will be removed in a future version. Instructions for updating: back_prop=False is deprecated. Consider using tf.stop_gradient instead. Instead of: results = tf.foldl(fn, elems, back_prop=False) Use: results = tf.nest.map_structure(tf.stop_gradient, tf.foldl(fn, elems)) This foldl operator repeatedly applies the callable fn to a sequence of elements from first to last. The elements are made of the tensors unpacked from elems on dimension 0. The callable fn takes two tensors as arguments. The first argument is the accumulated value computed from the preceding invocation of fn, and the second is the value at the current position of elems. If initializer is None, elems must contain at least one element, and its first element is used as the initializer. Suppose that elems is unpacked into values, a list of tensors. The shape of the result tensor is fn(initializer, values[0]).shape. This method also allows multi-arity elems and output of fn. If elems is a (possibly nested) list or tuple of tensors, then each of these tensors must have a matching first (unpack) dimension. The signature of fn may match the structure of elems. That is, if elems is (t1, [t2, t3, [t4, t5]]), then an appropriate signature for fn is: fn = lambda (t1, [t2, t3, [t4, t5]]):.
Args
fn The callable to be performed.
elems A tensor or (possibly nested) sequence of tensors, each of which will be unpacked along their first dimension. The nested sequence of the resulting slices will be the first argument to fn.
initializer (optional) A tensor or (possibly nested) sequence of tensors, as the initial value for the accumulator.
parallel_iterations (optional) The number of iterations allowed to run in parallel.
back_prop (optional) Deprecated. False disables support for back propagation. Prefer using tf.stop_gradient instead.
swap_memory (optional) True enables GPU-CPU memory swapping.
name (optional) Name prefix for the returned tensors.
Returns A tensor or (possibly nested) sequence of tensors, resulting from applying fn consecutively to the list of tensors unpacked from elems, from first to last.
Raises
TypeError if fn is not callable. Example: elems = tf.constant([1, 2, 3, 4, 5, 6])
sum = foldl(lambda a, x: a + x, elems)
# sum == 21 | tensorflow.foldl |
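The first-to-last accumulation of tf.foldl with no initializer matches Python's functools.reduce, which can serve as a mental model:

```python
from functools import reduce

elems = [1, 2, 3, 4, 5, 6]

# As in tf.foldl with initializer=None, the first element seeds the
# accumulator and fn is applied left to right over the rest.
total = reduce(lambda a, x: a + x, elems)
print(total)  # 21
```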
tf.foldr View source on GitHub foldr on the list of tensors unpacked from elems on dimension 0. (deprecated argument values)
tf.foldr(
fn, elems, initializer=None, parallel_iterations=10, back_prop=True,
swap_memory=False, name=None
)
Warning: SOME ARGUMENT VALUES ARE DEPRECATED: (back_prop=False). They will be removed in a future version. Instructions for updating: back_prop=False is deprecated. Consider using tf.stop_gradient instead. Instead of: results = tf.foldr(fn, elems, back_prop=False) Use: results = tf.nest.map_structure(tf.stop_gradient, tf.foldr(fn, elems)) This foldr operator repeatedly applies the callable fn to a sequence of elements from last to first. The elements are made of the tensors unpacked from elems. The callable fn takes two tensors as arguments. The first argument is the accumulated value computed from the preceding invocation of fn, and the second is the value at the current position of elems. If initializer is None, elems must contain at least one element, and its first element is used as the initializer. Suppose that elems is unpacked into values, a list of tensors. The shape of the result tensor is fn(initializer, values[0]).shape. This method also allows multi-arity elems and output of fn. If elems is a (possibly nested) list or tuple of tensors, then each of these tensors must have a matching first (unpack) dimension. The signature of fn may match the structure of elems. That is, if elems is (t1, [t2, t3, [t4, t5]]), then an appropriate signature for fn is: fn = lambda (t1, [t2, t3, [t4, t5]]):.
Args
fn The callable to be performed.
elems A tensor or (possibly nested) sequence of tensors, each of which will be unpacked along their first dimension. The nested sequence of the resulting slices will be the first argument to fn.
initializer (optional) A tensor or (possibly nested) sequence of tensors, as the initial value for the accumulator.
parallel_iterations (optional) The number of iterations allowed to run in parallel.
back_prop (optional) Deprecated. False disables support for back propagation. Prefer using tf.stop_gradient instead.
swap_memory (optional) True enables GPU-CPU memory swapping.
name (optional) Name prefix for the returned tensors.
Returns A tensor or (possibly nested) sequence of tensors, resulting from applying fn consecutively to the list of tensors unpacked from elems, from last to first.
Raises
TypeError if fn is not callable. Example: elems = [1, 2, 3, 4, 5, 6]
sum = foldr(lambda a, x: a + x, elems)
# sum == 21 | tensorflow.foldr |
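Because tf.foldr walks elems from last to first, its result differs from tf.foldl whenever fn is not commutative/associative. A plain-Python sketch of the two directions:

```python
from functools import reduce

elems = [1, 2, 3, 4]
fn = lambda a, x: 2 * a + x  # order-sensitive, so fold direction matters

left = reduce(fn, elems)             # foldl order: first to last
right = reduce(fn, reversed(elems))  # foldr order: last to first
print(left, right)  # 26 49
```

With a symmetric fn such as addition, both directions give the same value (21 in the doc's example).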
tf.function View source on GitHub Compiles a function into a callable TensorFlow graph. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.function
tf.function(
func=None, input_signature=None, autograph=True, experimental_implements=None,
experimental_autograph_options=None, experimental_relax_shapes=False,
experimental_compile=None, experimental_follow_type_hints=None
)
tf.function constructs a callable that executes a TensorFlow graph (tf.Graph) created by trace-compiling the TensorFlow operations in func, effectively executing func as a TensorFlow graph. Example usage:
@tf.function
def f(x, y):
return x ** 2 + y
x = tf.constant([2, 3])
y = tf.constant([3, -2])
f(x, y)
<tf.Tensor: ... numpy=array([7, 7], ...)>
Features func may use data-dependent control flow, including if, for, while, break, continue and return statements:
@tf.function
def f(x):
if tf.reduce_sum(x) > 0:
return x * x
else:
return -x // 2
f(tf.constant(-2))
<tf.Tensor: ... numpy=1>
func's closure may include tf.Tensor and tf.Variable objects:
@tf.function
def f():
return x ** 2 + y
x = tf.constant([-2, -3])
y = tf.Variable([3, -2])
f()
<tf.Tensor: ... numpy=array([7, 7], ...)>
func may also use ops with side effects, such as tf.print, tf.Variable and others:
v = tf.Variable(1)
@tf.function
def f(x):
for i in tf.range(x):
v.assign_add(i)
f(3)
v
<tf.Variable ... numpy=4>
Key Point: Any Python side-effects (appending to a list, printing with print, etc) will only happen once, when func is traced. To have side-effects executed into your tf.function they need to be written as TF ops:
l = []
@tf.function
def f(x):
for i in x:
l.append(i + 1) # Caution! Will only happen once when tracing
f(tf.constant([1, 2, 3]))
l
[<tf.Tensor ...>]
Instead, use TensorFlow collections like tf.TensorArray:
@tf.function
def f(x):
ta = tf.TensorArray(dtype=tf.int32, size=0, dynamic_size=True)
for i in range(len(x)):
ta = ta.write(i, x[i] + 1)
return ta.stack()
f(tf.constant([1, 2, 3]))
<tf.Tensor: ..., numpy=array([2, 3, 4], ...)>
tf.function is polymorphic Internally, tf.function can build more than one graph, to support arguments with different data types or shapes, since TensorFlow can build more efficient graphs that are specialized on shapes and dtypes. tf.function also treats any pure Python value as opaque objects, and builds a separate graph for each set of Python arguments that it encounters. To obtain an individual graph, use the get_concrete_function method of the callable created by tf.function. It can be called with the same arguments as func and returns a special tf.Graph object:
@tf.function
def f(x):
return x + 1
isinstance(f.get_concrete_function(1).graph, tf.Graph)
True
Caution: Passing python scalars or lists as arguments to tf.function will always build a new graph. To avoid this, pass numeric arguments as Tensors whenever possible:
@tf.function
def f(x):
return tf.abs(x)
f1 = f.get_concrete_function(1)
f2 = f.get_concrete_function(2) # Slow - builds new graph
f1 is f2
False
f1 = f.get_concrete_function(tf.constant(1))
f2 = f.get_concrete_function(tf.constant(2)) # Fast - reuses f1
f1 is f2
True
Python numerical arguments should only be used when they take few distinct values, such as hyperparameters like the number of layers in a neural network. Input signatures For Tensor arguments, tf.function instantiates a separate graph for every unique set of input shapes and datatypes. The example below creates two separate graphs, each specialized to a different shape:
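The retracing behavior above can be modeled as a cache keyed differently for Python values and tensors. The toy code below is a hypothetical sketch, not TensorFlow internals: Python scalars are keyed by value (each new scalar "builds a new graph"), while tensor-like inputs are keyed only by dtype and shape.

```python
# Toy model of tf.function's trace cache (hypothetical, not TF code).
class ToyTensor:
    def __init__(self, value, dtype="int32", shape=()):
        self.value, self.dtype, self.shape = value, dtype, shape

def make_traced(fn):
    cache = {}
    def wrapper(x):
        # Tensors are cached by (dtype, shape); plain Python values by value.
        key = (x.dtype, x.shape) if isinstance(x, ToyTensor) else ("py", x)
        if key not in cache:
            cache[key] = f"graph for {key}"  # stand-in for a traced tf.Graph
        return cache[key]
    wrapper.cache = cache
    return wrapper

f = make_traced(lambda x: x)
f(1); f(2)                         # two Python scalars -> two cache entries
f(ToyTensor(1)); f(ToyTensor(2))   # same dtype/shape -> one shared entry
print(len(f.cache))  # 3
```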
@tf.function
def f(x):
return x + 1
vector = tf.constant([1.0, 1.0])
matrix = tf.constant([[3.0]])
f.get_concrete_function(vector) is f.get_concrete_function(matrix)
False
An "input signature" can be optionally provided to tf.function to control the graphs traced. The input signature specifies the shape and type of each Tensor argument to the function using a tf.TensorSpec object. More general shapes can be used. This is useful to avoid creating multiple graphs when Tensors have dynamic shapes. It also restricts the shape and datatype of Tensors that can be used:
@tf.function(
input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)])
def f(x):
return x + 1
vector = tf.constant([1.0, 1.0])
matrix = tf.constant([[3.0]])
f.get_concrete_function(vector) is f.get_concrete_function(matrix)
True
Variables may only be created once tf.function only allows creating new tf.Variable objects when it is called for the first time:
class MyModule(tf.Module):
def __init__(self):
self.v = None
@tf.function
def __call__(self, x):
if self.v is None:
self.v = tf.Variable(tf.ones_like(x))
return self.v * x
In general, it is recommended to create stateful objects like tf.Variable outside of tf.function and pass them as arguments. Using type annotations to improve performance experimental_follow_type_hints can be used along with type annotations to improve performance by reducing the number of expensive graph retracings. For example, an argument annotated with tf.Tensor is converted to Tensor even when the input is a non-Tensor value.
@tf.function(experimental_follow_type_hints=True)
def f_with_hints(x: tf.Tensor):
print('Tracing')
return x
@tf.function(experimental_follow_type_hints=False)
def f_no_hints(x: tf.Tensor):
print('Tracing')
return x
f_no_hints(1)
Tracing
<tf.Tensor: shape=(), dtype=int32, numpy=1>
f_no_hints(2)
Tracing
<tf.Tensor: shape=(), dtype=int32, numpy=2>
f_with_hints(1)
Tracing
<tf.Tensor: shape=(), dtype=int32, numpy=1>
f_with_hints(2)
<tf.Tensor: shape=(), dtype=int32, numpy=2>
Args
func the function to be compiled. If func is None, tf.function returns a decorator that can be invoked with a single argument - func. In other words, tf.function(input_signature=...)(func) is equivalent to tf.function(func, input_signature=...). The former can be used as a decorator.
input_signature A possibly nested sequence of tf.TensorSpec objects specifying the shapes and dtypes of the Tensors that will be supplied to this function. If None, a separate function is instantiated for each inferred input signature. If input_signature is specified, every input to func must be a Tensor, and func cannot accept **kwargs.
autograph Whether autograph should be applied on func before tracing a graph. Data-dependent control flow requires autograph=True. For more information, see the tf.function and AutoGraph guide.
experimental_implements If provided, contains a name of a "known" function this implements. For example "mycompany.my_recurrent_cell". This is stored as an attribute in inference function, which can then be detected when processing serialized function. See standardizing composite ops for details. For an example of utilizing this attribute see this example The code above automatically detects and substitutes function that implements "embedded_matmul" and allows TFLite to substitute its own implementations. For instance, a tensorflow user can use this attribute to mark that their function also implements embedded_matmul (perhaps more efficiently!) by specifying it using this parameter: @tf.function(experimental_implements="embedded_matmul") This can either be specified as just the string name of the function or a NameAttrList corresponding to a list of key-value attributes associated with the function name. The name of the function will be in the 'name' field of the NameAttrList.
experimental_autograph_options Optional tuple of tf.autograph.experimental.Feature values.
experimental_relax_shapes When True, tf.function may generate fewer graphs that are less specialized on input shapes.
experimental_compile If True, the function is always compiled by XLA. XLA may be more efficient in some cases (e.g. TPU, XLA_GPU, dense tensor computations).
experimental_follow_type_hints When True, the function may use type annotations from func to optimize the tracing performance. For example, arguments annotated with tf.Tensor will automatically be converted to a Tensor.
Returns If func is not None, returns a callable that will execute the compiled function (and return zero or more tf.Tensor objects). If func is None, returns a decorator that, when invoked with a single func argument, returns a callable equivalent to the case above.
Raises ValueError when attempting to use experimental_compile, but XLA support is not enabled. | tensorflow.function |
tf.gather View source on GitHub Gather slices from params axis axis according to indices.
tf.gather(
params, indices, validate_indices=None, axis=None, batch_dims=0, name=None
)
Gather slices from params axis axis according to indices. indices must be an integer tensor of any dimension (usually 0-D or 1-D). For 0-D (scalar) indices: $$\begin{align*} output[p_0, ..., p_{axis-1}, && &&& p_{axis + 1}, ..., p_{N-1}] = \\ params[p_0, ..., p_{axis-1}, && indices, &&& p_{axis + 1}, ..., p_{N-1}] \end{align*}$$ Where N = ndims(params). For 1-D (vector) indices with batch_dims=0: $$\begin{align*} output[p_0, ..., p_{axis-1}, && &i, &&p_{axis + 1}, ..., p_{N-1}] =\\ params[p_0, ..., p_{axis-1}, && indices[&i], &&p_{axis + 1}, ..., p_{N-1}] \end{align*}$$ In the general case, produces an output tensor where: $$\begin{align*} output[p_0, &..., p_{axis-1}, & &i_{B}, ..., i_{M-1}, & p_{axis + 1}, &..., p_{N-1}] = \\ params[p_0, &..., p_{axis-1}, & indices[p_0, ..., p_{B-1}, &i_{B}, ..., i_{M-1}], & p_{axis + 1}, &..., p_{N-1}] \end{align*}$$ Where N = ndims(params), M = ndims(indices), and B = batch_dims. Note that params.shape[:batch_dims] must be identical to indices.shape[:batch_dims]. The shape of the output tensor is: output.shape = params.shape[:axis] + indices.shape[batch_dims:] + params.shape[axis + 1:]. Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, a 0 is stored in the corresponding output value. See also tf.gather_nd.
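For the common case axis=0 and batch_dims=0, the formulas above reduce to output[i] = params[indices[i]], which a pure-Python sketch (a hypothetical helper, not TensorFlow's implementation) makes concrete:

```python
# Pure-Python sketch of tf.gather with axis=0, batch_dims=0:
# each index selects one slice along the first dimension of params.
def gather(params, indices):
    return [params[i] for i in indices]

params = [[0, 1], [2, 3], [4, 5]]
print(gather(params, [2, 0]))  # [[4, 5], [0, 1]]
```

Note the output shape rule: indices.shape + params.shape[1:], here (2,) + (2,) = (2, 2).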
Args
params The Tensor from which to gather values. Must be at least rank axis + 1.
indices The index Tensor. Must be one of the following types: int32, int64. Must be in range [0, params.shape[axis]).
validate_indices Deprecated, does nothing.
axis A Tensor. Must be one of the following types: int32, int64. The axis in params to gather indices from. Must be greater than or equal to batch_dims. Defaults to the first non-batch dimension. Supports negative indexes.
batch_dims An integer. The number of batch dimensions. Must be less than or equal to rank(indices).
name A name for the operation (optional).
Returns A Tensor. Has the same type as params. | tensorflow.gather |
tf.gather_nd View source on GitHub Gather slices from params into a Tensor with shape specified by indices.
tf.gather_nd(
params, indices, batch_dims=0, name=None
)
indices is a K-dimensional integer tensor, best thought of as a (K-1)-dimensional tensor of indices into params, where each element defines a slice of params: output[\\(i_0, ..., i_{K-2}\\)] = params[indices[\\(i_0, ..., i_{K-2}\\)]]
Whereas in tf.gather indices defines slices into the first dimension of params, in tf.gather_nd, indices defines slices into the first N dimensions of params, where N = indices.shape[-1]. The last dimension of indices can be at most the rank of params: indices.shape[-1] <= params.rank
The last dimension of indices corresponds to elements (if indices.shape[-1] == params.rank) or slices (if indices.shape[-1] < params.rank) along dimension indices.shape[-1] of params. The output tensor has shape indices.shape[:-1] + params.shape[indices.shape[-1]:]
Additionally both 'params' and 'indices' can have M leading batch dimensions that exactly match. In this case 'batch_dims' must be M. Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, a 0 is stored in the corresponding output value. Some examples below. Simple indexing into a matrix: indices = [[0, 0], [1, 1]]
params = [['a', 'b'], ['c', 'd']]
output = ['a', 'd']
Slice indexing into a matrix: indices = [[1], [0]]
params = [['a', 'b'], ['c', 'd']]
output = [['c', 'd'], ['a', 'b']]
Indexing into a 3-tensor: indices = [[1]]
params = [[['a0', 'b0'], ['c0', 'd0']],
[['a1', 'b1'], ['c1', 'd1']]]
output = [[['a1', 'b1'], ['c1', 'd1']]]
indices = [[0, 1], [1, 0]]
params = [[['a0', 'b0'], ['c0', 'd0']],
[['a1', 'b1'], ['c1', 'd1']]]
output = [['c0', 'd0'], ['a1', 'b1']]
indices = [[0, 0, 1], [1, 0, 1]]
params = [[['a0', 'b0'], ['c0', 'd0']],
[['a1', 'b1'], ['c1', 'd1']]]
output = ['b0', 'b1']
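The non-batched examples above follow one rule: each row of indices indexes into the first len(row) dimensions of params, producing an element or a slice. A pure-Python sketch (a hypothetical helper, not TensorFlow's implementation):

```python
# Pure-Python sketch of non-batched tf.gather_nd.
def gather_nd(params, indices):
    def lookup(index):
        out = params
        for i in index:       # descend one dimension per index component
            out = out[i]
        return out            # element if len(index) == rank, else a slice
    return [lookup(row) for row in indices]

params = [['a', 'b'], ['c', 'd']]
print(gather_nd(params, [[0, 0], [1, 1]]))  # ['a', 'd']
print(gather_nd(params, [[1], [0]]))        # [['c', 'd'], ['a', 'b']]
```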
The examples below are for the case when only indices have leading extra dimensions. If both 'params' and 'indices' have leading batch dimensions, use the 'batch_dims' parameter to run gather_nd in batch mode. Batched indexing into a matrix: indices = [[[0, 0]], [[0, 1]]]
params = [['a', 'b'], ['c', 'd']]
output = [['a'], ['b']]
Batched slice indexing into a matrix: indices = [[[1]], [[0]]]
params = [['a', 'b'], ['c', 'd']]
output = [[['c', 'd']], [['a', 'b']]]
Batched indexing into a 3-tensor: indices = [[[1]], [[0]]]
params = [[['a0', 'b0'], ['c0', 'd0']],
[['a1', 'b1'], ['c1', 'd1']]]
output = [[[['a1', 'b1'], ['c1', 'd1']]],
[[['a0', 'b0'], ['c0', 'd0']]]]
indices = [[[0, 1], [1, 0]], [[0, 0], [1, 1]]]
params = [[['a0', 'b0'], ['c0', 'd0']],
[['a1', 'b1'], ['c1', 'd1']]]
output = [[['c0', 'd0'], ['a1', 'b1']],
[['a0', 'b0'], ['c1', 'd1']]]
indices = [[[0, 0, 1], [1, 0, 1]], [[0, 1, 1], [1, 1, 0]]]
params = [[['a0', 'b0'], ['c0', 'd0']],
[['a1', 'b1'], ['c1', 'd1']]]
output = [['b0', 'b1'], ['d0', 'c1']]
Examples with batched 'params' and 'indices': batch_dims = 1
indices = [[1], [0]]
params = [[['a0', 'b0'], ['c0', 'd0']],
[['a1', 'b1'], ['c1', 'd1']]]
output = [['c0', 'd0'], ['a1', 'b1']]
batch_dims = 1
indices = [[[1]], [[0]]]
params = [[['a0', 'b0'], ['c0', 'd0']],
[['a1', 'b1'], ['c1', 'd1']]]
output = [[['c0', 'd0']], [['a1', 'b1']]]
batch_dims = 1
indices = [[[1, 0]], [[0, 1]]]
params = [[['a0', 'b0'], ['c0', 'd0']],
[['a1', 'b1'], ['c1', 'd1']]]
output = [['c0'], ['b1']]
See also tf.gather.
Args
params A Tensor. The tensor from which to gather values.
indices A Tensor. Must be one of the following types: int32, int64. Index tensor.
name A name for the operation (optional).
batch_dims An integer or a scalar 'Tensor'. The number of batch dimensions.
Returns A Tensor. Has the same type as params. | tensorflow.gather_nd |
tf.get_logger View source on GitHub Return TF logger instance. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.get_logger
tf.get_logger() | tensorflow.get_logger |
tf.get_static_value View source on GitHub Returns the constant value of the given tensor, if efficiently calculable. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.get_static_value
tf.get_static_value(
tensor, partial=False
)
This function attempts to partially evaluate the given tensor, and returns its value as a numpy ndarray if this succeeds. Compatibility(V1): If constant_value(tensor) returns a non-None result, it will no longer be possible to feed a different value for tensor. This allows the result of this function to influence the graph that is constructed, and permits static shape optimizations.
Args
tensor The Tensor to be evaluated.
partial If True, the returned numpy array is allowed to have partially evaluated values. Values that can't be evaluated will be None.
Returns A numpy ndarray containing the constant value of the given tensor, or None if it cannot be calculated.
Raises
TypeError if tensor is not an ops.Tensor. | tensorflow.get_static_value |
tf.gradients View source on GitHub Constructs symbolic derivatives of sum of ys w.r.t. x in xs.
tf.gradients(
ys, xs, grad_ys=None, name='gradients', gate_gradients=False,
aggregation_method=None, stop_gradients=None,
unconnected_gradients=tf.UnconnectedGradients.NONE
)
tf.gradients is only valid in a graph context. In particular, it is valid in the context of a tf.function wrapper, where code is executing as a graph. ys and xs are each a Tensor or a list of tensors. grad_ys is a list of Tensor, holding the gradients received by the ys. The list must be the same length as ys. gradients() adds ops to the graph to output the derivatives of ys with respect to xs. It returns a list of Tensor of length len(xs) where each tensor is the sum(dy/dx) for y in ys and for x in xs. grad_ys is a list of tensors of the same length as ys that holds the initial gradients for each y in ys. When grad_ys is None, we fill in a tensor of '1's of the shape of y for each y in ys. A user can provide their own initial grad_ys to compute the derivatives using a different initial gradient for each y (e.g., if one wanted to weight the gradient differently for each value in each y). stop_gradients is a Tensor or a list of tensors to be considered constant with respect to all xs. These tensors will not be backpropagated through, as though they had been explicitly disconnected using stop_gradient. Among other things, this allows computation of partial derivatives as opposed to total derivatives. For example:
@tf.function
def example():
a = tf.constant(0.)
b = 2 * a
return tf.gradients(a + b, [a, b], stop_gradients=[a, b])
example()
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>,
<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
Here the partial derivatives g evaluate to [1.0, 1.0], compared to the total derivatives tf.gradients(a + b, [a, b]), which take into account the influence of a on b and evaluate to [3.0, 1.0]. Note that the above is equivalent to:
@tf.function
def example():
a = tf.stop_gradient(tf.constant(0.))
b = tf.stop_gradient(2 * a)
return tf.gradients(a + b, [a, b])
example()
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>,
<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
stop_gradients provides a way of stopping gradient after the graph has already been constructed, as compared to tf.stop_gradient which is used during graph construction. When the two approaches are combined, backpropagation stops at both tf.stop_gradient nodes and nodes in stop_gradients, whichever is encountered first. All integer tensors are considered constant with respect to all xs, as if they were included in stop_gradients. unconnected_gradients determines the value returned for each x in xs if it is unconnected in the graph to ys. By default this is None to safeguard against errors. Mathematically these gradients are zero which can be requested using the 'zero' option. tf.UnconnectedGradients provides the following options and behaviors:
@tf.function
def example(use_zero):
a = tf.ones([1, 2])
b = tf.ones([3, 1])
if use_zero:
return tf.gradients([b], [a], unconnected_gradients='zero')
else:
return tf.gradients([b], [a], unconnected_gradients='none')
example(False)
[None]
example(True)
[<tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[0., 0.]], ...)>]
Consider a practical example from the backpropagation phase: this function evaluates the derivatives of the cost function with respect to weights Ws and biases bs. The sample implementation below illustrates what it is actually used for:
@tf.function
def example():
Ws = tf.constant(0.)
bs = 2 * Ws
cost = Ws + bs # This is just an example. Please ignore the formulas.
g = tf.gradients(cost, [Ws, bs])
dCost_dW, dCost_db = g
return dCost_dW, dCost_db
example()
(<tf.Tensor: shape=(), dtype=float32, numpy=3.0>,
<tf.Tensor: shape=(), dtype=float32, numpy=1.0>)
Args
ys A Tensor or list of tensors to be differentiated.
xs A Tensor or list of tensors to be used for differentiation.
grad_ys Optional. A Tensor or list of tensors the same size as ys and holding the gradients computed for each y in ys.
name Optional name to use for grouping all the gradient ops together. Defaults to 'gradients'.
gate_gradients If True, add a tuple around the gradients returned for an operation. This avoids some race conditions.
aggregation_method Specifies the method used to combine gradient terms. Accepted values are constants defined in the class AggregationMethod.
stop_gradients Optional. A Tensor or list of tensors not to differentiate through.
unconnected_gradients Optional. Specifies the gradient value returned when the given input tensors are unconnected. Accepted values are constants defined in the class tf.UnconnectedGradients and the default value is none.
Returns A list of Tensor of length len(xs) where each tensor is the sum(dy/dx) for y in ys and for x in xs.
Raises
LookupError if one of the operations between x and y does not have a registered gradient function.
ValueError if the arguments are invalid.
RuntimeError if called in Eager mode. | tensorflow.gradients |
tf.GradientTape View source on GitHub Record operations for automatic differentiation. View aliases Main aliases
tf.autodiff.GradientTape Compat aliases for migration See Migration guide for more details. tf.compat.v1.GradientTape
tf.GradientTape(
persistent=False, watch_accessed_variables=True
)
Operations are recorded if they are executed within this context manager and at least one of their inputs is being "watched". Trainable variables (created by tf.Variable or tf.compat.v1.get_variable, where trainable=True is default in both cases) are automatically watched. Tensors can be manually watched by invoking the watch method on this context manager. For example, consider the function y = x * x. The gradient at x = 3.0 can be computed as:
x = tf.constant(3.0)
with tf.GradientTape() as g:
g.watch(x)
y = x * x
dy_dx = g.gradient(y, x)
print(dy_dx)
tf.Tensor(6.0, shape=(), dtype=float32)
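The value 6.0 above can be sanity-checked without TensorFlow using a central finite difference, a standard way to verify analytic gradients:

```python
# Finite-difference check of d(x*x)/dx at x = 3 (plain Python, no TF):
# (f(x+eps) - f(x-eps)) / (2*eps) approximates the derivative.
def numeric_grad(f, x, eps=1e-6):
    return (f(x + eps) - f(x - eps)) / (2 * eps)

g = numeric_grad(lambda x: x * x, 3.0)
print(g)  # close to 6.0
```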
GradientTapes can be nested to compute higher-order derivatives. For example,
x = tf.constant(5.0)
with tf.GradientTape() as g:
g.watch(x)
with tf.GradientTape() as gg:
gg.watch(x)
y = x * x
dy_dx = gg.gradient(y, x) # dy_dx = 2 * x
d2y_dx2 = g.gradient(dy_dx, x) # d2y_dx2 = 2
print(dy_dx)
tf.Tensor(10.0, shape=(), dtype=float32)
print(d2y_dx2)
tf.Tensor(2.0, shape=(), dtype=float32)
By default, the resources held by a GradientTape are released as soon as GradientTape.gradient() method is called. To compute multiple gradients over the same computation, create a persistent gradient tape. This allows multiple calls to the gradient() method as resources are released when the tape object is garbage collected. For example:
x = tf.constant(3.0)
with tf.GradientTape(persistent=True) as g:
g.watch(x)
y = x * x
z = y * y
dz_dx = g.gradient(z, x) # (4*x^3 at x = 3)
print(dz_dx)
tf.Tensor(108.0, shape=(), dtype=float32)
dy_dx = g.gradient(y, x)
print(dy_dx)
tf.Tensor(6.0, shape=(), dtype=float32)
By default GradientTape will automatically watch any trainable variables that are accessed inside the context. If you want fine grained control over which variables are watched you can disable automatic tracking by passing watch_accessed_variables=False to the tape constructor:
x = tf.Variable(2.0)
w = tf.Variable(5.0)
with tf.GradientTape(
watch_accessed_variables=False, persistent=True) as tape:
tape.watch(x)
y = x ** 2 # Gradients will be available for `x`.
z = w ** 3 # No gradients will be available as `w` isn't being watched.
dy_dx = tape.gradient(y, x)
print(dy_dx)
tf.Tensor(4.0, shape=(), dtype=float32)
# No gradients will be available as `w` isn't being watched.
dz_dy = tape.gradient(z, w)
print(dz_dy)
None
Note that when using models you should ensure that your variables exist when using watch_accessed_variables=False. Otherwise it's quite easy to make your first iteration not have any gradients: a = tf.keras.layers.Dense(32)
b = tf.keras.layers.Dense(32)
with tf.GradientTape(watch_accessed_variables=False) as tape:
tape.watch(a.variables) # Since `a.build` has not been called at this point
# `a.variables` will return an empty list and the
# tape will not be watching anything.
result = b(a(inputs))
tape.gradient(result, a.variables) # The result of this computation will be
# a list of `None`s since a's variables
# are not being watched.
Note that only tensors with real or complex dtypes are differentiable.
Args
persistent Boolean controlling whether a persistent gradient tape is created. False by default, which means at most one call can be made to the gradient() method on this object.
watch_accessed_variables Boolean controlling whether the tape will automatically watch any (trainable) variables accessed while the tape is active. Defaults to True meaning gradients can be requested from any result computed in the tape derived from reading a trainable Variable. If False users must explicitly watch any Variables they want to request gradients from. Methods batch_jacobian View source
batch_jacobian(
target, source, unconnected_gradients=tf.UnconnectedGradients.NONE,
parallel_iterations=None, experimental_use_pfor=True
)
Computes and stacks per-example jacobians. See the wikipedia article for the definition of a Jacobian. This function is essentially an efficient implementation of the following: tf.stack([self.jacobian(y[i], x[i]) for i in range(x.shape[0])]). Note that compared to GradientTape.jacobian, which computes the gradient of each output value w.r.t. each input value, this function is useful when target[i,...] is independent of source[j,...] for j != i. This assumption allows more efficient computation than GradientTape.jacobian: the output, as well as intermediate activations, are lower dimensional and avoid the large blocks of redundant zeros that the full jacobian computation would produce under the independence assumption.
Note: Unless you set persistent=True a GradientTape can only be used to compute one set of gradients (or jacobians).
Example usage: with tf.GradientTape() as g:
x = tf.constant([[1., 2.], [3., 4.]], dtype=tf.float32)
g.watch(x)
y = x * x
batch_jacobian = g.batch_jacobian(y, x)
# batch_jacobian is [[[2, 0], [0, 4]], [[6, 0], [0, 8]]]
Args
target A tensor with rank 2 or higher and with shape [b, y1, ..., y_n]. target[i,...] should only depend on source[i,...].
source A tensor with rank 2 or higher and with shape [b, x1, ..., x_m].
unconnected_gradients a value which can either hold 'none' or 'zero' and alters the value which will be returned if the target and sources are unconnected. The possible values and effects are detailed in 'UnconnectedGradients' and it defaults to 'none'.
parallel_iterations A knob to control how many iterations are dispatched in parallel. This knob can be used to control the total memory usage.
experimental_use_pfor If true, uses pfor for computing the Jacobian. Else uses a tf.while_loop.
Returns A tensor t with shape [b, y_1, ..., y_n, x1, ..., x_m] where t[i, ...] is the jacobian of target[i, ...] w.r.t. source[i, ...], i.e. stacked per-example jacobians.
Raises
RuntimeError If called on a used, non-persistent tape.
RuntimeError If called on a non-persistent tape with eager execution enabled and without enabling experimental_use_pfor.
ValueError If vectorization of jacobian computation fails or if first dimension of target and source do not match. gradient View source
gradient(
target, sources, output_gradients=None,
unconnected_gradients=tf.UnconnectedGradients.NONE
)
Computes the gradient using operations recorded in context of this tape.
Note: Unless you set persistent=True a GradientTape can only be used to compute one set of gradients (or jacobians).
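For example, a minimal sketch of the output_gradients argument (the input values are chosen purely for illustration): each entry of output_gradients weights the corresponding element of target before the per-element gradients are summed into sources.

```python
import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])
with tf.GradientTape() as tape:
    tape.watch(x)
    y = x * x  # dy/dx = 2*x elementwise
# Weight the three output gradients by 1, 0 and 2 respectively.
dy_dx = tape.gradient(y, x, output_gradients=tf.constant([1.0, 0.0, 2.0]))
# dy_dx == [2.0, 0.0, 12.0]
```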
Args
target a list or nested structure of Tensors or Variables to be differentiated.
sources a list or nested structure of Tensors or Variables. target will be differentiated against elements in sources.
output_gradients a list of gradients, one for each element of target. Defaults to None.
unconnected_gradients a value which can either hold 'none' or 'zero' and alters the value which will be returned if the target and sources are unconnected. The possible values and effects are detailed in 'UnconnectedGradients' and it defaults to 'none'.
Returns a list or nested structure of Tensors (or IndexedSlices, or None), one for each element in sources. Returned structure is the same as the structure of sources.
Raises
RuntimeError If called on a used, non-persistent tape.
RuntimeError If called inside the context of the tape.
ValueError If the target is a variable or if unconnected gradients is called with an unknown value. jacobian View source
jacobian(
target, sources, unconnected_gradients=tf.UnconnectedGradients.NONE,
parallel_iterations=None, experimental_use_pfor=True
)
Computes the jacobian using operations recorded in context of this tape.
Note: Unless you set persistent=True a GradientTape can only be used to compute one set of gradients (or jacobians).
See the wikipedia article for the definition of a Jacobian. Example usage: with tf.GradientTape() as g:
x = tf.constant([1.0, 2.0])
g.watch(x)
y = x * x
jacobian = g.jacobian(y, x)
# jacobian value is [[2., 0.], [0., 4.]]
Args
target Tensor to be differentiated.
sources a list or nested structure of Tensors or Variables. target will be differentiated against elements in sources.
unconnected_gradients a value which can either hold 'none' or 'zero' and alters the value which will be returned if the target and sources are unconnected. The possible values and effects are detailed in 'UnconnectedGradients' and it defaults to 'none'.
parallel_iterations A knob to control how many iterations are dispatched in parallel. This knob can be used to control the total memory usage.
experimental_use_pfor If true, vectorizes the jacobian computation. Else falls back to a sequential while_loop. Vectorization can sometimes fail or lead to excessive memory usage. This option can be used to disable vectorization in such cases.
Returns A list or nested structure of Tensors (or None), one for each element in sources. Returned structure is the same as the structure of sources. Note if any gradient is sparse (IndexedSlices), jacobian function currently makes it dense and returns a Tensor instead. This may change in the future.
Raises
RuntimeError If called on a used, non-persistent tape.
RuntimeError If called on a non-persistent tape with eager execution enabled and without enabling experimental_use_pfor.
ValueError If vectorization of jacobian computation fails. reset View source
reset()
Clears all information stored in this tape. Equivalent to exiting and reentering the tape context manager with a new tape. For example, the two following code blocks are equivalent: with tf.GradientTape() as t:
loss = loss_fn()
with tf.GradientTape() as t:
loss += other_loss_fn()
t.gradient(loss, ...) # Only differentiates other_loss_fn, not loss_fn
# The following is equivalent to the above
with tf.GradientTape() as t:
loss = loss_fn()
t.reset()
loss += other_loss_fn()
t.gradient(loss, ...) # Only differentiates other_loss_fn, not loss_fn
This is useful if you don't want to exit the context manager for the tape, or can't because the desired reset point is inside a control flow construct: with tf.GradientTape() as t:
loss = ...
if loss > k:
t.reset()
stop_recording View source
@tf_contextlib.contextmanager
stop_recording()
Temporarily stops recording operations on this tape. Operations executed while this context manager is active will not be recorded on the tape. This is useful for reducing the memory used by tracing all computations. For example:
x = tf.constant(4.0)
with tf.GradientTape() as tape:
with tape.stop_recording():
y = x ** 2
dy_dx = tape.gradient(y, x)
print(dy_dx)
None
Yields None
Raises
RuntimeError if the tape is not currently recording. watch View source
watch(
tensor
)
Ensures that tensor is being traced by this tape.
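For example, a minimal sketch: constant tensors are not watched automatically, so gradients with respect to them are only available after an explicit watch call.

```python
import tensorflow as tf

x = tf.constant(2.0)      # constants are not watched automatically
with tf.GradientTape() as tape:
    tape.watch(x)         # explicitly trace `x` on this tape
    y = 3.0 * x
grad = tape.gradient(y, x)
# grad is tf.Tensor(3.0, shape=(), dtype=float32)
```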
Args
tensor a Tensor or list of Tensors.
Raises
ValueError if it encounters something that is not a tensor. watched_variables View source
watched_variables()
Returns variables watched by this tape in order of construction. __enter__ View source
__enter__()
Enters a context inside which operations are recorded on this tape. __exit__ View source
__exit__(
typ, value, traceback
)
Exits the recording context, no further operations are traced. | tensorflow.gradienttape |
tf.grad_pass_through View source on GitHub Creates a grad-pass-through op with the forward behavior provided in f. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.grad_pass_through
tf.grad_pass_through(
f
)
Use this function to wrap any op, maintaining its behavior in the forward pass, but replacing the original op in the backward graph with an identity. For example: x = tf.Variable(1.0, name="x")
z = tf.Variable(3.0, name="z")
with tf.GradientTape() as tape:
# y will evaluate to 9.0
y = tf.grad_pass_through(x.assign)(z**2)
# grads will evaluate to 6.0
grads = tape.gradient(y, z)
Another example is a 'differentiable' moving average approximation, where gradients are allowed to flow into the last value fed to the moving average, but the moving average is still used for the forward pass: x = ... # Some scalar value
# A moving average object, we don't need to know how this is implemented
moving_average = MovingAverage()
with tf.GradientTape() as tape:
# mavg_x will evaluate to the current running average value
mavg_x = tf.grad_pass_through(moving_average)(x)
grads = tape.gradient(mavg_x, x) # grads will evaluate to 1.0
Args
f function f(*x) that returns a Tensor or nested structure of Tensor outputs.
Returns A function h(x) which returns the same values as f(x) and whose gradients are the same as those of an identity function. | tensorflow.grad_pass_through |
tf.Graph View source on GitHub A TensorFlow computation, represented as a dataflow graph. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.Graph
tf.Graph()
Graphs are used by tf.functions to represent the function's computations. Each graph contains a set of tf.Operation objects, which represent units of computation; and tf.Tensor objects, which represent the units of data that flow between operations. Using graphs directly (deprecated) A tf.Graph can be constructed and used directly without a tf.function, as was required in TensorFlow 1, but this is deprecated and it is recommended to use a tf.function instead. If a graph is directly used, other deprecated TensorFlow 1 classes are also required to execute the graph, such as a tf.compat.v1.Session. A default graph can be registered with the tf.Graph.as_default context manager. Then, operations will be added to the graph instead of being executed eagerly. For example: g = tf.Graph()
with g.as_default():
# Define operations and tensors in `g`.
c = tf.constant(30.0)
assert c.graph is g
tf.compat.v1.get_default_graph() can be used to obtain the default graph. Important note: This class is not thread-safe for graph construction. All operations should be created from a single thread, or external synchronization must be provided. Unless otherwise specified, all methods are not thread-safe. A Graph instance supports an arbitrary number of "collections" that are identified by name. For convenience when building a large graph, collections can store groups of related objects: for example, the tf.Variable uses a collection (named tf.GraphKeys.GLOBAL_VARIABLES) for all variables that are created during the construction of a graph. The caller may define additional collections by specifying a new name.
Attributes
building_function Returns True iff this graph represents a function.
collections Returns the names of the collections known to this graph.
finalized True if this graph has been finalized.
graph_def_versions The GraphDef version information of this graph. For details on the meaning of each version, see GraphDef.
seed The graph-level random seed of this graph.
version Returns a version number that increases as ops are added to the graph. Note that this is unrelated to the tf.Graph.graph_def_versions.
Methods add_to_collection View source
add_to_collection(
name, value
)
Stores value in the collection with the given name. Note that collections are not sets, so it is possible to add a value to a collection several times.
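For example, a minimal sketch (the collection name "my_constants" is chosen for illustration) showing that adding the same value twice stores it twice, since collections are lists rather than sets:

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    c = tf.constant(1.0, name="c")
g.add_to_collection("my_constants", c)
g.add_to_collection("my_constants", c)   # collections are not sets
values = g.get_collection("my_constants")
# values == [c, c]
```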
Args
name The key for the collection. The GraphKeys class contains many standard names for collections.
value The value to add to the collection. add_to_collections View source
add_to_collections(
names, value
)
Stores value in the collections given by names. Note that collections are not sets, so it is possible to add a value to a collection several times. This function makes sure that duplicates in names are ignored, but it will not check for pre-existing membership of value in any of the collections in names. names can be any iterable, but if names is a string, it is treated as a single collection name.
Args
names The keys for the collections to add to. The GraphKeys class contains many standard names for collections.
value The value to add to the collections. as_default View source
as_default()
Returns a context manager that makes this Graph the default graph. This method should be used if you want to create multiple graphs in the same process. For convenience, a global default graph is provided, and all ops will be added to this graph if you do not create a new graph explicitly. Use this method with the with keyword to specify that ops created within the scope of a block should be added to this graph. In this case, once the scope of the with is exited, the previous default graph is set again as default. There is a stack, so it's ok to have multiple nested levels of as_default calls. The default graph is a property of the current thread. If you create a new thread, and wish to use the default graph in that thread, you must explicitly add a with g.as_default(): in that thread's function. The following code examples are equivalent: # 1. Using Graph.as_default():
g = tf.Graph()
with g.as_default():
c = tf.constant(5.0)
assert c.graph is g
# 2. Constructing and making default:
with tf.Graph().as_default() as g:
c = tf.constant(5.0)
assert c.graph is g
If eager execution is enabled ops created under this context manager will be added to the graph instead of executed eagerly.
Returns A context manager for using this graph as the default graph.
as_graph_def View source
as_graph_def(
from_version=None, add_shapes=False
)
Returns a serialized GraphDef representation of this graph. The serialized GraphDef can be imported into another Graph (using tf.import_graph_def) or used with the C++ Session API. This method is thread-safe.
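For example, a minimal sketch: the returned GraphDef protocol buffer exposes the graph's nodes through its node field.

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    tf.constant(1.0, name="c")
graph_def = g.as_graph_def()
# The Const op named "c" appears as a node in the serialized GraphDef.
node_names = [node.name for node in graph_def.node]
```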
Args
from_version Optional. If this is set, returns a GraphDef containing only the nodes that were added to this graph since its version property had the given value.
add_shapes If true, adds an "_output_shapes" list attr to each node with the inferred shapes of each of its outputs.
Returns A GraphDef protocol buffer.
Raises
ValueError If the graph_def would be too large. as_graph_element View source
as_graph_element(
obj, allow_tensor=True, allow_operation=True
)
Returns the object referred to by obj, as an Operation or Tensor. This function validates that obj represents an element of this graph, and gives an informative error message if it is not. This function is the canonical way to get/validate an object of one of the allowed types from an external argument reference in the Session API. This method may be called concurrently from multiple threads.
Args
obj A Tensor, an Operation, or the name of a tensor or operation. Can also be any object with an _as_graph_element() method that returns a value of one of these types. Note: _as_graph_element will be called inside the graph's lock and so may not modify the graph.
allow_tensor If true, obj may refer to a Tensor.
allow_operation If true, obj may refer to an Operation.
Returns The Tensor or Operation in the Graph corresponding to obj.
Raises
TypeError If obj is not a type we support attempting to convert to types.
ValueError If obj is of an appropriate type but invalid. For example, an invalid string.
KeyError If obj is not an object in the graph. clear_collection View source
clear_collection(
name
)
Clears all values in a collection.
Args
name The key for the collection. The GraphKeys class contains many standard names for collections. colocate_with View source
@tf_contextlib.contextmanager
colocate_with(
op, ignore_existing=False
)
Returns a context manager that specifies an op to colocate with.
Note: this function is not for public use, only for internal libraries.
For example: a = tf.Variable([1.0])
with g.colocate_with(a):
b = tf.constant(1.0)
c = tf.add(a, b)
b and c will always be colocated with a, no matter where a is eventually placed.
Note: Using a colocation scope resets any existing device constraints.
If op is None then ignore_existing must be True and the new scope resets all colocation and device constraints.
Args
op The op to colocate all created ops with, or None.
ignore_existing If true, only applies colocation of this op within the context, rather than applying all colocation properties on the stack. If op is None, this value must be True.
Raises
ValueError if op is None but ignore_existing is False.
Yields A context manager that specifies the op with which to colocate newly created ops.
container View source
@tf_contextlib.contextmanager
container(
container_name
)
Returns a context manager that specifies the resource container to use. Stateful operations, such as variables and queues, can maintain their states on devices so that they can be shared by multiple processes. A resource container is a string name under which these stateful operations are tracked. These resources can be released or cleared with tf.Session.reset(). For example: with g.container('experiment0'):
# All stateful Operations constructed in this context will be placed
# in resource container "experiment0".
v1 = tf.Variable([1.0])
v2 = tf.Variable([2.0])
with g.container("experiment1"):
# All stateful Operations constructed in this context will be
# placed in resource container "experiment1".
v3 = tf.Variable([3.0])
q1 = tf.queue.FIFOQueue(10, tf.float32)
# All stateful Operations constructed in this context will be
# be created in the "experiment0".
v4 = tf.Variable([4.0])
q1 = tf.queue.FIFOQueue(20, tf.float32)
with g.container(""):
# All stateful Operations constructed in this context will be
# be placed in the default resource container.
v5 = tf.Variable([5.0])
q3 = tf.queue.FIFOQueue(30, tf.float32)
# Resets container "experiment0", after which the state of v1, v2, v4, q1
# will become undefined (such as uninitialized).
tf.Session.reset(target, ["experiment0"])
Args
container_name container name string.
Returns A context manager for defining resource containers for stateful ops, yields the container name.
control_dependencies View source
control_dependencies(
control_inputs
)
Returns a context manager that specifies control dependencies. Use with the with keyword to specify that all operations constructed within the context should have control dependencies on control_inputs. For example: with g.control_dependencies([a, b, c]):
# `d` and `e` will only run after `a`, `b`, and `c` have executed.
d = ...
e = ...
Multiple calls to control_dependencies() can be nested, and in that case a new Operation will have control dependencies on the union of control_inputs from all active contexts. with g.control_dependencies([a, b]):
# Ops constructed here run after `a` and `b`.
with g.control_dependencies([c, d]):
# Ops constructed here run after `a`, `b`, `c`, and `d`.
You can pass None to clear the control dependencies: with g.control_dependencies([a, b]):
# Ops constructed here run after `a` and `b`.
with g.control_dependencies(None):
# Ops constructed here run normally, not waiting for either `a` or `b`.
with g.control_dependencies([c, d]):
# Ops constructed here run after `c` and `d`, also not waiting
# for either `a` or `b`.
Note: The control dependencies context applies only to ops that are constructed within the context. Merely using an op or tensor in the context does not add a control dependency. The following example illustrates this point:
# WRONG
def my_func(pred, tensor):
t = tf.matmul(tensor, tensor)
with tf.control_dependencies([pred]):
# The matmul op is created outside the context, so no control
# dependency will be added.
return t
# RIGHT
def my_func(pred, tensor):
with tf.control_dependencies([pred]):
# The matmul op is created in the context, so a control dependency
# will be added.
return tf.matmul(tensor, tensor)
Also note that though execution of ops created under this scope will trigger execution of the dependencies, the ops created under this scope might still be pruned from a normal tensorflow graph. For example, in the following snippet of code the dependencies are never executed: loss = model.loss()
with tf.control_dependencies(dependencies):
loss = loss + tf.constant(1) # note: dependencies ignored in the
# backward pass
return tf.gradients(loss, model.variables)
This is because evaluating the gradient graph does not require evaluating the constant(1) op created in the forward pass.
Args
control_inputs A list of Operation or Tensor objects which must be executed or computed before running the operations defined in the context. Can also be None to clear the control dependencies.
Returns A context manager that specifies control dependencies for all operations constructed within the context.
Raises
TypeError If control_inputs is not a list of Operation or Tensor objects. create_op View source
create_op(
op_type, inputs, dtypes=None, input_types=None, name=None, attrs=None,
op_def=None, compute_shapes=True, compute_device=True
)
Creates an Operation in this graph. (deprecated arguments) Warning: SOME ARGUMENTS ARE DEPRECATED: (compute_shapes). They will be removed in a future version. Instructions for updating: Shapes are always computed; don't use the compute_shapes as it has no effect. This is a low-level interface for creating an Operation. Most programs will not call this method directly, and instead use the Python op constructors, such as tf.constant(), which add ops to the default graph.
Args
op_type The Operation type to create. This corresponds to the OpDef.name field for the proto that defines the operation.
inputs A list of Tensor objects that will be inputs to the Operation.
dtypes (Optional) A list of DType objects that will be the types of the tensors that the operation produces.
input_types (Optional.) A list of DTypes that will be the types of the tensors that the operation consumes. By default, uses the base DType of each input in inputs. Operations that expect reference-typed inputs must specify input_types explicitly.
name (Optional.) A string name for the operation. If not specified, a name is generated based on op_type.
attrs (Optional.) A dictionary where the key is the attribute name (a string) and the value is the respective attr attribute of the NodeDef proto that will represent the operation (an AttrValue proto).
op_def (Optional.) The OpDef proto that describes the op_type that the operation will have.
compute_shapes (Optional.) Deprecated. Has no effect (shapes are always computed).
compute_device (Optional.) If True, device functions will be executed to compute the device property of the Operation.
Raises
TypeError if any of the inputs is not a Tensor.
ValueError if colocation conflicts with existing device assignment.
Returns An Operation object.
device View source
@tf_contextlib.contextmanager
device(
device_name_or_function
)
Returns a context manager that specifies the default device to use. The device_name_or_function argument may either be a device name string, a device function, or None: If it is a device name string, all operations constructed in this context will be assigned to the device with that name, unless overridden by a nested device() context. If it is a function, it will be treated as a function from Operation objects to device name strings, and invoked each time a new Operation is created. The Operation will be assigned to the device with the returned name. If it is None, all device() invocations from the enclosing context will be ignored. For information about the valid syntax of device name strings, see the documentation in DeviceNameUtils. For example: with g.device('/device:GPU:0'):
# All operations constructed in this context will be placed
# on GPU 0.
with g.device(None):
# All operations constructed in this context will have no
# assigned device.
# Defines a function from `Operation` to device string.
def matmul_on_gpu(n):
if n.type == "MatMul":
return "/device:GPU:0"
else:
return "/cpu:0"
with g.device(matmul_on_gpu):
# All operations of type "MatMul" constructed in this context
# will be placed on GPU 0; all other operations will be placed
# on CPU 0.
Note: The device scope may be overridden by op wrappers or other library code. For example, a variable assignment op v.assign() must be colocated with the tf.Variable v, and incompatible device scopes will be ignored.
Args
device_name_or_function The device name or function to use in the context.
Yields A context manager that specifies the default device to use for newly created ops.
Raises
RuntimeError If device scopes are not properly nested. finalize View source
finalize()
Finalizes this graph, making it read-only. After calling g.finalize(), no new operations can be added to g. This method is used to ensure that no operations are added to a graph when it is shared between multiple threads, for example when using a tf.compat.v1.train.QueueRunner. get_all_collection_keys View source
get_all_collection_keys()
Returns a list of collections used in this graph. get_collection View source
get_collection(
name, scope=None
)
Returns a list of values in the collection with the given name. This is different from get_collection_ref() which always returns the actual collection list if it exists in that it returns a new list each time it is called.
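For example, a minimal sketch contrasting the two methods (the collection name "keys" and its values are chosen for illustration): mutating the list from get_collection_ref changes the collection, while the list from get_collection is an independent copy.

```python
import tensorflow as tf

g = tf.Graph()
g.add_to_collection("keys", 1)
ref = g.get_collection_ref("keys")   # the collection list itself
copy = g.get_collection("keys")      # a fresh copy each call
ref.append(2)                        # mutates the collection in place
# g.get_collection("keys") == [1, 2], while copy is still [1]
```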
Args
name The key for the collection. For example, the GraphKeys class contains many standard names for collections.
scope (Optional.) A string. If supplied, the resulting list is filtered to include only items whose name attribute matches scope using re.match. Items without a name attribute are never returned if a scope is supplied. The choice of re.match means that a scope without special tokens filters by prefix.
Returns The list of values in the collection with the given name, or an empty list if no value has been added to that collection. The list contains the values in the order under which they were collected.
get_collection_ref View source
get_collection_ref(
name
)
Returns a list of values in the collection with the given name. If the collection exists, this returns the list itself, which can be modified in place to change the collection. If the collection does not exist, it is created as an empty list and the list is returned. This is different from get_collection() which always returns a copy of the collection list if it exists and never creates an empty collection.
Args
name The key for the collection. For example, the GraphKeys class contains many standard names for collections.
Returns The list of values in the collection with the given name, or an empty list if no value has been added to that collection.
get_name_scope View source
get_name_scope()
Returns the current name scope. For example: with tf.name_scope('scope1'):
with tf.name_scope('scope2'):
print(tf.compat.v1.get_default_graph().get_name_scope())
would print the string scope1/scope2.
Returns A string representing the current name scope.
get_operation_by_name View source
get_operation_by_name(
name
)
Returns the Operation with the given name. This method may be called concurrently from multiple threads.
Args
name The name of the Operation to return.
Returns The Operation with the given name.
Raises
TypeError If name is not a string.
KeyError If name does not correspond to an operation in this graph. get_operations View source
get_operations()
Return the list of operations in the graph. You can modify the operations in place, but modifications to the list such as inserts/delete have no effect on the list of operations known to the graph. This method may be called concurrently from multiple threads.
Returns A list of Operations.
get_tensor_by_name View source
get_tensor_by_name(
name
)
Returns the Tensor with the given name. This method may be called concurrently from multiple threads.
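For example, a minimal sketch: tensor names have the form "<op_name>:<output_index>", so the first output of the op named "c" is the tensor "c:0".

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    c = tf.constant(1.0, name="c")
t = g.get_tensor_by_name("c:0")    # first output of the op "c"
op = g.get_operation_by_name("c")  # the op itself
# t is c, and op is c.op
```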
Args
name The name of the Tensor to return.
Returns The Tensor with the given name.
Raises
TypeError If name is not a string.
KeyError If name does not correspond to a tensor in this graph. gradient_override_map View source
@tf_contextlib.contextmanager
gradient_override_map(
op_type_map
)
EXPERIMENTAL: A context manager for overriding gradient functions. This context manager can be used to override the gradient function that will be used for ops within the scope of the context. For example: @tf.RegisterGradient("CustomSquare")
def _custom_square_grad(op, grad):
# ...
with tf.Graph().as_default() as g:
c = tf.constant(5.0)
s_1 = tf.square(c) # Uses the default gradient for tf.square.
with g.gradient_override_map({"Square": "CustomSquare"}):
s_2 = tf.square(c) # Uses _custom_square_grad to compute the
# gradient of s_2.
Args
op_type_map A dictionary mapping op type strings to alternative op type strings.
Returns A context manager that sets the alternative op type to be used for one or more ops created in that context.
Raises
TypeError If op_type_map is not a dictionary mapping strings to strings. is_feedable View source
is_feedable(
tensor
)
Returns True if and only if tensor is feedable. is_fetchable View source
is_fetchable(
tensor_or_op
)
Returns True if and only if tensor_or_op is fetchable. name_scope View source
@tf_contextlib.contextmanager
name_scope(
name
)
Returns a context manager that creates hierarchical names for operations. A graph maintains a stack of name scopes. A with name_scope(...): statement pushes a new name onto the stack for the lifetime of the context. The name argument will be interpreted as follows: A string (not ending with '/') will create a new name scope, in which name is appended to the prefix of all operations created in the context. If name has been used before, it will be made unique by calling self.unique_name(name). A scope previously captured from a with g.name_scope(...) as scope: statement will be treated as an "absolute" name scope, which makes it possible to re-enter existing scopes. A value of None or the empty string will reset the current name scope to the top-level (empty) name scope. For example: with tf.Graph().as_default() as g:
c = tf.constant(5.0, name="c")
assert c.op.name == "c"
c_1 = tf.constant(6.0, name="c")
assert c_1.op.name == "c_1"
# Creates a scope called "nested"
with g.name_scope("nested") as scope:
nested_c = tf.constant(10.0, name="c")
assert nested_c.op.name == "nested/c"
# Creates a nested scope called "inner".
with g.name_scope("inner"):
nested_inner_c = tf.constant(20.0, name="c")
assert nested_inner_c.op.name == "nested/inner/c"
# Creates a nested scope called "inner_1".
with g.name_scope("inner"):
nested_inner_1_c = tf.constant(30.0, name="c")
assert nested_inner_1_c.op.name == "nested/inner_1/c"
# Treats `scope` as an absolute name scope, and
# switches to the "nested/" scope.
with g.name_scope(scope):
nested_d = tf.constant(40.0, name="d")
assert nested_d.op.name == "nested/d"
with g.name_scope(""):
e = tf.constant(50.0, name="e")
assert e.op.name == "e"
The name of the scope itself can be captured by with g.name_scope(...) as scope:, which stores the name of the scope in the variable scope. This value can be used to name an operation that represents the overall result of executing the ops in a scope. For example: inputs = tf.constant(...)
with g.name_scope('my_layer') as scope:
weights = tf.Variable(..., name="weights")
biases = tf.Variable(..., name="biases")
affine = tf.matmul(inputs, weights) + biases
output = tf.nn.relu(affine, name=scope)
Note: This constructor validates the given name. Valid scope names match one of the following regular expressions:
[A-Za-z0-9.][A-Za-z0-9_.\-/]* (for scopes at the root)
[A-Za-z0-9_.\-/]* (for other scopes)
Args
name A name for the scope.
Returns A context manager that installs name as a new name scope.
Raises
ValueError If name is not a valid scope name, according to the rules above. prevent_feeding View source
prevent_feeding(
tensor
)
Marks the given tensor as unfeedable in this graph. prevent_fetching View source
prevent_fetching(
op
)
Marks the given op as unfetchable in this graph. switch_to_thread_local View source
switch_to_thread_local()
Make device, colocation and dependencies stacks thread-local. Device, colocation and dependencies stacks are not thread-local by default. If multiple threads access them, then the state is shared. This means that one thread may affect the behavior of another thread. After this method is called, the stacks become thread-local. If multiple threads access them, then the state is not shared. Each thread uses its own value; a thread doesn't affect other threads by mutating such a stack. The initial value for every thread's stack is set to the current value of the stack when switch_to_thread_local() was first called. unique_name View source
unique_name(
name, mark_as_used=True
)
Return a unique operation name for name.
Note: You rarely need to call unique_name() directly. Most of the time you just need to create with g.name_scope() blocks to generate structured names.
unique_name is used to generate structured names, separated by "/", to help identify operations when debugging a graph. Operation names are displayed in error messages reported by the TensorFlow runtime, and in various visualization tools such as TensorBoard. If mark_as_used is set to True, which is the default, a new unique name is created and marked as in use. If it's set to False, the unique name is returned without actually being marked as used. This is useful when the caller simply wants to know what the created name would be.
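The uniquifying rule can be modeled in a few lines of plain Python (a simplified sketch of the documented behavior, not TensorFlow's actual implementation, which also resolves collisions with explicitly created names):

```python
# Minimal model of Graph.unique_name's documented behavior: the first use
# of a name is returned as-is; later uses get a "_1", "_2", ... suffix.
class NameUniquifier:
  def __init__(self):
    self._used = {}  # base name -> number of times already used

  def unique_name(self, name, mark_as_used=True):
    count = self._used.get(name, 0)
    candidate = name if count == 0 else "%s_%d" % (name, count)
    if mark_as_used:
      self._used[name] = count + 1
    return candidate

u = NameUniquifier()
print(u.unique_name("c"))                       # "c"
print(u.unique_name("c"))                       # "c_1"
print(u.unique_name("c", mark_as_used=False))   # "c_2" (peek only)
print(u.unique_name("c"))                       # "c_2"
```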
Args
name The name for an operation.
mark_as_used Whether to mark this name as being used.
Returns A string to be passed to create_op() that will be used to name the operation being created. | tensorflow.graph |
Module: tf.graph_util Helpers to manipulate a tensor graph in python. Functions import_graph_def(...): Imports the graph from graph_def into the current default Graph. (deprecated arguments) | tensorflow.graph_util |
tf.graph_util.import_graph_def View source on GitHub Imports the graph from graph_def into the current default Graph. (deprecated arguments) View aliases Main aliases
tf.import_graph_def Compat aliases for migration See Migration guide for more details. tf.compat.v1.graph_util.import_graph_def, tf.compat.v1.import_graph_def
tf.graph_util.import_graph_def(
graph_def, input_map=None, return_elements=None, name=None, op_dict=None,
producer_op_list=None
)
Warning: SOME ARGUMENTS ARE DEPRECATED: (op_dict). They will be removed in a future version. Instructions for updating: Please file an issue at https://github.com/tensorflow/tensorflow/issues if you depend on this feature. This function provides a way to import a serialized TensorFlow GraphDef protocol buffer, and extract individual objects in the GraphDef as tf.Tensor and tf.Operation objects. Once extracted, these objects are placed into the current default Graph. See tf.Graph.as_graph_def for a way to create a GraphDef proto.
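For example (a hedged sketch assuming TensorFlow 2.x; the tensor names "a:0" and "sum:0" follow from the explicit name= arguments below):

```python
import tensorflow as tf

# Build a tiny graph and serialize it to a GraphDef.
with tf.Graph().as_default() as g1:
  a = tf.constant(2.0, name="a")
  b = tf.constant(3.0, name="b")
  s = tf.add(a, b, name="sum")
graph_def = g1.as_graph_def()

# Import it into a fresh graph, remapping input "a:0" to a new constant.
with tf.Graph().as_default() as g2:
  new_a = tf.constant(10.0)
  (s_imported,) = tf.graph_util.import_graph_def(
      graph_def, input_map={"a:0": new_a}, return_elements=["sum:0"])
  with tf.compat.v1.Session() as sess:
    print(sess.run(s_imported))  # 13.0
```

Note that the imported op names carry the default "import/" prefix unless name is overridden.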
Args
graph_def A GraphDef proto containing operations to be imported into the default graph.
input_map A dictionary mapping input names (as strings) in graph_def to Tensor objects. The values of the named input tensors in the imported graph will be re-mapped to the respective Tensor values.
return_elements A list of strings containing operation names in graph_def that will be returned as Operation objects; and/or tensor names in graph_def that will be returned as Tensor objects.
name (Optional.) A prefix that will be prepended to the names in graph_def. Note that this does not apply to imported function names. Defaults to "import".
op_dict (Optional.) Deprecated, do not use.
producer_op_list (Optional.) An OpList proto with the (possibly stripped) list of OpDefs used by the producer of the graph. If provided, unrecognized attrs for ops in graph_def that have their default value according to producer_op_list will be removed. This will allow some more GraphDefs produced by later binaries to be accepted by earlier binaries.
Returns A list of Operation and/or Tensor objects from the imported graph, corresponding to the names in return_elements, and None if return_elements is None.
Raises
TypeError If graph_def is not a GraphDef proto, input_map is not a dictionary mapping strings to Tensor objects, or return_elements is not a list of strings.
ValueError If input_map, or return_elements contains names that do not appear in graph_def, or graph_def is not well-formed (e.g. it refers to an unknown tensor). | tensorflow.graph_util.import_graph_def |
tf.group View source on GitHub Create an op that groups multiple operations. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.group
tf.group(
*inputs, **kwargs
)
When this op finishes, all ops in inputs have finished. This op has no output.
Note: In TensorFlow 2 with eager and/or Autograph, you should not need this method, as code executes in your expected order. Only use tf.group when working with v1-style code or in a graph context such as inside Dataset.map.
When operating in a v1-style graph context, ops are not executed in the same order as specified in the code; TensorFlow will attempt to execute ops in parallel or in an order convenient to the result it is computing. tf.group allows you to request that one or more results finish before execution continues. tf.group creates a single op (of type NoOp), and then adds appropriate control dependencies. Thus, c = tf.group(a, b) will compute the same graph as this: with tf.control_dependencies([a, b]):
c = tf.no_op()
See also tf.tuple and tf.control_dependencies.
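A runnable v1-style sketch (assumptions: TensorFlow 2.x with tf.compat.v1; the variable names are invented for this illustration):

```python
import tensorflow as tf

with tf.Graph().as_default() as g:
  v1 = tf.compat.v1.get_variable("v1", initializer=0.0)
  v2 = tf.compat.v1.get_variable("v2", initializer=0.0)
  a = v1.assign(1.0)
  b = v2.assign(2.0)
  step = tf.group(a, b)  # a single NoOp with control deps on a and b
  init = tf.compat.v1.global_variables_initializer()
  with tf.compat.v1.Session() as sess:
    sess.run(init)
    assert sess.run(step) is None  # tf.group has no output
    print(sess.run([v1, v2]))      # [1.0, 2.0]
```

Running step guarantees both assignments have finished, without producing any value itself.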
Args
*inputs Zero or more tensors to group.
name A name for this operation (optional).
Returns An Operation that executes all its inputs.
Raises
ValueError If an unknown keyword argument is provided. | tensorflow.group |
tf.guarantee_const Gives a guarantee to the TF runtime that the input tensor is a constant. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.guarantee_const
tf.guarantee_const(
input, name=None
)
The runtime is then free to make optimizations based on this. Only accepts value-typed tensors as inputs and rejects resource variable handles. Returns the input tensor without modification.
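A minimal sketch (assuming TensorFlow 2.x graph mode; the placeholder-with-default input stands in for any value-typed tensor the caller promises is constant):

```python
import tensorflow as tf

with tf.Graph().as_default() as g:
  x = tf.compat.v1.placeholder_with_default(
      tf.constant([1.0, 2.0]), shape=[2])
  y = tf.guarantee_const(x)  # same values and dtype; just a promise
  z = y * 2.0
  with tf.compat.v1.Session() as sess:
    print(sess.run(z))  # [2. 4.]
```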
Args
input A Tensor.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | tensorflow.guarantee_const |
tf.hessians View source on GitHub Constructs the Hessian of sum of ys with respect to x in xs.
tf.hessians(
ys, xs, gate_gradients=False, aggregation_method=None, name='hessians'
)
hessians() adds ops to the graph to output the Hessian matrix of ys with respect to xs. It returns a list of Tensor of length len(xs) where each tensor is the Hessian of sum(ys). The Hessian is a matrix of second-order partial derivatives of a scalar tensor (see https://en.wikipedia.org/wiki/Hessian_matrix for more details).
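As a concrete check (a hedged sketch assuming TensorFlow 2.x graph mode): for ys = x0**2 * x1 the Hessian is [[2*x1, 2*x0], [2*x0, 0]], which at x = [2, 3] is [[6, 4], [4, 0]]:

```python
import tensorflow as tf

with tf.Graph().as_default() as g:
  x = tf.constant([2.0, 3.0])
  y = x[0] * x[0] * x[1]
  hess = tf.hessians(y, x)[0]  # one Hessian per tensor in xs
  with tf.compat.v1.Session() as sess:
    print(sess.run(hess))  # [[6. 4.] [4. 0.]]
```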
Args
ys A Tensor or list of tensors to be differentiated.
xs A Tensor or list of tensors to be used for differentiation.
gate_gradients See gradients() documentation for details.
aggregation_method See gradients() documentation for details.
name Optional name to use for grouping all the gradient ops together. Defaults to 'hessians'.
Returns A list of Hessian matrices of sum(ys) for each x in xs.
Raises
LookupError if one of the operations between xs and ys does not have a registered gradient function. | tensorflow.hessians |
tf.histogram_fixed_width View source on GitHub Return histogram of values. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.histogram_fixed_width
tf.histogram_fixed_width(
values, value_range, nbins=100, dtype=tf.dtypes.int32, name=None
)
Given the tensor values, this operation returns a rank 1 histogram counting the number of entries in values that fell into every bin. The bins are equal width and determined by the arguments value_range and nbins.
Args
values Numeric Tensor.
value_range Shape [2] Tensor of same dtype as values. values <= value_range[0] will be mapped to hist[0], values >= value_range[1] will be mapped to hist[-1].
nbins Scalar int32 Tensor. Number of histogram bins.
dtype dtype for returned histogram.
name A name for this operation (defaults to 'histogram_fixed_width').
Returns A 1-D Tensor holding histogram of values.
Raises
TypeError If any unsupported dtype is provided.
tf.errors.InvalidArgumentError If value_range does not satisfy value_range[0] < value_range[1]. Examples:
# Bins will be: (-inf, 1), [1, 2), [2, 3), [3, 4), [4, inf)
nbins = 5
value_range = [0.0, 5.0]
new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15]
hist = tf.histogram_fixed_width(new_values, value_range, nbins=5)
hist.numpy()
array([2, 1, 1, 0, 2], dtype=int32) | tensorflow.histogram_fixed_width |
tf.histogram_fixed_width_bins View source on GitHub Bins the given values for use in a histogram. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.histogram_fixed_width_bins
tf.histogram_fixed_width_bins(
values, value_range, nbins=100, dtype=tf.dtypes.int32, name=None
)
Given the tensor values, this operation returns a rank 1 Tensor representing the indices of a histogram into which each element of values would be binned. The bins are equal width and determined by the arguments value_range and nbins.
Args
values Numeric Tensor.
value_range Shape [2] Tensor of same dtype as values. values <= value_range[0] will be mapped to hist[0], values >= value_range[1] will be mapped to hist[-1].
nbins Scalar int32 Tensor. Number of histogram bins.
dtype dtype for returned histogram.
name A name for this operation (defaults to 'histogram_fixed_width').
Returns A Tensor holding the indices of the binned values whose shape matches values.
Raises
TypeError If any unsupported dtype is provided.
tf.errors.InvalidArgumentError If value_range does not satisfy value_range[0] < value_range[1]. Examples:
# Bins will be: (-inf, 1), [1, 2), [2, 3), [3, 4), [4, inf)
nbins = 5
value_range = [0.0, 5.0]
new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15]
indices = tf.histogram_fixed_width_bins(new_values, value_range, nbins=5)
indices.numpy()
array([0, 0, 1, 2, 4, 4], dtype=int32) | tensorflow.histogram_fixed_width_bins |