torch.backends.openmp.is_available() [source]
Returns whether PyTorch is built with OpenMP support. | torch.backends#torch.backends.openmp.is_available |
torch.baddbmm(input, batch1, batch2, *, beta=1, alpha=1, out=None) → Tensor
Performs a batch matrix-matrix product of matrices in batch1 and batch2. input is added to the final result. batch1 and batch2 must be 3-D tensors each containing the same number of matrices. If batch1 is a (b \times n \times m) tensor and batch2 is a (b \times m \times p) tensor, then input must be broadcastable with a (b \times n \times p) tensor and out will be a (b \times n \times p) tensor. Both alpha and beta mean the same as the scaling factors used in torch.addbmm(). \text{out}_i = \beta\ \text{input}_i + \alpha\ (\text{batch1}_i \mathbin{@} \text{batch2}_i)
If beta is 0, then input will be ignored, and nan and inf in it will not be propagated. For inputs of type FloatTensor or DoubleTensor, arguments beta and alpha must be real numbers; otherwise they should be integers. This operator supports TensorFloat32. Parameters
input (Tensor) – the tensor to be added
batch1 (Tensor) – the first batch of matrices to be multiplied
batch2 (Tensor) – the second batch of matrices to be multiplied Keyword Arguments
beta (Number, optional) – multiplier for input (\beta)
alpha (Number, optional) – multiplier for \text{batch1} \mathbin{@} \text{batch2} (\alpha)
out (Tensor, optional) – the output tensor. Example: >>> M = torch.randn(10, 3, 5)
>>> batch1 = torch.randn(10, 3, 4)
>>> batch2 = torch.randn(10, 4, 5)
>>> torch.baddbmm(M, batch1, batch2).size()
torch.Size([10, 3, 5]) | torch.generated.torch.baddbmm#torch.baddbmm |
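As a quick sanity check of the formula above, the result can be compared against the explicit expression \beta\,\text{input} + \alpha\,(\text{batch1} \mathbin{@} \text{batch2}) computed with torch.bmm(); the sizes and scaling factors in this sketch are arbitrary:
import torch

M = torch.randn(10, 3, 5)
batch1 = torch.randn(10, 3, 4)
batch2 = torch.randn(10, 4, 5)

out = torch.baddbmm(M, batch1, batch2, beta=0.5, alpha=2.0)
# Element-wise equivalent: beta * input + alpha * (batch1 @ batch2)
reference = 0.5 * M + 2.0 * torch.bmm(batch1, batch2)
print(torch.allclose(out, reference, atol=1e-6))  # True, up to floating-point error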
torch.bartlett_window(window_length, periodic=True, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Bartlett window function. w[n] = 1 - \left| \frac{2n}{N-1} - 1 \right| = \begin{cases} \frac{2n}{N - 1} & \text{if } 0 \leq n \leq \frac{N - 1}{2} \\ 2 - \frac{2n}{N - 1} & \text{if } \frac{N - 1}{2} < n < N \end{cases},
where N is the full window size. The input window_length is a positive integer controlling the returned window size. The periodic flag determines whether the returned window trims off the last duplicate value from the symmetric window so that it is ready to be used as a periodic window with functions like torch.stft(). Therefore, if periodic is true, the N in the above formula is in fact \text{window\_length} + 1. Also, we always have torch.bartlett_window(L, periodic=True) equal to torch.bartlett_window(L + 1, periodic=False)[:-1]. Note If window_length = 1, the returned window contains a single value 1. Parameters
window_length (int) – the size of returned window
periodic (bool, optional) – If True, returns a window to be used as periodic function. If False, return a symmetric window. Keyword Arguments
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()). Only floating point types are supported.
layout (torch.layout, optional) – the desired layout of returned window tensor. Only torch.strided (dense layout) is supported.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False. Returns
A 1-D tensor of size (\text{window\_length},) containing the window Return type
Tensor | torch.generated.torch.bartlett_window#torch.bartlett_window |
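The identity noted above between the periodic and symmetric windows can be verified directly; the window length in this sketch is arbitrary:
import torch

L = 8
periodic = torch.bartlett_window(L, periodic=True)
symmetric = torch.bartlett_window(L + 1, periodic=False)
# The periodic window is the symmetric window with its last (duplicate) sample dropped.
print(torch.allclose(periodic, symmetric[:-1]))  # True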
torch.bernoulli(input, *, generator=None, out=None) → Tensor
Draws binary random numbers (0 or 1) from a Bernoulli distribution. The input tensor should be a tensor containing probabilities to be used for drawing the binary random number. Hence, all values in input have to be in the range 0 \leq \text{input}_i \leq 1 . The \text{i}^{th} element of the output tensor will draw a value 1 according to the \text{i}^{th} probability value given in input. \text{out}_{i} \sim \mathrm{Bernoulli}(p = \text{input}_{i})
The returned out tensor only has values 0 or 1 and is of the same shape as input. out can have integral dtype, but input must have floating point dtype. Parameters
input (Tensor) – the input tensor of probability values for the Bernoulli distribution Keyword Arguments
generator (torch.Generator, optional) – a pseudorandom number generator for sampling
out (Tensor, optional) – the output tensor. Example: >>> a = torch.empty(3, 3).uniform_(0, 1) # generate a uniform random matrix with range [0, 1]
>>> a
tensor([[ 0.1737, 0.0950, 0.3609],
[ 0.7148, 0.0289, 0.2676],
[ 0.9456, 0.8937, 0.7202]])
>>> torch.bernoulli(a)
tensor([[ 1., 0., 0.],
[ 0., 0., 0.],
[ 1., 1., 1.]])
>>> a = torch.ones(3, 3) # probability of drawing "1" is 1
>>> torch.bernoulli(a)
tensor([[ 1., 1., 1.],
[ 1., 1., 1.],
[ 1., 1., 1.]])
>>> a = torch.zeros(3, 3) # probability of drawing "1" is 0
>>> torch.bernoulli(a)
tensor([[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.]]) | torch.generated.torch.bernoulli#torch.bernoulli |
torch.bincount(input, weights=None, minlength=0) → Tensor
Count the frequency of each value in an array of non-negative ints. The number of bins (size 1) is one larger than the largest value in input unless input is empty, in which case the result is a tensor of size 0. If minlength is specified, the number of bins is at least minlength and if input is empty, then the result is a tensor of size minlength filled with zeros. If n is the value at position i, out[n] += weights[i] if weights is specified else out[n] += 1. Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information. Parameters
input (Tensor) – 1-d int tensor
weights (Tensor) – optional, weight for each value in the input tensor. Should be of same size as input tensor.
minlength (int) – optional, minimum number of bins. Should be non-negative. Returns
a tensor of shape Size([max(input) + 1]) if input is non-empty, else Size(0) Return type
output (Tensor) Example: >>> input = torch.randint(0, 8, (5,), dtype=torch.int64)
>>> weights = torch.linspace(0, 1, steps=5)
>>> input, weights
(tensor([4, 3, 6, 3, 4]),
tensor([ 0.0000, 0.2500, 0.5000, 0.7500, 1.0000]))
>>> torch.bincount(input)
tensor([0, 0, 0, 2, 2, 0, 1])
>>> input.bincount(weights)
tensor([0.0000, 0.0000, 0.0000, 1.0000, 1.0000, 0.0000, 0.5000]) | torch.generated.torch.bincount#torch.bincount |
torch.bitwise_and(input, other, *, out=None) → Tensor
Computes the bitwise AND of input and other. The input tensor must be of integral or Boolean types. For bool tensors, it computes the logical AND. Parameters
input – the first input tensor
other – the second input tensor Keyword Arguments
out (Tensor, optional) – the output tensor. Example >>> torch.bitwise_and(torch.tensor([-1, -2, 3], dtype=torch.int8), torch.tensor([1, 0, 3], dtype=torch.int8))
tensor([1, 0, 3], dtype=torch.int8)
>>> torch.bitwise_and(torch.tensor([True, True, False]), torch.tensor([False, True, False]))
tensor([ False, True, False]) | torch.generated.torch.bitwise_and#torch.bitwise_and |
torch.bitwise_not(input, *, out=None) → Tensor
Computes the bitwise NOT of the given input tensor. The input tensor must be of integral or Boolean types. For bool tensors, it computes the logical NOT. Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example >>> torch.bitwise_not(torch.tensor([-1, -2, 3], dtype=torch.int8))
tensor([ 0, 1, -4], dtype=torch.int8) | torch.generated.torch.bitwise_not#torch.bitwise_not |
torch.bitwise_or(input, other, *, out=None) → Tensor
Computes the bitwise OR of input and other. The input tensor must be of integral or Boolean types. For bool tensors, it computes the logical OR. Parameters
input – the first input tensor
other – the second input tensor Keyword Arguments
out (Tensor, optional) – the output tensor. Example >>> torch.bitwise_or(torch.tensor([-1, -2, 3], dtype=torch.int8), torch.tensor([1, 0, 3], dtype=torch.int8))
tensor([-1, -2, 3], dtype=torch.int8)
>>> torch.bitwise_or(torch.tensor([True, True, False]), torch.tensor([False, True, False]))
tensor([ True, True, False]) | torch.generated.torch.bitwise_or#torch.bitwise_or |
torch.bitwise_xor(input, other, *, out=None) → Tensor
Computes the bitwise XOR of input and other. The input tensor must be of integral or Boolean types. For bool tensors, it computes the logical XOR. Parameters
input – the first input tensor
other – the second input tensor Keyword Arguments
out (Tensor, optional) – the output tensor. Example >>> torch.bitwise_xor(torch.tensor([-1, -2, 3], dtype=torch.int8), torch.tensor([1, 0, 3], dtype=torch.int8))
tensor([-2, -2, 0], dtype=torch.int8)
>>> torch.bitwise_xor(torch.tensor([True, True, False]), torch.tensor([False, True, False]))
tensor([ True, False, False]) | torch.generated.torch.bitwise_xor#torch.bitwise_xor |
torch.blackman_window(window_length, periodic=True, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Blackman window function. w[n] = 0.42 - 0.5 \cos \left( \frac{2 \pi n}{N - 1} \right) + 0.08 \cos \left( \frac{4 \pi n}{N - 1} \right)
where N is the full window size. The input window_length is a positive integer controlling the returned window size. The periodic flag determines whether the returned window trims off the last duplicate value from the symmetric window so that it is ready to be used as a periodic window with functions like torch.stft(). Therefore, if periodic is true, the N in the above formula is in fact \text{window\_length} + 1. Also, we always have torch.blackman_window(L, periodic=True) equal to torch.blackman_window(L + 1, periodic=False)[:-1]. Note If window_length = 1, the returned window contains a single value 1. Parameters
window_length (int) – the size of returned window
periodic (bool, optional) – If True, returns a window to be used as periodic function. If False, return a symmetric window. Keyword Arguments
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()). Only floating point types are supported.
layout (torch.layout, optional) – the desired layout of returned window tensor. Only torch.strided (dense layout) is supported.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False. Returns
A 1-D tensor of size (\text{window\_length},) containing the window Return type
Tensor | torch.generated.torch.blackman_window#torch.blackman_window |
torch.block_diag(*tensors) [source]
Create a block diagonal matrix from provided tensors. Parameters
*tensors – One or more tensors with 0, 1, or 2 dimensions. Returns
A 2 dimensional tensor with all the input tensors arranged in
order such that their upper left and lower right corners are diagonally adjacent. All other elements are set to 0. Return type
Tensor Example: >>> import torch
>>> A = torch.tensor([[0, 1], [1, 0]])
>>> B = torch.tensor([[3, 4, 5], [6, 7, 8]])
>>> C = torch.tensor(7)
>>> D = torch.tensor([1, 2, 3])
>>> E = torch.tensor([[4], [5], [6]])
>>> torch.block_diag(A, B, C, D, E)
tensor([[0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 3, 4, 5, 0, 0, 0, 0, 0],
[0, 0, 6, 7, 8, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 7, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 2, 3, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 4],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 5],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 6]]) | torch.generated.torch.block_diag#torch.block_diag |
torch.bmm(input, mat2, *, deterministic=False, out=None) → Tensor
Performs a batch matrix-matrix product of matrices stored in input and mat2. input and mat2 must be 3-D tensors each containing the same number of matrices. If input is a (b \times n \times m) tensor and mat2 is a (b \times m \times p) tensor, out will be a (b \times n \times p) tensor. \text{out}_i = \text{input}_i \mathbin{@} \text{mat2}_i
This operator supports TensorFloat32. Note This function does not broadcast. For broadcasting matrix products, see torch.matmul(). Parameters
input (Tensor) – the first batch of matrices to be multiplied
mat2 (Tensor) – the second batch of matrices to be multiplied Keyword Arguments
deterministic (bool, optional) – flag to choose between a faster non-deterministic calculation, or a slower deterministic calculation. This argument is only available for sparse-dense CUDA bmm. Default: False
out (Tensor, optional) – the output tensor. Example: >>> input = torch.randn(10, 3, 4)
>>> mat2 = torch.randn(10, 4, 5)
>>> res = torch.bmm(input, mat2)
>>> res.size()
torch.Size([10, 3, 5]) | torch.generated.torch.bmm#torch.bmm |
torch.broadcast_shapes(*shapes) → Size [source]
Similar to broadcast_tensors() but for shapes. This is equivalent to torch.broadcast_tensors(*map(torch.empty, shapes))[0].shape but avoids the need to create intermediate tensors. This is useful for broadcasting tensors of common batch shape but different rightmost shape, e.g. to broadcast mean vectors with covariance matrices. Example: >>> torch.broadcast_shapes((2,), (3, 1), (1, 1, 1))
torch.Size([1, 3, 2])
Parameters
*shapes (torch.Size) – Shapes of tensors. Returns
A shape compatible with all input shapes. Return type
shape (torch.Size) Raises
RuntimeError – If shapes are incompatible. | torch.generated.torch.broadcast_shapes#torch.broadcast_shapes |
torch.broadcast_tensors(*tensors) → List of Tensors [source]
Broadcasts the given tensors according to Broadcasting semantics. Parameters
*tensors – any number of tensors of the same type Warning More than one element of a broadcasted tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensors, please clone them first. Example: >>> x = torch.arange(3).view(1, 3)
>>> y = torch.arange(2).view(2, 1)
>>> a, b = torch.broadcast_tensors(x, y)
>>> a.size()
torch.Size([2, 3])
>>> a
tensor([[0, 1, 2],
[0, 1, 2]]) | torch.generated.torch.broadcast_tensors#torch.broadcast_tensors |
torch.broadcast_to(input, shape) → Tensor
Broadcasts input to the shape shape. Equivalent to calling input.expand(shape). See expand() for details. Parameters
input (Tensor) – the input tensor.
shape (list, tuple, or torch.Size) – the new shape. Example: >>> x = torch.tensor([1, 2, 3])
>>> torch.broadcast_to(x, (3, 3))
tensor([[1, 2, 3],
[1, 2, 3],
[1, 2, 3]]) | torch.generated.torch.broadcast_to#torch.broadcast_to |
torch.bucketize(input, boundaries, *, out_int32=False, right=False, out=None) → Tensor
Returns the indices of the buckets to which each value in the input belongs, where the boundaries of the buckets are set by boundaries. Returns a new tensor with the same size as input. If right is False (default), then the left boundary is open. More formally, the returned index satisfies the following rules:
right returned index satisfies
False boundaries[i-1] < input[m][n]...[l][x] <= boundaries[i]
True boundaries[i-1] <= input[m][n]...[l][x] < boundaries[i] Parameters
input (Tensor or Scalar) – N-D tensor or a Scalar containing the search value(s).
boundaries (Tensor) – 1-D tensor, must contain a monotonically increasing sequence. Keyword Arguments
out_int32 (bool, optional) – indicate the output data type. torch.int32 if True, torch.int64 otherwise. Default value is False, i.e. default output data type is torch.int64.
right (bool, optional) – if False, return the first suitable location that is found. If True, return the last such index. If no suitable index is found, return 0 for non-numerical values (e.g. nan, inf) or the size of boundaries (one past the last index). In other words, if False, gets the lower bound index for each value in input from boundaries. If True, gets the upper bound index instead. Default value is False.
out (Tensor, optional) – the output tensor, must be the same size as input if provided. Example: >>> boundaries = torch.tensor([1, 3, 5, 7, 9])
>>> boundaries
tensor([1, 3, 5, 7, 9])
>>> v = torch.tensor([[3, 6, 9], [3, 6, 9]])
>>> v
tensor([[3, 6, 9],
[3, 6, 9]])
>>> torch.bucketize(v, boundaries)
tensor([[1, 3, 4],
[1, 3, 4]])
>>> torch.bucketize(v, boundaries, right=True)
tensor([[2, 3, 5],
[2, 3, 5]]) | torch.generated.torch.bucketize#torch.bucketize |
torch.can_cast(from, to) → bool
Determines if a type conversion is allowed under PyTorch casting rules described in the type promotion documentation. Parameters
from (torch.dtype) – The original torch.dtype.
to (torch.dtype) – The target torch.dtype. Example: >>> torch.can_cast(torch.double, torch.float)
True
>>> torch.can_cast(torch.float, torch.int)
False | torch.generated.torch.can_cast#torch.can_cast |
torch.cartesian_prod(*tensors) [source]
Does the Cartesian product of the given sequence of tensors. The behavior is similar to Python’s itertools.product. Parameters
*tensors – any number of 1 dimensional tensors. Returns
A tensor equivalent to converting all the input tensors into lists,
doing itertools.product on these lists, and finally converting the resulting list into a tensor. Return type
Tensor Example: >>> import itertools
>>> a = [1, 2, 3]
>>> b = [4, 5]
>>> list(itertools.product(a, b))
[(1, 4), (1, 5), (2, 4), (2, 5), (3, 4), (3, 5)]
>>> tensor_a = torch.tensor(a)
>>> tensor_b = torch.tensor(b)
>>> torch.cartesian_prod(tensor_a, tensor_b)
tensor([[1, 4],
[1, 5],
[2, 4],
[2, 5],
[3, 4],
[3, 5]]) | torch.generated.torch.cartesian_prod#torch.cartesian_prod |
torch.cat(tensors, dim=0, *, out=None) → Tensor
Concatenates the given sequence of tensors in the given dimension. All tensors must either have the same shape (except in the concatenating dimension) or be empty. torch.cat() can be seen as an inverse operation for torch.split() and torch.chunk(). torch.cat() can be best understood via examples. Parameters
tensors (sequence of Tensors) – any python sequence of tensors of the same type. Non-empty tensors provided must have the same shape, except in the cat dimension.
dim (int, optional) – the dimension over which the tensors are concatenated Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> x = torch.randn(2, 3)
>>> x
tensor([[ 0.6580, -1.0969, -0.4614],
[-0.1034, -0.5790, 0.1497]])
>>> torch.cat((x, x, x), 0)
tensor([[ 0.6580, -1.0969, -0.4614],
[-0.1034, -0.5790, 0.1497],
[ 0.6580, -1.0969, -0.4614],
[-0.1034, -0.5790, 0.1497],
[ 0.6580, -1.0969, -0.4614],
[-0.1034, -0.5790, 0.1497]])
>>> torch.cat((x, x, x), 1)
tensor([[ 0.6580, -1.0969, -0.4614, 0.6580, -1.0969, -0.4614, 0.6580,
-1.0969, -0.4614],
[-0.1034, -0.5790, 0.1497, -0.1034, -0.5790, 0.1497, -0.1034,
-0.5790, 0.1497]]) | torch.generated.torch.cat#torch.cat |
torch.cdist(x1, x2, p=2.0, compute_mode='use_mm_for_euclid_dist_if_necessary') [source]
Computes the batched p-norm distance between each pair of the two collections of row vectors. Parameters
x1 (Tensor) – input tensor of shape B \times P \times M .
x2 (Tensor) – input tensor of shape B \times R \times M .
p – p value for the p-norm distance to calculate between each vector pair, \in [0, \infty] .
compute_mode – 'use_mm_for_euclid_dist_if_necessary' will use a matrix multiplication approach to calculate Euclidean distance (p = 2) if P > 25 or R > 25; 'use_mm_for_euclid_dist' will always use the matrix multiplication approach to calculate Euclidean distance (p = 2); 'donot_use_mm_for_euclid_dist' will never use the matrix multiplication approach to calculate Euclidean distance (p = 2). Default: use_mm_for_euclid_dist_if_necessary. If x1 has shape B \times P \times M and x2 has shape B \times R \times M then the output will have shape B \times P \times R . This function is equivalent to scipy.spatial.distance.cdist(input, 'minkowski', p=p) if p \in (0, \infty) . When p = 0 it is equivalent to scipy.spatial.distance.cdist(input, 'hamming') * M. When p = \infty , the closest scipy function is scipy.spatial.distance.cdist(xn, lambda x, y: np.abs(x - y).max()). Example >>> a = torch.tensor([[0.9041, 0.0196], [-0.3108, -2.4423], [-0.4821, 1.059]])
>>> a
tensor([[ 0.9041, 0.0196],
[-0.3108, -2.4423],
[-0.4821, 1.0590]])
>>> b = torch.tensor([[-2.1763, -0.4713], [-0.6986, 1.3702]])
>>> b
tensor([[-2.1763, -0.4713],
[-0.6986, 1.3702]])
>>> torch.cdist(a, b, p=2)
tensor([[3.1193, 2.0959],
[2.7138, 3.8322],
[2.2830, 0.3791]]) | torch.generated.torch.cdist#torch.cdist |
torch.ceil(input, *, out=None) → Tensor
Returns a new tensor with the ceil of the elements of input, the smallest integer greater than or equal to each element. \text{out}_{i} = \left\lceil \text{input}_{i} \right\rceil
Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4)
>>> a
tensor([-0.6341, -1.4208, -1.0900, 0.5826])
>>> torch.ceil(a)
tensor([-0., -1., -1., 1.]) | torch.generated.torch.ceil#torch.ceil |
torch.chain_matmul(*matrices) [source]
Returns the matrix product of the N 2-D tensors. This product is efficiently computed using the matrix chain order algorithm, which selects the order that incurs the lowest cost in terms of arithmetic operations ([CLRS]). Note that since this is a function to compute the product, N needs to be greater than or equal to 2; if equal to 2 then a trivial matrix-matrix product is returned. If N is 1, then this is a no-op - the original matrix is returned as is. Parameters
matrices (Tensors...) – a sequence of 2 or more 2-D tensors whose product is to be determined. Returns
if the i^{th} tensor was of dimensions p_{i} \times p_{i + 1} , then the product would be of dimensions p_{1} \times p_{N + 1} . Return type
Tensor Example: >>> a = torch.randn(3, 4)
>>> b = torch.randn(4, 5)
>>> c = torch.randn(5, 6)
>>> d = torch.randn(6, 7)
>>> torch.chain_matmul(a, b, c, d)
tensor([[ -2.3375, -3.9790, -4.1119, -6.6577, 9.5609, -11.5095, -3.2614],
[ 21.4038, 3.3378, -8.4982, -5.2457, -10.2561, -2.4684, 2.7163],
[ -0.9647, -5.8917, -2.3213, -5.2284, 12.8615, -12.2816, -2.5095]]) | torch.generated.torch.chain_matmul#torch.chain_matmul |
torch.cholesky(input, upper=False, *, out=None) → Tensor
Computes the Cholesky decomposition of a symmetric positive-definite matrix A or for batches of symmetric positive-definite matrices. If upper is True, the returned matrix U is upper-triangular, and the decomposition has the form: A = U^T U
If upper is False, the returned matrix L is lower-triangular, and the decomposition has the form: A = L L^T
If upper is True, and A is a batch of symmetric positive-definite matrices, then the returned tensor will be composed of upper-triangular Cholesky factors of each of the individual matrices. Similarly, when upper is False, the returned tensor will be composed of lower-triangular Cholesky factors of each of the individual matrices. Note torch.linalg.cholesky() should be used over torch.cholesky when possible. Note however that torch.linalg.cholesky() does not yet support the upper parameter and instead always returns the lower triangular matrix. Parameters
input (Tensor) – the input tensor A of size (*, n, n) where * is zero or more batch dimensions consisting of symmetric positive-definite matrices.
upper (bool, optional) – flag that indicates whether to return an upper or lower triangular matrix. Default: False
Keyword Arguments
out (Tensor, optional) – the output matrix Example: >>> a = torch.randn(3, 3)
>>> a = torch.mm(a, a.t()) # make symmetric positive-definite
>>> l = torch.cholesky(a)
>>> a
tensor([[ 2.4112, -0.7486, 1.4551],
[-0.7486, 1.3544, 0.1294],
[ 1.4551, 0.1294, 1.6724]])
>>> l
tensor([[ 1.5528, 0.0000, 0.0000],
[-0.4821, 1.0592, 0.0000],
[ 0.9371, 0.5487, 0.7023]])
>>> torch.mm(l, l.t())
tensor([[ 2.4112, -0.7486, 1.4551],
[-0.7486, 1.3544, 0.1294],
[ 1.4551, 0.1294, 1.6724]])
>>> a = torch.randn(3, 2, 2)
>>> a = torch.matmul(a, a.transpose(-1, -2)) + 1e-03 # make symmetric positive-definite
>>> l = torch.cholesky(a)
>>> z = torch.matmul(l, l.transpose(-1, -2))
>>> torch.max(torch.abs(z - a)) # Max non-zero
tensor(2.3842e-07) | torch.generated.torch.cholesky#torch.cholesky |
torch.cholesky_inverse(input, upper=False, *, out=None) → Tensor
Computes the inverse of a symmetric positive-definite matrix A using its Cholesky factor u : returns matrix inv. The inverse is computed using LAPACK routines dpotri and spotri (and the corresponding MAGMA routines). If upper is False, u is lower triangular such that the returned tensor is inv = (u u^T)^{-1}
If upper is True, u is upper triangular such that the returned tensor is inv = (u^T u)^{-1}
Parameters
input (Tensor) – the input 2-D tensor u , an upper or lower triangular Cholesky factor
upper (bool, optional) – whether to return a lower (default) or upper triangular matrix Keyword Arguments
out (Tensor, optional) – the output tensor for inv Example: >>> a = torch.randn(3, 3)
>>> a = torch.mm(a, a.t()) + 1e-05 * torch.eye(3) # make symmetric positive definite
>>> u = torch.cholesky(a)
>>> a
tensor([[ 0.9935, -0.6353, 1.5806],
[ -0.6353, 0.8769, -1.7183],
[ 1.5806, -1.7183, 10.6618]])
>>> torch.cholesky_inverse(u)
tensor([[ 1.9314, 1.2251, -0.0889],
[ 1.2251, 2.4439, 0.2122],
[-0.0889, 0.2122, 0.1412]])
>>> a.inverse()
tensor([[ 1.9314, 1.2251, -0.0889],
[ 1.2251, 2.4439, 0.2122],
[-0.0889, 0.2122, 0.1412]]) | torch.generated.torch.cholesky_inverse#torch.cholesky_inverse |
torch.cholesky_solve(input, input2, upper=False, *, out=None) → Tensor
Solves a linear system of equations with a positive semidefinite matrix to be inverted given its Cholesky factor matrix u . If upper is False, u is lower triangular and c is returned such that: c = (u u^T)^{-1} b
If upper is True, u is upper triangular and c is returned such that: c = (u^T u)^{-1} b
torch.cholesky_solve(b, u) can take in 2D inputs b, u or inputs that are batches of 2D matrices. If the inputs are batches, then batched outputs c are returned. Supports real-valued and complex-valued inputs. For the complex-valued inputs the transpose operator above is the conjugate transpose. Parameters
input (Tensor) – input matrix b of size (*, m, k) , where * is zero or more batch dimensions
input2 (Tensor) – input matrix u of size (*, m, m) , where * is zero or more batch dimensions composed of upper or lower triangular Cholesky factor
upper (bool, optional) – whether to consider the Cholesky factor as a lower or upper triangular matrix. Default: False. Keyword Arguments
out (Tensor, optional) – the output tensor for c Example: >>> a = torch.randn(3, 3)
>>> a = torch.mm(a, a.t()) # make symmetric positive definite
>>> u = torch.cholesky(a)
>>> a
tensor([[ 0.7747, -1.9549, 1.3086],
[-1.9549, 6.7546, -5.4114],
[ 1.3086, -5.4114, 4.8733]])
>>> b = torch.randn(3, 2)
>>> b
tensor([[-0.6355, 0.9891],
[ 0.1974, 1.4706],
[-0.4115, -0.6225]])
>>> torch.cholesky_solve(b, u)
tensor([[ -8.1625, 19.6097],
[ -5.8398, 14.2387],
[ -4.3771, 10.4173]])
>>> torch.mm(a.inverse(), b)
tensor([[ -8.1626, 19.6097],
[ -5.8398, 14.2387],
[ -4.3771, 10.4173]]) | torch.generated.torch.cholesky_solve#torch.cholesky_solve |
torch.chunk(input, chunks, dim=0) → List of Tensors
Splits a tensor into a specific number of chunks. Each chunk is a view of the input tensor. Last chunk will be smaller if the tensor size along the given dimension dim is not divisible by chunks. Parameters
input (Tensor) – the tensor to split
chunks (int) – number of chunks to return
dim (int) – dimension along which to split the tensor | torch.generated.torch.chunk#torch.chunk |
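For illustration, a minimal sketch (the input size is arbitrary) of how the last chunk ends up smaller when the dimension size is not divisible by chunks:
import torch

x = torch.arange(7)
for c in torch.chunk(x, 3):
    print(c)
# tensor([0, 1, 2])
# tensor([3, 4, 5])
# tensor([6])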
torch.clamp(input, min, max, *, out=None) → Tensor
Clamp all elements in input into the range [ min, max ]. Let min_value and max_value be min and max, respectively, this returns: y_i = \min(\max(x_i, \text{min\_value}), \text{max\_value})
Parameters
input (Tensor) – the input tensor.
min (Number) – lower-bound of the range to be clamped to
max (Number) – upper-bound of the range to be clamped to Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4)
>>> a
tensor([-1.7120, 0.1734, -0.0478, -0.0922])
>>> torch.clamp(a, min=-0.5, max=0.5)
tensor([-0.5000, 0.1734, -0.0478, -0.0922])
torch.clamp(input, *, min, out=None) → Tensor
Clamps all elements in input to be larger or equal min. Parameters
input (Tensor) – the input tensor. Keyword Arguments
min (Number) – minimal value of each element in the output
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4)
>>> a
tensor([-0.0299, -2.3184, 2.1593, -0.8883])
>>> torch.clamp(a, min=0.5)
tensor([ 0.5000, 0.5000, 2.1593, 0.5000])
torch.clamp(input, *, max, out=None) → Tensor
Clamps all elements in input to be smaller or equal max. Parameters
input (Tensor) – the input tensor. Keyword Arguments
max (Number) – maximal value of each element in the output
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4)
>>> a
tensor([ 0.7753, -0.4702, -0.4599, 1.1899])
>>> torch.clamp(a, max=0.5)
tensor([ 0.5000, -0.4702, -0.4599, 0.5000]) | torch.generated.torch.clamp#torch.clamp |
torch.clip(input, min, max, *, out=None) → Tensor
Alias for torch.clamp(). | torch.generated.torch.clip#torch.clip |
torch.clone(input, *, memory_format=torch.preserve_format) → Tensor
Returns a copy of input. Note This function is differentiable, so gradients will flow back from the result of this operation to input. To create a tensor without an autograd relationship to input see detach(). Parameters
input (Tensor) – the input tensor. Keyword Arguments
memory_format (torch.memory_format, optional) – the desired memory format of returned tensor. Default: torch.preserve_format. | torch.generated.torch.clone#torch.clone |
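A minimal sketch (with hypothetical tensors) of the note above: torch.clone() keeps the autograd relationship to input, while detach() followed by clone() does not:
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = torch.clone(x)          # differentiable copy: gradients flow back to x
z = x.detach().clone()      # copy with no autograd relationship to x

y.sum().backward()
print(x.grad)               # tensor([1., 1., 1.])
print(z.requires_grad)      # False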
torch.column_stack(tensors, *, out=None) → Tensor
Creates a new tensor by horizontally stacking the tensors in tensors. Equivalent to torch.hstack(tensors), except each zero or one dimensional tensor t in tensors is first reshaped into a (t.numel(), 1) column before being stacked horizontally. Parameters
tensors (sequence of Tensors) – sequence of tensors to concatenate Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.tensor([1, 2, 3])
>>> b = torch.tensor([4, 5, 6])
>>> torch.column_stack((a, b))
tensor([[1, 4],
[2, 5],
[3, 6]])
>>> a = torch.arange(5)
>>> b = torch.arange(10).reshape(5, 2)
>>> torch.column_stack((a, b, b))
tensor([[0, 0, 1, 0, 1],
[1, 2, 3, 2, 3],
[2, 4, 5, 4, 5],
[3, 6, 7, 6, 7],
[4, 8, 9, 8, 9]]) | torch.generated.torch.column_stack#torch.column_stack |
torch.combinations(input, r=2, with_replacement=False) → seq
Compute combinations of length r of the given tensor. The behavior is similar to Python’s itertools.combinations when with_replacement is set to False, and itertools.combinations_with_replacement when with_replacement is set to True. Parameters
input (Tensor) – 1D vector.
r (int, optional) – number of elements to combine
with_replacement (boolean, optional) – whether to allow duplication in combination Returns
A tensor equivalent to converting all the input tensors into lists, doing itertools.combinations or itertools.combinations_with_replacement on these lists, and finally converting the resulting list into a tensor. Return type
Tensor Example: >>> import itertools
>>> a = [1, 2, 3]
>>> list(itertools.combinations(a, r=2))
[(1, 2), (1, 3), (2, 3)]
>>> list(itertools.combinations(a, r=3))
[(1, 2, 3)]
>>> list(itertools.combinations_with_replacement(a, r=2))
[(1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3)]
>>> tensor_a = torch.tensor(a)
>>> torch.combinations(tensor_a)
tensor([[1, 2],
[1, 3],
[2, 3]])
>>> torch.combinations(tensor_a, r=3)
tensor([[1, 2, 3]])
>>> torch.combinations(tensor_a, with_replacement=True)
tensor([[1, 1],
[1, 2],
[1, 3],
[2, 2],
[2, 3],
[3, 3]]) | torch.generated.torch.combinations#torch.combinations |
torch.compiled_with_cxx11_abi() [source]
Returns whether PyTorch was built with _GLIBCXX_USE_CXX11_ABI=1 | torch.generated.torch.compiled_with_cxx11_abi#torch.compiled_with_cxx11_abi |
torch.complex(real, imag, *, out=None) → Tensor
Constructs a complex tensor with its real part equal to real and its imaginary part equal to imag. Parameters
real (Tensor) – The real part of the complex tensor. Must be float or double.
imag (Tensor) – The imaginary part of the complex tensor. Must be same dtype as real. Keyword Arguments
out (Tensor) – If the inputs are torch.float32, must be torch.complex64. If the inputs are torch.float64, must be torch.complex128. Example::
>>> real = torch.tensor([1, 2], dtype=torch.float32)
>>> imag = torch.tensor([3, 4], dtype=torch.float32)
>>> z = torch.complex(real, imag)
>>> z
tensor([(1.+3.j), (2.+4.j)])
>>> z.dtype
torch.complex64 | torch.generated.torch.complex#torch.complex |
torch.conj(input, *, out=None) → Tensor
Computes the element-wise conjugate of the given input tensor. If input has a non-complex dtype, this function just returns input. Warning In the future, torch.conj() may return a non-writeable view for an input of non-complex dtype. It’s recommended that programs not modify the tensor returned by torch.conj() when input is of non-complex dtype to be compatible with this change. \text{out}_{i} = \text{conj}(\text{input}_{i})
Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> torch.conj(torch.tensor([-1 + 1j, -2 + 2j, 3 - 3j]))
tensor([-1 - 1j, -2 - 2j, 3 + 3j]) | torch.generated.torch.conj#torch.conj |
torch.copysign(input, other, *, out=None) → Tensor
Create a new floating-point tensor with the magnitude of input and the sign of other, elementwise. \text{out}_{i} = \begin{cases} -|\text{input}_{i}| & \text{if } \text{other}_{i} \leq -0.0 \\ |\text{input}_{i}| & \text{if } \text{other}_{i} \geq 0.0 \end{cases}
Supports broadcasting to a common shape, and integer and float inputs. Parameters
input (Tensor) – magnitudes.
other (Tensor or Number) – contains value(s) whose signbit(s) are applied to the magnitudes in input. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(5)
>>> a
tensor([-1.2557, -0.0026, -0.5387, 0.4740, -0.9244])
>>> torch.copysign(a, 1)
tensor([1.2557, 0.0026, 0.5387, 0.4740, 0.9244])
>>> a = torch.randn(4, 4)
>>> a
tensor([[ 0.7079, 0.2778, -1.0249, 0.5719],
[-0.0059, -0.2600, -0.4475, -1.3948],
[ 0.3667, -0.9567, -2.5757, -0.1751],
[ 0.2046, -0.0742, 0.2998, -0.1054]])
>>> b = torch.randn(4)
>>> b
tensor([ 0.2373, 0.3120, 0.3190, -1.1128])
>>> torch.copysign(a, b)
tensor([[ 0.7079, 0.2778, 1.0249, -0.5719],
[ 0.0059, 0.2600, 0.4475, -1.3948],
[ 0.3667, 0.9567, 2.5757, -0.1751],
[ 0.2046, 0.0742, 0.2998, -0.1054]]) | torch.generated.torch.copysign#torch.copysign |
torch.cos(input, *, out=None) → Tensor
Returns a new tensor with the cosine of the elements of input. \text{out}_{i} = \cos(\text{input}_{i})
Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4)
>>> a
tensor([ 1.4309, 1.2706, -0.8562, 0.9796])
>>> torch.cos(a)
tensor([ 0.1395, 0.2957, 0.6553, 0.5574]) | torch.generated.torch.cos#torch.cos |
torch.cosh(input, *, out=None) → Tensor
Returns a new tensor with the hyperbolic cosine of the elements of input. \text{out}_{i} = \cosh(\text{input}_{i})
Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4)
>>> a
tensor([ 0.1632, 1.1835, -0.6979, -0.7325])
>>> torch.cosh(a)
tensor([ 1.0133, 1.7860, 1.2536, 1.2805])
Note When input is on the CPU, the implementation of torch.cosh may use the Sleef library, which rounds very large results to infinity or negative infinity. See here for details. | torch.generated.torch.cosh#torch.cosh |
torch.count_nonzero(input, dim=None) → Tensor
Counts the number of non-zero values in the tensor input along the given dim. If no dim is specified then all non-zeros in the tensor are counted. Parameters
input (Tensor) – the input tensor.
dim (int or tuple of ints, optional) – Dim or tuple of dims along which to count non-zeros. Example: >>> x = torch.zeros(3,3)
>>> x[torch.randn(3,3) > 0.5] = 1
>>> x
tensor([[0., 1., 1.],
[0., 0., 0.],
[0., 0., 1.]])
>>> torch.count_nonzero(x)
tensor(3)
>>> torch.count_nonzero(x, dim=0)
tensor([0, 1, 2]) | torch.generated.torch.count_nonzero#torch.count_nonzero |
torch.cross(input, other, dim=None, *, out=None) → Tensor
Returns the cross product of vectors in dimension dim of input and other. input and other must have the same size, and the size of their dim dimension should be 3. If dim is not given, it defaults to the first dimension found with the size 3. Note that this might be unexpected. Parameters
input (Tensor) – the input tensor.
other (Tensor) – the second input tensor
dim (int, optional) – the dimension to take the cross-product in. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4, 3)
>>> a
tensor([[-0.3956, 1.1455, 1.6895],
[-0.5849, 1.3672, 0.3599],
[-1.1626, 0.7180, -0.0521],
[-0.1339, 0.9902, -2.0225]])
>>> b = torch.randn(4, 3)
>>> b
tensor([[-0.0257, -1.4725, -1.2251],
[-1.1479, -0.7005, -1.9757],
[-1.3904, 0.3726, -1.1836],
[-0.9688, -0.7153, 0.2159]])
>>> torch.cross(a, b, dim=1)
tensor([[ 1.0844, -0.5281, 0.6120],
[-2.4490, -1.5687, 1.9792],
[-0.8304, -1.3037, 0.5650],
[-1.2329, 1.9883, 1.0551]])
>>> torch.cross(a, b)
tensor([[ 1.0844, -0.5281, 0.6120],
[-2.4490, -1.5687, 1.9792],
[-0.8304, -1.3037, 0.5650],
[-1.2329, 1.9883, 1.0551]]) | torch.generated.torch.cross#torch.cross |
torch.cuda This package adds support for CUDA tensor types, which implement the same functions as CPU tensors but utilize GPUs for computation. It is lazily initialized, so you can always import it, and use is_available() to determine if your system supports CUDA. CUDA semantics has more details about working with CUDA.
torch.cuda.can_device_access_peer(device, peer_device) [source]
Checks if peer access between two devices is possible.
torch.cuda.current_blas_handle() [source]
Returns cublasHandle_t pointer to current cuBLAS handle
torch.cuda.current_device() [source]
Returns the index of a currently selected device.
torch.cuda.current_stream(device=None) [source]
Returns the currently selected Stream for a given device. Parameters
device (torch.device or int, optional) – selected device. Returns the currently selected Stream for the current device, given by current_device(), if device is None (default).
torch.cuda.default_stream(device=None) [source]
Returns the default Stream for a given device. Parameters
device (torch.device or int, optional) – selected device. Returns the default Stream for the current device, given by current_device(), if device is None (default).
class torch.cuda.device(device) [source]
Context-manager that changes the selected device. Parameters
device (torch.device or int) – device index to select. It’s a no-op if this argument is a negative integer or None.
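A minimal usage sketch, assuming a machine with at least two CUDA devices:
import torch

if torch.cuda.is_available() and torch.cuda.device_count() > 1:
    a = torch.ones(2, device='cuda')      # allocated on the current device (cuda:0 by default)
    with torch.cuda.device(1):
        b = torch.ones(2, device='cuda')  # allocated on cuda:1 inside the context
    print(a.device, b.device)             # cuda:0 cuda:1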
torch.cuda.device_count() [source]
Returns the number of GPUs available.
class torch.cuda.device_of(obj) [source]
Context-manager that changes the current device to that of given object. You can use both tensors and storages as arguments. If a given object is not allocated on a GPU, this is a no-op. Parameters
obj (Tensor or Storage) – object allocated on the selected device.
torch.cuda.get_arch_list() [source]
Returns the list of CUDA architectures this library was compiled for.
torch.cuda.get_device_capability(device=None) [source]
Gets the cuda capability of a device. Parameters
device (torch.device or int, optional) – device for which to return the device capability. This function is a no-op if this argument is a negative integer. It uses the current device, given by current_device(), if device is None (default). Returns
the major and minor cuda capability of the device Return type
tuple(int, int)
torch.cuda.get_device_name(device=None) [source]
Gets the name of a device. Parameters
device (torch.device or int, optional) – device for which to return the name. This function is a no-op if this argument is a negative integer. It uses the current device, given by current_device(), if device is None (default). Returns
the name of the device Return type
str
torch.cuda.get_device_properties(device) [source]
Gets the properties of a device. Parameters
device (torch.device or int or str) – device for which to return the properties of the device. Returns
the properties of the device Return type
_CudaDeviceProperties
torch.cuda.get_gencode_flags() [source]
Returns the NVCC gencode flags this library was compiled with.
torch.cuda.init() [source]
Initialize PyTorch’s CUDA state. You may need to call this explicitly if you are interacting with PyTorch via its C API, as Python bindings for CUDA functionality will not be available until this initialization takes place. Ordinary users should not need this, as all of PyTorch’s CUDA methods automatically initialize CUDA state on-demand. Does nothing if the CUDA state is already initialized.
torch.cuda.ipc_collect() [source]
Force collects GPU memory after it has been released by CUDA IPC. Note Checks if any sent CUDA tensors could be cleaned from the memory. Force closes shared memory file used for reference counting if there are no active counters. Useful when the producer process stopped actively sending tensors and you want to release unused memory.
torch.cuda.is_available() [source]
Returns a bool indicating if CUDA is currently available.
torch.cuda.is_initialized() [source]
Returns whether PyTorch’s CUDA state has been initialized.
torch.cuda.set_device(device) [source]
Sets the current device. Usage of this function is discouraged in favor of device. In most cases it’s better to use CUDA_VISIBLE_DEVICES environmental variable. Parameters
device (torch.device or int) – selected device. This function is a no-op if this argument is negative.
torch.cuda.stream(stream) [source]
Context-manager that selects a given stream. All CUDA kernels queued within its context will be enqueued on a selected stream. Parameters
stream (Stream) – selected stream. This manager is a no-op if it’s None. Note Streams are per-device. If the selected stream is not on the current device, this function will also change the current device to match the stream.
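A minimal usage sketch, assuming CUDA is available; the matrix size is arbitrary, and the wait_stream() calls order the side stream relative to the default stream:
import torch

if torch.cuda.is_available():
    s = torch.cuda.Stream()                       # a side stream on the current device
    a = torch.randn(1000, 1000, device='cuda')    # produced on the default stream
    s.wait_stream(torch.cuda.current_stream())    # `s` waits for work already queued on the default stream
    with torch.cuda.stream(s):
        b = torch.mm(a, a)                        # this kernel is enqueued on `s`
    torch.cuda.current_stream().wait_stream(s)    # default stream waits for `s` before consuming `b`
    print(b.sum().item())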
torch.cuda.synchronize(device=None) [source]
Waits for all kernels in all streams on a CUDA device to complete. Parameters
device (torch.device or int, optional) – device for which to synchronize. It uses the current device, given by current_device(), if device is None (default).
Random Number Generator
torch.cuda.get_rng_state(device='cuda') [source]
Returns the random number generator state of the specified GPU as a ByteTensor. Parameters
device (torch.device or int, optional) – The device to return the RNG state of. Default: 'cuda' (i.e., torch.device('cuda'), the current CUDA device). Warning This function eagerly initializes CUDA.
torch.cuda.get_rng_state_all() [source]
Returns a list of ByteTensor representing the random number states of all devices.
torch.cuda.set_rng_state(new_state, device='cuda') [source]
Sets the random number generator state of the specified GPU. Parameters
new_state (torch.ByteTensor) – The desired state
device (torch.device or int, optional) – The device to set the RNG state. Default: 'cuda' (i.e., torch.device('cuda'), the current CUDA device).
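A minimal sketch, assuming CUDA is available, showing how get_rng_state() and set_rng_state() can replay the same random draws:
import torch

if torch.cuda.is_available():
    state = torch.cuda.get_rng_state()     # snapshot the current GPU's RNG state
    a = torch.randn(3, device='cuda')
    torch.cuda.set_rng_state(state)        # restore the snapshot
    b = torch.randn(3, device='cuda')
    print(torch.equal(a, b))               # True: the same random numbers are drawn again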
torch.cuda.set_rng_state_all(new_states) [source]
Sets the random number generator state of all devices. Parameters
new_states (Iterable of torch.ByteTensor) – The desired state for each device
torch.cuda.manual_seed(seed) [source]
Sets the seed for generating random numbers for the current GPU. It’s safe to call this function if CUDA is not available; in that case, it is silently ignored. Parameters
seed (int) – The desired seed. Warning If you are working with a multi-GPU model, this function is insufficient to get determinism. To seed all GPUs, use manual_seed_all().
torch.cuda.manual_seed_all(seed) [source]
Sets the seed for generating random numbers on all GPUs. It’s safe to call this function if CUDA is not available; in that case, it is silently ignored. Parameters
seed (int) – The desired seed.
torch.cuda.seed() [source]
Sets the seed for generating random numbers to a random number for the current GPU. It’s safe to call this function if CUDA is not available; in that case, it is silently ignored. Warning If you are working with a multi-GPU model, this function will only initialize the seed on one GPU. To initialize all GPUs, use seed_all().
torch.cuda.seed_all() [source]
Sets the seed for generating random numbers to a random number on all GPUs. It’s safe to call this function if CUDA is not available; in that case, it is silently ignored.
torch.cuda.initial_seed() [source]
Returns the current random seed of the current GPU. Warning This function eagerly initializes CUDA.
Communication collectives
torch.cuda.comm.broadcast(tensor, devices=None, *, out=None) [source]
Broadcasts a tensor to specified GPU devices. Parameters
tensor (Tensor) – tensor to broadcast. Can be on CPU or GPU.
devices (Iterable[torch.device, str or int], optional) – an iterable of GPU devices, among which to broadcast.
out (Sequence[Tensor], optional, keyword-only) – the GPU tensors to store output results. Note Exactly one of devices and out must be specified. Returns
If devices is specified,
a tuple containing copies of tensor, placed on devices.
If out is specified,
a tuple containing out tensors, each containing a copy of tensor.
torch.cuda.comm.broadcast_coalesced(tensors, devices, buffer_size=10485760) [source]
Broadcasts a sequence of tensors to the specified GPUs. Small tensors are first coalesced into a buffer to reduce the number of synchronizations. Parameters
tensors (sequence) – tensors to broadcast. Must be on the same device, either CPU or GPU.
devices (Iterable[torch.device, str or int]) – an iterable of GPU devices, among which to broadcast.
buffer_size (int) – maximum size of the buffer used for coalescing Returns
A tuple containing copies of tensor, placed on devices.
torch.cuda.comm.reduce_add(inputs, destination=None) [source]
Sums tensors from multiple GPUs. All inputs should have matching shapes, dtype, and layout. The output tensor will be of the same shape, dtype, and layout. Parameters
inputs (Iterable[Tensor]) – an iterable of tensors to add.
destination (int, optional) – a device on which the output will be placed (default: current device). Returns
A tensor containing an elementwise sum of all inputs, placed on the destination device.
torch.cuda.comm.scatter(tensor, devices=None, chunk_sizes=None, dim=0, streams=None, *, out=None) [source]
Scatters tensor across multiple GPUs. Parameters
tensor (Tensor) – tensor to scatter. Can be on CPU or GPU.
devices (Iterable[torch.device, str or int], optional) – an iterable of GPU devices, among which to scatter.
chunk_sizes (Iterable[int], optional) – sizes of chunks to be placed on each device. It should match devices in length and sums to tensor.size(dim). If not specified, tensor will be divided into equal chunks.
dim (int, optional) – A dimension along which to chunk tensor. Default: 0.
streams (Iterable[Stream], optional) – an iterable of Streams, among which to execute the scatter. If not specified, the default stream will be utilized.
out (Sequence[Tensor], optional, keyword-only) – the GPU tensors to store output results. Sizes of these tensors must match that of tensor, except for dim, where the total size must sum to tensor.size(dim). Note Exactly one of devices and out must be specified. When out is specified, chunk_sizes must not be specified and will be inferred from sizes of out. Returns
If devices is specified,
a tuple containing chunks of tensor, placed on devices.
If out is specified,
a tuple containing out tensors, each containing a chunk of tensor.
torch.cuda.comm.gather(tensors, dim=0, destination=None, *, out=None) [source]
Gathers tensors from multiple GPU devices. Parameters
tensors (Iterable[Tensor]) – an iterable of tensors to gather. Tensor sizes in all dimensions other than dim have to match.
dim (int, optional) – a dimension along which the tensors will be concatenated. Default: 0.
destination (torch.device, str, or int, optional) – the output device. Can be CPU or CUDA. Default: the current CUDA device.
out (Tensor, optional, keyword-only) – the tensor to store gather result. Its sizes must match those of tensors, except for dim, where the size must equal sum(tensor.size(dim) for tensor in tensors). Can be on CPU or CUDA. Note destination must not be specified when out is specified. Returns
If destination is specified,
a tensor located on destination device, that is a result of concatenating tensors along dim.
If out is specified,
the out tensor, now containing results of concatenating tensors along dim.
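A minimal sketch of scatter() and gather(), assuming a machine with at least two CUDA devices; the tensor contents are arbitrary:
import torch
import torch.cuda.comm as comm

if torch.cuda.is_available() and torch.cuda.device_count() >= 2:
    t = torch.arange(8.).reshape(4, 2)
    chunks = comm.scatter(t, devices=[0, 1])   # one chunk of `t` (split along dim 0) per GPU
    print([c.device for c in chunks])          # devices cuda:0 and cuda:1
    gathered = comm.gather(chunks, dim=0)      # concatenated back on the current CUDA device
    print(torch.equal(gathered.cpu(), t))      # True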
Streams and events
class torch.cuda.Stream [source]
Wrapper around a CUDA stream. A CUDA stream is a linear sequence of execution that belongs to a specific device, independent from other streams. See CUDA semantics for details. Parameters
device (torch.device or int, optional) – a device on which to allocate the stream. If device is None (default) or a negative integer, this will use the current device.
priority (int, optional) – priority of the stream. Can be either -1 (high priority) or 0 (low priority). By default, streams have priority 0. Note Although CUDA versions >= 11 support more than two levels of priorities, in PyTorch, we only support two levels of priorities.
query() [source]
Checks if all the work submitted has been completed. Returns
A boolean indicating if all kernels in this stream are completed.
record_event(event=None) [source]
Records an event. Parameters
event (Event, optional) – event to record. If not given, a new one will be allocated. Returns
Recorded event.
synchronize() [source]
Wait for all the kernels in this stream to complete. Note This is a wrapper around cudaStreamSynchronize(): see CUDA Stream documentation for more info.
wait_event(event) [source]
Makes all future work submitted to the stream wait for an event. Parameters
event (Event) – an event to wait for. Note This is a wrapper around cudaStreamWaitEvent(): see CUDA Stream documentation for more info. This function returns without waiting for event: only future operations are affected.
wait_stream(stream) [source]
Synchronizes with another stream. All future work submitted to this stream will wait until all kernels submitted to a given stream at the time of call complete. Parameters
stream (Stream) – a stream to synchronize. Note This function returns without waiting for currently enqueued kernels in stream: only future operations are affected.
class torch.cuda.Event [source]
Wrapper around a CUDA event. CUDA events are synchronization markers that can be used to monitor the device’s progress, to accurately measure timing, and to synchronize CUDA streams. The underlying CUDA events are lazily initialized when the event is first recorded or exported to another process. After creation, only streams on the same device may record the event. However, streams on any device can wait on the event. Parameters
enable_timing (bool, optional) – indicates if the event should measure time (default: False)
blocking (bool, optional) – if True, wait() will be blocking (default: False)
interprocess (bool) – if True, the event can be shared between processes (default: False)
elapsed_time(end_event) [source]
Returns the time elapsed in milliseconds after the event was recorded and before the end_event was recorded.
classmethod from_ipc_handle(device, handle) [source]
Reconstruct an event from an IPC handle on the given device.
ipc_handle() [source]
Returns an IPC handle of this event. If not recorded yet, the event will use the current device.
query() [source]
Checks if all work currently captured by event has completed. Returns
A boolean indicating if all work currently captured by event has completed.
record(stream=None) [source]
Records the event in a given stream. Uses torch.cuda.current_stream() if no stream is specified. The stream’s device must match the event’s device.
synchronize() [source]
Waits for the event to complete. Waits until the completion of all work currently captured in this event. This prevents the CPU thread from proceeding until the event completes. Note This is a wrapper around cudaEventSynchronize(): see CUDA Event documentation for more info.
wait(stream=None) [source]
Makes all future work submitted to the given stream wait for this event. Use torch.cuda.current_stream() if no stream is specified.
Memory management
torch.cuda.empty_cache() [source]
Releases all unoccupied cached memory currently held by the caching allocator so that it can be used in other GPU applications and be visible in nvidia-smi. Note empty_cache() doesn’t increase the amount of GPU memory available for PyTorch. However, it may help reduce fragmentation of GPU memory in certain cases. See Memory management for more details about GPU memory management.
torch.cuda.list_gpu_processes(device=None) [source]
Returns a human-readable printout of the running processes and their GPU memory use for a given device. This can be useful to display periodically during training, or when handling out-of-memory exceptions. Parameters
device (torch.device or int, optional) – selected device. Returns printout for the current device, given by current_device(), if device is None (default).
torch.cuda.memory_stats(device=None) [source]
Returns a dictionary of CUDA memory allocator statistics for a given device. The return value of this function is a dictionary of statistics, each of which is a non-negative integer. Core statistics:
"allocated.{all,large_pool,small_pool}.{current,peak,allocated,freed}": number of allocation requests received by the memory allocator.
"allocated_bytes.{all,large_pool,small_pool}.{current,peak,allocated,freed}": amount of allocated memory.
"segment.{all,large_pool,small_pool}.{current,peak,allocated,freed}": number of reserved segments from cudaMalloc().
"reserved_bytes.{all,large_pool,small_pool}.{current,peak,allocated,freed}": amount of reserved memory.
"active.{all,large_pool,small_pool}.{current,peak,allocated,freed}": number of active memory blocks.
"active_bytes.{all,large_pool,small_pool}.{current,peak,allocated,freed}": amount of active memory.
"inactive_split.{all,large_pool,small_pool}.{current,peak,allocated,freed}": number of inactive, non-releasable memory blocks.
"inactive_split_bytes.{all,large_pool,small_pool}.{current,peak,allocated,freed}": amount of inactive, non-releasable memory. For these core statistics, values are broken down as follows. Pool type:
all: combined statistics across all memory pools.
large_pool: statistics for the large allocation pool (as of October 2019, for size >= 1MB allocations).
small_pool: statistics for the small allocation pool (as of October 2019, for size < 1MB allocations). Metric type:
current: current value of this metric.
peak: maximum value of this metric.
allocated: historical total increase in this metric.
freed: historical total decrease in this metric. In addition to the core statistics, we also provide some simple event counters:
"num_alloc_retries": number of failed cudaMalloc calls that result in a cache flush and retry.
"num_ooms": number of out-of-memory errors thrown. Parameters
device (torch.device or int, optional) – selected device. Returns statistics for the current device, given by current_device(), if device is None (default). Note See Memory management for more details about GPU memory management.
torch.cuda.memory_summary(device=None, abbreviated=False) [source]
Returns a human-readable printout of the current memory allocator statistics for a given device. This can be useful to display periodically during training, or when handling out-of-memory exceptions. Parameters
device (torch.device or int, optional) – selected device. Returns printout for the current device, given by current_device(), if device is None (default).
abbreviated (bool, optional) – whether to return an abbreviated summary (default: False). Note See Memory management for more details about GPU memory management.
torch.cuda.memory_snapshot() [source]
Returns a snapshot of the CUDA memory allocator state across all devices. Interpreting the output of this function requires familiarity with the memory allocator internals. Note See Memory management for more details about GPU memory management.
torch.cuda.memory_allocated(device=None) [source]
Returns the current GPU memory occupied by tensors in bytes for a given device. Parameters
device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default). Note This is likely less than the amount shown in nvidia-smi since some unused memory can be held by the caching allocator and some context needs to be created on GPU. See Memory management for more details about GPU memory management.
torch.cuda.max_memory_allocated(device=None) [source]
Returns the maximum GPU memory occupied by tensors in bytes for a given device. By default, this returns the peak allocated memory since the beginning of this program. reset_peak_memory_stats() can be used to reset the starting point in tracking this metric. For example, these two functions can measure the peak allocated memory usage of each iteration in a training loop, as sketched below. Parameters
device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default). Note See Memory management for more details about GPU memory management.
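A minimal sketch of per-iteration peak tracking; `model` and `data` are illustrative assumptions, not part of this API:

import torch

for i, batch in enumerate(data):
    torch.cuda.reset_peak_memory_stats()       # restart peak tracking for this iteration
    out = model(batch.cuda())
    peak = torch.cuda.max_memory_allocated()   # peak bytes allocated during this iteration
    print(f"iter {i}: peak allocated {peak / 1024**2:.1f} MiB")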
torch.cuda.reset_max_memory_allocated(device=None) [source]
Resets the starting point in tracking maximum GPU memory occupied by tensors for a given device. See max_memory_allocated() for details. Parameters
device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default). Warning This function now calls reset_peak_memory_stats(), which resets /all/ peak memory stats. Note See Memory management for more details about GPU memory management.
torch.cuda.memory_reserved(device=None) [source]
Returns the current GPU memory managed by the caching allocator in bytes for a given device. Parameters
device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default). Note See Memory management for more details about GPU memory management.
torch.cuda.max_memory_reserved(device=None) [source]
Returns the maximum GPU memory managed by the caching allocator in bytes for a given device. By default, this returns the peak cached memory since the beginning of this program. reset_peak_memory_stats() can be used to reset the starting point in tracking this metric. For example, these two functions can measure the peak cached memory amount of each iteration in a training loop. Parameters
device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default). Note See Memory management for more details about GPU memory management.
torch.cuda.set_per_process_memory_fraction(fraction, device=None) [source]
Set memory fraction for a process. The fraction is used to limit the caching allocator to a share of memory on a CUDA device. The allowed value equals the total visible memory multiplied by fraction. If a process tries to allocate more than the allowed value, an out-of-memory error is raised by the allocator. Parameters
fraction (float) – Range: 0~1. Allowed memory equals total_memory * fraction.
device (torch.device or int, optional) – selected device. If it is None the default CUDA device is used. Note In general, the total available free memory is less than the total capacity.
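A minimal sketch of capping this process at roughly half of device 0’s visible memory (the 0.5 fraction is an illustrative choice); allocations beyond the cap raise a CUDA out-of-memory error:

import torch

torch.cuda.set_per_process_memory_fraction(0.5, device=0)
total = torch.cuda.get_device_properties(0).total_memory
print(f"allowed ~{0.5 * total / 1024**3:.1f} GiB of {total / 1024**3:.1f} GiB")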
torch.cuda.memory_cached(device=None) [source]
Deprecated; see memory_reserved().
torch.cuda.max_memory_cached(device=None) [source]
Deprecated; see max_memory_reserved().
torch.cuda.reset_max_memory_cached(device=None) [source]
Resets the starting point in tracking maximum GPU memory managed by the caching allocator for a given device. See max_memory_cached() for details. Parameters
device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default). Warning This function now calls reset_peak_memory_stats(), which resets /all/ peak memory stats. Note See Memory management for more details about GPU memory management.
NVIDIA Tools Extension (NVTX)
torch.cuda.nvtx.mark(msg) [source]
Describe an instantaneous event that occurred at some point. Parameters
msg (string) – ASCII message to associate with the event.
torch.cuda.nvtx.range_push(msg) [source]
Pushes a range onto a stack of nested range spans. Returns the zero-based depth of the range that is started. Parameters
msg (string) – ASCII message to associate with range
torch.cuda.nvtx.range_pop() [source]
Pops a range off of a stack of nested range spans. Returns the zero-based depth of the range that is ended. | torch.cuda |
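A minimal sketch of NVTX annotation, assuming the program is run under a profiler such as Nsight Systems so that the marks and ranges are visible; the range name and the elided forward pass are illustrative:

import torch

torch.cuda.nvtx.mark("start of epoch")
depth = torch.cuda.nvtx.range_push("forward")   # zero-based depth of the new range
# ... run the forward pass here ...
torch.cuda.nvtx.range_pop()                     # closes the "forward" range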
Automatic Mixed Precision package - torch.cuda.amp torch.cuda.amp provides convenience methods for mixed precision, where some operations use the torch.float32 (float) datatype and other operations use torch.float16 (half). Some ops, like linear layers and convolutions, are much faster in float16. Other ops, like reductions, often require the dynamic range of float32. Mixed precision tries to match each op to its appropriate datatype. Ordinarily, “automatic mixed precision training” uses torch.cuda.amp.autocast and torch.cuda.amp.GradScaler together, as shown in the Automatic Mixed Precision examples and Automatic Mixed Precision recipe. However, autocast and GradScaler are modular, and may be used separately if desired.
Autocasting
class torch.cuda.amp.autocast(enabled=True) [source]
Instances of autocast serve as context managers or decorators that allow regions of your script to run in mixed precision. In these regions, CUDA ops run in an op-specific dtype chosen by autocast to improve performance while maintaining accuracy. See the Autocast Op Reference for details. When entering an autocast-enabled region, Tensors may be any type. You should not call .half() on your model(s) or inputs when using autocasting. autocast should wrap only the forward pass(es) of your network, including the loss computation(s). Backward passes under autocast are not recommended. Backward ops run in the same type that autocast used for corresponding forward ops. Example: # Creates model and optimizer in default precision
model = Net().cuda()
optimizer = optim.SGD(model.parameters(), ...)
for input, target in data:
    optimizer.zero_grad()
    # Enables autocasting for the forward pass (model + loss)
    with autocast():
        output = model(input)
        loss = loss_fn(output, target)
    # Exits the context manager before backward()
    loss.backward()
    optimizer.step()
See the Automatic Mixed Precision examples for usage (along with gradient scaling) in more complex scenarios (e.g., gradient penalty, multiple models/losses, custom autograd functions). autocast can also be used as a decorator, e.g., on the forward method of your model: class AutocastModel(nn.Module):
    ...
    @autocast()
    def forward(self, input):
        ...
Floating-point Tensors produced in an autocast-enabled region may be float16. After returning to an autocast-disabled region, using them with floating-point Tensors of different dtypes may cause type mismatch errors. If so, cast the Tensor(s) produced in the autocast region back to float32 (or other dtype if desired). If a Tensor from the autocast region is already float32, the cast is a no-op, and incurs no additional overhead. Example: # Creates some tensors in default dtype (here assumed to be float32)
a_float32 = torch.rand((8, 8), device="cuda")
b_float32 = torch.rand((8, 8), device="cuda")
c_float32 = torch.rand((8, 8), device="cuda")
d_float32 = torch.rand((8, 8), device="cuda")
with autocast():
    # torch.mm is on autocast's list of ops that should run in float16.
    # Inputs are float32, but the op runs in float16 and produces float16 output.
    # No manual casts are required.
    e_float16 = torch.mm(a_float32, b_float32)
    # Also handles mixed input types
    f_float16 = torch.mm(d_float32, e_float16)
# After exiting autocast, calls f_float16.float() to use with d_float32
g_float32 = torch.mm(d_float32, f_float16.float())
Type mismatch errors in an autocast-enabled region are a bug; if this is what you observe, please file an issue. autocast(enabled=False) subregions can be nested in autocast-enabled regions. Locally disabling autocast can be useful, for example, if you want to force a subregion to run in a particular dtype. Disabling autocast gives you explicit control over the execution type. In the subregion, inputs from the surrounding region should be cast to dtype before use: # Creates some tensors in default dtype (here assumed to be float32)
a_float32 = torch.rand((8, 8), device="cuda")
b_float32 = torch.rand((8, 8), device="cuda")
c_float32 = torch.rand((8, 8), device="cuda")
d_float32 = torch.rand((8, 8), device="cuda")
with autocast():
    e_float16 = torch.mm(a_float32, b_float32)
    with autocast(enabled=False):
        # Calls e_float16.float() to ensure float32 execution
        # (necessary because e_float16 was created in an autocasted region)
        f_float32 = torch.mm(c_float32, e_float16.float())
    # No manual casts are required when re-entering the autocast-enabled region.
    # torch.mm again runs in float16 and produces float16 output, regardless of input types.
    g_float16 = torch.mm(d_float32, f_float32)
The autocast state is thread-local. If you want it enabled in a new thread, the context manager or decorator must be invoked in that thread. This affects torch.nn.DataParallel and torch.nn.parallel.DistributedDataParallel when used with more than one GPU per process (see Working with Multiple GPUs). Parameters
enabled (bool, optional, default=True) – Whether autocasting should be enabled in the region.
torch.cuda.amp.custom_fwd(fwd=None, **kwargs) [source]
Helper decorator for forward methods of custom autograd functions (subclasses of torch.autograd.Function). See the example page for more detail. Parameters
cast_inputs (torch.dtype or None, optional, default=None) – If not None, when forward runs in an autocast-enabled region, casts incoming floating-point CUDA Tensors to the target dtype (non-floating-point Tensors are not affected), then executes forward with autocast disabled. If None, forward’s internal ops execute with the current autocast state. Note If the decorated forward is called outside an autocast-enabled region, custom_fwd is a no-op and cast_inputs has no effect.
torch.cuda.amp.custom_bwd(bwd) [source]
Helper decorator for backward methods of custom autograd functions (subclasses of torch.autograd.Function). Ensures that backward executes with the same autocast state as forward. See the example page for more detail.
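A minimal sketch of a custom autograd Function decorated with custom_fwd and custom_bwd; the MyMM name and its float32 requirement are illustrative assumptions, not part of the API:

import torch
from torch.cuda.amp import custom_fwd, custom_bwd

class MyMM(torch.autograd.Function):
    @staticmethod
    @custom_fwd(cast_inputs=torch.float32)   # run forward in float32 inside autocast regions
    def forward(ctx, a, b):
        ctx.save_for_backward(a, b)
        return a.mm(b)

    @staticmethod
    @custom_bwd                              # backward reuses forward's autocast state
    def backward(ctx, grad):
        a, b = ctx.saved_tensors
        return grad.mm(b.t()), a.t().mm(grad)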
Gradient Scaling If the forward pass for a particular op has float16 inputs, the backward pass for that op will produce float16 gradients. Gradient values with small magnitudes may not be representable in float16. These values will flush to zero (“underflow”), so the update for the corresponding parameters will be lost. To prevent underflow, “gradient scaling” multiplies the network’s loss(es) by a scale factor and invokes a backward pass on the scaled loss(es). Gradients flowing backward through the network are then scaled by the same factor. In other words, gradient values have a larger magnitude, so they don’t flush to zero. Each parameter’s gradient (.grad attribute) should be unscaled before the optimizer updates the parameters, so the scale factor does not interfere with the learning rate.
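A minimal sketch of how autocast and GradScaler are typically combined in a training loop; `model`, `optimizer`, `loss_fn`, and `data` are assumed to exist and their names are illustrative:

import torch
from torch.cuda.amp import autocast, GradScaler

scaler = GradScaler()
for input, target in data:
    optimizer.zero_grad()
    with autocast():
        output = model(input)
        loss = loss_fn(output, target)
    scaler.scale(loss).backward()   # backward on the scaled loss
    scaler.step(optimizer)          # unscales grads, skips the step on inf/NaN
    scaler.update()                 # adjusts the scale for the next iteration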
class torch.cuda.amp.GradScaler(init_scale=65536.0, growth_factor=2.0, backoff_factor=0.5, growth_interval=2000, enabled=True) [source]
get_backoff_factor() [source]
Returns a Python float containing the scale backoff factor.
get_growth_factor() [source]
Returns a Python float containing the scale growth factor.
get_growth_interval() [source]
Returns a Python int containing the growth interval.
get_scale() [source]
Returns a Python float containing the current scale, or 1.0 if scaling is disabled. Warning get_scale() incurs a CPU-GPU sync.
is_enabled() [source]
Returns a bool indicating whether this instance is enabled.
load_state_dict(state_dict) [source]
Loads the scaler state. If this instance is disabled, load_state_dict() is a no-op. Parameters
state_dict (dict) – scaler state. Should be an object returned from a call to state_dict().
scale(outputs) [source]
Multiplies (‘scales’) a tensor or list of tensors by the scale factor. Returns scaled outputs. If this instance of GradScaler is not enabled, outputs are returned unmodified. Parameters
outputs (Tensor or iterable of Tensors) – Outputs to scale.
set_backoff_factor(new_factor) [source]
Parameters
new_factor (float) – Value to use as the new scale backoff factor.
set_growth_factor(new_factor) [source]
Parameters
new_factor (float) – Value to use as the new scale growth factor.
set_growth_interval(new_interval) [source]
Parameters
new_interval (int) – Value to use as the new growth interval.
state_dict() [source]
Returns the state of the scaler as a dict. It contains five entries:
"scale" - a Python float containing the current scale
"growth_factor" - a Python float containing the current growth factor
"backoff_factor" - a Python float containing the current backoff factor
"growth_interval" - a Python int containing the current growth interval
"_growth_tracker" - a Python int containing the number of recent consecutive unskipped steps. If this instance is not enabled, returns an empty dict. Note If you wish to checkpoint the scaler’s state after a particular iteration, state_dict() should be called after update().
step(optimizer, *args, **kwargs) [source]
step() carries out the following two operations: Internally invokes unscale_(optimizer) (unless unscale_() was explicitly called for optimizer earlier in the iteration). As part of the unscale_(), gradients are checked for infs/NaNs. If no inf/NaN gradients are found, invokes optimizer.step() using the unscaled gradients. Otherwise, optimizer.step() is skipped to avoid corrupting the params. *args and **kwargs are forwarded to optimizer.step(). Returns the return value of optimizer.step(*args, **kwargs). Parameters
optimizer (torch.optim.Optimizer) – Optimizer that applies the gradients.
args – Any arguments.
kwargs – Any keyword arguments. Warning Closure use is not currently supported.
unscale_(optimizer) [source]
Divides (“unscales”) the optimizer’s gradient tensors by the scale factor. unscale_() is optional, serving cases where you need to modify or inspect gradients between the backward pass(es) and step(). If unscale_() is not called explicitly, gradients will be unscaled automatically during step(). Simple example, using unscale_() to enable clipping of unscaled gradients: ...
scaler.scale(loss).backward()
scaler.unscale_(optimizer)
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
scaler.step(optimizer)
scaler.update()
Parameters
optimizer (torch.optim.Optimizer) – Optimizer that owns the gradients to be unscaled. Note unscale_() does not incur a CPU-GPU sync. Warning unscale_() should only be called once per optimizer per step() call, and only after all gradients for that optimizer’s assigned parameters have been accumulated. Calling unscale_() twice for a given optimizer between each step() triggers a RuntimeError. Warning unscale_() may unscale sparse gradients out of place, replacing the .grad attribute.
update(new_scale=None) [source]
Updates the scale factor. If any optimizer steps were skipped the scale is multiplied by backoff_factor to reduce it. If growth_interval unskipped iterations occurred consecutively, the scale is multiplied by growth_factor to increase it. Passing new_scale sets the scale directly. Parameters
new_scale (float or torch.cuda.FloatTensor, optional, default=None) – New scale factor. Warning update() should only be called at the end of the iteration, after scaler.step(optimizer) has been invoked for all optimizers used this iteration.
Autocast Op Reference
Op Eligibility
Only CUDA ops are eligible for autocasting. Ops that run in float64 or non-floating-point dtypes are not eligible, and will run in these types whether or not autocast is enabled. Only out-of-place ops and Tensor methods are eligible. In-place variants and calls that explicitly supply an out=... Tensor are allowed in autocast-enabled regions, but won’t go through autocasting. For example, in an autocast-enabled region a.addmm(b, c) can autocast, but a.addmm_(b, c) and a.addmm(b, c, out=d) cannot. For best performance and stability, prefer out-of-place ops in autocast-enabled regions. Ops called with an explicit dtype=... argument are not eligible, and will produce output that respects the dtype argument.
Op-Specific Behavior
The following lists describe the behavior of eligible ops in autocast-enabled regions. These ops always go through autocasting whether they are invoked as part of a torch.nn.Module, as a function, or as a torch.Tensor method. If functions are exposed in multiple namespaces, they go through autocasting regardless of the namespace. Ops not listed below do not go through autocasting. They run in the type defined by their inputs. However, autocasting may still change the type in which unlisted ops run if they’re downstream from autocasted ops. If an op is unlisted, we assume it’s numerically stable in float16. If you believe an unlisted op is numerically unstable in float16, please file an issue.
Ops that can autocast to float16
__matmul__, addbmm, addmm, addmv, addr, baddbmm, bmm, chain_matmul, conv1d, conv2d, conv3d, conv_transpose1d, conv_transpose2d, conv_transpose3d, GRUCell, linear, LSTMCell, matmul, mm, mv, prelu, RNNCell
Ops that can autocast to float32
__pow__, __rdiv__, __rpow__, __rtruediv__, acos, asin, binary_cross_entropy_with_logits, cosh, cosine_embedding_loss, cdist, cosine_similarity, cross_entropy, cumprod, cumsum, dist, erfinv, exp, expm1, gelu, group_norm, hinge_embedding_loss, kl_div, l1_loss, layer_norm, log, log_softmax, log10, log1p, log2, margin_ranking_loss, mse_loss, multilabel_margin_loss, multi_margin_loss, nll_loss, norm, normalize, pdist, poisson_nll_loss, pow, prod, reciprocal, rsqrt, sinh, smooth_l1_loss, soft_margin_loss, softmax, softmin, softplus, sum, renorm, tan, triplet_margin_loss
Ops that promote to the widest input type
These ops don’t require a particular dtype for stability, but take multiple inputs and require that the inputs’ dtypes match. If all of the inputs are float16, the op runs in float16. If any of the inputs is float32, autocast casts all inputs to float32 and runs the op in float32. addcdiv, addcmul, atan2, bilinear, cat, cross, dot, equal, index_put, stack, tensordot Some ops not listed here (e.g., binary ops like add) natively promote inputs without autocasting’s intervention. If inputs are a mixture of float16 and float32, these ops run in float32 and produce float32 output, regardless of whether autocast is enabled.
Prefer binary_cross_entropy_with_logits over binary_cross_entropy
The backward passes of torch.nn.functional.binary_cross_entropy() (and torch.nn.BCELoss, which wraps it) can produce gradients that aren’t representable in float16. In autocast-enabled regions, the forward input may be float16, which means the backward gradient must be representable in float16 (autocasting float16 forward inputs to float32 doesn’t help, because that cast must be reversed in backward). Therefore, binary_cross_entropy and BCELoss raise an error in autocast-enabled regions. Many models use a sigmoid layer right before the binary cross entropy layer. In this case, combine the two layers using torch.nn.functional.binary_cross_entropy_with_logits() or torch.nn.BCEWithLogitsLoss. binary_cross_entropy_with_logits and BCEWithLogits are safe to autocast. | torch.amp |
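A minimal sketch of the recommended pattern, using torch.nn.BCEWithLogitsLoss inside an autocast region; the tensor shapes are illustrative assumptions:

import torch
import torch.nn as nn
from torch.cuda.amp import autocast

logits = torch.randn(8, 1, device="cuda", requires_grad=True)
targets = torch.rand(8, 1, device="cuda")

loss_fn = nn.BCEWithLogitsLoss()
with autocast():
    loss = loss_fn(logits, targets)   # allowed; sigmoid + BCELoss would raise here
loss.backward()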
class torch.cuda.amp.autocast(enabled=True) [source]
Instances of autocast serve as context managers or decorators that allow regions of your script to run in mixed precision. In these regions, CUDA ops run in an op-specific dtype chosen by autocast to improve performance while maintaining accuracy. See the Autocast Op Reference for details. When entering an autocast-enabled region, Tensors may be any type. You should not call .half() on your model(s) or inputs when using autocasting. autocast should wrap only the forward pass(es) of your network, including the loss computation(s). Backward passes under autocast are not recommended. Backward ops run in the same type that autocast used for corresponding forward ops. Example: # Creates model and optimizer in default precision
model = Net().cuda()
optimizer = optim.SGD(model.parameters(), ...)
for input, target in data:
    optimizer.zero_grad()
    # Enables autocasting for the forward pass (model + loss)
    with autocast():
        output = model(input)
        loss = loss_fn(output, target)
    # Exits the context manager before backward()
    loss.backward()
    optimizer.step()
See the Automatic Mixed Precision examples for usage (along with gradient scaling) in more complex scenarios (e.g., gradient penalty, multiple models/losses, custom autograd functions). autocast can also be used as a decorator, e.g., on the forward method of your model: class AutocastModel(nn.Module):
    ...
    @autocast()
    def forward(self, input):
        ...
Floating-point Tensors produced in an autocast-enabled region may be float16. After returning to an autocast-disabled region, using them with floating-point Tensors of different dtypes may cause type mismatch errors. If so, cast the Tensor(s) produced in the autocast region back to float32 (or other dtype if desired). If a Tensor from the autocast region is already float32, the cast is a no-op, and incurs no additional overhead. Example: # Creates some tensors in default dtype (here assumed to be float32)
a_float32 = torch.rand((8, 8), device="cuda")
b_float32 = torch.rand((8, 8), device="cuda")
c_float32 = torch.rand((8, 8), device="cuda")
d_float32 = torch.rand((8, 8), device="cuda")
with autocast():
    # torch.mm is on autocast's list of ops that should run in float16.
    # Inputs are float32, but the op runs in float16 and produces float16 output.
    # No manual casts are required.
    e_float16 = torch.mm(a_float32, b_float32)
    # Also handles mixed input types
    f_float16 = torch.mm(d_float32, e_float16)
# After exiting autocast, calls f_float16.float() to use with d_float32
g_float32 = torch.mm(d_float32, f_float16.float())
Type mismatch errors in an autocast-enabled region are a bug; if this is what you observe, please file an issue. autocast(enabled=False) subregions can be nested in autocast-enabled regions. Locally disabling autocast can be useful, for example, if you want to force a subregion to run in a particular dtype. Disabling autocast gives you explicit control over the execution type. In the subregion, inputs from the surrounding region should be cast to dtype before use: # Creates some tensors in default dtype (here assumed to be float32)
a_float32 = torch.rand((8, 8), device="cuda")
b_float32 = torch.rand((8, 8), device="cuda")
c_float32 = torch.rand((8, 8), device="cuda")
d_float32 = torch.rand((8, 8), device="cuda")
with autocast():
    e_float16 = torch.mm(a_float32, b_float32)
    with autocast(enabled=False):
        # Calls e_float16.float() to ensure float32 execution
        # (necessary because e_float16 was created in an autocasted region)
        f_float32 = torch.mm(c_float32, e_float16.float())
    # No manual casts are required when re-entering the autocast-enabled region.
    # torch.mm again runs in float16 and produces float16 output, regardless of input types.
    g_float16 = torch.mm(d_float32, f_float32)
The autocast state is thread-local. If you want it enabled in a new thread, the context manager or decorator must be invoked in that thread. This affects torch.nn.DataParallel and torch.nn.parallel.DistributedDataParallel when used with more than one GPU per process (see Working with Multiple GPUs). Parameters
enabled (bool, optional, default=True) – Whether autocasting should be enabled in the region. | torch.amp#torch.cuda.amp.autocast |
torch.cuda.amp.custom_bwd(bwd) [source]
Helper decorator for backward methods of custom autograd functions (subclasses of torch.autograd.Function). Ensures that backward executes with the same autocast state as forward. See the example page for more detail. | torch.amp#torch.cuda.amp.custom_bwd |
torch.cuda.amp.custom_fwd(fwd=None, **kwargs) [source]
Helper decorator for forward methods of custom autograd functions (subclasses of torch.autograd.Function). See the example page for more detail. Parameters
cast_inputs (torch.dtype or None, optional, default=None) – If not None, when forward runs in an autocast-enabled region, casts incoming floating-point CUDA Tensors to the target dtype (non-floating-point Tensors are not affected), then executes forward with autocast disabled. If None, forward’s internal ops execute with the current autocast state. Note If the decorated forward is called outside an autocast-enabled region, custom_fwd is a no-op and cast_inputs has no effect. | torch.amp#torch.cuda.amp.custom_fwd |
class torch.cuda.amp.GradScaler(init_scale=65536.0, growth_factor=2.0, backoff_factor=0.5, growth_interval=2000, enabled=True) [source]
get_backoff_factor() [source]
Returns a Python float containing the scale backoff factor.
get_growth_factor() [source]
Returns a Python float containing the scale growth factor.
get_growth_interval() [source]
Returns a Python int containing the growth interval.
get_scale() [source]
Returns a Python float containing the current scale, or 1.0 if scaling is disabled. Warning get_scale() incurs a CPU-GPU sync.
is_enabled() [source]
Returns a bool indicating whether this instance is enabled.
load_state_dict(state_dict) [source]
Loads the scaler state. If this instance is disabled, load_state_dict() is a no-op. Parameters
state_dict (dict) – scaler state. Should be an object returned from a call to state_dict().
scale(outputs) [source]
Multiplies (‘scales’) a tensor or list of tensors by the scale factor. Returns scaled outputs. If this instance of GradScaler is not enabled, outputs are returned unmodified. Parameters
outputs (Tensor or iterable of Tensors) – Outputs to scale.
set_backoff_factor(new_factor) [source]
Parameters
new_factor (float) – Value to use as the new scale backoff factor.
set_growth_factor(new_factor) [source]
Parameters
new_factor (float) – Value to use as the new scale growth factor.
set_growth_interval(new_interval) [source]
Parameters
new_interval (int) – Value to use as the new growth interval.
state_dict() [source]
Returns the state of the scaler as a dict. It contains five entries:
"scale" - a Python float containing the current scale
"growth_factor" - a Python float containing the current growth factor
"backoff_factor" - a Python float containing the current backoff factor
"growth_interval" - a Python int containing the current growth interval
"_growth_tracker" - a Python int containing the number of recent consecutive unskipped steps. If this instance is not enabled, returns an empty dict. Note If you wish to checkpoint the scaler’s state after a particular iteration, state_dict() should be called after update().
step(optimizer, *args, **kwargs) [source]
step() carries out the following two operations: Internally invokes unscale_(optimizer) (unless unscale_() was explicitly called for optimizer earlier in the iteration). As part of the unscale_(), gradients are checked for infs/NaNs. If no inf/NaN gradients are found, invokes optimizer.step() using the unscaled gradients. Otherwise, optimizer.step() is skipped to avoid corrupting the params. *args and **kwargs are forwarded to optimizer.step(). Returns the return value of optimizer.step(*args, **kwargs). Parameters
optimizer (torch.optim.Optimizer) – Optimizer that applies the gradients.
args – Any arguments.
kwargs – Any keyword arguments. Warning Closure use is not currently supported.
unscale_(optimizer) [source]
Divides (“unscales”) the optimizer’s gradient tensors by the scale factor. unscale_() is optional, serving cases where you need to modify or inspect gradients between the backward pass(es) and step(). If unscale_() is not called explicitly, gradients will be unscaled automatically during step(). Simple example, using unscale_() to enable clipping of unscaled gradients: ...
scaler.scale(loss).backward()
scaler.unscale_(optimizer)
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
scaler.step(optimizer)
scaler.update()
Parameters
optimizer (torch.optim.Optimizer) – Optimizer that owns the gradients to be unscaled. Note unscale_() does not incur a CPU-GPU sync. Warning unscale_() should only be called once per optimizer per step() call, and only after all gradients for that optimizer’s assigned parameters have been accumulated. Calling unscale_() twice for a given optimizer between each step() triggers a RuntimeError. Warning unscale_() may unscale sparse gradients out of place, replacing the .grad attribute.
update(new_scale=None) [source]
Updates the scale factor. If any optimizer steps were skipped the scale is multiplied by backoff_factor to reduce it. If growth_interval unskipped iterations occurred consecutively, the scale is multiplied by growth_factor to increase it. Passing new_scale sets the scale directly. Parameters
new_scale (float or torch.cuda.FloatTensor, optional, default=None) – New scale factor. Warning update() should only be called at the end of the iteration, after scaler.step(optimizer) has been invoked for all optimizers used this iteration. | torch.amp#torch.cuda.amp.GradScaler |
get_backoff_factor() [source]
Returns a Python float containing the scale backoff factor. | torch.amp#torch.cuda.amp.GradScaler.get_backoff_factor |
get_growth_factor() [source]
Returns a Python float containing the scale growth factor. | torch.amp#torch.cuda.amp.GradScaler.get_growth_factor |
get_growth_interval() [source]
Returns a Python int containing the growth interval. | torch.amp#torch.cuda.amp.GradScaler.get_growth_interval |
get_scale() [source]
Returns a Python float containing the current scale, or 1.0 if scaling is disabled. Warning get_scale() incurs a CPU-GPU sync. | torch.amp#torch.cuda.amp.GradScaler.get_scale |
is_enabled() [source]
Returns a bool indicating whether this instance is enabled. | torch.amp#torch.cuda.amp.GradScaler.is_enabled |
load_state_dict(state_dict) [source]
Loads the scaler state. If this instance is disabled, load_state_dict() is a no-op. Parameters
state_dict (dict) – scaler state. Should be an object returned from a call to state_dict(). | torch.amp#torch.cuda.amp.GradScaler.load_state_dict |
scale(outputs) [source]
Multiplies (‘scales’) a tensor or list of tensors by the scale factor. Returns scaled outputs. If this instance of GradScaler is not enabled, outputs are returned unmodified. Parameters
outputs (Tensor or iterable of Tensors) – Outputs to scale. | torch.amp#torch.cuda.amp.GradScaler.scale |
set_backoff_factor(new_factor) [source]
Parameters
new_factor (float) – Value to use as the new scale backoff factor.
set_growth_factor(new_factor) [source]
Parameters
new_factor (float) – Value to use as the new scale growth factor.
set_growth_interval(new_interval) [source]
Parameters
new_interval (int) – Value to use as the new growth interval. | torch.amp#torch.cuda.amp.GradScaler.set_growth_interval |
state_dict() [source]
Returns the state of the scaler as a dict. It contains five entries:
"scale" - a Python float containing the current scale
"growth_factor" - a Python float containing the current growth factor
"backoff_factor" - a Python float containing the current backoff factor
"growth_interval" - a Python int containing the current growth interval
"_growth_tracker" - a Python int containing the number of recent consecutive unskipped steps. If this instance is not enabled, returns an empty dict. Note If you wish to checkpoint the scaler’s state after a particular iteration, state_dict() should be called after update(). | torch.amp#torch.cuda.amp.GradScaler.state_dict |
step(optimizer, *args, **kwargs) [source]
step() carries out the following two operations: Internally invokes unscale_(optimizer) (unless unscale_() was explicitly called for optimizer earlier in the iteration). As part of the unscale_(), gradients are checked for infs/NaNs. If no inf/NaN gradients are found, invokes optimizer.step() using the unscaled gradients. Otherwise, optimizer.step() is skipped to avoid corrupting the params. *args and **kwargs are forwarded to optimizer.step(). Returns the return value of optimizer.step(*args, **kwargs). Parameters
optimizer (torch.optim.Optimizer) – Optimizer that applies the gradients.
args – Any arguments.
kwargs – Any keyword arguments. Warning Closure use is not currently supported. | torch.amp#torch.cuda.amp.GradScaler.step |
unscale_(optimizer) [source]
Divides (“unscales”) the optimizer’s gradient tensors by the scale factor. unscale_() is optional, serving cases where you need to modify or inspect gradients between the backward pass(es) and step(). If unscale_() is not called explicitly, gradients will be unscaled automatically during step(). Simple example, using unscale_() to enable clipping of unscaled gradients: ...
scaler.scale(loss).backward()
scaler.unscale_(optimizer)
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
scaler.step(optimizer)
scaler.update()
Parameters
optimizer (torch.optim.Optimizer) – Optimizer that owns the gradients to be unscaled. Note unscale_() does not incur a CPU-GPU sync. Warning unscale_() should only be called once per optimizer per step() call, and only after all gradients for that optimizer’s assigned parameters have been accumulated. Calling unscale_() twice for a given optimizer between each step() triggers a RuntimeError. Warning unscale_() may unscale sparse gradients out of place, replacing the .grad attribute. | torch.amp#torch.cuda.amp.GradScaler.unscale_ |
update(new_scale=None) [source]
Updates the scale factor. If any optimizer steps were skipped the scale is multiplied by backoff_factor to reduce it. If growth_interval unskipped iterations occurred consecutively, the scale is multiplied by growth_factor to increase it. Passing new_scale sets the scale directly. Parameters
new_scale (float or torch.cuda.FloatTensor, optional, default=None) – New scale factor. Warning update() should only be called at the end of the iteration, after scaler.step(optimizer) has been invoked for all optimizers used this iteration. | torch.amp#torch.cuda.amp.GradScaler.update |
torch.cuda.can_device_access_peer(device, peer_device) [source]
Checks if peer access between two devices is possible. | torch.cuda#torch.cuda.can_device_access_peer |
torch.cuda.comm.broadcast(tensor, devices=None, *, out=None) [source]
Broadcasts a tensor to specified GPU devices. Parameters
tensor (Tensor) – tensor to broadcast. Can be on CPU or GPU.
devices (Iterable[torch.device, str or int], optional) – an iterable of GPU devices, among which to broadcast.
out (Sequence[Tensor], optional, keyword-only) – the GPU tensors to store output results. Note Exactly one of devices and out must be specified. Returns
If devices is specified,
a tuple containing copies of tensor, placed on devices.
If out is specified,
a tuple containing out tensors, each containing a copy of tensor. | torch.cuda#torch.cuda.comm.broadcast |
torch.cuda.comm.broadcast_coalesced(tensors, devices, buffer_size=10485760) [source]
Broadcasts a sequence of tensors to the specified GPUs. Small tensors are first coalesced into a buffer to reduce the number of synchronizations. Parameters
tensors (sequence) – tensors to broadcast. Must be on the same device, either CPU or GPU.
devices (Iterable[torch.device, str or int]) – an iterable of GPU devices, among which to broadcast.
buffer_size (int) – maximum size of the buffer used for coalescing Returns
A tuple containing copies of tensor, placed on devices. | torch.cuda#torch.cuda.comm.broadcast_coalesced |
torch.cuda.comm.gather(tensors, dim=0, destination=None, *, out=None) [source]
Gathers tensors from multiple GPU devices. Parameters
tensors (Iterable[Tensor]) – an iterable of tensors to gather. Tensor sizes in all dimensions other than dim have to match.
dim (int, optional) – a dimension along which the tensors will be concatenated. Default: 0.
destination (torch.device, str, or int, optional) – the output device. Can be CPU or CUDA. Default: the current CUDA device.
out (Tensor, optional, keyword-only) – the tensor to store gather result. Its sizes must match those of tensors, except for dim, where the size must equal sum(tensor.size(dim) for tensor in tensors). Can be on CPU or CUDA. Note destination must not be specified when out is specified. Returns
If destination is specified,
a tensor located on destination device, that is a result of concatenating tensors along dim.
If out is specified,
the out tensor, now containing results of concatenating tensors along dim. | torch.cuda#torch.cuda.comm.gather |
torch.cuda.comm.reduce_add(inputs, destination=None) [source]
Sums tensors from multiple GPUs. All inputs should have matching shapes, dtype, and layout. The output tensor will be of the same shape, dtype, and layout. Parameters
inputs (Iterable[Tensor]) – an iterable of tensors to add.
destination (int, optional) – a device on which the output will be placed (default: current device). Returns
A tensor containing an elementwise sum of all inputs, placed on the destination device. | torch.cuda#torch.cuda.comm.reduce_add |
torch.cuda.comm.scatter(tensor, devices=None, chunk_sizes=None, dim=0, streams=None, *, out=None) [source]
Scatters tensor across multiple GPUs. Parameters
tensor (Tensor) – tensor to scatter. Can be on CPU or GPU.
devices (Iterable[torch.device, str or int], optional) – an iterable of GPU devices, among which to scatter.
chunk_sizes (Iterable[int], optional) – sizes of chunks to be placed on each device. It should match devices in length and sum to tensor.size(dim). If not specified, tensor will be divided into equal chunks.
dim (int, optional) – A dimension along which to chunk tensor. Default: 0.
streams (Iterable[Stream], optional) – an iterable of Streams, among which to execute the scatter. If not specified, the default stream will be utilized.
out (Sequence[Tensor], optional, keyword-only) – the GPU tensors to store output results. Sizes of these tensors must match that of tensor, except for dim, where the total size must sum to tensor.size(dim). Note Exactly one of devices and out must be specified. When out is specified, chunk_sizes must not be specified and will be inferred from sizes of out. Returns
If devices is specified,
a tuple containing chunks of tensor, placed on devices.
If out is specified,
a tuple containing out tensors, each containing a chunk of tensor. | torch.cuda#torch.cuda.comm.scatter |
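A minimal sketch of scattering a CPU tensor across two visible GPUs and gathering the chunks back onto device 0 (the tensor and device indices are illustrative assumptions):

import torch
import torch.cuda.comm as comm

t = torch.arange(8.)
chunks = comm.scatter(t, devices=[0, 1])            # tuple of chunks, one per GPU
gathered = comm.gather(chunks, dim=0, destination=0)
print([c.device for c in chunks], gathered.device)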
torch.cuda.current_blas_handle() [source]
Returns cublasHandle_t pointer to current cuBLAS handle | torch.cuda#torch.cuda.current_blas_handle |
torch.cuda.current_device() [source]
Returns the index of a currently selected device. | torch.cuda#torch.cuda.current_device |
torch.cuda.current_stream(device=None) [source]
Returns the currently selected Stream for a given device. Parameters
device (torch.device or int, optional) – selected device. Returns the currently selected Stream for the current device, given by current_device(), if device is None (default). | torch.cuda#torch.cuda.current_stream |
torch.cuda.default_stream(device=None) [source]
Returns the default Stream for a given device. Parameters
device (torch.device or int, optional) – selected device. Returns the default Stream for the current device, given by current_device(), if device is None (default). | torch.cuda#torch.cuda.default_stream |
class torch.cuda.device(device) [source]
Context-manager that changes the selected device. Parameters
device (torch.device or int) – device index to select. It’s a no-op if this argument is a negative integer or None. | torch.cuda#torch.cuda.device |
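A minimal sketch (assuming at least two visible GPUs): tensors created inside the context land on the selected device, and the previous device is restored on exit:

import torch

with torch.cuda.device(1):
    a = torch.zeros(4, device="cuda")   # allocated on GPU 1
print(a.device)                          # cuda:1
print(torch.cuda.current_device())       # back to the previous device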
torch.cuda.device_count() [source]
Returns the number of GPUs available. | torch.cuda#torch.cuda.device_count |
class torch.cuda.device_of(obj) [source]
Context-manager that changes the current device to that of given object. You can use both tensors and storages as arguments. If a given object is not allocated on a GPU, this is a no-op. Parameters
obj (Tensor or Storage) – object allocated on the selected device. | torch.cuda#torch.cuda.device_of |
torch.cuda.empty_cache() [source]
Releases all unoccupied cached memory currently held by the caching allocator so that it can be used by other GPU applications and is visible in nvidia-smi. Note empty_cache() doesn’t increase the amount of GPU memory available for PyTorch. However, it may help reduce fragmentation of GPU memory in certain cases. See Memory management for more details about GPU memory management. | torch.cuda#torch.cuda.empty_cache
class torch.cuda.Event [source]
Wrapper around a CUDA event. CUDA events are synchronization markers that can be used to monitor the device’s progress, to accurately measure timing, and to synchronize CUDA streams. The underlying CUDA events are lazily initialized when the event is first recorded or exported to another process. After creation, only streams on the same device may record the event. However, streams on any device can wait on the event. Parameters
enable_timing (bool, optional) – indicates if the event should measure time (default: False)
blocking (bool, optional) – if True, wait() will be blocking (default: False)
interprocess (bool) – if True, the event can be shared between processes (default: False)
elapsed_time(end_event) [source]
Returns the time elapsed in milliseconds after the event was recorded and before the end_event was recorded.
classmethod from_ipc_handle(device, handle) [source]
Reconstruct an event from an IPC handle on the given device.
ipc_handle() [source]
Returns an IPC handle of this event. If not recorded yet, the event will use the current device.
query() [source]
Checks if all work currently captured by event has completed. Returns
A boolean indicating if all work currently captured by event has completed.
record(stream=None) [source]
Records the event in a given stream. Uses torch.cuda.current_stream() if no stream is specified. The stream’s device must match the event’s device.
synchronize() [source]
Waits for the event to complete. Waits until the completion of all work currently captured in this event. This prevents the CPU thread from proceeding until the event completes. Note This is a wrapper around cudaEventSynchronize(): see CUDA Event documentation for more info.
wait(stream=None) [source]
Makes all future work submitted to the given stream wait for this event. Use torch.cuda.current_stream() if no stream is specified. | torch.cuda#torch.cuda.Event |
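A minimal sketch of timing a GPU operation with events, assuming a CUDA device is available; the matrix sizes are illustrative:

import torch

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

x = torch.randn(4096, 4096, device="cuda")
start.record()                 # records in the current stream
y = x @ x
end.record()

torch.cuda.synchronize()       # wait for the work (and the events) to finish
print(f"matmul took {start.elapsed_time(end):.2f} ms")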
elapsed_time(end_event) [source]
Returns the time elapsed in milliseconds after the event was recorded and before the end_event was recorded. | torch.cuda#torch.cuda.Event.elapsed_time |
classmethod from_ipc_handle(device, handle) [source]
Reconstruct an event from an IPC handle on the given device. | torch.cuda#torch.cuda.Event.from_ipc_handle |
ipc_handle() [source]
Returns an IPC handle of this event. If not recorded yet, the event will use the current device. | torch.cuda#torch.cuda.Event.ipc_handle |
query() [source]
Checks if all work currently captured by event has completed. Returns
A boolean indicating if all work currently captured by event has completed. | torch.cuda#torch.cuda.Event.query |
record(stream=None) [source]
Records the event in a given stream. Uses torch.cuda.current_stream() if no stream is specified. The stream’s device must match the event’s device. | torch.cuda#torch.cuda.Event.record |
synchronize() [source]
Waits for the event to complete. Waits until the completion of all work currently captured in this event. This prevents the CPU thread from proceeding until the event completes. Note This is a wrapper around cudaEventSynchronize(): see CUDA Event documentation for more info. | torch.cuda#torch.cuda.Event.synchronize |
wait(stream=None) [source]
Makes all future work submitted to the given stream wait for this event. Use torch.cuda.current_stream() if no stream is specified. | torch.cuda#torch.cuda.Event.wait |
torch.cuda.get_arch_list() [source]
Returns the list of CUDA architectures this library was compiled for. | torch.cuda#torch.cuda.get_arch_list
torch.cuda.get_device_capability(device=None) [source]
Gets the cuda capability of a device. Parameters
device (torch.device or int, optional) – device for which to return the device capability. This function is a no-op if this argument is a negative integer. It uses the current device, given by current_device(), if device is None (default). Returns
the major and minor cuda capability of the device Return type
tuple(int, int) | torch.cuda#torch.cuda.get_device_capability |
torch.cuda.get_device_name(device=None) [source]
Gets the name of a device. Parameters
device (torch.device or int, optional) – device for which to return the name. This function is a no-op if this argument is a negative integer. It uses the current device, given by current_device(), if device is None (default). Returns
the name of the device Return type
str | torch.cuda#torch.cuda.get_device_name |
torch.cuda.get_device_properties(device) [source]
Gets the properties of a device. Parameters
device (torch.device or int or str) – device for which to return the properties of the device. Returns
the properties of the device Return type
_CudaDeviceProperties | torch.cuda#torch.cuda.get_device_properties |
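A minimal sketch that inspects the visible devices using get_device_properties() and get_device_capability(); the printed format is illustrative:

import torch

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    major, minor = torch.cuda.get_device_capability(i)
    print(f"cuda:{i} {props.name}, sm_{major}{minor}, "
          f"{props.total_memory / 1024**3:.1f} GiB")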
torch.cuda.get_gencode_flags() [source]
Returns the NVCC gencode flags this library was compiled with. | torch.cuda#torch.cuda.get_gencode_flags
torch.cuda.get_rng_state(device='cuda') [source]
Returns the random number generator state of the specified GPU as a ByteTensor. Parameters
device (torch.device or int, optional) – The device to return the RNG state of. Default: 'cuda' (i.e., torch.device('cuda'), the current CUDA device). Warning This function eagerly initializes CUDA. | torch.cuda#torch.cuda.get_rng_state |
torch.cuda.get_rng_state_all() [source]
Returns a list of ByteTensor representing the random number states of all devices. | torch.cuda#torch.cuda.get_rng_state_all |
torch.cuda.init() [source]
Initialize PyTorch’s CUDA state. You may need to call this explicitly if you are interacting with PyTorch via its C API, as Python bindings for CUDA functionality will not be available until this initialization takes place. Ordinary users should not need this, as all of PyTorch’s CUDA methods automatically initialize CUDA state on-demand. Does nothing if the CUDA state is already initialized. | torch.cuda#torch.cuda.init |
torch.cuda.initial_seed() [source]
Returns the current random seed of the current GPU. Warning This function eagerly initializes CUDA. | torch.cuda#torch.cuda.initial_seed |
torch.cuda.ipc_collect() [source]
Force collects GPU memory after it has been released by CUDA IPC. Note Checks if any sent CUDA tensors could be cleaned from memory. Force closes the shared memory file used for reference counting if there are no active counters. Useful when the producer process has stopped actively sending tensors and wants to release unused memory. | torch.cuda#torch.cuda.ipc_collect
torch.cuda.is_available() [source]
Returns a bool indicating if CUDA is currently available. | torch.cuda#torch.cuda.is_available |
torch.cuda.is_initialized() [source]
Returns whether PyTorch’s CUDA state has been initialized. | torch.cuda#torch.cuda.is_initialized |
torch.cuda.list_gpu_processes(device=None) [source]
Returns a human-readable printout of the running processes and their GPU memory use for a given device. This can be useful to display periodically during training, or when handling out-of-memory exceptions. Parameters
device (torch.device or int, optional) – selected device. Returns printout for the current device, given by current_device(), if device is None (default). | torch.cuda#torch.cuda.list_gpu_processes |
torch.cuda.manual_seed(seed) [source]
Sets the seed for generating random numbers for the current GPU. It’s safe to call this function if CUDA is not available; in that case, it is silently ignored. Parameters
seed (int) – The desired seed. Warning If you are working with a multi-GPU model, this function is insufficient to get determinism. To seed all GPUs, use manual_seed_all(). | torch.cuda#torch.cuda.manual_seed |
torch.cuda.manual_seed_all(seed) [source]
Sets the seed for generating random numbers on all GPUs. It’s safe to call this function if CUDA is not available; in that case, it is silently ignored. Parameters
seed (int) – The desired seed. | torch.cuda#torch.cuda.manual_seed_all |
torch.cuda.max_memory_allocated(device=None) [source]
Returns the maximum GPU memory occupied by tensors in bytes for a given device. By default, this returns the peak allocated memory since the beginning of this program. reset_peak_memory_stats() can be used to reset the starting point in tracking this metric. For example, these two functions can measure the peak allocated memory usage of each iteration in a training loop. Parameters
device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default). Note See Memory management for more details about GPU memory management. | torch.cuda#torch.cuda.max_memory_allocated |
torch.cuda.max_memory_cached(device=None) [source]
Deprecated; see max_memory_reserved(). | torch.cuda#torch.cuda.max_memory_cached |
torch.cuda.max_memory_reserved(device=None) [source]
Returns the maximum GPU memory managed by the caching allocator in bytes for a given device. By default, this returns the peak cached memory since the beginning of this program. reset_peak_memory_stats() can be used to reset the starting point in tracking this metric. For example, these two functions can measure the peak cached memory amount of each iteration in a training loop. Parameters
device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default). Note See Memory management for more details about GPU memory management. | torch.cuda#torch.cuda.max_memory_reserved |
torch.cuda.memory_allocated(device=None) [source]
Returns the current GPU memory occupied by tensors in bytes for a given device. Parameters
device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default). Note This is likely less than the amount shown in nvidia-smi since some unused memory can be held by the caching allocator and some context needs to be created on GPU. See Memory management for more details about GPU memory management. | torch.cuda#torch.cuda.memory_allocated |