import torch

# Set the device to the GPU if available, else the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)
print()
# Additional info when using CUDA
if device.type == 'cuda':
    print(torch.cuda.get_device_name(0))
    print('Memory Usage:')
    print('Allocated:', round(torch.cuda.memory_allocated(0)/1024**3, 1), 'GB')
    print('Cached:   ', round(torch.cuda.memory_reserved(0)/1024**3, 1), 'GB')
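    # Optional extra (not in the original snippet; uses the standard
    # torch.cuda.get_device_properties call): the card's total memory, for comparison
    print('Total:    ', round(torch.cuda.get_device_properties(0).total_memory/1024**3, 1), 'GB')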
## Check the torch version (e.g. v1.11)
print(torch.__version__)  # in a notebook or interactive session
## or from the command line:
python -c "import torch; print(torch.__version__)"
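Closely related when debugging a GPU setup: the CUDA and cuDNN versions this torch build was compiled against. A minimal sketch using standard torch attributes (not part of the original snippet):
## Which CUDA / cuDNN does this torch build use?
print(torch.version.cuda)               # e.g. '11.3', or None on a CPU-only build
print(torch.backends.cudnn.version())   # e.g. 8200, or None if cuDNN is unavailable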
## To move tensors to the respective device:
torch.rand(10).to(device)
## To create a tensor directly on the device:
torch.rand(10, device=device)
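The same .to(device) pattern applies to modules, which moves every parameter and buffer. A minimal sketch with a throwaway nn.Linear model (the model and input names are purely illustrative):
## To move a model and its inputs to the device:
import torch.nn as nn
model = nn.Linear(10, 2).to(device)    # parameters now live on `device`
x = torch.rand(4, 10, device=device)   # inputs must be on the same device
out = model(x)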
## Does PyTorch see any GPUs?
torch.cuda.is_available()
## Are tensors stored on the GPU by default?
torch.rand(10).device
## Set the default tensor type to CUDA:
torch.set_default_tensor_type(torch.cuda.FloatTensor)
## Is this tensor a GPU tensor?
my_tensor.is_cuda
## Is this model stored on the GPU?
all(p.is_cuda for p in my_model.parameters())
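Putting those checks together, here is a small self-contained diagnostic sketch; the function name and the example model are made up for illustration, but every call in it is a standard PyTorch API:
import torch
import torch.nn as nn

def gpu_report(model=None, tensor=None):
    # Summarize the GPU-related state of the current process
    print('CUDA available:', torch.cuda.is_available())
    print('GPU count:     ', torch.cuda.device_count())
    if tensor is not None:
        print('Tensor on GPU: ', tensor.is_cuda)
    if model is not None:
        print('Model on GPU:  ', all(p.is_cuda for p in model.parameters()))

gpu_report(model=nn.Linear(10, 2).to(device), tensor=torch.rand(10, device=device))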