PyTorch GPU

AI & Machine Learning Guide

Working with CUDA in PyTorch

PyTorch is an open source machine learning framework that enables you to perform scientific and tensor computations. You can use PyTorch to speed up deep learning with GPUs. PyTorch offers a simple interface, dynamic computational graphs, and CUDA support, including asynchronous execution on the GPU.

In this article, you will learn:

  • What Is PyTorch?
  • PyTorch CUDA Support
  • How to Use CUDA with PyTorch
  • Simplified PyTorch GPU Management With Run:AI

What Is PyTorch?

PyTorch is an open source, Python-based machine learning framework. It enables you to perform scientific and tensor computations with the aid of graphics processing units (GPUs). You can use it to develop and train deep learning neural networks using automatic differentiation (a technique that computes exact derivatives automatically, at a small constant-factor cost over the original computation).

Key features of PyTorch include:

  • Simple interface—includes an easy-to-use API that can be used with Python, C++, or Java.
  • Pythonic in nature—integrates smoothly with the Python data science stack and enables you to leverage Python services and functionalities.
  • Computational graphs—includes capabilities for dynamic computational graphs that you can customize during runtime.

If you are also working with Keras and want to leverage GPUs, check out our article about Keras GPU. Also see our article reviewing the best GPUs for deep learning.

PyTorch CUDA Support

CUDA is a parallel computing platform and programming model developed by NVIDIA. It enables you to perform compute-intensive operations faster by parallelizing tasks across GPUs. CUDA is the dominant API used for deep learning, although other options, such as OpenCL, are available. PyTorch provides support for CUDA through the torch.cuda package.

Tensor creation and use

The torch.cuda package keeps track of the currently selected GPU, and any CUDA tensors you create are automatically allocated on that device. Once a tensor is allocated, you can perform operations on it, and the results are placed on the same device.
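The following minimal sketch, assuming at least one CUDA-capable GPU is visible, creates a tensor on the current GPU and shows that the result of an operation lands on the same device.

import torch

if torch.cuda.is_available():
    x = torch.ones(3, 3, device="cuda")  # allocated on the current GPU
    y = x * 2                            # computed and stored on the same GPU
    print(y.device)                      # e.g. cuda:0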

By default, PyTorch does not allow cross-GPU operations: you cannot combine tensors that live on different devices. The exceptions are copy_() and copy-like methods such as to() and cuda(). To launch operations across tensors distributed over several GPUs, you must first enable peer-to-peer memory access.
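As a sketch, assuming a machine with at least two visible GPUs, the copy-like methods look like this:

import torch

if torch.cuda.device_count() >= 2:
    t0 = torch.randn(2, 2, device="cuda:0")
    t1 = t0.to("cuda:1")                    # copies the tensor to the second GPU
    t2 = torch.empty(2, 2, device="cuda:1")
    t2.copy_(t0)                            # explicit cross-device copy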

Asynchronous execution

GPU operations are asynchronous by default, which allows a larger number of computations to be performed in parallel. Asynchronous execution is generally invisible to the user because PyTorch automatically synchronizes data copied between the CPU and GPU, or between GPUs. In addition, each device executes operations in the order they were queued, so the results are the same as if the computations were synchronous.

If you must use synchronous operations, you can force this setting with the CUDA_LAUNCH_BLOCKING=1 environment variable. For example, you may want to do this if you are seeing errors on your GPUs. Synchronous execution ensures that errors are reported when they occur and makes it easier to identify which request originated the error.
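One way to set this variable from inside a script, assuming it runs before any CUDA work is started, is shown below.

import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # must be set before CUDA is initialized

import torch  # imported afterwards so the setting takes effect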

Another case where you should be mindful of asynchronous execution is time measurement: with async operations, naive measurements won’t be accurate. To work around this while leaving async execution enabled, you can call torch.cuda.synchronize() before measuring, or use torch.cuda.Event to record times.
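A minimal timing sketch, assuming a CUDA device is available, using torch.cuda.Event:

import torch

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

x = torch.randn(1000, 1000, device="cuda")
start.record()
y = x @ x                        # asynchronous GPU work
end.record()
torch.cuda.synchronize()         # wait until the recorded events complete
print(start.elapsed_time(end))   # elapsed time in milliseconds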

CUDA streams

CUDA streams are linear execution sequences on specific GPUs. Each GPU has a default stream, and operations are placed on it unless you specify otherwise. Within a stream, operations are serialized and executed in the order they were created. Operations issued on different streams, however, can execute concurrently in any relative order, unless you use explicit synchronization methods such as synchronize() or wait_stream().

Keep in mind that as long as the current stream is the default stream, PyTorch handles the necessary synchronization when data is copied. If you are using non-default streams, it is your responsibility to perform this synchronization.
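The sketch below, assuming a single CUDA device, queues work on a non-default stream and makes the default stream wait for it before using the result.

import torch

side = torch.cuda.Stream()                     # a non-default stream
x = torch.randn(1000, 1000, device="cuda")     # created on the default stream

side.wait_stream(torch.cuda.current_stream())  # make the side stream wait for x
with torch.cuda.stream(side):
    y = x @ x                                  # queued on the side stream

torch.cuda.current_stream().wait_stream(side)  # wait for y before using it
z = y + 1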

How to Use CUDA with PyTorch

There are a few basic commands you should know to get started with PyTorch and CUDA. The most basic of these commands enables you to verify that you have the required CUDA libraries and NVIDIA drivers, and that you have an available GPU to work with. You can verify this with the following command:

torch.cuda.is_available()

Assuming this command returns True, you can continue with the following operations.
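Beyond is_available(), a short sketch like the following can confirm how many GPUs PyTorch sees and which devices they are:

import torch

print(torch.cuda.is_available())          # True if a driver and GPU are present
print(torch.cuda.device_count())          # number of visible GPUs
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the first GPU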

Moving tensors with the to() function

Every tensor you create has a to() method. This method moves the tensor to the device you specify, either the CPU or a GPU. When calling it, you pass a torch.device object (or an equivalent device string) as input. This object can be:

  • cpu
  • cuda:{GPU index}

When you initialize a tensor, it is usually created directly on the CPU. You can then move it to the GPU when you need to speed up calculations. The following code block shows how to assign this placement.

import torch

if torch.cuda.is_available():
    dev = "cuda:0"
else:
    dev = "cpu"

device = torch.device(dev)
a = torch.zeros(4, 3)    # created on the CPU by default
a = a.to(device)         # moved to the GPU if one is available

Moving tensors with the cuda() function

You can also use cuda() to place tensors. This function takes an optional index of the GPU you wish to use; if you omit it, the current CUDA device (GPU 0 by default) is used. With this function, you can place an entire network on a single device. You can see an example code block accomplishing this below.

clf = myNetwork()    # myNetwork is a user-defined nn.Module
clf = clf.cuda(0)    # equivalent to clf.to(torch.device("cuda:0"))

Make sure to use the same device for tensors

Although it’s useful to be able to specify which GPU to use for each tensor, you don’t want to move all of your tensors manually. Instead, try to create tensors directly on the target device. This prevents cross-device transfers and the time they cost.

To keep new tensors on the right device, you can use the Tensor.get_device() method. This method is only supported for CUDA tensors and returns the index of the GPU the tensor resides on. You can then use this index to direct placement for new tensors. The following code shows how this method is used.

# make sure t2 is on the same device as t1 (an existing CUDA tensor)
a = t1.get_device()                            # index of the GPU holding t1
t2 = torch.zeros(t1.shape).to(torch.device("cuda", a))

Another option is to set the default CUDA device with torch.cuda.set_device(), so that new CUDA tensors are created on that GPU.

torch.cuda.set_device({GPU ID})

Simplified PyTorch GPU Management With Run:AI

Run:AI automates resource management and workload orchestration for machine learning infrastructure. With Run:AI, you can automatically run as many compute intensive experiments as needed.

Here are some of the capabilities you gain when using Run:AI:

  • Advanced visibility—create an efficient pipeline of resource sharing by pooling GPU compute resources.
  • No more bottlenecks—you can set up guaranteed quotas of GPU resources, to avoid bottlenecks and optimize billing.
  • A higher level of control—Run:AI enables you to dynamically change resource allocation, ensuring each job gets the resources it needs at any given time.

Run:AI simplifies machine learning infrastructure pipelines, helping data scientists accelerate their productivity and the quality of their models.

Learn more about the Run:AI GPU virtualization platform.

See Our Additional Guides on Key Artificial Intelligence Infrastructure Topics

We have authored in-depth guides on several other artificial intelligence infrastructure topics that can also be useful as you explore the world of deep learning GPUs.

MLOps

In today’s highly competitive economy, enterprises are looking to Artificial Intelligence in general, and Machine and Deep Learning in particular, to transform big data into actionable insights that can help them better address their target audiences, improve their decision-making processes, and streamline their supply chains and production processes, to mention just a few of the many use cases out there. In order to stay ahead of the curve and capture the full value of ML, however, companies must strategically embrace MLOps.


Kubernetes and AI

This guide explains the Kubernetes architecture for AI workloads and how K8s came to be used inside many companies. There are specific considerations when implementing Kubernetes to orchestrate AI workloads. Finally, the guide addresses the shortcomings of Kubernetes when it comes to scheduling and orchestrating deep learning workloads, and how you can address those shortfalls.
