The Basics and a Quick Tutorial


NVIDIA NGC is a comprehensive and powerful cloud-based platform that offers a diverse range of GPU-optimized software for deep learning and high-performance computing. It provides a catalog of GPU-accelerated containers for AI, machine learning, and HPC that are optimized, tested, and ready to run on supported NVIDIA GPUs, both on-premises and in the cloud.

A key advantage of the platform lies in its ability to harness the power of NVIDIA GPUs. With its extensive library of pre-configured containers, NVIDIA NGC eliminates the complexities associated with setting up software, enabling users to focus more on building solutions and less on operations. Furthermore, NVIDIA NGC is integrated with the major cloud service providers, making it easier to run machine learning workloads in the cloud.

In essence, NVIDIA NGC is a platform that brings together the hardware, software, and services necessary to drive AI innovation.

This is part of a series of articles about NVIDIA A100.

What Is the NVIDIA PyTorch NGC Container?

The NVIDIA PyTorch NGC Container is a GPU-optimized container specifically tailored for running PyTorch, one of the most popular deep learning frameworks. Tightly integrated with the NVIDIA hardware and software stack, this container is designed to provide users with a streamlined, efficient, and ready-to-use PyTorch environment. Pre-configured with essential libraries and dependencies, the NVIDIA PyTorch NGC Container ensures maximum compatibility and performance on NVIDIA GPUs.

At its core, the container provides the regular open source PyTorch framework, enhancing its native capabilities with GPU acceleration. This makes it ideal for training complex neural networks, executing large-scale simulations, and processing vast datasets. Additionally, it is continuously updated to include the latest version of PyTorch and related libraries, ensuring users always have access to the latest features and security patches.

You can get the NVIDIA PyTorch NGC container from the NGC catalog.

Benefits of Using PyTorch through NVIDIA NGC

Optimized Performance

NVIDIA has optimized PyTorch on its platform to take full advantage of the GPU architecture. This means that when developers use NGC PyTorch, they can expect faster training times and improved computational efficiency, which are critical for large-scale machine learning projects.

Moreover, NVIDIA NGC PyTorch comes with support for mixed precision training. This allows developers to use both 16-bit and 32-bit floating-point types during training, thereby reducing memory usage and improving the speed of computations.
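The memory saving from mixed precision follows directly from element sizes: IEEE 754 half precision uses 2 bytes per value versus 4 bytes for single precision. A minimal sketch using only the Python standard library; the tensor shape below is made up purely for illustration:

```python
import struct

# Python's struct module exposes both IEEE 754 float formats:
# 'e' = half precision (16-bit), 'f' = single precision (32-bit)
FP16_BYTES = struct.calcsize('e')  # 2
FP32_BYTES = struct.calcsize('f')  # 4

def activation_mb(num_elements: int, bytes_per_element: int) -> float:
    """Memory footprint of a tensor in megabytes."""
    return num_elements * bytes_per_element / 1024**2

# A hypothetical activation tensor: batch of 64, 1024x1024 feature map
elements = 64 * 1024 * 1024
print(activation_mb(elements, FP32_BYTES))  # 256.0 (MB in fp32)
print(activation_mb(elements, FP16_BYTES))  # 128.0 (MB in fp16)
```

In practice, PyTorch's automatic mixed precision (torch.cuda.amp) chooses per-operation which precision to use and keeps fp32 master weights, so the real-world saving is somewhat less than a clean halving.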

Easy Access to Pre-Built Containers

NGC provides a library of pre-built containers, optimized for NVIDIA GPUs and ready to run out of the box. This means that developers can get started with their AI projects right away, without spending time setting up their environment. PyTorch users can also pull containers for the other software components they need as part of their infrastructure.

NGC containers come with all the necessary dependencies and libraries, ensuring that developers have everything they need to get started. This not only simplifies the process of setting up a development environment but also ensures consistency across different stages of the development lifecycle.

Access to Pre-Trained Models and Workflows

With NGC PyTorch, developers also get access to a wide range of pre-trained models and workflows, trained on vast amounts of data, which can be used as a starting point for many AI projects. This can save developers a lot of time and resources that would otherwise be spent training models from scratch.

Furthermore, these pre-trained models and workflows come with detailed documentation and examples, making it easier for developers to understand how to use them and get PyTorch machine learning projects off the ground.

Integration with Other NVIDIA Technologies

The NGC PyTorch container is also integrated with other NVIDIA technologies, giving developers access to a complete ecosystem for AI development. For instance, developers can use NVIDIA TensorRT to optimize their models for inference on NVIDIA GPUs. Similarly, they can use NVIDIA DeepStream for video analytics applications.

Other Machine Learning Frameworks on NGC

TensorFlow on NGC

TensorFlow, another leading deep learning framework, is also available as an NVIDIA NGC container. The TensorFlow NGC container is GPU-accelerated and optimized for NVIDIA's hardware. It provides an efficient, scalable, and reliable environment for developing and deploying TensorFlow applications, from research prototypes to enterprise-level solutions.

MXNet on NGC

MXNet, known for its flexibility and efficiency in scaling across multiple GPUs, is another framework available in the NVIDIA NGC catalog. The MXNet container offers a comprehensive, GPU-optimized environment that integrates seamlessly with NVIDIA's hardware and software ecosystem, enabling developers and data scientists to accelerate their model training and inference tasks.


JAX on NGC

JAX, an emerging framework that combines NumPy's ease of use with the ability to execute operations on GPUs, is also part of NVIDIA's NGC offerings. The JAX NGC container maximizes the performance of JAX on NVIDIA GPUs. With built-in automatic differentiation, it's an excellent choice for researchers looking to experiment with new models and algorithms.

Quick Tutorial: Getting Started with the NGC PyTorch Docker Container

Step 1. Ensure System Requirements

You'll need a system with the NVIDIA Container Toolkit installed.

Your system should also have a GPU that's compatible with the CUDA toolkit. The CUDA toolkit is a software layer that allows direct access to your GPU's virtual instruction set and parallel computational elements. You do not need to install the CUDA toolkit yourself; it is provided as part of the NGC container, so only a compatible NVIDIA driver is required on the host.

Lastly, your system should have enough storage space for the PyTorch container images. The exact amount varies, but a safe estimate is around 10 GB of free storage.
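A quick, best-effort way to sanity-check these prerequisites from Python. This sketch only looks for the relevant binaries on PATH and measures free disk space; it cannot confirm that the NVIDIA Container Toolkit itself is configured correctly, and the 10 GB threshold is this article's rough estimate, not an official figure:

```python
import shutil

def check_prerequisites(min_free_gb: float = 10.0) -> dict:
    """Best-effort local checks before pulling NGC containers."""
    free_bytes = shutil.disk_usage("/").free
    return {
        # docker CLI present on PATH?
        "docker_on_path": shutil.which("docker") is not None,
        # nvidia-smi present, i.e. an NVIDIA driver is likely installed?
        "nvidia_smi_on_path": shutil.which("nvidia-smi") is not None,
        # free disk space for container images
        "free_gb": free_bytes / 1024**3,
        "enough_space": free_bytes / 1024**3 >= min_free_gb,
    }

print(check_prerequisites())
```

If any check comes back False, resolve it before moving on; the docker pull in the next steps will fail without the Docker CLI and sufficient disk space.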

Step 2. Register with NVIDIA NGC

Registration is straightforward. Visit the NGC catalog, click Welcome Guest at the top right, then select Sign In / Sign Up. Fill in the necessary details and create your account.

Once registered, you'll have access to a wealth of resources. These include various containers for different frameworks (including PyTorch), pre-trained models, and software development kits.

Please note that you can access much of the content on NVIDIA NGC even without registering.

Step 3. Accessing NVIDIA NGC PyTorch Containers

Now that you're registered with NVIDIA NGC, let's move on to accessing the PyTorch container.

To access the containers, log in to your NGC account and navigate to the Containers section. Here, you'll see a list of all available containers. Look for the PyTorch container and click on it. This brings up a page with more information about the container, including its version and a link to the documentation.

To pull the container, use the docker pull command followed by the NGC registry path for the PyTorch container:

docker pull nvcr.io/nvidia/pytorch:xx.xx-py3

Replace xx.xx with the release tag you want; the available tags are listed on the container's NGC page. This command downloads the container image to your system.

Step 4. Run the PyTorch Container

With the PyTorch container downloaded, it's time to run it. Here is the command for Docker 19.03 and newer:

docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:xx.xx-py3

A few comments about this command:

  • Replace xx.xx with the container version you pulled.
  • The --gpus option specifies which GPUs to expose to the container; all makes every GPU on the host visible.
  • The -it flags attach an interactive terminal so you can work inside the container.
  • The --rm option automatically removes the container when it exits.
  • PyTorch data loaders use shared memory, so NVIDIA recommends adding --ipc=host (or increasing --shm-size) when running with multiple data-loading workers.

Running the container starts a new PyTorch session, allowing you to execute PyTorch scripts and perform machine learning tasks. The PyTorch session runs inside the container, isolated from your main system. This isolation ensures that your system remains clean and unaffected by any changes made within the container.
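Once inside the container, you can confirm that PyTorch sees the GPU. The container ships with PyTorch preinstalled; the snippet below guards the import so it also degrades gracefully if you run it on a host without PyTorch:

```python
import importlib.util

def gpu_status() -> str:
    """Report whether PyTorch is present and can see a CUDA GPU."""
    if importlib.util.find_spec("torch") is None:
        return "pytorch not installed"
    import torch
    if torch.cuda.is_available():
        # Inside a correctly launched NGC container, this branch
        # reports the name of the first visible GPU.
        return "cuda available: " + torch.cuda.get_device_name(0)
    return "pytorch installed, no cuda device visible"

print(gpu_status())
```

If the output reports no visible CUDA device inside the container, double-check that you passed --gpus all to docker run and that the NVIDIA driver is installed on the host.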

NVIDIA NGC PyTorch with Run:ai

In the dynamic world of data science and deep learning, having the right tools and resources at your disposal can make all the difference. As we've seen, the NVIDIA NGC platform and its GPU-optimized containers put cutting-edge tools within reach. Achieving peak performance and efficiency in your AI projects also depends on effectively managing and orchestrating your GPU resources, a task made seamless with Run:ai.

Together, Run:ai and the NVIDIA NGC platform let you supercharge your machine learning infrastructure, effortlessly managing GPU resources and orchestrating your experiments.

Here are some of the capabilities you can unlock with Run:ai:

Dynamic Resource Allocation: Allocate computational power on demand, ensuring that each job gets the resources it requires.

Efficient Resource Sharing: Advanced visibility and scheduling let you create an efficient pipeline for resource sharing by pooling GPUs, ensuring that your AI experiments run smoothly without resource contention.

Guaranteed GPU Quotas: Set up guaranteed GPU resource quotas to eliminate bottlenecks and optimize costs. No more waiting in line for GPU access – your experiments get the resources they need when they need them.

Run:ai not only streamlines resource management but also gives data scientists more time to experiment, iterate, and push the boundaries of their models. It serves as a catalyst for data science teams, accelerating productivity and helping you achieve your organization's AI goals.

Learn more about the transformative capabilities of the Run:ai GPU virtualization platform.