NVIDIA Modulus

Bridging the Gap Between Physics and AI

What Is NVIDIA Modulus?

NVIDIA Modulus is an open-source AI physics framework created by NVIDIA, a leading manufacturer of GPU hardware and software for both gaming and professional markets. The framework is designed to bridge the gap between AI and physics by combining physics-based simulations with AI models.

NVIDIA Modulus enables developers, scientists, researchers, and businesses to incorporate AI-driven physics into their work. It does this by providing a flexible, customizable framework for integrating, training, and deploying AI physics models. Combining AI with physics helps solve problems in fluid dynamics, materials science, and climate modeling by providing tools to accurately simulate physical systems.

You can get Modulus from the official GitHub repository.

This is part of a series of articles about AI open source projects.


What Is NVIDIA Modulus Sym?

NVIDIA Modulus Sym is a deep learning framework that combines partial differential equations (PDEs) from the world of physics with AI.

There are three main types of machine learning and neural network-based analysis methods for physics: forward (physics-driven), data-driven, and hybrid approaches that combine physics with data assimilation. Modulus Sym supports all three, providing researchers and practitioners with APIs to help build and accelerate AI models for physics analysis.
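
To make the physics-driven approach concrete, here is a minimal, framework-agnostic PyTorch sketch of a physics-informed neural network (PINN) that learns the solution of the ODE du/dt = -u, u(0) = 1 (exact solution: exp(-t)) by penalizing the equation's residual. Modulus Sym wraps this pattern in higher-level APIs, so treat this as an illustration of the idea rather than Modulus code.

import torch

# Minimal PINN sketch (not Modulus code): learn u(t) satisfying
# du/dt = -u with u(0) = 1.
model = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    # Collocation points where the ODE residual is enforced
    t = torch.rand(128, 1, requires_grad=True)
    u = model(t)
    du_dt = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    residual_loss = ((du_dt + u) ** 2).mean()   # physics residual
    t0 = torch.zeros(1, 1)
    bc_loss = ((model(t0) - 1.0) ** 2).mean()   # initial condition
    loss = residual_loss + bc_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The data-driven approach replaces the residual term with a supervised loss against simulation or measurement data; the hybrid approach keeps both terms.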


Benefits of NVIDIA Modulus

AI Toolkit for Physics

The Modulus AI toolkit for physics helps integrate AI into your physics simulations. It's a set of tools that includes everything from data generation and model training to inference and deployment.

This toolkit leverages neural networks to model complex physical phenomena, enabling simulations that were previously infeasible due to computational constraints. By incorporating machine learning algorithms, Modulus allows users to create models that can predict physical behaviors with high accuracy.

Learn more in our detailed guide to the NVIDIA container toolkit (coming soon).
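
To illustrate the data-driven side of this workflow, the following sketch fits a small surrogate network to precomputed simulation data. The tensors here are random stand-ins for real solver output, and the shapes are arbitrary choices for illustration.

import torch

# Data-driven surrogate sketch: the inputs/targets below stand in for
# data that would normally come from a solver or experiment.
inputs = torch.rand(1024, 3)    # e.g. spatial coordinates / parameters
targets = torch.rand(1024, 1)   # e.g. simulated field values

surrogate = torch.nn.Sequential(
    torch.nn.Linear(3, 128), torch.nn.SiLU(),
    torch.nn.Linear(128, 128), torch.nn.SiLU(),
    torch.nn.Linear(128, 1),
)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(inputs, targets),
    batch_size=64, shuffle=True,
)

for epoch in range(10):
    for x, y in loader:
        loss = torch.nn.functional.mse_loss(surrogate(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()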

Near-Real-Time Inference

Modulus can provide near-real-time inference, meaning that the framework processes data and makes predictions almost instantaneously. This is useful for applications that require quick decision-making or real-time responses.

With fast inference, users can react swiftly to changes in their simulations. This can lead to more dynamic and responsive simulations, enhancing their realism and accuracy.
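
As a rough illustration of why this is possible, the snippet below times a single forward pass of a small stand-in network. A trained surrogate answers in one such pass, measured in milliseconds, rather than requiring a full solver run.

import time
import torch

# Latency sketch: the untrained model below is a stand-in for a
# trained surrogate.
model = torch.nn.Sequential(torch.nn.Linear(3, 128), torch.nn.SiLU(),
                            torch.nn.Linear(128, 1))
model.eval()
x = torch.rand(1, 3)

with torch.no_grad():
    model(x)                                  # warm-up pass
    start = time.perf_counter()
    prediction = model(x)                     # the actual timed inference
    elapsed_ms = (time.perf_counter() - start) * 1000

print(f"forward pass took {elapsed_ms:.3f} ms")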

Customize Models

The framework offers extensive customization options, enabling users to tailor AI physics models according to specific requirements. This flexibility means that developers can define their neural network architectures, select training datasets, and adjust the learning process to best suit their simulation needs.

Customization extends to integrating with existing workflows, where Modulus can be used alongside traditional computational physics tools, allowing for a hybrid approach that leverages the strengths of both AI and numerical methods.
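
As a concrete example of this flexibility, here is a minimal sketch of a custom Fourier-feature MLP, one of the architecture families used in several Modulus Sym examples. The frequency count and layer sizes are arbitrary choices, and this is not Modulus's own implementation.

import torch

# Custom-architecture sketch: inputs are projected through fixed random
# frequencies and lifted to sin/cos features before the MLP.
class FourierFeatureMLP(torch.nn.Module):
    def __init__(self, in_dim=2, out_dim=1, n_freqs=32, hidden=128):
        super().__init__()
        # Fixed random projection used to build the Fourier features
        self.register_buffer("B", torch.randn(in_dim, n_freqs) * 2.0)
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2 * n_freqs, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        proj = x @ self.B
        features = torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)
        return self.net(features)

model = FourierFeatureMLP()
print(model(torch.rand(8, 2)).shape)  # torch.Size([8, 1])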

Scale with NVIDIA AI

Users of NVIDIA Modulus can leverage commercial NVIDIA AI offerings to scale their simulations and handle larger, more complex systems. This scalability is crucial for tackling large-scale simulation problems, such as those encountered in weather forecasting, aerospace engineering, and large-scale material analysis.
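
The sketch below shows the generic data-parallel pattern that this kind of scaling builds on, using PyTorch's DistributedDataParallel. It is a minimal single-node illustration rather than Modulus's own distributed utilities, and the script name in the launch comment is hypothetical.

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Data-parallel training sketch. Launch with, e.g.:
#   torchrun --nproc_per_node=4 train_ddp.py
# (train_ddp.py is a hypothetical script name)
def main():
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    device = torch.device(f"cuda:{rank % torch.cuda.device_count()}")

    model = torch.nn.Linear(3, 1).to(device)
    model = DDP(model, device_ids=[device.index])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for _ in range(100):
        x = torch.rand(64, 3, device=device)
        y = torch.rand(64, 1, device=device)
        loss = torch.nn.functional.mse_loss(model(x), y)
        opt.zero_grad()
        loss.backward()   # gradients are synchronized across ranks here
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()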

Quick Tutorial: Installing NVIDIA Modulus and Accessing Examples

Let’s see how to install and start working with NVIDIA Modulus. The commands below are based on the official documentation.

Installing Modulus with Docker Image

The easiest way to get started with Modulus is to use the NVIDIA Modulus NGC Container. This container comes fully equipped with all Modulus software and necessary dependencies.

Step 1: Set up Docker engine

Before you begin working with the Modulus container, you need Docker Engine installed on your machine.

NVIDIA recommends using the NVIDIA Docker toolkit version 1.0.4 or higher. The toolkit is compatible with most Debian-based systems. Install it using this command:


sudo apt-get install nvidia-docker2

Step 2: Obtain the Modulus container

To get the Modulus Docker container via NGC, use the following command, replacing <tag> with the release tag you want to pull (available tags are listed on the container's NGC catalog page):


docker pull nvcr.io/nvidia/modulus/modulus:<tag>

To initiate a shell session inside the container, input this command:


docker run --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 \
           --runtime nvidia -it --rm nvcr.io/nvidia/modulus/modulus:<tag> bash

You can mount your current directory within the Docker container as follows:


docker run --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 \
           --runtime nvidia -v ${PWD}:/workspace \
           -it --rm nvcr.io/nvidia/modulus/modulus:<tag> bash

NVIDIA Modulus Examples

Modulus comes with a range of examples that illustrate how to use the library for different physics analysis use cases.

You can find the full set of examples, along with the computational model each one uses, in the official Modulus repo.

Modulus Sym Examples

The Modulus Sym library comes with its own set of examples, which can help you learn the framework. The following list summarizes them by use case, along with the computational model(s) each one uses.

Introductory
  • Lid Driven Cavity Flow: Fully Connected MLP PINN
  • Anti-derivative: Data and Physics informed DeepONet
  • Darcy Flow: FNO, AFNO, PINO
  • Spring-mass system ODE: Fully Connected MLP PINN
  • Surface PDE: Fully Connected MLP PINN

Turbulence
  • Taylor-Green: Fully Connected MLP PINN
  • Turbulent channel: Fourier Feature MLP PINN
  • Turbulent super-resolution: Super Resolution Network, Pix2Pix

Electromagnetics
  • Waveguide: Fourier Feature MLP PINN

Solid Mechanics
  • Plane displacement: Fully Connected MLP PINN, VPINN

Design Optimization
  • 2D Chip: Fully Connected MLP PINN
  • 3D Three Fin Heatsink: Fully Connected MLP PINN
  • FPGA Heatsink: Multiple Models (including Fourier Feature MLP PINN, SiReNs, etc.)
  • Limerock Industrial Heatsink: Fourier Feature MLP PINN

Geophysics
  • Reservoir simulation: FNO, PINO
  • Seismic wave: Fully Connected MLP PINN
  • Wave equation: Fully Connected MLP PINN

Healthcare
  • Aneurysm modeling using STL geometry: Fully Connected MLP PINN

Managing AI Infrastructure with Run:ai

As an AI developer, you will need to manage large-scale computing architecture to train and deploy AI models. Run:ai automates resource management and orchestration for AI infrastructure. With Run:ai, you can automatically run as many compute-intensive experiments as needed.

Here are some of the capabilities you gain when using Run:ai:

  • Advanced visibility—create an efficient pipeline of resource sharing by pooling GPU compute resources.
  • No more bottlenecks—you can set up guaranteed quotas of GPU resources, to avoid bottlenecks and optimize billing.
  • A higher level of control—Run:ai enables you to dynamically change resource allocation, ensuring each job gets the resources it needs at any given time.

Run:ai simplifies machine learning infrastructure pipelines, helping data scientists accelerate their productivity and the quality of their models.

Learn more about the Run:ai GPU virtualization platform.