GPU Abstraction Layer

Unique GPU abstraction capabilities “virtualize” all available GPU resources and ensure that users can easily access GPU fractions, multiple GPUs, or clusters of GPUs.

Thin GPU Provisioning

Dynamically provision GPU resources so that the workloads that need GPUs actually get them.

Fractional GPU

Allow any GPU resource to be shared across multiple workloads, optimizing GPU utilization.

Job Swapping

Seamlessly swap workloads that have been allocated the same GPU resources.

NVIDIA MIG

Dynamic and automated partitioning for NVIDIA MIG-capable GPUs.


Thin GPU Provisioning

With Thin GPU Provisioning, whenever a running workload is not utilizing its allocated GPUs, those resources can be provisioned and allocated to a different workload. This innovation is similar to thin provisioning used in storage systems. Run:AI makes this technology available for AI workloads on GPU resources, allowing for optimal GPU utilization. Data scientists are removed from the details of scheduling and provisioning, as the Run:AI platform abstracts them away from their day-to-day work.
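
To make the idea concrete, the toy sketch below illustrates the thin-provisioning concept of lending idle GPU allocations to active workloads. It is not Run:AI's actual scheduler; the Workload class and the lend_idle_gpus function are hypothetical names used only for illustration.

```python
# Conceptual sketch of the thin-provisioning idea, not Run:AI's actual
# scheduler. The Workload class and lend_idle_gpus function are hypothetical
# names used only to illustrate lending idle GPU allocations to active jobs.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    allocated_gpus: int     # GPUs guaranteed to this workload
    active: bool = True     # is the workload currently using its GPUs?
    borrowed_gpus: int = 0  # GPUs temporarily lent from idle workloads

def lend_idle_gpus(workloads: list[Workload]) -> None:
    """Distribute GPUs held by idle workloads among the active ones."""
    idle_pool = sum(w.allocated_gpus for w in workloads if not w.active)
    active = [w for w in workloads if w.active]
    while idle_pool > 0 and active:
        for w in active:
            if idle_pool == 0:
                break
            w.borrowed_gpus += 1   # lend one idle GPU to this workload
            idle_pool -= 1
    # When an idle workload becomes active again, its lent GPUs would be
    # reclaimed by the scheduler (omitted here for brevity).

if __name__ == "__main__":
    jobs = [
        Workload("train-a", allocated_gpus=2, active=False),  # idle owner
        Workload("train-b", allocated_gpus=2, active=True),
    ]
    lend_idle_gpus(jobs)
    for w in jobs:
        print(f"{w.name}: guaranteed={w.allocated_gpus}, borrowed={w.borrowed_gpus}")
```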


Fractional GPU

Run:AI’s GPU abstraction capabilities allow GPU resources to be shared without memory overflows or processing clashes. Using virtualized logical GPUs with their own memory and compute space, containers can access GPU fractions as if they were self-contained processors. The solution is transparent, simple, and portable; it requires no code changes or changes to the containers themselves.
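
As an illustration, the sketch below shows how a containerized workload might request a fraction of a GPU through a Kubernetes pod annotation rather than through code changes. The annotation key "gpu-fraction" and the scheduler name "runai-scheduler" are assumptions used for illustration; consult the Run:AI documentation for the exact names.

```python
# Illustrative sketch: requesting a GPU fraction via a pod annotation using
# the official Kubernetes Python client. The "gpu-fraction" annotation key
# and the "runai-scheduler" scheduler name are assumptions for illustration.
from kubernetes import client

def fractional_gpu_pod(name: str, image: str, fraction: float) -> client.V1Pod:
    """Build a pod spec that asks the scheduler for a fraction of one GPU."""
    return client.V1Pod(
        metadata=client.V1ObjectMeta(
            name=name,
            annotations={"gpu-fraction": str(fraction)},  # assumed annotation key
        ),
        spec=client.V1PodSpec(
            scheduler_name="runai-scheduler",  # assumed scheduler name
            containers=[client.V1Container(name=name, image=image)],
        ),
    )

# Two such pods, each requesting 0.5 of a GPU, could share one physical GPU,
# each seeing its own isolated memory slice.
pod = fractional_gpu_pod("notebook-1", "jupyter/tensorflow-notebook", 0.5)
```

Because the fraction is expressed in the pod spec rather than in application code, the container image itself stays unchanged, which is what makes the approach transparent and portable.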