Run:AI software dynamically provisions resources and sets job priorities
for optimal allocation

Run:AI virtualizes and accelerates AI workloads by pooling GPU compute resources, providing visibility and, ultimately, control over resource prioritization and allocation. This maps AI projects to business goals and significantly improves the productivity of data science teams, allowing them to build and train concurrent models without resource limitations.

There are three key product components of Run:AI’s virtualization platform: 

GPU Machine Scheduling

Tying all of the elements of the Run:AI product together is a dedicated batch scheduler running on Kubernetes. The scheduler enables critical features for managing DL workloads, including an advanced multi-queue mechanism, fixed and guaranteed quotas, priority and policy management, automatic pause/resume, multi-node training, and more. It provides an elegant solution that simplifies otherwise complex scheduling processes.
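To make the multi-queue and quota ideas concrete, here is a minimal, self-contained sketch of a batch scheduler that honors per-queue guaranteed quotas first and then hands spare GPUs to the highest-priority waiting jobs. This is purely illustrative: the class and field names (`MultiQueueScheduler`, `Job`, `quotas`) are invented for this example and do not reflect Run:AI's actual implementation or APIs.

```python
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class Job:
    priority: int                       # lower value = higher priority
    name: str = field(compare=False)
    gpus: int = field(compare=False)    # GPUs requested by the job


class MultiQueueScheduler:
    """Toy batch scheduler: each queue (e.g. a team or project) has a
    guaranteed GPU quota; leftover capacity is shared by job priority."""

    def __init__(self, total_gpus, quotas):
        self.total_gpus = total_gpus
        self.quotas = quotas                        # queue name -> guaranteed GPUs
        self.queues = {q: [] for q in quotas}       # per-queue priority heaps

    def submit(self, queue, job):
        heapq.heappush(self.queues[queue], job)

    def schedule(self):
        """Return (queue, job name) placements for one scheduling cycle."""
        free = self.total_gpus
        placed = []
        # Pass 1: honor each queue's guaranteed quota.
        for q, quota in self.quotas.items():
            used = 0
            heap = self.queues[q]
            while heap and heap[0].gpus <= min(quota - used, free):
                job = heapq.heappop(heap)
                used += job.gpus
                free -= job.gpus
                placed.append((q, job.name))
        # Pass 2: give spare GPUs to the highest-priority waiting jobs,
        # regardless of which queue they sit in.
        pending = sorted(
            (job, q) for q, jobs in self.queues.items() for job in jobs
        )
        for job, q in pending:
            if job.gpus <= free:
                # Remove by identity to avoid matching equal-priority jobs.
                self.queues[q] = [j for j in self.queues[q] if j is not job]
                heapq.heapify(self.queues[q])
                free -= job.gpus
                placed.append((q, job.name))
        return placed


# Example cycle: 8 GPUs total, two queues with guaranteed quotas.
sched = MultiQueueScheduler(total_gpus=8, quotas={"team-a": 4, "team-b": 2})
sched.submit("team-a", Job(priority=1, name="a1", gpus=4))
sched.submit("team-a", Job(priority=2, name="a2", gpus=3))
sched.submit("team-b", Job(priority=0, name="b1", gpus=2))
placements = sched.schedule()
# a1 and b1 fit inside their queues' quotas; a2 waits for free GPUs.
```

A real scheduler would also implement preemption (automatic pause/resume) and gang scheduling for multi-node training, but the two-pass quota-then-priority loop above captures the core allocation idea.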