Run:ai deployed on NVIDIA VMIs enables multi-cloud scaling as well as ‘lift & shift’ cloud deployments
Tel Aviv, March 24, 2022. Run:ai, the company simplifying AI infrastructure orchestration and management, today announced details of a completed proof of concept (POC) which enables multi-cloud GPU flexibility for companies using NVIDIA GPUs in the cloud. NVIDIA’s software suite includes virtual machine images, or VMIs, which are optimized for NVIDIA GPUs running in clouds such as Amazon Web Services, Microsoft Azure, Google Cloud, and Oracle Cloud. Run:ai software deployed on NVIDIA VMIs enables cloud customers to move AI workloads from one cloud to another, as well as to use multiple clouds simultaneously for different AI workloads with zero code changes.
Run:ai’s workload-aware orchestration ensures that every type of AI workload gets the right amount of compute resources when needed, and provides deep integration into NVIDIA GPUs to achieve optimal utilization of these resources. Run:ai’s Kubernetes-based Atlas platform and NVIDIA VMIs were used together in the POC to support ‘lift & shift’ as well as multi-node scaling in the cloud. NVIDIA customers and partners can de-risk their AI cloud deployments with a streamlined and portable solution for cloud AI infrastructure from Run:ai. Customers looking to cost-optimize their cloud computing resources can choose among supported cloud providers for the best-fit configuration. They can also manage AI workloads on multiple clouds with a single control plane.
NVIDIA VMIs are available on each of the major public cloud providers. NVIDIA publishes these with regular updates to both OS and drivers. The VMIs are optimized for performance on the latest generations of NVIDIA GPUs and allow for easy and fast deployment of GPU-accelerated instances on the public cloud.
“By combining accelerated computing power from NVIDIA with Run:ai’s Atlas platform, organizations have a stellar AI foundation that enables them to successfully deliver on their AI initiatives,” said Omri Geller, CEO and co-founder of Run:ai. “We appreciate the close relationship we have with the NVIDIA cloud team and their commitment to support NVIDIA accelerated computing customers everywhere.”
“From innovative startups to world-leading enterprises, NVIDIA-accelerated cloud computing provides customers with flexible options for powering their most demanding workloads,” said Paresh Kharya, senior director, Accelerated Computing at NVIDIA. “Paired with NVIDIA-accelerated instances from leading cloud service providers, the Run:ai Atlas platform helps customers maximize the efficiency and value of AI workload operations.”
The Run:ai Atlas Platform brings simplicity to GPU management by providing researchers with on-demand access to pooled resources for any AI workload. It also has built-in integration with NVIDIA Triton Inference Server, NVIDIA’s open-source inference serving software that lets teams deploy trained AI models from any framework on GPU or CPU infrastructure.
As an innovative cloud-native operating system that includes a workload-aware scheduler and a GPU abstraction layer, the platform helps IT managers simplify AI implementation, increase team productivity, and gain full utilization of GPUs. Run:ai now offers a simple solution to teams with a multi-cloud AI infrastructure strategy. The solution is available in beta; reach out to [email protected] to learn more.
Additionally, Run:ai and NVIDIA are further expanding their collaboration to support customers who are operationalizing AI development. Run:ai is among the NVIDIA DGX-Ready Software partners joining the NVIDIA AI Accelerated program, which offers customers validated, enterprise-grade workflow and cluster management, scheduling and orchestration solutions for a variety of NVIDIA accelerated systems.