The time and cost of training new neural network models are among the biggest barriers to meeting the business goals of your deep learning initiatives.

AI development depends on running many highly compute-intensive training workloads in parallel, which requires specialized and expensive processors such as GPUs. IT leaders, MLOps engineers, and data science teams find themselves with limited ability to allocate and control these expensive compute resources to achieve optimal speed and utilization.

To solve these challenges, Run:AI has built the world’s first virtualization layer for deep learning training workloads. By abstracting workloads from the underlying infrastructure, Run:AI creates a shared pool of resources that can be dynamically provisioned, enabling full utilization of expensive GPU compute.

IT teams retain control and gain real-time visibility – including the run time, queue status, and GPU utilization of each job. A virtual pool of resources enables IT leaders to view and allocate compute across multiple sites, whether on-premises or in the cloud. The Run:AI platform is built on top of Kubernetes, enabling simple integration with leading open-source frameworks.
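To make that Kubernetes integration concrete, here is a minimal sketch that submits a GPU training job through the official Kubernetes Python client. The container image, job name, and the "runai-scheduler" scheduler name are illustrative assumptions rather than Run:AI's documented interface; only the standard Kubernetes batch API and the NVIDIA device-plugin resource key ("nvidia.com/gpu") are taken as given.

from kubernetes import client, config

def submit_training_job():
    # Load cluster credentials from the local kubeconfig (~/.kube/config).
    config.load_kube_config()

    container = client.V1Container(
        name="trainer",
        image="pytorch/pytorch:latest",  # assumption: any framework image
        command=["python", "train.py"],
        resources=client.V1ResourceRequirements(
            # Standard NVIDIA device-plugin resource key: the job draws
            # one GPU from the shared pool.
            limits={"nvidia.com/gpu": "1"},
        ),
    )
    pod_spec = client.V1PodSpec(
        restart_policy="Never",
        # Assumption: queueing and quota are handled by a custom scheduler;
        # "runai-scheduler" is an illustrative name, not a documented value.
        scheduler_name="runai-scheduler",
        containers=[container],
    )
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name="gpu-training-job"),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(spec=pod_spec),
            backoff_limit=0,
        ),
    )
    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)

if __name__ == "__main__":
    submit_training_job()

Pointing schedulerName at a custom scheduler is the standard Kubernetes mechanism by which a platform can take over queueing, quota, and placement decisions without changing the training workload itself.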

Team

Omri Geller – Co-Founder and CEO
Dr. Ronen Dar – Co-Founder and CTO
Ken Zamkow – General Manager, North America
Yaron Goldberg – VP Engineering

Investors & Advisors

Rona Segev – TLV Partners
Haim Sadger – S Capital
Aya Peterburg – S Capital
Prof. Meir Feder – Co-Founder & Advisor
Shahar Kaminitz – Advisor
Benny Schnaider – Advisor