The time and cost of training new neural network models are among the biggest barriers to meeting the business goals of your deep learning initiatives.

AI development requires running many highly compute-intensive training workloads in parallel on specialized, expensive processors such as GPUs. IT leaders, MLOps engineers, and data science teams find themselves with limited ability to allocate and control these expensive compute resources to achieve optimal speed and utilization.

To solve these challenges, Run:AI has built the world’s first virtualization layer for deep learning training workloads. By abstracting workloads from the underlying infrastructure, Run:AI creates a shared pool of resources that can be dynamically provisioned, enabling full utilization of expensive GPU compute.

IT teams retain control and gain real-time visibility – including the run-time, queueing status, and GPU utilization of each job. A virtual pool of resources enables IT leaders to view and allocate compute resources across multiple sites – whether on-premises or in the cloud. The Run:AI platform is built on top of Kubernetes, enabling simple integration with leading open-source frameworks.
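As an illustration of the Kubernetes foundation described above (this is standard Kubernetes, not Run:AI’s own API), GPUs are typically exposed to the scheduler through the NVIDIA device plugin as the resource `nvidia.com/gpu`, and a training job requests them in its pod spec:

```yaml
# Illustrative sketch: a plain Kubernetes Job requesting one GPU via the
# NVIDIA device plugin. The job name and image are hypothetical examples.
apiVersion: batch/v1
kind: Job
metadata:
  name: train-model          # hypothetical job name
spec:
  template:
    spec:
      containers:
      - name: trainer
        image: pytorch/pytorch:latest   # hypothetical training image
        command: ["python", "train.py"]
        resources:
          limits:
            nvidia.com/gpu: 1   # request one GPU from the shared pool
      restartPolicy: Never
```

A platform layered on Kubernetes can schedule, queue, and monitor jobs expressed this way across a shared GPU pool.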


Team

Omri Geller Co-Founder and CEO
Dr. Ronen Dar Co-Founder and CTO
Ken Zamkow General Manager, North America
Yaron Goldberg VP Engineering

Investors & Advisors

Rona Segev TLV Partners
Haim Sadger S Capital
Aya Peterburg S Capital
Prof. Meir Feder Co-Founder & Advisor
Shahar Kaminitz Advisor
Benny Schnaider Advisor