The time and cost of training new neural network models are two of the biggest barriers to achieving the business goals of deep learning initiatives.

AI development depends on running many highly compute-intensive training jobs in parallel, which requires specialized and expensive processors such as GPUs. IT leaders, MLOps engineers, and data science teams often find themselves with limited ability to allocate and control these costly compute resources for optimal speed and utilization.

To solve these challenges, Run:AI has built the world’s first compute-management platform for orchestrating and accelerating AI. By centralizing and virtualizing GPU compute resources, Run:AI provides visibility and control over resource prioritization and allocation while simplifying workflows and removing infrastructure hassles for data scientists. This ensures AI projects are mapped to business goals and significantly improves the productivity of data science teams, allowing them to build and train concurrent models without resource limitations.

IT teams retain control and gain real-time visibility, including the ability to see and provision the runtime, queueing status, and GPU utilization of each job. A virtual pool of resources enables IT leaders to view and allocate compute across multiple sites, whether on-premises or in the cloud. The Run:AI platform is built on top of Kubernetes, enabling simple integration with leading open-source frameworks.
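As a rough illustration of the standard Kubernetes mechanism such a platform builds on, a pod can request GPUs through the `nvidia.com/gpu` extended resource. This sketch shows plain Kubernetes only; the pod name, image, and entry point are hypothetical, and Run:AI's own scheduler and resource abstractions are not represented here:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: train-job                 # hypothetical training job name
spec:
  restartPolicy: Never
  containers:
    - name: trainer
      image: nvcr.io/nvidia/pytorch:23.10-py3   # example training image
      command: ["python", "train.py"]           # hypothetical entry point
      resources:
        limits:
          nvidia.com/gpu: 2       # standard extended resource: request two GPUs
```

In stock Kubernetes, GPUs requested this way are allocated whole and exclusively to the pod; pooling, queueing, and fractional sharing of the kind described above require an additional scheduling layer on top.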

Team

Omri Geller, Co-Founder and CEO
Dr. Ronen Dar, Co-Founder and CTO
Yaron Goldberg, VP Engineering
Yael Dor, VP Business Development
Fara Hain, VP Marketing

Investors & Advisors

Rona Segev, TLV Partners
Haim Sadger, S Capital
Aya Peterburg, S Capital
Prof. Meir Feder, Co-Founder & Advisor
Shahar Kaminitz, Advisor
Benny Schnaider, Advisor