The time and cost of training new neural network models are among the biggest barriers to extracting complex insights from your data and creating new deep learning solutions.

Deep learning requires experimentation: running slightly modified training workloads many times before the models are accurate enough to use. With some models, a single training iteration can take weeks. The result is long time to delivery, increased workflow complexity, and higher costs.

Run:AI addresses these challenges by rebuilding the virtualization layer from scratch for deep learning workloads. The company's software is tailored to these new computational workloads and helps take full advantage of new AI hardware. It creates a compute abstraction layer that automatically analyzes the computational characteristics of the workloads, eliminating bottlenecks and optimizing them for faster, easier execution using graph-based parallel computing algorithms. It also automatically schedules and runs the workloads. As a result, deep learning experiments run faster, GPU costs drop, and server utilization is maximized while workflows are simplified.

Team

Omri Geller, Co-Founder and CEO
Dr. Ronen Dar, Co-Founder and CTO
Ken Zamkow, General Manager, North America
Yaron Goldberg, VP Engineering

Investors & Advisors

Rona Segev, TLV Partners
Haim Sadger, S Capital
Aya Peterburg, S Capital
Prof. Meir Feder, Co-Founder & Advisor
Shahar Kaminitz, Advisor
Benny Schnaider, Advisor