Run:AI Raises $30M Series B Funding

January 26, 2021

AI orchestration and virtualization software provider Run:AI will use the Series B investment to rapidly expand headcount

Tel Aviv, Israel, January 26, 2021 – Run:AI, the provider of innovative orchestration and virtualization software for artificial intelligence, today announced that it has raised an additional $30M in funding. The Series B round was led by Insight Partners, with participation from existing investors TLV Partners and S-Capital, bringing Run:AI’s total venture financing to $43M. Lonne Jaffe, Managing Director at Insight Partners, will also join Run:AI’s board. Run:AI, which launched in 2019, will use the investment to fund rapid expansion and recruitment.

The growth of AI in recent years is directly linked to the availability of massive computing power. As technological challenges grow in complexity, AI models trained on huge datasets require ever more compute. Deep learning, the most advanced form of AI, typically relies on Graphics Processing Units (GPUs) or other specialized hardware to train its models. According to OpenAI, the demand for compute doubles every 3.5 months*. To support this demand, enormous AI clusters are being deployed on-premises, in public cloud environments, and even at the edge.

Because of this rapidly increasing demand, compute infrastructure inefficiencies are slowing companies’ ability to bring practical AI solutions to market. When GPUs are statically allocated to researchers, resources sit idle even as demand for GPUs grows. In addition, most AI is developed on cloud-native infrastructure, which was originally built to run workloads on CPUs, not GPUs, and lacks many of the compute-scaling features AI requires. To make matters worse, GPUs are not virtualized and cannot be shared between multiple applications or users. The result is typical utilization of only 25 percent in AI clusters and low productivity for data science teams.

Run:AI has built an orchestration and virtualization software layer tailored to the unique needs of AI workloads running on GPUs and similar chipsets. The platform is the first to bring OS-level virtualization software to workloads running on GPUs, an approach inspired by the virtualization and management of CPUs that revolutionized computing in the 1990s. Run:AI’s Kubernetes-based container platform for AI clouds efficiently pools and shares GPUs by automatically assigning the necessary amount of compute power – from fractions of GPUs, to multiple GPUs, to multiple nodes of GPUs – so that researchers can dynamically receive as much compute power as they need. Enterprises and large research centers are using Run:AI to solve their resource challenges for both training and inference; better utilization of their AI computing infrastructure allows them to bring AI solutions to market faster.
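To make the idea of pooling and fractional allocation concrete, the sketch below uses the standard Kubernetes Python client to submit a training pod that asks a GPU-sharing scheduler for half a GPU. This is an illustrative sketch only, not Run:AI documentation: the `runai-scheduler` scheduler name and the `gpu-fraction` annotation key are assumptions, and the exact fields depend on the specific Run:AI deployment.

```python
# Minimal sketch (illustrative only): submitting a training pod that requests a
# fraction of a GPU from a shared pool, via the standard Kubernetes Python client.
# The scheduler name and annotation key below are assumed, not official values.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="resnet-train",
        annotations={"gpu-fraction": "0.5"},  # assumed annotation: request half of one GPU
    ),
    spec=client.V1PodSpec(
        scheduler_name="runai-scheduler",     # assumed name of the GPU-pooling scheduler
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="pytorch/pytorch:latest",
                command=["python", "train.py"],
            )
        ],
    ),
)

# The scheduler, rather than a static GPU assignment, decides where the workload runs
# and how much GPU compute it receives.
client.CoreV1Api().create_namespaced_pod(namespace="research-team-a", body=pod)
```

Because workloads declare what they need and a central scheduler decides placement, idle GPU capacity can be handed to whichever researcher’s job can use it at that moment.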

“Tomorrow’s industry leaders will be those companies that master both hardware and software within the AI data centers of the future,” said Omri Geller, co-founder and CEO of Run:AI. “The more experiments a team runs, and the faster it runs them, the sooner a company can bring AI solutions to market. Every time GPU resources are sitting idle, that’s an experiment that another team member could have been running, or a prediction that could have been made, slowing down critical AI initiatives.”

“Every enterprise is either already rearchitecting themselves to be built around learning systems powered by AI, or they should be,” said Lonne Jaffe, Managing Director at Insight Partners. “Just as virtualization and then container technology transformed CPU-based workloads over the last decades, Run:AI is bringing orchestration and virtualization technology to AI chipsets such as GPUs, dramatically accelerating both AI training and inference. The system also future-proofs deep learning workloads, allowing them to inherit the power of the latest hardware with less rework. In Run:AI, we’ve found disruptive technology, an experienced team and a SaaS-based market strategy that will help enterprises deploy the AI they’ll need to stay competitive.”

Since its launch, Run:AI has built a global customer base, particularly in the automotive, finance, defense, manufacturing and healthcare industries. Customers using Run:AI see GPU utilization increase from 25 to 75 percent on average**, and one customer saw experiment speed increase by 3,000 percent after installing Run:AI’s platform***.

Dr. M. Jorge Cardoso, Associate Professor & Senior Lecturer in AI at King’s College London, uses Run:AI in the London Medical Imaging & Artificial Intelligence Centre for Value-Based Healthcare (AI Centre). “With Run:AI we’ve seen great improvements in speed of experimentation and GPU hardware utilization. Reducing time to results ensures we can ask and answer more critical questions about people’s health and lives,” said Dr. Cardoso. “The AI Centre is on a journey to change how healthcare is provided and Run:AI empowers us on this journey.”

Run:AI plans to use the $30M investment to triple the size of its team. Run:AI encourages skilled developers to apply to work in its Tel Aviv-based office and offers data science training for software developers joining the company. Interested developers can apply here: run.ai/careers/.

*https://www.zdnet.com/article/ai-is-changing-the-entire-nature-of-compute/

**https://www.run.ai/wp-content/uploads/2020/05/From-28-to-73-percent-GPU-Utilization-with-RunAI.pdf

***https://www.run.ai/wp-content/uploads/2020/07/AI-Centre-KCL-RunAI-Case-Study-7-6-2020.pdf

About Run:AI

Run:AI helps companies execute on their AI initiatives quickly, while keeping budgets under control, by virtualizing and orchestrating AI compute resources in order to pool, share and allocate resources efficiently. Consolidating computational workloads yields greater server utilization, lowering TCO and speeding delivery of AI initiatives. Data science teams have automatic access to as many resources as they need and can utilize compute resources across sites – whether on premises or in the cloud. The Run:AI platform is built on top of Kubernetes, enabling simple integration with existing IT and data science workflows.

About Insight Partners

Insight Partners is a leading global venture capital and private equity firm investing in high-growth technology and software ScaleUp companies that are driving transformative change in their industries. Founded in 1995, Insight Partners has invested in more than 400 companies worldwide and has raised more than $30 billion in capital commitments across a series of funds. Insight’s mission is to find, fund, and work successfully with visionary executives, providing them with practical, hands-on software expertise to foster long-term success. Across its people and its portfolio, Insight encourages a culture built on the belief that ScaleUp companies and growth create opportunity for all. For more information on Insight and all its investments, visit insightpartners.com or follow us on Twitter @insightpartners.