
Case Study – Accelerate AI Experimentation by 3000%
In this talk, Run:AI CEO and co-founder Omri Geller discusses some of the challenges of AI implementation and how to speed up the delivery of AI projects.

Reduce Cost by 75% with Fractional GPU for Deep Learning Inference
In this post, we’ll address how fractionalizing GPUs for deep learning inference workloads with lower computational needs can save 50-75% of the cost of deep learning inference.

Breaking Static GPU Allocations with Guaranteed Quotas
Dr. Ronen Dar, CTO, Run:AI – In a previous article, we discussed the problem of machine scheduling and the complications that arise from inefficient GPU utilization

How Can IT Support Emerging Data Science Initiatives?
This week, Omri Geller, Run:AI’s CEO and co-founder, spoke on a webinar about supporting the data science lifecycle. His talk centered on the three areas

Challenges in GPU Machine Scheduling for AI and ML Workloads
Dr. Ronen Dar, CTO and co-founder, Run:AI – Like a huge safari animal swatting all day at a pesky fly, sophisticated projects are often hindered

Can New Approaches to GPU Machine Management Speed Delivery of Deep Learning Projects?
At the AI Summit in NYC, Omri Geller, Run:AI’s CEO, spoke on the subject of GPU management. His session shared the title of this post: Can New Approaches to GPU Machine Management Speed Delivery of Deep Learning Projects?