
Run:AI announces General Availability of its K8s-based deep learning virtualization platform
The Run:AI deep learning virtualization platform, now supporting Kubernetes, brings control and visibility to IT teams supporting data science initiatives. Tel Aviv — 17 March,

Breaking Static GPU Allocations with Guaranteed Quotas
– Dr. Ronen Dar, CTO, Run:AI. In a previous article, we discussed the problem of machine scheduling and the complications that arise from inefficient GPU utilization

How Can IT Support Emerging Data Science Initiatives?
This week, Omri Geller, Run:AI's CEO and co-founder, spoke on a webinar about supporting the data science lifecycle. His talk centered on the three areas

We Open-Sourced a Gradient Accumulation Tool to Enable Using Large Batch Sizes Even When GPU Memory is Limited
Today, Run:AI published our own gradient accumulation mechanism for Keras – it's a generic implementation that can wrap any Keras optimizer (both a built-in one
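The idea behind gradient accumulation is simple: instead of applying an optimizer update after every small batch, gradients from several "micro-batches" are summed and applied as a single step, emulating a large batch without holding it in GPU memory. The sketch below is a minimal, framework-agnostic illustration of that idea using a linear model and NumPy – it is not Run:AI's Keras implementation, and the function names are hypothetical.

```python
import numpy as np

def grad_mse(w, X, y):
    """Gradient of mean squared error for a linear model y ≈ X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def accumulated_step(w, micro_batches, lr=0.1):
    """Sum gradients over micro-batches, then apply ONE optimizer update.

    Weighting each micro-batch gradient by its size makes the combined
    update identical to a single step on the full concatenated batch.
    (Illustrative sketch only, not Run:AI's published mechanism.)
    """
    acc = np.zeros_like(w)
    total = 0
    for X, y in micro_batches:
        acc += grad_mse(w, X, y) * len(y)  # accumulate, weighted by size
        total += len(y)
    return w - lr * (acc / total)  # single update, as if one large batch
```

Splitting a batch of 8 samples into two micro-batches of 4 produces exactly the same parameter update as one full-batch step, which is why the technique lets memory-limited GPUs train with effectively large batch sizes.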

Challenges in GPU Machine Scheduling for AI and ML Workloads
Dr. Ronen Dar, CTO and co-founder, Run:AI – Like a huge safari animal swatting all day at a pesky fly, sophisticated projects are often hindered

Can New Approaches to GPU Machine Management Speed Delivery of Deep Learning Projects?
At the AI Summit in NYC, Omri Geller, Run:AI CEO spoke on the subject of GPU management. This was his session topic: Can New Approaches

What We’re Reading: Three AI Posts that Got our Attention – vol. 1
What we’re reading from around the web on AI, Machine Learning and Deep Learning. Facebook out of Compute? This post by Tiernan Ray at ZDNet