Deep Learning (DL) orchestration

NVIDIA GTC 2020 – AI Cluster Orchestration

AI has a productivity challenge – can we beat that challenge?

The availability of compute power, particularly NVIDIA GPUs, has helped fuel enormous growth of AI in the enterprise. But getting AI to production quickly and efficiently is still challenging:

  • Because the stages of a data science workflow – build, training, and inference – have different compute needs, researchers find that resources sit idle much of the time, slowing their progress.
  • AI infrastructure is hard to build and manage, and teams often find that this complexity hampers their productivity: data scientists end up managing infrastructure instead of doing research, leading to frustration.

Watch the NVIDIA GTC session below to learn how smart AI cluster orchestration can solve these productivity challenges. Understand how quotas, policies, and job priorities can be used to share resources efficiently, and learn how dynamic, rather than fixed, allocation of resources increases productivity.
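To make the idea concrete, here is a toy sketch of quota-plus-opportunistic scheduling, the essence of dynamic allocation. This is an illustrative assumption of how such a scheduler might behave, not NVIDIA's actual implementation; the job names, teams, and quota numbers are all hypothetical.

```python
def allocate(jobs, total_gpus, quotas):
    """Grant GPUs to jobs in two passes.

    jobs: list of dicts with 'name', 'team', 'priority', 'gpus' (requested).
    quotas: guaranteed GPUs per team.

    Pass 1 honors each team's quota in priority order (the fixed guarantee);
    pass 2 hands any still-idle GPUs to unsatisfied jobs beyond their quota
    (the 'dynamic' part), so hardware never sits unused while work waits.
    """
    by_priority = sorted(jobs, key=lambda j: -j["priority"])
    grant = {j["name"]: 0 for j in jobs}
    team_used = {t: 0 for t in quotas}
    free = total_gpus

    # Pass 1: guaranteed allocation within each team's quota.
    for j in by_priority:
        room = min(j["gpus"], quotas[j["team"]] - team_used[j["team"]], free)
        if room > 0:
            grant[j["name"]] += room
            team_used[j["team"]] += room
            free -= room

    # Pass 2: opportunistically assign leftover GPUs, still by priority.
    for j in by_priority:
        extra = min(j["gpus"] - grant[j["name"]], free)
        if extra > 0:
            grant[j["name"]] += extra
            free -= extra
    return grant

jobs = [
    {"name": "train-A", "team": "vision", "priority": 2, "gpus": 6},
    {"name": "infer-B", "team": "nlp",    "priority": 3, "gpus": 2},
    {"name": "build-C", "team": "vision", "priority": 1, "gpus": 4},
]
print(allocate(jobs, total_gpus=8, quotas={"vision": 4, "nlp": 4}))
# train-A exceeds its team's quota of 4 because 2 GPUs would otherwise idle.
```

With fixed allocation, train-A would be capped at 4 GPUs even while 2 GPUs sit idle; the opportunistic second pass is what lets busy jobs absorb slack capacity.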

Also included: a short case study from a London-based research university that used AI orchestration software to optimize models in just 2 days instead of 49.

