NetApp and Run:AI partner to simplify the orchestration of AI workloads

Partnership helps companies streamline both data pipelines and machine scheduling for deep learning (DL).

Today, at the NetApp Insight Digital 2020 Conference, NetApp announced a partnership with us at Run:AI to enable faster AI experimentation through full GPU utilization. Together, the two companies will speed up AI development by running many experiments in parallel, with fast access to data and virtually limitless compute resources.

Read more about the partnership on the NetApp blog here.

NetApp’s Rick Huang, a Technical Marketing Engineer and Data Scientist, created a video demo of the Run:AI platform working with the NetApp ONTAP AI solution. For details on how the two solutions work together, you can read the technical report here.
