In today's fast-moving AI landscape, shaped in particular by the remarkable ascent of Large Language Models (LLMs), Run:ai remains dedicated to delivering state-of-the-art solutions to the distinct challenges faced by data scientists, infrastructure teams, and AI researchers. Run:ai 2.15 was built to deliver exceptional value to organizations building and running LLMs. Let's delve into the new features and the value each one brings.
Empowering Data Scientists
Elastic Workload Support
Run:ai 2.15 empowers data scientists to schedule and dynamically orchestrate elastic workloads, including those built on the Ray framework or Apache Spark, giving researchers the flexibility to scale resources up and down seamlessly for complex experiments.
Simplified Distributed Training
Running distributed training has never been easier. Data scientists can now configure and manage distributed training through a user-friendly UI or the CLI, selecting the framework of their choice, whether it's MPI, TensorFlow, PyTorch, Ray, or others, streamlining multi-node training and enhancing efficiency.
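As a rough sketch of what a CLI submission might look like (flag names follow the 2.x `submit-dist` syntax but may differ by version, and the image and entrypoint below are placeholders, not Run:ai defaults):

```shell
# Hedged sketch: submit a distributed PyTorch job with three workers.
# Verify exact flags with `runai submit-dist --help` for your CLI version.
# `my-registry/train:latest` and `train.py` are placeholder names.
runai submit-dist pytorch my-training \
  --workers 3 \
  --gpu 1 \
  --image my-registry/train:latest \
  -- python train.py
```

The same job can be configured framework-by-framework (MPI, TensorFlow, and so on) from the UI without writing Kubernetes manifests by hand.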
Seamless Asset Sharing
Collaboration among enterprise teams is now seamless. Assets used to create workspaces and training jobs can be shared across organizational scopes. This promotes teamwork and efficient resource sharing, fostering collaborative AI development.
Efficiency for Infrastructure Teams
For infrastructure teams, Run:ai introduces features that enhance efficiency, collaboration, and control over AI environments.
Efficient Cluster Cleanup
In response to the issue of AI job clutter, Run:ai 2.15 allows for the automatic deletion of completed jobs. This cleanup can be executed at the job level by data scientists or defined as a policy by the AI infrastructure team, streamlining cluster maintenance and ensuring efficient resource usage.
Granular Role-Based Access Control (RBAC)
Run:ai introduces a revised RBAC mechanism and user interface to align organizational structures with specific roles. This ensures compliance with enterprise standards and gives infrastructure teams control over access and permissions at all levels, enhancing security and management.
Advanced GPU Fractionalization Core Technology
Run:ai's GPU fractionalization core technology receives notable enhancements, adding significant value to resource allocation and management.
Enhanced GPU Fractioning
The improved GPU fractioning capabilities now allow for GPU compute fractioning in addition to GPU memory. This provides finer control over resource allocation and better GPU utilization.
GPU Request and Limit Control
Run:ai now supports GPU memory request and limit definitions, allowing AI workloads to burst beyond their initially requested memory when capacity is available, while never exceeding the defined GPU memory limit.
Additional General Features
In addition to features that benefit different stakeholders, Run:ai 2.15 introduces several general improvements that enhance the overall AI experience.
Enhanced Kubernetes and OpenShift Support
This update includes support for the latest Kubernetes version 1.28 and OpenShift version 4.13, ensuring compatibility with the latest container orchestration technologies.
Availability as Red Hat Universal Base Images (UBI)
For OpenShift customers, Run:ai is now available as Red Hat Universal Base Images, expanding deployment options for Red Hat-based environments.
Integration with JFrog Artifactory
Run:ai now integrates seamlessly with JFrog Artifactory as a container repository, streamlining container image management for organizations that standardize on Artifactory.
With Run:ai 2.15, the AI ecosystem becomes more efficient, collaborative, and flexible, ensuring that every stakeholder can extract maximum value from their AI workloads. This release reflects our ongoing commitment to supporting data scientists, infrastructure teams, researchers, and everyone dedicated to advancing AI technology. Stay at the forefront of AI innovation with Run:ai.