NEW YORK, December 8, 2022 — Comet, provider of the leading MLOps platform for machine learning (ML) teams from startup to enterprise, and Run:ai, the leader in compute orchestration for AI workloads, announced a new partnership that will help ML practitioners accelerate their workflows and benefit from improved support throughout the ML lifecycle. Returning customers gain seamless access to this integrated solution, which combines Comet’s experiment management and model production monitoring with Run:ai’s orchestration, while new customers can take full advantage of the integration to get the most out of their ML initiatives from their first experiments through production.
“Many of our customers rely on Run:ai, and this integration enables teams to derive even more value from our respective solutions,” said Gideon Mendels, CEO and co-founder of Comet. “Comet is fully committed to working with companies in the ML ecosystem to accelerate the maturation and adoption of ML. We believe that integrations and collaboration are the best way forward for the community, and we are delighted to be working with Run:ai in pursuit of this goal.”
Run:ai is the latest in a series of technology partnerships and integrations aimed at extending Comet’s ecosystem and interoperability. The flexibility of the Comet platform makes it well suited to the changing AI landscape. Comet uniquely offers both experiment tracking and model production monitoring, and its platform can run on any infrastructure, whether cloud, on-premises, or virtual private cloud (VPC). This approach has not only won Comet an outstanding roster of clients; the company was also recently recognized as a Gartner Cool Vendor and a CRN Emerging Vendor.
The new partnership between Comet and Run:ai will streamline ML projects for data scientists, researchers, IT teams, and extended team members seeking strategic business insights. In addition to Comet’s world-class experiment tracking and model production monitoring capabilities, joint customers can now easily operationalize cloud-native shared GPU clusters with Run:ai.
Run:ai’s Kubernetes-based software platform for orchestrating containerized AI workloads enables the dynamic use of GPU clusters for different deep learning workloads, from model building to training to inference. With Run:ai, jobs at any stage automatically access the computing power they need. Run:ai’s compute management platform accelerates data science initiatives by pooling available resources and then dynamically allocating them as needed, maximizing accessible compute.
Customers can now use Run:ai’s scheduling and orchestration capabilities to optimize resources. Teams also benefit from a unique ML system of record to maintain an accurate history of all ML experiments, model history, and dataset versioning.
“We’re excited to work with Comet to help ML practitioners accomplish more, faster, and at greater scale than ever before,” said Omri Geller, CEO and co-founder of Run:ai. “Together, we will empower practitioners throughout the ML lifecycle with smart tools that will make their work easier and more efficient.”
About Comet
Comet provides an MLOps platform that data scientists and machine learning teams use to manage, optimize, and accelerate the development process throughout the ML lifecycle, from training cycles to monitoring models in production. Comet’s platform is trusted by more than 150 enterprise customers, including Affirm, Cepsa, Etsy, Uber, and Zappos. Individuals and academic teams use Comet’s platform to advance research in their fields of study. Founded in 2017, Comet is headquartered in New York, NY, with a remote workforce in nine countries on four continents. Comet is free for individuals and academic teams. Startup, team, and enterprise licenses are also available.
About Run:ai
Run:ai’s Atlas platform brings cloud-like simplicity to AI resource management, giving researchers on-demand access to pooled resources for any AI workload. An innovative cloud operating system, which includes a workload-aware scheduler and an abstraction layer, helps IT simplify AI implementation, increase team productivity, and take full advantage of expensive GPUs. With Run:ai, enterprises streamline the development, management, and scaling of AI applications on any infrastructure, including on-premises, edge, and cloud.