AI Freedom or Cloud Vendor Lock-in?
Large enterprises, AI startups, and government agencies are discovering that Spell is a powerful MLOps alternative to cloud-specific solutions for managing the development, deployment, and infrastructure of complex deep learning and ML applications. Machine learning engineers and software developers are discovering faster development and better collaboration. Data science managers are discovering a new level of visibility for workflow tracking, resource consumption, and model assets. And business stakeholders and technology executives are discovering lower cost and faster time to value. Spell gives them the freedom to choose the most competitive AI infrastructure: on-premises, in the cloud on Microsoft Azure, Google Cloud Platform (GCP), or Amazon Web Services (AWS), or any combination.
Spell’s co-founders have deep experience in developing and delivering complex AI at scale. CEO Serkan Piantino founded and ran Facebook’s AI Research unit, and CTO Trey Lawrence was the Lead Engineer at the machine vision and NLP innovator Clarifai. Their experience managing complex deep learning workflows and massive cloud compute infrastructure led to the vision of a new MLOps platform: one that would be cloud agnostic, let developers keep their existing tools, give managers and stakeholders granular operational intelligence, and reduce the cost and time to value of AI deployments.
Unlike cloud-specific, development-focused MLOps tools, Spell manages the entire model lifecycle: development, deployment, and infrastructure. It provides complete control, visibility, and accountability for the full model lineage on public, private, and hybrid clouds, or any combination.
Spell gives you detailed, real-time information about experiments, projects, models, resource utilization, and more.
In addition to locking you into cloud-specific MLOps tools, some cloud vendors also levy compute upcharges for using those tools on their cloud. For example, one cloud vendor adds a 25% compute upcharge, even on standard, commodity instances otherwise used for non-ML workloads.
Spell eliminates the MLOps upcharge for compute resources. You can run Spell MLOps workloads on any cloud for the same compute cost as any other workload. Even better, for added savings of as much as 90%, Spell also lets you use lower-cost spot instances without the risk that an interruption will cost you work on long-running training jobs. With Spell, you never lose any work when a spot instance is reclaimed by the cloud provider: an auto-resume feature lets you instantly pick up on a new instance where the interrupted instance left off.
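As a rough illustration, here is the back-of-the-envelope math on those two figures. The $1.00/hr base rate is a hypothetical placeholder, not a quoted price; actual rates vary by instance type, region, and provider.

```python
# Hourly compute cost under three scenarios, using the figures cited above:
# a 25% MLOps upcharge on one cloud vendor, and spot-instance savings of
# up to 90%. BASE_RATE is a hypothetical on-demand rate for illustration.

BASE_RATE = 1.00        # hypothetical on-demand rate, $/hr (not a real price)
MLOPS_UPCHARGE = 0.25   # 25% upcharge cited for one vendor's MLOps tooling
SPOT_DISCOUNT = 0.90    # up to 90% savings on spot instances

upcharged = BASE_RATE * (1 + MLOPS_UPCHARGE)  # with the vendor upcharge
on_demand = BASE_RATE                         # same instance, no upcharge
spot = BASE_RATE * (1 - SPOT_DISCOUNT)        # spot instance, no upcharge

print(f"With 25% MLOps upcharge: ${upcharged:.2f}/hr")
print(f"No upcharge (on-demand): ${on_demand:.2f}/hr")
print(f"No upcharge (spot):      ${spot:.2f}/hr")
```

At the best case, the spot price is roughly a tenth of what the same hour costs with the vendor upcharge applied.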
You can use Spell on AWS, Google Cloud Platform, and Microsoft Azure, allowing you to choose the infrastructure with the best price performance for every workload. Spell also runs on-premises and in private clouds, and it allows training and other processes for a single project to be distributed across multiple clouds for the best time to value at the lowest cost.
If you are using cloud-specific MLOps tools, the Spell team is ready to help you move one of your current workflows to the Spell platform for a side-by-side comparison on that cloud. If you are new to MLOps, we’ll show you how to leverage the entire cloud universe for faster, better model training, deployment, and management, and avoid the expense and limitations of single-cloud solutions.