The growth of deep learning and large model architectures in production systems has spurred increasing demand for dedicated hardware designed to accelerate training and inference of large neural-network-based AI systems. Today, Spell is proud to announce a joint partnership with Graphcore to bring one of the first combined offerings of on-demand AI accelerator hardware supported by an end-to-end MLOps platform.
Graphcore’s Intelligence Processing Units (IPUs) and related AI compute systems have a history of delivering outstanding performance and efficiency across critical ML domains (e.g., NLP, computer vision) and enterprise industry verticals. Combined with Spell’s deep learning platform, Graphcore IPUs are now available in preview directly in the cloud, accessible to all practitioners across the expanding AI ecosystem.
IPUs on demand
At Spell, we’re excited to offer an entirely new way for anyone who has heard about the IPU’s capabilities to experience them first-hand. For the first time, Spell and Graphcore are offering self-service IPU compute available directly to users without upfront investment in on-prem hardware or reserved instances. The instances are currently offered on demand, free of charge on a limited basis, during the preview period.
Using the spell run command for remote execution from the command line, or Spell Workspaces in the web console, ML engineers can instantly run state-of-the-art applications and quickly iterate on experiments on one of our latest MK2 IPU-POD systems.
Our IPU-specific tutorial repos and notebooks provide an interactive way for users to quickly get familiar with the key programming principles of the IPU while also exploring its power on the latest and most commonly used machine learning models. Developers can use their allocated IPU compute time to try out Graphcore’s IPUs in whatever way suits them, from benchmarking performance on pre-existing, IPU-accelerated models to building and training their own bespoke models using common ML frameworks such as PyTorch or TensorFlow. Spell orchestration manages the IPU compute, storage, and everything under the hood, allowing users to focus on experimentation, model development, and fine-tuning for the IPU. Spell also allows containerized access to Graphcore’s native Poplar software stack, which has been co-designed with the IPU, for those who want the most granular control over the IPU hardware.
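As a rough sketch, a first experiment during the preview might look like the following. The machine-type name and training script below are illustrative placeholders, not exact identifiers; check the Spell console and preview documentation for the IPU instance types actually available to your account.

```shell
# Grab one of the IPU tutorial repos as a starting point
# (repository name shown for illustration).
git clone https://github.com/graphcore/tutorials.git
cd tutorials

# Launch a remote run on an IPU instance via the Spell CLI.
# "IPUx16" is a hypothetical machine-type name used for illustration;
# substitute the IPU instance type listed in your Spell console.
spell run --machine-type IPUx16 "python train.py"

# Follow the logs of the run from the CLI once it has been assigned an ID.
spell logs <run-id>
```

Because Spell handles provisioning and orchestration, the same workflow applies whether the target is an IPU system or any other accelerator type on the platform.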
Scaling with Graphcore
We believe that Graphcore’s mission to build dedicated processors for machine learning is core to accelerating innovation in both MLOps and the broader AI ecosystem, and that ML operational automation backed by accessible, high-performance computing will be the future of operationalizing AI at scale. Spell is dedicated to continuing its partnership with Graphcore to broaden access to AI hardware innovation and keep raising the performance bar in production-level MLOps and AI offerings.
For more information on this offering, head over to our signup page to sign up and learn more about Graphcore’s capabilities.