Leverage the power of Graphcore’s Intelligence Processing Unit (IPU), designed to enable the next breakthroughs in machine intelligence, running in the cloud and orchestrated directly from the Spell platform.
The Graphcore + Spell partnership offers state-of-the-art AI hardware acceleration with a simple, streamlined MLOps user experience accessible from the cloud.
Train models immediately with no on-premises hardware and no upfront cost (the technical preview is currently free).
The IPU is a new kind of massively parallel processor, built to accelerate machine intelligence, with groundbreaking advances in compute, communication and memory.
Accelerate training and inference with high-performance optimisations across natural language processing, computer vision and more.
Ideal for exploration, the IPU-POD₁₆ gives you all the power, performance and flexibility you need to fast-track your IPU prototypes and move from pilot to production.
Graphcore’s software stack is co-designed with the IPU for AI application development and fully integrated with popular ML frameworks, so developers can easily port existing models.
Build models in your local IDE or Jupyter notebook, and execute runs on an automatically configured IPU-POD₁₆ system, all from the command line.
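As a minimal sketch of that workflow with the Spell CLI: the machine-type identifier below (`IPUx16`) is an assumption for illustration, so consult the Spell documentation for the exact name to pass.

```shell
# Hedged sketch of the CLI workflow; the machine-type name is an
# assumption -- check the Spell docs for the exact IPU identifier.
pip install spell        # install the Spell CLI
spell login              # authenticate against your Spell account

# Launch a training script from your local project directory;
# Spell uploads the code and provisions the IPU machine for the run.
spell run --machine-type IPUx16 "python train.py"

# Stream the logs for a run (replace 42 with your run ID).
spell logs 42
```

Because Spell snapshots your working directory at launch, the same `spell run` command works unchanged whether you iterate from an IDE or a notebook terminal.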
Automate IPU resource allocation among multiple concurrent users, orchestrated through the Spell machine scheduler.
Monitor and stream training, usage, and performance metrics through a central control panel, within and across all runs. Log your own custom metrics through an extensible Python API.
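A hedged sketch of logging a custom metric from a training script: the call `spell.metrics.send_metric` is assumed from the Spell Python API, and the script falls back to printing when the package is unavailable (e.g. when run locally), so the example stays self-contained. The `train_epoch` function is a hypothetical placeholder.

```python
# Hedged sketch: emit a custom per-epoch metric that appears in the
# Spell metrics pane. `spell.metrics.send_metric` is assumed from the
# Spell Python API; fall back to printing if the package is absent.
try:
    from spell.metrics import send_metric  # assumed Spell API
except ImportError:
    def send_metric(name, value):
        # Local fallback so the sketch runs outside a Spell run.
        print(f"{name}: {value}")

def train_epoch(epoch):
    # Hypothetical placeholder loss, just for illustration.
    return 1.0 / (epoch + 1)

for epoch in range(3):
    loss = train_epoch(epoch)
    send_metric("loss", loss)  # streamed live alongside built-in metrics
```

Metrics sent this way are tracked per run, so they can be compared across runs from the same central panel as the built-in hardware and training metrics.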
Interested in leveraging Spell or Graphcore IPUs for your business? Reach out directly to our engineering team for more information or a demo.
Visit our GitHub repository for tutorials and application examples, ready to be trained out of the box on an IPU with popular ML frameworks.
Join our Community Slack channel for technical support, news, and community information.