Spell is a platform for training and deploying machine learning models quickly and easily. This quickstart guide will walk you through training your first machine learning model using the Spell CLI.
First, install the Spell client from PyPI:
$ pip install spell
Once you have the Spell CLI installed, verify that everything is working as expected by running
spell --help. This should output a helpful list of subcommands.
Before you can do anything useful, you first need to log in:
$ spell login
This will prompt you for your password. The credentials you use here are the same ones you use to log in to the web console. If your login is successful, you will see a greeting; otherwise, you will receive an error message. If you are sure you are using the correct username and password and are still unable to log in to your account, please contact us at firstname.lastname@example.org.
You can check your current login status at any time using:
$ spell whoami
Your first run
Runs are the foundation of Spell. Creating a run is how you execute your code on Spell's computing infrastructure, so the
spell run command is likely the command you'll use most while using Spell.
Each run in Spell is an instance of a single computational job executed on our infrastructure. Runs are typically executed from inside a Git repository. Executing a run will:
- Sync the contents of the repository with Spell.
- Spin up a machine (or set of machines!) on the cloud, and execute your job on those machine(s).
- Save any file outputs from those jobs to our filesystem, SpellFS, for later access.
To execute a run, use
spell run. The simplest command you can run on your computer is
echo "hello world", which will print
hello world to the screen. To run this on Spell:
$ spell run "echo hello world"
✨ Casting spell #1…
✨ Stop viewing logs with ^C
...
✨ Run is running
hello world
The run workflow
To dig a bit deeper into runs and see the run workflow in action, we will train an example style transfer network on Spell, using the training script in the cysmith/neural-style-tf repository on GitHub.
A style transfer network is a kind of deep learning model that learns to transfer a style from one image onto the content of another image. To train this network, we need three things: pretrained model weights, a style target, and a content target.
An easy, portable way to download pretrained model weights is to have a run do it for us. To begin, execute the following command:
$ spell run \
    "wget -O imagenet-vgg-verydeep-19.mat \
    'http://www.vlfeat.org/matconvnet/models/beta16/imagenet-vgg-verydeep-19.mat'"
A truncated response to this run:
✨ Casting spell #2
Run created -- waiting for a CPU machine.
...
✨ Run is running
http://www.vlfeat.org/matconvnet/models/beta16/imagenet-vgg-verydeep-19.mat
...
Length: 576042600 (549M) [text/plain]
Saving to: 'imagenet-vgg-verydeep-19.mat'
...
(15.8 MB/s) - 'imagenet-vgg-verydeep-19.mat' saved [576042600/576042600]
...
✨ Run 2 complete
Our run manager automatically detects any files that a run has written to the
/spell/ directory on local disk and copies them over to our virtual filesystem, SpellFS, for persistent storage. Files generated as part of a run like this one are saved to the
runs/$RUNID path on SpellFS (replace
$RUNID with your run's ID). You can browse and/or download the files created by a run in the Spell web UI by visiting the run summary page:
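You can also inspect run outputs from the command line. This is a quick sketch, assuming your run's ID is 2 (matching the transcript earlier in this guide) and assuming the Spell CLI's spell ls and spell cp subcommands:
$ spell ls runs/2
$ spell cp runs/2/imagenet-vgg-verydeep-19.mat .
The first command lists the files the run saved to SpellFS; the second copies the weights file down to your local machine.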
Now that the model checkpoints are available on Spell, the next step is uploading the images. For the purposes of this demo, we will take the style from this Ralph Steadman illustration:
And transfer it onto this image of our teammate's cat:
We need to push these images to SpellFS. Start by creating an empty directory and saving these files to it as antelope.jpg and cat.jpg, respectively. Then navigate to that directory in your terminal and run the following CLI command:
$ spell upload --name neural-style-imgs .
spell upload is a simple command that lets you upload files or folders on your local machine to Spell. The files land in the
uploads/neural-style-imgs path on SpellFS.
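To double-check that the upload landed where you expect, you can list the new path (a sketch, assuming the Spell CLI's spell ls subcommand):
$ spell ls uploads/neural-style-imgs
You should see antelope.jpg and cat.jpg in the listing.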
Notice how data we upload to Spell goes to the
uploads folder, while data we create in runs goes to the
runs folder. There is one other top-level folder we are not using here: the
public folder, which contains a few sample datasets.
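You can browse those sample datasets the same way you browse your own data (a sketch, assuming the Spell CLI's spell ls subcommand):
$ spell ls public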
Next clone the code repo and
cd inside of it:
$ git clone https://github.com/cysmith/neural-style-tf.git
$ cd neural-style-tf
Now that we have all of our data uploaded to Spell and all of our code cloned to our local machine, we are ready to train our model! We do so by running the following CLI command:
$ spell run --machine-type T4 \
    --mount runs/$RUNID/imagenet-vgg-verydeep-19.mat:imagenet-vgg-verydeep-19.mat \
    --mount uploads/neural-style-imgs/antelope.jpg:styles/antelope.jpg \
    --mount uploads/neural-style-imgs/cat.jpg:image_input/cat.jpg \
    "python neural_style.py \
    --style_imgs antelope.jpg \
    --content_img cat.jpg"
The first option that we specify is
--machine-type. This option specifies that instead of using the default machine type, CPU, we will use a
T4 (the cheapest GPU instance available on AWS).
The next set of
mount options specify a mapping of files or folders from SpellFS to the local filesystem of the machine running your code. Replace
$RUNID with the ID of the run that saved the pretrained model weights to SpellFS in the previous step.
Finally we have the actual
python command that will get run. This is specific to the
neural-style-tf training script,
neural_style.py, and not to Spell, but we'll cover it anyway for the sake of completeness. We have two arguments:
style_imgs, which points to the file inside of the
styles directory that will serve as the style target, and
content_img, pointing to the file inside of the
image_input directory that will serve as the content target.
Run this command now. You can track its progress either in the console or using the web UI. After some time, the run will finish, and the resulting image will be available on SpellFS:
Congratulations, you've now trained your first machine learning model on Spell!
So far we've covered runs and resources. In this last section of the quickstart we will cover one other major feature of Spell: workspaces.
Workspaces are instances of Jupyter Notebook or JupyterLab running on the cloud. Workspaces are designed to replicate your local machine learning development environment. But because workspaces run in the cloud, they are more easily replicable, scalable, and sharable.
You can launch a workspace from the web console. First you'll be asked for a name and (optionally) a git repository to initialize the workspace files from. For the purposes of this demo, let's reuse the neural-style-tf repository.
Next you set your environment variables, machine type, framework, and any additional
conda dependencies; and toggle Jupyter Lab or Notebook. After that you can optionally mount any resources you need.
Once you've confirmed your settings, the workspace will be created, the page will refresh, and you can get coding.
You may have already noticed that most of the settings for configuring workspaces are the same as those for configuring runs. This is because workspaces are still runs under the hood. They're just runs that start up a Jupyter instance that you can connect to, and which don't terminate until you tell them to (or until they time out; by default, workspaces shut down after 30 minutes of inactivity). Also, unlike runs, workspaces are mutable: they can be shared, cloned, modified, and deleted as needed.
You can restart a workspace at any time to pick up right where you left off.
That concludes the Quickstart!
In this brief overview we've covered two of the most important features in Spell, runs and workspaces. For a brief tour of the rest of Spell's core features, see Core Concepts. For more ideas on projects, and to see the Spell CLI in action, check out our Guides.