Projects enable you to organize related runs by grouping them together. For example, if you're generating a personalized product recommendation model, you can create a project called "Purchase Predictions" to group relevant runs. Then, all future runs you or your collaborators create to solve this problem can also be added to this project for better organization and collaboration.
Creating a project
To create a project on the web, select the Projects page from the left-hand navigation and click the "Create Project" button in the top right corner. You can also create a project from the CLI with the `spell project create` command.
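For example, creating a project from the CLI might look like the following sketch. The project name is illustrative, and the assumption that it is passed as a positional argument should be checked against `spell project create --help`:

```shell
# Create a new project named "Purchase Predictions" (name is an example)
spell project create "Purchase Predictions"
```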
You can view all current projects on the Projects page or by running `spell project list` from the CLI. If you'd like to archive a project after it's complete, you can do so by clicking the edit button on a project's tile on the web, or with `spell project archive`. To view archived projects, toggle "Show Archived" from the filters button near the top right of the Projects page, or run `spell project list --archived`.
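Putting those commands together, a typical session might look like this sketch (the project name is illustrative, and the exact argument form for `archive` may differ from what's shown):

```shell
# List active projects
spell project list

# Archive a project once it's complete
spell project archive "Purchase Predictions"

# List archived projects
spell project list --archived
```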
Adding runs to a project
By default, all runs are uncategorized and can be viewed by clicking the "Uncategorized Runs" link in the sidebar. From here, you can select any number of runs and add them to a new or existing project via the 'Project' button above the table. From the command line, you can do the same with `spell ps --uncategorized` and `spell project add-runs`.
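As a sketch, categorizing existing runs from the CLI might look like the following. The project name and run IDs are illustrative, and the argument layout for `add-runs` is an assumption; check `spell project add-runs --help` for the exact usage:

```shell
# Show runs that haven't been added to any project
spell ps --uncategorized

# Add specific runs (by ID) to a project
spell project add-runs "Purchase Predictions" 42 43 44
```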
When you create a run, you can pass the new `-p`/`--project` flag to `spell run`, supplying the name of the project you would like the run to be grouped in. This way, you can ensure the run is placed in the right project from the get-go, rather than adding it to a project retroactively as an uncategorized run.
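For instance, a run could be assigned to its project at creation time like this. The project name and training command are illustrative:

```shell
# Create a run and place it directly into a project
spell run -p "Purchase Predictions" python train.py
```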
Removing runs from a project resets them to uncategorized. This can be done on the web by selecting a number of runs on the project detail page and clicking 'Actions', or on the CLI with `spell ps --project` and `spell project remove-runs`.
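A CLI sketch of that workflow might look like the following; the project name and run IDs are illustrative, and the argument layout for `remove-runs` is an assumption (see `spell project remove-runs --help` for the exact usage):

```shell
# List the runs currently in a project
spell ps --project "Purchase Predictions"

# Reset specific runs (by ID) back to uncategorized
spell project remove-runs "Purchase Predictions" 42 43
```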
The project details page
The Project Details page is meant to be a dashboard for you, your collaborators, and any other stakeholders to easily view and track the progress you've made toward your goals. To this end, we've added specific features to help. Note that everyone in your org sees the same page, so any changes you make are persisted for others as well.
When you're working on a project, there is often a specific metric, or a few select metrics, you're optimizing for. Perhaps it's validation accuracy or the MSE on a particular dataset. Whichever metrics you need, you can easily track them by adding them in the Key Metrics section. After you select a metric name and aggregation type, a scatterplot will appear showing every run within the project that logged a value for that metric. The x-axis denotes when the run was executed, and the y-axis is the metric value. Use it to see how your runs have performed against your goals over time. Is your model's accuracy improving?
We've also added some functionality to the runs table to help you better understand your progress. Now, you can customize columns in the runs table by selecting 'Add Column' in the top right above the table. Easily reorder a column by selecting a table header and dragging it to the desired spot, and remove columns by clicking the "x" that appears next to a header on hover. Keep in mind, any changes you make here will be seen by everyone in the org.
These columns can include details about the run (e.g., the machine type it ran on), as well as metrics and parameters.
We've also expanded the filter functionality so you can filter on specific metric and parameter values. Say you want to locate all the runs where accuracy surpassed a certain value; you can now do so conveniently using the filters.
Lastly, we've added some additional filters to help you sift through runs more effectively, including whether the run created any output files. To see all available filters, toggle the filter section above the runs table.
The parameters you can show in columns on the runs table and filter on are defined when you create a run. You can specify `--param` for both `spell run` and all `spell hyper` commands. For a hypersearch, this flag specifies a parameter search space, and we generate the specific parameter values for each run. For a run, you specify the parameter name as well as the value it takes in that run. This bookkeeping, combined with metrics, allows you to quickly and easily see how different parameter combinations performed.
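To illustrate the difference, here is a sketch of both cases. The `grid` subcommand, the comma-separated value-list syntax, and the training command are assumptions; consult `spell hyper --help` and `spell run --help` for the exact forms:

```shell
# Single run: --param records one concrete value for this run
spell run --param lr=0.001 python train.py

# Hypersearch: --param defines a search space, and a run is
# generated for each value in it
spell hyper grid --param lr=0.001,0.01,0.1 python train.py
```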
A set of runs within a Project can be explored in more detail by creating an experiment. To learn more, refer to the Experiment Overview page in the docs.