We’re excited to open applications for the Spell Open Research Grant, which supports ambitious, large-scale research projects in machine learning. Each winning project will receive $40,000 in cloud credits and free access to the Spell platform (which is how recipients will access the credits). We will also award mini-grants of up to $5,000 in cloud credits for smaller-scale research and ML-based art projects.
Apply here. The deadline to apply is June 1st. Applications will be evaluated on a rolling basis.
We’d like to fund projects that ...
- Are pursuing a well-thought-out research objective with a specific need for large-scale compute resources (if applying for the full grant)
- Are backed by an individual or team with ample prior experience as shown by previous projects, publications, or work experience
- Will be made available to the broader research community, whether by open-sourcing the code and weights, writing up the methodology and results, or both
Requests for Research
While many research areas do not require large amounts of compute, large models have proven increasingly important in domains such as language modeling, computer vision, and modeling protein structure. The aim of the Spell Open Research Grant is to enable more researchers to do this type of research at scale — bridging the gap between the research that can be done independently and the research possible at highly funded labs.
While we are not tied to these research areas, here are some that we think are exciting and hint at the type of projects we’d like to fund:
Open-source replication of AlphaFold or other models that learn protein structure: Though the details of AlphaFold have not been fully released, multiple attempts to reproduce it are underway. We’d love to fund one of these efforts and help scale up the training component.
Scaling up speech models: The wav2vec paper demonstrates a way to learn speech recognition by leveraging large quantities of unlabeled raw audio data, and Spotify recently announced an audio transcription dataset of 50,000 hours of transcribed audio.
Self-supervised learning on video: In the same way that language models learn by predicting the next word in a sentence, video is a rich source of self-supervision: predicting future frames, as well as learning from the paired audio.
Scaling up image generation models: For example, DALL-E, Taming Transformers for High-Resolution Image Synthesis.
"Unconventional" ML research: We'd love to help projects in under-researched domains, such as using a new type of data or framing a problem in a way others have not done before.
Who can apply?
Anyone who is doing interesting work in AI and has a clear plan for use of the grant funds.
How much do the credits awarded equate to in terms of GPU time?
The full grant ($40K in credits) is equivalent to roughly 100-200 GPUs running for a few weeks, assuming you use spot instances. The maximum mini-grant award ($5K in credits) is equivalent to about 8 GPUs for 3-4 weeks.
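As a back-of-the-envelope illustration of how that conversion works — note the $0.90-per-GPU-hour spot rate below is an assumption for illustration only; actual rates vary widely by instance type, cloud, and region:

```python
def gpus_supported(budget_usd, spot_price_per_gpu_hour, weeks, hours_per_week=168):
    """Rough number of GPUs a credit budget can keep running continuously.

    Converts the budget into total GPU-hours at the given spot rate,
    then spreads those hours over the run duration.
    """
    gpu_hours = budget_usd / spot_price_per_gpu_hour
    return gpu_hours / (weeks * hours_per_week)

# Full grant: $40K at a hypothetical $0.90/GPU-hour spot rate, over 2 weeks
print(round(gpus_supported(40_000, 0.90, weeks=2)))  # ~132 GPUs

# Mini-grant: $5K at the same assumed rate, over 4 weeks
print(round(gpus_supported(5_000, 0.90, weeks=4)))   # ~8 GPUs
```

Cheaper GPU types or shorter runs push the full-grant count toward the upper end of the 100-200 range.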
What types of instances can I use?
Spell supports many different types of instances including multi-GPU and high memory instances. See the list of available instance types here.
Does Spell support distributed training?
Spell is built to work with any distributed framework that uses MPI to communicate between nodes. We currently support Horovod for TensorFlow/PyTorch as well as torch.distributed (see some examples). We plan to add support for other distributed frameworks, such as DeepSpeed, in the future.
Can I use these credits outside of Spell?
No — the credits are granted as access to the Spell platform, so they can only be used through Spell.
How long will the credits be available?
They need to be used by the end of the 2021 calendar year.
I already have cloud credits from another provider, can I use these in conjunction with the grant?
Yes — Spell is cloud-agnostic, and we’ll help you set up Spell so you can use all of your credits on the same platform.
Will there be any support outside of the credits?
We’ll help you get up to speed on Spell and assist with any scaling issues that arise there.
Questions/thoughts? Send us an email at email@example.com