We’re excited to announce the recipients of the Spell Open Research Grant!
In May, we introduced our grant program as part of an effort to support ambitious, large-scale projects in research, consumer, and creative applications of machine learning. Through the program, Spell has awarded one full grant and four mini-grants (totaling 60k in compute credits) to the following recipients:
Full Grant Recipient
Aran Komatsuzaki and team at Eleuther.ai
Project: Improved Image Generation with Caption-Generating GPT
Using the GPT-Neo language model developed and open-sourced by Eleuther.ai, this project aims to apply the model within the pipeline of novel image-captioning models and compare its performance against established generative approaches such as DDPM.
Mini-Grant Recipients

Project: Galaxies ML
The use of modern ML tools and techniques has only recently emerged in astrophysics research. Boscoe’s project utilizes large existing catalogs of astronomy images to test whether generative deep learning models (e.g. GANs) can improve the quality of existing images or generate new, realistic galaxy imagery.
Project: Optimization Techniques in Quantum Machine Learning
Optimization techniques for quantum machine learning (QML) remain understudied today, despite QML being heralded as a “killer application” for quantum hardware. Lockwood’s project provides a systematic overview and empirical analysis of optimization techniques for QML, evaluating a number of classical and quantum optimization techniques on tasks in both noisy and noiseless simulations.
Project: Unpaired Image-to-Image Translation for Microscopy Modality Conversion in Pathology
In traditional histology methods used for clinical pathology diagnostics, preparing slides for microscope viewing takes several hours and the input of highly skilled technicians. Abraham’s project aims to use unpaired image-to-image translation to augment existing microscopy imaging methods, improving interpretability and reducing overhead.
Project: Feature Sonification
Recently popularized feature visualization techniques enable people to visualize what neural networks “see” when trained on image classification tasks. Held’s project aims to create an audio analogue of these visual techniques, testing whether they can be translated to the audio domain so that people can hear what a neural network “hears.” The resulting “sonifications” will be studied to gauge their alignment with established concepts from music theory and sound studies.
Each of these recipients will be building their projects on Spell through 2021 — stay tuned for publications, conference presentations, and more around these research projects in 2022!