PUBLICATIONS

2019   Jon Gillick and David Bamman, "Breaking Speech Recognizers to Imagine Lyrics." NeurIPS Workshop on Machine Learning for Creativity and Design 2019 [pdf]

2019   Jon Gillick, Carmine-Emanuele Cella, and David Bamman, "Estimating Unobserved Audio Features for Target-Based Orchestration." ISMIR 2019 [pdf]

2019   Jon Gillick, Adam Roberts, Jesse Engel, Douglas Eck, and David Bamman, "Learning to Groove with Inverse Sequence Transformations." ICML 2019 [pdf]

2019   Adam Roberts, Jesse Engel, Yotam Mann, Jon Gillick, Claire Kayacik, Signe Nørly, Monica Dinculescu, Carey Radebaugh, Curtis Hawthorne, and Douglas Eck, "Magenta Studio: Augmenting Creativity with Deep Learning in Ableton Live." MuMe 2019 [pdf]

2018   Jon Gillick and David Bamman, "Please Clap: Modeling Applause in Campaign Speeches." NAACL 2018 [pdf]

2018   Jon Gillick and David Bamman, "Telling Stories with Soundtracks: An Empirical Analysis of Music in Film." NAACL Storytelling Workshop [pdf]

2018   Kimiko Ryokai, Elena Duran, Noura Howell, Jon Gillick, and David Bamman, "Capturing, Representing, and Interacting with Laughter." CHI 2018 [pdf]

2017   David Bamman, Michelle Carney, Jon Gillick, Cody Hennesy, and Vijitha Sridhar, "Estimating the Date of First Publication in a Large-Scale Digital Library." JCDL 2017 [pdf]

2010   Jon Gillick, Kevin Tang, and Robert M. Keller, "Machine Learning of Jazz Grammars." Computer Music Journal 34(3): 56-66 [pdf]

2009   Jon Gillick, Kevin Tang, and Robert M. Keller, "Learning Jazz Grammars." SMC 2009 [pdf]

2009   Jon Gillick, "A Clustering Algorithm for Recombinant Jazz Improvisations." Wesleyan University Honors Thesis [pdf]

Selected Projects

Assisted Orchestration

Assisted Orchestration is about giving inspiration to composers. The idea is to recreate the sound of any audio recording by combining instruments from a pre-recorded library of samples. This can be a starting point for a formal orchestral score, or it could suggest combinations of drum samples that produce the sound you want in a track. Check out some examples from Carmine Cella in the Orchidea software.
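
For intuition, here is a minimal sketch of the matching idea as greedy spectral matching in Python. The feature function and the sample library (a dict mapping names to mono numpy arrays) are illustrative stand-ins, not the Orchidea algorithm, which uses richer features and search:

```python
import numpy as np

# Hypothetical sketch of target-based orchestration as greedy feature matching.
# spectrum_of() and the library dict are stand-ins, not the Orchidea API.

def spectrum_of(audio, n_fft=2048):
    """Average magnitude spectrum of a mono signal (one crude timbre feature)."""
    frames = np.lib.stride_tricks.sliding_window_view(audio, n_fft)[::n_fft // 2]
    return np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1)).mean(axis=0)

def orchestrate(target, library, max_instruments=4):
    """Greedily pick samples whose summed spectra best approximate the target."""
    target_spec = spectrum_of(target)
    specs = {name: spectrum_of(s) for name, s in library.items()}  # precompute
    chosen, mix_spec = [], np.zeros_like(target_spec)
    for _ in range(max_instruments):
        # Score each candidate by the spectral distance of the resulting mixture.
        errors = {name: np.linalg.norm(target_spec - (mix_spec + spec))
                  for name, spec in specs.items()}
        best = min(errors, key=errors.get)
        if errors[best] >= np.linalg.norm(target_spec - mix_spec):
            break  # no candidate improves the match; stop early
        chosen.append(best)
        mix_spec += specs[best]
    return chosen
```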

We could also "orchestrate" lyrics or poetry by finding words which, when spoken, match the timbre and movement of a given sound.  Here are some examples: http://bit.ly/imgnlyric.
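
One hedged way to try this at home is to point an off-the-shelf speech recognizer at non-speech audio and keep whatever it transcribes. The sketch below uses torchaudio's pretrained wav2vec2 pipeline with a placeholder file name; the exact models and decoding in the paper differ:

```python
import torch
import torchaudio

# Sketch: run a stock speech recognizer on non-speech audio and keep whatever
# words it "hears". "drum_loop.wav" is a placeholder file name, and this uses
# torchaudio's wav2vec2 pipeline rather than the paper's exact setup.

bundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H
model = bundle.get_model().eval()
labels = bundle.get_labels()  # CTC alphabet; index 0 is the blank token

waveform, sr = torchaudio.load("drum_loop.wav")
waveform = torchaudio.functional.resample(
    waveform.mean(0, keepdim=True), sr, bundle.sample_rate)

with torch.inference_mode():
    emissions, _ = model(waveform)  # (1, frames, vocab) frame-level scores

# Greedy CTC decode: collapse repeats, drop blanks; '|' marks word boundaries.
ids = torch.unique_consecutive(emissions[0].argmax(-1)).tolist()
text = "".join(labels[i] for i in ids if i != 0).replace("|", " ")
print(text)  # the "imagined lyric" for this sound
```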

Papers:

2019   Jon Gillick, Carmine-Emanuele Cella, and David Bamman, "Estimating Unobserved Audio Features for Target-Based Orchestration." ISMIR 2019 [pdf]

2019   Jon Gillick and David Bamman, "Breaking Speech Recognizers to Imagine Lyrics." NeurIPS Workshop on Machine Learning for Creativity and Design 2019 [pdf]

Making Beats with Machine Learning

GrooVAE is a class of machine learning models for generating and controlling expressive drum performances. We can use GrooVAE to add character to stiff electronic drum beats, to come up with drums that match your sense of groove on another instrument (or just tapping on a table), and to explore different ways of playing the same drum pattern. A big component of this project was recording a new dataset, free to download for research, of professional drummers playing on MIDI drum kits.
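
The core training trick in the paper is learning to invert a lossy transformation: squash a human performance down to a flat, quantized pattern, then train a sequence model to recover the original. Here is a minimal sketch of the data side only; the (onset_time, velocity) array layout and grid size are assumptions for illustration, not the paper's exact drum representation:

```python
import numpy as np

# Assumption: a performance is an array of (onset_time, velocity) rows.
# The seq2seq VAE itself is omitted; this shows only how training pairs
# are built by removing, then learning to restore, the groove.

def squash(performance, grid=0.25):
    """Remove the groove: snap onsets to a metric grid and flatten dynamics."""
    quantized = performance.copy()
    quantized[:, 0] = np.round(performance[:, 0] / grid) * grid  # kill microtiming
    quantized[:, 1] = 0.8                                        # kill dynamics
    return quantized

def make_training_pair(performance):
    # Input is the squashed pattern; target is the original human performance,
    # so the model learns to put the groove *back in* at generation time.
    return squash(performance), performance

# e.g. one bar of hits played slightly off the beat with varied loudness
human = np.array([[0.02, 0.9], [0.51, 0.4], [1.03, 1.0], [1.49, 0.5]])
x, y = make_training_pair(human)
```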

This project began during my internship with Adam Roberts and Jesse Engel on Google's Magenta team - check out this blog post for demos and examples, and the free Ableton Live plugins in Magenta Studio.

Papers:

2019   Jon Gillick, Adam Roberts, Jesse Engel, Douglas Eck, and David Bamman, "Learning to Groove with Inverse Sequence Transformations." ICML 2019 [pdf]

2019   Adam Roberts, Jesse Engel, Yotam Mann, Jon Gillick, Claire Kayacik, Signe Nørly, Monica Dinculescu, Carey Radebaugh, Curtis Hawthorne, and Douglas Eck, "Magenta Studio: Augmenting Creativity with Deep Learning in Ableton Live." MuMe 2019 [pdf]

Analyzing the Role of Music in Film

I spent some time working in a couple of studios that produce music and sound for TV, ads, movies, bands, etc. Lots of media content needs music, across a wide spectrum of contexts ranging from the deeply artistic to the purely functional. We can look at large databases (e.g., from IMDB) to see how music has been chosen in the past and try to answer interesting questions: What kinds of music have been used in what kinds of films? Are there connections between music and audience perception? What songs in a database might fit a new script that hasn't been made yet?
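
As a sketch of what these corpus queries can look like in practice, here is some hypothetical pandas code. The file and column names are made up rather than a real IMDB export, and the correlation in the last line is descriptive, not causal:

```python
import pandas as pd

# Hypothetical data: "soundtracks.csv" has film_id/song/artist columns,
# "films.csv" has film_id/genre/rating columns. Not a real IMDB export.

songs = pd.read_csv("soundtracks.csv")  # one row per (film, song) credit
films = pd.read_csv("films.csv")        # one row per film, with genre + rating

credits = songs.merge(films, on="film_id")

# What kinds of music show up in what kinds of films?
by_genre = credits.groupby(["genre", "artist"]).size().sort_values(ascending=False)

# Is soundtrack size associated with audience perception?
per_film = credits.groupby("film_id").agg(n_songs=("song", "size"),
                                          rating=("rating", "first"))
print(per_film["n_songs"].corr(per_film["rating"]))
```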

Papers:

2018   Jon Gillick and David Bamman, "Telling Stories with Soundtracks: An Empirical Analysis of Music in Film." NAACL Storytelling Workshop [pdf]

Learning to Improvise Jazz

I first got into research as an undergrad back in 2008, teaching computers to improvise jazz using machine learning. Machine learning has gotten far more popular in the last few years, but it already worked pretty well back then (MIDI saxophone is still bad, though). Can you tell which of the four solos below are by Charlie Parker, and which are by a computer program? If you can't figure it out, hover over the magnifying glass to see the answers.

That work is built into Bob Keller's Impro-Visor software for jazz education. I learned a lot about music through technology, so I'm interested in whether and how we can make technology that inspires people (especially those who struggle with the technical and physical aspects of learning a traditional instrument) to develop their musical creativity and ability.
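
To give a flavor of the recombination idea, here is a toy sketch: abstract solos into sequences of lick categories, learn which categories tend to follow which, and sample new phrases. The categories and training data are invented for illustration; the real systems learn probabilistic grammars over melodic abstractions inside Impro-Visor:

```python
import random
from collections import defaultdict

# Toy recombination sketch. The lick categories and "solos" below are made up;
# they stand in for abstractions learned from transcribed solos.

solos = [["approach", "arpeggio", "scale_run", "resolution"],
         ["arpeggio", "scale_run", "approach", "resolution"]]

transitions = defaultdict(list)
for solo in solos:
    for a, b in zip(solo, solo[1:]):
        transitions[a].append(b)  # repeated entries encode transition counts

def improvise(start="approach", length=8):
    """Sample a new phrase by walking the learned transitions."""
    phrase = [start]
    while len(phrase) < length and transitions[phrase[-1]]:
        phrase.append(random.choice(transitions[phrase[-1]]))
    return phrase

print(improvise())
```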

Papers:

2010   Jon Gillick, Kevin Tang, and Robert M. Keller, "Machine Learning of Jazz Grammars." Computer Music Journal 34(3): 56-66 [pdf]

2009   Jon Gillick, Kevin Tang, and Robert M. Keller, "Learning Jazz Grammars." SMC 2009 [pdf]

2009   Jon Gillick, "A Clustering Algorithm for Recombinant Jazz Improvisations." Wesleyan University Honors Thesis [pdf]

[Audio players: Solo 1, Solo 2, Solo 3, Solo 4]