PUBLICATIONS

2022   Jon Gillick. "Creating and Collecting Meaningful Musical Materials with Machine Learning". Ph.D. dissertation. Advisor: David Bamman. University of California, Berkeley [pdf].

2022   Elliot Waissbluth, Jon Gillick, and Carmine Cella. "Synthesis by Layering: Learning a Variational Space of Drum Sounds." AI Music Creativity Conference [pdf]

2022   Jon Gillick, Noura Howell, Wesley Deng, Julia Park, Yangyang Yang, Carmine-Emanuele Cella, David Bamman, and Kimiko Ryokai. "The Stories Behind the Sounds: Finding Meaning in Creative Musical Interactions with AI." (*Under Review)

2021   Jon Gillick, Joshua Yang, Carmine-Emanuele Cella, and David Bamman, "Drumroll Please: Modeling Multi-Scale Rhythmic Gestures with Flexible Grids." TISMIR, Special Edition on AI and Musical Creativity [html]

2021   Jon Gillick and David Bamman, "What to Play and How to Play it: Guiding Generative Music Models with Multiple Demonstrations." NIME [html]

2021   Jon Gillick, Wesley Deng, Kimiko Ryokai, and David Bamman, "Robust Laughter Detection in Noisy Environments." INTERSPEECH [pdf]

2020   Matthew Joerke, Jon Gillick, Matthew Sims, and David Bamman, "Attending to Long-Distance Document Content for Sequence Labeling." Findings of EMNLP [pdf]

2019   Jon Gillick and David Bamman, "Breaking Speech Recognizers to Imagine Lyrics." NeurIPS Workshop on Machine Learning for Creativity and Design [pdf]

2019   Jon Gillick, Carmine-Emanuele Cella, and David Bamman, "Estimating Unobserved Audio Features for Target-Based Orchestration." ISMIR [pdf]

2019   Jon Gillick, Adam Roberts, Jesse Engel, Douglas Eck, and David Bamman, "Learning to Groove with Inverse Sequence Transformations." ICML [pdf]

2019   Adam Roberts, Jesse Engel, Yotam Mann, Jon Gillick, Claire Kayacik, Signe Nørly, Monica Dinculescu, Carey Radebaugh, Curtis Hawthorne, and Douglas Eck, "Magenta Studio: Augmenting Creativity with Deep Learning in Ableton Live." MuMe [pdf]

2018   Jon Gillick and David Bamman, "Please Clap: Modeling Applause in Campaign Speeches." NAACL [pdf]

2018   Jon Gillick and David Bamman, "Telling Stories with Soundtracks: An Empirical Analysis of Music in Film." NAACL Storytelling Workshop [pdf]

2018   Kimiko Ryokai, Elena Durán López, Noura Howell, Jon Gillick, and David Bamman, "Capturing, Representing, and Interacting with Laughter." CHI [pdf]

2017   David Bamman, Michelle Carney, Jon Gillick, Cody Hennesy, and Vijitha Sridhar, "Estimating the Date of First Publication in a Large-Scale Digital Library." JCDL [pdf]

2010   Jon Gillick, Kevin Tang, and Robert M. Keller. "Machine Learning of Jazz Grammars." Computer Music Journal [pdf]

2010   Jon Gillick, Kevin Tang, and Robert M. Keller, "Learning Jazz Grammars." SMC [pdf]

2009   Jon Gillick. "A Clustering Algorithm for Recombinant Jazz Improvisations." Wesleyan Honors Thesis [pdf].

Selected Projects

Where's the Meaning in AI Music?

My PhD research path has gone sort of like this: Algorithms for Music → Instruments that use Algorithms → People who use Instruments that use Algorithms. I imagine this path will turn out to be a bit circular, but I can't seem to wrap my head around any of these three things individually - they only start to make sense to me when I have some serious perspective on all of them.

This project aims to shed light on the opportunities and the risks of AI music technology by taking a deep dive into the experiences of 10 people as they interact with AI music algorithms, either as listeners or as composers. What is it actually like for people to listen to AI music when they have a reason to be genuinely emotionally invested in listening? What is it like to hear music that is individually customized and composed using samples from recordings that are personally meaningful to you? And what is it like to work with machine learning algorithms that reflect, manipulate, or reimagine this material during the creative process? Rather than focusing on the outputs of machine learning algorithms, we look at the inputs: the personally meaningful - and emotionally sensitive - material that people put into these algorithms, and how that material shapes their experiences.

In the middle of this project, with what I was learning on my mind, I decided to put myself to the test by actually taking the leap and incorporating bits and pieces of AI into my own music - music that I really care about - something I'd somehow never convinced myself was worth trying before. The result was the winning song entered into the AI Song Contest, and a new artist project that I'm very interested in continuing to experiment with (let me know if you're an artist looking to collaborate!). More music is in the works on this front.

Papers:

2022   Jon Gillick, Noura Howell, Wesley Deng, Julia Park, Yangyang Yang, Carmine-Emanuele Cella, David Bamman, and Kimiko Ryokai. "The Stories Behind the Sounds: Finding Meaning in Creative Musical Interactions with AI." (*Under Review)

Guiding Music Generation Models with Demonstrations 

This project explores ways to design generative models (machine learning models that synthesize content) that give users (in this case, people making music) a mechanism to ask for things with specific characteristics. In machine learning circles, people usually talk about this in terms of controllability and interpretability: we'd like to be able to tell our AIs what to do, and we'd like to be able to understand why they do certain (often strange) things.

These are hard terms to define. If you care about a real-world application (like making a drum loop that you actually want to use in a piece of music), I find it helpful to think of these terms in the fuzzy human sense rather than the strict mathematical sense. For me, "guiding with demonstrations" is closer to what I'm going for than "controlling with controls," because with most machine learning models, you're never guaranteed that the model will follow what you ask for. But that might be OK - if you can get it to do something close enough and you learn how to work with it, there are tons of creative possibilities.

I'm finding more and more that learning what a model or a collection of models can do with different kinds of guidance is a lot like learning an instrument. There's a learning curve, but you can also find great sounds in the process. Some questions that interest me now: How do we design good instruments using machine learning? What are useful ways of conceptualizing what the instrument even is? Is a machine learning instrument defined by the dataset used to train it? By the type of model? By the inputs it accepts? All of the above? The way I've personally been thinking about it recently is as a collection of all the different models I know how to use.
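
To make "guiding with demonstrations" a little more concrete, here's a toy sketch of the general idea (not the actual system from the NIME paper): encode a few demonstrations with a learned model, average their latent codes, and sample new material from that neighborhood rather than demanding an exact match. The encode/decode functions below are random stand-ins for a trained VAE.

```python
# A minimal sketch of guiding a generative model with demonstrations.
# The "model" here is a pair of random linear maps standing in for a trained VAE.

import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 8
FEATURE_DIM = 32

# Toy "trained" weights; a real system would load a learned model instead.
W_enc = rng.normal(size=(FEATURE_DIM, LATENT_DIM))
W_dec = rng.normal(size=(LATENT_DIM, FEATURE_DIM))

def encode(performance):
    """Map one demonstration's feature vector to a latent code."""
    return performance @ W_enc

def decode(z):
    """Map a latent code back to the feature space (e.g., a drum pattern)."""
    return z @ W_dec

# Three demonstrations the user plays in (random stand-ins here).
demos = [rng.normal(size=FEATURE_DIM) for _ in range(3)]

# Guide generation toward the demonstrations: average their latents,
# then sample nearby instead of asking for an exact reproduction.
z_guide = np.mean([encode(d) for d in demos], axis=0)
candidates = [decode(z_guide + 0.1 * rng.normal(size=LATENT_DIM)) for _ in range(4)]

print(np.shape(candidates))  # 4 candidate outputs to audition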

Papers:

2021   Jon Gillick and David Bamman, "What to Play and How to Play it: Guiding Generative Music Models with Multiple Demonstrations." NIME [html][video]

Assisted Orchestration

The idea in Assisted Orchestration is to try to recreate the sound of any audio recording by combining instruments from a pre-recorded library of samples. This can be a starting point for a formal orchestral score, or we could imagine something more production-oriented, like suggesting combinations of drum samples from your own collection that best match some reference when you layer them together. Check out some examples from my collaborator at CNMAT, Carmine Cella, in the Orchidea software.

I usually think of musical "applications" of machine learning in a very fluid sense, so it's interesting to do the opposite here and accept a really formal definition of orchestration as a problem that can be "solved" as long as we fix a few constraints (like deciding which instruments and samples we're working with). Of course, we can also forget about tradition, flip the material we're working with, and "orchestrate" with something totally different - like words, by constructing lyrical lines which, when spoken, match the timbre and movement of a given sound.
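
For a feel for the search underneath assisted orchestration, here's a toy greedy version (much simpler than Orchidea), under the rough assumption that layered samples add in the magnitude-spectrum domain. The target spectrum and the sample library are random stand-ins for real recordings.

```python
# A rough sketch of a greedy search for a combination of samples whose
# summed spectra approximate a target spectrum. Real systems use richer
# features and constraints; this only illustrates the basic loop.

import numpy as np

rng = np.random.default_rng(1)
N_BINS = 64

# Stand-ins: one target spectrum and a library of 50 instrument samples.
target = np.abs(rng.normal(size=N_BINS))
library = np.abs(rng.normal(size=(50, N_BINS)))

def greedy_orchestrate(target, library, max_layers=4):
    """Greedily add the sample that most reduces the distance to the target."""
    chosen, current = [], np.zeros_like(target)
    for _ in range(max_layers):
        errors = np.linalg.norm((current + library) - target, axis=1)
        best = int(np.argmin(errors))
        # Stop if adding another layer no longer helps.
        if errors[best] >= np.linalg.norm(current - target):
            break
        chosen.append(best)
        current = current + library[best]
    return chosen, current

indices, approximation = greedy_orchestrate(target, library)
print("chosen sample indices:", indices)
```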

Papers:

2019   Jon Gillick, Carmine-Emanuele Cella, and David Bamman, "Estimating Unobserved Audio Features for Target-Based Orchestration." ISMIR [pdf]

2019   Jon Gillick and David Bamman, "Breaking Speech Recognizers to Imagine Lyrics." NeurIPS Workshop on Machine Learning for Creativity and Design [pdf]

Making "Expressive" Beats by Modeling Professional Drummers

GrooVAE is a collection of machine learning models for generating and manipulating expressive drum performances. We can use GrooVAE to add character to ("humanize") robotic electronic drum beats, to come up with drums that match the sense of groove you play on another instrument (or just by tapping on a table), and to explore different ways of playing the same drum pattern. A big component of this project was recording a new dataset (free for download and research) of professional drummers playing on MIDI drum kits.
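
If you're curious what the data looks like, here's a small sketch of the kind of grid representation used for expressive drumming in this project: for each sixteenth-note step and each drum category, store whether a hit occurs, how loud it is, and how far it lands ahead of or behind the grid. The "humanize" function below just adds plausible random variation as a placeholder for the learned model.

```python
# Grid representation for drum performances: hits, velocities, and
# microtiming offsets over 32 steps (2 bars of 16th notes) x 9 drums.

import numpy as np

rng = np.random.default_rng(2)
STEPS, DRUMS = 32, 9

# A quantized, robotic input pattern: hits only, no dynamics or microtiming.
hits = np.zeros((STEPS, DRUMS), dtype=np.int8)
hits[::4, 0] = 1   # kick on every beat
hits[4::8, 1] = 1  # snare on beats 2 and 4
hits[::2, 2] = 1   # closed hi-hat on 8th notes

def humanize(hits):
    """Attach velocities (0-1) and timing offsets (fractions of a step).
    A trained model would predict these from the hit pattern; here we
    just add random variation as a stand-in."""
    velocities = hits * np.clip(rng.normal(0.8, 0.15, hits.shape), 0, 1)
    offsets = hits * np.clip(rng.normal(0.0, 0.08, hits.shape), -0.5, 0.5)
    return velocities, offsets

velocities, offsets = humanize(hits)
print(velocities.shape, offsets.shape)  # (32, 9) each
```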

This project began while I was working with Adam Roberts and Jesse Engel on Google's Magenta team - check out this blog post for demos and examples, and the free Ableton Live plugins in Magenta Studio.

Papers:

2019   Jon Gillick, Adam Roberts, Jesse Engel, Douglas Eck, and David Bamman, "Learning to Groove with Inverse Sequence Transformations." ICML [pdf]

2019   Adam Roberts, Jesse Engel, Yotam Mann, Jon Gillick, Claire Kayacik, Signe Nørly, Monica Dinculescu, Carey Radebaugh, Curtis Hawthorne, and Douglas Eck, "Magenta Studio: Augmenting Creativity with Deep Learning in Ableton Live." MuMe [pdf]

Analyzing the Role of Music in Film

I've spent some time working in a couple of studios that produce music and sound for TV, ads, movies, bands, etc. Lots of media content needs music, and there's a wide spectrum of contexts ranging from deeply artistic to purely functional. We can look at large databases (e.g. from IMDb) to see how music has been chosen in the past and try to answer lots of interesting questions: What kinds of music have been used in what kinds of films? Are there connections between music and audience perception? What songs in a database might fit a new script that hasn't been made into a film yet?
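
As a tiny illustration of the kind of analysis this enables, here's a sketch that joins a soundtrack table to film metadata and counts which song genres show up in which film genres. The tables and column names are invented for the example; a real analysis would pull from a much larger database.

```python
# Toy example: count co-occurrences of song genres and film genres.

import pandas as pd

soundtracks = pd.DataFrame({
    "film": ["Film A", "Film A", "Film B", "Film C"],
    "song_genre": ["jazz", "rock", "jazz", "orchestral"],
})
films = pd.DataFrame({
    "film": ["Film A", "Film B", "Film C"],
    "film_genre": ["drama", "comedy", "drama"],
})

# Join song usage to film metadata, then count co-occurrences.
merged = soundtracks.merge(films, on="film")
counts = merged.groupby(["film_genre", "song_genre"]).size().unstack(fill_value=0)
print(counts)
```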

Papers:

2018   Jon Gillick and David Bamman, "Telling Stories with Soundtracks: An Empirical Analysis of Music in Film." NAACL Storytelling Workshop [pdf]

Learning to Improvise Jazz

I first got into research way back in 2008, as a college student at Wesleyan University, by trying to teach computers to improvise jazz using machine learning. Machine learning is way more popular now, but it already worked pretty well back then. MIDI saxophone is still bad. Can you tell which of the four solos below are by Charlie Parker, and which are by a computer program? The answers are listed at the bottom of this section.

 

That work is built into Bob Keller's Impro-Visor software for jazz education. I learned a lot about music through technology, so I'm interested to see if and how we can make technology that inspires people who might otherwise feel excluded - especially those who struggle with the technical or physical aspects of learning a traditional instrument - to develop their musical creativity and ability.
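
Here's a toy sketch in the spirit of the original clustering approach: describe short melodic fragments with a few simple features, cluster them, and recombine fragments drawn from the clusters into a new solo. The fragments below are random stand-ins for slices of transcribed solos, and the details differ from what's actually in Impro-Visor.

```python
# Cluster melodic fragments by simple features, then recombine them.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# 200 one-bar fragments, each described by (mean pitch, pitch range, note count).
fragments = np.column_stack([
    rng.normal(65, 5, 200),    # mean MIDI pitch
    rng.normal(12, 3, 200),    # pitch range in semitones
    rng.integers(3, 12, 200),  # notes per bar
])

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(fragments)

def recombine(n_bars=8):
    """Pick a random cluster per bar, then a random fragment from that cluster."""
    solo = []
    for _ in range(n_bars):
        cluster = rng.integers(kmeans.n_clusters)
        members = np.flatnonzero(kmeans.labels_ == cluster)
        solo.append(int(rng.choice(members)))
    return solo  # indices of fragments to stitch together

print(recombine())
```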

Papers:

2010   Jon Gillick, Kevin Tang, and Robert M. Keller. "Machine Learning of Jazz Grammars." Computer Music Journal [pdf]

2010   Jon Gillick, Kevin Tang, and Robert M. Keller, "Learning Jazz Grammars." SMC [pdf]

2009   Jon Gillick. "A Clustering Algorithm for Recombinant Jazz Improvisations." Wesleyan Honors Thesis [pdf].

Answers: Solo 1 - Charlie Parker; Solo 2 - Charlie Parker; Solo 3 - Computer; Solo 4 - Computer
