Research

Browse current and past research projects related to musical robotics, new interfaces for musical expression, sound content-based descriptive filtering, machine learning using Magenta, and much more. Click an image to visit a page with more information about each project and to listen to a demonstration, or click the download links to read the full paper.

PnP.Maxtools: Autonomous Parameter Control in Max Utilizing MIR Algorithms

This research presents a new approach to computer automation through the implementation of novel real-time music information retrieval algorithms developed for this project. It documents the development of the PnP.Maxtools package, a set of open source objects designed within the popular programming environment Max/MSP. The package is a set of pre/post processing filters, objective and subjective timbral descriptors, audio effects, and other objects that are designed to be used together to compose music or improvise without the use of external controllers or hardware. The PnP.Maxtools objects are designed to be used quickly and easily in a ‘plug and play’ style, with as few initial arguments as possible. The package takes incoming audio from a microphone, analyzes it, and uses the analysis to control an audio effect on the incoming signal in real time. In this way, the audio content has a direct, musically analogous relationship with the resulting transformations, while the control parameters become more multifaceted and better able to serve the needs of artists. The term Reflexive Automation is introduced to describe this unsupervised relationship between the content of the sound being analyzed and the analogous, automatic control over a specific musical parameter. A set of compositions is also presented that demonstrates ideal usage of the object categories for creating reflexive systems and achieving fully autonomous control over musical parameters.
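As a rough illustration of the reflexive idea, the sketch below extracts a timbral descriptor (spectral centroid) from each incoming frame and uses it to set the cutoff of a lowpass filter applied to that same frame. This is a minimal Python approximation, not the Max/MSP objects themselves; the frame size and the centroid-to-cutoff mapping are arbitrary assumptions.

```python
# Illustrative sketch only: the actual package is a set of Max/MSP objects.
# An MIR feature extracted from the input drives an effect on that same input.
import numpy as np
from scipy.signal import butter, lfilter

SR = 44100  # assumed sample rate

def spectral_centroid(frame: np.ndarray) -> float:
    """Brightness descriptor: magnitude-weighted mean frequency of a frame."""
    mags = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SR)
    return float(np.sum(freqs * mags) / (np.sum(mags) + 1e-12))

def reflexive_lowpass(signal: np.ndarray, frame_size: int = 1024) -> np.ndarray:
    """Filter each frame with a cutoff derived from its own spectral centroid,
    so the sound's content controls its own transformation."""
    out = np.zeros_like(signal)
    for start in range(0, len(signal) - frame_size, frame_size):
        frame = signal[start:start + frame_size]
        # Arbitrary mapping: cutoff one octave above the frame's centroid.
        cutoff = np.clip(2.0 * spectral_centroid(frame), 100.0, SR / 2 - 1)
        b, a = butter(2, cutoff / (SR / 2), btype="low")
        out[start:start + frame_size] = lfilter(b, a, frame)
    return out
```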

The Robo-Cajon: An Example of Live Performance With Musical Robotics

The Robo-Cajon is a robotic musical performer capable of real-time improvisation with a human performer. It is a wooden cajon fitted with two push/pull solenoids that receives input from a separate cajon fitted with contact microphones. Rhythmic data from the human performer is classified using a multi-layer perceptron, and several Markov models are used to predict and perform rhythmic patterns based on this classification. The Robo-Cajon demonstrates a novel approach to human-computer interaction and real-time applications of various machine learning algorithms. It was realized using Max/MSP and Arduino.
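A rough Python sketch of that classify-then-predict pipeline is below. The real system runs in Max/MSP with an Arduino firing the solenoids, so the feature layout, the three pattern classes, and the transition probabilities here are all hypothetical placeholders.

```python
# Illustrative sketch: an MLP classifies the human's rhythm, then a
# first-order Markov model chooses the pattern the robot plays next.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical training data: onset-timing feature vectors -> rhythm classes.
X_train = np.random.rand(200, 8)          # e.g. 8 inter-onset intervals
y_train = np.random.randint(0, 3, 200)    # e.g. 3 rhythmic pattern classes

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
clf.fit(X_train, y_train)

# Transition matrix: rows are the current class, columns the next class.
transition = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.3, 0.3, 0.4],
])

def respond(onset_features: np.ndarray, rng=np.random.default_rng()) -> int:
    """Classify the incoming rhythm, then sample the pattern to perform."""
    current = int(clf.predict(onset_features.reshape(1, -1))[0])
    return int(rng.choice(3, p=transition[current]))
```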

Cellular Automata Musification Using Python and Magenta

Life Like is an audiovisual project exploring the generation and musification of Life-Like Cellular Automata modeled after John H. Conway’s Game of Life, using Magenta’s GANSynth Colab Notebook. This cloud-based Jupyter notebook contains pre-trained models for synthesizing timbre and audio files. A video and a MIDI file of “live” cells are created from variable birth and survival conditions set before generation in Python.
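The cellular automaton itself is simple to sketch in Python. Below, a Life-like rule is parameterized by birth and survival neighbor counts (Conway’s Game of Life is B3/S23), and live cells are mapped to MIDI note numbers; the row-to-pitch mapping is a made-up illustration, and the actual project renders timbre with GANSynth rather than printing pitches.

```python
# Illustrative sketch of a Life-like cellular automaton with variable
# birth/survival conditions, plus a toy musification of the live cells.
import numpy as np

def step(grid: np.ndarray, birth={3}, survive={2, 3}) -> np.ndarray:
    """Advance one generation under the given B/S rule."""
    # Count the eight neighbors of every cell with wraparound edges.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    born = (grid == 0) & np.isin(neighbors, list(birth))
    kept = (grid == 1) & np.isin(neighbors, list(survive))
    return (born | kept).astype(np.uint8)

def live_cells_to_pitches(grid: np.ndarray, base_note: int = 36) -> list:
    """Hypothetical musification: map each live row index to a MIDI pitch."""
    rows = np.unique(np.nonzero(grid)[0])
    return [base_note + int(r) for r in rows]

grid = (np.random.rand(16, 16) > 0.7).astype(np.uint8)
for _ in range(8):
    grid = step(grid, birth={3, 6}, survive={2, 3})  # e.g. HighLife, B36/S23
    print(live_cells_to_pitches(grid))
```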

Aerial Glass: A Live Browser and Aerial Silk Performance

Aerial Glass is a real-time performance, delivered through a website, of aerial silk acrobatics composited with a live in-browser performance of video clips and an audio composition. It uses the NexusHub framework to distribute a live browser performance of pre-rendered transparent video clips, audio files, and web audio effects to everyone visiting the website. A physical performance on aerial silks is recorded in front of a green screen and live streamed to the website, where it is composited with the browser performance. This approach creates a heightened sense of presence and live action, a welcome addition when social distancing measures have severely limited or eliminated in-person performances. The sense of presence exceeds that of a typical live stream because the browser extends the event into a live multimedia performance.

Collabscape

Collabscape is a collaborative musical net art installation that enables users to create infinite soundscapes and ambient music together using a web browser. Users navigate to the website, enter their name and location, and are given a unique on-screen object to click and drag. The position of this object affects an audio sample that is assigned when the user joins the site. The user info, on-screen object, and assigned audio sample are broadcast and updated in real time to all other users on the site using the NexusHub and Tone.js frameworks, and the final musical result emerges from the number of users on the site and the positions of their objects on screen, as sketched below.
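The site itself runs in the browser on NexusHub and Tone.js, so the Python below only illustrates one plausible translation of a normalized drag position into playback parameters, along with the kind of state message that would be shared with other visitors; all field names are hypothetical.

```python
# Illustrative sketch: mapping an on-screen drag to audio parameters and
# packaging the shared state. Field names and mappings are hypothetical.
def position_to_params(x: float, y: float) -> dict:
    """Map a normalized (0-1) screen position to sample playback parameters."""
    return {
        "rate": 0.5 + 1.5 * x,   # horizontal drag: 0.5x-2x playback speed
        "gain": 1.0 - y,         # vertical drag: quieter toward the bottom
    }

def state_message(name: str, location: str, sample_id: int,
                  x: float, y: float) -> dict:
    """State broadcast to every other visitor on the site."""
    return {
        "user": {"name": name, "location": location},
        "sample": sample_id,
        "position": {"x": x, "y": y},
        "params": position_to_params(x, y),
    }

print(state_message("Ana", "Lisbon", 3, 0.25, 0.6))
```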

Composing and Improvising Using Sound Content-Based Descriptive Filtering

The Freesound Player is a digital instrument developed in Max/MSP that uses the Freesound API to request up to 16 sound samples filtered by sound content. Once the samples are returned, they are loaded into buffers where they can be performed with a MIDI controller and processed in a variety of ways. The filters implemented are discussed and demonstrated using three music compositions by the authors, along with considerations for composing and improvising using sound content-based descriptive filtering.
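For readers curious about the request side, the sketch below queries the Freesound APIv2 text-search endpoint with a filter and retrieves preview URLs for up to 16 results. The duration filter stands in for the instrument's actual content-based descriptor filters, which are not reproduced here, and YOUR_API_KEY is a placeholder.

```python
# Illustrative sketch of a filtered sample request against the Freesound API.
# The instrument itself is built in Max/MSP; this only shows the HTTP side.
import requests

API_KEY = "YOUR_API_KEY"  # obtain one at https://freesound.org/apiv2/apply/

def fetch_samples(query: str, max_results: int = 16) -> list:
    """Request up to 16 samples matching a text query and a filter."""
    resp = requests.get(
        "https://freesound.org/apiv2/search/text/",
        params={
            "query": query,
            "filter": "duration:[0.5 TO 5.0]",  # stand-in content filter
            "fields": "id,name,duration,previews",
            "page_size": max_results,
            "token": API_KEY,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["results"]

for sound in fetch_samples("snare"):
    print(sound["id"], sound["name"], sound["previews"]["preview-hq-mp3"])
```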

Tactus: An Example of Music-to-Vibrotactile Sensory Augmentation

Tactus is an installation that provides a visitor with auditory and vibrotactile stimuli. It is realized using music-to-vibrotactile sensory substitution techniques involving pitch shifting and lowpass filtering. Tactus is proposed for presentation at the arts reception at the TEI conference. The vibrotactile stimuli are presented to the visitor’s fingertips using a Spatially Distributed Vibrotactile Actuator Array (SDVAA).
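Those two operations are easy to demonstrate offline. The Python sketch below pitch-shifts audio down toward the vibrotactile range and then lowpass filters it; the shift amount and cutoff are assumptions for illustration, not the installation's actual settings.

```python
# Illustrative sketch of the two named operations: pitch shifting down and
# lowpass filtering, to move musical content into the vibrotactile range.
import numpy as np
from scipy.signal import butter, sosfilt, resample

SR = 44100  # assumed sample rate

def to_vibrotactile(audio: np.ndarray, semitones: float = -24.0,
                    cutoff_hz: float = 250.0) -> np.ndarray:
    """Shift pitch down, then keep only frequencies the skin feels well."""
    # Naive pitch shift by resampling (also stretches duration; acceptable
    # for a sketch). A downward shift lengthens the signal.
    ratio = 2.0 ** (semitones / 12.0)
    shifted = resample(audio, int(len(audio) / ratio))
    # Lowpass near the skin's vibrotactile sensitivity peak (~250 Hz).
    sos = butter(4, cutoff_hz / (SR / 2), btype="low", output="sos")
    return sosfilt(sos, shifted)
```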

A Spatially Distributed Vibrotactile Actuator Array for the Fingertips

The design of a Spatially Distributed Vibrotactile Actuator Array (SDVAA) for the fingertips is presented. It provides high-fidelity vibrotactile stimulation at the audio sampling rate. Prior works are discussed, and the system is demonstrated using two music compositions by the authors.
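On the software side, driving such an array amounts to treating each actuator as one channel of a multichannel audio stream. The sketch below does this with the python-sounddevice library; the channel count and test signals are hypothetical, the SDVAA hardware itself is not modeled, and an audio interface with enough output channels is assumed.

```python
# Illustrative sketch: actuators driven at the audio sampling rate, one per
# output channel of a multichannel interface. Channel count is hypothetical.
import numpy as np
import sounddevice as sd

SR = 44100
N_ACTUATORS = 4  # e.g. one actuator per fingertip

t = np.arange(SR) / SR  # one second of output
# Give each fingertip its own sine wave in the vibrotactile range.
channels = np.stack(
    [0.5 * np.sin(2 * np.pi * (100 + 50 * i) * t) for i in range(N_ACTUATORS)],
    axis=1,
)
sd.play(channels, samplerate=SR, blocking=True)
```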

Interested in learning more? Get in touch >>