Browse current and past research projects related to musical robotics, new interfaces for musical expression, sound content-based descriptive filtering, machine learning using Magenta, and much more. Click the images to be taken to a page with more information about each project and to listen to a demonstration, or click the download links to read the full paper for each project.

Aaf.Maxtools: Autonomous Parameter Control in Max Utilizing MIR Algorithms (2022)

This paper documents the development of Aaf.Maxtools: a set of open-source objects designed for the real-time automation of control parameters within the popular programming environment Max/MSP. The package comprises filters, signal analysis and sound description tools, audio effects, and other objects that apply several music information retrieval (MIR) algorithms and are designed to be used together to compose or improvise without external controllers or hardware. A framework is also presented that demonstrates ideal usage of the objects for achieving fully autonomous control over musical parameters.
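As a rough illustration of the idea (not the package's actual Max objects), an MIR descriptor extracted from an incoming audio frame can be mapped directly onto a normalized control parameter. The sketch below, in Python, maps frame loudness (RMS in dBFS) onto a 0–1 parameter; the floor value and scaling are assumptions for the example.

```python
import math

def rms(frame):
    """Root-mean-square amplitude of one audio frame (list of samples)."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def rms_to_param(frame, floor_db=-60.0):
    """Map frame loudness in dBFS onto a 0..1 control parameter.

    Frames at or below `floor_db` map to 0.0; full scale maps to 1.0.
    """
    level = rms(frame)
    db = 20.0 * math.log10(max(level, 1e-6))  # avoid log(0) on silence
    return min(max((db - floor_db) / -floor_db, 0.0), 1.0)
```

In a real-time patch, a value like this would be recomputed per analysis frame and smoothed before driving a filter cutoff, effect mix, or other musical parameter.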

Read the full paper:

The Robo-Cajon: An Example of Live Performance With Musical Robotics (2022)

The Robo-Cajon is a robotic musical performer capable of real-time improvisation with a human performer. It is a wooden cajon mounted with two push/pull solenoids that receives input from a separate cajon mounted with contact microphones. Rhythmic data from the human performer is classified using a multi-layer perceptron, and several Markov models predict and perform rhythmic patterns based on this classification. The Robo-Cajon demonstrates a novel approach to human-computer interaction and real-time applications for various machine learning algorithms. The Robo-Cajon was realized using Max/MSP and Arduino.
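A first-order Markov model of the kind described can be sketched in a few lines: count how often one rhythmic class follows another, then sample the next class in proportion to those counts. This Python sketch is an assumption about the general technique, not the paper's implementation (which runs in Max/MSP).

```python
import random
from collections import defaultdict

def train_markov(sequence):
    """Count first-order transitions between observed rhythmic classes."""
    counts = defaultdict(lambda: defaultdict(int))
    for current, following in zip(sequence, sequence[1:]):
        counts[current][following] += 1
    return counts

def next_state(counts, state, rng=random):
    """Sample the next class in proportion to how often it followed `state`."""
    options = counts[state]
    total = sum(options.values())
    if total == 0:
        return state  # unseen state: fall back to repeating it
    r = rng.random() * total
    for candidate, n in options.items():
        r -= n
        if r <= 0:
            return candidate
    return state
```

In performance, the classifier's output stream would feed `train_markov` incrementally, and `next_state` would drive the solenoid triggers.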

Read the full paper:

Cellular Automata Musification Using Python and Magenta (2021)

Life Like is an audiovisual project exploring the generation and musification of Life-Like Cellular Automata, modeled after John H. Conway's Game of Life, using Magenta's GANSynth Colab notebook. This cloud-based Jupyter notebook contains pre-trained models for synthesizing timbre and audio files. A video and a MIDI file of "live" cells are created from variable birth and survival conditions set before generation in Python.

Read the full paper:

Aerial Glass: A Live Browser and Aerial Silk Performance (2021)

Aerial Glass is a real-time performance, delivered through a website, of aerial silk acrobatics composited with a live browser performance of various video clips and audio composition. It uses the NexusHub framework to distribute a live in-browser performance of pre-rendered, transparent video clips, audio files, and web audio effects to everyone else visiting the website. A live physical performance on aerial silks is recorded in front of a green screen and live streamed to the website to be composited with the browser performance. This approach creates a heightened sense of presence and live action, a welcome addition when social distancing measures have severely limited or eliminated in-person performances. This presence exceeds that of a typical live stream, as the browser can be used to extend the event into a live multimedia performance.

Read the full paper:

Collabscape (2021)

Collabscape is a collaborative musical net art installation that enables users to create infinite soundscapes and ambient music together using a web browser. Users navigate to the website, enter their name and location, and are given a unique on-screen object to click and drag. The position of this object affects an audio sample that is assigned when the user joins the site. The user info, on-screen object, and the assigned audio sample are broadcast and updated in real time to all other users on the site using the NexusHub and Tone.js frameworks, and the final musical result is created through the number of users on the site and the position of their objects on screen.

Read the full paper:

Composing and Improvising Using Sound Content-Based Descriptive Filtering (2020)

The Freesound Player is a digital instrument developed in Max/MSP that uses the Freesound API to request as many as 16 sound samples, which are filtered based on sound content. Once the samples are returned, they are loaded into buffers and can be performed using a MIDI controller and processed in a variety of ways. The filters implemented are discussed and demonstrated using three music compositions by the authors, along with considerations for composing and improvising using sound content-based descriptive filtering.
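To give a flavour of content-based filtering against the Freesound API, the sketch below assembles the query parameters for a text search narrowed by a sound-content filter (here, duration). The endpoint and parameter names (`query`, `filter`, `fields`, `token`) follow the public Freesound API v2; the specific descriptors the Freesound Player filters on are not shown here, so treat the filter field as an illustrative assumption.

```python
def build_search_params(query, api_key, max_duration=2.0, page_size=16):
    """Assemble parameters for a Freesound APIv2 text search
    (GET https://freesound.org/apiv2/search/text/), filtered by duration."""
    return {
        "query": query,
        "filter": f"duration:[0 TO {max_duration}]",
        "page_size": page_size,  # up to 16 samples, as in the instrument
        "fields": "id,name,previews",  # keep the response small
        "token": api_key,
    }
```

A client would pass this dict to an HTTP GET request and load the returned preview URLs into sample buffers.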

Read the full paper:

Tactus: An Example of Music-to-Vibrotactile Sensory Augmentation (2019)

Tactus is an installation that provides a visitor with auditory and vibrotactile stimuli. It is realized using music-to-vibrotactile sensory substitution techniques involving pitch shifting and lowpass filtering. Tactus is proposed for presentation at the arts reception at the TEI conference. The vibrotactile stimuli are presented to the visitor’s fingertips using a Spatially Distributed Vibrotactile Actuator Array (SDVAA).
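The lowpass-filtering stage of such a music-to-vibrotactile mapping can be sketched with a simple one-pole filter, which keeps the low-frequency content that vibrotactile actuators reproduce well. This Python sketch illustrates the general technique only; the installation's actual DSP chain is not reproduced here.

```python
import math

def lowpass(signal, cutoff_hz, sample_rate):
    """One-pole lowpass filter: attenuates content above cutoff_hz.

    y[n] = y[n-1] + alpha * (x[n] - y[n-1]), with alpha derived from
    the RC time constant of the cutoff frequency.
    """
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = dt / (rc + dt)
    out, y = [], 0.0
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out
```

Run on a 1 kHz tone with a 100 Hz cutoff, the output amplitude drops by roughly a factor of ten, while near-DC content passes through almost unchanged.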

Read the full paper:

A Spatially Distributed Vibrotactile Actuator Array for the Fingertips (2019)

The design of a Spatially Distributed Vibrotactile Actuator Array (SDVAA) for the fingertips is presented. It provides high-fidelity vibrotactile stimulation at the audio sampling rate. Prior works are discussed, and the system is demonstrated using two music compositions by the authors.

Read the full paper:

Interested in learning more? Get in touch >>