Although ARIADNE, Ben’s experimental A/V duo, releases a/v experiences, their main medium is performance. The performance system Ben has designed for ARIADNE has evolved and improved over the five years of ARIADNE’s existence; the current implementation is described below. You can also watch a video overview of the system, given at the Touchdesigner TouchIn NYC 2019 meetup, here.
Design Philosophy
The design of the system is shaped by the following core problems and solutions:
Problem: Laptops are bad interfaces for performing musical expression.
Solution: Find or create new expressive interfaces for software-based instruments.
Problem: Laptops on stage during performances are distracting for the performer and the audience.
Solution: Hide all laptops and computers from the audience.
Problem: Compared with traditional acoustic instruments, current electronic musical instruments lack a link between visible physical movement and resultant generated sound.
Solution: Create low-latency visual systems that have a direct relationship with the generated sound.
Problem: Computer systems can be unstable and can crash.
Solution: Embrace the inherent instability through physics simulations and machine learning algorithms while using good software design principles to ensure overall system stability.
Implementation
Hardware:
The performance system consists of three computers connected via LAN: one for each performer’s sound generation and one for visual generation. These computers, along with some audio hardware, are housed in a rack placed behind the screen, hidden from the audience. The musical interfaces and controllers on stage are connected to the rack via two long cable looms, and an HDMI cable runs to a short-throw projector in front of the screen.
Although some of the controllers and musical interfaces are commercially available products, the main performance instruments are custom-made and use a combination of machine learning algorithms to process both audio and gesture input into expressive and interesting sound output. This will be explained further here at a later date.
Software:
The visual system is built in Touchdesigner, where the generated visuals rely heavily on CPU- and GPU-based physics simulations driven by real-time audio input and music controller input. Several scenes are built within Touchdesigner, and their sequence is decided by the performers via an OSC-based controller during the performance. Touchdesigner also handles projector calibration, which can be adjusted by the performer via a wireless OSC controller.
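Scene switching and calibration tweaks of this kind are typically scripted in Python inside Touchdesigner. Below is a minimal sketch, assuming an OSC In DAT with a callbacks DAT; the operator names, parameter names, and OSC addresses are placeholders rather than the actual network.

```python
# Callbacks DAT for an OSC In DAT. Operator names, parameter names and OSC
# addresses are placeholders (assumptions), not ARIADNE's actual network.
def onReceiveOSC(dat, rowIndex, message, bytes, timeStamp, address, args, peer):
    # e.g. the performers' controller sends "/scene 2" to jump to the third scene
    if address == '/scene' and args:
        op('scene_switch').par.index = int(args[0])
    # e.g. "/calib/blend 0.5" nudges an edge-blend amount on a calibration component
    elif address == '/calib/blend' and args:
        op('projector_calib').par.Blend = float(args[0])
    return
```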
The audio system is built using a combination of Max/MSP and Ableton. Ableton is used for its stability, while Max/MSP is used to implement the machine learning based software instruments and custom DSP.
Most of the communication between computers is done via OSC and is handled by a Node.js server, which also implements system-wide state logic.
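Below is an illustrative sketch of the kind of routing and state logic such a server performs. It is written in Python with python-osc purely for illustration (the actual server is Node.js), and all IP addresses, ports, and OSC addresses are assumptions.

```python
# Illustrative sketch only: ARIADNE's actual server is Node.js; Python and
# python-osc are used here just to show the idea. IPs, ports and OSC
# addresses are assumptions.
from pythonosc import dispatcher, osc_server, udp_client

# One OSC client per machine on the LAN (hypothetical addresses)
visuals = udp_client.SimpleUDPClient("10.0.0.2", 9000)
audio_a = udp_client.SimpleUDPClient("10.0.0.3", 9001)
audio_b = udp_client.SimpleUDPClient("10.0.0.4", 9001)
everyone = (visuals, audio_a, audio_b)

# System-wide state kept in one place so all machines stay in sync
state = {"scene": 0}

def on_scene(address, *args):
    # A performer's controller requested a scene change: update the shared
    # state and fan the new scene index out to every machine.
    state["scene"] = int(args[0])
    for client in everyone:
        client.send_message("/scene", state["scene"])

def forward_to_visuals(address, *args):
    # Anything not handled above (e.g. audio analysis data) is relayed
    # to the visuals machine unchanged.
    visuals.send_message(address, list(args))

disp = dispatcher.Dispatcher()
disp.map("/scene", on_scene)
disp.set_default_handler(forward_to_visuals)

server = osc_server.ThreadingOSCUDPServer(("0.0.0.0", 8000), disp)
server.serve_forever()
```

Each machine only needs to know the server’s address; the server holds the shared state and decides where each message goes, which keeps the individual machines loosely coupled.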