Live algorithms & Machine listening: Heretic
Heretic is an artificially intelligent computer music system designed for human-machine free improvisation. Heretic is written in the SuperCollider programming language, with specific aspects of the system implemented in the machine learning software Wekinator. Heretic’s architecture consists of three interdependent modules: interpretive listening, contextual decision-making, and musical synthesis. Interpretive listening is a collection of ten multi-layer perceptron neural networks organized according to my interpretation of Anthony Braxton’s Language Music System. The contextual decision-making module uses Joe Morris’ Postures of Interaction as a framework for designing a series of cascading Markov models. The musical synthesis module uses a flexible laptop improvisation framework through which Heretic expresses its musical decisions. Heretic is not an instrument to be controlled by a performer; Heretic autonomously generates and controls its own musical output by listening to the present musical moment, interpreting what it hears, making a musical decision based on this information, and then synthesizing its decision.
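Although Heretic itself is written in SuperCollider (with Wekinator handling the machine learning), the loop between the three modules can be sketched in an illustrative way. The Python below is a minimal sketch under stated assumptions: the posture names, transition probabilities, and function names are invented for illustration and are not Heretic's actual states or code.

```python
import random

# Hypothetical interactional postures for the decision-making module.
# The real module derives its states from Joe Morris' Postures of
# Interaction; these three names are placeholders.
POSTURES = ["imitate", "contrast", "rest"]

# A first-order Markov transition table with invented probabilities,
# standing in for one layer of Heretic's cascading Markov models.
TRANSITIONS = {
    "imitate":  {"imitate": 0.5, "contrast": 0.3, "rest": 0.2},
    "contrast": {"imitate": 0.4, "contrast": 0.4, "rest": 0.2},
    "rest":     {"imitate": 0.6, "contrast": 0.3, "rest": 0.1},
}

def next_posture(current, rng=random):
    """Sample the next interactional posture from the Markov chain."""
    states = list(TRANSITIONS[current])
    weights = [TRANSITIONS[current][s] for s in states]
    return rng.choices(states, weights=weights, k=1)[0]

def decide(activations, posture):
    """Combine the strongest listening activation with the current
    posture to produce a (language_type, posture) decision that a
    synthesis module could act on."""
    detected = max(activations, key=activations.get)
    return detected, next_posture(posture)
```

A single step of the loop might then look like `decide({"Silence": 0.1, "Sparse Formings": 0.8}, "rest")`, which selects the most strongly detected Language Type and advances the interactional posture.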
Machine listening/neural network demo:
The screen capture displayed in this video’s upper-right corner contains ten sliders labelled Silence, Sparse Formings, etc. These ten sliders represent the real-time signal produced by the output layer of each of the ten neural networks in Heretic’s interpretive listening module. When Heretic detects an improvisational Language Type, the value of the slider corresponding to that Language Type increases.
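The behaviour described above amounts to reading each network's output and treating the strongest activation, if it is strong enough, as the detected Language Type. A minimal Python sketch of that reading, where the slider values and the detection threshold are invented for illustration (only two of the ten labels are named in the text above):

```python
# Toy slider readings standing in for the ten output-layer signals.
sliders = {
    "Silence": 0.05,
    "Sparse Formings": 0.92,
    # ...the remaining eight Language Types would appear here...
}

THRESHOLD = 0.5  # assumed detection threshold, not Heretic's actual value

def detect(readings, threshold=THRESHOLD):
    """Return the Language Type with the highest slider value,
    or None when no activation clears the threshold."""
    label, value = max(readings.items(), key=lambda kv: kv[1])
    return label if value >= threshold else None
```

With the toy values above, `detect(sliders)` reports "Sparse Formings" as the detected Language Type, while a reading where every slider sits low yields no detection.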