On Tuesday night I was playing around in Ableton for a future music project when I realised that I could perform the music for my final major project myself. A few days before, I had been reading and researching the use of hand signals and gestures in improvised music. I came across an essay by Nana Pi Aubo Larsen, Conducted Improvisation: A Study of the Effect of the Concept of Signs on Musical Creativity, in which Larsen goes into depth on the history of signs in music. What became apparent when reading it is that a lot of attention has gone into developing this somewhat new musical language, the most substantial example being Soundpainting, developed by Walter Thompson, which has around 1,200 signs and an annual conference aimed at developing the language further. It became clear to me that a lot of time and development goes into creating your own sign language for musical expression. It's an art form I believe I cannot do justice to in less than four weeks, which brings me back to performing the music myself. Last year was the first time I used Ableton to perform live. My performance was met with some praise and I felt it went well. I believe I have a good foundation to create a performance experience for the examination, and even more so for the degree show. I met this idea with a lot of excitement, as I'm keen to challenge my performing practice with Ableton. I've worked out most of the technical specifications of my set, and I've even purchased a Behringer FCA1616 interface (8 analogue ins/8 outs) to help me.
Some of the inspiration for this set came from watching Luke Vosper play at the ID Spectral event at Werks Central, Brighton, on the 11th of April. I was inspired by seeing Luke using Ableton to drive external instruments. I plan on using Ableton to send MIDI notes to the microKORG and Blofeld synthesisers, which I can then edit in real time through the synths' own parameters. I believe this can be used to create live loops that feel more organic and alive than looping a snippet of audio. Looped audio sounds very repetitive because it repeats the same notes over and over with the same tonality; if you loop MIDI and output it to an external synth, you can use the synth's LFOs and patch controls to create variation in the sound. Mixed with some live melodic parts from a melody, vocal or guitar, this will really bring the music to life. I know there is still much for me to learn, though. I'm going to look into Ableton's randomisation functions and see how they can be used in my performance. I also need to look into the footswitch function on the APC, as I have no idea how it can be used; it might be fruitful to know. And I'm not practically familiar with the Blofeld synthesiser, so I'm going to spend some time learning how to use it and creating sounds.
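The idea behind the MIDI approach can be sketched outside Ableton. Below is a minimal Python illustration (nothing here is Ableton's or the synths' actual API; the phrase, the "cutoff" parameter, and the LFO shape are invented for the example) of why a looped MIDI phrase through a modulated synth varies on every pass while the notes themselves stay identical:

```python
import math
import random

# A fixed 4-note MIDI phrase: (note number, base velocity).
# These values are arbitrary, chosen purely for illustration.
PHRASE = [(48, 90), (55, 90), (60, 90), (63, 90)]

def looped_passes(n_passes, seed=0):
    """Yield the phrase n_passes times, varying each pass the way an
    external synth's LFO and a randomised velocity would: the notes stay
    the same, but velocity and a hypothetical 'filter cutoff' drift."""
    rng = random.Random(seed)
    events = []
    for p in range(n_passes):
        for step, (note, vel) in enumerate(PHRASE):
            t = p * len(PHRASE) + step
            # Slow sine "LFO" sweeping a cutoff value across 0-127
            cutoff = round(63.5 + 63.5 * math.sin(2 * math.pi * t / 16))
            # Small random humanisation of velocity, clamped to MIDI range
            v = max(1, min(127, vel + rng.randint(-12, 12)))
            events.append({"pass": p, "note": note,
                           "velocity": v, "cutoff": cutoff})
    return events

if __name__ == "__main__":
    # Print the first pass: same notes every loop, different feel
    for e in looped_passes(2)[:4]:
        print(e)
```

An audio loop, by contrast, would replay the exact same rendered values every pass; here the note list is identical loop to loop while velocity and cutoff never quite repeat, which is the "organic" quality described above.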
When I first started this project I had the idea of using Ableton to create generative music, and while exploring that I came across a method of outputting to multiple speakers. I'm going to use this to send audio to the computer running TouchDesigner, as well as to send myself a click track that the audience won't be able to hear. I have yet to set this up, though.
Below are some photos I took on Wednesday of the visuals in action. This is the visual with a feedback system added to it, which seems to fill it out a lot more. In the session I realised I have to do some fine-tuning of the visuals against the music. I'm planning on going into the performance studio next week to do this with my equipment. Some videos of the piece can be found on my Instagram:
Below are some pictures of my setup, from left to right: