On Tuesday night I was playing around in Ableton for a future music project when I realised that I could perform the music for my final major project myself. A few days earlier I had been researching the use of hand signals and gestures in improvised music, and I came across an essay by Nana Pi Aubo Larsen, "Conducted Improvisation: A Study of the Effect of the Concept of Signs on Musical Creativity". In it, Larsen goes into depth on the history of signs in music. What became apparent from reading it is that a lot of attention has gone into developing this somewhat new musical language, the most substantial example being Soundpainting, developed by Walter Thompson, which has around 1,200 signs and an annual conference aimed at developing the language further. It became clear to me that a great deal of time and development goes into creating your own sign language for musical expression, an art form I don't believe I can do justice in less than four weeks.

Which brings me back to performing the music myself. Last year was the first time I used Ableton to perform live. The performance was met with some praise and I felt it went well, so I believe I have a good foundation for creating a performance experience for the examination, and even more so for the degree show. I'm excited by this idea, as I want to challenge my performing practice with Ableton. I've worked out most of the technical specifications of my set, and I've even purchased a Behringer FCA1616 8-in/8-out interface to help me.
Some of the inspiration for this set came from watching Luke Vosper play at the ID Spectral event at Werks Central Brighton on the 11th of April. I was inspired by seeing Luke use Ableton to drive external instruments. I plan on using Ableton to send MIDI notes to the MicroKorg and Blofeld synthesisers, which I can then edit in real time through the synths' own parameters. I believe this can be used to create live loops that feel more organic and alive, compared to looping a snippet of audio. A looped audio clip sounds very repetitive because it repeats the same notes over and over with the same tonality; if you loop MIDI and output it to an external synth, you can use the synth's LFOs and patch controls to create variation in the sound. Mixed with some live melodic parts from a vocal or guitar, this should really bring the music to life. I know there is much for me to learn, though. I'm going to look into Ableton's randomisation functions and see how they can be used in my performance. Another thing I need to look into is the footswitch function on the APC, as I have no idea how it can be used; it might be fruitful to know. I'm also not practically familiar with the Blofeld synthesiser, so I'm going to spend some time learning how to use it and creating sounds.
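To show what I mean about a MIDI loop that varies on each pass, here's a minimal sketch in plain Python. It's not Ableton's actual randomisation feature, just the idea: the phrase, jitter amounts and octave-jump chance are all made up for illustration, and in practice the synth's own LFOs would do this sort of thing in the analogue domain.

```python
import random

def humanise_loop(notes, velocity_jitter=12, octave_chance=0.15):
    """Return one pass of a looped MIDI phrase with small random
    variations, so each repeat differs slightly in feel.
    `notes` is a list of (midi_note, velocity) pairs."""
    varied = []
    for note, velocity in notes:
        # Nudge the velocity up or down a little on each pass
        v = velocity + random.randint(-velocity_jitter, velocity_jitter)
        v = max(1, min(127, v))  # keep within the valid MIDI range
        # Occasionally jump the note up an octave for interest
        n = note + 12 if random.random() < octave_chance else note
        varied.append((n, v))
    return varied

phrase = [(60, 90), (63, 80), (67, 85), (70, 75)]  # a Cm7-ish arpeggio
for repeat in range(4):
    print(humanise_loop(phrase))
```

Each printed pass keeps the shape of the phrase but differs in velocity and occasionally register, which is exactly why a looped MIDI clip driving a hardware synth can feel less static than a looped audio snippet.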
When I first started this project with the idea of using Ableton to create generative music, I came across a method of outputting to multiple speakers. I'm going to use this to send audio to the computer running TouchDesigner, as well as sending myself a click track that the audience won't be able to hear. I have yet to set this up, though.
Below are some photos I took on Wednesday of the visuals in action. This is the visual with a feedback system added, which seems to fill it out a lot more. In the session I realised I have to do some fine-tuning of the visuals against the music, so I'm planning on going into the performance studio next week to do this with my equipment. Some videos of the piece can be found on my Instagram:
Below are some pictures of my setup, from left to right:
On the 8th I got ready for practice with the musicians; I had booked the performance studio for a practice run of the piece. First I had to update the TouchDesigner patch, adding the trail CHOPs I talked about in my last blog post. That went smoothly, with no problems. I then had to set up TouchDesigner on a different computer, because the technicians informed me I couldn't use the newer MacPros in the studios and would have to use an older model. I was worried the older computer wouldn't be able to handle the visuals because of its specs, but when I tried it, it worked fine, even with Logic running too. It ran slightly slower than the newer MacPros, but it's fine.
I ran into some problems setting up OSC communication to the computer, and only god knows why! I got stuck for an hour trying to get it to work. I've had the odd problem setting it up before, but it's never been this bad. The only way around the problem seemed to be persistence: I couldn't think of any logical reason why it wasn't working, and when it did finally work I had no idea what I'd done to fix it. So yeah, it's beyond me. To make sure I don't run into this problem in the exam, I'll prepare the OSC link the day before; it has always kept working once the link is made and no settings change.
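One thing that helped me feel less superstitious about OSC is understanding how little is actually travelling over the wire. Here's a rough sketch of how a single one-float OSC message is packed, in plain Python with no OSC library; the address `/visuals/intensity` is just a made-up example, not a real address from my TouchDesigner patch.

```python
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-terminate a byte string and pad it to a multiple of
    four bytes, as OSC 1.0 requires for strings."""
    data += b"\x00"
    while len(data) % 4:
        data += b"\x00"
    return data

def osc_message(address: str, value: float) -> bytes:
    """Pack a single-float OSC message: padded address, padded
    type-tag string ',f', then the value as a big-endian float32."""
    return (osc_pad(address.encode())
            + osc_pad(b",f")
            + struct.pack(">f", value))

# A hypothetical control address for the visuals
packet = osc_message("/visuals/intensity", 0.5)
print(len(packet), packet)
```

A packet like this would normally be fired at the TouchDesigner machine over UDP, which is part of why link problems are so maddening: UDP gives no error back, the message just silently never arrives if an IP, port or firewall setting is off.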
Once I got that working, I started moving some kit into the performance studio: the computer, interface, extension leads, projector and the appropriate cables. I then set up the space for the next day. It took roughly an hour and a half to set everything up, including the screen (with help from the technician on that). I wasn't sure how I wanted the space arranged at first; now that I know a little better, it should be faster next time, although I also need to take into consideration the setting up of the musicians, which shouldn't take much longer.
I am trying to place the musicians where they can see both the projection and myself whilst I conduct, although this may not be possible. My initial idea of their placement can be seen below, where the table and chairs are; on stage right lies the computer.
A few things needed sorting out the next day: the height of the screen, as most of the projection was cut in half; setting up the power supplies and lamps for the table; and setting up the projector's colour grading, as the colours looked off when I used it.
On the 9th, the day of practice, I got in early to set up the last bits mentioned above, and everyone made it in by 11. One thing we struggled with was the length of the leads. Tarek's lead is probably the most specific, so he will need to be close to the computer, but other than that I can grab some long jack leads from the technicians next time. As for the practice, it went really well, with some really good developments from working with the musicians. I had a go at conducting with some hand signals to guide the music, which worked well for controlling the outcome of the composition, although it was clear the music didn't have a direction, so I'm going to write myself some scores of events. I had been thinking about revisiting my earlier ideas of form based on Joseph Campbell's The Hero's Journey, although I don't believe they are suitable for the concept. Either way, the composition needs some form and development to stay interesting. It's also important that I develop some hand signals/gestures that communicate musical ideas. I'll spend some time developing these for the next practice; I believe I should begin by breaking down the different pillars of music, starting from the simple and moving to the more complex. I'll then need to send them over to the musicians so they can familiarise themselves with them.
Additionally, I need to give Charlie a MicroKorg induction so that he can better use the synthesiser. I'm aiming to do this tomorrow, but I still haven't heard back from Charlie about whether he's free.
As a group we also talked about doing more than the three gigs planned, so that's on the cards. I'm going to talk to the technicians about loaning out one of the computers over the summer, but if not, an alternative will have to be found.
Written by Jack Cleary