Today I was considering how the sound diffusion lab will be set up on the day of the exhibition. This covers both the floor plan and the speaker setup.
I figured there would be some room by the edge of the projections for the musicians. I have outlined in the photos where I think the musicians would fit; it's possible to fit two performers on each side of the room. I tried seeing whether instruments would fit too, but it was hard to tell; next time we get together for a jam I will get a better idea of how this will look. I also wondered how many seats I could set up. It appears two rows of five works well, and there is space at the back for people to stand. The important part is that everyone can see the projection.
For the speaker setup, I used recordings made yesterday with Josh Thomas, Louis Sterling and Harrison Milton to organise the speaker arrangement. I used the speakers that are hung for one main reason: I didn't want the musicians to be right next to a speaker, for the sake of their ears. I tried two different setups. Both had a speaker hard left and hard right with a central speaker. Setup 1 had the left and right speakers midway between the front and back of the room. Setup 2 had them positioned in the back left and back right. Through experimentation I decided that setup 2 gave more space to each instrument and more clarity, plus it wasn't so loud for people sitting in the front row.
I've also thought about recording a video of the piece being performed live. I aim to do this next week with Ebony Grace, Josh Thomas & Tarek El Goraicy. The date is not confirmed but should be shortly; I have no doubt that they will come through. I also need to find someone who is willing to film. I'm working on that too, and again I have no doubt I'll find someone.
I've been working on synth sounds recently on my Microkorg XL, which is a wonderful little synth. I've been thinking about how the sounds are going to be balanced, and I've found the Korg can cover a wide range of roles: bass, lead, accompaniment. The demos below highlight this, I believe: a collection of sounds that all take on different roles in the music, all from one instrument. I'm thinking all of these sounds are fair game to use in the piece. Tomorrow I'm trying to set up a jam to see how it all feels alongside the visuals.
Written by Jack Cleary
Today I wanted to get some technical things out of the way. In an earlier post I said I was worried about how OSC was going to behave at the uni. I tried it out today and it worked fine. I linked the RGB of the Level and Wave nodes in TouchDesigner with Touch OSC on my phone, although I still need to change the ranges of some of the OSC messages in Touch OSC. I also wanted to change the opacity with the incoming sound. When I linked the opacity to the Math CHOP it oscillated with the sound, which is the desired effect. I have a hunch the Audio Spectrum CHOP outputs volume only, but I will have to check; a quick Google search doesn't reveal anything, but I will keep looking.
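The range changes I need to make in Touch OSC boil down to a simple linear remap. As a note to myself, here's the maths as a minimal Python sketch (the function name and the example ranges are mine, not Touch OSC's or TouchDesigner's actual API):

```python
def remap(value, in_min, in_max, out_min, out_max):
    """Linearly map a value from one range onto another, e.g. a
    Touch OSC fader's 0-1 output onto a 0-255 RGB channel."""
    span_in = in_max - in_min
    span_out = out_max - out_min
    return out_min + (value - in_min) / span_in * span_out

# A fader at 0.5 mapped onto an 8-bit colour channel:
print(remap(0.5, 0.0, 1.0, 0, 255))  # 127.5
```

Touch OSC lets you set these ranges per control, so in practice I just need to type the right min/max into each message rather than compute anything myself.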
As another note, I started the documentation today and wrote something for the synopsis and artist statement. So far it's just me trying to summarise the piece, although I think there is still room for things to change.
Written by Jack Cleary
A lot of progression has happened in terms of the concept for my final major project. I'm shifting my focus to a performance visual art piece, mainly because of the gig I played with Louis Sterling the other week. Why so? Well, it was a lot of fun, I want to do it more, and performances are an engaging experience. I'm not sure I'm going to completely ditch the generative aspect, but if the performance side is a struggle to get together I'll have to drop the generative side.
So what's the new idea? I want to use multiple channels of audio to add coloured lines that move to the sound of the music. Using the visuals I did for Louis as a foundation (which are pretty much what I just described), I started building a new patch with colour added. I did this by duplicating the Geometry and sending each copy to its own Render, then adding a Level TOP, which can be used to change the colour of TOPs. I then sent all of the renders to a Composite to bring them together. The idea is that each channel of audio has its own colour and vibration (from the sound), which will create complex coloured patterns.
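To sanity-check the idea, here's the colour logic in plain Python rather than nodes: each channel gets a fixed colour scaled by its current amplitude, and the results are summed much like an additive Composite combines the renders (a rough sketch with made-up numbers, not TouchDesigner code):

```python
def composite(channels):
    """channels: list of (amplitude, (r, g, b)) pairs, one per audio channel.
    Scale each channel's colour by its amplitude, then sum additively,
    clamping at 1.0 the way an 'Add' composite would."""
    out = [0.0, 0.0, 0.0]
    for amp, colour in channels:
        for i in range(3):
            out[i] = min(1.0, out[i] + amp * colour[i])
    return tuple(out)

# Two channels: a loud red line and a quiet blue line.
print(composite([(0.8, (1, 0, 0)), (0.3, (0, 0, 1))]))  # (0.8, 0.0, 0.3)
```

The point is that louder channels literally glow brighter in their own colour, so the mix of the music should read directly as a mix of colours on screen.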
I have a couple of instrumentalists interested in helping with the performances. Ebony Grace & Josh Thomas from The Case of Us will be bringing the sound to life with synthesisers. Charles Kew & Rosie Robinson have also said they will help out, but as of now it's not confirmed. I'm also thinking about asking my friend El Goraicy if he would like to play synth too.
Written By Jack Cleary
It has been a week since I updated the blog, and I feel like my progress has somewhat slowed down. But I may just be reaching a point where I needed to reflect a little. That's not to say I haven't been busy: I've been trying to learn things for the next stage of the development of my final major project. Having such a wonderful & knowledgeable group of friends around has definitely been crucial to my development; talking about ideas within the visual realm is eye-opening and many ideas come from that. My friend Jedd Winterburn introduced me to the idea of using TOPs as textures for geometry. The result is something more organic and with depth, which my visuals have lacked until this point. The gist of it is using a PBR shader and an Environment Light as the material for the geometry. You can use a TOP with the Environment Light, and it's worth experimenting with different images to see how they affect the outcome. The PBR shader is then used as the material for the Geometry, and you can change the properties of that PBR shader to make it more reflective or rough, etc. I found that using TOPs that move on the surface of a geometry is very interesting; I can imagine that once the geometry has some manipulation and movement to it, you will have something very striking. I think a particle system will create some great results with different movements of wind etc. You can see the moving visuals on a geometry > here <
A CHOP that is really cool for working with audio is the Wave CHOP; it's a good way of sculpting the data from the audio. By this I mean it creates different curves of data from the audio input. I used this filtered data as instancing data for the geometry in the visuals below. An interesting thing came out of doing all this visual work: my friend and longtime collaborator Louis Sterling saw the visuals and asked me if I could perform them at his gig on the 15th of February, although he only asked me the day before. Most of the work was already done, though; I mainly had to find a way of performing with them. To perform with TouchDesigner I was pretty much only changing values in the Wave CHOP, which was enough to create performative visuals that were striking and dynamic. Because of the nature of playing live, I tried using Touch OSC on my phone to control the parameters. I had luck at home with a stable internet connection, but in the venue it was a no-go: for some reason the internet wasn't working there, which makes me sceptical about how it's going to work over the university's internet. I am going to have to test it out in the near future. I am going to go in soon with my computer to try working with visuals and audio together, although I think the first session will literally be trying to figure out how the computers will communicate with each other. It's a shame I don't have a space of my own to do this, but it's OK, some day.
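Going back to the Wave CHOP for a second, a rough stand-in for what it's doing (in Python, with parameter names of my own invention, not TouchDesigner's) is generating a channel of samples from a curve whose amplitude and period I can tweak live; those samples then feed the geometry's instance positions:

```python
import math

def wave_channel(n_samples, amplitude, period, phase=0.0):
    """One channel of n_samples values from a sine curve.
    Amplitude and period are the knobs I'd be turning during a performance."""
    return [amplitude * math.sin(2 * math.pi * (i / period) + phase)
            for i in range(n_samples)]

samples = wave_channel(100, amplitude=0.5, period=25)
print(max(samples))  # peaks close to +0.5
```

Nudging `amplitude` and `period` reshapes the whole channel at once, which is why changing just those two values was enough to make the visuals feel performative.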
What this work has brought into my hemisphere is that I could possibly have one visual idea that changes via parameters in TouchDesigner through OSC. Depending on whether it works, OSC could also change the lighting, which is dope too. I am going to try this out very soon; I just need to find the energy to bring my gear into uni.
See below for an example of the visuals I did for the gig.
Written by Jack Cleary
Today I spent a large amount of time in the sound diffusion lab creating visuals in TouchDesigner. I had the idea of taking loads of lines and manipulating them to create a vibrating-string effect. The easiest way of doing this was through instancing, which I learnt about through a tutorial by programming for people. You can watch it via > this link <
Through instancing you can take the points of a SOP and tell a Geometry COMP to use them to place more SOPs. This means you can take the points of a square and assign a circle to each of those points via the Geometry COMP, which will result in 4 circles appearing at the points. I used a line with multiple points to place lines next to each other, then used an audio track to manipulate the points, so that you have multiple lines vibrating next to each other. Once that was rendered, I duplicated it and turned one on its side. To get the visuals down to small points, I used a Composite TOP so that they would show only where the lines intersect.
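The square-and-circles example above can be sketched in a few lines of Python (names are mine, not TouchDesigner's):

```python
def instance(template_points, shape_points):
    """Stamp a copy of shape_points at every point of the template shape,
    the way a Geometry COMP places one SOP at each point of another."""
    return [[(sx + tx, sy + ty) for (sx, sy) in shape_points]
            for (tx, ty) in template_points]

# A 4-point square as the template, a single-point "circle centre" as the shape:
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
copies = instance(square, [(0, 0)])
print(len(copies))  # 4 copies, one per template point
```

Swap the square for a many-point line and the single point for another line, and you get the stacked lines I described, ready to be wobbled by the audio.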
I am spending a lot of time developing different visuals to use. I think this will be the best approach, since I can refine them at a later stage. More visuals on the way...
Written by Jack Cleary
Today my friend Sonny lent me his melodica; the intention was to record it into Ableton alongside the generative music. After recording, I took the phrases and placed them in a sampler (Kontakt). I did this by making a new audio track next to the recording of the melodica, then using Ableton's selector tool to drag the phrases I wanted onto the new audio track. Once I had all the phrases I wanted, I consolidated them into individual clips, which then appear inside the project folder. From there I dragged them into Kontakt. I then used PureData to randomly generate the MIDI notes that went to the sampler. This went as I thought it would: very smooth, no problems with timing. I think this is because the instrument is used melodically. I posted a video on Instagram of me playing the melodica into Ableton, although you can't hear the generative music. You can watch the video > here <
I also thought it might be a cool idea to double up the phrases, so I tried using a delay object in PureData to trigger a random number between 0 and 3, which went into a select object that output a bang whenever a 1 was generated. This then went back into the note selection to trigger a second sample. It worked well and I will probably use this technique in future, though I do need to mess around with the values to get the desired effect.
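In Python terms, the doubling patch behaves something like this (a sketch of the logic, not PureData itself; the roughly 1-in-4 odds come from the 0-3 range):

```python
import random

def maybe_double(trigger_sample, rng=random):
    """After the delay fires, pick a random number 0-3; only a 1 makes
    the 'select' bang, which retriggers a second copy of the sample."""
    if rng.randint(0, 3) == 1:
        trigger_sample()  # the doubled hit
        return True
    return False

# Quick check of the odds over many notes:
rng = random.Random(42)
hits = sum(maybe_double(lambda: None, rng) for _ in range(10000))
print(hits / 10000)  # roughly 0.25
```

Widening or narrowing that random range is the value-tweaking I still need to do: 0-3 doubles about a quarter of the notes, 0-7 would halve that, and so on.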
Written by Jack Cleary
Today I made some good progress. The first thing I worked on was using Ableton and PureData to generate music, developing my idea for what the yellow part of the installation will sound like. I wanted to make a composition where it sounded like the sounds were chasing each other. In Ableton I made percussive sounds from three software synthesisers (FM8, Massive & Element), with control outs manipulating the panning of the sound. This was pretty effective when the volume was equal on each speaker. The percussive sounds were tuned to the G pentatonic scale; I did this because I believe it sounds very childlike. I showed my course mate Louis Sterling and he called it meditative. It has a very pure vibe, and it's definitely one of the directions I want to go in, although I want to add some breaks in there; it's just a matter of programming them in. The next step for the music is adding a lead-type instrument. I want to use a melodica for this; my course mate Sonny has said he will lend me his. I plan on recording some simple phrases and then triggering them inside Kontakt.
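For reference, this is the note pool PureData is picking from, assuming the G major pentatonic (the root at MIDI 55 and the two-octave span are my choices for this sketch):

```python
import random

G_PENTATONIC_INTERVALS = [0, 2, 4, 7, 9]  # semitones above the root: G A B D E

def pentatonic_notes(root_midi=55, octaves=2):
    """MIDI pitches of the G pentatonic across a couple of octaves (55 = G3)."""
    return [root_midi + 12 * o + i
            for o in range(octaves)
            for i in G_PENTATONIC_INTERVALS]

def random_note(rng=random):
    """What the PureData patch is effectively doing each time it fires."""
    return rng.choice(pentatonic_notes())

print(pentatonic_notes())  # [55, 57, 59, 62, 64, 67, 69, 71, 74, 76]
```

Because the pentatonic has no semitone clashes, any two notes picked at random sound fine together, which is a big part of why the result comes across as pure and meditative.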
Today is the first time I've been proud of the visuals I've made. I used the audio analyser to manipulate parameters in TOPs. It's pretty effective, although the reactiveness still lags slightly behind; I believe it's something to do with the generative particle aspect. It will be good to try something less CPU-intensive. Adding fast rotation seems to create cool results as well. I'll definitely come back to this. You can watch the visuals > here <
Written by Jack Cleary