My concept for my final major project has progressed a lot. I'm shifting my focus to a performed visual art piece, mainly because of the gig I played with Louis Sterling the other week. Why so? Well, it was a lot of fun, I want to do it more, and performances are an engaging experience. I'm not sure I'm going to completely ditch the generative aspect, but if the performance side is a struggle to get together I'll have to drop the generative side.
So what's the new idea? I want to use multiple channels of audio to add coloured lines that move to the sound of the music. Using the visuals I did for Louis as a foundation (which are pretty much what I just described), I started building a new patch with colour added. I did this by duplicating the geometry, sending each copy to its own render, then adding a Level TOP, which can be used to change the colour of TOPs. I then sent all the geometry to a Composite TOP to bring them together. The idea is that each channel of audio has its own colour and vibration (from the sound), which will create complex coloured patterns.
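Outside of TouchDesigner, the per-channel colour idea can be sketched in plain Python; the function names and the four-channel count here are just placeholders for illustration:

```python
import colorsys

def channel_colour(index, n_channels):
    """Spread channels evenly around the hue wheel; returns RGB in 0-1."""
    hue = index / n_channels
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)

def displace_line(points, amplitude):
    """Offset each point of a line vertically by a channel's current level."""
    return [(x, y + amplitude) for x, y in points]

line = [(x / 4, 0.0) for x in range(5)]   # a horizontal line of 5 points
for ch in range(4):                        # four audio channels
    r, g, b = channel_colour(ch, 4)        # this channel's unique colour
    moved = displace_line(line, 0.1 * ch)  # stand-in for a live audio level
```

In the actual patch the displacement comes from the audio CHOPs and the colour from the Level TOP, but the mapping is the same shape: one colour and one motion source per channel.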
I have a couple of instrumentalists interested in helping with the performances. Ebony Grace & Josh Thomas from The Case of Us will be bringing the sound to life with synthesisers. Charles Kew & Rosie Robinson have also said they will help out, but as of now it's not confirmed. I'm also thinking about asking my friend El Goraicy if he would like to play synth too.
Written by Jack Cleary
It has been a week since I updated the blog, and I feel like my progress has somewhat slowed down. But I may just be reaching a point where I needed to reflect a little. Not to say I haven't been busy; I've been trying to learn things for the next stage of the development of my final major project. Having such a wonderful & knowledgeable group of friends around has definitely been crucial to my development; talking about ideas within the visual realm is eye-opening and many ideas come from that. My friend Jedd Winterburn introduced me to the idea of using TOPs as textures for geometry. The result is something more organic and with depth, which my visuals have lacked until this point. The gist of it is using a PBR shader and an environment light as a material for the geometry. You can use a TOP with the environment light, and it's worth experimenting with different images to see how they affect the outcome. The PBR shader is then used as the material for the geometry, and you can change the properties of that PBR shader to make it more reflective or rough etc. I found that using TOPs that move on the surface of a geometry is very interesting; I could imagine once the geometry has some manipulation and movement to it you will have something very striking. I think using a particle system with different movement from wind etc. will create some great results. You can see the moving visuals on a geometry > here <
A CHOP that is really cool for working with audio is the Wave CHOP; it's a good way of sculpting the data from the audio. By this I mean it creates different curves of data from the audio input. I used this filtered data as instancing data for the geometry in the visuals below. An interesting thing came out of doing all this visual work: my friend and longtime collaborator Louis Sterling saw the visuals and asked me if I could perform them at his gig on the 15th of February, although he asked me the day before. But most of the work was already done; I mainly had to find a way of performing with them. To perform with TouchDesigner I was pretty much only changing values in the Wave CHOP, which was enough to create performative visuals that were striking and dynamic. Because of the nature of playing live I tried using TouchOSC on my phone to control the parameters. I had luck at home with a stable internet connection, but in the venue it was a no go. For some reason the internet wasn't working at the venue, which makes me sceptical about how it's going to work over the university's internet. I am going to have to test it out in the near future. I am going to go in soon with my computer to try working with visuals and audio together, although I think the first session will literally be trying to figure out how the computers will communicate with each other. It's a shame I don't have a space of my own to do this, but it's ok, some day.
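The Wave CHOP's curve-sculpting can be roughly imitated in plain Python. This is only a sketch of the idea; the shape names are mine, not the CHOP's actual parameter values:

```python
import math

def wave_shape(phase, shape="sine"):
    """Rough stand-in for the Wave CHOP's curve types (phase in 0-1)."""
    if shape == "sine":
        return math.sin(2 * math.pi * phase)
    if shape == "ramp":
        return 2 * phase - 1
    if shape == "square":
        return 1.0 if phase < 0.5 else -1.0
    raise ValueError(shape)

def sculpt(audio_levels, shape="sine"):
    """Map raw audio levels through a wave curve, giving shaped data
    that can then drive instancing instead of the raw signal."""
    return [wave_shape(level % 1.0, shape) for level in audio_levels]
```

Swapping the shape (or its frequency and amplitude in the real CHOP) is what makes such small parameter tweaks so effective live: the same audio drives visibly different motion.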
What this work has brought into my hemisphere is that I could possible have one visual idea that is changing via parameters in touch through OSC. Depending if it works, OSC can also change the lighting which is dope to. I am going to try this out very soon. I just need to find the energy to bring my gear into uni.
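For the OSC side, here is a rough sketch of what a single TouchOSC fader message looks like on the wire, built by hand in Python. The /wave/amp address is a made-up example; in practice a library like python-osc, or TouchDesigner's OSC In CHOP, handles all of this:

```python
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-terminate and pad to a multiple of 4 bytes, as OSC requires."""
    data += b"\x00"
    while len(data) % 4:
        data += b"\x00"
    return data

def osc_float_message(address: str, value: float) -> bytes:
    """Build a raw OSC message carrying one float, e.g. a fader position."""
    return osc_pad(address.encode()) + osc_pad(b",f") + struct.pack(">f", value)

packet = osc_float_message("/wave/amp", 0.5)  # 20 bytes, sent over UDP
```

Since OSC normally rides on UDP over the local network, a flaky venue connection kills it; a direct ad-hoc network between phone and laptop is a common workaround worth testing before relying on university wifi.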
See below an example of the visuals I did for the gig.
Written by Jack Cleary
Today I spent a large amount of time in the sound diffusion lab creating visuals in TouchDesigner. I had an idea of taking loads of lines and manipulating them to create a string vibrating effect. The easiest way of doing this was through instancing, which I learnt about through a tutorial by programming for people. You can watch it via > this link <
Through instancing you can take the points of a SOP and tell a Geometry COMP to use them to place more SOPs. This means you can take the points of a square and assign a circle to each of the points via the Geometry COMP, which results in 4 circles appearing at the points. I used a line with multiple points to place lines next to each other. Then I used an audio track to manipulate the points, so that you have multiple lines vibrating next to each other. Once that was rendered, I duplicated it and turned one on its side. To get the visuals down to small points, I used a Composite TOP so that they would show only where the lines intersect.
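The instancing idea can be sketched outside TouchDesigner as plain point arithmetic. This is only an illustration; the shapes and names are made up:

```python
def instance(template_points, copy_shape):
    """Place a copy of `copy_shape` at every point of `template_points`,
    mimicking what the Geometry COMP does when instancing a SOP."""
    return [[(px + cx, py + cy) for cx, cy in copy_shape]
            for px, py in template_points]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]                      # 4 template points
circle = [(0.1, 0.0), (0.0, 0.1), (-0.1, 0.0), (0.0, -0.1)]    # crude circle
instances = instance(square, circle)                           # 4 circles, one per corner
```

The point being: the template SOP only supplies positions, so wiggling those positions with audio wiggles every instanced copy at once, which is why one CHOP can vibrate a whole field of lines.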
I am spending a lot of time developing different visuals to use. I think this will be the best approach, since I can refine them at a later stage. More visuals on the way...
Written by Jack Cleary
Today my friend Sonny lent me his melodica; the intention was to record it into Ableton alongside the generative music. After recording, I took the phrases and placed them in a sampler (Kontakt). I did this by making a new audio track next to the recording of the melodica. I then used Ableton's selector tool to drag the phrases I wanted onto the new audio track. Once I had all the phrases I wanted, I consolidated them into individual clips. They then appear inside the project folder. From there I dragged them into Kontakt. I then used PureData to randomly generate the MIDI notes that went to the sampler. This went as I thought it would: very smooth, no problems with timing. I think this is because the instrument is used melodically. I posted a video on Instagram of me playing the melodica into Ableton, although you can't hear the generative music. You can watch the video > here <
I also thought it might be a cool idea to double up the phrases. So I tried using a delay object in PureData to trigger a random number between 0-3, which went into a select object that outputted a bang when a 1 was generated. This then went back into the note select to trigger a second sample. It worked well and I will probably use this technique in future. I do need to mess around with the values to get the desired effect.
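As a sanity check on the odds, the delay/random/select chain can be sketched in Python: PureData's [random 4] outputs 0-3, so banging only on a 1 gives roughly a 1-in-4 chance of a doubled phrase.

```python
import random

def maybe_double(note, chance_range=4, hit=1, rng=random):
    """Mimic the PD chain: random 0..(chance_range-1) into a select object
    that bangs a second copy of the sample only when `hit` comes out."""
    doubled = rng.randrange(chance_range) == hit
    return [note, note] if doubled else [note]

played = [maybe_double(60) for _ in range(8)]  # note 60 = middle C, arbitrary
```

Widening chance_range (i.e. [random 8] instead of [random 4]) is the "messing with the values" part: it directly thins out how often the doubling fires.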
Written by Jack Cleary
Today I made some good progress on some work. The first thing I worked on was using Ableton and PureData to generate music. I was working on my idea for what the yellow part of the installation would sound like. I wanted to make a composition where it sounded like the sounds were chasing each other. In Ableton I made percussive sounds from three software synthesisers (FM8, Massive & Element). I had ctlouts manipulating the panning of the sound, which was pretty effective when the volume was equal on each speaker. The percussive sounds were tuned to the G pentatonic scale. I did this because I believe it sounds very childlike. I showed my course mate Louis Sterling and he called it meditative. It has a very pure vibe; it's definitely one of the directions I want to go, although I want to add some breaks in there, it's just a matter of programming them in. The next step for the music is adding a lead-type instrument. I want to use a melodica for this; my course mate Sonny has said he will lend me his. I plan on recording some simple phrases and then triggering them inside Kontakt.
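For reference, a quick Python sketch of picking notes from the scale. I'm assuming G major pentatonic starting at G4 (MIDI note 67); the actual patch does this with random and select objects in PureData:

```python
import random

# G major pentatonic (G A B D E) across one octave, as MIDI note numbers.
G_PENTATONIC = [67, 69, 71, 74, 76]

def random_phrase(length, rng=random):
    """Pick percussive hits from the scale, as the PD patch chooses notes."""
    return [rng.choice(G_PENTATONIC) for _ in range(length)]

phrase = random_phrase(8)
```

Constraining random picks to a pentatonic scale is what keeps purely random note choices sounding consonant: with no semitone clashes in the scale, any two notes sound fine together.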
Today is the first time I've been proud of the visuals I've made. I used the audio analyser to manipulate parameters in TOPs. It's pretty effective, although the reactiveness is still slightly behind. I believe it's something to do with the generative particle aspect. It will be good to try something less CPU intensive. Adding fast rotation seems to create cool results as well. I'll definitely come back to this. You can watch the visuals > here <
Written by Jack Cleary
Today I woke up dreaming about my final major project; it's safe to say I'm excited for the coming months of grinding work. Although that may sound sarcastic, it's not! I got in early at the studio to play around with some TouchDesigner. I believe this will be the most challenging part of my generative audio/visual environment. Since I want the image and the audio to be tightly synchronised, it's even more challenging. I've also heard from my colleagues on the Digital Music & Sound Art course that the MIDI interfacing is still in a beta stage and isn't easy to use. Another worry of mine is switching between different kinds of visuals.
One way of creating some synchronisation with the audio is using the audio to affect the visuals; I did this in my Practice 7 module. That was a team project and the visuals were outsourced (more on that here). I did some of that today, and it was ok; you could tell the audio was having an effect, but it was pretty delayed. I'll play around more with this since it could be an effective way of manipulating the visuals. Here are some examples of the work I did today. To get myself going I followed a Matthew Regan tutorial, although I don't fully understand it yet. Link > here <
When I got home I worked between two tasks:
Setting up PureData to communicate with Logic
Setting up a quadraphonic system in my bedroom (I definitely don't have the room for it)
At the beginning of my session I thought I was having some success with triggering MIDI in Logic; I thought I was well on my way to getting it to work. I had some notes triggering a sampled marimba. I then set up my quadraphonic system fairly quickly; it took me just under an hour. I then had ctlouts changing surround panner values, which was pretty effective. I set up the metro and random objects to give me values, and the line object smoothed the transition from value to value. This moved the sound in the space very well. It inspired an idea for the colour yellow: I thought I could have two rhythmical elements chasing each other around the room. But I digress.
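The metro/random/line chain can be sketched in Python to show why the panning sounds smooth; the step counts and seed here are arbitrary:

```python
import random

def line_ramp(start, target, steps):
    """Linearly interpolate like PD's [line] object, smoothing value jumps."""
    return [start + (target - start) * i / steps for i in range(1, steps + 1)]

def pan_path(n_targets, steps=4, rng=random.Random(1)):
    """metro + random: pick new pan targets, then glide between them."""
    value, path = 0.0, []
    for _ in range(n_targets):
        target = rng.random()              # random's job: where to go next
        path += line_ramp(value, target, steps)  # line's job: how to get there
        value = target
    return path
```

Without the ramp, each new random value would land as an audible jump in the surround panner; with it, the sound sweeps through the room instead.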
After getting the surround panning to work, when I was most hopeful, I hit what appeared to be an unbreakable brick wall. When I added a second instrument channel and tried to input data from PureData, many difficult bugs occurred. I could get one MIDI channel to play notes, but any more than that and the notes cut in and out, like they were phase cancelling, but with MIDI notes. Very weird. I also couldn't send ctlouts to more than one channel. After some time I decided to quit and move back to Ableton. I still wanted the surround, so I followed a tutorial by Eric Kuehnl which shows you how to get surround in Ableton 9. View > here < This was effective, although I didn't get around to programming any movement from PureData. That is on tomorrow's agenda.
Written by Jack Cleary
Today I worked on the lighting for my final major project at the University of Brighton. The Digital Music & Sound Art course has DMX lighting installed in one of the studios. Using PureData, the technicians were able to control the colours and intensity of the lights via the computer.
I plan on using generative processes in PureData to change the colours of the lighting. PureData will also generate the music and visuals (powered by TouchDesigner). All three elements will be used to create a variety of moods that can theoretically fluctuate forever. Some good progress was made today in setting the values; next time I work I will be randomly generating movement for the lights. Watch >here<.
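For the DMX side, here is a sketch of how RGB levels sit in a 512-channel DMX universe. The channel layout is an assumption for illustration; real fixtures map channels differently, and the studio's PureData patch handles the actual output:

```python
def set_rgb(frame, start_channel, r, g, b):
    """Write one fixture's RGB levels (0-255) into a 512-channel DMX frame.
    Channels are 1-indexed, as DMX addresses usually are."""
    frame[start_channel - 1:start_channel + 2] = [r, g, b]
    return frame

frame = [0] * 512                  # one DMX universe
set_rgb(frame, 1, 255, 120, 0)     # a warm orange on the first fixture
```

Generative colour then just means recomputing those three channel values over time, the same way the patch already generates note and pan values.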
Another thing I wanted to talk about is some inspiration. I was listening to Al Gromer Khan and Amelia Cuni's collaboration Monsoon Point whilst having a bath. Perfect music for relaxing to. The music complemented the sensation of the hot water surrounding my body. The experience is euphoric, almost like tripping; the heat makes your heartbeat faster and stronger. This becomes a pulse that you feel through your body, thus becoming a rhythmical element to the music. After some time you start to hallucinate; light flickers in a way you can't describe. Amelia's voice was especially breathtaking; it's soothing and otherworldly. There's something about the experience of the bath I would like to capture in the piece. I imagine it would fit with orange lighting and some pulsating visuals and sound. I will record some vocals and generate them in the patch. I'll do this by placing audio clips in a sampler in Logic, generating MIDI notes from PureData and sending them to the sampler.
Written by Jack Cleary
I've been thinking a lot about the concept for my final major project over the last few days, and how it will be perceived by a general audience. There are two essential pillars to my idea: the infinite, and fluctus, which is Latin for wave.
The infinite is expressed through the near limitless combinations of generative compositional states. A compositional state is a concept that I'm testing. Its purpose is to describe a musical environment which can be encapsulated within a few words. For example, a piece of music may sound busy and harmonically exciting; these characteristics describe the state the music is in. My idea is to create 24 of these compositional states. Each state will be able to transition to 8 other states. This form creates a near limitless number of different combinations of compositional states. Each state will also have an element of randomness to it, which means the amount of variation in the installation is infinite.
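The 24-states-with-8-exits idea can be sketched as a random walk over a transition table. The particular neighbours below are randomly chosen stand-ins for the real, hand-designed transitions:

```python
import random

N_STATES = 24
rng = random.Random(42)

# Each state can move to 8 of the other 23 states (placeholder choices).
transitions = {s: rng.sample([t for t in range(N_STATES) if t != s], 8)
               for s in range(N_STATES)}

def walk(start, length, rng=rng):
    """Wander through compositional states; every step is a random pick
    among that state's 8 neighbours, so paths rarely repeat."""
    path = [start]
    for _ in range(length):
        path.append(rng.choice(transitions[path[-1]]))
    return path
```

Even before adding per-state randomness, the number of distinct length-n paths grows as 8^n, which is where the "near limitless combinations" claim comes from.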
The next layer of the concept is waves. I was inspired by a Michael Pierce lecture that I've linked below. In the talk, Pierce discusses Heraclitus' two main concepts: eternal flux and eternal logos. Without going into much detail: things are constantly in flux, but they are governed by rules, which is the logos. This is a central idea in my piece; the music never repeats itself exactly, but it is governed by a set of rules.
On a less philosophical level, the wave aspect is also connected to the medium. Sound and light are the two elements that will change from state to state, and in physics both sound and colour are waves.
Through these thoughts I've been able to summarise the project as a:
Generative audio/visual installation with near limitless combination of compositional states.
I'm becoming aware that I need to start conceptualising the different compositional states, so I'll post some material on that in the near future.
Written by Jack Cleary
At this moment in time I am working on my final major project for university. The project is a generative audio/visual installation which uses brightness and colour as a compositional tool. Both the audio and visuals can be described as bright and dark; both the colour wheel and the circle of fifths have a bright and a dark side. This idea was inspired by Adam Neely's video "Why is major "Happy"" (Link to Video). I intend on using brightness, colour and sound to create moods. Colour will symbolise emotions, such as blue being sadness/depression etc. Sound will act to support the visuals. The generative processes will be realised through PureData, which will control the audio and visuals.
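One possible way to line the two wheels up in code. This mapping is my own assumption for illustration, not something taken from Neely's video:

```python
# Keys ordered by fifths; C is treated as the "bright" pole and F#,
# a tritone away, as the "dark" pole (an assumption, not a rule).
CIRCLE_OF_FIFTHS = ["C", "G", "D", "A", "E", "B", "F#", "Db", "Ab", "Eb", "Bb", "F"]

def key_to_hue(key):
    """Spread the 12 keys evenly around the 360-degree hue wheel."""
    return CIRCLE_OF_FIFTHS.index(key) * 360 / 12

def brightness(key):
    """Rough 0-1 'brightness': distance around the circle from C."""
    steps = CIRCLE_OF_FIFTHS.index(key)
    return 1.0 - min(steps, 12 - steps) / 6
```

With a pairing like this, one generative value can drive both the lighting hue and the key centre at once, which is the whole point of using colour as a compositional tool.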
John Cage is an influential figure in avant-garde composition. Chance operations are a defining feature of his music. His work has influenced many artists, and me personally. In my first year at Brighton University on the Digital Music & Sound Art course I created the composition "Magnetic Soundscape", in which I used chance operations to determine what actions to take.
In my latest work for my final major project I use the same idea of chance, but this time I use PureData (a node-based coding environment) to randomly generate MIDI data. This data is then sent to Ableton, triggering notes and control values.
Sometimes you find yourself short of time between writing lyrics and band practice. If this is the case: take a notepad, record a rough version of the track on your phone, cram in the car and get the creative juices flowing whilst operating heavy machinery!
Oh sorry, where are my manners? Hello, my name is Jack Cleary and I am the music producer and writer of this newborn blog. I wish to share with you stories from my career as a music producer; I work with some interesting and amazing people! One of those people is Vincent, also known as Fitz from Fitz & Yeah. When I met Vincent it became apparent to me that he was a very talented lyric writer! Any producer with their head screwed on will tell you to keep the talented lyricists close; they are few and far between!
So what were we writing in Vincent's car on the way to practice? We were writing the lyrics for a soulfully electronic track I had composed and produced about 2 months ago! I knew it had to have a vocal on it (although I lack the ability to write good lyrics without them coming out cheesier than cheddar)! So I acquired help from Vincent. Even though I had capable hands on board, we definitely struggled to write the lyrics! Vincent said "I've never spent so long writing lyrics for a track"! Over the last 2 months we have grouped together four or five times to write the extremely difficult lyrics! Yesterday we agreed to meet to finish the last verse; we were determined to get the lyrics finished, but it seemed Brighton and Hove City Council were out to stop us! North Street (the vein of the city) had been closed and the city was congested with traffic, especially on the road leading to my house! Originally we planned to meet at 2pm; Vincent didn't arrive until fifteen minutes past 3, at which point we had to set off to band practice. Firstly we had to drive to Lorna's (the bassist) place to pick her up. On the trip to Lorna's we got the first half of the second verse finished, but the lyric flow came to a halt as we picked up Lorna and turned our travels onwards to Michael's (the drummer). On arriving, Vincent, Lorna, Michael and I practiced vigorously for 3 hours solid. If you don't know, band practices can be mentally draining and tiring, and this was one of those practices. After practice, I started to feel the euphoric tiredness kick in. I don't know if it helped the lyric process, but despite all of our troubles we had finally finished the lyrics; celebratory selfies were in order!
So what is the next step? Now that the lyrics are done we must record them soulful vocals, which will be provided by my sister Angel! I shall write a follow-up blog post after recording the vocals.
The track will arrive to you in November, to make sure you get the release for FREE sign up to my newsletter through my website, just look to the side bar!
In the meantime, you can get another FREE track entitled JazzyMan from my upcoming debut album "The One Who Became Many" at my Bandcamp:
Fitz & Yeah band page:
Angel Cleary’s Cover of Black & Gold by Sam Sparro:
Angel Cleary's and Jack Cleary's cover of You Know I'm No Good by Amy Winehouse: