Imaginaerum OST – behind the scenes

Now that my official Imaginaerum OST gig is close to its final stages (the 7.1 stems are already in Canada as I write this), I’m taking some time to open up a few techniques used in the sound design and pre-production stage, concerning both the creative method and the application/plug-in side. I wrote a rather sketchy blog article focusing on the project as a whole for Nightwish’s Imaginaerum The Movie facebook page (on.fb.me/H2onWf). Being such a geek – well, a 30% geek, because the other 70% wants to rock with my cock and other pets out – I wanted to elaborate on the technical side a bit. I’d say the experience level required for reading this is somewhere between beginner and intermediate. There’s nothing new or groundbreaking in what I’ve done; that must be said right here, at the beginning.

Plunging into the world of a finished track by means of a Pro Tools project file and everything it consists of is a really long day’s work. Usually it took two to three days just to prepare the raw material for further processing, do the rough edits, truncate beginnings and ends, etc. Luckily Nightwish’s mixing engineer, Mikko Karmila, is a hard-boiled pro who names everything carefully and keeps everything really tidy. The project files were perfect. There were even short notes about the microphones used, made by him or an assistant engineer. It was a peek into a magical world, as I’ve admired Karmila’s productions for as long as I can remember, and he’s also one of the very few people I truly appreciate.

At the listening/selecting stage I had virtually no idea what the tracks would turn into. I just listened track after track after track, starting from Jukka’s drums, his overhead stereo file, room mics… everything, and proceeded further track by track. If there was a nice note or a rhythmic thing that caught my ear, I’d cut it out and immediately check whether the neighbouring channels shared something with what I had just cut. Sometimes there was suitable audio material, sometimes not. I tried to find patterns, both rhythmic and harmonic. For instance, Marco’s bass playing usually accompanied Emppu’s guitar shredding, so it was natural to chop out matching one-bar, two-bar and four-bar performances from the neighbours as well, whatever there was.

With Jukka’s rhythmical stuff, I bounced some stems from his separate kick-snare-hat-toms-cymbals(overhead)-room-whatnot tracks and transferred the stems into Ableton Live. For the non-musicians: Ableton Live is a clever piece of software that allows time and pitch to be handled separately, independently of each other. Usually, if you lower the playback speed, the pitch of the signal lowers too. Just like on a cassette or vinyl. (If you’ve been around long enough to use such machinery.) That speed/pitch treatment could be done in pretty much any other tool or DAW as well, Pro Tools among them, but Ableton Live’s got many nice built-in tools and plug-ins for mangling the audio beyond recognition. My favorites for drum/percussion treatment in Live are Corpus and Resonators. I also did some Max for Live programming for weird higher-end audio treats. Stuff that mangles stuff, you know.
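To make the time/pitch decoupling concrete, here’s a minimal sketch using librosa’s stretching and shifting helpers – the file name and amounts are just assumptions for the demo, not anything from the actual sessions:

```python
# A minimal sketch of time/pitch decoupling, using librosa's helpers.
# "drum_stem.wav" is a hypothetical bounce, not an actual session file.
import librosa
import soundfile as sf

y, sr = librosa.load("drum_stem.wav", sr=None)

# Tape/vinyl behaviour: resampling changes speed AND pitch together.
# Doubling the sample count and playing it back at the original rate
# gives half speed, one octave down.
tape_style = librosa.resample(y, orig_sr=sr, target_sr=sr * 2)

# The decoupled treatments: one parameter moves, the other stays put.
slower_same_pitch = librosa.effects.time_stretch(y, rate=0.5)          # half speed, same pitch
lower_same_tempo = librosa.effects.pitch_shift(y, sr=sr, n_steps=-12)  # octave down, same tempo

sf.write("tape_style.wav", tape_style, sr)
sf.write("slower_same_pitch.wav", slower_same_pitch, sr)
sf.write("lower_same_tempo.wav", lower_same_tempo, sr)
```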

A good mixture of Resonators and Max stuff can be heard in Last Ride Of The Day, which will be named differently for the movie and the OST. A rollercoaster usually makes a ticking noise when speeding along the track – so I immediately thought of taking Jukka’s overhead – high-pitched, crystal clear cymbals/hihat/leaking drums – and chopping it into 1/16th notes, carefully fading out the end of each hit. I added a resonator to enhance the metallic tone, then put it through Corpus to add some “noise components”, then added some multiband compression to tame the wild lower-middle frequencies. That was probably one of the easiest things. Usually the process got out of hand when I just kept going further and further for hours, just playing around. It didn’t take long before “a path” formed inside my brain – I’d hear a raw track and immediately figure out what it could be turned into. After the first week I was literally swimming in files. There were thousands of them, everywhere. Folder after folder filled up. If I prepared a pitched loop, I’d prepare every semitone as well, so if the pitch was supposed to spread over two octaves, I made a separate sample for each key. Transposing a sample sounds worse than playing a specifically prepared sample for that particular key, in my opinion. To make things even more complicated, the pitching alone was seldom enough. Usually I’d bring in Ircam Trax in conjunction with some other esoteric plug-ins. It was an amusement park, truly.
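For the curious, the rollercoaster tick prep boils down to something like the sketch below – the stem name and BPM are assumptions, and the fade length is a matter of taste:

```python
# The rollercoaster tick prep as a sketch: chop a stem into 1/16-note
# slices and fade out the tail of each hit. File name and BPM assumed.
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("overhead_stem.wav", sr=None)
bpm = 157.0                           # assumed cue tempo
slice_len = int(sr * 60 / bpm / 4)    # one 1/16 note, in samples

fade = np.linspace(1.0, 0.0, slice_len // 4)  # fade the last quarter of each slice
ticks = []
for start in range(0, len(y) - slice_len, slice_len):
    s = y[start:start + slice_len].copy()
    s[-len(fade):] *= fade            # carefully fade out the end of each hit
    ticks.append(s)

sf.write("ticks.wav", np.concatenate(ticks), sr)
```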

I really loved putting toms and kick drums (and taiko as well) through Corpus, turning them into huge bass synths with a percussive envelope. Arabesque is one of the tracks that utilised that technique. A booming, pulsating groove underneath the ethnic percussion… sweet. Delicious. If only Corpus were reliable when it comes to tuning. Let’s just say a lot depends on its input. Tough transients make it bow and bend, I’ve noticed. When these sub-bass pitched components were mixed in with the original instrument, together they created something wonderful. I knew I was on the right track.
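Corpus is an Ableton-only device, so I can’t show its internals – but a rough stand-in for the idea might look like this sketch: pitch a hit way down, then reimpose the original percussive envelope so it still behaves like a drum (file names hypothetical):

```python
# A stand-in for the Corpus trick, not the device itself: drop a tom hit
# two octaves down, then reapply its original percussive envelope.
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("tom_hit.wav", sr=None)  # hypothetical single hit

sub = librosa.effects.pitch_shift(y, sr=sr, n_steps=-24)  # two octaves down

# Crude envelope follower: short-frame RMS of the original hit.
frame = 512
rms = librosa.feature.rms(y=y, frame_length=frame, hop_length=frame // 2)[0]
env = np.interp(np.arange(len(sub)), np.linspace(0, len(sub), num=len(rms)), rms)
env /= env.max() + 1e-9

sf.write("sub_boom.wav", sub * env, sr)  # mix this under the original tom
```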

Emppu’s and Marco’s tracks faced similar treatment, although the tools were somewhat different. I sometimes removed all noisy components (as in tonal versus noise) from the signal, leaving only a mass of sine waves – or rather, a shred part without the harsh edge… would it be called “fredding” then? The outcome of this process carries thousands of “glassy” artifacts, which sometimes are delicious, sometimes not. I went a bit far with some guitar stuff, as I pitch-perfected each string separately – and not only that, sometimes I removed all pitch fluctuation from the low A and E strings to “keep the package together”. Of course, after reamping, it was no longer a guitar as such; instead, it became almost an ethnic instrument due to the strange flavor it got in the tuning process. Luckily the stringed instruments provided me with enough material for some of those really low brass-sounding stabs that echo in, for example, the teaser. Of course, those ROAAARR stabs are always doubled (or rather, quadrupled) with something else, as it’s all about texture. A bit like looking at a really good picture: if you see a beautiful portrait, you probably see the skin first, then you pay attention to the skin’s glow, irregularities, maybe hair too, the makeup, clothes, lighting, posing, environment, dust in the air, the moustache on a woman… it’s not “a portrait” of a hairless naked transparent albino cave olm floating in space, it’s an interesting person in a realistic setting realised by professionals. The final work of art consists of thousands of details; layers.

Usually using just “a sample” creates that olm feeling. Layering two sounds makes just… a pair of olms. I’ll return to this a bit later.
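Before moving on: here is the tonal-versus-noise split from above in sketch form. Harmonic/percussive separation is only a crude approximation of a proper sinusoidal model (which is closer to what the job really needs), and the file names are made up:

```python
# A crude approximation of the tonal/noise split using HPSS.
import librosa
import soundfile as sf

y, sr = librosa.load("guitar_shred.wav", sr=None)  # hypothetical track

harmonic = librosa.effects.harmonic(y, margin=8.0)  # bigger margin = stricter split
noise = y - harmonic                                # pick attacks, fret noise, hiss

sf.write("sine_mass.wav", harmonic, sr)  # the glassy "fredded" part
sf.write("the_rest.wav", noise, sr)
```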

Sometimes the two shredding masters of Nightwish created a groove that had a magnificent “suck” to it; their playing was literally in the pocket. Unfortunately – forgive me for saying this right after praising their playing – the riffs they played quite often employed notes not necessarily feasible for transposing, all those semitone/tritone jumps. They are incredibly effective for pounding the stuff through in one key, but I couldn’t use them as such; I needed to cheat a bit. During the listening/preproduction stage, I usually start forming a track inside my head, not necessarily knowing what I’m doing – it’s an almost subconscious process, where I grab a chorus chord progression and try to fit in another part of the song, which often was in a different key. It’s a process of “I could use this. And this. Maybe that too, and those”, picking up what I loved in a certain song – or songs – then gluing them all together by means of technology. And often all of the aforementioned tracks/stems were in a totally different key. Including the orchestra. And the ambience channels, too.

Also, for creating the riffing guitar/bass parts, I had to create straight riffing/shredding patterns out of the original parts, which included a lot of jumps – jumps that had to be taken away while still retaining the original groove. I just had to cut out the “wrong” or “unneeded” notes and replace them with surrounding chugs/notes from similar subpositions elsewhere. Simply replacing the second 1/8th note of the second beat with something from a different beat destroyed the groove. Definitely not a good idea. So – I just had to listen to the tracks really carefully and find exactly the right beat, the right note, with which I could cover up the hole left in the original groove.
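In sketch form, the hole-patching rule is simple: a bad slice may only be replaced by material from the same metric subposition elsewhere in the take. The slice buffers and indices below are illustrative, not from the sessions:

```python
# Groove-preserving note replacement: patch a hole using only slices
# that share the bad slice's subposition within the bar.

def patch_groove(slices, bad_index, sixteenths_per_bar=16):
    """slices: a list of audio buffers, one per 1/16 note, in playing order."""
    subpos = bad_index % sixteenths_per_bar
    # Candidates: every other slice sitting on the SAME subposition.
    candidates = [i for i in range(len(slices))
                  if i % sixteenths_per_bar == subpos and i != bad_index]
    if not candidates:
        raise ValueError("nothing shares this subposition")
    # Prefer the nearest occurrence so the dynamics stay consistent.
    donor = min(candidates, key=lambda i: abs(i - bad_index))
    slices[bad_index] = slices[donor]
    return slices
```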

By now you’re probably thinking “why the hell didn’t he just call in the guys and have them play it for him” – but it would’ve been against the concept. And the timetables. When I started the pre-production phase, they had other engagements: the album, tour preparations, vacations… many different things. And – my luxury – I had plenty of time to mangle the stuff beyond recognition. Why wouldn’t I do that if I’m allowed to? Projects such as the Imaginaerum OST are a rarity. When I come across one, I use it for studying as well, as I’m constantly eager to learn new things.

After the holes were filled, I needed to create realistic transposing for the gtr/bass patterns. Also, as I really wasn’t sure which tempo/tempi certain tracks would settle into, I needed the shred patterns disconnected from their original BPM. Oh crap. Enter Native Instruments’ Kontakt 5 – a brilliant, brilliant sampler plug-in, a piece of software that works as a stand-alone application or, which is how I use it, as a plug-in inside my DAW (digital audio workstation) of choice, Logic. The previous version, NI Kontakt 4, had some issues with transpositional artifacts, especially when using the more specialized versions of the sample playback engines, Tone Machine as well as Time Machines 1 and 2. Luckily, version 5 fixed nearly all of the bugs, and the newest Time Machine Pro sounded realistic, even with signals containing a lot of distortion. After a few short tests I was convinced by the results, as the transposed sections blended in with the untransposed rather nicely. I did a few artificial transposed versions to extend the tuning ranges for more… well, dramatic sections. Drop C tuning was still a bit high at times… I was afraid I’d lose the coherence of the low end in Marco’s and Emppu’s playing, but with some help from my Symbolic Sound Kyma system, I was able to create a bunch of murderous, angry tones which kept their tuning really, really well. Kyma is a hardcore DSP scripting/processing/creating environment, virtually a tame black hole. You can easily lose days whilst noodling with it, just testing out a few algorithms. The most memorable sounds of the ’90s, ’00s and ’10s are – almost without exception – treated with Kyma. For a reason. That… thing is like a superhero amongst humans. The quality of its output is always above all expectations (greetings to Carla Scaletti and Kurt Hebel), even if the user were an idiot. I know what I’m talking about.

After the additional transposes were finished, it was time to put together a virtual Emppu/Marco entity. EmMa? Marppu? Emco? Just dragging the right samples onto the right notes in Kontakt 5 and spreading the key groups gave me reasonable assurance: yes, this works. I added a round-robin feature (which randomly selects a different, yet similar sample from a specified stack of samples, thus preventing the stupid machine-gun effect usually associated with badly prepared sample sets) and some volume control, filtering, even slightly randomized multiple EQ settings (only ±0.5 dB with narrow Q values)… every trick I knew – and it sounded even better. The finished EmMa (sorry, guys) allows a rather large tempo adjustment, ±30 BPM away from the original 157.0139, so I could use it in other OST cues as well. Brilliaaaaaant!
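A toy version of that round-robin logic, for the curious: pick a random sample from the stack, never the same one twice in a row, and nudge an EQ gain by at most ±0.5 dB per trigger. The sample names here are illustrative, not Kontakt script code:

```python
# Round-robin selection with anti-repeat and a narrow-Q EQ wobble.
import random

class RoundRobin:
    def __init__(self, samples):
        self.samples = samples
        self.last = None

    def trigger(self):
        # Never pick the same sample twice in a row (unless there is only one).
        choices = [s for s in self.samples if s is not self.last] or self.samples
        self.last = random.choice(choices)
        eq_gain_db = random.uniform(-0.5, 0.5)  # slight randomized EQ offset
        return self.last, eq_gain_db

rr = RoundRobin(["chug_a.wav", "chug_b.wav", "chug_c.wav"])
for _ in range(8):
    print(rr.trigger())  # no machine-gun repeats
```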

Making a virtual bassist from Peppepappa on Vimeo.

Note: a similar approach was taken with Emppu’s guitars. Nobody was left out. Kontakt 5 can be seen in the latter section of the video.

But without the true passion of the original performance, it would be nothing. Again I must emphasize the fact that the tracks oozed artistry and passion, yet they were executed with immaculate professionalism, no glitches whatsoever. It was every sound designer’s wet dream: not just a bunch of tracks for raw material, but a bunch of tracks with a Meaning. They had had a lot of fun during the recording, I’m sure. You can hear it.

Whenever retuning was needed (a rarity, unless I was after some effects), my weapons of choice were Melodyne (which produced quirky end results at times, probably due to complex input signals), AutoTune (for extreme quirky effects), Reaktor (grain cloud stuff and eerie stuff) and Zynaptiq’s PitchMap, which thankfully came out just in time before the final production phase. The last one, PitchMap (PM), got used a lot – in some song structures there were a dozen or so instances. PM allowed me to change pitches polyphonically in realtime, without preprocessing. I wish every DAW application had PM as a built-in tool; it’s that versatile.

Another trick I used often was Melodyne’s polyphonic correction tool (when it worked). I sometimes bounced an orchestral riser (a crescendo) off the Pro Tools projects, imported it into Melodyne and had it analyzed with the highest possible setting, then meticulously retuned the lower notes onto absolute pitches separated by an octave (or two). Even though the original riser was dissonant, the retuning brought it back to normality, and I was able to control all the harmonics in the retuning process, sometimes removing the pitch fluctuations completely from certain frequencies, sometimes exaggerating them by 100% – as much as I could. The results can be heard in the first reel of the movie, in the second open-air scene with the main characters. If you can hear a sound resembling a bomber plane screeching through the clouds at about 13 minutes and 44 seconds from the start, it’s an orchestra. Brass section. Note: as these are “work-in-progress” files, you’ll hear a lot of artifacts. The final files are in the Imaginaerum OST.
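The fluctuation trick is plain arithmetic when you strip it down: scale a note’s deviation from its target pitch. A scale of 0 flattens the note completely, a scale of 2 is the “exaggerate by 100%” setting. The contour values below are invented for the demo:

```python
# Scaling pitch fluctuation around a target note.
import numpy as np

def scale_fluctuation(cents_contour, k):
    """cents_contour: deviation from the target note, in cents, per frame."""
    return np.asarray(cents_contour) * k

wobble = [0.0, 12.0, -8.0, 20.0, -15.0]   # a player drifting around the note
print(scale_fluctuation(wobble, 0))        # [0. 0. 0. 0. 0.] -> dead straight
print(scale_fluctuation(wobble, 2))        # [0. 24. -16. 40. -30.] -> doubled drift
```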

I often doubled the EmMa stuff with staccato or spiccato strings, or celli and contrabass sections. I tried to use the original orchestral tracks as much as I could, but there were times when section and instrument leakage was so heavy I had to recreate the part with samples. The leakage is unavoidable; there are dozens of players in the same room, all blasting away at maximum volume, so they’re going to leak into each other’s microphones. I think they’d leak even if contact microphones were used (which grab the signal through a resonating solid material, not through air); the sheer volume of everybody playing forte fortissimo makes everything resonate: brass, wood, strings, cymbals. Everything. The leakage is part of the sound; it unifies the sound field, melting every single instrument into one big instrument: an orchestra. Without the leakage it would sound like a badly done synth orchestration, sterile and embalmed. Dead. Leakage & dirt = good. Just as in real life: good sex is always a bit dirty. I think I’ve said that before…

After some heavy testing, I chose LASS (Audiobro’s Los Angeles Scoring Strings sample library) to accompany the shredding. Sometimes I emphasized the sound with some other sample sets, even with some staccato low-end brasses as well, but on about 70% of occasions I managed with LASS alone – although with several layered sections doing slightly different things.

Yep. Layers. There are a lot of them. If there’s a pad, it’s not just one pad or blast or choir or whatever, it’s a choice of three, four, don’t-know-how-many sounds, each playing a slightly different part. The further away the notes of a certain part are from a layer’s home register, the quieter it is. Often thinner, too. Sometimes I layered a “resin” sample with a granular choir to make it sound a bit string-section-like. There was a lot of controller info, sometimes affecting a parametric EQ, sometimes the amount of granularity (i.e. more granularity creates a decomposed sound made of bits and pieces of the original signal)… and the layers were always tightly tied to certain frequency areas to keep the whole patch easily playable. Actually, now that I’m looking at the instrument sets I’ve done, I’m quite convinced that almost everything could be played live with some careful planning, but it would require a lot of work and a large sampler or a workstation. Maybe two or three.
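The balancing rule sketched out, for those who want to build their own stacks – the centre notes, falloff curve and floor here are illustrative numbers, not my actual patch settings:

```python
# Layer gain falls off with distance from the layer's home register.

def layer_gain(note, layer_centre, falloff_db_per_semitone=1.5, floor_db=-36.0):
    distance = abs(note - layer_centre)
    return max(floor_db, -falloff_db_per_semitone * distance)  # gain in dB

# Three layers stacked on one patch, each owning its own register:
for centre, name in [(36, "resin low"), (60, "granular choir"), (84, "air pad")]:
    print(name, f"{layer_gain(69, centre):+.1f} dB")  # playing A3 (MIDI note 69)
```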

Here’s a real-time demo à la Imaginaerum OST (just noodling, an improvisation):

As I said earlier, I’m more a player than a geek. I’m very fond of instruments that are simple to create with, which is my “secret weapon”. One can have the most incredible sample library at hand, but without at least decent usability it’s worthless. I seldom take sample libraries as is; usually I’ll spend at least a month or so creating my own set of instruments which I can handle easily, without extra thinking. In a creative area such as mine, it is imperative to know your tools and how to get everything out of them. You must concentrate on the creative side and rise above the technical issues. You must be able to think quickly, without hindrance. You must be a mad geek to tame all the power, but you also must be able to express your passion through playing. The tech is there to be conquered; it is a good servant but a lousy master.

I must admit I felt a bit embarrassed treating Tuomas’s keyboard tracks in any way. He’s an excellent player, and just being able to dissect his compositions felt a bit odd at first. I kept his keyboards in every cue for as long as possible, but since the original performances were audio only (no editable MIDI data available, thus preventing me from using his playing with sounds other than the recorded ones), I faced a dead end every now and then. For instance, if a cue needed a softer piano or fewer notes, it didn’t exist. It had to be recreated. To an extent, I could use Melodyne Editor. It took most of the piano parts without pain, but as soon as there was some harmonic or phase movement, it ran into problems. Luckily the aforementioned PitchMap arrived in time. However, if I needed to create a MIDI file from Tuomas’s playing, it had to be done with Melodyne, which can export the notes contained in an audio file as a standard MIDI file usable in every DAW application. The moral? A screwdriver isn’t enough. A carpenter needs a saw and a hammer too. And quite a few other tools as well.

Sometimes even that, the whole toolbox, didn’t do the trick, and I had to play a lot of stuff in myself. I aimed to maintain the original voicing and the choice of intervals and chord inversions whenever possible, but at times I had to create a section from scratch, or the movie needed something that couldn’t be found in the Pro Tools projects. Needless to say, I was overnervous when sending such a track to Tuomas and Stobe (Harju, the director of Imaginaerum, who also wrote most of the screenplay), because if there had been anything not worth a shit, they’d surely have kicked my ass immediately, although politely. Imagine Alan Rickman saying “how grand it must be to have the luxury of not fulfilling the expectations. Now go back to your pathetic chamber and concentrate like your life depended on it. Which it, of course, does.” To be honest, none of that ever happened, but I sure felt like a wand-waving apprentice at times. Turning gazelles into frogs, or, with some luck, into unicorns.

Of course, the processing didn’t end there. Sometimes I had to get rid of trombones leaking into everything else; sometimes – although it was a rarity – flutes were playing on top of a perfectly usable high violin/viola motif. Eventually I started questioning my sanity, and especially the methods I was using. Did I really have to do this and that? I spent literally dozens of hours with Sonicworx•Isolate, which allows one to “draw off” unwanted signals. By December, it was really painful. I found errors in my versions, things that didn’t fit in, or were just lame, repeating themselves, or sub-quality in my opinion. Or did I just imagine everything? The project started to haunt me, and for a short while I was really angry about letting people hear something I wasn’t sure of. During the first two weeks of December I re-did a lot of stuff, partially due to the first finished cut of the movie, as there were a lot of off-beat picture changes I had to take into account – make tempo changes, re-arrange, etc. Slowly, the versions were reformed and it was as if a new backbone had been installed. Finally, the clicking into place started.

I’ve used the term loosely here, “clicking”. But it describes perfectly the feel I was after. Puzzle pieces should just click; they shouldn’t be forced into place. If force were used, the pieces would break and be rendered unusable, creating just a mess. I felt the score needed a few lighter versions to counterbalance the angry and sinister side. Mermaids and CrowOwlDove happened that way – actually, even though COD is a shorter version than its original, it is one of my favorites, as the chorus/verse parts were mixed up and the song structure really builds into something else. It was a song virtually conceived by mixing up the pieces and rebuilding it blind. Or rather, deaf. I just looked at the previews, the sausages – the continuous waveform “wire” – and cut the pieces without having my loudspeakers on, just trusting my instinct. The end result was surprising. Of course I had to smooth out a few rough edges, but it did sound rather good. I fixed a few clashing notes in the lead vocal and that was it.

If I had to explain why I chose the parts I chose, I think my most overused answer would be “I felt that one would be a nice one”. I made those choices very, very quickly during the first few listening rounds – I just ran blind, trusting my gut feeling. There were bass hits, single drums, percussion rolls, orchestra blasts/crescendi/whatnot – and sometimes I chose just noise, noise from the orchestra’s ambient channels. Sometimes I chose a leak, say, a trombone leaking into the double bass channel, or vice versa. I wasn’t after the most beautiful single note or phrase; I was constantly looking for dirt and interesting, fiddly parts that would benefit a lot from corrections and/or editing. A lot of ambience stretching, denoising and retuning was done to create the surreal pad instruments often decorating the simpler scenes. Of course, there are a lot of commercial libraries and software instruments involved, as well as my trusty hardware gear – especially my modular synth, which processed a lot of the tracks and even got used as an ambient piano in the opening scene! There are a lot of lower-midrange growls from my trusty old Oberheim Xpander. Some guitar parts were doubled with a Dave Smith Mopho Keyboard, a tiny monophonic analog synth, as its sound structure is just perfect for my use, and I can get it to sound like an angry banshee on a charcoal grill.

More about bad samples: just using a good snippet would be comparable to taking a picture of Mona Lisa’s shoulder and gluing it onto your canvas. Then repeating that, until you’ve got a Mona Lisa Polaroid Puzzle. It could be interesting, but the outcome would still be merely a bad copy of an original masterpiece. My philosophy was to sketch Mona Lisa’s shoulder onto white paper with a pencil, then take my Polaroid and mangle it beyond recognition, creating a “multi-technique collage” that would possibly have a life of its own, disconnected from its origins, yet nodding towards them with respect. I just wish I could’ve used those funny and dirty-sounding eerie pads even more, but the movie had almost an action vibe to it, especially closer to the end.

And that vibe needed a healthy dose of percussive instruments. Some of it is from Tonehammer, which later on became 8dio and Soundiron; some of it is my own library, which I’ve recorded throughout the years – I began doing this in high school, and some of the samples I use even today were originally sampled into an Ensoniq Mirage back in 1986. Oh, the agony of not having enough memory! Things are quite different today. Sometimes I needed to employ three computers: one took care of the percussion, another took the strings, and the main machine did the rest. They were connected by means of Vienna Ensemble Pro 5, which allows one to connect multiple computers with an Ethernet cable, carrying MIDI and audio signals. During December, I installed multiple SSD drives into my system, removing the “your computer is too slow” message almost completely. I regret not doing that earlier. Not every cue required three computers, though, especially after the SSDs entered the house. For about 95% of the final mixing/editing time, one Mac Pro was enough. It was only the most crowded string and percussion arrangements that needed a few extra hands.

I personally happen to like lower frequencies a lot. With some careful programming and arranging one can cause the toughest cold turkeys, making every hair stand on end as if it were about to drop off suicidally. In the autumn of 2011 I began experimenting with piano and harp, as well as brass instruments, which I noticed benefited from having their tuning pitched down. I collected a lot of lower-midrange and bass instruments from my libraries and the Nightwish Pro Tools sessions, and created some hybrid instruments based on the lowered samples. The results? Let’s just say the tripods from The War Of The Worlds remake were their little brothers. It was also a respectful nod in the other epic direction: Inception. It was almost obligatory to incorporate keyboards into the sound, both Tuomas and myself being keyboardists. I also happen to have a background in playing pipe organs (I probably had the best teacher in the world – whom I didn’t appreciate at the time, unfortunately: Kalevi Kiviniemi), so I put in some of the lower, richer reed stops that were recorded in a church in Lahti back in the early ’90s. I’d definitely like to re-do that session, by the way. That and a grand piano. And a harp. And EmMa. And… a Novachord? Actually, not a real one (as it’s beyond most people’s budget).

Improvised à la Imaginaerum OST:

(By the way, look for Kalevi Kiviniemi’s version of Gabriel Pierné’s Prélude, Op. 29, No. 1 on iTunes and let it play – it grows slowly. By 1’35” it turns into a gothic masterpiece. THAT is how you play the organ; note also the grand, lush ambience of the gothic church it was recorded in. The G chord that ends it is something unreal. I’ve been practising a lot lately, so I might match him as early as 2040. Maybe sooner, with some unexpected luck.)

With some songs I regretted the order in which they had to be incorporated into the movie. For instance, one of the “underused” tracks was Song Of Myself, an epic that should’ve had more room, but alas, the script was changed and most of it had to be taken away. The most beautiful part of it is still there, luckily. Without giving away any hints of the plot or the script, I’ll say it underlines one of the most beautiful and wistful moments of the movie, with a firm yet tender voice.

I used a lot of choir and vocal takes from Imaginaerum The Album – but left most of them alone; the choirs weren’t processed or turned into dinosaurs wailing at the peak of their climaxes, nothing like that. I felt it was important to preserve yet another element of “classic” Nightwish – although I did create a monstrous monk-chanting crowd based on Marco’s incredible voice. Anette’s lead vocals appear here and there, and on a few occasions one lead vocal and its doubles were turned into a choir of 30+ vocalists. Her voice, however, was so clear and pure that it felt a bit awkward to mutilate it in any way. A bit like hiring some Brad Pitt and putting a monster mask on him for the whole movie – what’s the point?

And no. Brad’s not in this film.

If you’ve really read this far, I’m nothing short of amazed. With all these references to technical issues, applications and plug-ins, the text easily becomes stodgy and onerous. I’m sure I’ll write a few more lines in the future – maybe dissecting a few scenes sonically, but that’s a totally different story.

Cheers, enjoy the ride!

Alan Wake’s Sound Design article

Mark Yeend, the Head of Audio at Microsoft Game Studios, was interviewed by designingsound.org, and what a brilliant read it is! True audio talents are rarely exposed to the public, and their best work goes unnoticed – mainly due to the fact that if the sound design is just right and flawless, you don’t pay any attention to it.

I don’t think I’ve given kudos to the three brilliant sound designers yet, which is unforgivable, but better late than never. I’ve played AW through twice now, with a third time under way, and finally I get to just explore and enjoy the surroundings. The woods sound particularly pleasant, if only the Taken stayed away…

Besides, AW was re-released via Xbox Live – as a complete package, so grab it now. It’s wintertime, the nights are long, the Taken are loose. Boo!

Read the article here: http://bit.ly/g6QjGQ – it’s a long one, containing input from Mark Yeend, Peter Hajba, Michael Schwendler, Peter Comley, Alan Rankin and myself. A great and thorough article.

Yamaha D-85 add-ons

Before I’ve looped all the necessary Custom Voices and Special Presets, I thought it would be wise to deliver a couple more coffee breakers: D-85 Arpeggiator and Bass Pedal.

If you’re too curious to find out what the D-85 does with its arpeggio engine, just click the lower left picture, scanned from the nicely written, child-friendly D-85 manual. I’m still unable to get over the fact that they really used hand-drawn illustrations back in the 1980s. In electronic device manuals. Unbelievable.

The Arpeggiator is just the sampled output of my D-85’s three arpeggio instruments (that is, without the arpeggiator engine, as one can add that later on with Kontakt 4’s scripts), with the modulation wheel controlling the decay time of the samples, whereas Bass Pedal is probably the most descriptive title ever: just that, one octave’s worth of everything the D-85’s bass department could ever produce, which, to be honest, is really not that much. Also, the Tuba and Bass 8′ sounds are missing from the package, for a reason: the coffee break was over before I was finished. 😀
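If you’d rather not dive into Kontakt’s scripting, a toy “up” pattern is easy to sketch outside the sampler too – here in Python with mido, writing a MIDI file you can drag onto the instrument. The pattern is my own invention, not one of the D-85’s actual presets (those are printed in the service manual):

```python
# A toy "up" arpeggio written out as a MIDI file with mido.
import mido

def up_arpeggio(chord, steps=16, ticks_per_step=120):
    mid = mido.MidiFile()
    track = mido.MidiTrack()
    mid.tracks.append(track)
    for i in range(steps):
        # Cycle through the chord, hopping an octave up every full pass.
        note = chord[i % len(chord)] + 12 * ((i // len(chord)) % 2)
        track.append(mido.Message('note_on', note=note, velocity=90, time=0))
        track.append(mido.Message('note_off', note=note, velocity=0,
                                  time=ticks_per_step))
    return mid

up_arpeggio([48, 52, 55]).save('d85_style_arp.mid')  # C major, bouncing octaves
```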

The samples, being such simple sounds by origin, transpose quite happily much further than one could ever guess. Also, owning a Melodyne license will prove quite helpful as well.

The D-85’s Solo Synth section can be coupled to the pedals as well, but I didn’t have enough time to start recording it right now, as it needs to be done properly due to its filter system. I’ll do it, though – some day, using Kontakt 4’s AET.

Download: Arpeggiator and Bass Pedal. Note: Arpeggiator needs Kontakt 4; Bass Pedal works on Kontakt 3. Consider these raw material, not finished products. Feel free to explore, and if you ever come up with anything cool using this stuff, drop me a line or two.

Next: BBQ outside, it’s sunshine and summertime.

Yamaha D-85 drum machine

Even though my trusty old Yamaha Electone D-85 electronic organ is one damn noisy bastard, I decided to create a few Kontakt 3/4 sample sets containing all the samples and basic rhythms of its built-in drum machine. I didn’t lift a finger to remove the hiss from the samples; I just let them be as is. I did, however, include a 50 ms fade-out in every sample.

As the D-85 has a balance fader between the percussion and cymbal channels, I sampled both separately. I didn’t want to mess with Kontakt’s scripting engine, so I decided to let the end user dive into that hell.

Also included are all the basic MIDI files, created from these preset rhythm patterns. Again, I didn’t want to sample variations 1-3 and fill-ins 1-6, but in case someone needs them, I’ve got a service manual for that thing, in which they also included all the rhythm patterns, printed Roland-style – dots in a matrix. For those willing to explore, I also included an “everything” sample set, containing all the possible sounds the D-85’s rhythm machine could ever produce.

To be honest, I was a bit surprised to notice the D-85 had such marvelous accuracy; the timing of the rhythm section was very coherent, especially after I had let it warm up long enough. Next: Custom Voices and Special Presets. Looping takes ages, especially if Symphonic Ensemble or Celeste are used. But eventually I’ll put them here.

All rhythm sample sets are free, no copyright whatsoever.

Download: midi files, rhythm sets, and the single shots. Native Instruments Kontakt 3 or 4 needed.

Dumplo coffee breaker

Yet another tiny set of two instruments, made of plastic building bricks manufactured by a Danish toy company for 1-5-year-old kids (the larger bricks, hence the name “Dumplo”). A brilliant source of higher-middle-frequency percussion. Recorded with a Zoom H4n and OKM mics, no processing in between.

Dumplo_sml has only four velocity levels and eight round-robin groups; Dumplo_lrg has six velocity levels and again eight round-robin groups, with the same random parameters: pitch, EQ. AET is present here as well, so unfortunately no Kontakt 2 or 3 – K4 only.

This one was used in Alan Wake’s level 15, by the way. A small addition, yet it provided a lot of movement, especially due to a long stereo S/H Noise IR sample used for processing these.

Download: Dumplo_sml and Dumplo_lrg.

By the way, it’s incredible how fast you can work if you’ve set up your Kontakt user settings (controllers, for instance) and made some templates for percussive instruments, looped stuff, whatever. A quick mouse hand helps as well.

Another coffee break instrument

Asprine. Misspelled on purpose. One might use the real thing for headaches; this one was for rhythm. It came in the form of cola-orange flavored granule sachets: a tiny pocket-size cardboard box with 10 headache granule sachets inside. The box and the sachet bags were shaken and tapped with fingers, recorded with OKM binaural microphones and an Edirol R09.

Result: a rhythm instrument not too far from regular shakers or cabasas, only with a slightly different twist of taste this time – literally. For Kontakt 4 only; 7 velocity layers, 8 round-robin groups, AET and several real-time controllers plus humanizing randomness.

I thought of using the original name (due to the origins of this sample set), but I guess I’d likely have violated a copyright. An earlier version of this was used in the Alan Wake (AW) soundtrack, together with clapsticks or toms (mixed in the background), in order to bring more randomness to a machine-gun feel. Worked for me, probably works for someone else. If you’ve played the latter levels of the game, you’ve heard this one for sure.

Have a go, Asprine’s here. EQ it, compress the hell out of it, don’t leave it as is. Use it, abuse it. There are others coming as well, all from the AW instrument folder.

Pebbles – a coffee break instrument

Even though those two limestone pebbles may look like two potatoes, believe you me, they’re the real thing. I wanted to test a Zoom H4n for field recording, then create a Native Instruments Kontakt 4 instrument out of what I’d recorded, just to prove to myself I really don’t need a portable 8-tracker worth 4500 €.

It turned out both the pebbles and the H4n were pretty good, even though I managed to clip some of the samples. And, what’s even more positive: I think I’ll ditch the idea of acquiring an expensive piece of something that’d be used twice a year.

The two instruments are called “pebbles” and “pebbles_low”, both recorded at 96 kHz/24-bit, then resampled to 44.1 kHz with Audiofile Engineering’s Sample Manager (SM for short). Some batch editing was also done in SM – nothing drastic, though.

What really impressed me (again) was Kontakt 4’s AET (Authentic Expression Technology), used in conjunction with 8 randomly cycling round-robin groups, each containing 9 velocity layers. The result may not be the greatest limestone sample ever, but it goddamn worked pretty well in the backing track of a certain piece in the making.

Also included is a heavy dose of randomization (pitch, eq1, eq2) and some Time Machine controls: pitch bend controlling the playback rate, the modulation wheel lowering the pitch by two octaves. I recorded these for Alan Wake just before my deadline, by the way. They ended up in the in-game music.

Grab both instruments, pebbles and pebbles_low – they’re free.

(Coffee break instrument = took less than 30 minutes of time to create.)

(not quite a) CS-80…

…but pretty close, as long as you don’t let the phase-synchronized oscillators spoil your fun. However, my trusty old Yamaha Electone D-85 has been vacuum-cleaned and its rotating speaker system oiled. A few other parts had to be replaced as well, but it’s quite nice now.

A few cosmetic touches still need to be done – and the outer parts need polishing. As you can tell from the picture, it’s been a while since the last dust-off. Next on the agenda: customize the whole organ, insert quite a few potentiometers and have it retrofitted with MIDI. It would be nice to be able to program new arpeggios and rhythms as well, but enough is enough. It’s a shame, though, that these things don’t exist in large numbers anymore. I’d gladly get myself an E-series organ, an E-75 preferably. The only thing missing is space for the thing.

I’ve done some serious sampling of my D-85, and one day in the future I’ll put the whole library up for download. There’s plenty to do, looping etc., but it’s slowly taking shape. The Upper Keyboard and all the related Orchestra, Custom Voices and Special Presets are all recorded at 48 kHz/24-bit, with and without noise (cleaned and unprocessed). With Ensemble/Celeste and dry. It’s not perfect, but it’s actually damn convincing.

There’s still a lot to do; the solo keyboard, for instance, requires a heck of a lot of attention due to its nuances, controls and everything. Sampling is, after all, comparable to taking Polaroids, and only as accurate as it is allowed to be: the credibility of the end result depends totally on the size and number of the imaginary Polaroid snapshots. The more, the merrier. 🙂

From MIDI to Automation (with Love)

I must have been one of the many trying to figure out how to turn MIDI events (such as note-ons or control data) into plug-in fader movements – or region automation. It’s actually easier than you’d think at first, and can be done in a few different ways. I’m definitely not sure whether this is the fastest and simplest way (Logic being such a black hole of customising), but it’s damn effective.

First, record a few bars of random notes, one at a time (1/8ths, whatever) – just keep within the C1-C6 limits.

Second, prepare a few Transform Sets. The first one is meant to convert the note-ons of a 5-octave keyboard into control data (or fader data – just replace the resulting Control selection with Fader in the lower section). Since the note stream also includes note-offs, those need to be erased from the resulting control data. Of course, it’s not mandatory to use note-ons; one could just as easily twiddle some knobs (sorry) or throw faders – or wiggle the joystick until it perishes. What I’d like to emphasize is cleaning up all unwanted data by quantizing or removing all unnecessary note-ons, to produce a clean stream of controls. Note that this Transform Set doesn’t produce zeros: it scales the bottom C1 of the 5-octave keyboard to 1 and the highest C6 to 127, with everything else in between.

The resulting data is tidied up by another Transform Set. With this one, all zeros caused by note-offs are deleted – just remember to press “Select” first, then “Operate”, or if you’re certain and confident enough, press “Select and Operate”, but be warned: my Logic Pro 8.0.2 doesn’t always work correctly. Sometimes it just decides to delete everything in the event list. A bug, perhaps? What you’ll have in the Event List are only the control data events, with values between 1 and 127, which you can use as such or convert into something more comfortable. You could, however, combine this and the following into one Transform Set.

Create another Transform Set and use it to convert the freshly created Control data into Fader data. If you already have plug-ins in any of the insert slots, you’ll probably see some parameters in the Length/Info field of the Event List. Note: on Audio Tracks, insert slot #1 is on MIDI Channel 2, slot #2 on Channel 3 and so forth – with Instrument Tracks, the first effect insert slot is MIDI Channel 3, slot #2 is MIDI Channel 4, etc. ad nauseam.

Next, activate a plug-in in an insert slot of an Audio Track (or Instrument). Say you’re trying to create a “random phaser”: insert MicroPhaser. If everything’s gone well, you should have an Audio or Instrument Track with a few bars of “Fader” data AND MicroPhaser in the first insert slot of that particular track. Double-click on the Fader data region; it should open in the Event List on the right-hand side of Logic Pro’s main view. If not – well… click on the region, make sure you’ve got the preferences right (Preferences – Global – Editing – Double clicking a MIDI region opens Event List) and try again. Now there should be a list of Fader data.

Select all and set the “Ch” accordingly: on an Audio Track, insert slot #1, use MIDI Channel 2; on an Instrument Track, insert slot #1, use MIDI Channel 3. Click on the “Val” column and roll your mouse. You should see the parameter list – the plug-in control numbers are sometimes found in different places; with MicroPhaser, its Intensity control was Channel 3, control Num 15. If you have put an audio region onto your track, you’ll hear some random phasing going on when you hit Play. Yes, Audio Tracks can play back MIDI regions containing Fader data!
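If you want to see the same transform chain outside Logic, here’s a sketch with mido: keep only the note-ons, scale the 5-octave C1-C6 range to 1-127 (no zeros, as the note-offs are dropped), and emit CC events. The CC number and channel are illustrative; note that mido channels are 0-based, so channel=1 means MIDI Channel 2, i.e. insert slot #1 on an Audio Track:

```python
# Note-ons -> scaled controller data, note-offs erased.
import mido

LOW, HIGH = 36, 96  # C1..C6 with Logic's default "middle C = C3" naming

def notes_to_cc(msgs, cc_number=15, channel=1):
    out = []
    for m in msgs:
        if m.type != 'note_on' or m.velocity == 0:
            continue  # erase note-offs, exactly like the second Transform Set
        value = max(1, min(127, 1 + round((m.note - LOW) * 126 / (HIGH - LOW))))
        out.append(mido.Message('control_change', channel=channel,
                                control=cc_number, value=value, time=m.time))
    return out
```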

A word of warning here, though: some plug-in parameters understand only restricted values – for instance, the Echo plug-in’s Time can be selected from 1 to 11 only, though sending in Value 127 doesn’t crash Logic… at least not immediately. Saving often is recommended when experimenting, and I can’t emphasize enough that it’s about time Apple’s developers created an Auto-Save for Logic. We’re about to put men on Mars and we still struggle without Auto-Save! What is this, Sysmä or Oregon? Also, some plug-ins tend to cause clicks when changing the parameters if there’s something running through them. Of course, glitch fans will go crazy as soon as they try any of these tricks on Delay plug-ins. Just be careful with Delay Designer’s parameters: it was the only one causing VERY much hassle, although nothing was permanently lost, as long as your project is properly saved.

Happy Automating!

Noise… eh, space design?

I’m quite convinced that at least 80% of Logic users use Space Designer as a preset-only plug-in, just because of the sheer amount of presets. Don’t get me wrong, there’s nothing wrong with that – it’s just too overwhelmingly cool a tool to be overlooked. It’s quite OK to make Impulse Responses with the specific Apple-provided tool placed in the Utilities folder on your hard drive, but Space Designer can take just about any sample and use it to process your tracks.

For instance, we could use this ES2-filtered and flangered stereo noise: es2_noise, and turn ep_mello_dry into something like ep_mello_fx. Even though the difference is quite subtle at modest listening levels, try putting your headphones on. Nice, isn’t it? Only one instance of Space Designer, and even my old 2 x 2.7 GHz G5 didn’t show much load. If you’re daring enough to use resonance or flangers carelessly, you might want to insert a compressor before the Space Designer to lower the resulting frequency peaks a bit. And the fun doesn’t have to end here. Put the noise file on an audio track, cut and slice it to your heart’s content, and turn it into something like es2_gated_noise.

Load that as an IR, and the previous ep_mello_dry turns into a pulsing ep_mello_fx_gated. Now that is something I might need: a reverb that’s not a reverb, nor is it a delay.
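Under the hood this is just convolution, so IR candidates can be prototyped outside Logic with a couple of lines of DSP. A minimal sketch assuming mono-friendly handling and the soundfile/scipy libraries; the file names mirror the examples above:

```python
# Convolve a dry track with an arbitrary sample used as the "IR".
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("ep_mello_dry.wav")
ir, _ = sf.read("es2_gated_noise.wav")

# Fold stereo down to mono to keep the sketch simple.
dry = dry.mean(axis=1) if dry.ndim > 1 else dry
ir = ir.mean(axis=1) if ir.ndim > 1 else ir

wet = fftconvolve(dry, ir, mode="full")
wet *= 0.5 / (np.max(np.abs(wet)) + 1e-9)  # tame the level; noise IRs get loud

sf.write("ep_mello_fx_gated.wav", wet, sr)
```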

Link: ES2_noise.zip (load it into your Space Designer as an IR).