

Adam Moseley is a successful record producer, engineer and mixer who started his career at the legendary Trident Studios in London, where he worked with Phil Ramone, Tom Dowd, Mutt Lange, Steve Lillywhite, Tina Turner, The Cure, Wet Wet Wet, Roxette, Kiss, and Rush. Moseley was based at The Boat studio in Silver Lake for many years, where he recorded the likes of Beck, Nikka Costa/Lenny Kravitz, Wolfmother, John Cale, Spike Jonze’s “I’m Here” movie with Nick Zinner, U2 with Dave Sitek and many more. Recently he has mixed the scores for feature films such as The Big Wedding, featuring Robert De Niro, and “The Dark Valley,” as well as other TV and film music, and he is currently working with a number of very diverse artists.

A Conversation with Adam Moseley

You’re working with so many different types of music and artists, and you’re listening in your chosen monitoring environment. How important is it to have a point of reference for all these different sounds and projects?

What I’ve found over the years in studios, and I’ve worked in some of the top studios in lots of different countries, is that the hardest thing is getting a room that’s true and getting a sound that’s true to my own acoustic perception and design, so that whatever you’re hearing, when you take it away, when you send it to mastering, when you go somewhere else, when you listen in the car, it’s going to be what you expect it to sound like. That’s one of the biggest challenges in studios. It’s not an exact science. It takes a lot of time to set up a room and get it right. Monitoring, absolutely, that’s your only reference as a producer or mixer or engineer. It’s the only reference that you have. Suddenly you’re in a situation where you’re turning dials on the board expecting to hear more or less of a certain frequency, and it’s not responding how you’d expect.

When I left Trident, when I started going out into the big world on my own, I would be on a board that I didn’t know. I’m thinking, “Oh, this sound needs a certain frequency.” I’d go to that frequency and I’d be adding it or subtracting it, and my ear wouldn’t be getting what I thought it should be getting. It’s like someone had pulled the rug away from under my feet. It’s the scariest feeling. I learned to adapt to that. I would always take my speakers with me, ones that were tried and tested. By now I’ve got quite a few reference songs that over the decades I’ve heard in different studios, different systems, and I know how they should sound.

The most important thing for me is that what I’m hearing is true, that I can trust what I’m hearing, and also a speaker that’s going to give me the positioning that I want, the detail that I want. I get into such fine detail of positioning and delays, and delays that go into reverbs. Maybe you don’t hear the delay, you just hear the reverb, and the reverb has reflections, but reflections have to cut off before the next part comes in, so my listening experience has to be absolutely precise.

I’ve been working with these Barefoots, the MM45s, now for about a month, and really love the definition in the high mid-range. The bottom end is great, it’s warm. I always love a lot of bottom end in my mixes, but the hardest thing on most speakers is to get that definition in the mid-range, because that’s where you hear a lot of the reflections in your reverbs and a lot of the delays.

With a delay, I always EQ the delay differently from the original signal so the delay isn’t bouncing back with the same frequencies. A lot of stuff happens in the high mids, and a lot of your reverbs, a lot of your spatials happen up there. That’s been one of my most enjoyable experiences with these speakers — I can really get my definition, get my placement, and hear back what I’m hearing in my head and actually make it happen, and then send it to mastering and they say, “We don’t really have to do too much with this.” That’s always good. It’s always the most intimidating thing to send your work to mastering and then wait for the comments, if I can’t attend mastering myself.

Speakers that are absolutely true give me the definition, give me back the real proof of the concept that I’m going for, and the visualization of spatial events.

Can you give me a brief description of the studio set-up that you have here in downtown LA, and how your gear works with your listening environment?

My main work set-up is Pro Tools. I run Ableton through it in ReWire with Apogee Quartet converters, and then Barefoot monitors, which I’m starting to really love. A little bit of outboard: Joe Meek, some Trident EQ, a great Studio Projects VTB1 mic pre that I love on DIs and sometimes vocals, and that’s essentially it. Practically everything I do when I get to this stage is in the box. I’ll overdub guitars sometimes, keyboards. I’ve got a little Korg microKORG. I’ve got an MPD32 for drum pads and beats and stuff. I do guitars here, sometimes I’ll do vocals here if I need to, but basically everything I do these days at this stage is in the box, whether it’s for a record or film.

Let’s talk a little bit about your work for film and what you’re trying to get in a cinematic sense audio-wise.

With film I’m always working to picture, whether it’s for a TV series or a feature movie. I’m looking at picture, getting the concept in the visual, and then working out how I can not just play the performance of the music and bring the best out of the music, but also adapt it to the visual, work with the flow and the mood, and build it up. I also have sound design tracks that I’m mixing alongside the dialogue, so I have to be very aware of the whole picture, bring the music to life, but really respond to the visual image, the concept, the depth, the mood.

Are you trying to create an aural 3D type of experience?

Yes, completely. Over the years I’ve just developed, more and more, an awareness and a search to get more of a 3D dimension to music, to take instruments and set them back in the distance, maybe reduce their volume, then bring them up and bring them closer, dryer, reducing any effects or reverb, just to really get 3D dynamics happening. That works whether it’s an orchestral arrangement or whether I’m working with loops or doing a drop in a song, cutting out the drums and changing the shift of positioning of things. It’s sometimes what I call shape-shifting in the mix. I want to use the width and the height and the depth of the mix, but also get things to go behind the speakers or to come out and have things pop out in front, to come out and move out, and get a real 3D experience, sound that just comes out and surrounds you.

How does your monitoring situation facilitate that type of complicated sound, so that it works for the various listening situations that people find themselves in? How do you compensate to get that cinematic feeling even on a little boom box or in a home theater? How does that work within your monitoring situation?

The interesting thing these days is that all music ends up pretty much as MP3s that people listen to through computer speakers or earbuds, but I don’t believe you can go into making your music with that end listener situation in mind. You’ve got to be aware of it, but you still have to do the best possible sonic job that you can do and make it sound as glorious and beautiful and deep and rich and dynamic as you can, and just get the best sound you can get. Oddly enough, I use great speakers, and then I’ve got my earbuds coming off the same Apogee converters, and I’ll often check the mixes on my earbuds afterwards because that’s how they’re going to be heard. Except they won’t be heard at the quality that I’m listening to here; it will be heard as an MP3.

When I started back in the day, like at Trident, they had these huge CADAC monitors, floor to ceiling. They looked like a wardrobe. We had those, which we’d listen to at like ear-bleed level for these Kiss albums and Rush albums and so on. Then, the other option was Auratones, the little cube boxes. Sometimes someone would bring in from BBC Radio 1 or from another radio station a little mono speaker that they would swear had the same compressor that the BBC used. You’d check your mixes back through it and make sure it was okay, but you couldn’t mix by that as your end result. You have to make the most glorious sonic picture you can, and then reference certain smaller things, reference on computer speakers and such, but you’ve got to make it the best it can possibly be. If it sounds good on that, then it’ll sound good anywhere.

You teach at UCLA. Could you tell us about your philosophy of production, your philosophy of mixing — how do you bring them into today’s workflow?

I think the philosophy of music is just understanding arrangement. That’s how I learned, just by studying music, studying records, and not even aware of what I was doing, but just putting the pieces together and figuring how things worked, how a drummer played, how drums played with bass, how the guitar worked in, and growing up listening to such varied music. In my era, almost every day there was something fresh coming out and it was always different, always reinventing music. Growing up in that era with a jazz musician father who loved Latin American music, I had the richest experiences. I had a collection of great music that I was being fed all the time.

When I’m talking to my students at UCLA in the Extension program, the thing I push most is that music is about arrangement: the arrangement of musical notes into parts, and then how you arrange those parts within your sonic picture. When you go back and study how music producers came about, all the great music producers were arrangers. Quincy Jones, Phil Ramone, Phil Spector, Brian Wilson, George Martin, Arif Mardin, all the great original producers of modern-day music started as arrangers, and that is something that I just push and push on my students.

It doesn’t matter what your parts are, how great they are. If the parts are interfering with each other, it will kill your dynamics. If the sonics aren’t right, it’s not going to be exciting. If you’re not holding back on your higher frequencies and punching out at certain times with them, you’re not playing your mix to the full. That’s the most important thing — understanding arrangement, not just of your musical parts but also of your sonics, your frequencies, your stereo width, your depth, and just how to use the whole picture. I think that’s the essence of it all.

You mentioned four artists that you’re working with. Could you reference how the workflow is different with different projects?

Just recently I was doing a hybrid pop-classical album with Nathan Pacheco, an opera singer. It’s very contemporary classical opera. Some of the songs had a hundred and twenty tracks of orchestration, then lots of tracks of choir, lots of programming, lots of bass synth, lots of really great percussive elements. The mixes took a long time. They were very involved. In that situation, like when I’m working with a film that’s got an orchestra, typically I work from the score. I have the score spread out in front of me. I’m reading the score, listening to the music, getting to know the session, getting to know the interplay between the instruments, and coming to understand the arrangement: which instruments are the lead, which are the response, which are the harmonics, which reinforce the melody, which smooth out the dynamics.

I really have to literally read the arrangements to see what’s happening, and listen at the same time, learn the session, learn every single part. If it’s a hundred and twenty tracks or more, then I learn every single track so that I can really work out how to pace the mix, who should have the foreground, who will be in the background, when to switch that perspective, and when to maximize. First of all, it’s just understanding arrangements, and then to bring my mixing concept and visualization of “Where do I want to take this? How can I build on this and take it into a further realm?”

There was a track of Nathan’s where I had to build and build the first crescendo of the song, but I couldn’t make it as big as the second one because I still had to leave myself somewhere to go. After this big build, and I got it there, then I just wanted everything to dissipate and fade away, and for the instruments, the little melody lines, just to be a little bit quieter and positioned differently to draw you into this magic garden that I was trying to create. Then, suddenly, that swoops back out and the instruments come back into the foreground, less reverb, more dry, more impact, and it’s like, “Whoa, we’re off again. Here we go.” Now we’re up to another build, we’re going up another hill, and this is going to be bigger than the one before.

The whole thing is about literally reading the arrangement with my ears and my eyes, and learning how to play the dynamics, and then using what I’ve learned over the years from mixing various records, different styles of music, just bringing it all together and really getting the absolute most I can get out of it.

By contrast, let’s talk about Zoe Violet. Tell us about that artist.

I work on all different kinds of music, so going from Nathan’s contemporary classical opera album, I’ve just finished an EP with a girl from Manchester called Zoe Violet, produced and written by Rie Sinclair. Completely different. A very Brit synth-pop artist. It’s a very different kind of perspective. The vocal treatments are different. The vocal effects are different. The beat has to be up and foremost and in your face and just driving, but then all the synths, there’s so much sound design and different texturing going on, where certain instruments bleed into the next thing and the other instrument takes over. I have to leave myself room and space to reintroduce the instrument. Sometimes I’ll use panning to bring instruments in and then spread it out to allow space for the next event to come in as the first one just spreads away, or I’ll use panning to move it to a side and create space for the next musical event…

Zoe’s project is completely different. Again, the most important thing is concept. Before I can start, I have to know exactly where I’m going with a song. I have to almost draw a picture of the song, of the shapes of the instruments, their positioning, and really explore the session and see what exists and where I can take it.

The next project you’re working on is Hurry Death.

Again, I’m working in all different kinds of music. I’m currently about to start a second EP with a band from here in LA, in Echo Park, a band called Hurry Death. Very alternative, acoustic. Great songs. Great layering. Really beautiful layering. Produced by a friend of mine, Johnny Watt. We’ve done a lot of projects together. With Hurry Death, it’s a completely different sonic approach. There’s a really interesting situation on one of the songs, “The Hurt Happened Here,” where the guitars are double-tracked. They start playing an arpeggiated line on the lower strings. It’s simple, and as it develops the line gets busier and moves up to higher notes. We double-tracked that.

At the beginning, because it’s a simpler part and it’s in the low notes, it was played a lot tighter, and so when I got the guitars positioned in stereo I really experimented to decide what width was needed. It’s not extreme width, necessarily. The choice was it might be a sixty-five position or a forty-five position in stereo left and right. I’ll figure it out and play the arrangement and place the other instruments and find exactly where that’s going to work. The interesting thing that I discovered, and it’s quite obvious in a way, was that because the guitars were played tighter together on the lower notes, they summed more in the middle, so the guitar had a natural tendency on the low notes to sit more centrally. As it moved up the arpeggio, it spread out to the sides, and the stereo position became more apparent. Then I was able to introduce the piano chord and bring that up slowly into the space as the guitar spread to the sides, and then drop the piano as the guitar went back to the same phrase with the low notes.

I’m playing within the parts in what I call shape-shifting. The guitar starts more in the middle. As it gets higher it moves to the sides, which leaves space for another instrument to come through and take your attention. Then that instrument goes into the background and into a delayed reverb, that fades away in time for the focus to come back to the guitars again, and then the vocals drop in. It’s just a constant movement of positioning, not so that you disorientate people, but often just by introducing a sound, or in classical music by introducing a theme of an instrument, once the ear has heard it you can sometimes drop the volume of that because the ear recognizes the pattern, so then it doesn’t have to be so loud because the idea is the ear has identified with it. You can reduce it. It gives you space to bring something else in, but the listener is still hearing the line that you’ve made quieter because the pattern is ingrained in the memory now. It’s about understanding the sonic awareness of how things work, and just playing that, maximizing that.

Let’s wrap it up with your newest project with the band BRAVES.

The project I’m about to start is a band from Australia called BRAVES. It’s very unusual. It’s pop, two brothers, but it’s very, very different. I can’t describe it. When I first heard some of the demos and rough mixes and I was asked to mix the project, I actually had to think to myself, “Where can I go with this that I haven’t gone before, sonically, spatially, visually, and conceptually?” It took me quite a few weeks to clear my mind out, because I’m pretty clear about where I go with a song and the concept I have and the sonic concept that I have for spacing and positioning and so on, but this band threw me for a bit of a loop for about three weeks, trying to figure out what is it that I haven’t done before, that I don’t know, and how can I find out what that is and explore my own mind, and also listen and absorb the influences of this music and work out how I can take it somewhere further?

I came up with a few ideas of how certain sounds will morph into different sounds and go further away and come back, and I’ll introduce them in a different position. Again, never to distract from the song, because that’s the essence. You don’t want to confuse the listener, but there’s so much more than just balancing and shaping and positioning. There’s so much more movement you can do within the parts, just like in classical music, where your attention is constantly being shifted between maybe the cellos or the violas or the first violins and the seconds, and then maybe the oboes will do a response that works with the violas and the flutes will come in. Constantly your attention is being drawn to a different area of the sonic picture, but you’re not distracted; the themes are constant. In my opinion, Beethoven is still one of the greatest music producers and mixers of all time.

What I like to do in music is to just build and build on a sonic picture. With BRAVES, I want to ‘boldly’ go somewhere I haven’t gone before, and I think I know where that is. To be continued on that one… it’s a “concept in progress…”


