Signal To Noise Podcast

288. Behind The Scenes

ProSoundWeb

This week, Andy takes us behind the scenes for a look at how an episode of Signal to Noise is produced and edited. He walks through the recording, processing, and editing chain, sharing tips and tricks for efficiently producing an unscripted conversational podcast, along with plugins and techniques equally applicable to live mixes for broadcast or streaming.

Links:
iZotope RX 11
SoundRadix POWAIR
REAPER
Episode 288 Transcript

Connect with the community on the Signal To Noise Facebook Group and Discord Server. Both are spaces for listeners to generate conversations around the people and topics covered in the podcast — we want your questions and comments!

Also please check out and support The Roadie Clinic. Their mission is simple: “We exist to empower & heal roadies and their families by providing resources & services tailored to the struggles of the touring lifestyle.”

The Signal To Noise Podcast on ProSoundWeb is co-hosted by pro audio veterans Andy Leviss and Sean Walker.

Want to be a part of the show? If you have a quick tip to share, or a question for the hosts, past or future guests, or listeners at home, we’d love to include it in a future episode. You can send it to us one of two ways:

1) If you want to send it in as text and have us read it, or record your own short audio file, send it to signal2noise@prosoundweb.com with the subject “Tips” or “Questions”

2) If you want a quick easy way to do a short (90s or less) audio recording, go to https://www.speakpipe.com/S2N and leave us a voicemail there

Episode 288 Transcript

Episode 288 - Behind the Scenes of Signal to Noise


Note: This is an automatically generated transcript, so there might be mistakes--if you have any notes or feedback on it, please send them to us at signal2noise@prosoundweb.com so we can improve the transcripts for those who use them!


Voiceover: You’re listening to Signal to Noise, part of the ProSoundWeb podcast network, proudly brought to you this week by the following sponsors:


Allen & Heath, whose new dLive RackUltra FX upgrade levels up your console with 8 next-generation FX racks – putting powerful tools like vocal tuning, harmonizing, and amp simulation right at your fingertips. Learn more at allen-heath.com


RCF and TT+ AUDIO.... Delivering premium audio solutions designed for tour sound and music professionals for over 75 years. Hear TT+ AUDIO's GTX 10 and GTX 12 passive line arrays and the GTS 29 dual 19" passive subwoofer.... all powered by RCF's XPS 16k amplifier, live in the arena at Winter NAMM 2025. Many other RCF products will be demo'd in Hall A in room #17108. Visit RCF at RCF-USA.com for the latest news and product information.


Music: “Break Free” by Mike Green


Andy Leviss: Hey, welcome to another episode of Signal to Noise. I'm your host, Andy Leviss, and today I'm flying solo because we're doing something a little bit different from a normal episode. So, this is kind of something that folks have asked about for a little while, and that I've sort of debated doing, because it's a little bit off our normal beaten path for Signal to Noise, since we focus on live sound, or live-to-broadcast, or live-to-tape.

Uh, but enough folks have asked, and I think it might be helpful for folks, that I'm going to actually lift the curtain and take us behind the scenes on the whole process that I use here to record and edit an episode of Signal to Noise, and see if I can't give you a peek into the processing I use and the mixing techniques I use. And I'm going to put it out there right up front here.

I am not remotely an expert at podcast editing. This is the one podcast I edit. I don't do it for a living. I am, like most of you out there, a live sound engineer. So, I am not necessarily going to offer that this is the best way to edit a podcast, or at least this type of podcast, and we'll dig into what I mean by that a little bit.

Uh, or, or even necessarily the best way to edit Signal to Noise. What I will say is that over the, you know, year and a half or so I've been producing and editing the podcast, these are a bunch of tips and tricks I've figured out to streamline the workflow and make it really efficient, to give a fairly consistent product for y'all to listen to, particularly keeping in mind the places people listen to it: on earbuds, in the car, that sort of thing.

And again, I think it would be useful for a lot of our listeners, because I know while most of us are predominantly live sound engineers, we do have a tendency to get thrown this or that, or do recordings of, like, panel events and that sort of thing. Also, some of the processing that I've been using (and this is stuff we've talked about before) is what I will use on, like, corporate talking-head events and that sort of thing.

So I'm hopeful this is helpful to folks. I also fully acknowledge that there is a nonzero probability that, out there as you're listening, I'm gonna explain how I do something, and some of y'all are gonna be like, oh my God, that's the thing I can't stand in every episode. Now I know what it is about it that I don't like.

And you're gonna want to write in and tell me, hey, stop doing that, that's driving us crazy. Awesome. I love that. Uh, andy@prosoundweb.com, please let me know. I'd love to know what's working and not working for y'all listening. Like, some of the tweaks we've made to how the show sounds over time have been due to feedback from y'all.

So let me know: if this episode clues you into some feedback you can give me for how we can improve the show, great. If it gives you useful tricks for other work you're doing, also great. And if not, maybe it'll just be interesting to, like, you know, learn how a different aspect of audio can work.

So, starting with the recording side, which we're not going to dig too deep into, because that's fairly straightforward; we all understand recording. But, uh, as I'm assuming most listeners know, it is very rare for me, Sean, and the guests to all be in the same place. Typically, we're in three entirely different places.

I'm usually in New York unless I'm traveling for a job. Sean's usually in Washington, uh, you know, in the Seattle area, and our guests could be anywhere. We've had guests at home, we've had guests five minutes from me, we've had guests on tour anywhere, we've had guests in Europe. So, the question is how to solve that.

There are a bunch of, uh, software platforms out there now for podcasting, both video and audio, that are designed... the best way I can describe them is Zoom, but with ISO record capabilities. And now of course there is Zoom ISO, so if you're doing Zoom for corporate events you'll often run into that, because that gives a way to get a local-to-the-host ISO of everything.

That works great for those kind of events where you just need to be able to isolate a video and an audio feed coming into Zoom, and you're fine with it still being Zoom quality. That's for when you're doing it in real time. In a podcast, we're not necessarily doing it in real time unless we're streaming a live episode, which is a slightly different thing, but what we do care about is audio quality. 

We want the highest audio quality we can get, so we use a platform called Riverside.fm. And again, there are a few other platforms that work in similar ways. What Riverside does is it basically acts like Zoom, except it records the audio locally on everybody's computer, and then uploads an isolated audio track as we go along that's completely unadulterated, with a little asterisk that I'm going to explain in a second.

So, basically, we get two recordings from them. We get one that's everybody mixed together with the usual, you know, going through the internet, compression, things can sound a little squirrely sometimes, as a backup. But then we get individual, locally recorded, uncompressed, nothing happening to them, audio files, that we can then piece together and edit as we need. 

Now, for Signal to Noise, at this point we don't do a whole lot of editing for content and time. We try and keep it conversational, unless, you know, something goes on: like when we had Wayne Polly on, he had a door-to-door window salesman come by; or, you know, if I have, uh, my dog, or, or my baby, you know, starting to go crazy in the middle of an episode.

Um, or very occasionally, if somebody realizes after we recorded, like, oh shit, we talked about a thing that I had an NDA on, can you cut that part out? And we'll do it. Other than that, at this point, Sean and I are comfortable enough with knowing what we're doing, or not knowing what we're doing, that we just kind of let stuff fly and keep it casual. It makes life easier for us and more genuine for y'all to listen to. But should we need to dig into things, those ISOs really help. The other place that helps is in making the audio quality as consistent as we can between each of us when we're talking. So we have a wide range of audio interfaces, devices, uh, you know, microphones that people use.

Uh, I'm talking to you right now on an EV RE20, because I'm at home and that happens to be what I have here. If we want to get into the weeds about it, on a bunch of episodes early on, I was on an SM7B, and truth be told, I really like how the SM7B in full-range mode sounds on my voice when you get that little bit of proximity in there.

The problem is, when I'm recording an hour podcast at my desk and moving around and having a more casual conversation, that doesn't stay consistent. And it wasn't worth it enough for me to keep that, uh, or to try and process it to either add it a little artificially when I back off the mic or remove it when I get close, which, again, we'll get into a little as we go through the signal chain.

Um, but at that point, the RE20 is a little more forgiving. That's kind of the whole part and parcel of that series of microphones from EV: I can be really close on it like this, or I can back off it like this, and it's going to sound relatively close. The level might change, uh, but the tonality stays fairly consistent because of the venting and all the acoustic design of that microphone.

So, I switched to that, even though I don't necessarily like the sound on my voice as much as an SM7B when I'm in the sweet spot. Sean records from a couple different places, so sometimes he's on an SM58. There's been some episodes where he's on a pair of wired AirPods. Guests can be on anything in between. 

We've had guests show up in a studio with, like, uh, you know, a fancy tube mic. We've had guests show up with a 58 or 57. We've had guests that, because they're on tour and they just can't make an audio interface happen in a pinch, record on the built-in mic on their, like, phone or laptop. That's the lowest on our preference list. About the only near-hard-and-fast rule I try and enforce is no Bluetooth microphones, because Bluetooth microphones sound like Bluetooth microphones, and there's no processing magic in the world that can fully undo that. I can make them sound not as terrible, but they're never going to sound great.

That said, sometimes the choice is that a guest we really want to share with you guys can only use Bluetooth: the microphone in their laptop is junk, you know, and they, they can't get a wire. Like, for example, we sometimes get manufacturer folks who don't always have the microphone resources that, like, the rest of us as audio engineers have. So if somebody needs to come on Bluetooth, we'll, we'll let it roll for an episode or two, but I try and avoid it, because I try and get the best audio quality I can for all of you.

Um, likewise, when you're recording podcasts, using those built-in microphones in devices can be a pain. Both they don't necessarily sound as great, and they also pick up a lot more extraneous noise. You have to be really on top of folks, like, hey, if you're using your laptop microphone, I need you to not type anything else while we're here. Normally, if you're ADHD-ing a little bit, we're all sound engineers, we all do it, not the end of the world. But in that situation, it's, it's tough. And again, I'm gonna let you in on a couple tricks that have pulled miracles for me in situations where, like, extraneous noise like that, or, like, somebody scribbling notes on a, on a notepad next to the microphone, comes into play. Um, and I'm gonna try and duplicate an example of that here.

I'm not going to pull up audio from the archives because I don't want to throw anybody under the bus for that sort of thing, but I will try and recreate based on a couple of real examples for y'all just to show you what some of these tools can do. Um, but yeah, so the long and short is we're all recording in different places. 

We're recording through Riverside on our browsers. Some of us on really nice microphones, some of us on good microphones, some of us on a little tiny, you know, laptop or AirPod microphone. We make it work. So, over the last, you know, year and a half or so that I've been doing the podcast, I've kind of developed a bunch of strategies to take all those varied sources and get them sounding as close as I can to each other.

Balance things out and make a product that is easy to listen to, comfortable to listen to, and that won't jump out from, like, all the other podcasts you listen to when you're, like, listening in your car on your commute or walking the dog. Um, and again, we're going to talk a little bit about that too, because there are standards for podcasts.

Not every podcast meets them, which does create issues. Um, I know the, the loudness that this podcast has been mastered to has changed over time. Uh, so we'll get into that a little bit too, because it's useful for any of you who do podcasts or are considering editing podcasts. And also, we've talked about loudness before, because it comes into play if you're doing streaming or broadcast mixes.

So, we'll dig into that a little bit. But like I said, I wanted to get a little of those basics on, like, how we get the recordings in and where those come from to start. So, typically, what happens after we record an episode... and I mean, I'm not going to get into, like, booking guests and stuff. I mean, I'm happy to.

The short is, we're always looking for guests. If you are somebody who has an interesting story to tell and wants to come on and tell it, or if you know somebody else who might, you know, be an interesting guest and you want to connect us up, please, we're always, always looking for folks. Uh, usually for that, send us an email to signal2noise@prosoundweb.com (that's signal, the number 2, noise, at prosoundweb dot com), and that'll go to the whole team so we all see it, and we'll make it happen.

Um, but beyond that, none of you care, I barely care, about the logistics of booking and scheduling the podcast. That's, that's not what any of us audio folks are interested in. Um, so, but once all that's set, we record through Riverside.

Um, obviously, if, if we can record something in person, we will, and then we'll multitrack it, but that tends to be rare, just occasionally at conferences. We actually were originally gonna record with, uh, Wayne in person at NAMM the day after the show we talked about last episode. Uh, but then there were some scheduling issues on NAMM's side with the studio they had there, and so we weren't able to make it happen.

Uh, so we were back doing it through Riverside, which was fine. Um, but yeah, so we got those recordings. Now, I will do a bit of ingest processing to make life easier for me and more efficient, uh, when editing it, because you could, and on some early episodes, I did just like drop the audio pretty straight into my DAW and just go from there, and you can do all the processing in the DAW. 

There's some processing that just works a little better offline. And also, I just find, if I can batch process things on ingest, that means I can record an episode, download the audio, drop it in there and have it run while I'm sleeping, and then come back the next day, and it just gets things in a more even place for me to edit.

Again, it's not necessary. I could pretty much come up with the same end product (with the little footnote I'm going to talk about later) without doing that; it just makes life easier and a little more efficient. It's one less thing to have to do. It's a little less latency to have to monitor than if I'm putting plugins like crazy on every track.

But there we go. So yeah, about pre-processing. Honestly, I almost wish I could get them to sponsor the podcast. iZotope RX, RX Advanced, is magic for any sort of post-production audio. Like, there are other plugins that do a lot of this stuff; for ones where, where I know there's an obvious alternative,

I'll try and mention it as I go through this, but I love me some iZotope. They do not pay me. I paid for these plugins out of my own pocket. Like, I even owned these before I started doing Signal to Noise. So, like, ProSoundWeb didn't pay for them; Andy Leviss paid for them himself. Um, watch out for when they do sales; they do big sales every year.

You can get, like, the whole Everything-iZotope-Makes bundle; that is the way to go. Also, you know, whenever a new version comes out, they do, like, crazy upgrade discounts. Um, but yes, this, this is the "dammit, Andy" for this episode: uh, if you can afford the full RX Advanced, go for it. If not, most of these plugins are available in, like, some of the lower-level bundles, but you'll need to dig into it a little bit to see which ones.

So: 90-something percent of the RX plugins work either as Audio Unit or VST plugins, or they work in their standalone RX 11 editor. Uh, RX 11 is the current version of RX. And that offline editor is great, 'cause it gives you a lot of, like, manual cleanup tools that you can't get with a plugin.

But then it also allows batch processing in, in two different ways. You can go in and you can build a module chain, and that module chain can be any of their plugins, plus any other AUs; you build the chain in order, you know, set settings for each of them, and it'll process the audio through it. And you can do that on an individual file, or you can do that as a batch: you load in your module chain, you load in a pile of files, so I load in Sean's audio, my audio, and however many guests we have at once.

I load up that chain, I hit render, and I go away, and I come back to duplicate copies of the files that are all processed through that chain, all at once. So, that tends to be a real, real handy way to do it. Um, and so... again, like we've talked about a lot on the show: do you use a preset? Do you not use a preset?
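The "build a chain, queue up the files, walk away" idea Andy describes can be sketched in a few lines. Note that the stage names and track data here are illustrative placeholders, not iZotope's actual API:

```python
# Toy sketch of a batch "module chain": every ISO track runs
# through the same ordered list of processing stages, unattended.
# The stage functions are stand-ins that pass audio through.

def de_clip(audio):  return audio   # placeholder: repair clipped samples
def de_click(audio): return audio   # placeholder: remove clicks and pops
def de_noise(audio): return audio   # placeholder: one adaptive noise pass

# Two gentler de-noise passes in series, as in the podcast preset
module_chain = [de_clip, de_click, de_noise, de_noise]

def render(track, chain):
    """Run one track through every stage of the chain, in order."""
    for stage in chain:
        track = stage(track)
    return track

# Hypothetical ISO tracks as raw sample lists
iso_tracks = {"andy": [0.1, -0.2], "sean": [0.3], "guest": [0.0, 0.5]}
rendered = {name: render(t, module_chain) for name, t in iso_tracks.items()}
print(sorted(rendered))  # ['andy', 'guest', 'sean']
```

The point is just that the chain is data: swap the list once and every file gets the identical treatment.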

I will say, for this, iZotope has a fantastic preset that I've made one significant change to, but that works for this. They have a pre-built module chain they call Podcast Preparation, which is exactly what it says. It is designed for exactly what I'm doing with it. It is designed to take all the isolated recordings for a podcast and get them in a nice happy state, you know, polish all the edges, ready to go and edit.

So, going down the list of what's in that: they have their De-clip, which is, should any of the inputs have gotten clipped at any point during the recording, it'll, you know, do its best to interpolate and repair that clip so it's not egregious. Again, if you have a recording that's clipping the entire time, it's not gonna help you so much.

But if there's an occasional clip, it does a pretty good job of repairing it. Like I said, it's not necessarily gonna be perfect, but we're also just doing spoken word, we're not doing music, so, you know, I'm gonna be less precious about it. And then there's also a De-click plugin, which both catches some of the little clicks and pops that De-clip might not get, but also just anything else: whether there's something tapping on a desk, uh, you know, or there's a connection or a cable issue at some point. It just covers, covers the bases.

Both of those are plugins that are in the chain, they're the first thing in the chain. 90 something percent of the time, the audio runs through them and they don't have to do a thing, and that's great. They're there just as a safety buffer in case we need them. Easier to run them through and faster to run them through than to listen to the whole thing and figure out if we need it or not. 

Then, going down the chain: this is one of the two places where I sometimes make a change to the default iZotope, uh, podcast prep preset. Because by default, they use their Voice De-noise plugin, which (I've talked about this a lot) is incredibly low latency. So I use this a lot on corporate events as my bus insert if I'm not on a Yamaha and don't have DaNSe, or if I don't have a CEDAR available.

Uh, and their default for it is adaptive mode, which means you don't have to pre-train it on noise. It'll just dynamically look for noise. It's got some smart programming in it, so it knows what is speech and vocals and what is not, and it'll try and get rid of anything else there. And they actually have two modes for it.

They have a dialogue and a music mode. So it's in dialogue mode, which is the same way I would use it on a corporate event. And they have two filter types, surgical and gentle. Surgical is: oh my god, there's a lot of noise, it is egregious, and I would rather not have that noise, and occasionally have, like, a little of that, like, robotic, you know, crispiness to some of the speech where it's noisiest, as long as it gets rid of that noise, does it really tightly, and doesn't get rid of the other stuff I need.

Gentle is a little more chill about it. It's like: ah, we're gonna do our best to get the noise out, but if some of it slips through, we're gonna, like, do our best to make sure that the sound doesn't get crunchy and digital and robotic. So, we're using that. So we're in dialogue mode, we're in gentle, and then the threshold, in this iZotope plugin, is basically where it kicks in over what it figures is noise.

So that gives a little bit of buffer to keep it from getting too heavy, you know, just like the threshold on a gate or a comp would, and their default in this preset is a negative-three threshold. I find that does work very well. And then there's a reduction, which is how much the noise gets pulled out.

And that's where it gets interesting in the way that iZotope has built this preset, and it's not a way I would have thought to do it, but it works rather well. Which is: rather than doing one real heavy pass of reduction, they actually have two instances of the plugin, with six dB of reduction each. So it'll do one pass, bring the noise down about 6 dB, and then it'll do a second pass.

And because those are in adaptive mode, they're getting slightly different noise profiles each time. And it's both a little subtler and a little more thorough that way than it would be if you just did like 12 dB down on one pass. That said, on corporate events and live broadcasts, I certainly have done larger reduction amounts in one pass when I just needed to work quick, because these, they're near zero latency. 
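As a quick check on the gain math behind that preset: decibel reductions applied in series add, because the linear amplitude factors multiply, so two 6 dB passes remove the same 12 dB in total as one heavier pass. What differs between the two approaches is the adaptive noise profile each pass sees, not the total gain:

```python
# Serial dB reductions add, because linear gain factors multiply:
# two 6 dB passes == one 12 dB pass in total attenuation.

def db_to_amplitude(db: float) -> float:
    """Convert a gain in dB to a linear amplitude factor."""
    return 10 ** (db / 20)

one_pass = db_to_amplitude(-12)                     # single heavy pass
two_passes = db_to_amplitude(-6) * db_to_amplitude(-6)  # two gentler passes

print(round(one_pass, 6), round(two_passes, 6))  # 0.251189 0.251189
```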

It will add up as you start stacking them. So, you know, try it both ways, but it's an interesting approach to consider, both for podcasts and other things: compare two less heavy-handed passes in serial versus one get-rid-of-it-all-now pass.

So 90 percent of the time, that's what I'm using for processing the audio for folks, in terms of that section of the chain, unless shit gets really noisy, with, like, interruptions. So iZotope has two different, sort of, cousin plugins for this. They have Voice De-noise, and then they have Dialogue Isolate, and you can see they kind of come at it from opposite sides. Voice De-noise does its best to take the voice, get rid of anything that's not the voice, and remove the noise. Dialogue Isolate goes out of its way, no matter what is going on, to pull the dialogue out, forgetting anything else. It won't work with music; it only works on speech.

No matter what is going on to pull the dialogue out forgetting anything else It won't work with music. It only works in speech And it can be very aggressive. What's super interesting? Is that in iZotope 11, they actually added a couple new things. First of all, there is a quality adjustment on it that has a real time mode, which is relatively low latency. 

It's not quite zero. I'd have to look up exactly what it is. But it's close enough that like, if I need to, I can put it as a plugin on a track in my DAW and use it there. If I were really hosed on like somebody coming in from Zoom that had noise and I had them turn their noise isolation off in Zoom and brought it in, or if somebody was coming in, you know, over some other connection format, like a codec or whatever, uh, it would certainly work on a live event. 

I... it would have to be a pretty bad room for me to need to do it on a feed from, like, a normal in-house LAV corporate panel, but it's there. And again, their quality descriptors are "good" for real time and "best" for offline. And that really is what it is. The processing algorithm for real time is pretty good.

I mean, like, we've heard Zoom and other platforms; their noise reduction stuff can work pretty well. It's not the best; basically, when you're running it in real time, you get a little more artifacting most of the time. Offline can take a while: like, for an hour of audio, it could take two, two and a half hours to process.

But it, 90 something percent of the time, does a better job. I will say every once in a while when we've had somebody with like a lot of background noise or like a dog going crazy or, you know, squeaking away with like Sharpie notes in the background, sometimes I'm actually surprised and running it through real time mode works better. 

I think because there's, like, some AI (or at least what marketing calls AI) and, like, learning algorithms in this, you don't always get exactly the same result every time you run it. So it is entirely possible that if I ran it through offline, like, best quality, two different times, I would get slightly different results.

That said, uh, it's sometimes worth trying one or the other if you have the time; but if you don't, best offline mode tends to work the best. There used to be a lot of confusion in RX 10 over what this plugin did versus Voice De-noise. Initially, it did de-reverb, and it did not really de-noise other than, like, as a side effect. And that's getting a little bit into the weeds; it's getting a little pedantic, and somebody from iZotope would have to explain exactly what the difference is. But you didn't have control over noise reduction in it. In RX 11, they basically said: okay, everybody wants this to be Voice De-noise turned up to 11.

So we're gonna make it that. So they added... there's basically three faders in it now: there's voice, there's reverb, and there's noise, and you can adjust each of those individually. So when I do have to resort to this, it's either when there's lots of background noise, or when somebody is in a horrendously reverberant space and it's just, like, it's killing us.

So typically what I'll leave it set on is: the voice is at 0 dB; I want the voice to come through full. The reverb is down about 12 dB, so it pulls the room back, and I'll listen and adjust that if I need to, but typically that does enough to, like, make a room not sound totally dead, but get rid of any egregious room that, that's killing us, like when somebody is recording from, like, a dressing room in an arena, or, like, a hotel room that doesn't have any deadening in it.
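Conceptually, that stage works like a separate fader on each estimated component of the signal. Here's a toy numeric sketch of the idea (the decomposed values are made up, and this is not how the plugin is actually implemented), with voice at 0 dB and reverb 12 dB down:

```python
# Toy model of a source-separation fader stage: split the signal
# into voice/reverb/noise estimates, gain each, and sum them back.
# Component values below are invented for illustration.

def db_to_gain(db):
    """Convert dB to linear gain; None means fully muted."""
    return 0.0 if db is None else 10 ** (db / 20)

# Hypothetical decomposed sample values at one instant
components = {"voice": 0.80, "reverb": 0.10, "noise": 0.05}

# Voice full, reverb pulled back 12 dB, noise muted entirely
faders_db = {"voice": 0.0, "reverb": -12.0, "noise": None}

output = sum(components[k] * db_to_gain(faders_db[k]) for k in components)
print(round(output, 4))  # 0.8251
```

The reverb estimate survives at about a quarter of its level (so the room doesn't sound totally dead), while the noise estimate is gone.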

And then noise, in this case, this is where I go up to 11: I have that turned all the way down. I want any noise that's in the track to go, because if I'm resorting to this plugin, like I said, it's because there's, like, typing or something going on. So here, I think, is a good point where I'm gonna kind of drop a marker in, so I can add in a little clip of me talking while typing. And I'm also gonna record it on the built-in mic on my laptop. So let's play that, and then I can go from there and show you how the plugin works. So, this is me talking from a bit of a distance into the built-in mic, and I'm gonna sit here and type on the keyboard a little bit, and we're gonna see if that picks up.

And, uh, there's the end of the typing. So, for the sake of y'all listening to it, and balancing levels out, and not having people jump for the volume knob in their car or in their headphones, what you just heard was partially processed. I've done basically the rest of the signal chain except for the Dialogue Isolate to it, just to balance it out.

Um, and we will get to the last two parts of that signal chain in a minute. That said, now having heard it without dialogue isolate, here's that same recording with dialogue isolate. So, this is me talking from a bit of a distance into the built in mic, and I'm going to sit here and type on the keyboard a little bit. 

And we're going to see if that picks up. 

And, uh, there's the end of the typing. So you can see it cleans it up a fair amount. It's, it's pretty impressive, to be honest. So that's the Dialogue Isolate, uh, plugin in iZotope. And that is the last of, like, the heavy-duty, crazy repair stuff that I do before we start editing an episode. Then there's two other parts that I do in that iZotope batch module chain to make my life a little easier.

Uh, the first of those is, iZotope has an EQ Match plugin, and this is really cool. What it allows you to do is either select a preset, or play a piece of audio into it and capture the tonal profile of that, and then it will EQ the original track to try and match that tonal profile. So rather than, like, saying "I want a 6 dB boost", where if I've already got 6 dB I've now got, like, 12 dB of low-end boost.

It'll look for that boost in it, and if it needs to cut, it'll cut. If it needs to boost, it'll boost, and kind of just massage the audio into that profile. I find this really handy on the podcast, where, like, it's a little more automated, less artsy than I would normally go for when, like, mixing other things.

Because, like I said, I'm on an RE20, Sean might be on, like, a 58 one day, and we might have a guest on, you know, like, wired AirPods or a laptop mic. And I want to get them as close as I can. Now, again, this isn't going to make miracles. If the information isn't there in the audio, it's not going to make it appear out of nowhere. 
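The core move of a matching EQ, deriving a per-bin gain curve from the ratio of a reference spectrum to the track's spectrum, can be sketched with a toy FFT example (illustrative only, not iZotope's algorithm; a real matching EQ smooths the curve over bands and applies much gentler correction):

```python
import numpy as np

# Toy spectral matching: measure magnitude spectra of a reference
# and a target, derive a per-bin gain curve, and apply it so the
# target's tonal balance matches the reference's.

rng = np.random.default_rng(0)
n = 4096

reference = rng.standard_normal(n)   # stand-in for the "good" mic
# Duller source: the same kind of noise through a simple lowpass
target = np.convolve(rng.standard_normal(n), [0.5, 0.5])[:n]

ref_mag = np.abs(np.fft.rfft(reference))
tgt_spec = np.fft.rfft(target)
tgt_mag = np.abs(tgt_spec)

gain = ref_mag / (tgt_mag + 1e-12)   # per-bin matching gain curve

matched = np.fft.irfft(tgt_spec * gain, n)   # re-shaped target
matched_mag = np.abs(np.fft.rfft(matched))

# After matching, the magnitude spectrum equals the reference's
print(np.allclose(matched_mag, ref_mag, atol=1e-6))  # True
```

The hypothetical per-bin version is exact but harsh; band-smoothing the `gain` curve is what keeps a real matching EQ from chasing noise.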

Like, there are tools in iZotope that can try and, like, recover spectrum that isn't there. It's not quite there yet; I don't use it, and I don't think we need it for that. But this at least gets things close. So I'll still do EQing further down the chain to just adjust things to taste per episode, particularly for guests; at this point, Sean's and my EQ stays pretty much the same. But again, this gets things closer to the ballpark.

Then the last part of my pre-processing chain is iZotope's Loudness Control plugin, which is exactly what it says.

It goes through a piece of audio, measures the integrated loudness in, in LUFS, LKFS, uh, again, we won't go down the rabbit hole of the difference between the two, short version, basically there isn't one at this point, um, but that is the measure of the perceived loudness of the track. And I do a very, very light touch to that at this point. 

just to normalize files. It's a little smarter than just normalizing to, like, a peak or RMS level, because it's adapting, and it's going for an even loudness throughout. But, like, I'm not going for broadcast or podcast levels just yet, because I don't want everything to be completely crunchy, uh, and squished.

So I'm running it, like, in the negative 20 to negative 24 LUFS range. Loudness Control actually has a preset in it for audiobook delivery, and I tend to use that preset because it does a good job of getting things in a reasonable neighborhood without going completely over the top. That gets things into a place where I can bring them into the DAW, not have to do a lot of riding of levels and adjusting, and have them pretty close.
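The arithmetic underneath that kind of loudness normalization is simple, even though a tool like Loudness Control applies it adaptively over time rather than as one static gain. A rough sketch, with a made-up measured level:

```python
def normalize_gain_db(measured_lufs, target_lufs):
    """Static make-up gain (in dB) that moves a file's integrated
    loudness onto the target."""
    return target_lufs - measured_lufs

def db_to_linear(db):
    """Convert a dB gain into a linear multiplier for the samples."""
    return 10 ** (db / 20)

# Hypothetical file measured at -27 LUFS, aiming for the middle of
# that -20 to -24 LUFS pre-production neighborhood.
gain = normalize_gain_db(-27.0, -22.0)
print(gain)                           # 5.0 (dB of boost needed)
print(round(db_to_linear(gain), 2))   # 1.78 (linear multiplier)
```

A peak or RMS normalize does the same multiplication, just against a measurement that correlates much less well with how loud the track actually feels.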

Um, before I started using that, I would use Waves Vocal Rider to do it, and I actually still have that in my template here, and that still rides levels a little bit for quiet parts versus loud parts. But comparing the way that works versus the way Loudness Control works, I've just found Loudness Control is a little subtler, a little more aware of the long term.

Vocal Rider is going for, uh, a shorter period of, like, matching a level, and just trying to match that level continuously. Loudness Control is balancing out over the course of the entire episode for that track. So I, I personally feel it gives it a little more dynamic range without just making it too flat.

'Cause I don't want to go over. I will say, as far as this podcast goes, uh, that if I need to err on the side of slightly over-processed versus things sounding consistent, I'm going to err on consistency, because of what the end product is. Um, so, you know, that's kind of where we land on that.

And that brings us into the DAW. Into the DAW, into the DAW, into the DAW. Sorry, that one was for the musical theater kids out there. Uh, love y'all. Um, and I guess a quick word on DAWs. Um, honestly, like, I'm not gonna get into a DAW war here. I know 90 percent of y'all use Pro Tools. Awesome. Like, I've used Logic in the past. 

I personally use Reaper. Um, I find it works a little better for the way my brain goes. I do pay for it, despite the meme, and if you use Reaper, you should; it's such a bargain. Uh, I find the way it works, its customizability, works really well for me. Uh, there's a lot of cool features in it that other DAWs don't have, or don't have as smoothly.

Um, so, that's where I've landed. Uh, you know, you do you, though. Um, if anybody wants to, like, hop into the Discord, I can talk about more of the stuff I personally like about Reaper; happy to do it. Not gonna bore y'all here with that. Um, so, once the audio's loaded in, we've got a template in the DAW, in Reaper.

That has, like, the ad reads at the beginning of the episode, because those only update, you know, every few months or so. So those get built in, the music in, the music out, and then a little bit of magic we'll get into at the end. Uh, and then it's got template tracks for me, for Sean, and for a guest.

And, you know, I'll duplicate that for however many guests we have, drop that pre-processed audio in, and then from there, there's a little bit of a signal chain that goes on in, uh, in Reaper. Although, again, a lot of it is bypassed at this point because of that pre-processing I've built over time. Uh, there's a little bit of a Waves C1 gate on there just as a backstop for, for super extraneous noise.

Again, it's really there as a security blanket at this point; the denoise is doing its job. I could delete that plugin, and I don't think any of us would notice it wasn't there anymore. Um, I don't go crazy with EQ. I use the built-in ReaEQ from Reaper, and ReaComp for a light bit of compression. Uh, a little bit of, you know, fixing EQ if I need it past that EQ Match.

And then, uh, Sibilance, mono, which is a Waves plugin. I have gone through every de-esser and sibilance plugin I can find. iZotope has a pretty great one; Waves has a couple. At this point, Waves Sibilance is the one I've landed on. Of course, as I keep saying "sibilant, sibilant, sibilant" over and over again, you'll, you'll see how it works for yourself.

I try not to go crazy with it. That's the one that I think sounds the most transparent; your mileage may vary, is what it is. Uh, and then there is a light touch of Waves C4, their multiband compressor. Um, I started with their, like, voiceover preset, and I've tweaked it and dialed it back to be a little gentler, because we're not in that voiceover world.

So we don't need quite that level of processing, but for spoken word, that preset is the closest starting point I found. And I use it so that, again, if there's a proximity effect happening on somebody, it'll help tame it a little bit. Likewise, if somebody leans in or out of the mic, it can help, you know, push the high end back in if we need it. And again, EQ Match does 90 percent of that; this is just adding a little bit of polish on the way out, just to make sure. And like I said, I do have Vocal Rider in there, which I mentioned earlier. It's basically an automated fader based off a target level: you give it a range that it's allowed to adjust in and a target to hit, and within that range it'll ride the fader, uh, to try and hit that target.

As I said, I rely on that a lot less now, because I've started using Loudness Control to do some of that in pre-production. Uh, I think that works a little smoother, but still, there's moments where, like, one of us gets a little quieter than the other, and because I have Loudness Control set to leave those dynamics in, a little bit of that fader riding can help balance it out, make it feel a little, a little better balanced.
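The range-plus-target behavior described above can be sketched in a few lines. This is just the concept, not Waves' actual detector or ballistics, and the levels here are invented:

```python
def rider_gain_db(short_term_db, target_db, max_adjust_db=6.0):
    """One step of a Vocal Rider-style fader: move toward the target
    level, but never by more than the allowed range."""
    needed = target_db - short_term_db
    # Clamp the ride to the permitted window around unity gain.
    return max(-max_adjust_db, min(max_adjust_db, needed))

# Quiet passage: needs +9 dB, but the range caps the ride at +6.
print(rider_gain_db(-31.0, -22.0))  # 6.0
# Slightly hot passage: a gentle -2 dB trim.
print(rider_gain_db(-20.0, -22.0))  # -2.0
```

The clamp is what keeps the rider from fighting the dynamics the earlier loudness pass deliberately left in.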

Um, so I do leave that in there. It's not really doing a lot. The last step in, in, like, the dynamics chain is, uh, ReaLimit, just the built-in limiter in Reaper. And again, having gone through Loudness Control, it's really just an emergency backstop. It's not really doing much of anything at this point, but I don't take it out, because it doesn't hurt anything to leave it in there, and I'd rather it be there to catch something if it needs to.

And then finally, if I need it, I use RX 11 Breath Control, and I put that as a plug-in here. I could probably do that in that pre-production chain, but I like having it in here after all the other EQ and dynamics stages, so that if there's breaths that slip through before things are fully processed, this is getting them at that very last end, just catching, you know, any gasps for air or, you know, breathing.

It catches whatever slips through the rest of the chain. That's the input chain. It's pretty straightforward. Like I said, the bulk of the processing happens in pre-pro; there's a little bit that happens in the DAW, and a lot of that is, is just kind of there as a backstop. That's kind of the gist of it. Uh, then everything goes to a bus, uh, for all the speech.

And that's where things get a little interesting. Because we're doing enough cleanup on stuff, and we're in enough different rooms, that stuff can get dry, and, like, pauses can get a little awkward when you're listening, and you're like, did the podcast just end? Dead air gets weird. Like, I'm sure you've listened to podcasts where there is just dead air between speech, and it can get disconcerting sometimes.

So early on, I started adding in just a subtle bit of room tone, looped underneath everything. It's, like, a 20-minute track that loops; so, like, y'all, we're about 16 minutes into the second loop of it right now. Uh, and that just gives a little bit of that air to it. And I actually generated that room tone in a funny way, 'cause, uh, iZotope also has an offline, uh, uh, processor that will allow you to match room tone and match noise

To a track, so you can pull the tone off of one room and put it onto another. If I wanted to go bonkers, I could strip out all the noise with Dialogue Isolate and then use that to put us all in the same room. I don't need to do that; that's overkill for what we're doing here. What I figured out was that if I take a 20-minute file of silence and run it through that plug-in with a room tone sample from a podcast studio (which I did manage to find, just a podcast studio with a good semi-live room tone),

you end up with 20 minutes of room tone, because that's what running silence through a room tone matcher gives you. Built that into a loop, and here we go. Uh, that's kind of where I leave it at this point. What I used to do in some early episodes, um... oh, and one thing first, credit: shout-out to Tyler, who was the one who suggested adding that early on, after the first couple episodes.

Love you, bud. Great suggestion there. Um, I do think it's made a difference in, in the podcast product, uh, and I hope everybody else does too. Uh, the other thing I tried when I added that was putting a very, very subtle touch of reverb onto all of our vocals, to kind of bring us into the same room.

Um, and, and I would also tend to strip things very dry with denoise and Dialogue Isolate, uh, if there was a, like, particularly prominent room reverb. And over time I've realized that even at very low levels (I'm talking, like, almost a negative 40 level on the verb return), even there, it got to be a bit much. But also, as we said earlier, I don't think anybody's under the illusion that we're all in the same room. Like I said, a coherent room tone underneath it, it's not so much to put us in the same room, so much as to provide just a, a solid, like, background basis for those dead spots between the speech.
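Mechanically, the room tone trick is just a quiet loop summed under the dialogue so the gaps never go fully dead. A toy sketch with made-up sample values (floats in the -1 to 1 range):

```python
import itertools

def mix_with_room_tone(dialogue, tone_loop, tone_gain=0.05):
    """Sum a short, repeating room-tone loop (at low gain) under a
    longer dialogue track, so silent stretches keep a bed of air."""
    tone = itertools.cycle(tone_loop)  # repeat the loop indefinitely
    return [d + tone_gain * next(tone) for d in dialogue]

# Dialogue with a dead-air gap in the middle; after mixing, the gap
# carries quiet tone instead of pure digital silence.
dialogue = [0.2, 0.1, 0.0, 0.0, 0.0, 0.0, 0.3, 0.2]
tone = [0.1, -0.1, 0.05]
print(mix_with_room_tone(dialogue, tone))
```

In the DAW this is just a looped audio item on its own track, but the sum is all that's happening under the hood.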

Putting us all in the same reverb, though, sounds weird, because you know we're not in the same room. We're on very different microphones, we're in very different places. It's creating an illusion that isn't reality, and it doesn't match expectation. So I've stopped doing that. But if, like a year or so ago, there were episodes that sounded really roomy, really reverberant...

That's why, and that's why that went away. Um, that said, there are situations where you may want to do that if you're editing a podcast. I mention it because it is a useful trick for certain situations; it just turns out it's not really the right, uh, trick for this one. Um, but, like I said earlier, there's a little, a little, you know, kind of magic going on to, to tie it all together coherently.

And it's, it's that room tone loop, and formerly that reverb. Uh, then the last thing we do, which is really important to me, is loudness leveling for the podcast as a whole. So, there are different standards, depending on what platform you listen on, for what loudness target you should be going for. Uh, Spotify and Apple Podcasts tend to be the two biggest ones, and they, they differ slightly from each other, but overlap, uh, you know, closely enough that if you hit one target, you're close enough to the other to be fine.

Um, and so I use the Apple Podcasts target, which is, uh, negative 16 LUFS. This is where that one note I mentioned earlier, that not everybody follows those standards, comes into play a little bit. So, generally, if podcasts are following those standards, and most of the, like, better produced podcasts do ("better" isn't a fair term; most of the more produced podcasts, like 20,000 Hertz, 99PI, any of the NPR podcasts, any of the major podcasts), they will fit somewhere around that loudness profile, which will keep you from having to dive for the volume control.

A lot of other smaller-audience podcasts don't necessarily follow that, don't do mastering like that. Uh, back in the early days, you know, Signal to Noise wasn't necessarily mastered to that level. So, like, that's why you'll see a level jump between some of the early episodes and, like, the last year or so of episodes. Um, and that's why occasionally somebody will reach out and say, hey, Signal to Noise is a lot louder than the other podcasts

I listen to; what's up with that? And the answer is that we're hitting those standards that the, the podcast services tend to ask, and, or suggest, we hit for consistency. Not every podcast does that. So if you have other podcasts you listen to that are significantly quieter, and lead you to, like, reach for the volume knob compared to this podcast, that's fine.

Maybe reach out to them and very politely and gently suggest that they, uh, master to the loudness specs that the services suggest. So that's kind of what I have to say on that, on that subject. And again, you could use iZotope Loudness Control to do that; Reaper can actually do loudness processing on its render, too.

But I like to be able to monitor the loudness as I go. So there's a plug-in company called Sound Radix, and we've talked about them before. Um, I forget whether it was Aram or Mike Curtis who turned me on to this; I think it was Mike. They make a compressor called POWAIR, P-O-W-A-I-R.

It's a compressor that you set with a LUFS target, and it's awesome. I can set this, though it's, it's got a little too much latency to use on, like, a live feed to a room, 'cause it's, like, in the 30 to 50 millisecond range, depending on various variables. Uh, but for editing a podcast, it works great, 'cause I can listen to it at final loudness.

Um, and it renders out at final loudness, which saves me an extra step of rendering the loudness downstream. The one tricky thing about POWAIR is that it does not have a, a true peak limiter at the end of it. Um, some of these other loudness processors that have more latency do have that built in, and when you're doing loudness processing, that's the thing you need to be really conscious of: not just limiting, but it has to be true peak limiting. Because basically, you can get what's known as inter-sample clipping, where two adjacent samples don't clip, but the movement that happens between them, when the audio waveform is reconstructed, causes a peak that's louder than either of those two samples. Again, somebody out there listening is screaming at my horrible explanation of it; feel free to drop into the Discord and explain it better.

Please. I'm not an expert in that kind of digital audio processing, so I'm not, I'm not claiming to explain it perfectly, but you need something there. So I end up just using the built-in ReaLimit plug-in in Reaper. It has a true peak mode, so you set a true peak limit, you know, just a little below clip, that keeps everything happy. Render that out, render to MP3, and away we go.
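If you want to see inter-sample peaks for yourself, here's a crude pure-Python demonstration. It reconstructs between-sample values with truncated sinc interpolation, which is far rougher than a real ITU-R BS.1770 true-peak meter, but it shows a signal whose samples all sit around 0.707 of full scale while the reconstructed waveform swings up to roughly 1.0:

```python
import math

def true_peak_estimate(samples, oversample=8):
    """Estimate the reconstructed peak by evaluating a truncated sinc
    interpolation of the samples at `oversample` points per sample."""
    n = len(samples)
    peak = 0.0
    for i in range(n * oversample):
        t = i / oversample
        acc = 0.0
        for k, s in enumerate(samples):
            x = t - k
            # sinc(x) = sin(pi*x)/(pi*x), with sinc(0) = 1
            acc += s * (1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x))
        peak = max(peak, abs(acc))
    return peak

# A full-scale sine at a quarter of the sample rate, phase-shifted 45
# degrees: every sample lands near 0.707, but the waveform between
# samples reaches about 1.0, so a sample-peak meter under-reads ~3 dB.
sig = [math.sin(2 * math.pi * 0.25 * k + math.pi / 4) for k in range(64)]
sample_peak = max(abs(s) for s in sig)
print(f"sample peak {sample_peak:.3f}, estimated true peak {true_peak_estimate(sig):.3f}")
```

A true-peak limiter effectively does this kind of oversampled measurement before deciding whether to clamp, which is why a plain sample-peak limiter can still let clips through.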

The one last bit I'll share is, and this is one folks are either going to go "oh yeah" at, or laugh at me that I'm an idiot that it took me so long to realize this: for the sake of saving download size, for the first couple months of editing Signal to Noise when I started hosting and producing, I did it in mono. Because I'm like, it's a podcast, we don't need stereo, we'll do it in mono, cuts the download size in half. Great, awesome.

And then we did the holiday episode where we had, uh, the Livesound Bootcamp, uh, dudes, and me and Sean, all together. And that was the one where, as I was editing it, I was like, you can't tell the voices apart. So that was the very first episode I did in stereo, and I've never gone back. Because, like, the second I did that episode in stereo and started doing the next few, I literally got emails from a bunch of y'all being like, oh my god, thank you, like, when two people are talking at the same time, I can understand them both now.

So, again, we think of it for music a lot; we don't necessarily think of it for speech. Stereo is your friend. I don't go completely crazy, but typically on a three-person episode with one guest, I'll be panned 20 percent to one side, Sean will be panned 20 percent to the other, and I'll put the guest in the middle.

That way you can pick all our voices out. Obviously, this episode, I'm the only one talking, so I'm panned center. And then, if we have multiple guests, typically I won't pan me and Sean out wider than about 20, maybe 25 percent, but I'll just split, you know, however many guests we have evenly in between there, so that you can pick voices out.
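The panning scheme described above is easy to sketch: hosts out at roughly 20 percent, guests spread evenly in between, and a standard constant-power pan law to turn a position into left/right gains. The helper names here are mine, not from any plugin or DAW:

```python
import math

def pan_gains(position):
    """Constant-power pan law. position runs -1.0 (hard left) through
    0.0 (center) to +1.0 (hard right); returns (left, right) gains."""
    angle = (position + 1) * math.pi / 4  # maps to 0 .. pi/2
    return math.cos(angle), math.sin(angle)

def spread_guests(count, limit=0.15):
    """Spread `count` guest positions evenly across +/-limit, keeping
    them inside hosts panned out at about +/-0.2 (20 percent)."""
    if count == 1:
        return [0.0]
    return [round(-limit + 2 * limit * i / (count - 1), 4) for i in range(count)]

print(spread_guests(3))        # [-0.15, 0.0, 0.15]
left, right = pan_gains(-0.2)  # a host panned 20 percent left
print(round(left, 3), round(right, 3))  # left gain a bit above right
```

Constant-power panning keeps the perceived loudness of each voice steady as it moves off center, which matters when you're placing several talkers across a narrow image.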

And that's kind of the last bit of advice I'll give you if you're editing podcasts: when you've got multiple people talking, in this type of conversational podcast, stereo is key. Absolutely your friend. Don't overlook it. Don't try to save that little bit of download, uh, uh, space for folks. It will be far, far better to pan things and give it that bit of separation.

And again, uh, I, I sort of alluded to this earlier: this is very much for, like, this unscripted, conversational podcast style. Like, the narrative podcasts, you know, like, like 20,000 Hertz, 99% Invisible, or that sort of thing, that are very heavily edited, those are different beasts. Some of these tricks are going to work for those, some of them are not.

Some of them won't be necessary, because you don't have people talking over each other the same way that you do in the kind of conversational podcast we're doing. But for this type of podcast, that's what seems to work for me. So, yeah, that's kind of a walkthrough, start to finish, of how I record and edit and process an episode of Signal to Noise.

I hope it's helpful for some folks who find themselves doing this type of stuff. And again, I hope it's helpful for some of you that maybe don't necessarily like something you hear on the podcast; now you know what it is that you don't like, and you can reach out to me at andy@prosoundweb.com and tell me, hey, maybe stop doing that, because, you know, it doesn't sound good.

Ideally, if you can suggest an alternative, I'd love it. And if, you know, enough people suggest alternatives to what I'm doing that, that seem to work, I'll do a follow-up, or, you know, tag a note onto this episode with, you know, some updated tips and tricks that came from you all. Um, that being said, I think that's where I'm going to leave it.

And, uh, we'll catch you next time. Uh, thanks to RCF and Allen & Heath for, uh, keeping the lights on here at the virtual studio, making awesome consoles and speakers, and, uh, letting me, and usually Sean, yammer on at you every week. And, uh, I'm Andy. This is Signal to Noise. ...That is not how we ever wrap up an episode again.

So, that's the pod, y'all.

 

Music: “Break Free” by Mike Green
