LAST WORD: MICHAEL CARNES, PART 2 - AudioTechnology
Michael Carnes, Part 2
Founder of Exponential Audio
Last issue, Michael talked about his time at Lexicon and how he started Exponential Audio, a developer of plug-in reverbs and effects. This issue he tackles the sticky debate of convolution vs algorithmic reverbs, and hardware vs software.
The issue with convolution reverb is that when you take an impulse response of a hall, you’re typically only projecting the impulse from a few places on the stage, capturing it with microphones in only a handful of positions. You get a perfect impulse response from that point source to that microphone at that instant in time.
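Mechanically, convolution reverb is exactly what the name says: the dry signal is convolved with the captured impulse response. A minimal Python/NumPy sketch (the decaying-noise ‘IR’ below is synthetic, purely for illustration):

```python
import numpy as np

def convolution_reverb(dry, impulse_response):
    """Apply a captured room response to a dry signal by direct convolution."""
    wet = np.convolve(dry, impulse_response)
    peak = np.max(np.abs(wet))        # normalise to avoid clipping
    return wet / peak if peak > 0 else wet

# Synthetic stand-in for a measured IR: one second of exponentially
# decaying noise at 48kHz (a real IR would come from a hall measurement).
sr = 48000
rng = np.random.default_rng(0)
ir = rng.standard_normal(sr) * np.exp(-6.0 * np.arange(sr) / sr)

click = np.zeros(sr // 10)            # a 100ms buffer holding a single click
click[0] = 1.0
wet = convolution_reverb(click, ir)   # the click now rings like the "room"
```

Note the cost: every output sample is a sum over the whole IR (tens of thousands of multiplies at 48kHz), which is why long convolutions are usually done with FFT-based partitioned convolution in practice.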
Of course, things change: people move around, humidity changes. Even if you had a perfect impulse response, I defy you to find an orchestra or band you can fit into two or three points. They occupy space; a thousand points of radiation would be understating it.
If you moved a few seats away from the microphone’s position, it would sound pretty much the same, yet the impulse response would be different. What’s happening? The impulse response is a primary expression of what’s going on, but it’s not what you hear.
The human brain and hearing system are seriously into data reduction. Our brain could not possibly cope with hearing individual impulses. We’d have to eat 50 million calories a day just to get the energy into our brain to do it. You think MP3 is bad? A whole lot less gets to your brain from most of the things you hear.
It’s critical we hear early sounds to identify where something’s coming from because that might be something that will eat us. There’s no evolutionary advantage to having a more sophisticated sense of reverb.
Your sense of space is driven by the inter-aural difference: the momentary sensation of positive pressure on your left ear at a certain frequency, negative pressure on your right ear at the same frequency, and whether that changes 10ms later. A convolver could emulate that with enough impulse responses taken from enough points around the room with enough microphones. But why go to the bother? It’s a huge amount of computing for no advantage.
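The arrival-time half of that inter-aural difference can be put in rough numbers. Woodworth’s classic spherical-head approximation (the 8.75cm head radius below is a textbook default, not a figure from the interview) estimates how much earlier a sound reaches the near ear:

```python
import math

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Woodworth's approximation of interaural time difference (seconds).

    azimuth_deg: source angle off the median plane (0 = straight ahead).
    head_radius: assumed spherical head radius in metres.
    c:           speed of sound in m/s.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (theta + math.sin(theta))

# A source 45 degrees off-centre arrives roughly 0.4ms earlier at the near ear
itd_45 = woodworth_itd(45)
```

Differences of a few hundred microseconds are enough for the brain to localise a source, which is why those early arrivals matter so much more than the dense late tail.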
A lot of reverberation is about trying to get a sense of what you really hear, and what matters. It turns out you can express that with a lot less data and a lot less computing power. Which means you can run more reverbs in a mix because you’re not bogging down a processor.
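A classic example of ‘a lot less data and a lot less computing’ is Schroeder’s algorithmic reverb: a few comb and allpass filters, each just a couple of delay-line reads per sample, instead of a convolution over tens of thousands of IR taps. A sketch with illustrative delay lengths and gains (not taken from any Exponential Audio product):

```python
import numpy as np

def comb(x, delay, feedback):
    """Feedback comb filter: y[n] = x[n] + feedback * y[n - delay]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n] + (feedback * y[n - delay] if n >= delay else 0.0)
    return y

def allpass(x, delay, gain):
    """Schroeder allpass: y[n] = -g*x[n] + x[n-d] + g*y[n-d].

    Diffuses echoes without colouring the long-term spectrum.
    """
    y = np.zeros(len(x))
    for n in range(len(x)):
        dx = x[n - delay] if n >= delay else 0.0
        dy = y[n - delay] if n >= delay else 0.0
        y[n] = -gain * x[n] + dx + gain * dy
    return y

def schroeder_reverb(x):
    # Four parallel combs with mutually prime delays build the dense tail;
    # two series allpasses smear the echoes into something diffuse.
    combs = sum(comb(x, d, 0.8) for d in (1687, 1601, 2053, 2251))
    return allpass(allpass(combs, 347, 0.7), 113, 0.7)
```

The per-sample work here is a handful of multiply-adds regardless of how long the tail rings, and every delay and gain is a parameter you can change on the fly.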
In the end, you might want to see a picture of an Abbey Road chamber while you’re doing a mix, but if it doesn’t work in the mix, it doesn’t work. If you want to get a little more space around a bass, you don’t want to dive through impulse after impulse trying to match a picture of what you think it’s supposed to sound like just to tame it at 50Hz. It’s a lot easier to change a parameter. That’s the advantage of algorithmic reverbs: you can tailor them to your material a lot better.
Could I model a room that’s 12 miles across, four miles high, with linoleum surfaces? Yes, but it wouldn’t be useful in a mix. Some of my reverbs will give you ridiculously long reverb times, but it’s just something that falls out of the math rather than a pursued outcome. Some people will dream up something really novel, so you give them a bit of rope to hang themselves with. A lot of the boundaries come from thinking practically; you can’t be infinite.
I’ve been at it long enough that I know the rules fairly well. I’m looking more for atypical stuff, which tends to be bad environments they might need in post — closets or bathrooms — or good exteriors.
My wife isn’t embarrassed any more by what I do. There’s hardly a canyon or hillside I walk by that I won’t clap in and make mental notes about what I heard. Halls are the same way; you shout, you sing, you clap your hands. If there are some percussion instruments in there, you give them a whack if you don’t get caught.
Everyone in the reverb business is way past 90% good. Now we’re shooting for 95% good, 99% good, and so on, to make it absolutely clear there was ‘no artificial reverb used’.
I record a lot of chamber music. There’s no money in it, so a lot of the performance spaces are just awful: trucks going by, harsh art galleries. Most of my mic placement has more to do with eliminating the room than anything else. Once I’ve cleaned it up (kudos to iZotope) I’ll pop some of my own reverb on top to bring a sense of space. My favourite thing is when musicians say, ‘I love how you captured what I heard in that room,’ when there’s none of the original room in there.
Confirmation bias says whatever you believe is what the facts are going to support, no matter what they really say. An awful lot of people are invested in saying hardware reverb is better than software reverb. Firstly, hardware reverb is just little processors running software. People will argue, ‘there’s something about the converters.’ Yeah, there was, 25 years ago. Even crap converters are really good now.
It partly boils down to workflow. They’re moving faders on a desk, they’ve got a LARC, and that’s the way they’ve worked for 20 or 30 years. If they’re making good stuff, more power to them.
If you had an algorithm that demanded the power of an entire CPU, and a double-blind test where people could agree that one algorithm is demonstrably better for a given style (because pop music demands a different vocabulary from classical or folk music), well, okay, that hardware might be better. A few years later, a software reverb will take care of that just fine.
Nevertheless, I’m unaware of any algorithms with that level of demand. I have a few friends in the business on the hardware side who insist they need that level of power and would find it very difficult to port their algorithms to a computer. I’m not sure that’s the case, but they’re doing fine.
Any one of my reverbs uses considerably more power than any reverb Lexicon has made in its entire history. However, I can take an eight-core ‘trashcan’ Mac and run a couple of hundred instances at once. That tells you how fast technological progress really goes. Why go back?