Issue 60



February 27, 2015


Joe Hayes has spent 20 years refining a studio monitor system that suits the way our ears work, and it doesn’t require a custom ear mould.

Review: Mark Davie

It’s mid-morning at AudioTechnology HQ, and Joe Hayes rocks up with his latest invention in tow. He surveys the office’s mid-century bay windows, hardwood floor, timber desks, adjoining bass-trap of a kitchen and says, “This’ll be perfect” as he starts to pull out his sub-sat studio monitor system. Hardly perfect; the office’s only redeeming acoustic treatment is a bookshelf of old magazines, and the desk clutter I like to call ‘random diffusion’. Any other studio monitor manufacturer would laugh, cry, or promptly do a 180 and get back in their hire car. It’s definitely not a serious monitoring environment, and to be clear, we don’t use it as one. But to Hayes, it was just another emulation of a home studio’s crappy acoustics, and he could hardly care less.

See, Hayes isn’t worried about your room. His monitor placement guide doesn’t come with a preamble on room acoustics or diagrams pointing out angles of incidence. He’s got a whole different way of thinking about studio monitors and diffusion; a relationship he’s been patenting since the ’90s.

The general gist is this: rather than adjust the acoustics of your space to try and flatten out its frequency response at your listening position, why not just get rid of the room’s acoustic altogether?



We’ve reviewed Hayes’ inventions before. One of our mainstay reviewers, Brad Watts, still has an early pair of Spatial One nearfields. Hayes’ new iteration under the Acoustic3D company name is called the Emergence AS8 Pro S, which I’m not convinced is a better name than Spatial One.

It’s a sub and satellite combo, with conspicuously Independence Day spacecraft-looking speakers. As you can probably tell just by looking at it, that is indeed an angled quadratic residue diffusor leering over each satellite’s driver. But it’s not just any quadratic residue diffusor, it’s a correctly tuned one, according to Hayes. What he means is, Schroeder’s original concept treated sound as if it had a planar wavefront, but sound radiates from a source with a spherical wavefront, like a ripple in a 3D pond. By Hayes’ calculations, your back wall diffusor would have to be at least 6m away from a source to have the desired effect. So putting a QRD right at the speaker’s mouth without doing something to account for this curved wavefront would be ineffective. On this design, he’s compensated accordingly to diffuse across a quoted range of 1.4-20kHz.

So why diffuse at the source? Well, it’s a good question and one that requires a re-think about how our ears work.


I was re-introduced to the story of Daniel Kish not long ago, the blind man who can ride a bike by echolocating with clicking noises. Well, it’s really his gimmick to entice morning shows into booking him to talk about what he really cares about. Kish’s crusade is to stop institutions telling blind people they can’t see. That’s right. He says blind people can see, if only we’d change our expectations surrounding blindness. And it turns out, he’s right. When blind people train themselves to echolocate, the part of the brain they retrain is the primary visual cortex. It lights up like a Christmas tree as they ping out a car, tree, recording console, whatever. They see, just with a different methodology. After all, what we interpret as eyesight or vision is really only our visual perception of what’s going on around us based on processing the information contained in visible light.

Hayes sees acoustics in a similar way. As much as he hates statistics, he reckons acoustics is all about stats. “Perception is the probability of what’s going on,” he says. And with normal studio monitors we create a perception problem. First, there’s the recorded acoustic, which is a sound source plus a bunch of statistical properties, including the number of reflections per second that might influence the general tonal quality of the reverb. “When you put that into your listening room,” says Hayes, “which has a whole new bunch of statistics itself, you end up with two sets of stats that the brain just can’t really work out.” So, to solve this mash-up of acoustic statistics, we try to adjust the response of our listening environment rather than ‘fix it at the source’. Hmm, where have we heard that before?

Hayes has spent the last 20 years trying to prove why flattening the listening environment is a bad idea.

But back to this idea of perception and how we hear. When Hayes finished university, he says the hardest thing he had to do was unlearn much of his acoustics training. In 1984, Jean Morlet modified physicist Dennis Gabor’s work to come up with the Morlet wavelet, and later proposed it could be useful for analysing music. When Hayes came across Morlet’s work it was all pretty recent, but it helped inspire his invention’s first iteration in 1990. A Fourier transform, one of the more common acoustic measurements, receives a signal in a window of time and gives you a picture of what happened in that window. So, you might have had 10kHz appear at some point, you just don’t know when. Morlet wavelets can be scaled to tell you precisely when 10kHz occurred in that window of time. “Finding out when a flautist plays a C sharp is the kind of thing you can do with a Morlet wavelet,” says Hayes. “You can set up the scale of a Morlet wavelet, so when you put a signal through, it only stands up when the flute plays a C sharp. They de-compose the signal to when that condition is met, then suddenly turn on, no matter how much signal-to-noise there is. And they have a nice property in that they’re perfectly time-aligned. So there might be latency, but the standing up and down of events is perfectly time-aligned.”
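That ‘stands up only when the condition is met’ behaviour is easy to demonstrate numerically. Here’s a minimal sketch in plain NumPy, with made-up signal parameters (it’s an illustration of the principle, not anything from Hayes’ actual analysis): a 10kHz burst buried in noise is convolved with a complex Morlet wavelet scaled to 10kHz, and the envelope of the result pinpoints when the burst happened.

```python
import numpy as np

fs = 48_000                      # sample rate (Hz), illustrative
t = np.arange(0, 0.1, 1 / fs)   # 100ms of signal

# Noise with a 10kHz burst between 40ms and 60ms.
rng = np.random.default_rng(0)
sig = 0.3 * rng.standard_normal(t.size)
burst = (t >= 0.04) & (t < 0.06)
sig[burst] += np.sin(2 * np.pi * 10_000 * t[burst])

# Complex Morlet wavelet scaled to 10kHz: a sinusoid under a
# Gaussian envelope lasting a handful of cycles.
f0, cycles = 10_000, 8
tw = np.arange(-cycles / f0, cycles / f0, 1 / fs)
sigma = cycles / (2 * np.pi * f0)
wavelet = np.exp(-tw**2 / (2 * sigma**2)) * np.exp(2j * np.pi * f0 * tw)

# Convolving gives an envelope that 'stands up' only while 10kHz is
# present, so it tells you *when* the event happened, not just that
# 10kHz appeared somewhere in the window.
envelope = np.abs(np.convolve(sig, wavelet, mode="same"))
peak_time = t[np.argmax(envelope)]
print(f"10kHz detected at {peak_time * 1000:.1f}ms")  # inside the 40-60ms burst
```

A Fourier transform of the same 100ms window would report the 10kHz energy just as clearly, but with no indication that it lived only in that 20ms burst.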

And Hayes sees this as a key to understanding how our brains perceive sound. Cochlear implants work on a similar principle: they recognise that a certain frequency is happening at a certain time, and transmit a jolt of electricity into the basilar membrane. Because when you think about it, our basilar membranes are set up to decode sound in a similar way. Got a low frequency coming through? Well, it’s only being picked up by the hairs right down at the skinny end of your cochlear spiral. Likewise, high frequencies are being decoded right at the throat of the horn. Hayes has a suspicion that the part of the ear you can see, the pinna, is the wavelet encoder, which gives you the directionality as well as the frequency. It would explain why cochlear implants aren’t useful for spatial cues. But that’s little more than a strong hunch at this stage.


In 2001 I reviewed a pair of monitors based on the same design principles as the Emergence AS8-Pro S system. The designer, Joseph Hayes, had spent quite some time setting up the monitors and associated subwoofer in my studio, but in all honesty, this was hardly required as the Spatial One system (as it was then branded) sounded brilliant wherever you placed the units. The highly irregular design reminded me of a monitor I’d seen during the 1980s (the Omni F80), which, as it transpired, was also designed by Mr Hayes. The monitors incorporated ‘CHAOS Speaker Technology’, whereby the upper frequency drivers were directed toward a diffusor just like the Emergence system.

I was immediately hooked on the design and the stereo image. The Spatial One monitors ended up becoming my own, and to this day still function as my everyday monitors. For all-day listening and stereo imaging wherever you’re sitting, they are extremely un-fatiguing and a pleasure to listen to. They’re the monitors I switch to when I need to ‘recentre’ my hearing, and they sit between my near/mid field monitors and my extremely near-field Auratones. While I didn’t find the Emergence system as much to my liking as the Spatial Ones, it does provide a very enjoyable, and rather wide, ‘sweet spot’ and the same uncanny imaging.


So, where does that put us? Well, Hayes sees that a quadratic residue diffusor, when correctly tuned, “produces a ball of scaled wavelets that when added together have a flat response, are perfectly diffuse, and have no statistics.” In tests in anechoic chambers, if you shoot 6kHz at a QRD, you’ll get a nicely distributed petal pattern. Do the same at 8kHz, and you’ll get a different petal pattern. Basically, the QRD is de-composing the signal into scaled Morlet wavelets. So by firing your source into a tuned QRD, you’re de-composing the signal straight off the bat (across the QRD’s operating range). The result is supposed to be akin to an acoustic hologram, where the speaker satellites reconstruct the acoustics and depth of the original recording more accurately than a conventional nearfield set. Instead of wrestling with two sets of statistics — the set from the original acoustic plus the set from your listening environment — you only have one.

The second benefit of this de-composition, according to Hayes, is that it theoretically has zero autocorrelation. There’s no really easy way to explain autocorrelation or its uses, except that it takes a signal and cross-correlates it with itself, shifting a copy in time to find similarities. Hayes believes: “We seem to have semi-objective proof now, that when something with zero autocorrelation reflects off a wall, there is immediately a reversal effect, and it self-extinguishes. This is the anti-echoic effect of the speakers.” It’s completely different to how we usually think sound will respond when it hits a boundary — comb filters, ugly reflections, the lot.
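For the curious, the textbook definition is simple to play with. This sketch (plain NumPy, illustrative signals only, not a recreation of Hayes’ measurements) compares the autocorrelation of a periodic tone, which keeps matching its shifted copy every cycle, with that of noise, which shows almost no similarity at any non-zero lag: the property Hayes is attributing to the diffused output.

```python
import numpy as np

def autocorr(x):
    """Normalised autocorrelation: slide a copy of x past itself and
    measure the similarity at each lag. Lag 0 is always 1.0."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[x.size - 1:]
    return r / r[0]

fs = 8_000  # one second of signal at 8kHz, kept short for speed
rng = np.random.default_rng(1)
noise = rng.standard_normal(fs)                      # diffuse, noise-like
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # periodic

# The tone still strongly resembles itself 1000 samples later;
# the noise has essentially zero resemblance at the same lag.
print(abs(autocorr(tone)[1000]))   # large
print(abs(autocorr(noise)[1000]))  # near zero
```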

It’s a big claim, but the simplest experimental proof he could offer was to record a hand clap or impulse and play it back through the AS8 speakers into a room with a nasty acoustic. The diffused source from the speaker will have zero autocorrelation and self-extinguish at the boundaries, while a live hand clap will flutter as normal.



After listening to these speakers in different environments for a while I decided to go ahead with the hand clap test. I took my dead hand clap recording into my eldest son’s room. He’s only a little tacker, so all his stuff is miniaturised and hasn’t quite grown into the size of his bedroom. It’s pretty square, has mostly hard surfaces, and flutters like a moth stuck in between your door and fly screen.

I set up the AS8 Pro S system alongside a Dynaudio powered monitor and got to work.

This is a hard test to get right. For one, a big part of the sound of a hand clap is the room reflections. So recording a dead clap through mic, preamp and interface, then playing it back through a speaker, is difficult to nail — it’s hard to know what you’re trying to match exactly. Anyway, I pressed on and played a variety of material through both speaker sets, including different hand claps at different levels, synthesised dry hand claps, and other synthesised impulses to see what effects I could trigger in the room acoustic. Each time, I would set the levels of both speakers to read a target dB SPL at one metre. I was mostly concerned with hearing the difference in response between the two speakers — the AS8 Pro S, and the Dynaudio standing in as the conventional nearfield monitor — rather than trying to match the recorded hand clap to me physically clapping my hands together.

I did notice a bit of difference in the reflections coming back from both sources. The Dynaudio’s were, overall, more pronounced. But it definitely wasn’t the raging success I — must confess — was hoping for. It brought out a weakness in the design that’s obvious to anyone with a passing knowledge of QRDs: it’s pretty much impossible to build a completely broadband QRD. To get down to lower frequencies the wells have to be very deep, while the higher frequency cutoff is determined by the width of the wells — skinnier goes higher.
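The numbers behind that trade-off fall straight out of Schroeder’s standard sizing. Here’s a rough sketch using the textbook planar-wavefront formulas, with plausible but invented dimensions (not Acoustic3D’s corrected spherical-wavefront design, and not the AS8’s actual specs):

```python
# Textbook (planar-wavefront) QRD sizing, to show why broadband
# diffusion gets impractical. Depths follow the quadratic residue
# sequence n^2 mod N scaled to the design wavelength.
C = 343.0  # speed of sound, m/s

def qrd_wells(n_prime, f_low, well_width):
    """Well depths (m) for an N-well QRD with design frequency f_low,
    plus the approximate upper diffusion limit set by well width."""
    depths = [((n * n) % n_prime) * C / (2 * n_prime * f_low)
              for n in range(n_prime)]
    f_high = C / (2 * well_width)  # skinnier wells diffuse higher
    return depths, f_high

# A 7-well diffusor aimed at 1.4kHz with 12mm-wide wells.
depths, f_high = qrd_wells(7, 1400.0, 0.012)
print(f"deepest well: {max(depths) * 1000:.0f}mm, "
      f"diffuses up to roughly {f_high / 1000:.1f}kHz")
```

Run the same sums for a design frequency of, say, 200Hz and the deepest well comes out near half a metre, which is why a speaker-mouth diffusor can’t reach down into the lows.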

So, even though the AS8 Pro S’s diffusor is tuned for the driver placement, its reach doesn’t extend into the midrange and low frequencies below 1400Hz. You’re still at the mercy of your room acoustics below that point. And that’s what I was hearing in this experiment. The AS8 Pro S had some discernible effect at the top of the range, but the main issues in the room were happening below the diffusor’s reach and could be heard no matter the source: physical hand clap or speaker.


To compensate for this lack of control in the middle and down low, the AS8 Pro S has built-in DSP, which gives the user control over the voicing of both the satellites and sub. There’s a forum of users on the A3D Emergence website who’ve been playing around with different voicings, and the later DSP settings have become more balanced. You can also ping the room with a low-budget measurement mic and the DSP, to give you a more personalised setting for your room’s acoustics.

It’s hard to pin down the voicing of the system, because not only is it constantly changing, it’s also user adjustable. With a Windows PC, you can get in and adjust the voicing with a whole suite of filters, then upload them to the DSP via USB. There isn’t an application for Mac OS at the moment, but there’s a plan in the works to push all the DSP adjustments to a web browser environment, making it platform agnostic.


It’s a difficult task trying to compare the AS8 Pro S system with a more conventional two- or three-way nearfield monitor. The sound stage it sets is just entirely different. While the hand clap test showed up the lack of diffusion below 1400Hz, the QRD creates a unique effect above that point. The effect is described as an acoustic hologram. It seems an apt description, but my image of what a hologram looks like is coloured by early sci-fi, where the hologram sits above a point, or is projected from a point into a room. What the AS8 Pro S system does is surround itself with an image of the original recording’s space. So, things that are recessed in the sound field feel further back than on a standard stereo speaker configuration, and vocals feel like they sit more in the picture, rather than in front. Overall, it made my typical nearfield setups feel like they were presenting the music in line with an imaginary cord strung between the two monitors.

It takes a bit of adjustment to mix on this system, especially if you’ve relied on the same monitoring system for a while. I made a few direct to stereo recordings in order to hear how the system replicated the space, and started to notice not only how deep the sound stage presented, but how far outside the speakers some sounds felt.

I replayed Jeff Lang’s record through the system (AT Issue 102), which was mixed by placing pre-recorded instrument performances in different speakers around an ABC live room, then re-recording it through a stereo ribbon and omni condenser. The result is a unique sense of space in itself, but through the AS8 Pro S system, it was like reconstructing the speakers in the room.


Now we’ve gone through the technical principles of the system, I guess I should actually make some conclusions about the product itself.

My main problem with the device is that without the Windows app open, there’s no feedback on the system’s settings. Because it has a built-in DAC, you can switch between multiple inputs on the supplied remote — analogue (stereo RCA or mini-jack), S/PDIF, TOSLink and USB. The problem I found was the lack of an LED or LCD displaying the current input. I found myself scrolling through the inputs depending on where I had the system set up and what program I had running. Not knowing you’re on the right input while trying to troubleshoot other signal chain issues can be problematic. There are also volume controls on both the remote and sub unit, but neither gives you an indication of the unit’s current level (bad news if you change inputs, go hunting for the new source and find the input level is a lot higher than the last one). Luckily, the amp has a protection circuit to stop it overloading.

The satellite shape is a bit hi-fi weird, but more a ‘form follows function’-type of arrangement. I can live with it. And there are constant refinements happening behind the scenes to increase the rigidity.

Would I recommend it? Well, while it’s not ‘impulse buy’ cheap, at under $1000 for a flexible, unique system that will add a completely different perspective to your monitoring choices, it’s definitely worth trying to hear one. At worst, you’ll have a really capable hi-fi to use around the house that will draw you back into the space of your favourite recordings. And though it’ll let you cut down on acoustic treatment in your home studio, I don’t think you’ll escape bass trapping and mid frequency diffusion just yet.
