

Opera Flies Outdoors

Opera Australia’s Madama Butterfly on Sydney Harbour requires plenty of audio innovation to stay afloat.

19 June 2014

Story: Robert Clark
Photos: James Morgan

Since the Handa Opera on Sydney Harbour (HOSH) series began four years ago, Opera Australia’s lavish outdoor productions have created complex live sound scenarios for Sound Designer Tony David Cray. Not to mention the wind, rain and occasional out-of-tune ferry horn. This year’s production of Madama Butterfly was certainly no exception, and once again it took some real ingenuity to pull it off.

VOICING CONCERNS

From the beginning, questions around how to amplify the voice in such an environment were the most important for Cray and the team of engineers from Norwest Productions who are engaged specifically for these productions. “An operatic voice is one of the most powerful instruments,” says Cray. “So we had to ask: what technology do we use? What mics do we use? What transmitters do we use in that chain to try and capture the nature of the voice and share it with the audience out here?” Headsets with in-ear monitoring were really the only option, and Cray admits that this is actually a pretty “radical” thing in opera; not just due to the vocal amplification aspect, but because wearing in-ears creates unique problems for operatic performers: “A couple of singers found that their closed aspect was very isolating. The way an opera singer will create the note, there’s a lot of resonances; it’s a very physical process generating that sound. Having the ears blocked creates an inward pressure and it’s very disorientating. It would reinforce certain frequencies, too.”

For many of the singers, having time to rehearse with the in-ear monitoring systems — a combination of Sennheiser G2s, Shure PSM200s and UR4Ds using Shure UR1M transmitters — was enough to overcome the discomfort, but for others who still experienced difficulties, a creative solution was available thanks to one of the more experienced singers. As Cray explains, “Jonathan Summers came up with this idea of using the generic in-ears, taking the foam cover off and just getting the transducer taped in his ear. So he had a little piece of spaghetti going into his ear, but he could still hear acoustic sound. It gave him enough present sound of the orchestra to time and to pitch to, but it was open.” This technique is now lovingly referred to as the ‘Jonathan Summers Method’, and some singers opt for this, while the majority persevere and wear the headsets as is.

OUTTA SPACE

Handling the audio feed from all these headsets, in what could only be described as a ‘guerrilla encampment’ under the stage, is Norwest’s John Watterson. His role as monitoring engineer also includes piping audio from the orchestra, which is enclosed in a sealed-off pit behind him. There is more room than in previous years, but once performances begin there is still no chance of any technicians squeezing in to make adjustments. Thinking about ways to minimise the likelihood of that scenario, Cray and the Norwest team installed Aviom A16-II Personal Mixers on the musicians’ stands, and a lot of groundwork was laid to coach the players to be responsible for their own microphones. The string and brass players all have DPA 4099s clamped to their instruments, while other members of the orchestra have a combination of Schoeps CMC 6-MKs, Neumann TLM 103s and Royer 122 ribbon microphones. Cray describes the scenario as “like a close mic studio gig”, which is not just because of limited space. His experience in previous years has shown that such a boxed-in environment typically creates a build-up of low-to-mid frequencies that can be tricky to eliminate down the signal path. Better to mic close and add space later (with the help of an Altiverb reverb Cray modelled on the Opera House Concert Hall years ago).

In contrast to the pit, the immense size and odd shape of the outdoor space created tricky coverage and delay issues for Cray and his team. The performance area consists of a wide, rectangular stage anchored in the harbour, with tiered seating installed on the bank amidst the Botanical Gardens. Due to the dimensions of the stage, the whole venue is far wider than it is deep, the reverse of most live venues. This made the task of finding a stereo centre quite a challenge. They ultimately took a predominantly front-fill approach, with seven speakers embedded into the front of the stage at a very shallow angle; the driver of each pointing “to about 60 per cent up the auditorium”. The flown array of Adamson Y18s, in concert with left and right stacks at stage level, provides extra coverage on the sides and rear of the auditorium.

EXPECT DELAYS

The expansive stage means the amount of delay is considerable. “If I’m standing down the front of the stage,” says Cray, “my voice is going to take 12 milliseconds to get to the first row, but if I’m standing towards the back it’s going to take 45.” The solution was to calibrate the throw of each speaker to an artificial “time zero” point about 4.5m back from the front of the stage, which is where most of the cast act. The delay from one side of the stage to the other isn’t exactly minimal, either. Cray estimates the acoustic delay between singers on either side of the stage is “60 to 100 milliseconds”, which he points out “at some tempos is a sixteenth.”
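The arithmetic behind those figures is simple time-of-flight: at roughly 343m/s, sound covers a metre in about 3 milliseconds. A quick sketch (the distances are back-calculated from Cray's numbers, not measurements from the production):

```python
C = 343.0  # speed of sound in m/s at roughly 20 degrees C

def delay_ms(distance_m: float) -> float:
    """One-way acoustic travel time in milliseconds."""
    return distance_m / C * 1000.0

# Cray's 12ms and 45ms figures imply distances of roughly:
print(round(delay_ms(4.1)))    # 12: front of stage to the first row
print(round(delay_ms(15.4)))   # 45: rear of stage to the first row

# And a 100ms singer-to-singer delay really is a sixteenth note at 150bpm:
def sixteenth_ms(bpm: float) -> float:
    return 60_000.0 / bpm / 4.0

print(sixteenth_ms(150.0))     # 100.0
```

Time-aligning every speaker to the single "time zero" point 4.5m upstage means a singer standing there lands in phase with the PA everywhere in the house; singers elsewhere are only ever off by their distance from that point.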

Foldback for the singers is provided via a combination of EAW JF-80 and Adamson M15 low-profile wedges installed above the speakers at the front of the stage. This helps singers who aren’t relying on in-ears for timing and, as Cray puts it, adds a level of “energy” to the performance space. All in all it is a complex audio environment for singers to navigate, as mezzo-soprano Anna Yun — who plays Suzuki in the opera — explains: “We can hear the front-of-house speakers and there is a fraction of delay there, which is unavoidable. At times, depending on the position on the stage, we can also pick up the sound coming out of the orchestra pit in real time (usually brass instruments), so there can be three different timings for the same phrase [including that of the in-ears].” Yun insists that these issues were not insurmountable, however, and that allowing for the delay became “second nature” by the end of the rehearsal period.

Using in-ear monitors was a necessity, but one the opera singers really had to get used to.

HOSH Series Crew

Tony David Cray:
Sound Designer & FOH Mix Engineer — Opera Australia/Sydney Opera House

Adrian Riddell:
Project Account Manager/Head of Sound — Norwest Productions/Onset Audio

Matt Whitehead:
Systems Engineer — Norwest Productions

John Watterson:
Monitor Engineer — Norwest Productions

Steve Caldwell:
RF Engineer — Norwest Productions

Dane Cook:
Stage Technician — Norwest Productions

Alison Bremner:
Radio Mic Fitter — Message Stick Productions

Roy Jones:
Radio Mic Fitter — Norwest Productions

Brittany Wright:
Secondment — Queensland University of Technology

The massive custom stage on Sydney Harbour with Norwest's Adamson rig flown, and monitoring neatly grafted into the hillside.
Tony David Cray rigged up a handy Ableton Live setup to link with the Digico SD7 and give him a little more flexibility in plug-in choice. With Lemur on an iPad and a small Akai LPD8 controller, he was able to grab four bands of FabFilter's Pro-Q.
The makeshift orchestra pit was cosy, but Cray used an Altiverb model he'd taken of the Opera House to bring back the space.

THERE’S AN APP FOR THAT

One thing pit musicians can never simply adjust to is loud percussion reverberating in a closed, tight space. This problem is especially acute in an opera like Butterfly, where a famously loud gong is an essential part of the score. “It would just cane the rest of the pit,” says Cray. “So I suggested to them that we record it and play it back, and they were open to that idea.” As time was running out to figure out how best to achieve this, he sat down one night and “made a little app on the iPhone as a joke.” This turned out to be just the right tool for the job, though, and after creating an interface with a gong that is simply tapped on cue, it was mounted on a stand and routed into the signal path via the Aviom system for any musicians wishing to hear it. It’s otherwise totally silent in the pit and, being pre-recorded, perfectly balanced in the front-of-house mix every time. “I think this is a good example of how we can just do things slightly differently to achieve a good outcome,” says Cray.
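Stripped of its audio I/O and artwork, the tap-to-play logic of such an app is tiny. A hypothetical sketch (class and method names are mine, not from Cray's app): the pad stays silent until tapped, then streams the pre-recorded gong into whichever busses want it.

```python
class SamplePad:
    """Tap-to-play pad: silent until tapped, then streams a pre-recorded sample."""

    def __init__(self, sample):
        self.sample = sample   # pre-recorded gong hit, a list of float samples
        self.playhead = None   # None means not playing

    def tap(self):
        # The percussionist taps the on-screen gong on cue; restart playback.
        self.playhead = 0

    def next_block(self, n):
        """Pull n samples for the output bus (FOH mix and Aviom feeds)."""
        if self.playhead is None:
            return [0.0] * n   # silence: nothing sounds in the pit itself
        block = self.sample[self.playhead:self.playhead + n]
        block += [0.0] * (n - len(block))   # pad out the final, short block
        self.playhead += n
        if self.playhead >= len(self.sample):
            self.playhead = None
        return block
```

Because the source is a fixed recording rather than a struck instrument, level into the FOH mix and the Aviom feeds is identical every night, which is exactly the property Cray was after.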

AND FOR THAT…

It’s also a good example of the kind of creative thinking behind his decision to “outsource” the show’s DSP to some unconventional platforms. “Primarily for the audiophile aspect,” says Cray. “The EQ and the compression algorithms on the [show’s Digico SD7] console are good, but they really get exposed when dealing with orchestral music and opera. The operatic voice is a fearsome instrument; it’s quite a challenge to deal with.” He decided to start “farming out” the DSP using his own plug-ins of choice, particularly FabFilter’s Pro-Q, but then came across the problem of how to tie them all in to an interface he could easily use on the fly during performances. Eventually the highly customisable Lemur platform was chosen, which allowed him to create a “quick drag-and-drop graphical environment” on an iPad and map it into Ableton Live. Incidentally, this is a workstation he’d never considered for live opera before, but after seeing colleague Bob Scott using it effectively and appreciating its stability and native MIDI support, he took the leap and hasn’t looked back.

The operatic voice is a fearsome instrument; it’s quite a challenge to deal with

A real violin and the makeshift gong app, devised so the orchestra wouldn't get blown away every time Madama Butterfly took flight.

WIND INSTRUMENT

The key parameters on the Lemur interface were determined by the EQ and filters Cray uses most on his recording studio console at the Opera House, a Euphonix System 5 that gets plenty of work handling orchestras and vocalists. This constrained the number of filters in the Pro-Q plug-in to four, crucially streamlining his process. In further service of an intuitive and efficient DSP environment, Cray added an Akai LPD8 hardware controller within easy reach in the control room, with dedicated EQ just for the orchestra. Cray recalls a night when the wind was particularly bad, and having such easily accessible and carefully chosen controls enabled him to respond quickly to a potentially ugly scenario. “I was dreading the notion of the geishas coming on,” he says. “Because I knew it would result in a wild flapping wind sound when I suddenly open 24 mics. But I was able to, in a moment, look at my little hardware controller and quickly assign a filter into the chorus bus. So as they came on stage I could instantly initiate a steep high-pass filter and roll it up to a point where I almost lost them but got rid of all of the wind. And that was during the show — seeing a massive problem and actually implementing a change that just required one little turn of a knob. It’s fantastic.”
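The actual filtering happened in FabFilter's Pro-Q inside Ableton Live, but the move itself, sweeping a high-pass up until the wind rumble disappears while the voices survive, is easy to illustrate. A minimal sketch using a standard RBJ-cookbook biquad (only 12dB/octave; Pro-Q offers far steeper slopes):

```python
import math

def highpass_biquad(cutoff_hz, fs=48_000.0, q=0.707):
    """High-pass biquad coefficients per the RBJ Audio EQ Cookbook
    (q=0.707 gives a Butterworth response)."""
    w0 = 2.0 * math.pi * cutoff_hz / fs
    alpha = math.sin(w0) / (2.0 * q)
    cos_w0 = math.cos(w0)
    b = [(1.0 + cos_w0) / 2.0, -(1.0 + cos_w0), (1.0 + cos_w0) / 2.0]
    a = [1.0 + alpha, -2.0 * cos_w0, 1.0 - alpha]
    return b, a

def run_filter(samples, b, a):
    """Direct form I: y[n] = (b0*x[n] + b1*x[n-1] + b2*x[n-2]
                              - a1*y[n-1] - a2*y[n-2]) / a0"""
    out, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in samples:
        yn = (b[0]*xn + b[1]*x1 + b[2]*x2 - a[1]*y1 - a[2]*y2) / a[0]
        x2, x1, y2, y1 = x1, xn, y1, yn
        out.append(yn)
    return out
```

"Rolling the filter up" in real time is just recomputing the coefficients each block as the cutoff rises, which is essentially what mapping an LPD8 knob to the filter frequency amounts to: wind energy sits well below the operatic voice, so a 150–200Hz cutoff strips the flapping while leaving the chorus intact.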

SAVING FOR A RAINY DAY

Of course, filtering on the fly is one thing, but troubleshooting during a performance is another altogether. With wet weather an unavoidable reality, redundancy was essential. The Digico SD7 console in the site control room (situated in a tower halfway up the auditorium) was designed by Norwest head of sound, Adrian Riddell, to run two simultaneous 64-channel drive chains divided into ‘Engine A’ and ‘Engine B’, which can be manually switched via a MADI bridge system in the event of a failure. And if the digital network goes down, they also have the option of switching to the console’s analogue outputs, which are fed into Dolby Lake DLP Processors that handle both digital and analogue inputs. There is also comprehensive DSP redundancy, with two individual instances of Ableton Live (each with a full suite of plug-ins) running simultaneously off networked Mac Minis with RME cards.

Of course, such a long and complex processing chain comes at the expense of latency. Cray says the “round trip” takes 12 milliseconds, but “on this crazy site,” he adds, “the vocal stems themselves need to be delayed at least 15 milliseconds, so I was in a window that allowed me to do that. Which is just as well, because it’s pretty scary when you take it out of line and listen to what’s going on.”
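Put plainly, the outboard processing "hides" inside delay the stems need anyway. A trivial sketch of the budget, using the figures Cray quotes:

```python
ROUND_TRIP_MS = 12.0  # console out -> networked Mac Mini (Ableton) -> console in
STEM_DELAY_MS = 15.0  # minimum delay the vocal stems need on this site anyway

headroom_ms = STEM_DELAY_MS - ROUND_TRIP_MS
print(headroom_ms)  # 3.0: the DSP round trip fits inside the required delay
```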

THE SHOW WILL GO ON

The technical experience of Cray and his Norwest team certainly comes to the fore in these large-scale scenarios, but refreshingly, his emphasis is always on the ‘big picture’ elements of his job. The extensive research into third-party apps and plug-ins, the programming, the sophisticated redundancy, the intricate DSP; all of this serves ultimately to simplify his role to the point where detail fades into the background. “The main focus,” he says, “is to try and bring opera to a broader audience, and at the same time, to always remain as true as possible to the art form.” With the HOSH series recently confirmed for another three years, it’s good to know Cray and Co. will have more opportunities to refine and innovate in this genre. 

