SoundLab residency: first report

I am currently an artist in residence at ARUP in Sydney, sponsored by Create NSW. This is my first report, which I hope will be useful for other surround sound engineers.

The SoundLab is an acoustically isolated space equipped with 18 speakers: 16 arranged in a sphere, plus two subwoofers which I'm not using. It is the Sydney installation of a set of identical, connected SoundLabs at each of the ARUP global sites. Their purpose is to pre-sonify architectural spaces, checking their acoustic properties for quality and health issues. (ARUP was responsible for the practical design of the Sydney Opera House, where sound is obviously an issue!)

Not being exactly sure of the setup, I prepared a range of third-order ambisonic compositions which I thought I could adapt to whatever format was available. I used REAPER as my DAW, guessed at the most likely arrangement of speakers, and used IEM's AllRADecoder to set up the array. The guess turned out to be correct, although we didn't know that for a few hours!
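For the curious, the arithmetic behind those ambisonic orders is simple: a full-sphere mix of order N needs (N + 1)² spherical-harmonic channels, which is why a third-order mix is 16 channels. A quick sketch:

```python
# Channels needed for a full-sphere ambisonic signal of a given order:
# order N carries (N + 1)^2 spherical-harmonic components.
def ambisonic_channels(order: int) -> int:
    return (order + 1) ** 2

for order in (1, 2, 3):
    print(f"order {order}: {ambisonic_channels(order)} channels")
# order 1: 4 channels
# order 2: 9 channels
# order 3: 16 channels
```

This is also why a 16-channel REAPER track can hold first-, second-, or third-order material interchangeably: the lower orders simply occupy the first 4 or 9 channels.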

The SoundLab uses a system called DANTE, which carries large numbers of sound channels over a local area network. I installed a virtual DANTE sound card on my laptop, which fed into the system over a single Ethernet cable (very tidy), but I had little idea of the next steps. Staff helped set up a 48-channel connection, of which I used 20 channels (I have no idea why those exact numbers). We then got absolutely nowhere for quite some time, until I figured out how REAPER thinks about this.

Each track in REAPER must be set to 16 channels, which fits first-, second-, or third-order ambisonics (4, 9, or 16 channels respectively). The Master track then carries the IEM decoder on 20 channels, sent directly to the hardware (that is, to the range DANTE 1–20). Even though it's a virtual sound card, it behaves like a box with 20 outlets connected to the individual speakers. You are not mixing the 16 ins to the 20 outs in the master track; you are just passing them through the decoder.
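To picture what that pass-through is doing: an AllRAD-style decoder boils down to a fixed gain matrix mapping the 16 ambisonic channels onto the 20 speaker feeds, applied sample by sample. A toy numpy sketch (random numbers standing in for the real AllRAD gains, which depend on the actual speaker positions):

```python
import numpy as np

# Toy illustration only, not ARUP's actual decoder: a third-order
# ambisonic decoder is, at its core, a fixed (speakers x channels)
# gain matrix. Shapes match the setup described above:
# 16 ambisonic channels in, 20 speaker/DANTE feeds out.
rng = np.random.default_rng(0)
decoder_matrix = rng.standard_normal((20, 16))    # stand-in for AllRAD gains
ambisonic_block = rng.standard_normal((16, 512))  # 512 samples of 3rd-order audio

speaker_feeds = decoder_matrix @ ambisonic_block
print(speaker_feeds.shape)  # (20, 512): one row per speaker channel
```

So nothing about the routing is really "mixing": the same static matrix runs for the whole session, and the master track just delivers its 20 rows to DANTE 1–20.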

Once we had sound, I discovered a few things. My home headphone mixes generally translated well, but when I tried to use the same headphone techniques to make new tracks, it didn't go well. Artificial reverb is generally a bad idea on the speakers, tending to 'bland out' the directions by overlaying them on the room's own acoustics. Raw panning to a specific speaker direction worked best. A prime example was DearVR, which sounded bad whenever I tried it, even when I carefully matched the reverb settings on individual tracks (it also overloaded the CPU pretty quickly on this laptop, because reverb really should be a send, not an insert). Much better to use the stereo panner in the IEM suite, with perhaps a little of their reverb effect as a send. I also need to learn more about using third-party surround impulse responses, which are all over the net.
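Part of why raw panning behaves so well: ambisonic encoding is just a set of static per-channel gains, with no room model baked in, so it layers cleanly on whatever acoustics the playback room already has. A first-order, horizontal-only sketch (assuming the AmbiX ACN/SN3D convention; the IEM plugins handle all of this for you):

```python
import math

# Toy first-order ambisonic panner, horizontal plane only,
# AmbiX channel order (W, Y, Z, X) with SN3D weighting assumed.
# Panning a mono sample to azimuth theta is just four multiplies:
def encode_first_order(sample: float, azimuth_deg: float):
    theta = math.radians(azimuth_deg)
    w = sample                     # ACN 0: omnidirectional
    y = sample * math.sin(theta)   # ACN 1: left/right
    z = 0.0                        # ACN 2: no elevation here
    x = sample * math.cos(theta)   # ACN 3: front/back
    return [w, y, z, x]

# A source at 90 degrees azimuth (hard left in this convention):
print([round(v, 3) for v in encode_first_order(1.0, 90.0)])
# [1.0, 1.0, 0.0, 0.0]
```

Contrast that with a binaural room plugin like DearVR, which convolves in its own simulated space; on top of the SoundLab's real acoustics, you get two rooms at once.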

The system is designed to approximate second-order ambisonics, but I definitely got some extra quality when running third-order mixes; first-order is a bit washy. Trying the COMPASS upscaler sounded good to me, but maybe that's a bit of a sugar hit. I don't think going to fourth order or above is really going to make a difference in most projects.

Some mild panning animation is good, but moving your head is even better. I feel encouraged to go outside the sweet spot; that's not possible on headphones, but it is something that will always happen with speakers, and it needs to be designed into the mix as best it can be.

I also need to start figuring out how to make work in the space and then bring it back into consumer formats such as Dolby Atmos. But that's the second week!

1 Comment

  1. Just pointing out that there could be small latency issues with using a network to send sound to a number of speakers, since it involves buffering stages. Ideally there would be a common time base, with something sending samples according to it, if that is not already the case. Also, in terms of speaker positioning, it is possible the placement is not accurate enough; you have to consider where the moving parts of the drivers actually are in space. Oh well.
