(Cleaned and updated Sept. ’21)
For some years I’ve been collecting and writing about electronic music hardware on this site – some good, some not so good, and some utter garbage. I’ve had fun popping a few bubbles while handing out praise where it was really due. But I’ve arrived at a larger goal than just sniping at low-hanging piffle. I finally feel empowered – I’ve spent considerable money and time crawling around the floor and behind racks, sourcing SCSI cables and voltage adapters, and wondering why the fuck this MIDI signal went the wrong way. I have seen for myself. I am one of you.
So I now ask: is music hardware really better than software?
We need to clarify what we mean by ‘hardware’ and ‘software’, because quite a lot of ‘hardware’ is a hybrid. A Moog Model D is all hardware. A Blofeld is software in a box. But the Arturia Origin is software that runs only on a very particular TigerSHARC microchip – when the chips are gone, the machine is extinct. The same is true of the Access Virus TI and its Freescale DSP 56367. These count as hardware because without that specific physical device there is no sound. Recent software emulations of these chips are leading me to question even this kind of hardware.
Better? We need to set some rules. Firstly, we are talking about musical instruments being used for music. Implicit in music is a listener, and that something humane is being communicated – awe, hope, fear, something worth listening to. What does the sound of the device offer others? Slavishly copying a sound plucked from some antique recording is not music. Matching the shape of a waveform on an oscilloscope is not music. These are feats for athletic carnivals.
You may be pleased by the feel of the knobs, or the type of wood at the cheeks of the thing, but what does this do for a listener? Maybe the wood inspires you to delicate adjustments, but it’s not a real violin. The way your machine looks, how pretty the lights, the styling of it – it may as well be a Dyson vacuum cleaner or an Apple phone. I’m tempted to judge that only the resulting sound matters, but I have to concede that the interface might make reaching that sound more likely.
Exemplars of ideas
I am buying and reviewing more software. Exemplars are often software, because the majority of hardware is cowardly, conservative and dead boring. I’ve just sold a ‘legendary’ (and very heavy) old Roland keyboard at a good profit. The reason: if you divided the sound by the amount the damn thing weighed, you’d have no change left over for coffee. Any half-decent virtual analogue could make that noise – especially Roland’s own. Do a blind test. Can the audience really hear the difference? You are a musician, aren’t you?
I’m selling the majority of my hardware, such is the faith I have in my answer. The time spent racking, un-racking, cabling around the back of things, assembling A-frames and so on is like the days when people would run clothing through a mangle before hanging it on a clothesline. You can definitely run into problems with virtual studios. I have. But generally, when I visit other people’s hardware studios the damn things are NEVER FINISHED – a great excuse for why no actual music is being produced. In my case I’m now trying to reduce my sources down to exemplars of ideas – carefully limiting the orchestra (like cholesterol) so that you can actually score music. Are you a musician, or are you building a model railroad?
Some hardware is interesting. For example, I’ll keep my UltraProteus because of the weird thought process behind its operation, the SY77 because of its particular timbre, and the Super Jupiter because it has a deeply exotic stomach-ache. But I’ve sold the Yamaha FS1r because, as crazy as it is, the sounds it makes aren’t that great. And that’s what matters.
Doomed to be a paleofuture?
Why did we even start this thing? Because we wanted more than the instrumentation that we had. Synthesis was once a desire to hear new sounds, make new music, go places that hadn’t been heard before. That idea started to die with Tomita and is now truly dead when you’re trying to emulate some noise from 40 years ago. Synthesis, as a mainstream activity, has become terribly OLD FASHIONED. We’re old cowards clinging to old safe things. I say fuck recreating the Blade Runner soundtrack now that we’re past the year the story was supposed to take place. Software can now pull a sound apart, make a wavetable or an additive snapshot, change every aspect of it, build entirely new sounds from audio atoms – and people are still talking about ladder filters?
If, for example, you spend enough time with additive synthesis in Alchemy, you will find a world of new sounds. Or take a recording and hack away at it with spectral editing – that’s what I did with my records Donut and Aversion. It’s synthesis, but it’s not hiding in the last century, terrified of stepping outside the norms.
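To make the ‘additive snapshot’ idea concrete: analyse one slice of a sound, keep its strongest partials, then rebuild something new from those atoms. Here is a minimal sketch in Python (assuming numpy; the function names and the stretch parameter are mine for illustration – real tools like Alchemy track partials over time and handle phase far more carefully than this).

```python
# Minimal sketch of an "additive snapshot": analyse one window of a sound,
# keep its strongest partials, then rebuild something new from those atoms.
# Assumes numpy; a crude illustration, not how any particular product works.
import numpy as np

SR = 44100  # sample rate in Hz

def additive_snapshot(samples, n_partials=32):
    """Return (freqs, amps) of the n_partials loudest bins in one FFT frame."""
    window = np.hanning(len(samples))
    spectrum = np.fft.rfft(samples * window)
    freqs = np.fft.rfftfreq(len(samples), 1.0 / SR)
    mags = np.abs(spectrum)
    top = np.argsort(mags)[-n_partials:]            # indices of the loudest bins
    return freqs[top], mags[top] / mags[top].max()  # normalised amplitudes

def resynthesize(freqs, amps, seconds=2.0, stretch=1.0):
    """Rebuild a tone from the snapshot, optionally stretching the spectrum."""
    t = np.arange(int(SR * seconds)) / SR
    out = np.zeros_like(t)
    for f, a in zip(freqs, amps):
        out += a * np.sin(2 * np.pi * f * stretch * t)
    return out / np.abs(out).max()

# Example: snapshot a plain sawtooth, then warp its spectrum into a new sound.
t = np.arange(4096) / SR
saw = 2 * (220 * t % 1.0) - 1.0
freqs, amps = additive_snapshot(saw)
new_sound = resynthesize(freqs, amps, stretch=1.37)  # inharmonic variant
```

Stretch or reshuffle the partials and what comes out is no longer the old sound – which is the whole point.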
I can now say what I mean by ‘better’ – I mean true to the ambition that synthesis is all about.