Some hardware has appeared on eBay that unleashed a blurry flurry of thought I’d put to rest years ago. An unfortunate person had ordered a feast of Kyma boxes and passed away before they arrived – an even sadder person is now trying to sell them on eBay. I’d thought about Kyma a while back and decided it was well beyond my understanding and all a bit expensive when other PC software did nearly the same thing. But one process in which Kyma excels is sound morphing, which brings up a whole gaggle of ideas.
Morphing is best known as a visual transition between two moving images that gives the illusion of a flowing, magical change of one object to another. It was once a high end process for the rich and powerful – it has since become cheap and easy for the common folk. Back in my day you would have to place markers on significant features of the starting image and manually move them to their final resting place on the target image. This took much time and skill. You can now find “AI” tools that use no markers. Like most “AI” the results can be ‘pretty good’ – but never quite expert.
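The mechanics are easy to sketch even if the warping maths isn't. Here's a toy Python example (made-up coordinates, and the actual image warp and blend are left out) of the core idea: paired markers sliding from their source positions to their target positions over the transition.

```python
# Toy sketch of marker-based morphing: paired control points are interpolated
# from source to target positions. (Made-up coordinates; the real image
# warping and blending step is omitted.)
import numpy as np

src_markers = np.array([[120, 80], [200, 150], [160, 220]])  # (x, y) features on the source image
tgt_markers = np.array([[110, 95], [215, 140], [170, 200]])  # the same features on the target image

for t in np.linspace(0.0, 1.0, 5):                 # t = 0 is the start, t = 1 the end of the morph
    blended = (1 - t) * src_markers + t * tgt_markers
    print(f"t={t:.2f}", blended.round(1).tolist())
```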
Many sound tools sold as ‘morphing’ are actually vocoders. They manage the task by modulating a carrier – the typical robot voice is speech (a modulator) filtering a synthesiser note (a carrier). There’s an endless yawn of these available, and they’re not what we want. Fewer tools are phase vocoders, which restructure sound as a series of frequencies before performing transforms – most often time stretching and pitch shifting (as in Steinberg’s Padshop 2 or Apple’s Alchemy). Also look at https://www.soundhack.com/pvoc for a variety of phase vocoding plugins.
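If you want to see what a phase vocoder actually does, here’s a minimal Python sketch using librosa (just my illustration – ‘input.wav’ and the stretch rate are placeholders): analyse the sound into an STFT, stretch the frames while keeping the phases coherent, then resynthesise.

```python
# Minimal phase-vocoder time stretch with librosa (an illustration, not any
# plug-in's actual code). "input.wav" and the rate are placeholders.
import librosa
import soundfile as sf

y, sr = librosa.load("input.wav", sr=None)      # load at the file's own sample rate
D = librosa.stft(y)                             # analyse: the "series of frequencies"
D_slow = librosa.phase_vocoder(D, rate=0.5)     # rate < 1 stretches time; pitch stays put
y_slow = librosa.istft(D_slow)                  # resynthesise the stretched audio
sf.write("half_speed.wav", y_slow, sr)
```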
The popular zynaptiq MORPH 3 uses a range of algorithms based on phase vocoding, alongside ‘style transfers’ that aren’t made explicit. None of these sound anything like Kyma – e.g. in their video you can hear elements changing tempo to match the target sound. My bit of reading suggests a possible cause – when setting up a Kyma morph you may need to manually place markers, just as you once did for visual morphs – it’s not just running a couple of sounds into an ‘AI’ effect and sliding a single fader. Not magic; in fact a bit old-fashioned. You could possibly set control points in a DAW that provide speed changes in the plug-in etc., but it’s not clear at this point.*
Hardware is how Symbolic Sound make their living, so don’t expect a VST any time soon. There are possible alternatives to Kyma – cheap (even free) but not easy.
You will be completely flummoxed by the Composer’s Desktop Project (CDP) for the first few sessions. It’s very powerful and very ‘academic’, à la the early 2000s. But you are not paying for elegance – it’s as ugly as a bulldog’s bum and looks like a spreadsheet (there’s a good reason: the cells convey a flow of transformations). I can’t verify, at my level of understanding, that this will equal the processes in Kyma. My own experiments show it can (after much fussing) produce excellent morph cross-fades – but no markers. (The CDP software is also included in Renoise – but that’s even more difficult for most people not used to Amiga trackers!)
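For a flavour of what a morph cross-fade is doing under the hood, here’s my own rough Python sketch (not CDP’s code – file names are placeholders): analyse both sounds into STFTs, interpolate the magnitudes frame by frame, and resynthesise.

```python
# Rough spectral morph cross-fade (my sketch, not CDP's algorithm).
# File names are placeholders; the sounds are trimmed to the same length.
import numpy as np
import librosa
import soundfile as sf

y1, sr = librosa.load("source.wav", sr=None)
y2, _  = librosa.load("target.wav", sr=sr)
n = min(len(y1), len(y2))
D1, D2 = librosa.stft(y1[:n]), librosa.stft(y2[:n])

frames = min(D1.shape[1], D2.shape[1])
mix = np.linspace(0.0, 1.0, frames)             # 0 = all source, 1 = all target, per frame
mag = (1 - mix) * np.abs(D1[:, :frames]) + mix * np.abs(D2[:, :frames])
phase = np.angle(D1[:, :frames])                # crude: keep the source's phases throughout

y_morph = librosa.istft(mag * np.exp(1j * phase))
sf.write("morph.wav", y_morph, sr)
```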
This guy is helpful (if you like video tutorials).
Try as I might, I’m not seeing a direct equivalent of Fantamorph for sounds. Which is a bit sad, as there’s a lot of fun music that could be made that way. If you know of one, please describe it in a comment.
No, I’m not buying the Kyma.
* Actually, on further reading, the ‘key points’ are transients, which are detected and aligned by the algorithm. It seems that Kyma has adopted the functionality of Loris – about which you can read here. It’s too hard for me to explain, sorry.
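You can at least poke at the ‘detect the transients’ half of that idea in Python. A rough sketch with librosa (not Kyma’s or Loris’s actual method; file names are placeholders): find the onsets in each sound, which you could then pair up to drive a time-warp so they land together.

```python
# Rough sketch: find transients ("markers") in two sounds with librosa.
# Not Kyma's or Loris's algorithm; file names are placeholders.
import librosa

def transient_times(path):
    y, sr = librosa.load(path, sr=None)
    return librosa.onset.onset_detect(y=y, sr=sr, units="time")  # onset times in seconds

src = transient_times("source.wav")
tgt = transient_times("target.wav")

# Naively pair the first onsets of each file; a real morph would need smarter
# matching, then a time-warp so that each pair lines up.
for s, t in zip(src, tgt):
    print(f"source transient {s:.3f}s  ->  target transient {t:.3f}s")
```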
There is some discussion that a similar process may be in CDP.
There’s always the Lexicon Vortex, which morphs from one sound-mangling algorithm to another, and can be held at any point between the two for truly unearthly effects.