Hartmann Neuron (and balls) 🧻🧻

This page was inspired by a YouTube comment: “You wonnah have a laugh? The much underrated Technics WSA1 from about 1995 does about the same without all the stupid rethorics and so called revolutionary technology claims. And it can still be had for a few hundred bucks. If you throw the effects off both the WSA actually sounds better.”

As a former WSA1 owner, I appreciate the chutzpah. The funny thing is … the Neuron does make that same penny whistle sound. HE MAY BE RIGHT.

Here I celebrate all things Hartmann and Neuron.

Hartmann – little red knobs laid out back to front on grey surfaces with teeny tiny lettering.
The Ion, Waldorf Largo et al.

Neuron – well, there is still Zynaptiq and they still lead the world in techno-babble.

From the manual: “The MORPH technology is the latest result of research that goes back to the 1980’s, when our lead scientist Stephan Bernsee created the world’s first audio morphing algorithm on the then top-of-the-line SGI platform. Along the way, this research resulted in the highly regarded morphing algorithms in Prosoniq’s sonicWORX series of sample editors, the morphing synthesis engine in the Hartmann Neuron synth, and MORPH’s predecessor, Prosoniq Morph.” Yes, that sounds familiar – as do the results, which are much the same as any other vocoder.

I’ve owned Morph for a few years now. It’s possible the problem exists between computer and chair, but there is no magic here – there are as many horrible noises to be had out of this as out of the Neuron. Morph is not an instrument; you process your sounds on the fly. Instead of requiring a long computation to create the models, it does the work in real time, and you can capture that flow. It’s where the Neuron went, like a cicada, leaving the empty shell for people to collect from Reverb.

You need to make three audio tracks in your DAW. Place your A sound on the first and your B sound on the second, and route both to the third track, on which you place the Morph plug-in. With some fiddling you get A vocoding B on one side, B vocoding A on the other, and crossfades between these in the middle. I’m sure Bernsee would be cross with me for using the V word, but you will find that, for example, a percussive sound (like a voice) and a constant sound (like a tone) give the most recognisable results.
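
If you want to hear roughly what all this A-vocoding-B talk means without spending a penny, here is a minimal Python sketch of the textbook cross-synthesis trick: magnitudes from A’s STFT imposed on the phases of B. To be clear, this is my assumption of the classic technique, emphatically not Zynaptiq’s actual algorithm, and the file names are invented.

```python
# Textbook cross-synthesis (the V word): spectral magnitudes of A,
# fine structure of B, via STFT. NOT Zynaptiq's algorithm - just the
# trick the results remind me of. Assumes two mono WAVs at the same
# sample rate; file names are hypothetical.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft, istft

rate, a = wavfile.read("voice_A.wav")   # hypothetical A sound
_, b = wavfile.read("tone_B.wav")       # hypothetical B sound
n = min(len(a), len(b))
a, b = a[:n].astype(float), b[:n].astype(float)

_, _, A = stft(a, fs=rate, nperseg=1024)
_, _, B = stft(b, fs=rate, nperseg=1024)

# Magnitude of A on the phases of B: "A vocoding B".
cross = np.abs(A) * np.exp(1j * np.angle(B))

mix = 0.5                               # crude stand-in for Morph's crossfade
out = (1 - mix) * cross + mix * B

_, y = istft(out, fs=rate, nperseg=1024)
y = (y / np.abs(y).max() * 32767).astype(np.int16)
wavfile.write("morphish.wav", rate, y)
```

Run it on a voice and a drone and you will get the general idea – and probably a new respect for whatever Morph does to keep this from sounding like a drainpipe.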

Neuron Baby Balls

Yes, there is a software Neuron VS. Like everyone else years ago I downloaded the free “this only works on a Mac built between April and May 2001 if the wind is blowing in the right direction” version, tried to get it working, and realised that I was closer to death than before I started. But just yesterday a very kind man, John T, showed me how to set this up on my Music PC. After I sacrificed a few kittens – it works!

I don’t have the Nuke controller so had to set up an orchestra of knobs to do the business. You see, each of those ball things is a Resynator. There are 5 virtual balls per ball. Three v-balls are the parameters of the object making the noise, and you need an X and a Y knob for each. Then there are two v-balls for the environment in which your object resonates – two more X’s and Y’s. Two balls with a column between them make 10 v-balls, or 20 knobs.

Right. So to make a noise you start with a sample, which has been turned into a ‘scape’. That means you can change aspects of the scape, such as how big it is and how stringy or noisy it is, via the three balls, then change aspects of the ‘sphere’, and you end up with a scapesphere… look, I’m trying, OK. Just think 20 knobs for scraping the two balls.
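
If the arithmetic above makes your head spin, here is the same bookkeeping as a Python sketch. The v-ball names and CC numbers are my own shorthand for illustration, nothing official from Hartmann:

```python
# My knob-orchestra bookkeeping: 2 Resynators x 5 v-balls x 2 axes
# = 20 knobs. Names and CC numbers are my own shorthand, not
# Hartmann's official parameter names.
FIRST_CC = 20                      # first free CC on my controller, arbitrary
RESYNATORS = ("R1", "R2")
VBALLS = ("scape1", "scape2", "scape3",   # the object making the noise
          "sphere1", "sphere2")           # the environment it resonates in
AXES = ("X", "Y")

knob_map, cc = {}, FIRST_CC
for r in RESYNATORS:
    for ball in VBALLS:
        for axis in AXES:
            knob_map[f"{r}.{ball}.{axis}"] = cc
            cc += 1

assert len(knob_map) == 20         # two balls, a column between them, 20 knobs
print(knob_map["R1.scape1.X"])     # -> 20
```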

It makes some pretty decent noises, but by arcane and mysterious means. I think this idea took a wrong turn at the traffic lights. In comparison, the UltraProteus offers an insane choice of filters which sound just as ballsy, and even the Fantom XR can sling three MFX units in a row to get you much the same distance without breaking a sweat. I love the Neuron as a folly. I would buy the keyboard if it was fleacore cheap. But I really think that in an age when morphing can be done in real time you should reward Bernsee by buying Morph instead.

7 comments

  1. The keyboard isn’t really necessary. It contains a PC motherboard running Linux with output through a custom Prosoniq 5-way sound card contraption. I have Neuron VS (full package for about $200) on a Dell Latitude D830 that I got from Seattle Goodwill for $24, completely working, running Windows XP. More powerful than the mobo in the Neuron. I attach an Alesis Photon X25 controller and use its internal audio processor, which is better than the Dell’s. Thus, a Neuron more powerful than a Neuron. Sort of the Tyrell Nexus 6 of Neurons. Bad noises, good noises. The thing that really interests me about Neuron is its internal neural network, which purportedly learns as it works and, as time goes by, tends to lean toward sounds that the user likes. I have not used it long enough to confirm the claim, but I like the idea. It leads me to work towards building my own self-growing heuristic (self-learning, as in HAL “I’m sorry, this jam session serves no useful purpose” 9000) neural network using Python and get all the synths in on the fun. Will it work? Who knows. Everyone needs a hobby, especially with Captain Trips out there.

    1. I like the idea too but I have never heard that one before. How does it lean towards sounds, given that the models are built as static data? It doesn’t rewrite the models on the fly … does it?

      1. As far as I know, the neural network in the software end changes its configuration depending on how it is used, then saves its image. Each time it loads, it’s basically a slightly different version (personality?) that thinks, in its limited way, more like its user each time. It isn’t HAL 9000 by any stretch of the imagination, but it isn’t a dumb piece of software that runs exactly the same way every time. The sound spheres and scapes aren’t part of that process; they are just the end result. The software evolves as it is being used. There would not be any point to using a neural network in software if it wasn’t for procedural and environmental learning.

        1. The neural network MODEL of your sound is made with ModelMaker. The MODEL is thereafter unchanged. No more learning. However, while you play the MODEL in real time and adjust the scapes and spheres, you are changing the weights of the neurons inside the neural net model. In the beginning, Axel Hartmann wanted a synth where he could grab musically useful parts of a sound and change them. Metallicity, hollowness, size, and so forth. Stephan Bernsee made a system that would allow this.

  2. I just made a new NeuronDB folder. All sounds are “initsound”. All models are “—-”. The games will now begin.
