Saturday, 12 November 2016

INSIDIOUS 6581 Part 4: What makes the SID sound like the SID is not the SID

Part 3, "When a filter doesn't behave" is available here.

Not Unique

Aside from the filter, the SID chip does not do anything particularly special. It didn't demonstrate any newly invented synthesis and it didn't have better capabilities than the real synthesizers of the time, but what it did do was produce reasonably complex sounds for a very low price.

If you just take a standard SID chip connected to a keyboard and play the oscillators like a usual synthesizer, it really doesn't sound like anything unique. Listen to the SID tune below that I started a while back. Until the drums come in, you really can't tell that it was produced using SID chips. I deliberately had to put in C64-type drums and fast arpeggios to give it that SID sound.

So what is it that makes "that SID sound"? Well I'll tell you...it's the television. Ok, there's more to it than that, but for the most part the nature of the television standards coupled with the detailed control allowed by direct programming of the CPU is responsible for the general sound that is associated with the SID chip.

I shall explain.

Pulse Width Madness

Firstly, I want to get PWM (pulse width modulation) out of the way. The PWM 'neeeeoooowwwww' sound is something that is particularly attributable to the SID for SID fans. Although it's available on most early synthesizers, it has never been widely used in the raw, brash way in which it is used on the SID. The reason it is used so much in SID tunes is processing cost. If you only write SID music using the standard waveforms, the sound you get is very static and rudimentary, much like music on the earliest games or on almost every other computer sound chip. For example, take a listen to the music from the 1983 game Slalom, which uses only the raw sawtooth wave on every channel. It is tonally simplistic and a far cry from the music produced in later years after audio specialist musician/programmers came along and started to use PWM and other more advanced techniques.

PWM gives a tonally complex sound and, as mentioned before, is very cheap in terms of processor usage and audio channel usage. All other complex tonal features on the SID (such as ring mod or chorus) require the use of two audio channels, but PWM can be implemented using just one and takes almost no processor time. It's the natural choice and so is found in most of the classic SID music.

So PWM is a reasonably big deal in contributing to the sound of the C64. Most tracks use it because it's cheap and dynamic, and although PWM is not unique and can be found on most synthesizers, its use in C64 music is rather in-your-face.
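To make the 'cheap' part concrete, here is a rough sketch of what a playroutine typically does to get that sweep: nudge a 12-bit pulse-width value and write it into two SID registers on each update. The poke helper is a hypothetical stand-in for a hardware write, and the step and limit values are illustrative rather than taken from any particular game.

```c
#include <stdint.h>

/* Hypothetical stand-in for a hardware register write on a real C64. */
static void poke(uint16_t addr, uint8_t value) { (void)addr; (void)value; }

#define SID_V1_PW_LO 0xD402u   /* voice 1 pulse width, low 8 bits  */
#define SID_V1_PW_HI 0xD403u   /* voice 1 pulse width, high 4 bits */

static uint16_t pulse_width = 0x0800;  /* 12-bit value, 0x0800 is 50% duty */
static int16_t  sweep_step  = 12;      /* amount added on every update     */

/* Called on every playroutine update: two register writes are all it
 * costs to keep the 'neeeeoooowwwww' sweep moving on one channel. */
void update_pwm_sweep(void)
{
    pulse_width += sweep_step;

    /* Bounce between roughly 5% and 95% duty for the classic sweep. */
    if (pulse_width < 0x0100 || pulse_width > 0x0F00)
        sweep_step = -sweep_step;

    poke(SID_V1_PW_LO, pulse_width & 0xFF);
    poke(SID_V1_PW_HI, (pulse_width >> 8) & 0x0F);
}
```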

Graphics constrain sound

Now about that television thing...first it's necessary to know something about how graphics are processed, especially in the context of games.

I will go into the technical details about the hows and whys in another post, but fundamentally what you need to know is that a TV refreshes its screen at a regular interval and every time the refresh happens, the parameters for audio are updated.

This means that the audio cannot be updated more frequently than 50 times per second (or 60 if using an American TV). Pitch cannot be changed, waveforms cannot be changed, the filter cannot be changed. These things can only occur when the TV screen refreshes. For the more worldly among you, this is the reason that music plays faster on an American C64 than on a European one; American TVs update 60 times per second rather than 50, so the audio updates occur 20% more often. For the rest of this post I will refer only to 50Hz updates.
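In code terms, the whole music driver boils down to one routine hanging off the screen refresh. Here is a minimal sketch of the idea; the function names and the tempo value are mine, not from any particular game.

```c
#include <stdint.h>

/* Hypothetical hooks; in a real game these live inside the playroutine. */
static void process_new_row(void)       { /* read the next row of notes   */ }
static void run_per_frame_effects(void) { /* vibrato, arpeggio, PWM, etc. */ }

static uint8_t frame_counter = 0;
static const uint8_t frames_per_row = 6;  /* 6 frames per row at 50Hz works
                                             out at 125 BPM in 16th notes */

/* Installed on the raster interrupt, so it runs exactly once per TV frame:
 * 50 times per second on PAL, 60 on NTSC, hence the faster US playback. */
void music_irq(void)
{
    if (++frame_counter >= frames_per_row) {
        frame_counter = 0;
        process_new_row();          /* note changes can only happen here */
    }
    run_per_frame_effects();        /* and effect updates only here too  */
}
```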

It should be noted that this is the case for every other gaming soundchip at the time when the TV was king. Atari, NES, Gameboy, Amiga: they all work in the same way. Even the handhelds have a hardware design that is based on the idea of regular screen updates at the rate of a television.

The SID chip has three channels; it can play three sounds simultaneously. It also has a limited number of waveforms. Once you've reached the limit of this configuration, what is there to do? Well, let's say you wanted to play a three-note chord. You can either use all three channels or you can use one channel and run through each note in the chord as fast as you can. "How fast would that be?", you ask. "Why, 50 times per second of course", I reply. The video to the right demonstrates the effect; it concerns the same effect in NES music, but begins with a C64 tune.
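Here is a hedged sketch of that one-channel chord trick: the playroutine just moves the channel's frequency on to the next chord note on every frame. The helper function and the frequency values are illustrative only.

```c
#include <stdint.h>

/* Hypothetical register-write helper for one SID voice's frequency. */
static void set_voice_frequency(uint16_t sid_freq) { (void)sid_freq; }

/* 16-bit SID frequency values for the three chord notes (illustrative). */
static const uint16_t chord_notes[3] = { 0x1168, 0x15F0, 0x1A15 };
static uint8_t chord_index = 0;

/* Called once per frame.  The channel only ever plays one note at a time,
 * but stepping through the chord 50 times per second makes the ear hear
 * a buzzy chord: the classic C64 arpeggio. */
void arpeggio_frame(void)
{
    set_voice_frequency(chord_notes[chord_index]);
    chord_index = (chord_index + 1) % 3;
}
```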

Faking it

How about if you want to make the sound of a snare drum? Fundamentally, a snare sound consists of the impact tone of the drumstick against the skin resonating inside the drum itself plus the noisy sounds of the snare wires rattling underneath the drum. This can be emulated using two channels, one for the impact tone and one for the noise of the snares.

In fact, this is exactly what Tim Follin did for the L.E.D. Storm music. Have a listen on the right.

The problem with using two channels is, obviously, that it uses two channels, and taking up 66% of your channels for a snare drum while still keeping a full sound requires the skills of the likes of Tim Follin. The compromise is to squeeze both sounds into one channel instead. How about switching between the two sounds very quickly to make it sound like two channels? Well, that's perfectly reasonable and, if you could do it fast enough, you'd be hard-pressed to hear the difference. Unfortunately, C64 games couldn't go that fast without wasting too much CPU, but they could do it 50 times per second. At this frequency, it's very noticeable and that's what gives C64 drums their trill-like sound.


Take a look at this lovingly-prepared image of the snare sound from the title music of the game Delta. You can see how the sound switches between triangle and noise waves every 50th of a second. Click on the image to see a bigger version. Click on the play button beneath to hear how it sounds.
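In practice a drum like this is just a tiny per-frame table. Here is a rough sketch; the waveform constants are the standard SID control-register gate-plus-waveform bits, but the table contents are made up rather than ripped from Delta.

```c
#include <stdint.h>

/* Standard SID control-register bits: gate (0x01) plus the waveform bit. */
enum { TRIANGLE = 0x11, NOISE = 0x81 };

/* One row per video frame.  The values are illustrative, but the shape is
 * the same: one tonal frame, then noise frames. */
struct drum_frame { uint8_t waveform; uint8_t pitch; };

static const struct drum_frame snare[] = {
    { TRIANGLE, 60 },   /* frame 1: the tonal thwack of the drum skin     */
    { NOISE,    72 },   /* frame 2 onwards: the rattle of the snare wires */
    { NOISE,    66 },
    { NOISE,    58 },
};

/* Hypothetical helper that writes waveform and pitch to one SID voice. */
static void set_voice(uint8_t waveform, uint8_t pitch)
{ (void)waveform; (void)pitch; }

/* Called once per frame while the drum plays.  At 50Hz the switch from
 * triangle to noise is clearly audible, which is what gives C64 drums
 * their trill-like character. */
void play_snare_frame(uint8_t frame)
{
    if (frame < sizeof snare / sizeof snare[0])
        set_voice(snare[frame].waveform, snare[frame].pitch);
}
```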

It's important to note that this technique of switching between sounds so quickly is only possible because the processor has direct control over the SID chip and it can be programmed to switch so frequently. Standard synthesizers do not allow anything like it, mostly due to the fact that they need a user-friendly interface and in general the friendlier the interface, the less flexible something is.

Can you imagine Nick Rhodes from Duran Duran programming sounds in the 80s for his synthesizers in assembly language? What a thrilling band practice that would have made:
"Nick, can you make that sound a bit less harsh? It's drowning out the guitar."
"Yep, hold on, just let me connect the computer and set up the cross-assembler. It'll take me about half an hour. Talk amongst yourselves in the meantime."
That's why he used a Roland Jupiter-8 and not a Commodore 64.

Smashing sounds together


The technique of switching sounds at this speed happens across all sorts of instruments, not just snare drum elements, and it gives a very characteristic sound. One very common technique is to play a short percussion sound at the beginning and then switch to another sound like a bass or lead. On the left you can see an image of the snare/bass sound from Chris Huelsbeck's R-Type music. Click on it to see it in a larger view or click on the player button beneath it to hear it. It has snare elements right at the beginning and then continues with a bass sound for the rest of the instrument. There is an equivalent instrument that contains a kick drum in place of the snare and also one for the same bass sound without the snare or kick part.

This sort of thing can be found all over SID music to make it sound like there are more than 3 channels being played at once. The minimum switching speed of 1/50th of a second gives it a sort of scratchy, stuttering sound that massively contributes to what people think of as being the sound of the SID.

For comparison of how different things could sound without the 50Hz Vertical Blank timing restriction, here is the snare sound from 'I swore a vow on my dying breath' by Sascha Zeidler (a.k.a. Linus). In this track from 2006, there is no Vertical Blank restriction. The audio updates are done with the CIA timer at 200 times per second. There is a massive difference in the sound compared to the Delta snare sound above. It doesn't really sound like a SID drum sound at all.

Not Just Noise

This contribution to defining the sound style of the SID does not just apply to percussion sounds. The marvellous Lightforce by Rob Hubbard is rammed full of these 50Hz sound switches.

To the right is the tune itself. To the left is the sound used in the main arpeggio. The defining bloopiness of the sound is due to the waveform change and pitch change that you can see clearly in the image of the waveform.

Listen to the rest of the music and see if you can spot other areas where the above techniques are applied. They're all in there at various points.

And hopefully, now that you know what to look out for, you'll be able to hear the same thing across the gamut of SID music. Happy listening. Comments section for questions.

Next: "Tables. How to emulate SID switching without making the users insane."

Tuesday, 7 July 2015

Old Audio Part 4: Psygnosis before the Playstation

Part 3, "Amiga, career-maker" can be found here.

Storage at a premium

Before I joined Psygnosis as an employee, I did the music for Bill's Tomato Game on the Amiga. As usual, memory was highly restricted so I looked into using a music program I'd bought called Mugician. It had a very efficient synthesis technique and could produce very impressive sounding stuff in very little RAM, but Bill (the programmer) had trouble getting the binary playroutine to work (no source code was provided) so I went back to using Protracker. Although it was straightforward because of Protracker, it was a lot of work. There were 10 areas with two tunes per area, plus a title tune, high score tune etc. The total was 27 pieces of music, 10 stingers and some sound effects. The music all had to be done in 3 channels so as to leave one channel free for sound effects, and I had 40KB for samples.

One thing that is probably not realised these days is that it was not just RAM that was the problem. Disks on the Amiga were 880KB and if I'd used 40KB for every tune, that would have taken more space than one whole disk. To keep the disk space down, I really cut down everything as much as I possibly could. I managed to get the samples for the prehistoric levels down to only 5966 bytes. The sample banks were refreshed on each level load because I used a technique in Protracker to get pulse-width-modulation that left the modulating sample corrupted.

Soon after I'd finished everything, I got a full-time job at Psygnosis as a game evaluator; a glorified tester really. Bill's Tomato Game wasn't released for another few months, so I ended up doing loads of testing on it. For the final few release candidates, I was the only person who could play through the entire game in one go. I must have done 10 complete run-throughs from start to finish as the last bugs were fixed.

Is this what real jobs are like?

But this is how amazing and ridiculous my first days at Psygnosis were:
  • Day 1: Got my own desk, brand new Amiga and monitor, phone, unreleased games to play.
  • Day 2: The whole company (approximately 50 people) went go-karting. I got into the final of the tournament.
Lemmings Title Image
I started there in September 1992 and the next year was very productive for Psygnosis. As a result, although there was a large amount of game testing, I ended up doing all sorts of stuff: I designed levels for Hired Guns, got involved in design sessions for other games, drew the title screens for Lemmings on the Atari Lynx, evaluated new game submissions and, of course, did a whole bunch of audio.

Operation G2 in-game image
One of the earlier audio jobs that came along was Operation G2. It was an adventure game and sort-of sequel to the game Obitus. It was never actually released, although a single-level demo did appear a long time after I'd finished working on it. There were a bunch of sound effects and some music to do. I first did a title tune that was tense and rhythmical, but they wanted something more ambient, so I did a bunch of variations on a sci-fi-sounding tune that I'd recently started on my brand new Korg 01/W FD (took me two years to pay off). There were no problems really, except for when they asked for the sound effects. I created them and sent them over, only for them to ask "How do we play them?". Being far too proud to say "I don't know", I said that I'd sort out a player for them.

The impetus is strong in this one

My first real bit of programming.
I dug out a little 68000 assembler book that I'd got from somewhere and for the next two weeks, I sat on the train every morning with a pad working out the logic and writing assembler code. Although I'd had a go at messing with little example bits of assembler code written by Brian Postma, Amiga and assembler programming still never really sat right with me. That was until around the third day, when I was on the train and the metaphorical lightbulb switched on in my head. I literally exclaimed, "Oh!" and then it all suddenly made sense to me. I've read many reports of other people having the same sort of programming epiphany.

The player worked in what I thought was surely the way everybody did it, but in my naivety I had made something that was far more advanced. For my player I set priorities on each sound and roughly calculated the play position during playback.
A gobbledygook box
Whenever a new sound was played, it would check the current channels to see if any were free and, if not, would only play the new sound if its priority was higher than all of the currently playing ones. It was only later that I realised that nobody did anything like that. Everything else either used a round-robin mechanism or just played on one channel, stopping the existing sound when the next one came in.
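For the curious, the allocation logic amounts to something like this; a minimal sketch with made-up names rather than the original 68000 code.

```c
#include <stdint.h>

#define NUM_CHANNELS 4   /* the Amiga's four sample channels */

static uint8_t channel_active[NUM_CHANNELS];
static uint8_t channel_priority[NUM_CHANNELS];

/* Hypothetical helper that actually (re)starts a sample on the hardware. */
static void start_sample(int channel, int sound_id)
{ (void)channel; (void)sound_id; }

/* Use a free channel if one exists; otherwise only interrupt a playing
 * sound if the new one outranks every sound currently playing. */
void play_sound(int sound_id, uint8_t priority)
{
    int lowest = 0, highest = 0;

    for (int i = 0; i < NUM_CHANNELS; i++) {
        if (!channel_active[i]) {              /* free channel: just use it */
            channel_active[i]   = 1;
            channel_priority[i] = priority;
            start_sample(i, sound_id);
            return;
        }
        if (channel_priority[i] < channel_priority[lowest])  lowest  = i;
        if (channel_priority[i] > channel_priority[highest]) highest = i;
    }

    /* All channels busy: only play if we beat everything, and then
     * replace the least important of the playing sounds. */
    if (priority > channel_priority[highest]) {
        channel_priority[lowest] = priority;
        start_sample(lowest, sound_id);
    }
}
```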

During this period, Martyn Chudley (founder of Bizarre Creations) was sitting in the corner programming Wiz'n'Liz on the Sega Megadrive (aka Genesis). That got me thinking, why don't I write a music player for the Megadrive? It uses the same 68000 processor as the Amiga so it should be pretty easy, no?

And so, flushed with confidence at my new-found programming ability I asked one of the producers to get me the Megadrive hardware reference manual so I could have a bash at it. When it arrived, I eagerly turned the pages and was presented with such gems as:
To write to Part I, write the 8 bit address to 4000 and the data to 4001. To write to Part II, write the 8-bit address to 4002 and the data to 4003.
CAUTION: Before writing, read from any address to determine if the YM-2612 I/O is still busy from the last write. Delay until bit 7 returns to 0.
CAUTION: in the case of registers that are “ganged together” to form a longer number, for example the 10-bit Timer A value or the 14-bit frequencies, write the high register first.
I had absolutely no idea whatsoever what I was looking at. All I really managed to understand was that there were 22 registers and to use them you had to decipher a load of gobbledygook. I sat there going through it for about 2 hours, prior confidence abandoning ship, and got absolutely nowhere. Defeated, I gave back the manual, explaining that it was too complex for me.
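With years of hindsight, the gist of what the manual was asking for looks something like the sketch below: select a register, wait for the busy bit to clear, write the data. The peek/poke helpers are hypothetical stand-ins for 68000 memory accesses and the addresses are simply the ones quoted above.

```c
#include <stdint.h>

/* Hypothetical stand-ins for memory-mapped byte accesses. */
static uint8_t peek8(uint16_t addr)              { (void)addr; return 0; }
static void    poke8(uint16_t addr, uint8_t val) { (void)addr; (void)val; }

#define YM2612_PART1_ADDR 0x4000u  /* write the register number here       */
#define YM2612_PART1_DATA 0x4001u  /* then write that register's data here */

/* Bit 7 of a read is the busy flag left over from the previous write. */
static void wait_until_ready(void)
{
    while (peek8(YM2612_PART1_ADDR) & 0x80)
        ;
}

/* Write one value to one Part I register, as the manual describes:
 * select the register at 4000, then write its data at 4001, waiting
 * for the busy bit to clear before each write. */
void ym2612_write_part1(uint8_t reg, uint8_t value)
{
    wait_until_ready();
    poke8(YM2612_PART1_ADDR, reg);
    wait_until_ready();
    poke8(YM2612_PART1_DATA, value);
}
```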

Next up: FMV, The FM Towns, Mega-CD and Super Nintendo.

Wednesday, 1 July 2015

Old Audio Part 3: Amiga, career-maker

Part 2 of this series can be found here.

How to get into the game industry?

tl;dr Make stuff. Meet people.

Well, that's a pretty open-ended question. This is how I got into the game industry; your mileage may vary, but this should hopefully contain some useful inspiration.

I was heavily into my Commodore 64. Playing games on it was what I mostly used to do when I wasn't at school. I tried programming a few things on it in BASIC, but it just didn't really click for me. I remember the manual having some sound programming examples. It was all going well until I wanted to modify one of the programs to play a chord instead of a single note. I expected a way to set up the three channels and then do a single command to play all three at once, but the only way to do it seemed to be to play one channel, then the next, then the next. My undeveloped brain couldn't comprehend that even though the program would play them sequentially, the computer ran so fast (less than 1MHz!) that all three notes would effectively begin at exactly the same time.

A life-changing invention
When the Amiga 500 became available, it was the clear upgrade path from the C64. I sold all of my C64 gear (games, magazines, printer, disk drive and all that) and gave the money to my parents so that they could add to it to get me an Amiga for Christmas. When it arrived, I spent all of my free time on it, again mostly playing games, and latched onto anyone I knew who also had one (which wasn't many people at that time; this was many years before the internet was available to the public). I heard about a computer club nearby and started going there every week. It was the geekiest thing you could possibly imagine. It was held at St. Laurence's Parish Centre in Birkenhead and although it tried to be a general computer club, it was mostly a copy-as-many-games-as-you-can club. X-Copy could be seen running on most of the Amiga screens every week. The organizers tried to stop the copying a few times, but never could.

Why copy it? Because it's there

A box I've never seen in real life
One week, I copied whatever new stuff I could as usual and the next day went through the disks to see what they were like. I booted one of them up and didn't have a clue what I was looking at. It was The Ultimate Soundtracker by Karsten Obarski. After much clicking on stuff and messing with it, I somehow managed to load a demo song and realised that it was a music program. "Oh. Interesting", thought I. It was quite an achievement to do that because it had no file dialog; you had to know the filename of the song you wanted to load into it and, being a dodgy copy, there were no instructions. The original Soundtracker never had mod files (a mod file is a soundtracker file with the music data and samples all in a single file). Everything was split into songs and instruments, so after the song data had loaded it asked you to put in whichever disk each of its samples referenced. The program came with one disk of instruments, labelled ST-01. This set of samples, sampled from various keyboards of the era, became very well known in Amiga circles.

The way it operated made sense to me immediately. I messed around with it a bit and from then on, I was sold. I would come home from school, go straight upstairs, put on my Amiga, load Soundtracker and write music. Every day. The tunes started out pretty badly of course. I don't have to try and remember how badly because I somehow still have almost all of them, apart from the first four and a few that only survive as corrupted files. Initially, everything was restricted to using the sounds that came with the default sample disk, ST-01, but I later learned how to rip the samples from games and other modules. You have to understand that there was very limited availability of samples. There was no internet, no-one else I knew writing music or who could afford a sampler and synths, and so no other way of getting new samples. When the tunes started to become reasonable in quality (which was probably after I'd written around 30 of them), I would put some of them on a disk and give copies to people at the computer club.

One of the older guys there who took a copy, whose name escapes me, was friends with Dave Kelly, who ran Consult Software, whose office was actually very close to the club's location. He passed the disk on to Dave and a week or so later (I can't remember how he got my phone number), Dave phoned me up asking me to do the music for Last Ninja 2. I hadn't even known that Consult Software existed at that point.

The two major things to take away from this are that if you want to be successful in any creative industry you have to a) produce stuff. A lot of it. And b) meet people. There is a huge amount of luck in this of course. In my case, the timing was just right as they were looking for someone to do audio just as I appeared on the radar, but if I hadn't made all of those tunes and met those people, the timing would have been irrelevant.

It was thanks to that same computer club that I got to do the music for Bill's Tomato Game and got a job at Psygnosis. One of the friends I made there, Chris Stanley, got a job as a tester at Psygnosis and he was responsible for introducing me to the company. Firstly by getting me in as an external tester, in which role I would play beta versions of games and submit bug reports, and then later by offering my music services to the programmer when Bill's Tomato Game got signed. I'll be forever grateful to him for that. Later, I asked for a job there by writing to the producer of Bill's Tomato Game. I was there for 6 marvellous months as a Game Evaluator before I was switched to doing full-time audio, which was my plan all along of course. It wasn't the plan to be paid so badly, but nevertheless, I was a full-time member of the company that made Shadow Of The Beast, making music and sound effects all day.

Next: Psygnosis before the Playstation

Monday, 29 June 2015

Old Audio Part 2: Some Lesser Home Computers

The previous article, 'Part 1: You People Don't Even Know You're Born', is available here.

A frustrated beginning

If only I'd learned to program properly when I was younger. I started pretty young, with my first professional game work happening when I'd just turned 16, but if I could have actually programmed the playroutine as well as writing the music I wouldn't have had to contend with the myriad of software and systems by different programmers and companies. I could have worked more quickly and probably earned more money until CD-Audio became the standard. It would have made my job at Psygnosis easier early on if I could have just spent a few weeks writing a player for whichever platform I had to use. It happened to some extent, but I wasn't yet a good enough programmer to do more complex stuff.

In 1990, when I had just turned 16, I got a phone call asking me if I wanted to do the music on Last Ninja 2 on the Amiga. Now then, when I read other pieces about how other people got started in whatever they do, there's always very important stuff missing. Like when you read interviews with someone and they casually drop in "...and then I was signed by Sony". Whoa there! Hold on a minute! How exactly did that happen? Telepathy? Did a Sony A&R walk past you in the street, smell that you could write music and then immediately pop a contract from his suit pocket? I always want to know how it got to that point. So for what happened in my case, you'll have to read Part 3, but for now, continue on...

It Begins

So...In 1990, when I had just turned 16, I got a phone call asking me if I wanted to do the music on Last Ninja 2 on the Amiga. Obviously I did want to because I'm not insane and thanks to the timing, the half-term holiday was coming up at school. For the week of that holiday, I went to the office of Consult Software in Birkenhead, sat there with a tape recording of the C64 music and converted it to the Amiga with Noisetracker. I got £40 per tune, which is the equivalent of around £90 today. They got a bargain, but 7 tunes equalled an amount of money I'd never comprehended owning before. It was brilliant. I splashed out on a joystick just because I had the money. I've still got it and it still works like new. :)


The original ZX Spectrum
Anyway, more on the Amiga later. While I was there, Consult Software were also doing some other game conversions and it was on one of them that I got to do my only ZX Spectrum (known as the Timex Sinclair in the USA) music, for Donald's Alphabet Chase. Going from the interface that I can remember, and doing a search, I'm pretty sure that the software I used was "Wham! The Music Box". It was taglined 'The Complete Sound System For Your Spectrum', but I think they must have got confused about the meaning of the word "complete".
Wham! The Music Box editing screen
It had two notes of polyphony, which was actually mildly impressive for the Spectrum when it was released in 1985. Notes had to be entered onto a stave, but it was basically a step-time sequencer (like a horizontal tracker) because every note was a quaver. If you wanted longer notes, you just put multiple consecutive quavers of the same note. With it I made an absolutely shocking version of the Alphabet Song in about 15 minutes. I also did a rubbish Amiga version and the Amiga sound effects as well, which led me onto doing...

You've got an ST? State!

Atari ST image
The Atari ST was £100 cheaper than the Amiga because it was £100 less impressive. The soundchip was the AY-3-8910, an utterly dreadful excuse for a synthesizer with three square waves and a noise channel. It's a shame really, because the ST was supposed to have the AMY chip, which would have had 8 channels with additive synthesis (but no sample playback), but it got cut for cost and timing reasons. I wasn't thrilled with the idea of doing Atari ST music, but it was only music for the title screen. There was a music package for the ST called Quartet, which allowed playback of 4 sample channels, basically giving it the same capability as the Amiga at the expense of almost the entire CPU. As I'd already done the Amiga version of the game, I just ported it to Quartet using its not-a-tracker notation-based interface. Compared to using a tracker, it was a bit slow going and took about a day to do, but it ended up sounding identical and therefore exactly as poor as the Amiga version.

Bad as the ST was for sound, Atari made the bizarre decision to add MIDI ports, which gave it a life in music far beyond what would be expected. Cubase and Logic originated on the Atari ST and it garnered a large following for general music production.

And that was pretty much my entire output for anything that wasn't an Amiga in the 8 and 16-bit transitional era. You might notice that luckily enough, tools were available for what I needed to do. Sadly, I never got to do any music for my beloved Commodore 64. Because I couldn't yet program. Because I had clearly not yet learned how to stop being an idiot.

Still to come: Psygnosis stuff, Amiga, The FM Towns, SNES, a failed Megadrive attempt, Playstation, PC, N64, beepy mobile phones.

Next Up: Part 3: Amiga, career-maker

Friday, 26 June 2015

Old Audio Part 1: You People Don't Even Know You're Born

Chiptunes have quite a following these days. But why?

Is it really that great to have Gameboy music playing through massive speakers? A lot of people seem to think so, even though it's mostly just square/pulse waves. Yet, Roland would be laughed at if they released a synth that only had three square waves and a noise channel, so what's the big deal with it?

The square wave genre, including AY chips and the other doorbell sound chips, certainly has its charm. It sounds pretty unique compared to any other style of music and the limited capabilities definitely forge creativity (giving us some great melodies), but my word it can sound like an absolute travesty without a skilled person at the helm. A true assault of sonic knives stabbing at the ear drum, and make no mistake it was the same back in the day. There was an abundance of absolutely appalling music on every game platform.

But the good stuff can be very good indeed. Who doesn't love Chipzel? She's brilliant. And hilarious. It's understandable that people would want to emulate her and others' success, but there are so many people trying to make Gameboy music right now that it's borderline ridiculous. Chiptunes are 'in' right now and the barrier to entry is unbelievably low, almost as low as picking up a guitar and strumming E, A, and D. It's very easy now to download a Gameboy emulator, load LSDJ into it, and start making music thinking that you're just like the game composers of yore.

But things were not so easy back then. Even after somehow getting hold of the dev kit, it could be a complete pain to do any audio on the machines of old and it was a constant battle against the restrictions of the hardware. It was not a particularly pleasant situation and all we ever wanted was for those restrictions to go away. I find it somewhat amusing that people now celebrate what was a mostly awful restricting environment, but the power of nostalgia (for things real or romanticised) can be very strong indeed.

In this series of articles I shall describe the sometimes crazy processes I've had to go through to create music/sound on various game platforms, beginning of course with the Gameboy.

If only now had been then

How can you hate MIDI, Mr. Clarke? You must be an idiot!

I don't hate MIDI for writing music in general, but when dealing with low-spec devices, MIDI is one of the worst possible formats. It's fine for trying to recreate a jazz band on a synthesizer, but appalling for fine control of a sound chip. The direct control over the sound that you get from a Tracker is a world apart from the fuzziness of MIDI.

MIDI has no concept of the hardware's audio channels or its polyphony limit. Either a player would have to restrict itself to one MIDI channel per sound channel, or you'd have to be very careful when placing notes so that no notes overlap. If they did, the second note could spill your lead sound into the next sound channel and cut out a bass note.

Instrument changes are incredibly laborious in MIDI. It takes ages to do clever effects like switching samples very quickly on each note. The fact that MIDI is purely a song-data format meant that designing instruments for your music was a completely separate process from writing the music. In a limited-capability environment, having immediate and direct control over the sounds while editing the music data is really important.

Doing pitch and volume effects in MIDI is really cumbersome. You have to set the pitch bend limits in the instrument and then insert hundreds of pitch bend values into the MIDI data. With a tracker you could just repeat a command at each note step and it would continually perform that command. You can do nice effects with this like repeating a pitch command with a high value while retriggering the sound at different notes, or repeating a decrease volume command while resetting the volume each step to get a staccato effect. Dynamic portamento was also very easy to do with this method and very controllable.

How about the bubbly-sounding fast arpeggiations that chip tunes are known for? Most trackers have this as an arpeggiation command where you specify the relative note numbers to cycle through, but with MIDI you'd have to enter each individual arpeggiation note by hand. Why do those arpeggiations sound so characteristically like old game music? Because the rate at which the notes change is the refresh rate of the TV, i.e. 50 or 60 times per second. If you go faster or slower than that, it starts to sound different.
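For comparison, here is roughly what a tracker's arpeggio command does behind the scenes; a hedged sketch using Protracker-style 0xy semantics, with a made-up pitch helper.

```c
#include <stdint.h>

/* Hypothetical helper: set a channel's pitch to base_note plus an offset
 * in semitones. */
static void set_channel_pitch(int channel, uint8_t note, uint8_t semitones)
{ (void)channel; (void)note; (void)semitones; }

/* Protracker-style arpeggio effect 0xy: x and y are semitone offsets, so
 * parameter 0x47 cycles root, +4 and +7, a major chord on one channel. */
void arpeggio_tick(int channel, uint8_t base_note,
                   uint8_t effect_param, uint8_t tick)
{
    uint8_t x = (effect_param >> 4) & 0x0F;
    uint8_t y = effect_param & 0x0F;

    switch (tick % 3) {            /* tick advances once per 50/60Hz frame */
    case 0: set_channel_pitch(channel, base_note, 0); break;
    case 1: set_channel_pitch(channel, base_note, x); break;
    case 2: set_channel_pitch(channel, base_note, y); break;
    }
}
```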

Basically, if you want your music to sound good on a weak chip, you have to do clever stuff that is really awkward to do in MIDI. Step-based players like trackers or custom playroutines were always designed to take advantage of the nature of the chip itself and made it much easier to implement the effects that could make it all sound good.

I will touch more on MIDI when I get to PCs and mobile phone audio in a later article.
Most of my music was on the Amiga using Protracker, which is really the basis for all of these modern chiptune trackers (although it could be argued that Protracker and its ancestor, The Ultimate Soundtracker, were based on earlier programs like Soundmonitor on the C64, which itself was based on earlier hand-coded music players and works in a very similar way to the Real-Time Composer interface of the Fairlight CMI sampling workstation). In a way, I had it easy because I could use a tracker while others before me had no choice but to use self-written code. On other platforms, I was not so lucky.

There were few public tools available for computers and none at all for consoles (as you needed to get a licence and a devkit from the hardware manufacturer). Most tools and players were written in-house at whichever game company was writing whichever game. These tools would never see the light of day outside that company. To even get tools in the first place could be problematic. They would obviously have to be written by a programmer who hopefully understood something about audio, and that would take time and money. It was a big investment for a company and programmers interested in audio were very hard to come by, so it would be pretty common that a programmer would shovel together the bare minimum and leave the musician to just get on with it. This could be anything from just a music player with the music data having to be written directly in hexadecimal in a text editor to more user-friendly toolsets with instrument editors and so on.

Never again, please

I only ever worked on one Gameboy game. It was called Force 21 and was originally a PC realtime strategy game from Red Storm Entertainment. The port was by The Code Monkeys and they had their own Gameboy audio player that they had used on previous games. Worst case scenario, I thought, was a bunch of assembler code into which I would write music data, which is how I mostly worked on the SNES and my own players. But no. It was worse than that. It was a MIDI player. I always hated MIDI.

It got worse.

To get the music into the correct format, I had to use their MIDI converter, which was written for the Atari ST.

It got even worse.

Their MIDI converter wouldn't convert the MIDI files that I output from Bars & Pipes on the Amiga. As it turned out, it would only convert MIDI files that were exported from a specific version of Cubase.

And yes, it was even worse than that.

After getting the music into the correct format, I then had to email the data file to the programmers who would package it up in their music player and send me back a Gameboy ROM file of the player with the music embedded to check if it worked okay.

The original music that I had to convert was a set of really nicely done orchestral music by David Frederick. I had to convert that into 1 square wave, 2 pulse waves and white noise. Not easy to do without it sounding like crap, but I usually enjoyed trying to get the most out of limited capabilities. It was a pain though; converting orchestral stuff was definitely the hardest thing to have to do.

To do it, I made a Gameboy emulation with my K2000 (possibly the finest synthesizer ever made). It was just a simple setup that matched the setup of the instruments on the Gameboy, with one sound on each MIDI channel. Then I wrote the music in Bars & Pipes on the Amiga that was connected up to the K2000 (as was my other gear), like with all of my music. Once that was done, it went through an utterly ludicrous process to be ready for the game...

Here's the complete sequence of events:
  1. Convert the music from the MP3s of the original orchestral score to fit the limitations of the 4 channel Gameboy.
  2. Export a MIDI file from Bars & Pipes.
  3. Copy that MIDI file from my Amiga to the PC via a floppy disk.
  4. Load the MIDI file into Cubase.
  5. Export a MIDI file from Cubase, having performed no operations on the original.
  6. Put the MIDI file into a specific folder.
  7. Load the Atari ST emulator and run the conversion program inside it.
  8. Load the MIDI file into the conversion program.
  9. Save out the data file.
  10. Email the data file to the programmers.
  11. Wait.
  12. Receive the ROM file from the programmers.  One time, it was 3 days until I got the ROM file back.
  13. Load it into the No$GB emulator.
  14. Listen to the new track and make sure everything sounded as expected.
The experience was so insane that after I'd finished, I spent the next three weeks writing my own Gameboy music driver in z80 assembler. It had plugin synthesis modules and sample playback, but it never got used for anything. I stopped being freelance and moved to London to do full-time audio programming. I enjoyed creating it though. There's something cathartic about going through an opcode list and realising that you can save two clock cycles if you replace that clear command with a multiply by zero instead. If I can ever find it, I'll post the source code here.

Force 21 PC music
Force 21 Gameboy music 

Next: Part2: Some Lesser Home Computers

Monday, 22 June 2015

INSIDIOUS 6581 Part 3: When a filter doesn't behave

Part 2, "How on earth does someone copy a 30-year old sound chip?" can be found here.

It is often the case that a mistake leads to greatness. There have been many times when I've been writing music and I've hit the wrong note or had the wrong sound selected and it's led me down a different, better path. And so it is that the filter of the 6581 is a great thing because it is broken.

'Mention that filter one more time
and I'll cut ya!', said Ben Daglish
Unfortunately, if you were a C64 musician back in the day, the choice to use the filter was fraught with peril. After spending hours making your sounds just right, you could then go and play it on a different Commodore 64 only to be presented with a completely different sound, maybe even resulting in an entire channel being muted.

The type of transistors used to make the 6581 SID meant that, due to inaccuracies in the manufacturing process, each one had a slightly different amount of resistance. The effect of this is that the cutoff frequency of the filter is different for every chip. And so, as explained above, you could have a perfectly filtered tone on one machine, a muffled mess on another and no detectable filtering at all on another. I wonder how many people loaded a game with heavily-filtered music by David Dunn and thought that the music was awful because of their SID filter.

In the words of Bob Yannes, designer of the SID chip, "the resistance of the FETs varied considerably with processing, so different lots of SID chips had different cutoff frequency characteristics. I knew it wouldn't work very well, but it was better than nothing and I didn't have time to make it better".

When used for recorded music production, the filter cutoff problem is mostly irrelevant. You just write the music how you want it for your particular chip and render the master file. The cutoff problem is not what makes the 6581 SID great. What makes it stand out above any old filter is the fact that another bug in the hardware causes it to distort. And it's not a nasty harsh clipping distortion, but a really nice warm one that is very musically pleasant.

What seems to happen is that the filter has a saturation point where the waveform gets squashed at its extremes. If you can make the waveform level push past this threshold, it then distorts exponentially. I can't easily recreate this exactly in Reaktor, so I just tried to model the above behaviour. It doesn't sound exactly like my real SIDs, but it sounds similar enough to sound good.

If you're wondering how I know this, it's because of Antti Lankila's excellent work on uncovering the secrets of the SID filter.

It's worth noting that the later 8580 SID chips do not benefit from this error as their filters are much more stable and sadly do not distort to any great extent. However, they do have a much better resonant peak than on the 6581. The maximum 6581 resonance is unfortunately pretty weak.

Here's how it works:


Starting from the left in the wiring above, we have A and B. These are the rotary controllers for root cutoff  frequency and resonance that you can see in the corresponding panel to the right, also labelled A and B. To the right of those, we have C, the Modulation module. This corresponds to the complete C panel on the right. It combines the results of all of the modulation controls and outputs a value for the cutoff frequency, which is then added to the value of A. In Reaktor, you can double-click on this to see inside. Moving on, we have D, the filter module. The signal is routed through here into a Reaktor filter module and filtered according to the controls and filter type selection. Again, this can be double-clicked in Reaktor to see the routing.

If the output of the Filter module was sent to the main output, it would result in a very clean, very boring standard filter, much like that in the SID 8580.

Section E is where the action really is. First, the signal passes through a parabolic saturator. This squashes the waveform as it approaches the threshold value (1.75 in this case) and gives it a bit of a growl. From there it gets split up and passed into a Clipper, which clips off the top edges of the waveform beyond values of 1.25. I then take the clipped signal and subtract it from the non-clipped signal to leave me a waveform that consists of just the very top and bottom of the wave. This represents the signal that has been pushed beyond the threshold point in the real SID. As I don't want this to get overblown, I route it through a high-pass filter; we really only want to add the fizz back in. Then I multiply this waveform by 4 to enhance those high frequencies. The SID is apparently exponential here, but a multiply seemed enough. Then I add that back on top of the original signal.

After all that, I route it back into another parabolic saturator, labelled F, using a higher threshold value than before, just to calm it down a bit and dampen any new clipping that could result from the newly multiplied signal.

The result of all this is that for single channels, no distortion occurs, but when multiple channels are routed through it the input gets loud enough to gain a soft saturated distortion. This sounds especially good when the channels are set to modulate against each other, for example by detuning one channel.
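For anyone who wants to follow the signal flow without opening Reaktor, here is a rough per-sample sketch of that chain in plain code. The parabolic saturator is only approximated, the high-pass coefficient is a guess and the second saturator's threshold is my own choice.

```c
#include <math.h>

/* Rough quadratic soft clip: linear near zero, flattening out towards the
 * threshold.  Only an approximation of Reaktor's parabolic saturator. */
static float parabolic_saturate(float x, float threshold)
{
    if (x >  2.0f * threshold) return  threshold;
    if (x < -2.0f * threshold) return -threshold;
    return x - (x * fabsf(x)) / (4.0f * threshold);
}

static float hard_clip(float x, float limit)
{
    if (x >  limit) return  limit;
    if (x < -limit) return -limit;
    return x;
}

/* Simple one-pole high-pass used to keep only the fizz of the overflow;
 * the coefficient is a guess, not taken from the ensemble. */
static float highpass(float x)
{
    static float prev_in = 0.0f, prev_out = 0.0f;
    float out = 0.95f * (prev_out + x - prev_in);
    prev_in  = x;
    prev_out = out;
    return out;
}

/* One audio sample through the distortion stage described above. */
float sid_filter_distortion(float filtered_sample)
{
    float saturated = parabolic_saturate(filtered_sample, 1.75f); /* E: first saturator  */
    float clipped   = hard_clip(saturated, 1.25f);
    float overflow  = saturated - clipped;       /* just the wave tips beyond 1.25       */
    float fizz      = highpass(overflow) * 4.0f; /* keep the highs and exaggerate them   */
    float combined  = saturated + fizz;          /* add the fizz back onto the signal    */
    return parabolic_saturate(combined, 2.5f);   /* F: second saturator, calms it down   */
}
```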

Now that's all very well and good, but for all that work it's simply not right. It's a filter. It saturates and distorts with thresholds similar to the SID. But it's not the SID. In fact, when you really look into it, it's very far away from how the real thing sounds. So while writing this entry, I went back to the HardSID and started looking more closely at the actual SID output with a software oscilloscope. The results of that investigation will appear in a later article.

Next: "What makes the SID sound like a SID is not the SID"

Sunday, 14 June 2015

INSIDIOUS 6581 Part 2: How on earth does someone copy a 30-year old sound chip?

View part 1 of this series here.

Let's copy the SID chip. How hard can it be?

To get a reasonable approximation of the SID chip is not that difficult. To get a useful interface to control it, though, certainly is. There's a fine balance between a capable interface and ease of use. It's easy to make a complex, convoluted interface that exposes every possible function, but difficult to make an intuitive one that still allows all the control that you need. What I wanted was an emulation accurate enough to be able to recreate some actual Commodore 64 game music, but one with an interface that didn't require you to be a programmer to understand it. The interface would evolve as the sound capabilities in the ensemble evolved.
A single channel in INSIDIOUS 6581
Compact, flexible, and hopefully easy to understand
for people who aren't me
By now, the SID has been pretty well researched and there is quite a lot of information about it. I searched the internet for detailed specifications on the chip so that I could get those details right in my emulation.

The fundamentals

The SID chip is actually quite simple. It doesn't really do very much (although compared to most other sound chips, it's vastly more capable). A few waveforms, some combination modes and a filter. And that's about it.
Triangle, Sawtooth, Square and Noise waveforms
There are 4 fundamental waveforms: Triangle, Sawtooth, Square/Pulse and Noise. The reason these waveforms are present, and not others like sine, is that they are easy to generate using counters and timers, which themselves are easy to create in hardware. A sawtooth wave, for example, can be created by regularly increasing a value by the same amount over time and then resetting it to zero when it reaches its limit. A square wave can be created by regularly flipping a value between zero and one. It's very simple stuff.
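A quick sketch of that counter idea, using a SID-style 24-bit accumulator; the bit widths match the documented hardware, but the code itself is just an illustration, not the chip's actual logic.

```c
#include <stdint.h>

/* The oscillator is essentially a 24-bit counter that wraps around;
 * each waveform is just a different way of reading that counter. */
struct oscillator {
    uint32_t accumulator;   /* only the low 24 bits are used    */
    uint32_t frequency;     /* amount added on every clock tick */
};

static void tick(struct oscillator *osc)
{
    osc->accumulator = (osc->accumulator + osc->frequency) & 0xFFFFFF;
}

/* Sawtooth: the top 12 bits of the counter, ramping up and then wrapping. */
static uint16_t sawtooth(const struct oscillator *osc)
{
    return (uint16_t)(osc->accumulator >> 12);
}

/* Pulse/square: fully on or fully off depending on whether the counter
 * has passed the pulse-width value yet. */
static uint16_t pulse(const struct oscillator *osc, uint16_t pulse_width)
{
    return (osc->accumulator >> 12) >= pulse_width ? 0xFFF : 0x000;
}

/* Triangle: ramps up for the first half of the counter's range and back
 * down again for the second half. */
static uint16_t triangle(const struct oscillator *osc)
{
    uint16_t saw = (uint16_t)(osc->accumulator >> 12);
    uint16_t tri = (saw & 0x800) ? (uint16_t)(0xFFF - saw) : saw;
    return (uint16_t)((tri << 1) & 0xFFF);
}
```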

The SID can also combine these waveforms together to give some extra sounds. How it does this is even now still under debate as what the original hardware specification states does not match the sound that is produced.

A SID musician
Ring mod/Hard sync
are hard to demonstrate
in one image, so here's
an alpaca instead
The two most advanced features are ring modulation and hard sync. Both require two channels to operate:

Ring modulation is an effect where the amplitude of one waveform is modified very quickly by the shape of another. Basically, imagine if you could play waveform A with a volume knob that could go below zero and turn the sound upside down. You then turn its volume knob incredibly fast in a pattern that matches the shape of waveform B. Ring modulation is responsible for bell-type sounds and most of the weird screechy noises that litter Rob Hubbard tracks. It only works on the SID when waveform A is set to be a triangle waveform.

Hard sync is where one waveform plays at its own frequency, but gets reset back to its start every time the second waveform loops. It is responsible for the rising modulating sound in Ben Daglish's Wilderness music from The Last Ninja and the ludicrous intro noise in Martin Galway's Roland's Rat Race music.
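Rough sketches of both effects, modelled the way they are described above rather than as the exact SID internals; the oscillator structure and helpers are mine, and the ring mod is reduced to a simple multiplication.

```c
/* Both sketches assume oscillators with a phase running from 0.0 to 1.0;
 * this helper turns a phase into a triangle wave from -1 to 1. */
static float triangle_from_phase(float phase)
{
    return (phase < 0.5f) ? (4.0f * phase - 1.0f) : (3.0f - 4.0f * phase);
}

struct osc { float phase, increment; };

/* Ring modulation, as described above: channel A's triangle is flipped
 * upside down and back again in the shape of channel B, which here is
 * simply a multiplication of the two signals (phase advancing omitted). */
float ring_mod_sample(const struct osc *a, const struct osc *b)
{
    float square_b = (b->phase < 0.5f) ? 1.0f : -1.0f;
    return triangle_from_phase(a->phase) * square_b;
}

/* Hard sync: channel A runs at its own frequency, but its phase is yanked
 * back to zero every time channel B completes a cycle. */
float hard_sync_sample(struct osc *a, struct osc *b)
{
    b->phase += b->increment;
    if (b->phase >= 1.0f) {          /* B wrapped around...                 */
        b->phase -= 1.0f;
        a->phase = 0.0f;             /* ...so A is forced back to its start */
    }
    a->phase += a->increment;
    if (a->phase >= 1.0f)
        a->phase -= 1.0f;

    return 2.0f * a->phase - 1.0f;   /* A read out as a sawtooth, -1 to 1   */
}
```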

Lastly, the SID has a filter that can be configured as Low-pass, Band-pass, High-pass or combinations of all of them. There is only one filter, but each sound channel can be selectively routed through it or not. It is the filter that can make the SID really stand out. A bug in the implementation means that it distorts quite easily giving a nice warm, growly tone. It was not very widely used in games because instability in the design and manufacturing process meant that the intensity of the filter could be massively different between two chips. Martin Galway, when asked about his best and worst memories of the C64 remarked, "Worst memory is that damn filter! I wish they were able to fix it."

The hardware has some other features, like the ability to route external audio through its filter and two analogue-to-digital converters (intended for reading paddle controllers), and there is also a bug that allows sample playback, but they are not relevant for creating an emulation of its synthesis capabilities.


Starting the recreation

Reaktor's Tri Sync oscillator module
Right at the very beginning, it became clear that choosing to use Reaktor was a good idea. Reaktor, being a modular synth, has a whole load of diverse modules to wire together, a group of which are various oscillator types. Clearly seeing into the future and noticing that I wanted to copy the SID chip, Native Instruments provide oscillator modules for Sawtooth, Triangle and Pulse (Square) that have a facility for both ring modulation and hard sync. What a stroke of luck. I was expecting to have to construct the oscillators from scratch using Reaktor's 'Core Cell' facility, whereby you can perform very low-level operations on raw audio data, but it was already done for me.

The top-level of a single SID channel in INSIDIOUS 6581
Wiring up the three oscillators and adding an LFO to vary the pulse width on the pulse oscillator was very easy and instantly produced that whiny C64 lead PWM sound. It annoyingly varied the DC offset with the pulse width value, but that was easily solved by subtracting the pulse width value from the whole waveform to balance it back out again.
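The DC fix is tiny. Here is a sketch, assuming a pulse oscillator whose raw output sits between 0 and 1 and whose duty cycle equals the width value; that scaling is an assumption of mine, not Reaktor's exact module behaviour.

```c
/* Pulse oscillator with a 0..1 output whose duty cycle equals 'width'.
 * Its average value is simply 'width', so subtracting the width value
 * re-centres the wave around zero, which is the fix described above. */
float pulse_sample(float phase, float width)
{
    float raw = (phase < width) ? 1.0f : 0.0f;  /* average drifts with width */
    return raw - width;                         /* so balance it back out    */
}
```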

Who noise how to solve it?

Adding the noise channel was rather more of a problem. The noise oscillator in Reaktor is described in the manual as producing "a random signal containing equal amounts of all possible frequency components". That means that it is what is known as 'white noise'. To get theoretically perfect white noise you take an infinite number of sine waves at every possible frequency and add them all together at the same volume level. The problem is that the SID noise waveform is by no means white noise. It can be pitched up and down as can be clearly heard in the title music of Uridium by Steve Turner. Because white noise represents infinite frequencies, pitching it up or down makes no sense, so how to re-create the SID noise if not with the Reaktor noise oscillator?

White noise, probably
The noise waveform of the SID is generated by a pseudo-random number generator. It streams out random numbers at whatever frequency it is asked to. When the frequency is lower, the numbers change more slowly and when the frequency is higher, the numbers change faster. There are actually other noise and random generators in Reaktor, but they all have the same problem. I toyed with the idea of creating my own noise module to copy how the SID generates random numbers, but that would have been unnecessary. Whatever would have come out of the output would actually end up being no different than a recorded sample of white noise, which is a fixed set of random numbers. By pitching such a sample up and down, the output numbers change faster or slower, exactly like the SID. So that's what I did. I generated 3.5 seconds of white noise (long enough to not notice any repeating loop) and inserted it into a Reaktor sample player for use as the white noise channel. I then set the sample player to playback at 'poor' quality (so as not to have any smoothing) and tuned the root playback frequency so that the tone matched the other oscillators.
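A sketch of the same idea in code: step through a fixed table of random values at a rate set by the requested pitch, which is all that pitching a white-noise sample up and down really does. The table length and helper names are mine, not Reaktor's.

```c
#include <stdlib.h>

#define NOISE_LEN 65536           /* long enough that the loop isn't heard */

static float noise_table[NOISE_LEN];
static float position = 0.0f;

/* Fill the table once with fixed random values, the equivalent of the
 * 3.5 seconds of white noise loaded into the Reaktor sample player. */
void init_noise(void)
{
    for (int i = 0; i < NOISE_LEN; i++)
        noise_table[i] = (rand() / (float)RAND_MAX) * 2.0f - 1.0f;
}

/* 'step' is proportional to the requested pitch: a bigger step makes the
 * random values change faster, exactly as the SID's noise does when its
 * frequency is raised.  No interpolation, to match the 'poor' quality
 * playback setting mentioned above. */
float noise_sample(float step)
{
    float value = noise_table[(int)position];
    position += step;
    if (position >= NOISE_LEN)
        position -= NOISE_LEN;
    return value;
}
```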

The combined waveforms


The Triangle + Square
waveform, which sounds as
scratchy as it looks
When you instruct a SID to use multiple waveforms at the same time on one channel, the hardware specification suggests that it performs a binary AND operation on the waves. It doesn't. It does something very weird. I tried to find out exactly what happens so that I could make a Core Cell to copy it exactly, but it's very difficult to get the right information. I didn't actually need to go that far because the output of the combined waveforms is deterministic; it always comes out the same for the same combination of waveforms.

The mostly useless
Triangle + Saw waveform
Whether I generated the waveforms in real-time or just played back a tiny sample from a real SID chip, the result would be exactly the same. It felt like cheating, but using samples is a whole lot easier, so that's what I did. Unfortunately, this is the worst-sounding part of the emulation. When the samples are pitched up, they alias quite badly. I tried to resolve this by having a separate sample for each octave, but the Sample player in Reaktor wouldn't work properly once I'd set it up. I do intend to resolve it somehow, but for now there is modulating aliasing on the combined waveforms.

I wanted to support every possible feature, so I needed to support hard sync on the combined waveforms. As they are samples and don't have a built-in Sync feature like the fundamental oscillators, I had to hack it in. The sample player supports resetting the playing sample back to the start when it receives a positive signal at one of its inputs. I set up a mechanism to generate events with a value of one to restart the sample waveform every time the input sync waveform looped. It works, but Reaktor is not entirely happy about it and clicks and pops a bit. It's the best I could do and, seeing as all but one of the combined waves are barely audible anyway, it's of very limited use so its poor quality is not a deal-breaker for me. I don't think any C64 game music uses hard sync on a combined waveform.

The end?

Believe it or not, that is the complete implementation of all of the (documented) sound generation features of the SID chip. I could have stopped there, but all I would have been left with would have been a crappy-sounding monophonic synth. It needed the filter and some extra control features to make it sound how it should.

Next: "When a filter doesn't behave"