Wednesday, 18 March 2015

Unity and quality settings

We've been pretty bogged down in learning the Unity IDE and picking our way through the differences between JavaScript and UnityScript (yep, that really is a thing - and it's ever-so-slightly different to "regular" JavaScript). But this evening we took an hour out to play with the quality settings. And we got some interesting results.

At the minute we're developing on a PC - it's a quad-core 3GHz something-or-other with 6GB of RAM and a whopping 1GB graphics card (these numbers actually don't mean very much to me, personally, but the guy I got the machine off seemed to think they sounded impressive). It's the most powerful PC I've ever used. But then again, for the last ten years or so, most of my coding has been done in Notepad (well, EditPlus2 if I'm honest) or Visual Studio!

Anyway, it's a half-decent machine (as far as I'm concerned anyway) and it runs 3D graphics fairly well. So during a bit of "downtime" we had a play with the quality settings.

I didn't even know this kind of thing existed - it was only after asking Steve about how he prepares his software for "real-world applications" that he suggested using the same code-base and simply dropping the graphics quality for lower-end devices. It seemed like a good idea, so we had a play to see what the different settings did:

On our machine, there was actually very little difference between the first three settings, fastest, fast and simple. Maybe we didn't have enough lights and effects to tell them apart; in any of these settings, there were few or no shadows on any of the objects.


Noticing the quality level change slightly as we went to "good" quality, we actually turned off shadows, as these were a little distracting. At this stage, we were more concerned with how our shapes were rendered, rather than the "sparkle" that particle systems and lighting occlusion (is that even a thing?) added to a scene.


Compared to "simple" mode, where the edges of all the shapes on screen had very definite "jaggies" along the edge, the "good" mode did a decent job of smoothing all the edges. So we were expecting great things from "beautiful" mode...


Beautiful mode sharpened up a lot of things in the scene; it was only after comparing screenshots between "good" and "beautiful" that we noticed what the actual difference was. The bitmaps on the floor tiles are much sharper (on second glance, the deforms on the floors in "good" mode had actually made them look quite blurry - we just hadn't noticed until we looked again).

But in sharpening up some of the bitmaps, something else happened too. Our animated (soldier) character started to display little white dots along the seams of the arms. They only appeared every now and again, and only for a single frame of animation. But they were noticeable enough to be distracting.

If you looked at the surroundings (as you might for a first-person shoot'em up) beautiful was definitely an improvement over "good". But if you looked at animated characters in the scene (as you might with a third-person shooter, for example) "good" actually gave better results than "beautiful" - the characters were certainly better animated, and the edges were not so jaggy (though the perspective distortion on the bitmaps did make them appear a bit more blurry).

Strangely, things didn't improve with the ultimate "fantastic" setting.



Once again, the scenery got that little bit sharper, but the animated character still had those annoying flashes of white every now and again. Not there all the time, but noticeable enough if you watched the animation - a little like the way you might spot a rabbit's tail as it runs away from you in an empty field. If you look for it, it's hard to notice - but just gaze at the scene and every now and again you see a flash of white.


While "good" does create distorted floor (and ceiling) tiles, we're actually thinking of sticking with "good" as the default quality setting, if only because those flashes of white on the animated character are so distracting. The jagged edges of the walls and floors (and, for that matter, around the character) in "beautiful" mode are pretty off-putting too.

Just when we thought we'd found the best graphics settings for us, we discovered a single-line script which confirmed our choice: QualitySettings.antiAliasing.

This can be set to zero, two, four or eight (the number of anti-aliasing samples per pixel).
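You don't even need to go near the menus for this - based on the documented QualitySettings API, a one-liner dropped into any Start function should do it (this little script is just our own test, not official sample code):

#pragma strict

function Start () {
     // force 8x anti-aliasing, whatever the current quality level says
     QualitySettings.antiAliasing = 8;
}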


The differences in quality are most noticeable with antiAliasing set to eight.
The screenshot above shows the same scene, at "good" setting, with antiAliasing set to zero (left) and eight (right). The scene on the right is much easier on the eye!

The decision was sealed when we flipped back to "beautiful" settings: even with antiAliasing set to maximum, we still got jaggies around our animated character. At "good" quality, the character sits "inside" the scene - he looks like he's part of the spaceship interior. At "beautiful" quality or above, the jaggies around the edges - and in particular along the gun barrels - make it look like an animated overlay, plopped on top of a 3D render.

So there we have it. It may be peculiar to our machine, and work perfectly fine on other devices. But for now, we're sticking with max antialiasing, and limiting our graphics quality to "good". We'll just have to learn to live with the slightly blurry tiles (or perhaps whizz the camera around quickly, and call it motion blur!)


Monday, 16 March 2015

Creating animations with Unity - complete noobs only!

While we're still finding our way around Unity, we're stumbling about, falling into all kinds of little gotchas and not quite understanding how it all works. But today we managed to create our own custom animation, entirely from scratch. Something which has been a real headache for days now, with screens appearing and disappearing, previously seen windows suddenly nowhere to be found, and so on.

Here's how we made some sci-fi sliding doors for our Starship Raiders board game. It may not be the best way to do it. It may not even be the right way to do it! But this is how we did it, and it works, so better get it written down before we forget!

This is the animation we created:


It's simply a door frame with two door sections "inside" it. When a function call is made, the doors slide apart. Another function call brings the two back together. A simple spot effect is played as the doors open/close.

To begin with we're using the Top Down Sci Fi (mobile) environment from Manufactura K4. We just placed a couple of their source models into our scene (to keep things simple).


The doors are originally designed to slide horizontally. But in our game, we're going to be putting a floor and ceiling in place, and sliding them vertically. If we stuck with horizontal, then we'd have to keep at least one blank panel alongside each edge of the door, so that the doors aren't visible after we've pushed them open. By turning them to operate vertically, it doesn't matter if they protrude above the ceiling, or below the floor, as they will be hidden by the floor and ceiling tiles anyway.

After rotating the doors, we placed them inside the frame - lining them up "by eye". We also made sure that each door was made a child of the doorframe. This is important, should we want to make copies of the sliding door to use again in the game. By simply cloning the parent (frame) we don't have to mess about setting up the doors again in future.


To keep things nice and tidy (it's not really necessary, but after using Unity half a dozen times or so, you quickly learn it really does make things easier in the long run!) we created a folder for our animations, selected one of the doors and selected "Animation" (not animator) from the menu.


At first we really struggled with even this basic premise. We tried creating an empty animation file, then tried to find some way of telling it which object to apply the animation(s) to. It may be possible to do it this way, but we just got confused. So this is our method - select the item you want to move, then bring up the animation window.

When you click "add property" you'll be prompted to save your animation to a file. Enter a suitable filename here. We called our first animation "door1_open".


Gotcha number two - if you don't see the "add property" button, it's probably because you've nothing selected that can be animated. It took us ages to work this one out. By selecting an object before bringing up the animation window, you should always have an "add property" button, because Unity already knows which object you want to animate.


There are a few things to note here:
Firstly, we selected the "position" property of our door, and the animator immediately displayed two sets of keyframes. Since we want our door to start off in the closed position, we left everything alone and moved the playhead (indicated by the arrow) to the second set of keyframes.

At this point, the record button, playbar at the top, and the x/y/z properties were all lit up in red. This tells us that we're in "record mode". Anything that we move around here will be recorded in the animation.

Making sure that the playhead was in the last set of keyframes, we lifted the door upwards, to its final open position (that it's popping out of the frame doesn't really matter - when the ceiling tiles are in place, it'll just look like it has slid inside the frame). Click off the record button and the changes are committed to the animation (the door drops back to its original position in the scene as well).

Here's gotcha number three: we want to create a second animation, moving the door from the open to the closed position. It's quite simple, once you know what you're doing, but it took us ages to work this one out!


Animations are applied to objects - so you must have an object selected in order to animate it. We spent ages creating second, empty animation clips, then wondering how to get to this bit again, where we could add keyframes and move things around. The answer is, with the object selected, click the drop down in the top of the animation window and create a new clip.

This will throw up the save dialogue window, and allow you to create your second, separate animation file. Because the door object is selected, it already knows which object to animate, and so, once again, you're presented with the "add property" button.

As before, we selected transform - position, and this time on the first frame moved the door to the same Y co-ordinates as at the end of the previous animation (you can just type into the co-ordinate boxes in the inspection panel). Because we're closing the door - moving it from up in the air back to its resting place - we left the last set of keyframe values as they were; you can always hit play to preview the animation.

With our "open" and "close" animations for door 1 complete, we repeated the process for door 2, until we had a total of four animations

[edit: it has been pointed out, that had we selected the door frame and started our animations, we could have set the y-co-ordinates of both door1 and door2 in a single animation, since both are child objects of the door frame. This would have meant having just two animations - one that animated both door1 and door2 to the open position at the same time, and a second which brought them both to the closed position. In future we'll use this method, but for now we're leaving this instruction post as-is, because this is the method that worked at the time!]


With our four animations in place, it's time to create an animation controller and bung some script in, to make the door open and close!

Selecting each of the doors, we created an animation controller for them (we'll put the animations in place in a moment); then, selecting the door frame (not the individual doors), we created a script and dropped this onto the (parent) door frame too.


Inside each door controller we repeated the same process. Select a door and then "Animator" (not animation) from the menu. Create a boolean parameter and call it isOpen. Create a blank, empty state, and make this the default.

Next drag the appropriate open/closed animations into the animator window.
So if you've got door one selected in the scene, drop the door1_open and door1_close animations into the window. If it's door two you have selected, drop door2_open and door2_close in there.


Now our default state is "door is closed". So we want a transition from the default state to door1_open, when the boolean value isOpen is set to true. Click the default state (to select it) then right click, and select "make transition" before drawing a line to the door1_open state.

Click on the white arrow that appears between the two states, and from the properties panel, add a new condition - isOpen is true. This tells Unity that at any time we're in the default state, we can play the "door opening" animation whenever the boolean value is set to true (we'll do that later with a bit of scripting).


Now we need to create a transition from door1_open to door1_close. The condition for this is isOpen = false. This is because after we've set the isOpen value to true, the open door animation will play, and the "state machine" in the animator will remain in the "open" state. So when we've opened the door, Unity will keep monitoring the isOpen property and if it ever goes false (while the door is open) it will then play the door1_close animation.


Lastly, we make a transition from the _close back to the _open animation, any time the isOpen property ever goes true again. Once all this is in place, we repeat the whole lot all over again, for door2.

If you hit play at this point, nothing particularly exciting happens. In fact, nothing should happen at all. If your doors are opening and closing at this point, something has gone wrong (and you're probably feeling like we did for two days!) Let's write some code to make these things move!
In the door frame we placed a controller script. This needs editing now...

#pragma strict
var door1:GameObject;
var door2:GameObject;
var door1Anim:Animator;
var door2Anim:Animator;
var doorState:boolean;

function Start () {
   
     // loop through the children of this object (rather than just
     // use object.find which could return any matching name on the map!)
     // and get the two door components for this frame object.
     var allChildren = gameObject.GetComponentsInChildren(Transform);
     for (var child in allChildren) {
          var s:String=child.name;
          if(s=="Doors_02A"){ door1=child.gameObject; }
          if(s=="Doors_02B"){ door2=child.gameObject; }
     }

     doorState=false;   
     door1Anim = door1.GetComponent(Animator);
     door2Anim = door2.GetComponent(Animator);
}

function Update () {

}

function openDoor(b:boolean){
     door1Anim.SetBool("isOpen",b);
     door2Anim.SetBool("isOpen",b);   
     doorState=b;
}

This little script runs as soon as the door frame object is created at "runtime".
It basically gets a reference (pointer) to the child door objects, and the animator objects that control their animations.

The openDoor function is a publicly accessible function - it's going to be called by our main controller in a minute - and can accept either true or false; whichever value is sent into this function is passed to the two door controller objects. If the door is in either its default position or the closed position, we created a transition to play the open animation whenever the isOpen parameter goes true.

Similarly, if the door has played the open animation, it plays the close animation whenever the isOpen parameter goes false. Any other combination of true/false is ignored (so if the door is open and the function openDoor(true) is called, nothing happens - you've tried to set the door to open, and it's already open, so it is correct to ignore this request).

So now all we really need to do is to create a script to allow us to call the openDoor function on the doorframe...


There are probably a hundred ways you can do this. We like to create a new, blank gameObject and call it "mainController" and add a script to this. It just makes it easier to keep everything in the same sort of place, once the project gets a little larger (and a bit more unwieldy).

In our main controller script, we just place a couple of buttons on the screen so we can call the openDoor function. In reality, our game logic will be making all kinds of decisions and deciding which doors need to open and close. But for testing our animations, this will do for now.

#pragma strict

function Start () {

}

function Update () {

}

function OnGUI(){
     if (GUI.Button (Rect (10,10,150,30), "doors open")){
          var d:GameObject=GameObject.Find("Gate_02");
          d.GetComponent.<door_controller>().openDoor(true);
     }
   
     if (GUI.Button (Rect (10,50,150,30), "doors close")){
          var e:GameObject=GameObject.Find("Gate_02");
          e.GetComponent.<door_controller>().openDoor(false);
     }
}

And that's it!
Marvel at your amazing sliding doors


One last little gotcha - if your doors open then flip shut, then start opening again, make sure you haven't got the "loop" option ticked in the door1_open, door1_close, door2_open, door2_close animations.


For added authenticity, you can add in a "hydraulic swoosh" sound, as the doors open and close. But that's probably a bit much for one night. For now we're just thrilled that we managed to understand enough about the crazy visually-based Unity editor to get some doors to open and close!

Good luck.......


Animating characters in Unity - not all models are the same

There are some brilliant models on the Unity Asset Store website. We've already invested quite heavily in Unity (over just the last few weeks, it's quite alarming how quickly $10 here and $30 there soon add up to a few hundred dollars!) and in doing so have found a wide range of quality in the Unity models.

There are plenty of 3d models available online, not necessarily designed for Unity, but with animations and poses that can be (relatively) easily imported into Unity. Then there are some which are an absolute nightmare to get working!

Originally we were really impressed with the Mixamo website.
It offers loads of character models, and some really cool animations (albeit at $5 per animation, which could quickly end up being quite a pricey way to put together a simple game!). Their online rigging tool is particularly impressive.



Simply upload a mesh (even without a skeleton or any complicated rigging or bones) and give the system a few cues - where to find major joints like elbows, knees and wrists - and let it run for a few minutes. The resulting rigged character is surprisingly easy to animate; just select one from hundreds of different actions, and apply it to the rig. It's as easy as that!

Mixamo looks like a great way of quickly producing characters for your games. Except, it doesn't always play nice with Unity.

Now, we're only new to Unity, but already we know what a decent model looks like. You import it, drag-n-drop a few controllers and, hey presto! you get a working model. The StarDudes characters are great examples of this.

Mixamo claim to have worked with Unity for a number of years, so we were quite looking forward to quickly and easily assembling a zombie horde for another of our game ideas (we've tried importing models into Blender and 3DS Max, and applying some pre-built mo-cap animations, but it's a lot of work, and a bit hit-and-miss as to whether it'll ultimately be successful or not).

But the Unity/Mixamo integration isn't playing nice - with either Unity4 or Unity5, we get the same results. Now it might just be that we're doing something wrong - but we're following the same steps that have successfully got us a number of animated characters from other suppliers, so perhaps there's just something we're not quite getting.

Here's how we tried animating our Mixamo free character (screenshots are from Unity4, but we get the same results using Unity5):

After installing the Mixamo plug-in for Unity, a screen very much like the Asset Store appears in the Window menu. We simply downloaded the (free) zombie model from Mixamo and dropped it onto the screen.


The Mixamo plug-in allows you to try out their animations in your project window. Simply select an animation then drag-n-drop your model onto it and click "preview". The animation is downloaded and a clone of your character acts out the animation in both the game and scene windows.


So far so good. In the screenshot above, we can see the original zombie character, dropped into the scene, as well as the clone character, carrying out the walk cycle animation. But this is where things start going a bit weird.


We downloaded the Mixamo (free) zombie walk cycle (by "buying" it at $0.00) and imported it into our project. Just like we have done with so many other models, we then created an animation controller and dropped it onto the model.


We then opened the Animator window and dropped the (newly downloaded) animation into it, setting "walk" as our default animation - just as we have done so many times, with so many other models from so many other providers.


Then simply set the game playing, to marvel at our shambling zombie walking animation. Ta-da!


Oh.
And this is where we got stuck.
Well and truly stuck.
Stuck like there's no answer to this! We tried all the online tutorials and followed them to the letter. Then we tried the forums and made sure that our model was rigged as "humanoid" (it was) and the animation was set to "humanoid" (not legacy). We tried running the animation as "legacy" and even tried dropping the animation straight onto the model (instead of using an animation controller).

It got so bad that we even entered Unity in debug mode and changed the animation type from 1 to 2 (as suggested in one of the online forums). Nothing worked.

No amount of deleting, restarting, reinstalling, tampering, tinkering and hacking got us any further than this weird, slightly cramped pose. The Mixamo preview animation was exactly as we wanted it, but we can't find

a) what we're doing wrong, to get the zombie to behave like this and
b) what we need to do to make it animate properly.

It's really frustrating - because Mixamo have a massive library of characters and animations which they say are designed to simply drag and drop into Unity, to allow you to get on with the fun stuff of writing your games.

Which is exactly what we want to do.
If anyone can shed any light on why this doesn't work, please leave a comment below!


Friday, 13 March 2015

Using StarDudes rigged characters in Unity

So we hit upon this genius idea about rewriting our entire board game app, moving away from Flash and trying to build the whole thing in Unity.

Firstly, Unity compiles down to a multitude of platforms. Flash is great for iPhone/iOS development, and does a passable job of creating Windows executables. But Unity does all this and a whole load more! It can compile for Linux and Mac, as well as iOS, Android and Windows, as well as consoles like Xbox. And it works with native .NET code, as well as Unity-targeted sort-of-JavaScript.

All this meant we had to give it a try.
First up, we hit the Asset Store and bought an entire mobile-friendly sci-fi environment called Top Down Sci-Fi from Manufactura K4.


While we're still not sure how to optimise for mobile (it involves creating single sheet textures, fake lighting and low-poly count objects, apparently) this kit looks great not just on mobiles, but even on our large dual-screen setup.


Using the environment was as simple as throwing some money at the screen, following the installation instructions and hitting play! It worked straight "out-of-the-box" with no coding or setup required. We're going to have a play about with that at some point in the future, but now we needed some characters to populate our sci-fi environment.

Now there are plenty of online tutorials about creating meshes, setting up skeletons and rigs and animating characters using software like 3DS Max and Blender. But this is a whole world of development that we just don't have time to invest in! Far easier to exchange some cash for some pre-made assets from the Unity Asset Store....

Over the last couple of weeks - partly out of eagerness, and partly because we've no idea what we're doing - we've done a lot of this kind of swapping cash-for-ones-and-zeros, and have a few different assets for Unity, all of varying quality.

Some of the better assets we invested in are the StarDudes by Allan McDonald.


These are not only great-looking characters, but are relatively low-poly (so ideal to put into our mobile-friendly environment). They also have a variety of head types, to easily create different character races, and different materials/textures to quickly change the look and feel of their space suits.


The characters also come with an assortment of great ready-to-go animations. These include two idling animations (where the character stands still and looks around, casually) as well as some walking, firing and drop-down-dead animations to boot.

The slightly toony looking characters don't look at all out of place onboard our 3d spaceship - which itself has a slightly cartoony feel about it, thanks to the solid black lines and bold use of colours.


There are a few different tutorials online describing different ways of creating animations in Unity. In recent releases, it uses a system called Mecanim, which uses a simple state machine to blend between animations - there's nothing to shatter the illusion of immersive game play like a character that snaps from one animation straight into another. The Mecanim system does away with this, creating "transition blends" from one (or multiple) poses into another (or others).

It has taken some time to get used to the mix of visual drag-n-drop and script/coding approaches that are required to make Unity work, but once you know what you're doing, animating a character can be quite straightforward (until you know what you're doing, it can be a horrible, confusing, frustrating experience as there's nothing immediately obvious to tell you why some characters will happily take up an animation sequence, while others stubbornly remain in their default T-pose).

Every character (that needs animating) needs an animator controller. This is a file that describes the state machine that controls the animations. "Inside" the animator controller live all the animations, and the relationships between them.

Because Unity still supports the "legacy" method of animating (by placing the animations directly onto the model, without the use of an animator controller) and also animation by scripting (where a script placed on a model manipulates the rotation and position of the rig bones directly) simply comparing two or more models to see how they work often leads to more confusion than explanation!

Here's how we animated our StarDudes characters:

First, place a character in the scene.



Create an animation controller. At this stage, it's little more than an empty file.



Drop the controller onto the model in the scene. In the model properties you should now see the controller linked to the model


It's at this point that we need to add our animations.
With the model selected, open up the "animator" (not the animation) window from the menu


Now find the animations you want to use with this model, and drag and drop them into the animator controller window. To preview an animation, expand the file containing the animation (a lot of animations "hide" inside .fbx files so click the little grey arrow to see all the contents of the file). A single click on the animation will display it in the preview window. Once you've found the animation you want simply drag and drop it into the Animator window.


The first animation placed in the window becomes the default animation. If you add in more than one animation here, you can choose which one should be the default. The animation in orange shows the default animation - all other animations appear in grey.

At this point, you can try out the game, and see the (default) animation being applied to the model. If all has gone well, instead of the default T-pose, the character should be playing your animation:



Flip back to the Animator window, add some more animations, and right-click and drag the transitions between the animations. Click on each transition arrow to set the criteria that triggers the transition.


For example, we might decide that we have our character idling to begin with, so we right-click and make our idling animation the default. We might then decide that should the player's speed increase beyond zero, the character should transition from idling to walking.


Any transition defaults to blending from one animation to the other once the first has finished playing. The exit time setting controls how far through the first animation the blend is allowed to begin.

We do this by creating a "one-way" transition from idling to walking, and set the parameter "speed" to "greater than zero". This means that as soon as our player speed is positive, Unity will gently blend the idling animation into the walk cycle animation. There's no need to do anything other than create this relationship - Unity takes care of making one animation transition smoothly into the other, without any nasty jumping or flailing limbs.


But now our character just walks and walks and keeps on walking. We need a way of getting him to stop, when his speed reaches zero again. This means we need to create a second transition - only this time the "direction" goes from the walk cycle to the idling animation; and this time we set the criteria to "speed equals zero".

That's it.

Now, whenever the player speed is non-zero (and positive) and the character is displaying the idle animation, Unity will ease the character into the walk cycle. And whenever the character is running the walk animation, and the player speed drops to zero, Unity will ease the character into the standing-still-and-idling animation.

All with a bit of dragging and dropping, and not a single line of code!
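(Well, almost none - once the game proper is running, something will eventually have to drive that "speed" parameter from script. Based on the standard Animator API, it should be as simple as the sketch below; playerSpeed here is just our made-up stand-in for wherever your movement code keeps the current speed.)

#pragma strict

var anim:Animator;
var playerSpeed:float = 0.0; // made-up stand-in: your movement code would set this

function Start () {
     anim = GetComponent(Animator);
}

function Update () {
     // feed the "speed" parameter that our two transitions are watching
     anim.SetFloat("speed", playerSpeed);
}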


Getting started with Unity 3D

Two weeks ago, Steve demonstrated an awesome project he's been working on for a little while. It's a game controller that appears as a regular keyboard and mouse combination. Which means you can plug it into the micro-USB port on an Android phone and use it for just about any game that supports keyboard-and-mouse navigation. Perfect for first-person shooter (FPS) type games.

Steve also has a version of the impressive "Google Cardboard" VR headset, which uses stereoscopy to convert a single flat screen image into "two screens" to create a full-depth virtual reality environment.


Not many old first-person-shooters use this technology - so Steve set about creating his own!
It took less than a week to get a workable demo up and running; mostly using pre-built assets and a bit of SteveCode to glue it all together. But still, from nothing to a working 3D FPS in under a week is pretty impressive.


Since the last video game to really make an impression on us was Head over Heels on the ZX Spectrum in about 1988 (other than the point-and-click PC classic Day of The Tentacle perhaps) we've not really been much bothered by first-person 3D shoot-em-ups.

But Unity is more than just a 3D engine. It's just a shame that most budding game developers use it for that! It's a full .NET-supporting, multi-platform, mobile-friendly development toolkit, targeting Windows, Mac OS and Linux. That's right - Windows, Mac, iOS, Android, Linux - even Xbox, PlayStation and Wii can all be targeted!

And it supports both C# and Javascript as a programming language (plus something called Boo, which we've never even heard of). Suddenly Unity is starting to look like a viable development toolkit - not just for 3d games, but for all kinds of things, on all kinds of platforms. How very exciting!

So, after a PC upgrade (Unity runs on a 1.5GHz laptop with 2GB of RAM, but it is a bit slow) we finally installed Unity on our quad-core 3GHz machine with a massive 6GB of RAM and 1GB graphics card, with dual monitor support.


Such a programming environment is a dream to work in! Plenty of desktop space for docking windows, moving assets around, organising work and so on. How we managed on the tiny 17" laptop screen for this long is a mystery!

It's surprisingly easy to get something very impressive up and running, very quickly using Unity. There are loads of tutorials all over the 'net so we're not going to go into too much detail here. Of course you can create your own 3D models, rig them, animate them and import them into Unity. But you can also just download some ready-to-go models straight from the Asset Store and drop them into your project and use them straight away!

So that's the reason for the lack of updates over the last few weeks - it's not very interesting to keep reading "tried this in Unity but had no idea what I was doing, so it doesn't work". And it's not very interesting to keep writing stuff like that either. But now a few of us have started to get a bit more involved with Unity, there'll be plenty of blog posts recording the ongoing development....




Monday, 23 February 2015

Sending MIDI messages from our MIDI keyboard

Originally we hadn't planned on creating a complete MIDI keyboard. The idea was to simply modify an existing (working) keyboard and add some lights to it. The Oregon keyboard we eventually went with just looked so cool, we couldn't pass it up - even if it meant buying it "sold as seen".

And although it was seen with all the lights working, ultimately it turned out that it was "sold as broken". So we've had to gut the thing completely and build our own controller for it, as well as add in our light-up keys.

This does have a few benefits (despite meaning loads more work) - firstly, we'll be able to use any MIDI soundbank we like (the keyboard was first built in the mid-80s and some of the synthesized sounds were... erm, a bit sketchy to say the least!). It also means we've about a million unused buttons and sliders all over the original casing, which we can re-use for any purpose we like.

Luckily, the original device used a relatively simple 8-bit multiplexing method of reading the entire keypad - or at least, if it didn't, the hardware suggests that it did something very similar.



We can read each bank of eight keys by sending their common line low and detecting which of our input pins has gone low (indicating that a key has been pressed). Then we send the common line of the next bank of keys low, and detect which of those keys have been pressed by monitoring the same set of 8 input pins. The reason we think this is how the keyboard originally worked is because every single pad on the breakout circuit board has been connected with a diode, which isolates each bank of eight keys and only allows the current to flow "from" the keypad connector "towards" the common rail - the ideal set-up if you're using pull-up resistors on your input pins, and shorting the keys to ground when they are pressed.
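In very rough pseudocode (written in the same UnityScript-ish syntax as the rest of the blog, though the real firmware will be in whatever the microcontroller toolchain uses), one full scan of the keypad looks something like this - selectBank() and readInputPins() are made-up stand-ins for the actual port-twiddling:

// scan all seven banks of eight keys into currKeys[]
var currKeys:int[] = new int[7];

function scanKeypad(){
     for (var bank:int = 0; bank < 7; bank++){
          selectBank(bank);               // made-up helper: pull this bank's common line low
          var pins:int = readInputPins(); // made-up helper: read the eight input pins
          currKeys[bank] = ~pins & 0xFF;  // pressed keys read low, so invert: 1 = pressed
     }
}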

So now we can detect keypresses, we need to monitor (and remember) the state of each key as it is both pressed and released.

Voice Message             Status Byte   Data Byte 1      Data Byte 2
Note off                  8x            Key number       Note Off velocity
Note on                   9x            Key number       Note On velocity
Polyphonic Key Pressure   Ax            Key number       Pressure amount
Control change            Bx            Control number   Control value
Program change            Cx            Program number   -
Channel pressure          Dx            Pressure value   -
Pitch bend                Ex            MSB              LSB


We're going to be focussing on note on and off signals (our hardware isn't really set up for velocity, so we'll just set this to "full whack" - or maybe make it an editable option). We may use control and program change messages for some of the re-purposed buttons, but for now let's concentrate on getting the keys working: there's no point having a MIDI keyboard that doesn't detect and respond to keypresses!

Our 49 keys mean storing the current state of each key across seven bytes. We're going to imagine these bytes not as groups of eight bits, but as one big, massive 56-bit value. Simply put, each time a key is pressed, we set the corresponding bit in this big long number to one, and when it is released, we clear the corresponding bit back to zero.

Each time we read the input port, we compare the value on the input pins to the corresponding byte in this big long number (since the keys are read in groups of eight also). If they are the same, then nothing has changed.

If they are different, we look to see whether a bit value has gone from zero to one (a key has been pressed) or a bit has gone from one to zero (a key has been released). We can then generate and send the appropriate MIDI signal.



When we start, all our values are zero (all the keys are released). Let's say we hit a C-major chord. The first byte is now 10001001 and the second byte is 00001000 (if we do this higher up the keyboard, it may be the third and fourth bytes, or the same pattern may be split over bytes two and three, but you get the idea).

To detect keypresses we do this: Take the previous pattern and XOR the current key pattern over it.
This gives us all the keys that have recently changed - note, this isn't necessarily the keys that have just been pressed, as we'll soon see.
So our original pattern of

00000000 00000000 XOR
10001001 00001000 =
===============
10001001 00001000

While this does correspond to all the keys that we've just pressed, it's important to understand that this is a pattern of keys that have changed. The next time we scan the keypad, we get

10001001 00001000 XOR (previous pattern of keys pressed)
10001001 00001000 = (current pattern of keys pressed)
===============
00000000 00000000 (nothing has changed).

Now let's release the keys and see what happens:
10001001 00001000 XOR (previous pattern of keys pressed)

00000000 00000000 = (current pattern of keys pressed)
===============
10001001 00001000 (the keys that make up the C major chord have changed)

So we're able to detect which keys have changed - now we need to work out whether each key has been pressed or released. We need to do this in two stages - first work out which keys have been released, then work out which keys have been pressed:

Take the pattern of keys that have changed and bit-wise AND with the current (input) pattern.

Any change from nothing to C-major to nothing again isn't going to demonstrate things as nicely as changing from one chord to another, so here goes - we're going from C to Cm7 (hey, maybe this is some quirky jazz number or something?)



We're holding down a C major chord, which we represent (in binary) as 1000100100001000
Now let's move to C minor 7th, which is represented as 1001000100100000

First, get the keys that have changed by bit-wise XOR-ing the two values:

1000100100001000 XOR (before)
1001000100100000 = (after)
===============
0001100000101000

The changed pattern shows not only the keys that have been released, but also the keys that have been pressed. So let's get the keys that have been released first (assuming you lift your fingers off the keys before placing them in new locations).

Take the resultant "changes" pattern and bitwise AND it with the previous pattern

0001100000101000 AND (result)
1000100100001000 = (previous pattern)
==============
0000100000001000

And if we compare this back to the keyboard, we can see that we've identified



And it turns out that these are the two keys that we've lifted off. So far so good. Now let's work out which keys have just been pressed. To do this we bitwise AND the current (input) pattern with the resulting pattern of changes.

0001000000100000 AND (result)
1001000100100000 = (new/current pattern)
===============
0001000000100000

And if we compare this back to the keyboard, we've identified


and it just so happens that these are the two keys we've just pressed/added to the previous chord. The notes that didn't change (the C and the G) don't appear anywhere, because they didn't actually change.

So there we have it - we create a pattern to find out which keys have changed by XOR-ing the current pattern with the previous pattern.

To find out which keys have been released, we AND this result with the previous pattern. To find out which keys have been pressed, we AND this result with the current pattern.
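As a sketch (same caveats as the scan routine above - this is the logic, not the finished firmware), with sendNoteOn() and sendNoteOff() as made-up placeholders for the MIDI output we're about to describe:

var prevKeys:int[] = new int[7]; // the patterns we stored on the previous scan

function processBank(bank:int){
     var changed:int  = prevKeys[bank] ^ currKeys[bank]; // XOR: every key that changed state
     var released:int = changed & prevKeys[bank];        // was down before, so just released
     var pressed:int  = changed & currKeys[bank];        // is down now, so just pressed

     for (var bit:int = 0; bit < 8; bit++){
          var key:int = (bank * 8) + bit;
          if ((released & (1 << bit)) != 0){ sendNoteOff(key); } // made-up helper
          if ((pressed  & (1 << bit)) != 0){ sendNoteOn(key); }  // made-up helper
     }
     prevKeys[bank] = currKeys[bank]; // remember for next time round
}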

Now that we can detect keys being pressed and released, we need to send our MIDI messages to the synthesizer/MIDI sequencer. We'll add some clever stuff in later, to change which channel we're sending data on, so for now, we'll assume channel one (but it could be any channel from zero to 15).

Most ad-hoc MIDI messages are made up of just a simple three byte packet. The upper half of the first byte is 8 for a note off message (1000----). The lower half of the byte is channel number (we're using 1 for now ----0001). So our first byte, for any key release will start with 0x81 (or 10000001 in binary).

The second byte is the note value, in the range 21-108, where 21 is the low A0 (27.5Hz) and 108 is C8 (4186Hz). Concert pitch A (the note most instruments use as a reference for tuning, at 440Hz) is MIDI note number 69 (decimal; 0x45 in hex).

A quick note on MIDI values - all values (except for the first status byte) must remain in the range 0-127. This allows MIDI devices to detect the beginning of a MIDI message - the status byte should be the only byte in the entire packet that has its MSB set to one; all the data bytes contained within the message must have their MSB cleared to zero.

The final byte in a MIDI message is the velocity (how hard you hit the note). We're going to use a single global variable for this - so the same velocity applies to every key - and for now we'll just set it to maximum (0x7F - note that the leading MSB is cleared).

That's as simple as our MIDI messaging needs to be.

For key presses, we do exactly the same, except the first byte will begin 0x91 (the upper half of the first byte set to 0x09 indicates a note on, whereas 0x08 indicates note off).
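Putting a whole packet together is then just a few lines. Again, this is a sketch rather than the finished firmware - the key-to-note offset is an assumption, since it depends on which note our lowest key actually is:

// build the three-byte message; noteOn=true gives 9x (note on), false gives 8x (note off)
function buildNoteMessage(noteOn:boolean, key:int, velocity:int):int[] {
     var msg:int[] = new int[3];
     msg[0] = (noteOn ? 0x90 : 0x80) | 0x01; // status byte: message type plus our channel nibble (1)
     msg[1] = (key + 36) & 0x7F;             // assumes key 0 is C2 (MIDI note 36) - adjust to suit
     msg[2] = velocity & 0x7F;               // data bytes always have the MSB clear
     return msg;
}

So, for example, note on for middle C (note 60, 0x3C) at full velocity comes out as 0x91 0x3C 0x7F.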

We've a whole load of buttons still to mount into the enclosure when the time comes, which means we could have control change messages and system messages and all that kind of stuff floating around - but we're going to stick to simple note on and note off messages for now.



Tuesday, 17 February 2015

MIDI keyboard - lights test

It all started this evening, with a simple lights test, animating through a full octave on the keyboard.



Wiring individual LEDs to the MAX7219 wasn't quite as straight-forward as wiring an 8x8 LED matrix (though quite why, we're still not sure). But, after a bit of fiddling about and debugging by plugging just one set of wires in at a time, we eventually got some working firmware for our light-up keyboard.




It's all pretty encouraging - dial in a chord or scale and all the available notes correctly light up! There are a few modes of operation. At the minute we're using a rotary dial to select everything, but in time, we'll use the second keypad to select the current key/scale, and plenty of different buttons to quickly and easily change between minor, major, full scale, chords etc.

In "chord" mode, it's pretty obvious what's happening: the chord shape is repeated in two places on the keyboard (originally we were going to just display it once, but then thought we've no idea whether the player would want to use their left hand or their right hand to play the chord - so put a chord shape under each!).

In "show full chord" mode, all of the notes that make up the select chord appear across the entire keyboard. This is  a bit like showing a scale, but not quite. For example, a C major chord (based on the triad of C-E-G) is very different to the C major scale (consisting of the seven notes C-D-E-F-G-A-B). So what's the point?

Chords/triads have lots of different inversions.
A C major chord doesn't have to start with the lowest note on C.
You can play a C major chord as G-C-E. This is known as the "second inversion" (the first inversion is played E-G-C). So by displaying all the notes of the chord and repeating them multiple times over, you can simply play any consecutive three notes and you'll get a C major - even if you're not starting on the root note C.

Full scale mode is pretty self-explanatory: select a scale and the entire scale is displayed across the whole keyboard. This should make it easy for players to jam along, much like a guitarist might do when playing a solo. If you know the song is, for example, a shuffle/blues in A, there's no reason why you can't dial in "A pentatonic", pick pretty well any notes out of the scale and produce something vaguely melodic (that's how a lot of guitar players do it anyway!)
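(For the curious, the "light up everything in the chord or scale" logic boils down to a 12-bit pitch-class mask, repeated up the keyboard. A rough sketch, assuming the lowest key is a C, with setKeyLED() as a made-up stand-in for the MAX7219 driver code:)

// C major triad = pitch classes C, E and G = bits 0, 4 and 7
var chordMask:int = (1 << 0) | (1 << 4) | (1 << 7);

function updateLights(mask:int){
     for (var key:int = 0; key < 49; key++){
          // light the key if its pitch class appears anywhere in the mask
          var lit:boolean = ((mask >> (key % 12)) & 1) == 1;
          setKeyLED(key, lit); // made-up helper: drives the MAX7219 outputs
     }
}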