Monday 31 August 2015

Static grass applicator from a bug zapper

Static grass is a method of applying little strands of nylon to a board, to give the appearance of a field or grassy area. The nylon strands are given a static charge before applying, to encourage them to stand up on end, rather than lie flat in a straggly, messy way.

There are plenty of static grass applicators on the 'net and, priced at about £20-£25, they're not exactly going to break the bank. But there's something a bit "suspect" about them.


The idea is that you fill the tea strainer with static grass, connect the other, exposed wire to the board you're applying grass to (usually by driving a small nail or pin into a patch of wet PVA glue), switch on and give it a shake.



The high-voltage potential difference between the (metal) tea strainer and the wet area of (negatively charged) glue creates a static effect, causing the nylon strands (static grass) to stand on end.


It's a relatively simple job, and a relatively simple tool. But there's something about that applicator that looks a bit familiar....


In fact, many people have indeed made their own static grass applicators from a metal sieve and a bug zapper. So we hit the local Aldi and picked up both for less than a fiver!



Opening up the zapper reveals a relatively simple circuit, containing a transformer, a transistor and a few capacitors. One side of the transformer's output coil is connected to the outer mesh of the bat and the other side to an inner mesh. The idea is that a bug getting caught between the two meshes is instantly "zapped".


Whenever we see a power source with capacitors and a transistor, we're already thinking "oscillator". The transformer is a twin-coil affair with a massive step-up ratio. The remaining capacitors create a "charge pump" effect, allowing a large voltage to be generated from a relatively low-voltage source (2 x AA batteries).

In fact, this particular bug zapper creates a pulse of about 800V. From experience, we quickly learned that you don't really want to go touching the two output wires while the device is being powered!

Looking at some of the static grass applicators online, it's obvious that some of them are little more than a "hacked-up" bug zapper - with the zapper meshes removed and a metal sieve connected to one side of the transformer, leaving the other side free to connect to a pin or small nail. It took us less than five minutes to get a similar thing soldered up and re-assembled, giving us a static grass applicator for less than five pounds and about ten minutes' work.

So how does it perform?
Here are the results.


It's not bad. But some of the grass looks like it's "wilted over" a little bit. Since the circuit is simply a step-up transformer and a few caps, and given that we're pulsing the output rather than leaving it on continuously, we thought we'd push it a little bit and see if it could cope with a higher input voltage (the idea being that we'd get a higher output voltage, and thus increase the static effect). Instead of two AA batteries, giving us 3V, we stuck a PP3 9V battery on the supply.

The results:




Multiplexing an array of hall sensor inputs

We've tried a few times to multiplex an array of hall sensors, and each time ended up getting a load of random garbage as input data. Power up a hall sensor, leave it with a permanent power supply and read back its value, and everything works fine.

But try to multiplex the inputs (using the traditional rows and columns approach) and things start to go a bit awry. The sensors give the impression of having floating pins on the inputs - sometimes they're activated and sometimes not, whether or not a magnet is over the sensor.

That's why we'd invested so much time and effort into creating daisy-chained hall-sensors-and-shift-register inputs. These work really well. But the actual assembly of them is pretty time consuming. They're also fairly expensive to manufacture, since each panel of 12x12 squares requires no less than 18 dedicated PCBs (each with 8 resistors, a couple of capacitors and a hall sensor).

So the other day, we returned to the copper-tape-and-multiplexing idea - just to give it one last shot.



The main difference this time is that, instead of trying to source the current for the hall sensors directly from the i/o pins on the mcu, we're going to sink it - every hall sensor gets a permanent 5v supply, and we toggle the ground line on each one to turn it on and off.

Sadly, this made little difference - except that the hall sensors remained resolutely unresponsive. So, with one last throw of the dice, we decided to try switching the ground lines; not directly through the i/o pins, but each "row" through a small N-channel MOSFET (I think we used 2N7000s, but they might have been BS170s).

This time - amazingly - the multiplexing worked.
Well, it worked with a 4x2 matrix of hall sensors.
But we coded things up to accommodate a 12x12 matrix, and the pins that actually had hall sensors attached still reported back the correct values, even when we multiplexed with a 5ms delay between polling each row on the board (the delay is to allow the hall sensors time to settle and give a stable reading).

With 12 rows and a 5ms delay between reading each row, we've a latency of about 60ms. For an input device that involves picking up a piece and putting it back down again, 1/16th of a second response time is pretty good. There's probably more latency between the board controller, the ethernet/wifi module and the app running on the PC/smart device than there is between polling the rows and columns of a multiplexed array!
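
To make the approach a bit more concrete, here's roughly what the scanning loop looks like. This is a minimal, hypothetical Arduino-style sketch rather than our actual firmware - the pin numbers, array sizes and the assumption of open-collector sensor outputs with internal pull-ups are all placeholders:

#define ROW_COUNT 12
#define COL_COUNT 12

// gates of the row-switching MOSFETs (one per row of sensors)
const int rowPin[ROW_COUNT] = { 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 };
// hall sensor outputs, one column line per column of sensors
const int colPin[COL_COUNT] = { 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33 };

byte state[ROW_COUNT][COL_COUNT];            // 1 = magnet present, 0 = no magnet

void setup() {
     for (int r = 0; r < ROW_COUNT; r++) {
          pinMode(rowPin[r], OUTPUT);
          digitalWrite(rowPin[r], LOW);      // MOSFET off = row disconnected from ground
     }
     for (int c = 0; c < COL_COUNT; c++) {
          pinMode(colPin[c], INPUT_PULLUP);  // sensor output pulls low when a magnet is present
     }
}

void loop() {
     for (int r = 0; r < ROW_COUNT; r++) {
          digitalWrite(rowPin[r], HIGH);     // connect this row of sensors to ground (turn them on)
          delay(5);                          // give the sensors time to settle before reading
          for (int c = 0; c < COL_COUNT; c++) {
               state[r][c] = (digitalRead(colPin[c]) == LOW) ? 1 : 0;
          }
          digitalWrite(rowPin[r], LOW);      // disconnect the row again
     }
     // 12 rows x 5ms settle time = roughly 60ms to refresh the whole board
}

The important bits are that every sensor keeps its permanent 5v supply (hard-wired, so it doesn't appear in the code), each row is only "live" while its ground-side MOSFET is switched on, and the 5ms settle delay happens before the column lines are read.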

The multiplexing approach means a number of changes, even at this late stage, to our game idea. Firstly, the cost is slightly reduced (since we don't need all those pcbs, shift registers and so on), but each panel now needs an mcu with at least 25 i/o lines (12 pins for rows, 12 pins for columns and one dedicated pin for serial/UART communication) - so we're losing 18 x 7p shift registers, but now require a slightly more expensive mcu with more pins! Although we would no longer need to get 18 pcbs manufactured.

It also means assembly time should be reduced a little - although preparing the board (with strips of copper tape) can take a little while, it's still probably less than hand-soldering 18 x 5-way wire connectors for the pcbs carrying the shift registers and hall sensors.

There's only one sure-fire way to find out - and that's to make up a few panels, using all the different techniques we now have available, and see which
  • works best
  • is quickest
  • works out cheapest
Like every electronics-based project, the trick is to find the best compromise between all three!

Friday 14 August 2015

I won an Orc!

For a little while now, we've been focussing quite heavily on manufacturing processes for turning our electronic board game idea into a product, including PCB populating, CNC routing and Unity game development.

One thing that has been ignored a little bit of late is miniature painting.
There are a few people on Patreon that I support, throwing a few quid into a pot each month, to help hobbyists pay for paint, materials and so on (if you're not already a Patreon supporter, why not check them out and bung a few quid/dollars to someone you like the look of? Every penny helps, and it's always nice to think you're keeping your hobby community alive, whatever it may be!)

Anyway, I'm currently supporting:



They get just a few dollars a month from me, but some of the guys have enough supporters/patrons to be able to pay for (and demonstrate, make, paint and give away) some really cool stuff. (I think The Painting Buddha runs his operation as a full-time living, as he sells instructional DVDs and has a cool workshop in Berlin that I didn't quite manage to get to last time I was over there).

Anyway, Alan (The Apathetic Fish) recently ran a random prize draw.
And I won it!
And, about three weeks after getting the notification, I'd forgotten all about it. So it was a pleasant surprise when this little chap turned up in the post:





Ok, so the bright, harsh flash from the phone camera doesn't do the paint job any favours, but it's a great looking mini. The highlighting is bright enough to be noticeable without being distracting and the details are picked out with real precision.

Now I just need to either find and paint another half-a-dozen Orcs to include them in a sci-fi shooty game, or enter about a thousand other raffles and hope to win some more....



Tuesday 11 August 2015

Creating electronic board game sections

Having sorted out the electronics for our electronic board game panels, we've tried a number of ways of building the hardware to embed them in. After lots of aborted attempts (using mixtures of card, foamex/foamboard and laser cut acrylic) we've settled on routed MDF.

Creating a consistent template for routing the channels, such that the pre-built PCBs will fit perfectly into the grooves, was a real pain, so we've come up with a simple router jig, made from offcuts of wood and scrap metal runners.


The rails are placed parallel to each other, exactly 420mm apart (in truth, we used a piece of 420mm square-cut wood to line up the rails, only to find they're 420mm apart at one end and about 418mm apart at the other, occasionally binding on the wood as it passes through!). But it's good enough for a first try.

The base is marked at 35mm intervals, so that we simply push our board through, under the router, then line up the leading edge with the marks. Set the router head spinning, plunge, then run the router from left to right. With the 420mm board divided into 35mm squares for our 12x12 grid, and a 6mm cutting head set to a 3mm plunge depth, this gives us 12 channels, each 6mm wide and 3mm deep, for our hall sensors to sit in.


Then we rotate the wood through 90 degrees and use a 16mm cutting head to create wider grooves for the PCBs at 70mm apart (though in the image above, our first board was cut exclusively with a 6mm bit, making the wider grooves with multiple offset passes).


We made a bit of a mess of the first offset (at both ends) for the PCB channel - but once we'd worked it out, the rest of the channels were cut quickly and easily.

The end result is a piece of MDF into which we can easily drop our pre-populated circuit boards. In practice, it may be easier to connect the wires between the boards before fitting (as we did with our earlier jig-based soldering) but we're still ironing out the exact manufacturing process!

The nice thing about this process is that the boards can all be routed independently of the circuit boards, which are then fitted quickly and easily into the grooves. All that then remains is to fit the main controller (used to connect multiple boards to each other) to the underside, and a thin cardboard layer to the top, to cover the currently exposed circuit boards.


Sunday 9 August 2015

Playing video with alpha transparency in Unity 5 using a custom shader

Playing videos in Unity has become much easier thanks to version 5 - even in the "personal" edition, you can apply a video as a texture/material to just about any surface (prior to v5 you had to have the "professional" version of Unity to play videos).

Having given up on Flash, we're looking to Unity for our game-writing needs, and that includes playing video with a transparent background. Given that it's 2015, surely Unity is capable of playing videos with a transparent background?

Well, it turns out not. Or at least, not easily.
Unity uses some variant of the Ogg format for video (the Theora codec, apparently) and, bizarrely, Quicktime to handle importing videos into the editor. And since the Theora codec doesn't support video alpha transparency, for a while it looked like we'd hit yet another brick wall. For a while at least.

A quick check on the Unity Asset store reveals a number of custom shaders and scripts that can apply chroma keying at runtime. So instead of using a video with an alpha channel, we should be able to simply use a video with a flat, matte green background, and key out the green on-the-fly.

One such off-the-shelf plugin is the Universal Chroma Key shader. But it doesn't work with Unity 5 (and we're still waiting for our refund, Rodrigo!). One that does work is the slightly more expensive UChroma Key by CatsPawsGames. It's easy enough to set up and use, and gives a half-decent result. It's good.

But not great.



After removing the green background, we're still left with a green "halo" around our main character - which is particularly noticeable when the video is actually playing. Some frames suffer more than others, but the result is a discernible green haze around our hero character. The only way to remove the green completely is to make the keying so "aggressive" that part of the main character starts to disappear too. And, obviously, this is no good for what we're after!

We spent quite a bit on different plug-ins from the online store, but UChroma gave the best results. And even that wasn't quite good enough for what we're after.

Just as we were about to give up on Unity, Scropp (Stephen Cropp) came up with a crazy idea - write our own shader. The reason it was so crazy? Well, some of us couldn't tell a texture from a material, let alone even understand what a shader is or does. Luckily, Stephen had a basic idea of how it all worked. And there were plenty of examples online about creating your own custom shader.

Our thinking was to create a shader which would accept not one, but two, material/textures. The first would be the footage we wanted to play, and the second would be a second movie texture, containing just the alpha data for the first movie. It seemed like it might be worth investigating.

We know that Unity doesn't support videos with alpha channels, but the idea of splitting it over two layers - one containing the footage, and one containing the alpha mask as a greyscale sequence seemed like a sensible one. So that's exactly what we did!

After creating our video footage with a green background and just the RGB (no alpha) channels -


- we exported the same footage from After Effects, only this time, exporting only the alpha channel (to a video format that also only supported RGB, no alpha)


Now it was simply a case of applying our footage video to one plane/quad and the alpha movie to another. Then simply play both movies together, and write a shader which took the alpha from one movie and applied it to the other. So far, so easy. One thing to look out for is that a movie texture doesn't automatically begin playing after being applied to a plane/quad. You need a simple script to kick it into life.

We simply created a "play video" script and dropped it onto each of our video containers. When we hit the "play" button in Unity, both videos started playing automatically:

using UnityEngine;
using System.Collections;
[RequireComponent (typeof(AudioSource))]

public class play_video : MonoBehaviour {

     // Use this for initialization
     void Start () {
          MovieTexture movie = GetComponent<Renderer>().material.mainTexture as MovieTexture;
          movie.loop = true;
          GetComponent<AudioSource>().clip = movie.audioClip;
          GetComponent<AudioSource>().Play ();
          movie.Play ();
     }
     
     // Update is called once per frame
     void Update () {
     
     }
}


The video player script should be pretty self-explanatory: simply grab the MovieTexture from the object's material, point the AudioSource at its audio clip, and set them both playing. It's that easy! The only stumbling block now is the "write your own custom shader" idea.

Let's start with the simplest of shaders - an unlit, video player shader.
Even if you don't fully understand the "shader language" (as we still don't) it is possible to vaguely follow what's going on in the shader, with a bit of guesswork:

Shader "Custom/alpha1" {

     Properties{
          _MainTex("Color (RGB)", 2D) = "white"
     }

     SubShader{
          Tags{ "Queue" = "Transparent" "RenderType" = "Transparent" }

          CGPROGRAM
          #pragma surface surf NoLighting alpha

          fixed4 LightingNoLighting(SurfaceOutput s, fixed3 lightDir, fixed atten) {
               fixed4 c;
               c.rgb = s.Albedo;
               c.a = s.Alpha;
               return c;
          }

          struct Input {
               float2 uv_MainTex;
          };

          sampler2D _MainTex;
          void surf(Input IN, inout SurfaceOutput o) {
               o.Emission = tex2D(_MainTex, IN.uv_MainTex).rgb;
               o.Alpha = 1;
          }

          ENDCG
     }

}


So what's going on with this shader?
The first thing to notice is the #pragma line. This tells Unity we've created a surface shader: each pixel is passed through the "surf" (surface render) function and our custom "NoLighting" (lighting render) function, and the "alpha" parameter tells the shader that not all pixels are fully opaque (apparently this can make a difference to the drawing order of the object, but we've also taken care of that with our "Tags" line).

Our nolighting function simply says "irrespective of the surface type or lighting conditions, each pixel should retain its own RGB and Alpha values - nothing in this shader will affect anything in the original texture".

A shader which applies specific lighting effects would have much more going on inside this function. But we don't want any lighting effects to be applied, so we simply return whatever we're sent through this function.

The other function of note is the "surf" (surface render) function. This accepts two parameters, an input texture and a surface output object. This function is slightly different to most shaders in that we don't set the surface texture (Albedo) on our output surface, but the "emission" property.

Think of it like this - we're not lighting a texture that's "printed" onto the plane - we're treating the plane like a television; even with no external lighting, your telly displays an image - the light is emitted from the actual TV screen - it's not dependent on external lighting being reflected off it. This is the same for our plane in our Unity project. Rather than set the "texture" of the plane surface, to match the pixels of the input texture, we're emitting those same pixels from the plane surface.

This is why we have the line
o.Emission = tex2D(_MainTex, IN.uv_MainTex).rgb;
instead of
o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;

If you drop your video onto a plane and add the shader (above), you should see the video being drawn as an unlit object (unaffected by external lighting) with no shadows (since the light is emitted from the plane, not reflected off it). Perfect! We've written our first "no-light" shader!


Now the trick is to apply the alpha from the alpha material to the shader being used to play the video. That's relatively simple, but needs a bit of careful editing of our original shader:

Shader "Custom/alpha1" {

     Properties{
          _MainTex("Color (RGB)", 2D) = "white"
          _AlphaTex("Color (RGB)", 2D) = "white"
     }

     SubShader{
          Tags{ "Queue" = "Transparent" "RenderType" = "Transparent" }

          CGPROGRAM
          #pragma surface surf NoLighting alpha

          fixed4 LightingNoLighting(SurfaceOutput s, fixed3 lightDir, fixed atten) {
               fixed4 c;
               c.rgb = s.Albedo;
               c.a = s.Alpha;
               return c;
          }

          struct Input {
               float2 uv_MainTex;
               float2 uv_AlphaTex;
          };

          sampler2D _MainTex;
          sampler2D _AlphaTex;
          
          void surf(Input IN, inout SurfaceOutput o) {
               o.Emission = tex2D(_MainTex, IN.uv_MainTex).rgb;
               o.Alpha = tex2D(_AlphaTex, IN.uv_AlphaTex).rgb;
          }

          ENDCG
     }

}


The main changes here are:

The Properties block now includes two materials/textures: one for the video we want to play, and one for the alpha mask. The Input structure now includes UV coordinates for both the main texture and the alpha texture, and the final line in our "surf" (surface render) function has changed:

Instead of setting the alpha (transparency) of every pixel to one (fully opaque) we look at the pixel in the alpha texture and use this to set the transparency of the pixel. Now, instead of each pixel being fully opaque, a white pixel in the alpha texture represents fully opaque, a black pixel represents fully transparent, and a grey shade, somewhere inbetween, indicates a semi-transparent pixel.



We also need to ensure that the play_video.cs script is applied to both planes, and untick the "Mesh Renderer" component on the alpha plane (so that it does not get drawn to the screen when the game is played).


Hitting play now, we see our video footage playing, with an alpha mask removing all of the green pixels.

Well. Sort of.

After the video has looped a couple of times, it becomes obvious that the two movies are slowly drifting out of sync. After 20 seconds or more, the lag between the movies has become quite noticeable.



It looks like we're almost there - except it needs a bit of tweaking. Having our video and alpha footage on two separate movie clips is obviously causing problems (and probably putting quite a load on our GPU). What we need is a way of ensuring that the alpha and movie footage is always perfectly in sync.

What we need is to place the video and alpha footage in the same movie file!

After a bit of fiddling about with our two movies in After Effects, that's exactly what we've got....

And now we need yet another amendment to our shader. We lose the separate alpha texture (since it's now part of our main texture), which actually simplifies the script a little bit. Instead of sampling a separate texture for the alpha channel, we sample the main texture shifted half-way along the y-axis.



Here's the code for the updated shader:

Shader "Custom/alpha1" {

     Properties{
          _MainTex("Color (RGB)", 2D) = "white"
     }

     SubShader{
          Tags{ "Queue" = "Transparent" "RenderType" = "Transparent" }

          CGPROGRAM
          #pragma surface surf NoLighting alpha

          fixed4 LightingNoLighting(SurfaceOutput s, fixed3 lightDir, fixed atten) {
               fixed4 c;
               c.rgb = s.Albedo;
               c.a = s.Alpha;
               return c;
          }

          struct Input {
               float2 uv_MainTex;
          };

          sampler2D _MainTex;

          void surf(Input IN, inout SurfaceOutput o) {
               o.Emission = tex2D(_MainTex, IN.uv_MainTex).rgb;
               o.Alpha = tex2D(_MainTex, float2(IN.uv_MainTex.x, IN.uv_MainTex.y-0.5)).rgb;
          }

          ENDCG
     }

}


Now we're almost there.


Gone are the nasty out-of-sync issues, and the alpha mask applies perfectly on every frame of the video. Underneath the video, though, we can still see our alpha mask, drawn wherever the video footage has been drawn. We think this is because, where no more mask exists, the shader re-uses the previous "line" of mask and continues to draw in the same place as the previous "scan line".

This could easily be fixed by drawing a single horizontal black line along the bottom of our alpha mask footage (so the final scan line of the mask is 100% transparent, along its full length, and this would be applied to every pixel below our main character).

Another approach might be a simple "if" statement:

if(IN.uv_MainTex.y < 0.5){
     o.Alpha=0;
}else{
     o.Alpha = tex2D(_MainTex, float2(IN.uv_MainTex.x, IN.uv_MainTex.y-0.5)).rgb;
}


which yields the following result:


Well, it looks like we're nearly there.
There's still an occasional slight tinge around the character, particularly where there's a lot of fast motion (or motion blur) in the footage. If you look closely under the arm supporting the gun in the image above, you can see a slight green tinge. But in the main, it's pretty good.

There are a couple of ways of getting rid of this extra green:

The most obvious one is to simply remove the green from the video footage. Since we're no longer using this colour as a runtime chroma-key, there's no need for the background to be green - it could be black, or white, or some grey shade, somewhere inbetween (grey is less conspicuous as an outline colour than either harsh black or harsh white, depending on the type of background to be displayed immediately behind the video footage).

The other way of removing this one-pixel wide halo is to reduce the alpha mask by three pixels, then "blur" it back out by two pixels. This creates a much softer edge all the way around our main character (helping them to blend in with any background image) and removes the final pixel all the way around the alpha mask.

We had settled on a combination of both of these techniques when Scropp came up with the "proper" way to fix it: "clamp green to red" rendering. What this basically means is that for each pixel, compare the green and red channels of the image, and if the green value exceeds the red, limit it to be not more than the red channel.

If you think about it, this means that any pure green colour will be effectively reduced back to black. But any very pale green tint will also be reduced a tiny bit. Pure white, however, would not be affected, since it has the same amount of red as it does green.

Any pure green colours get blended to black. Very light green shades are blended more to yellow (when mixing RGB light, equal parts red and green create yellow) or pale blue (a turquoise colour of equal parts blue and green with very little red would be blended more to a blue colour as the green element is reduced). But for the colour to be a very light shade, it has to have lots of red, green and blue in it, so the amount of colour adjusted is actually pretty small.

So our final shader code looks like this:

Shader "Custom/alpha1" {

     Properties{
          _MainTex("Color (RGB)", 2D) = "white"
     }

     SubShader{
          Tags{ "Queue" = "Transparent" "RenderType" = "Transparent" }

          CGPROGRAM
          #pragma surface surf NoLighting alpha

          fixed4 LightingNoLighting(SurfaceOutput s, fixed3 lightDir, fixed atten) {
               fixed4 c;
               c.rgb = s.Albedo;
               c.a = s.Alpha;
               return c;
          }

          struct Input {
               float2 uv_MainTex;
          };

          sampler2D _MainTex;

          void surf(Input IN, inout SurfaceOutput o) {
               // where a pixel is "greener" than it is red, treat it as green spill
               // and knock it back to its red value (so pure green goes to black)
               if( tex2D(_MainTex, IN.uv_MainTex).g > tex2D(_MainTex, IN.uv_MainTex).r ){
                    o.Emission = tex2D(_MainTex, IN.uv_MainTex).r;
               }else{
                    o.Emission = tex2D(_MainTex, IN.uv_MainTex).rgb;
               }

               // the bottom half of the stacked movie is the alpha mask, so hide it;
               // everywhere else, take the alpha from the corresponding mask pixel
               if(IN.uv_MainTex.y < 0.5){
                    o.Alpha = 0;
               }else{
                    o.Alpha = tex2D(_MainTex, float2(IN.uv_MainTex.x, IN.uv_MainTex.y-0.5)).rgb;
               }
          }

          ENDCG
     }

}


And the result looks something like this:


Bang and the nasty green halo effect is gone!


The best bit about Stephen's approach is that as well as getting rid of the green spill that creates the green halo effect around our main character, it also helps reduce any obvious green areas on the skin and clothes, where the greenscreen might have been reflected onto them.

So there we have it - from three days ago, with no obvious way of being able to play a transparent video against a programmable background, to a custom Unity shader that does exactly the job for us! Things are starting to look quite promising for our zombie board/video game mashup....


What's the point of Adobe Flash now?

Don't get me wrong. I love Flash. I've used it since about version 4 and have created some pretty crazy stuff with it over the years. Back in version 5 I was re-creating pseudo-3d isometric puzzle games and as soon as xml sockets were supported, realtime multiplayer maze games (like Rise of the Triad and Doom). Flash could do some amazing stuff. And it did it all in tiny web-friendly file sizes.
To top it all, Flash could be embedded into PC desktop applications, for fancy menus in "real" commercial systems - it wasn't just an animated web banner tool (though that's probably what 90% of its users were building with it).

But I tried using it recently to do some of the cool stuff I used to use Flash for, and Adobe have butchered it so badly that it's pretty much useless for any of it now.


Until quite recently, I still used it to draw simple shapes and logos to create printable vector artwork for paper documents. I love(d) the drawing tools in Flash - they were so easy and intuitive. And it used to have a massive array of different output formats.

Quite often I'd make a drawing or a diagram for something for the missus, who still uses Publisher (MS's desktop publishing software). She could position it and guarantee it would print in all its vector-y goodness, scaled and without jagged edges and so on. WMF and EMF were my favourite image formats for exporting.

Now Flash lets you export images in about three different formats, nearly all of them rasterised. The only vector option is SVG. And even then, it rasterises gradients and embeds them as shitty 72DPI images. And, quite often, it doesn't even get that right. So it's rubbish for exporting images.




I used to create web apps using Flash, and all kinds of realtime socket-based clients, as at one time it was actively installed on something like 90% of all internet-connected devices (mostly PCs), across a wide range of browsers.

Then Apple decided that Flash was poo and, in the kind of backwards step that we're still reaping the benefits of today (145Mb animated GIFs on the Google homepage over a 3G/mobile connection, anyone? Because that's better than a 30kb animated vector file, surely?), refused to support it.

There's no point getting into the whys and wherefores about why the Flash player couldn't be updated to fix whatever Apple had against it - it's a long-lost discussion - but hate it they did, and slowly Flash lost favour as a web-based development platform. It's still immensely powerful (or, at least, would be if you could get an older browser plug-in to work today) and still does stuff that today's web-based technologies can only dream of (without consuming 100% CPU and locking up the entire device for 10-20 seconds at a time!). So Flash is rubbish for web-based apps.



Just as I was about to give up on Flash completely (I remained a faithful supporter long after many other people had jumped ship to HTML5/jQuery or - shudder - Python for desktop-based apps) the first glimmer of good news came out, around about when Flash reached version CS5.5 - Flash could compile apps for native Android or iOS. Very exciting!

But Flash doesn't get access to many of the operating system functions, and only a subset of them are exposed by installing and modifying "black box" pre-compiled ANEs ("modifying" is too generous a term, since they're usually badly written, with no support or documentation to speak of).

So if you want to create an animated, interactive advert and call it an app, Flash is great to get your content quickly onto a mobile device. But if you want to create a "serious" app (and access things like peer-to-peer networking, or control the rendering for your creation to use the GPU instead of CPU to avoid draining the battery and causing the entire handset to warm up from chewing up too many CPU cycles) you're out of luck.

In fact, for anything other than pretty basic interactive screens, Flash is rubbish for mobile apps.

One thing it was good at was interactive videos.
We're making one of those (although the interactive interface is a board game, not a series of buttons on the screen). So despite a brief dalliance with Unity, we thought we'd make far faster progress with good old Flash. 

Interactive videos and Flash - like two old friends back together after a ten year break. 
Flash introduced us to .flv video files, which supported background transparency. You could simply drop a movie onto your timeline, stick a bitmap behind it and you had an interactive movie!

Flash could even do "green screening" on-the-fly. In fact, it's almost ten years since examples like this (http://www.quasimondo.com/archives/000615.php) were really capturing people's imaginations:



If your browser will still play the embedded Flash file, have a play with the sliders and see how easy it is to "key out" background colours in videos at runtime.

This is just the thing we were looking for to get our zombie game coded up quickly, to create a working demo for our zombie board game. But it seems that Flash doesn't support transparent videos any more.

In fact, Flash doesn't even support .flv files any more. Adobe's official response to anyone who wants/needs to work with videos with an alpha channel is "use an older version of the software, that's why we made it possible for you to install multiple instances of Flash on your computer".

But the last versions of Flash to properly support alpha-videos/.flv don't compile mobile apps properly any more. So there's no way of creating a mobile/tablet (or even PC) app that can play a video with a transparent background. And since the example (above) was written using a really old version of Flash, you can't even import that and use it to build some runtime-chroma-key routines in the latest version - because later versions of Flash refuse to open anything that has any reference to AS2 in it at all.

So it seems that Flash is rubbish for making interactive movies too.



And given that Adobe has a massive suite of tools for doing things like drawing and animating and web development, it begs the question - "what's the point of Adobe Flash now?"

I think I'm going to cancel my monthly subscription. I've not really used Flash in over 12 months for anything more than quick-and-dirty prototyping, and I can't see it ever being used for a commercial grade project again.

And I'm going to spend the next few days really getting to understand Unity properly, to see if it can do what I need - namely, play a video with a transparent background over a scriptable background. Fingers crossed........

Saturday 8 August 2015

Creating flat matte videos for chroma keying in Unity with Adobe After Effects

The time seems to be whizzing by lately - it only seems like yesterday we were filming our greenscreen zombie footage and it's already been nearly two weeks! Now we've a load of video footage, and plenty of work to do, to make it "game ready" (i.e. dropping out all the unusable background). We've loads of little clips, so we're starting small - but this is the main technique we used for pretty much all of them (and will continue with for future footage):

Here's a quick clip of Andrea wandering around, getting ready to shoot some zombies with his puny pistol.


As you can see, the background is hardly a perfect flat green (although it didn't quite look that bad when we were actually shooting the footage - the higher contrast of the output video makes the wrinkles in the screen really stand out!)


The footage is also "too big" - it was shot at 1920 x 1088 and included far more than we need, so we've had to crop the video and rescale it to a more manageable 720 x 405, as well as trim out the unwanted footage both before and after the action we want to capture.


After importing the video into After Effects, the final output render size was massive and there was a lot of unwanted extra footage.

We used the keyboard shortcut Ctrl + Shift + D to "splice" the video footage at the current frame. This not only creates a razor cut in the video, but splits it out onto two separate layers. Before cutting any footage, we rescaled the actual video and moved it around inside our composition window, so that only the footage we wanted was captured.


We made a cut at the start and at the end of the section of footage we wanted to keep. This split the video onto three layers, making it easy to select the bits before and after to remove them completely.

Now, we've had mixed success with the following technique when using the entire footage, so at this point we exported our video selection to a new, separate file, and imported it into a new project. As we've still plenty of work to do with this footage, we exported it as uncompressed video, without any compression or video codec. This makes massive files (100Mb for 5 seconds) but does mean fewer compression artifacts and no blockiness in the new video.

Next we imported the newly rendered file and made sure that it filled our composition size of 720 x 405 (the earlier footage was much larger than this and needed to be cropped). Then, following the instructions on the Creative COW website (https://library.creativecow.net/articles/rabinowitz_aharon/junk_mattes/video-tutorial), we made the entire background transparent. The basic steps are:

With the video footage selected in After Effects, apply the KeyLight (v1.2) chroma key effect and select a green close to the actor (or one nearest the "average" green of the background).


After applying KeyLight chroma key, we can see that there are lots of imperfections in the footage that didn't get captured by the key colour.

This takes away most of the background, but still leaves semi-transparent shadows behind. So now we need to "trace" our actor to completely isolate them from the background. We do this by selecting the video footage, then Layer - Auto-trace, tracing based on the alpha channel.


Maybe 70% is a bit much, but it worked for this particular bit of footage, so we're sticking with it - expect to use values 50%-70%.

A short while after applying the trace (it takes a few seconds to go through every frame in the video) a new layer appears with an outline around the main character. We can scrub through the footage to make sure that it captures just our main character on each frame of the video and that nothing is either truncated, or left in when it should be removed.



Next comes the clever bit - we need to expand the outline a few pixels away from the main character, to create a green "halo" around the character. So we select the traced layer and then Effect - Matte - Simple Choker. Setting the choke to a negative value expands the matte away from the main character; we went with something in the region of -25.


Depending on your version of After Effects, you may need to locate and click the "Toggle Switches/Modes" button to enable the "track matte" feature on the original footage. Set this track matte setting to the traced layer (immediately above it).


What's the point of doing all this? Well, what it means is we're only using a traced area of the original footage now, for the chroma key - we've got rid of most of the (wrinkly, unevenly lit) green background, and we're only actually keying the "green halo" around our main character. It basically means we need a much less "harsh" keying setting to completely remove all of the remaining green background (since there is so much less of it to begin with).


What we've done so far is create a "junk matte" around our main character, to remove most of the background, but there are still a few little areas around him that are not quite right. Now we just need to play with the KeyLight chroma key settings to get rid of any extra artifacts around the outside of our main actor.

We set the screen gain to 110 and the Matte Black setting to about 15 and all of the artifacts were gone.



One thing we found was that with a slightly more "aggressive" setting to remove the green, we started to lose opacity on our main character. In particular, the body started to disappear, a bit like Marty McFly at the Enchantment Under The Sea Ball. It was quickly corrected by adjusting the Matte White setting down from 100 to about 80.

The final step is to simply export the video footage to a format that supports RGB+ (or RGB+Alpha or 32-bit RGBA colour). For this project, we went with Quicktime as it created the smallest files (about 2.5Mb for our 5 second video). To ensure that the video plays back ok in media players that may not support alpha transparency, we double-checked that our composition settings had a bright green (#00FF00) background. Where alpha channels are supported, the video should play back with a transparent background. Where they are not, a flat, matte, solid green background should appear.



Gone are those nasty greenscreen wrinkles and badly-lit backgrounds! Our video footage is now "game ready".


With our footage now "game-ready" all we need to do is drop it into a game engine that supports alpha videos (Flash has been able to do this since about version 7 or 8) and place the appropriate background bitmap behind it.


The absolute worst case scenario would be to use Flash or Unity (or some other gaming engine) to chroma key out the green at runtime, now that we've got it a nice, solid, uniform bright green throughout the entire video...




Tuesday 4 August 2015

Windows 10 Upgrade - did I miss something?

I'd read online about being able to install the latest Windows 10 upgrade as either a "proper upgrade" to Windows 8.1 or being able to install it on a new partition, as a fresh install.

As I clicked around this morning, looking for the "fresh install" option, I set the wheels in motion and suddenly found myself facing a black screen with the dreaded "do not turn off your PC" message - Windows was upgrading, whether I liked it or not!

It was a scary couple of hours (it really does take that long) despite Microsoft reassuring me that everything was ok, and I should "sit back and relax".


And when it was finished, it took a scary-long time before the entirely black screen flashed into life with the new Windows 10 logo


A few more reboots, and a few heart-in-mouth moments, while the screen remained absolutely black with no sign of activity for 5-10 minutes at a time, and eventually, Windows 10 was installed.

And the difference?

Well, none really. The task bar looks a little bit shit. It's now black. And semi-transparent. Like the colour scheme Google tried out for their G+ homepage about four years ago and gave up on. And the long-awaited return of the start menu is nothing to write home about - it's like the metro screen, but small, and squashed up into about a quarter of the screen estate.



Even if you make the start menu full screen - to make it more like the Windows 8.1 metro screen (which I've actually got to quite like in recent months) - it just doesn't quite "feel right".


I had to spend ages re-arranging my icons back into the groups I had them in on my metro screen (the upgrade had them jumbled about all over the place). And now they scroll vertically instead of horizontally. What an amazing improvement on the Metro screen.... big deal.

Other than a black task bar, and a not-even-as-good-as-Windows-8.1 style metro screen, nothing else looks that much different.

Of course, it performs differently.
As in, it's slow.
Painfully slow.

The amazing new web browser, Edge, has "draw on webpages" as a big selling point. But when you do, it simply creates a bitmap screen grab, superimposes your text/scribbles as a transparent bitmap, and saves a local copy to your hard drive. The screen grabs are even saved as nasty 80% JPEGs, complete with weird-coloured artifacts around text, making them unpleasant to read back.

After being impressed, comparing IE11 to (the hateful) Google Chrome, I was really looking forward to the new web browser from MS. It's not terrible. It's just not as good as I'd hoped. The extra functionality feels like it would have been cutting edge in the first release of Windows 98, and everything just feels a bit slow and sluggish.

Windows 10 isn't the revolutionary new OS their TV adverts suggest. It's not even an improvement on Windows 8.1. Just as Windows 3.11 was a slight improvement on Windows 3.1, back in the day, I think I'd have referred to this latest "upgrade" as an "update" and called it Windows 8.11

Maybe after some prolonged use, the improvements to the OS will become obvious. But right now, I just don't see them. If you're happy with Windows 7, stick with it. If you hate Windows 8 (as a lot of people did) then this upgrade isn't the magic bullet to make everything cool again. It just makes your PC feel a bit unfamiliar for a while, as you learn where they've tucked away all the settings and apps you normally use, this time!

Sunday 2 August 2015

Not all zombies are scary

In fact, looking over the video footage from last weekend's quick-and-dirty video shoot, it would seem that not everyone was taking it entirely seriously at all times either!