Using a Drone to make a BMX Racing video

In 2021 I put together an action video of BMX racing here in New Zealand, made up of a combination of drone video footage and still images. You can watch it here on YouTube – I recommend sound on:

While I have published videos before, these have been simple clips. This one, on the other hand, combined many different clips taken from my DJI Mavic Air 2 drone at the Kapiti BMX Championships in February 2021 with a number of photos I have taken at similar events at the Kapiti track over the last few years. I also added a soundtrack and tried to make the visuals match the music.

This was a new experience for me, and one of my aims was to use this exercise to teach myself video editing. This post goes through some of the process, tools, and challenges I faced in being an almost total video editing beginner.

Flying with Permission

Kapiti BMX track is close to Kapiti Airport, so I had to get permission to fly. Most air traffic is private light aviation, the odd helicopter, and a short-haul passenger service. A phone call to the control tower was all it took – they were incredibly helpful. I was flying in a shielded location, which meant that as long as I stayed under a certain height (about 30 meters) there was no clear, unobstructed flight path between where I was flying and the airport. This safety constraint meant that in the unlikely event of a loss of control, the drone could not reach the airport without first hitting obstacles, in this case trees or houses up on a slight hill. The Mavic Air 2 has an altitude limiter, so I did not need to worry about accidentally flying too high.

I cannot stress enough the importance of ensuring you have permission to fly, especially near an airfield. In addition to the obvious safety benefits, ensuring ATC know of your plans means that if someone reports drone activity then the controllers won’t have to divert all air traffic while they verify where someone is flying. It will probably also save a visit from the Police.

Those of you who are familiar with drone flying in New Zealand may at this point be asking why I didn’t use the AirShare app to book my flight plan. This is an app from Airways New Zealand which lets you apply online for permission to fly in restricted areas. In this case I did actually use it, but there was no response to confirm or deny permission. Rather than just assume I could fly, I looked up the number for Kapiti Airport’s ATC and spoke to them on the morning – always good to have a backup plan.

My filming window was a couple of hours, and as soon as I landed for the final time a quick phone call to the control tower informed them I was done. Shortly after that I noticed planes started to fly much closer to the track, rather than doing a hard turn out after take off. This made me appreciate that air traffic control had been doing me a great favour to enable me to indulge in this drone shoot. Thank you Kapiti Airport! 


I wanted to get some good dynamic shots of the racing, including trying to follow riders around the circuit. I soon discovered that following riders around and changing angles was a flying challenge, especially as I did not want to get too close and risk being a distraction. As you can see from the image below, I had a very tight area I could film in – just the track area. Add in the challenge of lampposts, telephone wires and trees, and you have a good recipe for crashes.

Kapiti BMX Track

Spoiler alert – I managed not to crash. Came close a couple of times though!

I had about 90 minutes of flying time with my three batteries, which allowed me to film a decent number of races. I took around 70 separate clips, ranging from a few seconds to a few minutes. Each race can take between 2 and 5 minutes from start gate to finish line, depending on the age group of the riders.

I shot a mixture of 4K 60fps and 1080p 120fps sequences for what I hoped would be some great slow-motion action. I also took a large number of still photos to try to capture some perspective of the track.

In retrospect I realise one of the things I could have done much better was to properly plan the shots and sequences I wanted to capture with a view on the final video. But I didn’t – instead I just rocked up and flew around a lot hoping something would come together. This made the edit phase much harder and more time consuming. But on the other hand, it was quite freeing just to scoot around and try things, and this was a largely experimental exercise – I didn’t really go into it thinking I was going to make a full video afterwards.

Creating the Video

After downloading all the clips to my laptop, I was keen to produce some kind of short video from a combination of sequences. I had no real vision of what the end result would be, but I did want to include a few elements:

  • An intro showing the racing arena
  • Some sequences closely following the riders
  • Some still images
  • A music track 

The final video was created using Adobe Premiere Pro, which I pay a monthly subscription for as part of Adobe’s Creative Cloud offering. I aim to write a separate post about the video editing challenges, but a lot of time has passed since I created the original, so it may morph into a more generic post. I would say “watch this space”, but given how infrequently I post on this blog, that may be a long wait!

Do Blogs Rust?

OK, I admit it, I write here in fits and starts. I have a lot to say, but it’s been nearly five years since my previous post. If blogs don’t rust, perhaps the authors do.

But, you know, life….

Feeling rusty

A lot has been happening professionally and personally. I had a great time at my first New Zealand employer, especially in the latter stages working with the Friday innovation group, playing with all things VR, AR, and anything else that took our fancy. I really should write about the idea I developed and prototyped for an immersive crowd participation experience aimed at the Wellington Lux festival. That was a lot of fun, and brought back memories of my previous life working in computer graphics software development.

I continue to do a lot of photography. Take a peek at my Flickr site for the sort of things I shoot – which is pretty much anything.

As part of growing my photography I joined the Photographic Society of New Zealand (PSNZ), and my local club the Kapiti Coast Photographic Society where for my sins I ended up President for a couple of years. I also maintain the IT systems to help run the club, and recently have taken on a role to help out with PSNZ’s back end IT.

Let’s see how rusty this post gets.

More Swift – A minor refactor

In an earlier post I talked about not being very sure that a method I wrote for one of the Stanford iOS programming course assignments really embraced Swift as a language. I then posted about using Swift’s unit test framework, which would make any refactoring easier and safer.

And so on to the refactoring…

Well, a fresh day brought a fresh eye, and there was not much I could really do to make the method more Swift-like and less C/C++-like. I ended up mainly folding some let statements into inline expressions. Not exactly hard, but when it was all done I think the code is more or less as tight as it can be while remaining easily readable – opinions to the contrary are very welcome.
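To illustrate the kind of folding I mean, here is a small before-and-after sketch in current Swift syntax (the function names and logic here are hypothetical, not the actual assignment code):

```swift
// Before: intermediate lets for every step (reads like C)
func describeVerbose(_ ops: [Int]) -> String {
    let last = ops.last
    let lastText = last != nil ? "\(last!)" : "?"
    let result = "value: " + lastText
    return result
}

// After: the lets folded into a single inline expression,
// using map on the optional and nil coalescing for the "?" fallback
func describeFolded(_ ops: [Int]) -> String {
    return "value: " + (ops.last.map { "\($0)" } ?? "?")
}
```

Both versions return the same string for any input; the folded one just reads as a single thought rather than a sequence of bookkeeping steps.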


Using Swift’s Unit Test Framework

In an earlier blog post I wrote about my solution to one of the homework assignments for Stanford University’s iOS programming course. My conclusion was that, while the code worked, it seemed a little bit too C-like and not really embracing some of the elegance of the Swift programming language.

My intention is to rewrite the code and see if I can make it look somewhat more Swifty. But of course in doing that I don’t want to break it – there are a lot of different cases the CalculatorBrain has to handle. This is a classic scenario that can be solved using automated unit testing.

When you create a new application project in Xcode, as well as the main build target you get a test target with boilerplate code for using Xcode’s unit testing framework. It is simple to use – just add new functions to the test class, and use XCTAssert calls to check for the right results.
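As a minimal sketch of the pattern, assuming a deliberately cut-down stand-in for the course’s CalculatorBrain (the class internals and method names here are hypothetical):

```swift
import XCTest

// A tiny stand-in for the course's CalculatorBrain (hypothetical)
class CalculatorBrain {
    private var stack = [Double]()
    func push(_ value: Double) { stack.append(value) }
    // Pops two operands, pushes and returns their sum; nil on underflow
    func add() -> Double? {
        guard stack.count >= 2 else { return nil }
        let rhs = stack.removeLast()
        let lhs = stack.removeLast()
        let result = lhs + rhs
        stack.append(result)
        return result
    }
}

// Each func test... added to the class is picked up by the test runner
class CalculatorBrainTests: XCTestCase {
    func testAddition() {
        let brain = CalculatorBrain()
        brain.push(3)
        brain.push(5)
        XCTAssertEqual(brain.add(), 8, "3 + 5 should be 8")
    }
    func testMissingOperand() {
        let brain = CalculatorBrain()
        brain.push(3)
        XCTAssertNil(brain.add(), "add with one operand should fail")
    }
}
```

With tests like these in place before the refactor, a quick Cmd-U run tells you immediately whether the rewrite broke anything.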


Programming in Swift – perhaps badly

I have been periodically dabbling in learning Apple’s relatively new Swift programming language. Despite many years of experience in a number of languages including C, C++, C# and Objective-C – yep, that’s a lot of C’s! – my primary reason for wanting to learn Swift is mental exercise and general interest. My day job these days involves very little programming, but I still consider myself a software developer at heart despite moving heavily into the management side of things. So it is good to keep current and familiar with newer technologies. A further reason for choosing Swift is my interest in developing mobile applications, and Swift looks to be the future of iOS development.

My primary source for learning has been the excellent free Stanford University course CS193P: Developing iOS Apps with Swift, presented by the hugely engaging Paul Hegarty. This course is available for free via iTunesU as a series of videos, slides, and practical exercises. I cannot recommend it enough.

This article is about the homework task of extending the Calculator demo: specifically adding a recursive description method to the CalculatorBrain object to present a readable and mathematically correct text description of the contents of the calculator’s operation stack.

The code snippet below is my implementation. The public getter for the description property contains a loop to format the output of multiple separate expressions, each comma separated, and is pretty straightforward:


    var description: String {
        get {
            var descriptionText = ""
            var (desc, remainingOps) = formatDescription(opStack)
            descriptionText = desc!
            while remainingOps.count > 0 {
                // Comma separate any following complete expressions
                let (desc1, remainingOps1) = formatDescription(remainingOps)
                if desc1 != nil {
                    descriptionText = desc1! + "," + descriptionText
                }
                remainingOps = remainingOps1
            }
            return descriptionText
        }
    }

The guts of the solution are in the recursive formatDescription method. It works for the test cases mentioned in the assignment, for example, an operation stack entered in this order:

3 <enter> 5 <enter> 4 + +

gets displayed as:


It also handles error conditions such as missing operands being displayed as “?”. So it works.

    // Recursive function to format description string
    private func formatDescription(ops: [Op]) -> (desc: String?, remainingOps: [Op]) {
        if !ops.isEmpty {
            var remainingOps = ops
            let op = remainingOps.removeLast()
            switch op {
            case .Operand(let operand):
                return ("\(operand)", remainingOps)
            case .UnaryOperation(let operation, _):
                let (op1Text, remainingOps1) = formatDescription(remainingOps)
                let op1ActualText = op1Text ?? "?"
                let returnText = "\(operation)(\(op1ActualText))"
                return (returnText, remainingOps1)
            case .BinaryOperation(let operation, _):
                let (op1Text, remainingOps1) = formatDescription(remainingOps)
                let (op2Text, remainingOps2) = formatDescription(remainingOps1)
                let op1TextActual = op1Text ?? "?"
                let op2TextActual = op2Text ?? "?"
                let returnText = "\(op2TextActual) \(operation) \(op1TextActual)"
                return (returnText, remainingOps2)
            case .Variable(let variable):
                return (variable, remainingOps)
            case .Constant(let constant):
                return (constant, remainingOps)
            }
        }
        return (nil, ops)
    }

This brings me to the point of this blog post – I think I’m missing something. The formatDescription method does not feel very elegant. Swift has a fantastic type inference engine, and features like optional chaining, which I feel my solution does not take advantage of. You could say it is a little too C- or C++-like. All those let statements seem like overkill.
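For comparison, here is a sketch of one more Swift-idiomatic shape the method could take, using a simplified stand-in for the course’s Op enum (the real one also carries operation functions; the enum and function names here are hypothetical): guard handles the empty case up front, and nil coalescing folds the “?” placeholders inline.

```swift
// Simplified stand-in for the course's Op enum (hypothetical)
enum Op {
    case operand(Double)
    case unary(String)
    case binary(String)
}

// guard unwraps the top of the stack; ?? supplies the "?" for
// missing operands without any intermediate lets
func format(_ ops: [Op]) -> (desc: String?, rest: [Op]) {
    guard let op = ops.last else { return (nil, ops) }
    let rest = Array(ops.dropLast())
    switch op {
    case .operand(let value):
        return ("\(value)", rest)
    case .unary(let symbol):
        let (inner, rest1) = format(rest)
        return ("\(symbol)(\(inner ?? "?"))", rest1)
    case .binary(let symbol):
        let (rhs, rest1) = format(rest)
        let (lhs, rest2) = format(rest1)
        return ("\(lhs ?? "?") \(symbol) \(rhs ?? "?")", rest2)
    }
}
```

The behaviour is the same, but the “happy path” and the error handling are no longer interleaved through temporary variables.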

What do you think? Is there a better, more elegant solution?

AppInventor – Preserving Button State When Switching Screens

This is the third tutorial note to be published from a collection I created in support of a schools IT programme. It covers preserving state between screens, refactoring of code, and passing of values between screens. It is rather a long article as it goes into some detail.

As a reminder, the notes here address one or more specific problems that the students had while writing their own application.

Problem Statement

The app has some buttons to represent a Tic-Tac-Toe game – each button in a 3×3 grid can show nothing, an X, or an O. Each tap of a button cycles it to the next state. (This game is also known as Noughts and Crosses.)

It looks something like this:


The problem to solve is that the app needs to remember the state of the buttons when switching to a different screen and then coming back.
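For comparison, the same idea can be sketched in Swift (illustrative only; in AppInventor this is built from blocks, with a store such as TinyDB for persistence across screens): keep the cycling state in a model that outlives any one screen, and derive each button’s caption from that model.

```swift
// The three states a cell can cycle through on each tap
enum CellState: String {
    case empty = ""
    case cross = "X"
    case nought = "O"

    // Next state in the tap cycle: empty -> X -> O -> empty
    var next: CellState {
        switch self {
        case .empty: return .cross
        case .cross: return .nought
        case .nought: return .empty
        }
    }
}

// The 3x3 board lives outside the screen, so its state survives
// switching away and coming back
var board = Array(repeating: CellState.empty, count: 9)

// Advance one cell and return the new button caption
func tap(cell index: Int) -> String {
    board[index] = board[index].next
    return board[index].rawValue
}
```

Because the screen only renders what is in `board`, returning to the screen just means redrawing the captions from the model.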


AppInventor – Sprite Collision

This is the second of my tutorials created as part of my contribution to mentoring teams of school children in an IT challenge here in New Zealand – see earlier article for details of that.

These notes were prepared in answer to a question from a student: “How do you make something happen when a sprite collides with a barrier?” The following describes one way you can detect when one sprite hits a barrier.

There are in fact two types of collision detection:

  • Colliding with other sprites
  • Colliding with the edge of the screen (or, more accurately, the edge of the canvas that the sprite moves across).

These notes describe the first case – the second one is really easy.

By the way, “collision detection” is fundamental to pretty much any computer game, and understanding the basics of it is really useful.
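Under the hood, this kind of sprite-to-sprite detection is usually an axis-aligned bounding-box overlap test. Here is a sketch of the underlying idea in Swift (illustrative only; AppInventor exposes this as a ready-made CollidedWith event, and the struct here is hypothetical):

```swift
// A sprite's position and size on the canvas
struct Sprite {
    var x, y, width, height: Double

    // Two rectangles collide when they overlap on both axes
    func collides(with other: Sprite) -> Bool {
        return x < other.x + other.width && other.x < x + width &&
               y < other.y + other.height && other.y < y + height
    }
}
```

A game loop would run this check between the moving sprite and each barrier every frame, triggering the “something happens” logic on the first hit.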


AppInventor – Setting the Start Screen

This article is from some notes I made when helping out as a mentor for a schools IT challenge – see recent posts for more details. A common problem the pupils had was that they would create their pages in an arbitrary order driven by their individual learning processes. After developing their app further at some point they would want to reorder these screens so that a specific one would appear as the start screen when the app runs.

It turns out setting this in AppInventor is surprisingly difficult and is certainly not obvious. My solution below may not be the best, and is more of a workaround to what is probably just a current limitation of AppInventor. If future updates make this better, I will update this post.

When you run your app, which screen appears first depends on how the app is run.

In the first case, when you have AppInventor running and you connect to the emulator, the screen that appears is the one that you are currently editing. So if you want to ensure a particular screen appears first when running the emulator, just select it from the screen drop down:


The second circumstance is when you run the app from the phone’s (or emulator’s) home screen. In this case, the app always starts with the first screen in the list.

The tricky thing is if you want to have a different screen than the first one to be the startup screen. I cannot see any easy way to do this. As a workaround the general approach is to copy the current first screen to another screen, then replace the contents of this screen with a copy of the components from your desired start screen. Details of how to do this are:

  1. Create a new screen. Let’s call this ScreenX.
  2. Copy all the controls and blocks from Screen1 to ScreenX.

You then have a couple of choices:

  1. If the second screen in the list is now the one you want to be first, select Screen1 and choose “Remove Screen”. This will move the desired second screen to be the first one.
  2. If the screen you want to be the first screen (let’s call this ScreenY) is further down the list, then you will have to:
    1. Delete all the controls and blocks from Screen1.
    2. Copy everything out of ScreenY into the now empty Screen1.
    3. Remove ScreenY.

Copying between screens is unfortunately a manual process. It might be easier if you have two computers side by side with the same project loaded into AppInventor, so you can refer to the original while making changes. This works better if you make a safe copy of the original project and use that for reference.

You can use the Backpack to make copying a block from one screen to another easier. To do this, grab a code block and drag and drop it onto the backpack icon.

You can then switch to your destination screen, and when you click on the backpack again a copy of the block will be displayed, and you can move it out of the backpack.

I will keep looking for a better solution. For demonstration purposes you can ignore the start screen issue altogether by running the demo from AppInventor using the emulator, and simply ensuring you have your preferred screen selected as described above.

Finally, as a general bit of advice, I would suggest using meaningful names for screens when you first create them, e.g. “StartupScreen”, “LoginScreen”, “QuizScreen1”, etc. This will help you when editing different screens, as you won’t need to remember that “Screen2” is for one thing, “Screen5” for another, and so on. It may also help if you ever need to merge two projects, as there is less likelihood of a name clash preventing the merge. Merging projects will be a topic for a later note.

Using MIT AppInventor in Schools IT

As described in my previous post here, I have been involved in a schools mentoring programme to help expose kids to software development. The New Zealand Techhub Schools IT Challenge started last year in Wellington, and this year has rolled out on a larger scale across the major population centres of the country. The 2015 winners went on to create a shipping mobile phone app with the help of my employer, Datacom. I was fortunate enough to lead that development team, where we took the core ideas of the winning team and, with their help, created a brand new application in just a few weeks. See my earlier articles for links to both the iPhone and Android versions.

Now before you start to get the idea that a big corporate was exploiting the ideas of school children, I should mention that:

  1. the apps are free;
  2. IP remains with the school;
  3. this was an investment by the company in people and resources worth tens of thousands of dollars.

The team got a lot out of it personally, and we genuinely got a buzz out of teaching the three girls about software product development and programming. It was satisfying for everybody to take some raw ideas and turn them into shipping code.

A word about the winners, from St Mary’s College for girls in Wellington. For the competition they had to start from nothing, come up with ideas, and implement what they could using MIT’s AppInventor – more on that later. Other schools in the competition had boys’ teams, and the stereotypical expectation might be that the boys would win, given the much higher proportion of males in the IT workforce. So it was very pleasing to see the girls not only right up there with the boys’ teams, but grabbing the prize too.

All of the above is a rather long-winded introduction to the point of this post. The choice of tools was down to the individual schools. The organisers recommended a number of alternatives, but did not dictate any specific platform or technology. One of the options was MIT’s AppInventor. Originally created by Google, it is now maintained and promoted by the Massachusetts Institute of Technology as an educational programming tool for creating Android-based mobile applications. The school I was mentoring at, St Mary’s College, decided that all its teams must use it. To be of most help, I decided I really ought to learn it too.

I am an experienced developer, and always up for learning new stuff. I have done some iOS programming, but Objective-C or Swift in a full IDE is a different kettle of fish from a web-based graphical programming tool. So I spent some time doing various online tutorials, which armed me to answer questions from my teams.

But I would sometimes get questions from the pupils that couldn’t be answered immediately, or required more time and careful explanation than was available in the classroom. They were interesting real-world problems encountered outside the realms of existing online tutorials. I ended up answering these by preparing after class, and then writing something up. Over the next few weeks I intend to publish these notes and targeted AppInventor tutorials in the hope that they will be useful for others.

What is AppInventor?

AppInventor is a web-based programming tool for creating Android applications and sharing them with others. The tool provides a screen designer that you use to assemble buttons, text boxes, images, and various other user interface components visually. You customise the behaviour of the controls by writing code blocks. But rather than typing to a particular syntax, code blocks are assembled graphically and connect together using differently shaped connectors to enforce structure. The following example shows a typical code block:



This is an event handler for a sprite object called Ball1, which gets invoked if the sprite hits any edge of the display area. When it does, it calls a built-in method of the sprite that makes it bounce off the edge, and then sets the colour of the ball to a random one chosen from a predefined list. All of these blocks are presented in graphical palettes that the user just drags and drops into their work area. Very slick.

To see your code running, you can launch an Android emulator on your PC or Mac. AppInventor connects to this, loads the code and executes it. Alternatively, you can also connect an Android device via USB or wireless, and the program will be loaded and run from there.

The beauty of AppInventor is that it provides instant gratification. In other words, it is incredibly easy to set something up, write a bit of code, and quickly execute it to see if it does what you expect. As you make changes, they are dynamically uploaded to the emulator, providing a quick feedback loop that is essential to keep young inquiring minds engaged.

I won’t go into more detail on AppInventor here as that can be picked up from existing online resources and my forthcoming notes. Give it a go and play!

Mentoring kids in IT

Last year I became involved in a schools IT challenge here in Wellington as a mentor for a pilot program run jointly by the Royal Society and the New Zealand Institute of IT Professionals. The aim was to challenge teams of year 9 and 10 pupils from a few schools in Wellington to create an app. It was to be a full-cycle development, taking in initial design, coding, testing, and finally demonstrations in front of a Dragons’ Den-style panel who chose the eventual winner.

Around 40 teams took part, each of up to four pupils, with the brief to create a school time tabling app. I was one of several IT professionals from various companies in Wellington who volunteered to be mentors to one or more teams. In my case it was two from St Mary’s College. It entailed advising on all aspects of the project, and providing technical input where appropriate – and without doing the work for them.

The challenge took place over several weeks. Mentors visited the classes as often as needed, and were also available to provide support via email. Personally I found the whole experience very rewarding, and it was great to have a chance to introduce the next generation to various aspects of IT as a profession.  The programme is rolling out on a more nationwide scale over the coming years, and I would encourage anyone to put their hands up as mentors.

The prize on offer for the winners was donated by my employer, Datacom, to provide resources to develop the winning idea into a shipping application. The call went out around the company asking for volunteers to join a small engineering team which I ended up leading.
The winning pupils coincidentally came from the same school where I was mentoring, although they were being guided by a different mentor (also a Datacom employee), and called themselves the Pastel Programmers.

An initial launch and brainstorming workshop was held at the end of September with the three St Mary’s pupils. This set down the core design principles and ideas behind what became a school timetabling app called Pastel Planner. The team quickly created a mockup using Flinto, and shortly afterwards the shell of the app was created on both Android and iOS platforms. Through October and November the app evolved with continuous feedback from weekly workshops with the Pastel Programmers. They contributed ideas, artwork and assets, and also took part in some pair programming with the Datacom developers – something they said was perhaps the highlight of the whole experience.

Pastel Planner lets a student manage their timetable, subjects, and homework in one central location. Camera integration makes it easy to grab homework details from the whiteboard.


By the end of the exercise we had created a functional and useful app, which also incorporated a few fun ideas. But more importantly, the students were exposed to and involved with the full cycle of product development, and will have picked up skills and knowledge that they can apply to any team project.

The app is available free on both the Apple App Store and Google Play at the following URLs: