In Pursuit of the Club Robot

I spent about twenty minutes earlier today with a needle file, carefully and patiently filing away part of a hard metal battery terminal that I’d inadvertently epoxied oh so very slightly too high, such that it was blocking the little tab meant to fit in that hole and keeping me from reattaching the black plastic battery cover of my new Zumo robot. In the photo above you can see that tiny bit of metal peeking up through the hole, with that bit of black plastic to the left less than a millimeter thick. Very fragile polystyrene plastic. One slip and I’d break it.

I’d already filed away some plastic to leave space for the battery wires to exit the battery compartment, which normally fits snug against the PC board. Normally one would just solder the battery terminals directly to the PC board, meaning: permanent connections. I have been avoiding permanent connections. This is not a reflection on my private life, just the Zumo.

To some degree this represents the end of my journey. For over a year now I’ve been searching for a “club robot”, that is, a starter robot with enough appeal for beginners but also enough growth potential that it wouldn’t end up sitting on the shelf, either because its owner lost interest in robotics or because they outgrew it and moved on to a more capable robot.

It’s a bit of a tall order. My requirements included:

  1. low cost: the overall cost of the project should be under NZ$200 (~US$145)
  2. easy to buy: use a hardware platform that if single-sourced would be from a reputable company with good support, or otherwise could be built from readily-available parts using a recipe
  3. easy to build: anyone should be able to build the robot using common mechanical skills and tools
  4. easy to modify: the robot should permit customisation, such as adding new sensors
  5. programmable in Python: preferably programmable in Python rather than C/C++, since the latter’s difficulty and learning curve are significantly higher [one goal of my own robot journey was to learn Python]. Many single board computers and microcontrollers can be programmed in multiple languages; some, like the Arduino, only in C/C++

First Steps

The MakeBlock mBot

My thoughts for a club robot first went to various small robots such as the MakeBlock mBot, which is an attractive option but targeted at primary school children. These are simple robots, often designed as part of a larger school learning programme. They would excel at the first three requirements but fail on the fourth and fifth. The mBot is advertised as an “entry-level” robot for beginners. Fair enough. But not as this club’s club robot.

The mBot uses a micro:bit, an inexpensive single board microcontroller designed by the BBC (yes, the British Broadcasting Corporation, that BBC). The one thing about the micro:bit that keeps it somewhat in the running is that it can be programmed using one of two graphical user interfaces, Microsoft MakeCode or Scratch, or using an online tool that edits code in Python and downloads it to the micro:bit to execute.

Assembling the AlphaBot2

Last year I’d helped some friends buy a micro:bit-based Waveshare AlphaBot2 robot for their daughters, which after the LiPo batteries and a safe charger ended up north of NZ$250. It’s a nicely designed and manufactured robot but at that price somewhat too expensive, and it’s not expandable: kind of a fixed entity. There’s not even a place on the robot to attach any additional features. I haven’t asked lately if it’s gathering dust or if the eldest girl, who is now 11, has shown any ongoing interest in the robot. It seemed like a success at the time, but the whole exercise pointed out the likelihood that, without some guidance and mentoring, a lot of kids might feel a bit left adrift with what is a lot more than simply a toy.

But all three of the avenues for programming the micro:bit are very limited. It’s not the kind of tool set that would appeal much to a teenager, much less an adult, so it’s not really suitable as the processor for a club robot. So the AlphaBot with micro:bit was out, and its Arduino and Raspberry Pi brethren too, since the robot is not expandable and doesn’t have the ability to fit motor encoders or additional sensors.

The Zumo Sumo Robot

Then the idea of the Zumo robot came up, though it uses an Arduino, rather old-school and slow nowadays with its 16MHz ATmega32u4 processor, and only programmable in C/C++. That being said, there are numerous code libraries available in C/C++ for Robot Sumo competitions, line-following and many other challenges and techniques. It’s widely known and widely available, so it certainly qualifies as a viable club robot if I’m willing to give up the Python requirement. But I wasn’t, at least not initially.

The default Zumo Robot for Arduino consists of a chassis, the Arduino shield, a pair of motors, a stainless steel blade to push off combatants, and an optional array of infrared sensors to sense the black and white transition at the edge of the Sumo arena. You’d then add sensors to locate the opponent. If you build it exactly according to that plan it’s a robot with a reasonable cost and reasonable build time.

But I wasn’t going to give up on programming it in Python so I also ordered some other parts that would permit me to swap out the Arduino shield for a Kitronik Robotics Board for the Raspberry Pi Pico. The Pico can be programmed in C/C++ but also in MicroPython.

So I put together an order of Zumo parts from Pololu, as well as the Kitronik/Pico parts from Pimoroni.

What seemed like a very long time later the packages had both arrived.

I myself am pretty happy working in Python and believe it’s a good job skill, and thought: I wouldn’t want to be tasked with teaching kids C++. I don’t even like it as a language. So unless there’s some way to program a Zumo in Python it’s not in the running.

So why even bother?

I then remembered a conversation a few weeks back with David Anderson of the Dallas Personal Robotics Group (DPRG), who, I believe out of sincere curiosity, had asked me: “Why are you bothering to try to start a club anyway? Why not simply build your own robots?” I had a nice pat answer about enjoying teaching people things and wanting to share some of the knowledge I’d gained over the years. But that question kinda lodged in my skull and has been festering there since then. It’s a good question.

I believe David explained his own motivation as simply wanting to build some great robots with the understanding that if people are interested they’ll be inspired by them and get involved in robotics. Lead by example. Seeing David’s robots in action was actually what spurred me, after almost 40 years, to consider getting back into robotics. David seems to be mostly interested in building robots, not trying to teach anyone or lead a group of people. It occurred to me that he’d also asked: “Why do you care if people are involved in robotics or not?” And he had me there. Why do I care? It’s not like I would benefit if more people built robots. There’s this idealistic notion that the world needs more engineers, and it’s a nice thought that I might inspire someone to take up engineering, but that’s pretty indirect. I’m not a school teacher, I don’t have any students. There are currently no NZPRG members except myself. I’m not even sure I have the time to actually lead a local club. And I’m sure David would rightly say that all the time I’d spend running a club would be time taken away from doing the thing I actually enjoy doing, which is building robots.

My Customised Zumo

Today I finally finished the hardware of my Zumo. I could have simply built it according to the plans, which would likely have taken me less than an hour. But in my head I was exploring this idea that with suitable modifications it might suit as a club robot. The amount of time I’d invested should itself answer the question of whether it could serve as one: absolutely no way. I wasn’t keeping track of the hours, but it was certainly more than an eight hour day’s work. Pretty silly.

The Zumo robot from Pololu comes in two forms: an already-assembled Zumo 32U4 Robot model and a Zumo Robot for Arduino. With the latter the robot effectively plugs upside-down onto the top of an Arduino microcontroller as a shield, the common name for Arduino accessories. The 32U4 version actually has more features built-in, like a little LCD display, motor encoders and line-following sensors, whereas the shield is, in theory, more flexible. That theory turned out to be rather illusory.

But for the Arduino shield version that I’d built, I had to figure out how to add motor encoders, which aren’t part of the deal normally. The motors are held in their own small space in the chassis, with no holes even for the twelve wires to exit. Yeah, twelve wires. Each motor has positive and negative power, plus the four wires (A1, B1, A2, B2) for the encoder. The Pololu encoders come in three varieties, but only the bare board version would seem to fit into the space.

Just enough room for the motor encoders. Now where to put twelve wires?

So where to go with twelve wires? If one is going to fit the infrared sensor array the wires can’t go forward, as the connector fits snugly against the front of the chassis. They can’t go up as that would run into the Arduino shield, nor back as that is the battery compartment. So it had to be down, even though that’s towards the ground. There are a few millimeters of clearance between the bottom of the motor chamber and the actual bottom of the robot, so that’s it.

The twelve wires found their way out. This also shows the Reflectance Sensor Array in place.

So I managed to find a way to connect the motors and encoders and route the wires up to the controller.

This also meant that the twelve wires and two battery connections (now a black and a red wire) couldn’t be soldered to the Arduino shield either; I’d have to use Dupont connectors.

So… long story short, I ended up after many hours with an Arduino Leonardo atop the Pololu Zumo shield, all connected with a bunch of wires. The infrared array plugs very cleanly underneath the shield. But I’ve not yet added any ability for the robot to see Sumo opponents, like a pair of Sharp analog infrared sensors or an ultrasonic sensor.

The completed Zumo with motor encoders and a removable shield.

The End

Yeah, also a song by The Doors. But having mostly completed the hardware for this Zumo brought me back to the question that had been at the back of my head all along: why am I doing this? It had all been in pursuit of this “club robot” idea.

The End: a little bit depressing.

I have several much larger, more complicated robots that are outfitted with a Raspberry Pi and lots of sensors, and I’ve been happily programming them in Python. Over the past few years I’ve developed a fair bit of code in support of the hardware, and continue to work on that. Was I now going to start over in C++ on this Zumo? Really?

What would be perhaps an ideal platform for someone else now just felt like a burden. As I said earlier, I don’t even like programming in C++, and while there are available libraries for the Zumo, I’d be treading on territory others have passed through long ago. I’d be going from a robot with more than a dozen sensors to one where even adding a few is rather more difficult. Why would I do this?

What’s after The End?

So what’s next? Well, I could sell the Zumo. It’s perfectly functional and has motor encoders. There’s also the idea that I could swap out that Arduino shield for the Kitronik Robotics Board and program the Raspberry Pi Pico in MicroPython.

The Kitronik Robotics Board for Raspberry Pi Pico

I’d basically be putting the shield and the Leonardo in a plastic storage box and never looking at it again, which seems a bit of a waste. But this was an experiment, a thesis, and my thesis has borne its fruit: the knowledge that while a robot based on the Zumo Robot for Arduino is a perfectly fine small robot for anyone who wants to program in C++ and compete in Mini Sumo competitions — which is what it’s designed for — it is difficult to modify. If one wants to use odometry for navigation the readymade Zumo 32U4 Robot already has motor encoders built in, and would therefore be more suited for more advanced robotics, something beyond a Sumo competition.

But to fulfill my set of requirements, a robot based on the Zumo chassis still remains a real possibility, using a different controller. I’ll explore that in a future post.

Differential Drive

KD01 Side View

This is the first in a multi-part series about the KD01 Robot.

As I described in my series of posts on the KR01 robot, I got back into robotics after watching a YouTube video of the DPRG‘s David Anderson demonstrating his robots, talking about the details of their design and how they work. I’m very happy to say that David and I are now friends and he’s been mentoring me and answering some of my more tedious questions with the patience of a very good teacher.

The basic design of a differential drive robot

The KR01 robot uses a motor controller running a pair of motors on the port side and a pair of motors on the starboard side. The motors are wired up in parallel, so the four motors are treated (electrically) like a single pair of motors. But performance-wise they don’t act like two motors.

If I program the KR01 to rotate in place, how it performs depends a lot on what surface the robot is on, as with four motors the robot relies upon the slippage of its black silicone rubber wheels to permit it to rotate at all. And how the robot’s weight is distributed influences which wheels slip and which don’t. On a relatively sticky surface like a wood floor the KR01 visibly shudders as it tries to rotate in place. If it rotates continually it tends to move randomly across the floor.

All of this is due to those four wheels, which provide a lot of stability and are great for four-wheeling out on my front deck, but not so great for precise navigation, or rotating in place. The solution for this is a different robot design, called a differential drive robot.

At about 2:30 in David’s video he says:

“The advantage of differential drive is a zero turning radius. Zero turning radius is what we humans have, so if you want to build a robot that can run around in the same space as humans run around in, it’s a useful thing to do. And that means we can turn around pretty much in our own space. There’s some tremendous advantages to being able to do that. It greatly simplifies your navigation software if you don’t ever have to back up.”

David Anderson’s SR04 robot
(image used with permission)

The other part of the design requirement for a differential drive robot being able to turn in place is that the full extent of the robot fits within a circle, with its tail wheel placed on the perimeter of that circle. If the tail wheel or the robot frame extends beyond that circle the robot won’t be able to turn in place without bumping its big bum into something.

Now, I’ll still be using the KR01 for many of my robotics experiments, but I wanted to try out a differential drive robot. I realised the design I now had in my head looked remarkably like something I’d seen before. David’s robots are typically differential drive designs, such as his SR04, the robot that inspired me to start building robots again, and it now seems a bit like the older brother to my new robot, which I’ve dubbed the KD01, where the “D” is for Differential. I still can’t think where the “K” came from…

KD01 Requirements

So in addition to desiring a differential drive robot, I thought the new robot should re-use wherever possible the existing ideas, components and parts from the KR01, including its motor controller and drive train: the OSEPP 9V 185rpm motors, anodised aluminum wheels, silicone rubber tires, and Hall effect motor encoders. As the encoders are designed to be mounted on the same beam as the motors, this entailed using the same OSEPP/MakeBlock chassis components as well.

The KR01 and KD01 drive train showing motors and Hall effect motor encoders (click to enlarge)

The KR01’s four motors and four wheels permit a substantial carrying capacity, but its Makita 18v 6.0Ah power tool battery weighs 760g (1.7lb.). By comparison, the KD01 without batteries weighs 1525g; having a battery weighing half as much as the robot seems a bit extreme. Also, with two drive wheels and half the number of motors doing all the work, something a bit lighter seemed in order, perhaps the Makita 12v 2.0Ah power tool battery, which weighs 263g: some savings in both weight and size.

The Integrated Front Sensor

The KR01’s Integrated Front Sensor (IFS) supports five Sharp 10-150cm analog infrared sensors and three pairs of small lever switches attached to a clear polycarbonate plastic bumper. I definitely didn’t want to have to build this another time if I could reuse it.

To be able to mount the IFS requires chassis compatibility with the KR01. Thankfully this is simple, as the IFS is attached to the front of the robot with two 5mm stainless bolts and to the Raspberry Pi via a single five-wire I²C connector. With this hardware compatibility I can transfer the IFS between the two robots and use much of the same sensor software I’ve already developed. All this will save me a lot of time and trouble. Re-use is good.

The overall design is based around the existing drive train, with its 18cm (~7 1/8″) wheelbase.

I thought about buying a fancy German-made caster, but decided to build my own from a spare OSEPP wheel mounted with some of the roller bearings (that came with their tank robot kit) to a commercial caster swivel mount, even using a pair of chrome guitar strap buttons as spacers. The end result is quite tall, but means that the KD01 will use three wheels of the same size, no small back caster that might reduce the floor clearance.

The tall back caster required a stepped platform with enough clearance for the caster to swivel 360 degrees. With the battery mounted just slightly behind the wheel axis, just getting everything onto the robot platform was a challenge. I ended up locating the ThunderBorg motor controller and the board holding the connections to the encoders underneath the battery, with the Raspberry Pi at the back of the robot, as was done with the KR01.

With all that I took to paper. Design is largely juggling all the requirements and constraints, and this was no exception.

The KD01 Differential Drive Robot

Things came together pretty quickly once I’d finalised the design on paper. I cut the two chassis pieces out of 3mm black Delrin plastic, using L-shaped aluminum for other parts of the frame and to provide a bed for the battery above some of the electronics.

The boards are held together using 3mm Lynxmotion anodised black aluminum hex standoffs. These are lightweight and strong, and permit the robot to be disassembled easily. I’ve been using M3 (3mm) stainless nuts and bolts for pretty much everything now.

The KD01 from above

The photo above shows the Raspberry Pi 3 B+ and a Pimoroni Breakout Garden hat at the rear (left), the Integrated Front Sensor at front (right), and the PiBorg ThunderBorg motor controller underneath where the battery sits. It’s currently sporting an Adafruit BNO085 IMU on a sparkly red nylon mast (made from a cutting board), and another Adafruit FXOS8700+FXAS21002 IMU mounted on a small black breadboard, affixed to the robot using small velcro dots. I’m currently doing some experiments with two IMUs. I also decided it was helpful to have a tiny Adafruit 135×240 Mini Pi TFT display screen as I had one in my inventory, so this was mounted on a 3mm Lynxmotion standoff using some bits of L-shape aluminum to provide a 3D swivel adjustment.

While the drive wheels and caster fit within a 212mm circle, the Integrated Front Sensor does extend outside of the circle, but not by much. This does mean that rotating in place could potentially catch the edge of the polycarbonate bumper on something as the robot turns. David’s SR04 is similar in this regard. He solved this in a later robot called the RCAT, which uses a curved plastic front bumper that fits within the extent of that circle. A great design.

Rotate in Place

Well, how did we do mechanically? I took the robot out onto the front deck and tried it out with a rotate-in-place.py test. On the deck, the robot’s wheels hitting the gaps between the deck boards seemed a distraction, so I tried it again on a piece of smooth particle board. The result was posted to the NZPRG YouTube channel:

A Problem Exposed

Running a test sometimes exposes other problems. This was evident when running the KD01’s port wheels counter-clockwise: the encoder outputs were way out of range. Normally (such as on the two encoders of the KR01 or the starboard encoder of the KD01) there’d be almost exactly 494 steps per wheel rotation, but the port encoder on the KD01 was registering an inconsistent 337-342 steps per rotation, and only when running counter-clockwise. I pulled out the trusty ol’ Iwatsu oscilloscope and wired it up to the encoder outputs with the robot on the bench.

Hardware debugging using the Iwatsu SS-5710 (circa 1991)

The only thing that seemed obviously different about this encoder was that the falling edges of the A and B waveforms were very close together.

The A and B waveforms before (left) and after (right)

In the image on the left you can see that the falling edges of the A (top) and B (bottom) waveforms almost coincide. This means that when the quadrature measurement occurs, the alternate waveform may not be in its opposite state and the tick count won’t change. On the other three encoders the two waveforms are significantly more offset.
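To make that concrete, here is a minimal, generic sketch of how quadrature decoding typically works (the general technique, not my actual encoder code): each new (A, B) state is compared with the previous state via a transition table, and only valid single-channel transitions count as a tick. When the A and B edges nearly coincide, both channels can appear to change at once, the transition looks invalid, and ticks get lost.

```python
# Generic x4 quadrature decoding sketch. state = (A << 1) | B; a lookup keyed on
# (previous state, new state) yields the tick delta. "Invalid" transitions, where
# both channels appear to change at once -- which is exactly what near-coincident
# A/B edges look like -- contribute 0, so counts come up short.
TRANSITION = [ 0, +1, -1,  0,
              -1,  0,  0, +1,
              +1,  0,  0, -1,
               0, -1, +1,  0]

class QuadratureDecoder:
    def __init__(self):
        self._prev = 0
        self.ticks = 0

    def update(self, a, b):
        """Call with the logic levels of A and B whenever either pin changes."""
        state = (a << 1) | b
        self.ticks += TRANSITION[(self._prev << 2) | state]
        self._prev = state
```

Fed the A and B levels on every pin change, this counts up in one direction and down in the other; a sensor whose edges nearly coincide will land some transitions in the invalid slots, which is consistent with the low 337-342 counts I was seeing.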

It’s good to have some friends with experience. In this case, on today’s video conference with the DPRG I asked Carl, Doug, David and the other experts and they totally knew the problem was with that offset, or the lack thereof.

So I turned the KD01 upside down, pulled the little disk magnet off of the motor shaft, and very very carefully bent one of the tiny three-wire encoder sensors very very slightly, maybe a tenth of a millimeter. While its distance to the magnet seemed okay, it wasn’t quite parallel with the surface of the magnet. I then put everything back together, wired things up again and guess what? That fixed it. The image on the right shows the waveform after the adjustment. The offset between the two waveforms is now almost exactly 90 degrees, and the edge transitions now occur a long way from each other.

To check, I hand-rotated the port and starboard wheels while counting the encoder ticks with my motors_test.py program. The port encoder now turns -4941 ticks over ten clockwise turns of the wheel, and +4939 ticks over ten counter-clockwise rotations. Or 494 ticks per wheel rotation in either direction.

Success!

Big Things in Small Packages

Nuvoton 8051 NuTiny
The Pimoroni IO Expander, which uses a Nuvoton MS51 as its controller

My exposure to the Nuvoton MS51 microcontroller came about due to its use in a number of Pimoroni Breakout Garden products, initially their IO Expander board, which provides six PWM/digital and eight analog IO pins, programmed using a Python library on a Raspberry Pi using a single I²C connection. Very handy.
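As a rough illustration of what “very handy” means in practice, here’s a sketch assuming Pimoroni’s ioexpander Python library; the I²C address shown is the breakout’s usual default, and the pin assignments are purely illustrative, not from any particular robot.

```python
# A sketch assuming the Pimoroni ioexpander library; pins and use are illustrative.
import ioexpander as io

ioe = io.IOE(i2c_addr=0x18)     # a single I2C connection to the Raspberry Pi

ioe.set_mode(10, io.ADC)        # e.g. an analog IR distance sensor
ioe.set_mode(14, io.IN_PU)      # e.g. a bump switch, using the internal pull-up

analog_value = ioe.input(10)    # read the analog input
switch_closed = ioe.input(14) == 0
```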

Part of this exploration has been mere curiosity about the MS51, which is essentially a 1980s-era technology still being used almost 40 years later. The Nuvoton MS51 is an MCS-51/8051-compatible microcontroller, using the same 8 bit architecture.

The bright red board shown at the top of the post is a bit misleading as it’s actually two boards: on the right is a USB linker board, which is used to connect to and program the MS51 microcontroller, which is on the left. These two boards can be cut apart and reconnected via the 10 pin connectors you can see in the middle. Many of the Nuvoton boards have made this even easier by perforating down the center so they can be easily broken apart. The MS51 processor itself is on the left side, that small surface mount black thing surrounded by pins labeled Nuvoton N79E85JALG, actually 4x6mm. On the IO Expander the MS51 processor is only about 3mm square. More on size below.

VL53L1X
(~3x mag.)

Now, I’m quite happy with continuing to program my robot in Python and wouldn’t want to go back to either C/C++ or assembly language, the latter of which I last touched in about 1981. But in my conversations with the DPRG we’ve talked about distributed processing for robots, that is, using something like a Raspberry Pi as the main brain and distributing the sensor and control tasks to various sub-processors. In a recent conversation this was likened to an octopus, which apparently has about two-thirds of its 500 million neurons in its arms. The sensor boards are generally small — about the size of a postage stamp — but the sensors on these boards often themselves also have a built-in microcontroller, like the ST VL53L1X, a Time-of-Flight sensor that’s about 5×2.5mm.

Last year Pimoroni released a pair of products that internally use the MS51, both RGB LED knob controllers: an RGB Encoder and an RGB Potentiometer. Recently they even added another MS51 implementation, a tiny Super Dinky Blinky (an LED blinker), even providing a github link with instructions on how to hack/reprogram the device. So it’s clear Pimoroni have some engineers on staff who like the MS51 as a general-purpose microcontroller.

The Pimoroni RGB Encoder, also using an MS51

Back in October 2020 I sent a message into the Pimoroni online forum regarding the use of the Nuvoton MS51 microcontroller on their IO Expander Breakout Garden board, asking if they might help me figure out how to hack it.

Well, nobody answered so I checked out the Nuvoton website and found a plethora of development boards, using ARM Cortex M0, M4, M23 and 8051-compatible processors in “Tiny” and “Maker” (Arduino UNO style) form factors. There’s also a suite of development tools.

Pimoroni Super Dinky Blinky, another MS51 implementation

It looks like both their hardware and software are quite well-documented, with a lot of PDFs on site. Each board is provided with hardware specifications and information on its Software Development Kit or SDK (e.g. the one for the board shown at the top of this post, the NuTiny-SDK-N79E715). The Taiwanese-English is pretty good, and their product line looks to be very extensive. Not a small company.

Some of the Tiny-style boards are truly tiny, and their Maker-style boards (e.g., NuMaker-PFM-M2351) look like nice Arduino replacements, e.g., 64MHz, operating voltage 1.6-3.6v, memory up to 512KB of flash and 96KB of SRAM. Lots of pins and an UNO-compatible USB link connector. Even an on-board WiFi module. Like Arduinos, the processors themselves are typically surface-mounted but are also available in DIP packaging should someone want to experiment with one on a breadboard.

A Nuvoton NuMaker MS51 board, pin-compatible with Arduino shields. The USB linker board is on the right.

The ARM Cortex-based processors would be Arduino-like, whereas the 8051 boards (e.g., NuTiny-N76e616) use the MS51/8051, back-to-the-future of the 1980s. Everything looks to be about US$25.00.

Software Development on the MS51

The link to the list of Nuvoton development tools includes IDEs for all of its Cortex M0/M4/M23 and 8051 controller boards. This does actually include a customised version of the Eclipse IDE called NuEclipse, with distributions available for both Windows and Linux. It’s based on a rather old (2015) version of Eclipse “Mars” but seems functional.

I downloaded and installed NuEclipse for Linux, which seems to be Eclipse customised for C/C++ development; it comes with a GCC OpenOCD installer, code support for the 60-odd boards Nuvoton sells, user manuals and some sample files.

There are also three KEIL IDEs (one for US$395, one commercial, one free for the Cortex controllers), and two different IAR IDEs for the M0/M4/M23 and 8051 processors, respectively. I downloaded the free version but it unfortunately won’t run on the only Windows machine I have available.

As I’m mostly interested in the 8051 I went to the IAR site, searched for the NuTiny-N76E885 board (which is currently on 90% discount for US$2.50!), then downloaded the IAR EW8051 Embedded Workbench IDE, available for either a free 30 day trial or a 4K code size-limited installation. When you first open the IDE you have a choice of registering online for the Evaluation Copy. It won’t operate until registered.

On starting the IAR IDE it looks a bit like an older, customised version of Eclipse, but basically functional. It doesn’t seem like there’s any way to connect it to the Pimoroni IO Expander that uses the Nuvoton MS51, so if I want to further investigate I’ll have to consider getting one of the Nuvoton controller boards.

I’ve also contacted their sales department to see about the price for purchase of the IAR IDE for a quantity of one. The installer included a “dongle driver” so I’m hoping they don’t use dongle-based license management (yuck). At least for the trial license there’s no dongle. If the commercial price of the KEIL IDE ($395) is any indication, the IAR one might be rather expensive. I hope to get a quote from their sales rep, otherwise I’ll be limited to 4K code files.

A Brief Review: Size Matters

I won’t pretend to be thorough here, as I don’t want to be unfair to Nuvoton. I received my order for the Nuvoton Tomato and NuMicro 8051 NuTiny boards. I also received a Nuvoton NuMaker-RTU-NUC980 Chili (on sale for US$14.50) which I may write about in the future.

This might have been a case where I hadn’t read the specifications very clearly, except there wasn’t anything specific on the product pages about size. One looks at photos online that don’t have a coin or a hand next to them for context and makes assumptions. My first impression on opening the packages is that the Nuvoton boards are hardly “tiny”. They seem overly large, with lots of empty space. Apparently the Nuvoton designers didn’t think size matters.

Clockwise from top left: Nuvoton Tomato, Raspberry Pi 3 B+, Itsy Bitsy M4 Express, Nuvoton 8051 NuTiny

The Nuvoton Tomato uses a single-core 32-bit ARM926EJ-S NUC976DK62Y microprocessor with a clock speed up to 300MHz. The ARM926EJ-S is circa-2001 technology, no longer under active development but still used in industry. By comparison, the circa-2018 Raspberry Pi 3 B+ has a much beefier four-core 64 bit ARM BCM2837B0 Cortex-A53 microprocessor running over four times faster at 1.4GHz. The Tomato is targeted at the Internet-of-Things (IoT) market, is a low-power device and has an embedded Linux OS which apparently has an optimised Java JVM. It also has the connectors to allow an Arduino shield to be plugged onto the top of it. Nice. It’s probably not fair to compare the Tomato and the Raspberry Pi. Apart from what’s on the boards, it’s impossible to ignore the fact that the Tomato is big at 67x120mm. But if I were thinking of building a Linux-based, Arduino-compatible robot using Arduino shields (like the Adafruit Motor/Stepper/Servo Shield), I’d certainly give it a whirl. It’s an interesting hybrid.

The bigger surprise was the 8051 “NuTiny”. Bad name really; it may be “Nu” but it’s by no means Tiny. The board comprises two parts: the 8051 processor board and a Nu-Link-Me (the Nu pun wears off quickly) board that’s used to connect and program the 8051 processor. But even with the Nu-Link-Me board broken off the 8051 board itself is still 35x52mm, compared with the Itsy Bitsy M4 Express at 18x36mm. If for convenience’ sake we leave the boards together the NuTiny is wider than a Raspberry Pi. Whereas the 8051 CPU chip is only 4x6mm (about half the size of the Itsy Bitsy’s ATSAMD51), the carrier board is over five times the size of the Itsy Bitsy, and doesn’t even come with any mounting holes, despite all that wasted PC board space. A bit of a shame, really. None of this really disqualifies it from use on a robot. And I am interested in playing with the 8051 processor, if doing so isn’t too inconvenient.

Summary

As I mentioned above, I have no plans to build a robot based upon one of these microcontrollers, though that’s perfectly feasible and indeed a pretty common approach. I’m mostly interested in distributing my robot’s tasks to sub-processors, and the Arduino compatibles (including the Tiny, Trinket, Feather, Particle, Itsy Bitsy, etc.), the MS51-based Pimoroni IO Expander, and potentially any of the Nuvoton ARM Cortex or MS51-based development boards are all suitable contenders for that purpose.

Odometry on a Mecanum Robot Using an Optical Flow Sensor

While I’ve lately been mostly focused on my KR01 robot, I’ve also been planning a Mecanum-wheeled robot to be called the KRZ02. I long ago decided my robots would incorporate odometry, counting the rotations of each motor in order to track the robot’s location. This is accomplished by either an optical or Hall-effect encoder connected to the left and right motor shafts. On the KR01, since I know the gear ratio of the motor, the size of the wheel (and that it travels 218mm per rotation), and that the encoder sends 494 steps per rotation of the wheel, I can calculate there’ll be 2262 steps per meter.

To be honest, I didn’t actually come up with such a formula but simply measured this by repeatedly having the robot move exactly one meter forward. In science this is called an observation. Observing is generally easier than calculating, but is still a valid means of finding things out. You can make calculations without having a robot, but if your calculation (the model) is flawed or incomplete it won’t be accurate, whereas careful observation of the robot while in the actual working environment can be quite accurate. My guess is that if I had gone to the trouble to develop an odometric formula I wouldn’t have ended up with a value of 2262 steps per meter, as somewhere in the mix a concrete, physical system often introduces variables that haven’t been accounted for in the model.
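For what it’s worth, the back-of-the-envelope version of that formula is short, and it lands close to, but not exactly on, the observed value:

```python
# Naive steps-per-meter from the numbers quoted above: 494 encoder steps per
# wheel rotation and 218mm of travel per rotation. The observed value was 2262.
STEPS_PER_ROTATION = 494
TRAVEL_PER_ROTATION_M = 0.218

steps_per_meter = STEPS_PER_ROTATION / TRAVEL_PER_ROTATION_M
print(round(steps_per_meter))   # about 2266, slightly higher than the measured 2262
```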

But back to the point. With knowledge of how many steps the motor encoders return I can calculate both the velocity and distance traveled for each motor, and therefore with a reasonable degree of accuracy where the robot is located from its starting location. David Anderson of the DPRG has made odometry into a fine art, and his robots can travel great distances in both complex indoor environments and even across difficult terrain up in the mountains and return to within inches of their starting locations. I find David’s work very inspiring.
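For a conventional differential (or skid-steer) robot the dead-reckoning update itself is short. The sketch below is the textbook form, not David’s or my own implementation, and the wheelbase value is illustrative.

```python
import math

TICKS_PER_METER = 2262.0
WHEELBASE_M = 0.18            # illustrative: distance between left and right wheels

def update_pose(x, y, theta, left_ticks, right_ticks):
    """Classic dead-reckoning update from per-side encoder ticks since the last call."""
    d_left = left_ticks / TICKS_PER_METER
    d_right = right_ticks / TICKS_PER_METER
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / WHEELBASE_M
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, (theta + d_theta) % (2.0 * math.pi)
```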

But all this clever odometry falls apart when using Mecanum wheels, which have a series of rubber rollers that each have an axis of rotation at 45° to the wheel plane and at 45° to the axle line.

A Mecanum wheel with micro metal gearmotor and built-in Hall-effect encoder.

Whereas a traditional wheel translates its energy to the ground in a rather predictable way, the rotation of a Mecanum wheel interacts in complex ways with the other Mecanum wheels. I’m sure there’s a mathematical formula for this, one that would involve the direction and rotational velocity of each wheel, the total weight and weight distribution of the robot, the hardness of and therefore how much of each roller is in contact with the ground, the traction and rolling friction of the wheel rollers, the friction due to contact with the ground, roller slippage, and probably another half dozen unknowns. Call me lazy but my life is too short to even consider trying to create that formula and develop sensors for the various parameters of that equation. And we’re back to that abstract model versus concrete observation issue. If the goal is accurately tracking the robot’s movement over the ground (odometry), then we need to come up with another method.

Flying drones can’t do motor-based odometry but instead use a specialised camera called an optical flow sensor, something we might call a camera-as-sensor. An optical flow sensor’s camera looks down at the ground, generating a series of image frames, calculating the distance the drone has moved across the landscape by tracking the differences in position between image frames. This is returned as an x,y value at the frame rate of the camera. With both a GPS unit and a LiDAR providing a distance to ground measurement it’s possible to accurately perform odometry in mid-air.

PMW3901MB-TXQT
2x actual size

Now, my robot lives on the ground but there’s no reason we can’t try a similar trick. The actual sensor I’m using is from PixArt Imaging, designated the PMW3901MB-TXQT, and is all of 6 x 6 x 3 mm in size, using 6 milliamps of current. That’s including the camera and all of the electronics. This reminds me of the sensor used in the VL53L1X Time of Flight sensor, which is a LiDAR the size of a grain of rice.

These sensors are generally sold on a carrier board so that they can be integrated into a commercial or hobbyist application. The carrier board I’m using is from Pimoroni, called the PMW3901 Optical Flow Sensor Breakout, part of their Breakout Garden series of sensors. The cost is about what we pay for two meals at a local Indian restaurant. You can solder a 7 pin header to the carrier board or simply plug it into an SPI socket on a Breakout Garden board.

The Pimoroni PMW3901 Optical Flow Sensor SPI Breakout

The PMW3901 has a frame rate of 121 frames per second and a minimum range of about 80mm. I’m hoping to mount it looking down from the underside of the robot’s upper board, which will be just above that 80mm minimum. The problem is twofold: how to provide the sensor’s camera with a clear view of the ground, and how to mount it in the center of the robot. This is necessary so that when the robot rotates around its center the sensor won’t register any absolute movement, just a rotational theta.

On the KRZ02 I’m using 48mm Mecanum wheels made by Nexus Robot. They’re high quality steel-framed wheels with some rather solid brass hubs and a load capacity of 3kg. When mounted on Pololu Micro Metal Gearmotors the bottom of the robot’s 3mm thick Delrin plastic lower board is 30mm from the ground. This means the PMW3901 needs to be at least 47mm above that board.

I built a test rig frame out of a cheap nylon chopping board. A photo of the rig can be found at the top of this post. This holds the PMW3901 at a distance of 90mm from the ground, looking through a 50mm hole cut into a lower board. 50mm is the biggest hole cutter I have:

I used a Raspberry Pi Zero W and wrote a Python library and test file. In the test, the PMW3901 sensor returns positive and negative x and y values as it senses movement. Based on those values I’m setting the RGB LEDs to red for a movement to port, green for starboard, cyan for forward and yellow for aft/reverse.
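The test loop itself is short. Here is a simplified sketch assuming the Pimoroni pmw3901 library; the constructor arguments and the sign-to-direction mapping are illustrative, and the real test file drives the 5×5 RGB matrix rather than printing.

```python
# Simplified sketch of the test, assuming the Pimoroni pmw3901 library.
# Constructor arguments and the sign-to-direction mapping are illustrative.
from pmw3901 import PMW3901

flo = PMW3901(spi_port=0, spi_cs=1)

while True:
    try:
        x, y = flo.get_motion()        # accumulated x,y deltas since the last read
    except RuntimeError:
        continue                       # no motion data ready yet
    if x == 0 and y == 0:
        colour = 'off'
    elif abs(x) >= abs(y):
        colour = 'red' if x < 0 else 'green'      # port / starboard
    else:
        colour = 'cyan' if y > 0 else 'yellow'    # forward / aft
    print(x, y, colour)
```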

The 5×5 RGB Matrix used to indicate direction.

My initial tests quickly proved faulty. The white plastic of the cutting board was providing all sorts of reflections of ambient light as well as the PMW3901’s illumination LEDs, and this confused the sensor to no end. Even at rest it would be indicating what seemed to be random motion. I taped some black matte paper to the board and this almost entirely eliminated the problem. I won’t be using white nylon on the robot but rather black Delrin plastic, which I can sand to a dull matte finish, so hopefully this will be sufficient. If not, there’s a rubberised black fabric used in photography that reflects almost no light.

The following video shows the test rig in operation.

As I noted in the video, the results indicate the test is a success, i.e., mounting a PMW3901 Optical Flow Sensor at about 90mm from the ground can provide odometry information for a ground-based robot.

With this important test out of the way I can now finish the plans and begin building the KRZ02 robot.

Credit where credit’s due: the beautiful fabric you can see in the photos and video is actually a reusable grocery bag designed by one of my favourite New Zealand artists, Michel Tuffery, as part of the Paper Rain Project.

The Four Corners Challenge

This is just a little ditty that I will update as things progress. It comes about from conversations on the DPRG mailing list about one of their commonly held robot contests, called the Four Corners Competition. Here’s the actual definition from April of 2018:

Objective: The robot will travel a rectangular path around a square course. The corners of the course will be marked with a small marker or cone. Before the robot makes its run, a mark or sticker will be placed on the center front of the robot and on the floor of the course. The objective is to minimize the distance between the two marks at the end of the run.

David Anderson pointed out that this contest goes back to a 1994 University of Michigan Benchmark (called UMBmark), “A Method for Measuring, Comparing, and Correcting Dead-reckoning Errors in Mobile Robots“, developed by J. Borenstein and L. Feng. As David describes it, the “concept is to drive around a large square, clockwise and counter-clockwise, while tracking the robot’s position with odometry, and stop at the starting point and measure the difference between the stopping point and the starting point. This shows how much the odometry is in error and in which direction, and allows calibration of the odometry constants and also the potential difference in size between the two wheels of a differentially driven robot. The DPRG uses this calibration method as a contest.” He even wrote a paper about it.

Well, I keep maintaining the purpose of my robotics journey is not to engage in competition (and I swear that I’m not a competitive person, I’m really not, no I am not), but I do think that this benchmark is a good exercise for fine-tuning a robot’s odometry. Nuthin’ to do with competition, nope, just a challenge.

The key to this challenge seems to be twofold: 1. getting the odometry settings correct; and 2. being able to accurately point the robot at that first marker. As regards the latter, the contest permits the robot to be aimed at the first marker using any method, so long as the method is removed prior to the contest starting. Over the years various approaches to this have been tried: ultrasonics, aiming the robot using a laser pointer, etc. I tried creating a gap between two boards and seeing if my existing VL53L1X sensor could see the gap, but then realised its field of view is 27°, so it’s not going to see a narrow gap at a distance of several meters. I then contemplated mounting a different, more expensive LiDAR-like sensor with a 2-3° field of view, but at 8-15 feet (the typical size of the course) that’s still not accurate enough.

Tactical Hunting Super Mini Red Dot Laser

This challenge has somehow lodged itself in the back of my head, the buzzing sound of a mosquito in a darkened bedroom. As I may resort to aiming the robot using a laser pointer I’ve put in an order from somewhere in China for a tiny “tactical hunting super mini red dot laser” (which kinda says it all). I’ll in any case definitely install the tactical hunting super mini red dot laser just ’cause it will look so cool and dangerous. But a laser pointer feels a bit like cheating: it’s not the robot doing the hard work, it’s like aiming a diapered, blindfolded child towards grandma’s waiting knees and hoping she makes it there. Hardly autonomous.

A Possible Solution: The Pi Camera

I’ve been planning to install a Raspberry Pi camera on the front of the KR01 robot for a while, and since I was going to have a camera available I thought: heck, the robot will at the very least be facing that first marker, so why not use its camera to observe the direction and let it try to figure out its own way there? No hand-holding, no laser pointer aiming, no diapers, no grandma. Autonomous.

The Pink LED

So what would I use for a target? How about an LED? What color is not common in nature? What color LED do I have in stock? Pink (or actually, magenta). So I mounted a pink LED onto a board with a potentiometer to adjust brightness, using a 9 volt battery for the power source. Simple enough.

The Raspberry Pi camera’s resolution is 640×480. I wrote a Python script to grab a snapshot from the camera as an x,y array of pixels. I’m actually processing only a subset of the rows nearer the center of the image, since the robot is likely to be looking for the target of the Four Corners Challenge somewhat near the vertical center of the image, not closer to the robot or up in the sky.
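A minimal sketch of that capture step, using the picamera library (the row bounds for the central band are illustrative):

```python
# Grab one 640x480 RGB frame as a numpy array and keep a horizontal band of
# rows near the vertical centre of the image. The band limits are illustrative.
from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera()
camera.resolution = (640, 480)

raw = PiRGBArray(camera)
camera.capture(raw, format='rgb')
frame = raw.array                 # numpy array, shape (480, 640, 3)

band = frame[200:280, :, :]       # only the rows near the centre of the image
```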

The Four Corners Course with target LED at the 1st corner

I found an algorithm online to measure the color distance between two RGB values. The color distance is focused mostly on hue (the angle on the color wheel), so if that particular pink is sufficiently unusual in the camera image, the robot should be able to pick it out, regardless of relative brightness. I took a screenshot of the camera’s output, opened it up in gimp and captured the RGB color of the pink LED. I avoided the center of the LED, which showed up as either white (R255, G255, B255, hue=nil) or very close to white, and instead chose a pixel that really displayed the pinky hue (R151, G55, B180, hue= 286).

For each pixel in the array I calculated the color distance between its color and that of our target pink. To be able to see the results of the processing I then printed out not the pixel array of the original image but an enumerated conversion of the color distance — just ten possible values. So magenta is very close, red less close, yellow even less, et cetera down to black (not close at all). So the image is what we might call a “color distance mapping”. I just printed out each row to the console as it was processed, so what you’re seeing is just a screen capture of the console, not a generated image.
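The per-pixel step looks roughly like this. It’s a simplified sketch using hue only (via the standard colorsys module), whereas the actual script uses the colour distance algorithm mentioned above:

```python
import colorsys

TARGET_HUE = 286.0 / 360.0    # from the sampled pink pixel (R151, G55, B180)

def hue_distance(r, g, b):
    """Angular distance around the colour wheel from the target hue, 0.0 to 0.5."""
    h, _s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    d = abs(h - TARGET_HUE)
    return min(d, 1.0 - d)

def bucket(r, g, b, levels=10):
    """Quantise the distance into ten levels: 0 = very close (magenta) ... 9 = far (black)."""
    return min(int(hue_distance(r, g, b) * 2 * levels), levels - 1)
```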

What the Raspberry Pi camera sees (click to enlarge)

My first attempts were of just the LED against a dark background, enough to try out the color distance code. Since that seemed to work I tried it against a much more complicated background: the bookcase in my study (see photo). The distance from the camera to the pink LED was about 2 meters. Despite several objects on my bookcase being a fairly close match to the LED’s color, things seemed to still work: the LED showed up pretty clearly as you can see below:

The LED can be seen in the upper left quadrant on the bookcase shelf (click to enlarge)

That object just to the left of the LED with the 16 knobs is a metallic hot pink guitar pedal I built as a kit a few years ago. There’s another guitar pedal that same color on the shelf below. There’s enough difference between that hot pink and the magenta hue of the LED that it stands out alone on the shelf. Not bad.

Outdoors Experiments

So today, on a relatively bright day I tried this out on the front deck. There was a lot more ambient light than in my study and I was able to set the LED a full 3 meters (9 feet 10 inches) from the front of the robot. How would we fare in this very different environment?

The view from the robot

The 3mm LED I’d been using turned out to be too small at that distance, so I replaced it with a larger pink LED and turned up the brightness. Surprisingly, the LED is clearly visible below:

This is a pretty happy result: the robot is able to discern a 5mm pink LED at a distance of 3 meters, using the default Raspberry Pi 640×480 camera. This required nothing but a camera I already had and less than a dollar’s worth of parts.

The Python code for this is called pink_led.py and is available in the scripts project on github.

Next step: figure out how to convert that little cluster of pixels into an X coordinate (between 0 and 640), then use that to set the robot’s trajectory. It could be that converting that trajectory into a compass heading and then following that heading might get the robot reliably to that first course marker.
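One plausible way to do that conversion, sketched below: take the centroid column of the close-match pixels and scale it by the camera’s horizontal field of view. The 62.2° figure is the published value for the Pi camera v2 module and should be treated as an assumption here.

```python
import numpy as np

H_FOV_DEG = 62.2    # assumed horizontal field of view of the Pi camera module

def bearing_from_mask(match_mask):
    """match_mask: boolean array (rows, 640), True where a pixel is close to the target hue.
    Returns the bearing to the target in degrees off the camera axis, or None if not seen."""
    cols = np.where(match_mask)[1]
    if cols.size == 0:
        return None
    centroid_x = cols.mean()                  # 0..639
    offset = (centroid_x - 320.0) / 320.0     # -1.0 (left edge) .. +1.0 (right edge)
    return offset * (H_FOV_DEG / 2.0)
```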

But I’m still going to install that tactical hunting super mini red dot laser.

We’re Goin’ Mecanum!

Not that I’ve spent much time with my KRZ01 robot. I feel almost bad that I haven’t let the project develop much before making some significant changes. Like that high school girlfriend with braces.

Mecanum wheel with micro metal gearmotor and built-in rotary encoder

It’s just that my main project, the KR01 robot, is where I’ve been devoting most of my time and energy, and the KRZ01 isn’t frankly that different, despite being rather petite: about 1/4th the size and 6% of the weight (160g with its battery). Both are wheeled robots intended to operate using the Robot Operating System (ROS) I’m writing in Python.

So what made the KRZ01 the target of a redesign was the purchase of four Mecanum wheels. This post describes the beginning of this project — I’m only at the design stages right now.

What are Mecanum Wheels?

There are a lot of descriptions (e.g., the YouTube video below) and demonstrations of Mecanum wheels on the web already (such as a Turkish Mecanum forklift!) so I won’t go into much detail here; suffice it to say that they allow a robot to travel in any direction without changing its compass heading. Well… not up or down. But crab travel, sure.

Just to mess with your head I’ll quote the description of how they work from the Wikipedia page on Mecanum wheels:

  • Running all four wheels in the same direction at the same speed will result in a forward/backward movement, as the longitudinal force vectors add up but the transverse vectors cancel each other out;
  • Running (all at the same speed) both wheels on one side in one direction while the other side in the opposite direction, will result in a stationary rotation of the vehicle, as the transverse vectors cancel out but the longitudinal vectors couple to generate a torque around the central vertical axis of the vehicle;
  • Running (all at the same speed) the diagonal wheels in one direction while the other diagonal in the opposite direction will result in a sideways movement, as the transverse vectors add up but the longitudinal vectors cancel out.

That kind of talk totally does my head in, but the concept obviously works, so I’m on board. When I get to the point of programming the motor controller for this I’m sure I’ll need a couple shots of good bourbon to focus my mind appropriately to the task.
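When that day (and that bourbon) comes, the mixing itself boils down to a few signed sums. This is the generic textbook form, not code from my robot; which wheel gets which sign depends on how the rollers are oriented.

```python
# Generic mecanum wheel mixing: vx = forward, vy = strafe, omega = rotation.
# The signs below are one common convention; they depend on roller orientation.
def mecanum_mix(vx, vy, omega):
    front_left  = vx - vy - omega
    front_right = vx + vy + omega
    rear_left   = vx + vy - omega
    rear_right  = vx - vy + omega
    return front_left, front_right, rear_left, rear_right

# All wheels the same: forward.  One side opposite: rotate.  Diagonals opposite: strafe.
print(mecanum_mix(1.0, 0.0, 0.0))   # (1.0, 1.0, 1.0, 1.0)
print(mecanum_mix(0.0, 0.0, 1.0))   # (-1.0, 1.0, -1.0, 1.0)
print(mecanum_mix(0.0, 1.0, 0.0))   # (-1.0, 1.0, 1.0, -1.0)
```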

The “Before” Photo (click to enlarge)

There are some design considerations regarding Mecanum wheels, and both David Anderson’s advice and my own experience with the KR01 suggest that I want the robot as balanced as possible, both in terms of weight and the position of the wheels relative to the center of the robot. With Mecanum wheels, weight distribution is even more critical than on a normal wheeled robot. Having too much weight on one wheel would significantly alter its behaviour, and not in a good way.

The Plans

So I have been planning this out. I can’t actually build anything just yet because I stupidly only bought two of the brass wheel hubs rather than four (“there were two in the photo” he says in his defense), so I’m waiting on another shipment from Canada.

Because it will be a fundamentally different robot the redesign will be called the KRZ02.

My plan for a Mecanum-equipped KRZ01 robot (click for larger view)

Update as of 2020-05-12: That first plan had a glaring error: while the centers of the wheels were on the circumference of a circle, that hardly meant they were equidistant, i.e., that their centers would sit on the four corners of a square. I’d drawn the diagram wrong. I tried a second time, this time also moving the motors as close towards the center of the robot as I dared, and including the positions of the Raspberry Pi Zero W, the two Picon Zero motor controllers, the Pimoroni Black Hat Hack3r expansion board, and the Breakout Garden Mini to hold one SPI and two I²C sensors. The robot got bigger but also more symmetrical. Witness version 2:

KRZ01 Mecanum Plan 2

One thing seems pretty clear at the outset: the current KRZ01 has two “moon buggy” wheels and a ball caster, and its physical extent (i.e., how much space it takes up) is a circle 128mm in diameter — it’s a small robot. Plan 1 expands that to 210mm. Plan 2 above expands that extent to 227mm, almost double in size. It looks like it’d be a much larger robot than the KRZ01. Plan 1 used a 75mm chassis width, which is the current width of the KRZ01. Plan 2 has the motors as close together as seems reasonable but by fixing my design “bug” the robot is almost as big as David Anderson’s SR04 at 11″ (280mm). Not a small robot anymore.

I discussed the issue of symmetry with the guys at the DPRG and it seems that weight balance is considerably more important than symmetry. I’m not happy with Plan 2 being such a big robot, and from the plans there seems to be a fair bit of wasted space (i.e., it’s a lot longer than is strictly necessary), so I think I might try a third, shorter design.

More later…

Facilius Est Multa Facere Quam Diu

KR01 obstacle avoidance

This is another article in the series about the KR01 robot.

The title translates as “It is easier to do many things than one thing for a long time”, attributed to Quintilian, a Roman educator from about two thousand years ago. It sounds curiously like a motto for either multi-tasking or multi-threaded processing. But also for how I’ve approached designing and building the KR01 robot.

One thing I’ve learned about building a robot is that, at least for me, the hardware and software are ever-changing. I guess that’s what makes the journey enjoyable. In my last post I ended up with too much philosophising and not enough about the robot, so this one makes up for that and provides an update on where things are at right now.

But before we get into the hardware and software I thought to mention that I’ve been quite happily welcomed into the weekly videoconferences of the Dallas Personal Robotics Group (DPRG) and about a week ago did a presentation to them about the KR01:

The DPRG is one of the longest-standing and most experienced personal robotics clubs in the US, with a great deal of expertise across many aspects of robotics. They’re also some very friendly folks, and I’ve really been enjoying chatting with them. New friends!

Hardware

So… the biggest issue with the KR01 was imbalance. There was simply no room on the chassis for the big, heavy Makita 3.0Ah 18 volt power tool battery so I’d, at least temporarily, hung it off the back on a small perforated aluminum plate.

[You can click on any of the images on this page for a larger view.]

Earlier design: the black platform at the aft end held the Makita battery

The KR01 without a battery weighed 1.9 kilograms (4 lbs 3oz), so at 770 grams (1 lb 11 oz) the Makita battery and its holder added about another 40% to the robot’s weight. By comparison, my little KRZ01 robot weighs 160 grams, including the battery. Since that photo was taken the robot has gained a bit of weight, now up to 2.1 kg. But that imbalance remained.

With the battery hanging off the back, when trying to spin in place the KR01 would typically sit on one back wheel and rotate around that wheel, but which wheel was almost arbitrary. I kinda knew something like this might happen but I was willing to keep moving forward on other parts of the robot (“facilius est…“), because fixing that problem meant making some big hardware changes.

Since I’d gotten to the point where I was actually testing the robot’s movement, I finally bit the bullet and bought another piece of 3mm Delrin plastic. This time I positioned the battery as close to the physical center of gravity of the robot as possible. Then I spent a lot of time reorganising where things fit, as well as finally adding all of the sensors I’d been planning. I think I might have gone overboard a bit. The current design is shown below.

The redesigned KR01 with space for the battery closer to the center of gravity

The copper shielding is my attempt at cutting down on the amount of high-pitched ambient noise put out by the speaker (hidden underneath the front Breakout Garden Mini, next to the servo). This didn’t seem to make much difference but it looks kinda cool and a bit NASA-like so I’ll leave it for now. Yes, the KR01 can now beep, bark, and make cricket sounds. It also has a small 240 x 240 pixel display screen (visible at top center) and two wire feelers to theoretically protect the upper part of the robot, basically an emergency stop. I have no idea how well that will work. The inside of my house is pretty hazardous for a small defenseless robot.

Side view of the KR01 robot’s chassis, showing the power/enable switches and a Samsung 250gb SSD drive lodged underneath

Earlier versions of the robot had a 15cm range infrared sensor at the center which, being digital, replied with a simple yes or no. It worked as advertised, but 15cm wasn’t enough distance to keep the robot from running into things, even at half speed, so I’ve since replaced it with a longer range analog infrared sensor (the long horizontal black thing in the cutout in the plastic bumper, shown below) that I’ve coded to react to two separate ranges: “short” (less than 40cm) and “long” (triggered at about 52cm). This permits the robot to slow down rather than stop when it first gets within the longer range of an object.
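As a rough illustration of that two-threshold logic, here’s a minimal sketch in Python; the read_distance_cm() helper and the exact speeds are hypothetical, not the actual KR01 code:

    STOP_THRESHOLD_CM = 40.0   # "short" range: stop
    SLOW_THRESHOLD_CM = 52.0   # "long" range: slow down

    def react_to_obstacle(distance_cm, cruise_speed):
        """Return a target speed based on how close the nearest obstacle is."""
        if distance_cm <= STOP_THRESHOLD_CM:
            return 0.0                     # short range: stop
        elif distance_cm <= SLOW_THRESHOLD_CM:
            return cruise_speed * 0.5      # long range: slow down
        return cruise_speed                # nothing nearby: carry on

    # e.g. target = react_to_obstacle(read_distance_cm(), cruise_speed=0.5)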

The robot currently has a 15 cm range infrared sensor on each side but I’m planning to replace them with a pair of Sharp 10-150cm analog distance sensors, which hopefully will permit some kind of wall-following behaviour. I’m eagerly awaiting another package in the post…

Front view of the KR01 robot’s chassis, with polycarbonate bumper-sensor and infrared distance sensors

The KR01 now sports a variety of sensors from Adafruit, Pimoroni’s Breakout Garden, Pololu and others, including:

  • a servo-mounted 4m ultrasonic sensor, or
  • a servo-mounted Time of Flight (ToF) laser rangefinder with a range of 4m and accuracy of 25mm
  • four Sharp digital 15cm range infrared distance sensors
  • a Sharp analog 80cm infrared ranging sensor
  • an infrared PIR motion detector (for detecting humans and cats)
  • an X-Band Bi-Static Doppler microwave motion detector (for detecting humans and cats through walls)
  • a 9 Degrees of Freedom (DoF) sensor package that includes Euler and Quaternion orientation (3 axis compass), 3 axis gyroscope, angular velocity vector, accelerometer, 3 axis magnetometer, gravity vector and ambient temperature
  • a 6 channel spectrometer
  • two 5×5 RGB LED matrix displays
  • one 11×7 white LED matrix display
  • an HDMI jack for an external monitor (part of the Raspberry Pi)
  • WiFi capability (part of the Raspberry Pi)
  • a microphone
  • a speaker with a 1 watt amplifier

This is all powered by a Makita 18V power tool battery. The clear polycarbonate bumper (inspired by David Anderson’s SR04 robot) has six lever switches, with two wire feelers protecting the upper part of the robot.

All that just for the territory of my lounge. Or maybe my front deck.

Software

So while working on the hardware I’ve also been working on the software. I’ve been writing a Behaviour-Based System (BBS) based on a Subsumption Architecture as the operating system for the KR01, in Python. The concept of a BBS is hardly new: Rodney Brooks and his team at MIT were pioneering this area of research back in the 1980s, and it has since become an entire field of research in its own right.

The idea with a BBS is that each sensor triggers either a “servo” or a “ballistic” behaviour. Servo Behaviours (AKA “feedback and control systems”) immediately alter the robot’s movement or make a temporary change in its behaviour, such as speeding up or slowing down, in a relatively simple feedback loop. Ballistic Behaviours (AKA “finite state machines”) are small sub-programs that are (theoretically) meant to run from start to completion without interruption. The video below shows a ballistic behaviour that might occur should the robot find itself facing a wall: it backs up, scans the neighbourhood for a direction where there’s no barrier, then attempts to drive that way. Yes, that is a sonar “ping” you hear.

This video shows a simple obstacle avoidance behaviour.
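To give a flavour of what such a ballistic behaviour looks like in code, here’s a hypothetical sketch of that back-up-and-scan sub-program as a little state machine. Every helper here (the sensors and motors objects and their methods) is made up for illustration and isn’t the KR01’s actual API:

    from enum import Enum

    class State(Enum):
        BACK_UP = 1
        SCAN    = 2
        TURN    = 3
        DONE    = 4

    class BackUpAndScan:
        """A ballistic behaviour: once triggered it runs to completion,
        backing up, scanning for a clear heading, then turning toward it."""
        def __init__(self):
            self.state = State.BACK_UP
            self.clear_heading = None

        def step(self, sensors, motors):
            """Advance the state machine by one step of the control loop."""
            if self.state == State.BACK_UP:
                motors.drive(-0.3, -0.3)                    # reverse both motors
                if sensors.distance_cm() > 40.0:            # far enough from the wall
                    self.state = State.SCAN
            elif self.state == State.SCAN:
                self.clear_heading = sensors.scan_for_clear_heading()
                self.state = State.TURN
            elif self.state == State.TURN:
                if motors.turn_toward(self.clear_heading):  # True once aligned
                    self.state = State.DONE

        def completed(self):
            return self.state == State.DONE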

My understanding (i.e., the way I’m writing the software) is that for every sensor there is an associated servo or ballistic behaviour, and that each of these behaviours is prioritised so that the messages sent by the sensors contend with each other (in the subsumption architecture), the highest priority message being the one that the robot acts upon. It does this in a 20ms loop, with ballistic behaviours taking over the robot until they either complete or are subsumed by a higher-priority ballistic behaviour. It’s the emergent behaviour arising from these programmed behaviours that gives the robot its personality. When the robot has nothing to do it could begin a “cruising around” behaviour, whistle a tune, or go into standby mode awaiting the presence of a cat.
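Here’s a minimal sketch of that kind of prioritised arbitration loop, just to make the idea concrete. The class and function names are illustrative rather than the KR01’s actual code, and the read_sensors() and execute_command() callables are assumed to be supplied by the caller:

    import time

    class Behaviour:
        """A behaviour proposes a motor command; a lower number means a higher
        priority. Ballistic behaviours keep proposing commands until their
        sub-program completes, so they hold onto the robot unless a
        higher-priority behaviour starts proposing commands of its own."""
        def __init__(self, name, priority):
            self.name = name
            self.priority = priority

        def propose(self, sensors):
            """Return a motor command, or None if this behaviour is idle."""
            return None

    def arbitrate(behaviours, sensors):
        """Return the command from the highest-priority behaviour that wants control."""
        for b in sorted(behaviours, key=lambda b: b.priority):
            command = b.propose(sensors)
            if command is not None:
                return command
        return None

    def run(behaviours, read_sensors, execute_command):
        """Arbitrate between behaviours roughly every 20 milliseconds."""
        while True:
            command = arbitrate(behaviours, read_sensors())
            if command is not None:
                execute_command(command)
            time.sleep(0.02)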

Not that my cat pays much attention to the robot. In the robot-plus-cat experiments that have been performed in our home laboratory he sniffs at the robot a bit and nonchalantly stays out of its way.

He wasn’t fooled for a moment by the barking.

Trial and Error. Lots of it.

[This post started as a progress report and ended up being more of a mental progress report. My next post will provide details about the current hardware and software of the KR01 robot.]

I just got another package in the mail. A couple of pieces of plastic. 3mm black Delrin. Very nice plastic.

While building the KR01 robot there’ve been a few lessons learned. One is that any notion of a “final design” was pretty foolish. If anything I’ve been constantly changing things. Donald Knuth, author of The Art of Computer Programming, famously wrote that “premature optimization is the root of all evil”, but we’re not talking here about optimisation: I’ve simply been trying to come up with functional hardware and software. Even for a small robot that’s proven to be anything but trivial. And the mistakes I’ve made have been due to both complicated and very trivial reasons.

ThunderBorg connections
What’s wrong with this picture?

Here’s a simple quiz (one that I’ve failed, twice):

Okay. You have a motor controller with connections labeled M1+ and M1-, M2+ and M2-. I’ve designated the port (left side) motor as M1 and the starboard (right side) motor as M2. You connect the white wire of the port motor to M1+, the black wire of the port motor to M1-, the white wire of the starboard motor to M2+ and the black wire of the starboard motor to M2-, just like in the photo. Correct? BZzzZZttt! Wrong. Can you guess why, or what will happen?
Answer: if you connect both motors to the controller with the same polarity and tell both motors to move forward, well, they will do exactly as you ask. Except that because the two motors are mounted as mirror images of each other, “forward” for the port motor actually drives its side of the robot backwards, so the robot will spin counter-clockwise in place. Like many things this seems perfectly obvious, in retrospect. In retrospect.
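The cure is either to swap the wires on one motor or to invert one motor’s direction in software. Here’s a hypothetical sketch of the software approach; the controller object and its set_motor_1()/set_motor_2() methods are stand-ins, not the actual ThunderBorg API:

    # Invert the port motor in software so that a positive speed always
    # means forward travel for the robot as a whole.
    PORT_POLARITY = -1.0   # port motor is mounted mirror-imaged
    STBD_POLARITY = +1.0

    def set_motor_speeds(controller, port_speed, stbd_speed):
        """Send speeds in the range -1.0 .. +1.0 to a hypothetical controller."""
        controller.set_motor_1(PORT_POLARITY * port_speed)   # port (left) motor
        controller.set_motor_2(STBD_POLARITY * stbd_speed)   # starboard (right) motor

    # driving straight ahead at half speed:
    # set_motor_speeds(controller, 0.5, 0.5)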

I’ve been a software developer for many years, and iterative design is something I’ve long believed in, or at least have (by default) practiced. That is, you don’t get it right the first time, or the fifth. You keep hammering away until some portion of the work is done and then just move on to the next bit. Perseverance furthers. This isn’t that silly waterfall vs. Agile argument. It’s good to plan ahead. You need to plan, as much as you can. But at the beginning you can’t possibly see the road ahead. A hypothesis, a direction, yes.

Unlike pure software systems, with robots we have both hardware and software to deal with, and the entire approach is different compared with, say, a client-server infrastructure for a national weather service (which I have some experience with). The latter is enormously more complicated, but writing a robot operating system for a custom-built robot is just plain tricky, both getting the hardware functional and getting the software to do what you want. The hardware is a myriad of compromises of space, weight and cost; the software is reacting to an ever-changing series of sensor events, in real time. The more sensors, the more complicated things become. The motor controller software can itself be quite a challenge.

It’s pretty tricky to create even a simple line-following behaviour, where there’s a single program that reads the values of two infrared sensors and alters the speed of two motors.
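Even that simple behaviour needs a sensible control policy. Here’s a minimal sketch, assuming the two sensors straddle the line and that hypothetical read_left()/read_right() helpers return True when a sensor sees the line:

    BASE_SPEED = 0.4
    TURN_SPEED = 0.2

    def follow_line_step(left_on_line, right_on_line):
        """Return (port_speed, starboard_speed) for one step of the control loop.
        With the sensors straddling the line, neither seeing it means centred."""
        if left_on_line and not right_on_line:
            return TURN_SPEED, BASE_SPEED     # drifted right: steer back to the left
        elif right_on_line and not left_on_line:
            return BASE_SPEED, TURN_SPEED     # drifted left: steer back to the right
        else:
            return BASE_SPEED, BASE_SPEED     # centred (or at a junction): drive straight

    # e.g. port, stbd = follow_line_step(read_left(), read_right())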

NASA’s Sojourner robot, which landed on Mars in 1997, sported an 8085 8 bit microprocessor with a 2 MHz clock (rated at 100,000 instructions per second), addressing 64 KB of memory, with 64 KB of RAM for the main processor, 16 KB of radiation-hardened PROM, 176 KB of non-volatile storage, and 512 KB of temporary data storage. By comparison, the Raspberry Pi 3 B+ in the KR01 costs all of US$35 but is many orders of magnitude more powerful — somewhere over 200 million floating point instructions per second — with a 64 bit processor, a 1.4 GHz clock and 1 GB of memory. Rather than an SD card I’m using a Samsung 250 GB SSD for data storage, which costs about US$80. The Raspberry Pi is running a Linux operating system and I’m programming the robot in Python, not assembly language. So even if my budget is minuscule, I have some very significant advantages over NASA in the mid-1990s. The limitations of my project are entirely me.

I have no idea how difficult it must be to design an autonomous robot like NASA’s Perseverance rover, with its seven large-scale instrument systems, a five-jointed robotic arm and 23 cameras. It’s not a remote-controlled robot (that would be telerobotics, basically a drone): while it can be commanded, it’s autonomous and can “think” for itself. It also weighs 1 metric ton, its power supply uses 5 kg of plutonium, and its mission has a budget of US$2.04 billion. Of course, it’s going to Mars. My KR01 only needs to navigate my lounge.

Perseverance would just fit in my lounge, if I moved my sofa (photo: NASA)

As is my nature, I jumped in headfirst and built a robot with quite a few sensors and hardware that I’ve been changing almost continually since I started. The photo at the top of this post is testament to the number of changes: it’s the “motherboard” of the robot that held the Raspberry Pi and all of the sensors that weren’t attached to the front bumper, and I’ve been almost continuously shifting things around both under and over that 3mm plastic boundary. Lots of holes.

While drilling all those holes, I learned a few things:

  • Everything is a compromise when it comes to the real world. Designs are abstract. Even if you know exactly what your goals are when designing a robot, the choice of materials, motors, batteries, sensors, and how they all fit together into a single hardware system, well, you’re not going to get it right the first time, and only by repeatedly failing will you arrive at a solution that kinda works. And it will only work some of the time. Then you go back and make some changes and try again. Repeat. And then once you’ve learned things by doing all that, your goals change.
  • You don’t know how things will go until you actually try them out. A design and its implementation are never the same. Robots and purely-software systems differ in some rather profound ways. Rodney Brooks talked about two aspects of robots, situatedness and embodiment, in his 1991 paper Intelligence Without Reason (see Some Notes on Artificial Intelligence). This means that we’re not programming for an abstract system, we’re designing a control system for something that exists in the real world and has to deal with real-world physical limitations and obstacles, where how long something takes to happen is affected by many factors, like floor surfaces, traction, and battery life. Things are often unpredictable. No two DC motors perform exactly the same, and they’re affected by things like heat and gear train friction. Sensors don’t perform exactly as we might think, and can be affected by bright lights, dust, and radio frequency interference. There are wood-floor-to-carpet transitions. Cat hair.
  • There are going to be hardware and software bugs. I’m pretty meticulous, I like to think, but the number of mistakes I’ve made while building this robot, even after double- and triple-checking things, is rather humbling. To be fair, not everything was a mistake (sometimes it was trial and error), but there’ve been quite a few genuine errors as well. For example, I’ve made mistakes creating wiring harnesses, forgotten to provide power to sensors, drilled the holes for something and found it was too close to something else, and found a couple of push connectors that, when pushed onto the connector pin, just backed out of their housings and failed to make the connection. Some of these mistakes were obvious; some took a while to discover and fix.
  • Things keep changing. Things wear out. You think you can bundle up those wires with a nylon wire tie? Think again. When you make a change tomorrow you’ll need to cut it off. This isn’t a reason to avoid the wire tie, just know that nothing is permanent, nothing lasts. This is also a central premise of Buddhism. Stay flexible, design for change.
  • Expect the unexpected. Robot hardware and software just doesn’t do what you think it’s going to do. It just doesn’t. Something always jumps up and grabs your ankle, and you’re sitting in your lounge with a mint julep, not prowling a desolate graveyard in a howling storm.
  • Learn from those who’ve gone before you. There’s little point in making mistakes others have already made. You can go it alone and make all those same mistakes, or do a bit of research to learn what to avoid, and also to seek inspiration.
  • Don’t be afraid to ask for help. Similar to that last item, there’s a lot of experience out there. If you’re polite people are usually happy to answer your questions, or point you at resources and documentation.
  • Try to stay organised. By this time I’ve got a lot of spare parts, nuts and bolts, and sensors I haven’t yet tried out, or have tried out and am not currently using. Just as in any shop, it’s smart to sort things out, keep things in divided containers, and put your tools away when they’re not in use.
  • Stay within your budget. Have some idea of what you want to spend on your project, add a 20-50% overrun, and design your robot to fit reasonably within that budget. All the miscellaneous bits add up.
  • Be prepared. Make sure you have the right tools for the job. Have some spare parts handy. I’ve purchased extra 2.5mm, 3mm and 5mm nuts and bolts, washers, lock washers, nylon lock washers. A packaged set of 2.5mm nylon standoffs and a selection of LynxMotion 3mm aluminum standoffs in various lengths. I got a good quality soldering iron, and already had a multimeter and oscilloscope (the latter has proven pretty handy, if not strictly necessary). I have a well-lit, ergonomically-correct workspace. Power tools.

Hmm. I’m not sure if that list was really about building robots.

Here are some robot-specific ones (though suspiciously still generally-applicable):

  • Balance is important. You need to position the heavy objects (like batteries) so that the weight is over the center point of the robot, or if your robot is two wheeled with a trailing ball or wheel caster, low and slightly behind the main axle. If the weight is not on-center the robot’s motion will be affected.
  • Make sure you have enough power, and the right power. The Raspberry Pi needs a regulated 5 volt supply: you’ll either need a USB “power bank” or a 5 volt regulator running off a higher voltage battery. Your batteries might be lithium ion or lithium polymer supplying 3.7 or 7.4 volts, nickel metal hydride (1.2 volts each), or alkaline AA batteries (1.5 volts each). An AA battery holder can hold four, six or more rechargeable or alkaline batteries supplying anywhere from 4.8 to 12 volts. Four AA rechargeable batteries provide 4.8 volts (not enough) and four alkaline AA batteries provide 6 volts (too much). Or, like the KR01, you could use a power tool battery supplying 12 or 18 volts (which in reality seems to be around 20 volts). Your motors will likely need 6, 9 or 12 volts. Many motor controllers can be configured to handle a larger input voltage than the motors can handle, but this still requires care and proper configuration. Beyond voltage is current: how long your robot will run depends on the capacity (in amp-hours or milliamp-hours) of your battery supply, as in the rough sketch after this list. Do you use a single battery for both the microcomputer/microcontroller and the motors? If you only want your robot to run for a few minutes at a time your battery capacity can be significantly lower.
  • Measure three times, cut once. Even with careful planning it’s easy to cut something the wrong size, or drill a hole in the wrong place. As you can see from the photo of the KR01, there’s not a lot of extra space anywhere on the robot, and I had to shift things around a lot to get things to fit. If there are any moving parts (like a servo-mounted ultrasonic sensor or camera), you need to be sure there’s enough clearance so that it won’t run into some other part of the robot.
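As a back-of-the-envelope example of that capacity calculation (the figures here are purely illustrative, not measurements from the KR01):

    battery_capacity_ah = 3.0   # e.g. an 18V 3.0Ah power tool battery
    average_draw_a      = 1.5   # motors + Raspberry Pi + sensors, averaged

    runtime_hours = battery_capacity_ah / average_draw_a
    print(f"estimated runtime: {runtime_hours:.1f} hours")   # about 2.0 hours

This ignores regulator losses and the way capacity sags with age and temperature, so treat the result as optimistic.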

Hmm. I might add to this list as time goes on but I can’t think of anything else right now. My beer is still cold, and only half full.

My next post will provide a progress report on the KR01 robot.

Some Notes on Artificial Intelligence

[These are some still-disorganised notes on Robotics, Artificial Intelligence, and Knowledge Representation that will likely be moved over to the wiki once it’s up and running. Likewise, at the bottom are some references, which will also end up on the wiki…]

“SMPA: the sense-model-plan-act framework. See section 3.6 for more details of how the SMPA framework influenced the manner in which robots were built over the following years, and how those robots in turn imposed restrictions on the ways in which intelligent control programs could be built for them.”

— Brooks 1985, p.2

From Brooks “Intelligence Without Reason” [Brooks 1991]:

“There are a number of key aspects characterizing this style of work.

  • Situatedness: The robots are situated in the world — they do not deal with abstract descriptions, but with the here and now of the world directly influencing the behavior of the system.
  • Embodiment: The robots have bodies and experience the world directly — their actions are part of a dynamic with the world and have immediate feedback on their own sensations.
  • Intelligence: They are observed to be intelligent — but the source of intelligence is not limited to just the computational engine. It also comes from the situation in the world, the signal transformations within the sensors, and the physical coupling of the robot with the world.
  • Emergence: The intelligence of the system emerges from the system’s interactions with the world and from sometimes indirect interactions between its components — it is sometimes hard to point to one event or place within the system and say that is why some external action was manifested.”

Brooks notes that the evolution of machine intelligence is somewhat similar to biological evolution, with “punctuated equilibria” as a norm, where “there have been long periods of incremental work within established guidelines, and occasionally a shift in orientation and assumptions causing a new subfield to branch off. The older work usually continues, sometimes remaining strong, and sometimes dying off gradually.”

He expands upon these four concepts starting on page 14:

  • The key idea from situatedness is: The world is its own best model.
  • The key idea from embodiment is: The world grounds regress.
  • The key idea from intelligence is: Intelligence is determined by the dynamics of interaction with the world.
  • The key idea from emergence is: Intelligence is in the eye of the observer.

I might note that Brooks’ criticisms of the field of Knowledge Representation reflect my own findings, observed during the four years of my doctoral research on KR at the Knowledge Media Institute.

It is my opinion, and also Smith’s, that there is a fundamental problem still and one can expect continued regress until the system has some form of embodiment.

— Brooks 1991

The lack of grounding of abstract representation is evident in the almost complete failure of KR researchers to even bother to definitively explicate the two terms in the field’s title: “Knowledge” and “Representation”. How can one rationally explore a field when one doesn’t yet know what knowledge is, or where there is no epistemologically-sound definition of the word representation? The greatest related advances in that field belong to the likes of C.S. Peirce, John Dewey, Wilfrid Sellars, Richard Rorty and Robert Brandom, but this seems (at this point in time) to remain disconnected from the concept of “embodiment” as explored in robotics (but I’m hardly the person to judge that issue). So it’s grounded neither in mathematics nor in the real world.

I must agree with Brooks that embodiment is a necessary precondition for research into intelligence. Brooks’ paper was from 1991; my doctoral programme began in 2002. I wish I’d read his paper prior to 2002. I met Doug Lenat in 2000 and over dinner in Austin we discussed the idea of my working for his company, Cycorp (the corporate home of the Cyc Ontology). The whole thing is a giant chess set, a massive undertaking that as of 2020 is still essentially doing what it did when I saw it for the first time at SRI in 1979; as Brooks says, it has simply followed the advances in computing technology without providing any real breakthroughs.

Regarding scale or size:

“The limiting factor on the amount of portable computation is not weight of the computers directly, but the electrical power that is available to run them. Empirically we have observed that the amount of electrical power available is proportional to the weight of the robot.”

— Brooks 1991, p. 18

References


Some Goals

What has become the New Zealand Personal Robotics Group began not long ago as a robotics project. So this is all pretty new. Below is a jumbled collection of some of my goals for the robotics project. Perhaps these goals will be shared by other people?

Meta-Goals

  • Explore ideas: both the philosophical as well as experience-based research within the field of Artificial Intelligence, specifically as related to robotics
  • Explore robotics hardware: the latest hobbyist-level sensors, motors, platforms, etc.
  • Explore robotics software: where have we gone since Rodney Brooks’ 1985 ideas about subsumption architectures [Brooks 1985], or odometry using PID controllers [PID]?
  • Explore Self-Adaptive Software Systems
  • Learn the Python programming language

Goals

  • Try out various sensors and robot modules:
    • Time of Flight (ToF) laser distance sensors
    • infrared sensors (various distance ranges)
    • ultrasonic sensors (via PiBorg’s Ultraborg)
    • install Hall-effect motor encoders on the motor shafts, and write a Python-based PID motor controller
    • analog-to-digital converters
    • motion detector, to detect humans (and cats)
    • a robot front bumper, modeled after David Anderson’s SR04 robot [SR04]
    • motor control (via PiBorg’s Thunderborg)
    • Uninterruptible Power Supply (UPS) and battery management (via PiJuice)
  • Try out several robotic hardware platforms, for a low-cost, entry-level robot
    • OSEPP Tank Kit
    • Adafruit CRICKIT for Circuit Playground Express
    • robot chassis available from Adafruit:
      • Purple Aluminum Chassis for TT Motors – 2WD
      • Mini Robot Rover Chassis Kit – 2WD with DC Motors
      • Mini Round Robot Chassis Kit – 2WD with DC Motors
      • Mini 3-Layer Round Robot Chassis Kit – 2WD with DC Motors
  • motors:
    • OSEPP 25mm motors (part of the Tank kit), with encoders
    • yellow plastic motors
    • continuous-rotation micro servos
    • Pololu micro gearmotors (1:298, with encoders)
  • Try out several robotic processor platforms, for a low-cost, entry-level robot
    • Raspberry Pi (Zero, Zero W)
    • Circuit Playground Express
    • Arduino
    • other microcontrollers, e.g., Adafruit ItsyBitsy M4 Express
    • Espruino WiFi (Javascript-based microcontroller)
    • micro:bit?
  • Explore use of I2C bus

Also see the page on Artificial Intelligence.