A while back I’d tested out the PMW3901 Optical Flow Sensor, a Breakout Garden board from Pimoroni. It’s a vision sensor that contains a tiny camera, typically used on flying drones. Coupled with a drone’s GPS sensor and Inertial Measurement Unit (IMU), an optical flow sensor assists in navigational control by observing the motion of the ground below it, taking successive pictures and comparing how far an image has moved relative to the previous one. Since the camera itself doesn’t know how far it is from the surface it can only provide relative numbers, not distances in measurement units like feet or meters.
Since the PMW3901’s chip was designed for drones, its camera’s effective range is 80mm to infinity. This means that if used on a ground-based robot its lens must be at least 80mm from the ground, and its view must not be impeded. This can pose a challenge for the design of a robot, whose bottom must either be at least 80mm from the ground or contain a large open area for the sensor, ideally right in the center of the robot. But even then we’d be using the sensor at the extreme lower limit of its range; it’s really tuned for being way up in the sky.
Now it turns out that PixArt, the maker of the sensor chip, also manufactures a very similar sensor called the PAA5100JE-Q that is tuned for working close to the ground, apparently used in service robots and robot vacuum cleaners. Whereas the Pimoroni carrier board is 24 by 24 mm, the PixArt sensor itself is only 6mm square. Depending on your screen resolution the image to the right is roughly the size of the sensor. According to the PAA5100JE datasheet, the chip is advertised as having a tracking speed of up to 45 inches per second and a range of 10-35mm, which seems ideal for hanging from the bottom of a robot. They consider it suitable for use on “carpet, granite, tiles, wooden floors with dry or wet surfaces.”
So I went ahead and sent an email to Pimoroni, suggesting that since the two sensors are very similar it might be possible to develop a new version of their existing carrier board, but using the PAA5100JE.
I discussed the idea of a ground-based optical flow sensor with the DPRG during the weekly video conference and there was a general consensus that this sounded like a great idea, especially for Mecanum-wheeled robots, where wheel odometry is problematic. The group’s president, Carl, is currently building a Mecanum-wheeled robot, and the availability of such a sensor would potentially enable visual odometry.
I received an email from Pimoroni indicating that they were going to develop a new Breakout Garden board using the PAA5100JE, and in March told me they’d send me one prior to it being released. The photo of the boards being fabricated was quite exciting to see. So I waited patiently for the package to arrive in the post from the UK. In New Zealand I think we’re accustomed to waiting for overseas shipments to arrive.
The New Sensor Arrives
Last Friday a small mysterious package arrived containing the new PAA5100JE-Q Near Optical Flow Sensor. When placed next to the original PMW3901 it’s clear they’re related; the only discernible differences seem to be that the PAA5100JE has a different lens, the illumination LEDs seem a bit larger (though this may not be significant), and the SMT components to the right of the lens have been reorganised. Perhaps the nicest thing is that I didn’t have to wait for software to be developed, as the open source library for the PMW3901 seemed to work just fine.
Ground Testing
So I thought the first thing to do was test the new sensor to see how I might use it for visual odometry. The output is simply an x,y value that is pushed from the sensor when it senses motion. But there are no units to that x,y, nor could there be since its camera doesn’t know how far away the image is. We’ve got to help it a bit.
One of the first surprises was that the optical flow varied quite significantly depending on the surface below it, which seems to be related to the level of surface detail.
The sensor was tested on a variety of surfaces: treated pine fence, concrete, unfinished birch ply, hardwood floor, lawn, Persian rug, painted wood, tile, and hardwood deck.
About a week after the sensor arrived I received an email from Pimoroni stating that their engineers had finished the modifications to the Python library and that it was now available. This uses the same library file as the PMW3901, but you import the PAA5100 class instead, e.g.,
from pmw3901 import PAA5100
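For reference, reading the sensor amounts to polling get_motion() in a loop, along the lines of the sketch below. This is adapted from the examples in Pimoroni’s pmw3901-python repository rather than from my own test file; the constructor arguments and the BG_CS_FRONT_BCM chip-select are assumptions that depend on which Breakout Garden slot the board occupies and on the library version.

import time
from pmw3901 import PAA5100, BG_CS_FRONT_BCM

# Assumes the front SPI slot of a Breakout Garden; adjust for your wiring.
flo = PAA5100(spi_port=0, spi_cs_gpio=BG_CS_FRONT_BCM)

tx = ty = 0
while True:
    try:
        x, y = flo.get_motion()    # relative motion since the last read
        tx += x
        ty += y
        print("delta: ({:4d},{:4d})  total: ({:6d},{:6d})".format(x, y, tx, ty))
    except RuntimeError:
        pass                       # no motion data ready yet
    time.sleep(0.01)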
I updated the library on my Raspberry Pi and changed the import statement in my test file, then went ahead and retested the sensor across my nine test surfaces. It turns out this made a significant difference in the results.
PMW3901 vs. PAA5100JE
It’s clear that the ranges of the PMW3901 (80mm to infinity) and the PAA5100JE (10-35mm) represent two distinct design choices. If your robot’s base is close to the ground, that is, between 10 and 35mm, the latter will work just fine.
If you’re building something like a Mars rover or another design where the lowest part of the chassis is more than 80mm from the ground (anything above 35mm is already unsuitable for the PAA5100JE), then the PMW3901 would work better. It’s that span between 35mm and 80mm that may present a design problem: either hang the PAA5100JE lower than the robot’s chassis (reducing ground clearance) or somehow position the PMW3901 up higher on the robot so that it is at least 80mm from the ground. Not ideal in either case.
This blog entry isn’t really finished as I need to do more testing, but the KROS-Core Python library project has been taking up most of my time of late, so I’m publishing it now just to get it out the door (somebody has asked to see it). So I’ll come back to this later when I get a bit more time…
I spent about twenty minutes earlier today with a needle file, carefully and patiently filing away part of a hard metal battery terminal that I’d inadvertently epoxied oh so very slightly too high, such that it was blocking the little tab meant to fit in that hole and keeping me from reattaching the black plastic battery cover of my new Zumo robot. In the photo above you can see that tiny bit of metal peeking up through the hole, with that bit of black plastic to the left less than a millimeter thick. Very fragile polystyrene plastic. One slip and I’d break it.
I’d already filed away some plastic to leave space for the battery wires to exit the battery compartment, which normally fits snug against the PC board. Normally one would just solder the battery terminals directly to the PC board, meaning: permanent connections. I have been avoiding permanent connections. This is not a reflection on my private life, just the Zumo.
To some degree this represents the end of my journey. For over a year now I’ve been searching for a “club robot”, that is, a starter robot with enough appeal for beginners but with enough growth potential that it wouldn’t end up sitting on the shelf, either because its owner lost interest in robotics or because they outgrew it before it could be expanded into a more capable robot.
It’s a bit of a tall order. My requirements included:
low cost: the overall cost of the project should be under NZ$200 (~US$145)
easy to buy: use a hardware platform that if single-sourced would be from a reputable company with good support, or otherwise could be built from readily-available parts using a recipe
easy to build: anyone should be able to build the robot using common mechanical skills and tools
easy to modify: the robot should permit customisation, such as adding new sensors
programmable in Python: preferably programmable in Python rather than C/C++, since the latter’s difficulty and learning curve are significantly higher [one goal of my own robot journey was to learn Python]. Many single board computers and microcontrollers can be programmed in multiple languages; some, like the Arduino, only in C/C++
First Steps
My thoughts for a club robot first went to various small robots such as the MakeBlock mBot, which is an attractive option but targeted at primary school children. They’re simple robots often designed as part of a larger school learning programme. These would excel at the first three requirements but fail on the fourth and fifth. They’re advertised as “entry-level” robots for beginners. Fair enough. But not for this club’s club robot.
The mBot uses a micro:bit, an inexpensive single board microcontroller designed by the BBC (yes, the British Broadcasting Corporation, that BBC). The one thing about the micro:bit that keeps it somewhat in the running is that it can be programmed using either of two graphical user interfaces, Microsoft MakeCode or Scratch, or using an online tool that edits code in Python and downloads it to the micro:bit to execute.
Last year I’d helped some friends buy a micro:bit-based Waveshare AlphaBot2 robot for their daughters, which after batteries and charger ended up north of NZ$250. It’s a nicely designed and manufactured robot, but once the LiPo batteries and a safe battery charger were added it was somewhat too expensive, and it’s not expandable, kind of a fixed entity. There’s not even a place on the robot to attach any additional features. I haven’t asked lately if it’s gathering dust or if the eldest girl, who is now 11, has shown any ongoing interest in the robot. It seemed like a success at the time, but the whole exercise pointed out the likelihood that, without some guidance and mentoring, a lot of kids might feel a bit left adrift with what is a lot more than simply a toy.
But all three of the avenues for programming the micro:bit are very limited. It’s not the kind of tool set that would appeal much to a teenager, much less an adult, so it’s not really suitable as the processor for a club robot. So the AlphaBot with micro:bit was out, and its Arduino and Raspberry Pi brethren too, since the robot is not expandable and doesn’t have the ability to fit motor encoders or additional sensors.
The Zumo Sumo Robot
Then the idea of the Zumo robot came up, though it uses an Arduino, rather old-school and slow nowadays with its 16MHz ATmega32u4 processor, and only programmable in C/C++. That being said, there are numerous code libraries available in C/C++ for Robot Sumo competitions, line-following and many other challenges and techniques. It’s widely known and widely available, so it certainly qualifies as a viable club robot if I’m willing to give up the Python requirement. But I wasn’t, at least not initially.
The default Zumo Robot for Arduino consists of a chassis, the Arduino shield, a pair of motors, a stainless steel blade to push off combatants, and an optional array of infrared sensors to sense the black and white transition at the edge of the Sumo arena. You’d then add sensors to locate the opponent. If you build it exactly according to that plan it’s a robot with a reasonable cost and reasonable build time.
But I wasn’t going to give up on programming it in Python, so I also ordered some other parts that would permit me to swap out the Arduino shield for a Kitronik Robotics Board for the Raspberry Pi Pico. The Pico can be programmed in C/C++ but also in MicroPython.
So I put together an order of Zumo parts from Pololu, as well as the Kitronik/Pico parts from Pimoroni.
What seemed like a very long time later the packages had both arrived.
I’m pretty happy myself working in Python and believe it’s a good job skill, and thought: I wouldn’t want to be tasked with teaching any kids C++. I don’t even like it as a language. So unless there’s some way to program a Zumo in Python it’s not in the running.
So why even bother?
I then remembered a conversation from a few weeks back with David Anderson of the Dallas Personal Robotics Group (DPRG), who, out of what I believe was sincere curiosity, had asked me: “Why are you bothering to try to start a club anyway? Why not simply build your own robots?” I had a nice pat answer about enjoying teaching people things and wanting to share some of the knowledge I’d gained over the years. But that question kinda lodged in my skull and has been festering there since then. It’s a good question.
I believe David explained his own motivation as simply wanting to build some great robots, with the understanding that if people are interested they’ll be inspired by them and get involved in robotics. Lead by example. Seeing David’s robots in action was actually what spurred me to consider, after almost 40 years, getting back into robotics. David seems to be mostly interested in building robots, not trying to teach anyone or lead a group of people. It occurred to me that he’d also asked: “Why do you care if people are involved in robotics or not?” And he had me there. Why do I care? It’s not like I would benefit if more people built robots. There’s this idealistic notion that the world needs more engineers, and it’s a nice thought that I might inspire someone to take up engineering, but that’s pretty indirect. I’m not a school teacher, I don’t have any students. There are currently no NZPRG members except myself. I’m not even sure I have the time to actually lead a local club. And I’m sure David would rightly say that all the time I’d spend running a club would be time taken away from doing the thing I actually enjoy doing, which is building robots.
My Customised Zumo
Today I finally finished the hardware of my Zumo. I could have simply built it according to the plans, which would likely have taken me less than an hour. But I was in my head exploring this idea that with suitable modifications it might suit as a club robot. The amount of time I’d invested should itself answer the question of whether it could pose as a possible club robot: absolutely no way. I wasn’t keeping track of the hours, but it was certainly more than an eight hour day’s work. Pretty silly.
The Zumo robot from Pololu comes in two forms: an already-assembled Zumo 32U4 Robot model and a Zumo Robot for Arduino. With the latter the robot effectively plugs upside-down onto the top of an Arduino microcontroller as a shield, the common name for Arduino accessories. The 32U4 version actually has more features built in, like a little LCD display, motor encoders and line-following sensors, whereas the shield is, in theory, more flexible. That theory turned out to be rather illusory.
But for the Arduino shield version that I’d built, I had to figure out how to add motor encoders, which aren’t part of the deal normally. The motors are held in their own small space in the chassis, with no holes even for the twelve wires to exit. Yeah, twelve wires. Each motor has positive and negative power, plus the four wires (A1, B1, A2, B2) for the encoder. The Pololu encoders come in three varieties, but only the bare board version would seem to fit into the space.
So where to go with twelve wires? If one is going to fit the infrared sensor array it can’t be forward, as the connector fits snugly against the front of the chassis. We can’t go up as that would run into the Arduino shield, nor back as that is the battery compartment. So it had to be down, though that is the ground. There are a few millimeters of clearance between the bottom of the motor chamber and the actual bottom of the robot, so that’s it.
So I managed to find a way to connect the motors and encoders and route the wires up to the controller.
This also meant that the twelve wires and two battery connections (now a black and red wire) couldn’t be soldered to the Arduino shield either; I’d have to use DuPont connectors.
So… long story short, I ended up after many hours with an Arduino Leonardo atop the Pololu Zumo shield, all connected with a bunch of wires. The infrared array plugs very cleanly underneath the shield. But I’ve not yet added any ability for the robot to see Sumo opponents, like a pair of Sharp analog infrared sensors or an ultrasonic sensor.
The End
Yeah, also a song by The Doors. But having mostly-completed the hardware for this Zumo brought me to what had been that question at the back of my head all along: why am I doing this? It was all in mind of this “club robot” idea.
I have several much larger, more complicated robots that are outfitted with a Raspberry Pi and lots of sensors, and I’ve been happily programming them in Python. Over the past few years I’ve developed a fair bit of code in support of the hardware, and continue to work on that. Was I now going to start over in C++ on this Zumo? Really?
What would be perhaps an ideal platform for someone else now just felt like a burden. As I said earlier, I don’t even like programming in C++, and while there are available libraries for the Zumo, I’d be treading on territory others have passed through long ago. I’d be going from a robot with more than a dozen sensors to one where even adding a few is rather more difficult. Why would I do this?
What’s after The End?
So what’s next? Well, I could sell the Zumo. It’s perfectly functional and has motor encoders. There’s also the idea that I could swap out that Arduino shield for the Kitronik Robotics Board and program the Raspberry Pi Pico in MicroPython.
I’d basically be putting the shield and the Leonardo in a plastic storage box and never looking at it again, which seems a bit of a waste. But this was an experiment, a thesis, and my thesis has borne its fruit: the knowledge that while a robot based on the Zumo Robot for Arduino is a perfectly fine small robot for anyone who wants to program in C++ and compete in Mini Sumo competitions — which is what it’s designed for — it is difficult to modify. If one wants to use odometry for navigation the readymade Zumo 32U4 Robot already has motor encoders built in, and would therefore be more suited for more advanced robotics, something beyond a Sumo competition.
But to fulfill my set of requirements, a robot based on the Zumo chassis still remains a real possibility, using a different controller. I’ll explore that in a future post.
This is the first in a multi-part series about the KD01 Robot.
As I described in my series of posts on the KR01 robot, I got back into robotics after watching a YouTube video of the DPRG‘s David Anderson demonstrating his robots, talking about the details of their design and how they work. I’m very happy to say that David and I are now friends and he’s been mentoring me and answering some of my more tedious questions with the patience of a very good teacher.
The KR01 robot uses a motor controller running a pair of motors on the port side and a pair of motors on the starboard side. The motors are wired up in parallel, so the four motors are treated (electrically) like a single pair of motors. But performance-wise they don’t act like two motors.
If I program the KR01 to rotate in place, how it performs depends a lot on what surface the robot is on, as with four motors the robot relies upon the slippage of its black silicone rubber wheels to permit it to rotate at all. And how the robot’s weight is distributed influences which wheels slip and which don’t. On a relatively sticky surface like a wood floor the KR01 visibly shudders as it tries to rotate in place. If it rotates continually it tends to move randomly across the floor.
All of this is due to those four wheels, which provide a lot of stability and are great for four-wheeling out on my front deck, but not so great for precise navigation, or rotating in place. The solution for this is a different robot design, called a differential drive robot.
At about 2:30 in David’s video he says:
“The advantage of differential drive is a zero turning radius. Zero turning radius is what we humans have, so if you want to build a robot that can run around in the same space as humans run around in, it’s a useful thing to do. And that means we can turn around pretty much in our own space. There’s some tremendous advantages to being able to do that. It greatly simplifies your navigation software if you don’t ever have to back up.”
The other part of the design requirement for a differential drive robot being able to turn in place is that the full extent of the robot fits within a circle, with its tail wheel placed on the perimeter of that circle. If the tail wheel or the robot frame extends beyond that circle the robot won’t be able to turn in place without bumping its big bum into something.
Now, I’ll still be using the KR01 for many of my robotics experiments, but I wanted to try out a differential drive robot. I realised the design I now had in my head looked remarkably like something I’d seen before. David’s robots are typically differential drive designs, such as his SR04, the robot that inspired me to start building robots again, and it now seems a bit like the older brother to my new robot, which I’ve dubbed the KD01, where the “D” is for Differential. I still can’t think where the “K” came from…
KD01 Requirements
So in addition to desiring a differential drive robot, I thought the new robot should be able to re-use wherever possible the existing ideas, components and parts from the KR01, including its motor controller and drive train: the OSEPP 9V 185rpm motors, anodised aluminum wheels, silicone rubber tires, and Hall effect motor encoders. As the encoders are designed to be mounted on the same beam as the motors, this entailed even the same OSEPP/MakeBlock chassis components.
The KR01’s four motors and four wheels permit a substantial carrying capacity, but its Makita 18v 6.0Ah power tool battery weighs 760g (1.7lb). By comparison, the KD01 without batteries weighs 1525g; having a battery that weighs half as much as the robot seems a bit extreme. Also, two drive wheels and half the number of motors doing all the work suggested something a bit lighter, perhaps the Makita 12v 2.0Ah power tool battery, which weighs 263g, a savings in both weight and size.
The KR01’s Integrated Front Sensor (IFS) supports five Sharp 10-150cm analog infrared sensors and three pairs of small lever switches attached to a clear polycarbonate plastic bumper. I definitely didn’t want to have to build this another time if I could reuse it.
To be able to mount the IFS requires chassis compatibility with the KR01. Thankfully this is simple, as the IFS is attached to the front of the robot with two 5mm stainless bolts and to the Raspberry Pi via a single five-wire I²C connector. With this hardware compatibility I can transfer the IFS between the two robots and use much of the same sensor software I’ve already developed. All this will save me a lot of time and trouble. Re-use is good.
I thought about buying a fancy German-made caster, but decided to build my own from a spare OSEPP wheel, mounted with some of the roller bearings (that came with their tank robot kit) to a commercial caster swivel mount, even using a pair of chrome guitar strap buttons as spacers. The end result is quite tall, but means that the KD01 will use three wheels of the same size, with no small back caster that might reduce the floor clearance.
The tall back caster required a stepped platform with enough clearance for the caster to swivel 360 degrees. With the battery mounted just slightly behind the wheel axis, just getting everything onto the robot platform was a challenge. I ended up locating the ThunderBorg motor controller and the board holding the connections to the encoders underneath the battery, with the Raspberry Pi at the back of the robot, as was done with the KR01.
With all that I took to paper. Design is largely juggling all the requirements and constraints, and this was no exception.
The KD01 Differential Drive Robot
Things came together pretty quickly once I’d finalised the design on paper. I cut the two chassis pieces out of 3mm black Delrin plastic, using L-shaped aluminum for other parts of the frame and to provide a bed for the battery above some of the electronics.
The boards are held together using 3mm Lynxmotion anodised black aluminum hex standoffs. These are lightweight and strong, and permit the robot to be disassembled easily. I’ve been using M3 (3mm) stainless nuts and bolts for pretty much everything now.
The photo above shows the Raspberry Pi 3 B+ and a Pimoroni Breakout Garden hat at the rear (left), the Integrated Front Sensor at front (right), and the PiBorg ThunderBorg motor controller underneath where the battery sits. It’s currently sporting an Adafruit BNO085 IMU on a sparkly red nylon mast (made from a cutting board), and another Adafruit FXOS8700+FXAS21002 IMU mounted on a small black breadboard, affixed to the robot using small velcro dots. I’m currently doing some experiments with two IMUs. I also decided it was helpful to have a tiny Adafruit 135×240 Mini PiTFT display screen, as I had one in my inventory, so this was mounted on a 3mm Lynxmotion standoff using some bits of L-shape aluminum to provide a 3D swivel adjustment.
While the drive wheels and caster fit within a 212mm circle, the Integrated Front Sensor does extend outside of the circle, but not by much. This does mean that rotating in place could potentially catch the edge of the polycarbonate bumper on something as the robot turns. David’s SR04 is similar in this regard. He solved this in a later robot called the RCAT, which uses a curved plastic front bumper that fits within the extent of that circle. A great design.
Rotate in Place
Well, how did we do mechanically? I took the robot out onto the front deck and tried it out with a rotate-in-place.py test. On the deck the robot’s wheels hitting the spaces between its boards seemed a distraction, so I tried it again on a piece of smooth particle board. The result was posted to the NZPRG YouTube channel:
A Problem Exposed
Running a test sometimes exposes other problems. This was evident when running the KD01’s port wheel counter-clockwise: the encoder outputs were way out of range. Normally (such as on the two encoders of the KR01 or the starboard encoder of the KD01) there’d be almost exactly 494 steps per wheel rotation, but the port encoder on the KD01 was registering an inconsistent 337-342 steps per rotation, and only when running counter-clockwise. I pulled out the trusty ol’ Iwatsu oscilloscope and wired it up to the encoder outputs with the robot on the bench.
The only thing that seemed obviously different about this encoder is that the offset between the A and B waveforms’ falling edges was very close together.
In the image on the left you can see that the distance between the falling edge of the A (top) and B (bottom) waveforms almost coincides. This means that when the quadrature measurement occurs, the alternate waveform may not be in its opposite state and the tick count won’t change. On the other three encoders the two waveforms are significantly more offset.
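To make that concrete, the sketch below shows a stripped-down 2x quadrature decoder using the pigpio library (the pin numbers are hypothetical, and this isn’t the actual KR01/KD01 code): on every edge of channel A it samples channel B, so when the two waveforms are roughly 90 degrees apart B is stable and unambiguous, but when the edges nearly coincide, as on the faulty encoder, B can be caught mid-transition and ticks get lost.

import pigpio

PIN_A, PIN_B = 17, 18              # hypothetical BCM pins for one encoder

pi = pigpio.pi()
pi.set_mode(PIN_A, pigpio.INPUT)
pi.set_mode(PIN_B, pigpio.INPUT)

ticks = 0

def _on_edge_a(gpio, level, tick):
    # Sample B at each edge of A; whether B leads or lags A determines
    # the direction of rotation, and hence the sign of the count.
    global ticks
    b = pi.read(PIN_B)
    if level == 1:                 # rising edge of A
        ticks += 1 if b == 0 else -1
    elif level == 0:               # falling edge of A
        ticks += 1 if b == 1 else -1

cb = pi.callback(PIN_A, pigpio.EITHER_EDGE, _on_edge_a)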
It’s good to have some friends with experience. In this case, on today’s video conference with the DPRG I asked Carl, Doug, David and the other experts and they totally knew the problem was with that offset, or the lack thereof.
So I turned the KD01 upside down, pulled the little disk magnet off of the motor shaft, and very very carefully bent one of the tiny three-wire encoder sensors very very slightly, maybe a tenth of a millimeter. While its distance to the magnet seemed okay, it wasn’t quite parallel with the surface of the magnet. I then put everything back together, wired things up again and guess what? That fixed it. The image on the right shows the waveform after the adjustment. The offset between the two waveforms is now almost exactly 90 degrees, and the edge transitions now occur a long way from each other.
To check, I hand-rotated the port and starboard wheels while counting the encoder ticks with my motors_test.py program. The port encoder now turns -4941 ticks over ten clockwise turns of the wheel, and +4939 ticks over ten counter-clockwise rotations. Or 494 ticks per wheel rotation in either direction.
My exposure to the Nuvoton MS51 microcontroller came about due to its use in a number of Pimoroni Breakout Garden products, initially their IO Expander board, which provides six PWM/digital and eight analog IO pins, programmed using a Python library on a Raspberry Pi using a single I²C connection. Very handy.
Part of this exploration has been mere curiosity about the MS51, which is essentially a 1980s-era technology still being used almost 40 years later. The Nuvoton MS51 is an MCS-51/8051-compatible microcontroller, using the same 8 bit architecture.
The bright red board shown at the top of the post is a bit misleading as it’s actually two boards: on the right is a USB linker board, which is used to connect to and program the MS51 microcontroller, which is on the left. These two boards can be cut apart and reconnected via the 10 pin connectors you can see in the middle. Many of the Nuvoton boards have made this even easier by perforating down the center so they can be easily broken apart. The MS51 processor itself is on the left side, that small surface mount black thing surrounded by pins labeled Nuvoton N79E85JALG, actually 4x6mm. On the IO Expander the MS51 processor is only about 3mm square. More on size below.
Now, I’m quite happy with continuing to program my robot in Python and wouldn’t want to go back to either C/C++ or assembly language, the latter of which I last touched in about 1981. But in my conversations with the DPRG we’ve talked about distributed processing for robots, that is, using something like a Raspberry Pi as a main brain while distributing the sensor and control tasks to various sub-processors. In a recent conversation this was likened to an octopus, which apparently has about two-thirds of its 500 million neurons in its arms. The sensor boards are generally small — about the size of a postage stamp — but the sensors on these boards often themselves also have a built-in microcontroller, like the ST VL53L1X, a Time-of-Flight sensor that’s about 5×2.5mm.
Last year Pimoroni released a pair of products that internally use the MS51, both RGB LED knob controllers: an RGB Encoder and an RGB Potentiometer. Recently they even added another MS51 implementation, a tiny Super Dinky Blinky (an LED blinker), even providing a github link with instructions on how to hack/reprogram the device. So it’s clear Pimoroni have some engineers on staff who like the MS51 as a general-purpose microcontroller.
Well, nobody answered so I checked out the Nuvoton website and found a plethora of development boards, using ARM Cortex M0, M4, M23 and 8051-compatible processors in “Tiny” and “Maker” (Arduino UNO style) form factors. There’s also a suite of development tools.
It looks like both their hardware and software is quite well-documented, with a lot of PDFs on site. Each board is provided with hardware specifications and information on its Software Development Kit or SDK (e.g. the one for the board shown at the top of this post, the NuTiny-SDK-N79E715). The Taiwanese-English is pretty good, and their product line looks to be very extensive. Not a small company.
Some of the Tiny-style boards are truly tiny, and their Maker-style boards (e.g., NuMaker-PFM-M2351) look like nice Arduino replacements, e.g., 64MHz, operating voltage 1.6-3.6v, memory up to 512KB of flash and 96KB of SRAM. Lots of pins and an UNO-compatible USB link connector. Even an on-board WiFi module. Like Arduinos, the processors themselves are typically surface-mounted but are also available in DIP packaging should someone want to experiment with one on a breadboard.
The ARM Cortex-based processors would be Arduino-like, whereas the 8051 boards (e.g., NuTiny-N76e616) use the MS51/8051, back-to-the-future of the 1980s. Everything looks to be about US$25.00.
Software Development on the MS51
The link to the list of Nuvoton development tools includes IDEs for all of its Cortex M0/M4/M23 and 8051 controller boards. This does actually include a customised version of the Eclipse IDE called NuEclipse, with distributions available for both Windows or Linux. It’s based on a rather old (2015) version of Eclipse “Mars” but seems functional.
I downloaded and installed NuEclipse for Linux, which seems to be Eclipse customised for C/C++ development, comes with a GCC OpenOCD installer, code support for the 60-odd boards Nuvoton sells, user manuals and some sample files.
There are also three KEIL IDEs (one for US$395, one commercial, one free) for the Cortex controllers, and two different IAR IDEs for the M0/M4/M23 and 8051 processors, respectively. I downloaded the free version but it unfortunately won’t run on the only Windows machine I have available.
As I’m mostly interested in the 8051 I went to the IAR site, searched for the NuTiny-N76E885 board (which is currently on 90% discount for US$2.50!), then downloaded the IAR EW8051 Embedded Workbench IDE, available for either a free 30 day trial or a 4K code size-limited installation. When you first open the IDE you have a choice of registering online for the Evaluation Copy. It won’t operate until registered.
On starting the IAR IDE it looks a bit like an older, customised version of Eclipse, but basically functional. It doesn’t seem like there’s any way to connect it to the Pimoroni IO Expander that uses the Nuvoton MS51, so if I want to further investigate I’ll have to consider getting one of the Nuvoton controller boards.
I’ve also contacted their sales department to see about the price for purchase of the IAR IDE for a quantity of one. The installer included a “dongle driver” so I’m hoping they don’t use dongle-based license management (yuck). At least for the trial license there’s no dongle. If the commercial price of the KEIL IDE ($395) is any indication, the IAR one might be rather expensive. I hope to get a quote from their sales rep, otherwise I’ll be limited to 4K code files.
A Brief Review: Size Matters
I won’t pretend to be thorough here, as I don’t want to be unfair to Nuvoton. I received my order for the Nuvoton Tomato and NuMicro 8051 NuTiny boards. I also received a Nuvoton NuMaker-RTU-NUC980 Chili (on sale for US$14.50) which I may write about in the future.
This might have been a case where I hadn’t read the specifications very clearly, except there wasn’t anything specific on the product pages about size. One looks at photos online that don’t have a coin or a hand next to them for context and makes assumptions. My first impression on opening the packages is that the Nuvoton boards are hardly “tiny”. They seem overly large, with lots of empty space. Apparently the Nuvoton designers didn’t think size matters.
The Nuvoton Tomato uses a single-core 32-bit ARM926EJ-S NUC976DK62Y microprocessor with a clock speed up to 300MHz. The ARM926EJ-S is circa-2001 technology, no longer under active development but still used in industry. By comparison, the circa-2016 Raspberry Pi 3 B+ has a much beefier four-core 64 bit ARM BCM2837B0 Cortex-A53 microprocessor running over four times faster at 1.4GHz. The Tomato is targeted at the Internet-of-Things (IoT) market, is a low-power device and has an embedded Linux OS which apparently has an optimised Java JVM. It also has the connectors to allow an Arduino shield to be plugged onto the top of it. Nice. It’s probably not fair to compare the Tomato and the Raspberry Pi. Apart from what’s on the boards, it’s impossible to ignore the fact that the Tomato is big at 67x120mm. But if I were thinking of building a Linux-based, Arduino-compatible robot using Arduino shields (like the Adafruit Motor/Stepper/Servo Shield), I’d certainly give it a whirl. It’s an interesting hybrid.
The bigger surprise was the 8051 “NuTiny”. Bad name really; it may be “Nu” but it’s by no means Tiny. The board is comprised of two parts: the 8051 processor board and a Nu-Link-Me (the Nu pun wears off quickly) board that’s used to connect and program the 8051 processor. But even with the Nu-Link-Me board broken off, the 8051 board itself is still 35x52mm, compared with the Itsy Bitsy M4 Express at 18x36mm. If for convenience’s sake we leave the boards together the NuTiny is wider than a Raspberry Pi. Whereas the 8051 CPU chip is only 4x6mm (about half the size of the Itsy Bitsy’s ATSAMD51), the carrier board is over five times the size of the Itsy Bitsy, and doesn’t even come with any mounting holes, despite all that wasted PC board space. A bit of a shame, really. None of this really disqualifies it from use on a robot, though. And I am interested in playing with the 8051 processor, if doing so isn’t too inconvenient.
Summary
As I mentioned above, I have no plans to build a robot based upon one of these microcontrollers, though that’s perfectly feasible and indeed a pretty common approach. I’m mostly interested in distributing my robot’s tasks to sub-processors; the Arduino compatibles (including Tiny, Trinket, Feather, Particle, Itsy Bitsy, etc.), the MS51-based Pimoroni IO Expander, and potentially any of the Nuvoton ARM Cortex or MS51-based development boards are all suitable contenders for that purpose.
[No, not detecting cheap blobs, but rather blob detection on the cheap.]
Way back in mid-May (seems a very long time ago now), I’d done a bit of exploring using the default Raspberry Pi camera in search of a possible solution for the DPRG Four Corners Competition. The challenge is to have a robot trace out a large square on the ground marked with orange traffic cones/pylons. Even with well-tuned odometry it’s a difficult trick, as the requirements include that the square the robot traces must align with the four markers of the course. You can’t just travel off in any direction, make three 90 degree turns and come back to the start; you have to be able to aim at that first marker accurately. There’s the rub: how to aim at something.
So to address that challenge I’d mounted a Raspberry Pi camera to the front of my KR01 robot and posted a 5mm pink LED as a sentinel three meters in front of the robot. Would it be possible for the robot to aim at that pink LED? The preliminary results from that experiment were largely positive: the robot could at least see the LED at a distance of 3 meters, even in daylight. What I’d managed to achieve, with a deliberately very low resolution image, was detection of a blob of pixels closely matching the hue of the pink LED.
The next part would be how to aim the robot at that blob. This post further explores that solution, still just using the Raspberry Pi camera.
Building an LED Beacon
First, one thing about LEDs is that they’re largely directional, a bit like a small laser encased in plastic. Some use translucent rather than transparent plastic to diffuse the light a bit, but if the LED is facing away from the robot it will basically disappear from view, except for perhaps its light shining on something else, which wouldn’t be helpful (see more on floor reflections, below).
As you do, there was this bit of plastic I’d been saving for what, over ten years? that I kept trying to find a use for. It was the diffuser off of a pretty useless LED camp light, useless as it was large, didn’t put out that much light, its plastic case had gone sticky, and I was already carrying a LED headlamp that was smaller and brighter. So I discarded the rest and kept the omnidirectional diffuser. I also had this rather beautiful polished aluminum hub from an old hard drive, and it turns out the plastic diffuser fit exactly onto the hub.
So I wired up a small PC board with three pink LEDs and a potentiometer to control the voltage to the LEDs, and built a small frame out of 3mm black Delrin plastic to hold it and the USB hub providing the 5 volt power supply. Another thing just lying around the house…
I added a tiny bit of cotton wool inside to further diffuse the light, so it would show up on the Pi’s camera as a pink blob. I opened several images taken from the robot in a graphics program and noted the RGB values of various pixels within the pink of the beacon. One of these colors would be the “target color”.
Blob Detection
Now the bright idea I had was that we didn’t really need much in terms of image data to figure out where the pink blob was.
Given my robot is a carpet crawler, we can make some relatively safe assumptions about the environment it will be in, even if I let it out onto the front deck. Lighting and floor covering may vary, but my floors and the front deck are largely horizontal. Since the floor and the robot are mostly level, we can therefore expect to find the beacon near the vertical center of the image, and safely ignore the top and bottom of the image. All we care about is where, horizontally, that blob appears.
I’ll talk more about the Python class I wrote to handle the blob processing below.
I placed the beacon on the rug, plugged it into a power supply and pointed the robot at it. The Raspberry Pi camera’s resolution can be set in software. Set at High Definition (1920×1080) we see the view the robot sees, with the pink beacon exactly one meter away from the camera:
Now, for our purposes there’s no need for that kind of resolution. It’s instead set at what seems to be around the Pi camera’s minimum resolution of 128 x 64 pixels. While irritatingly small for humans, robots really don’t care. Or if they do care, they don’t complain.
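For the curious, capturing at that low resolution with the picamera library looks roughly like the sketch below (an illustration, not the actual Blob code); picamera pads the requested resolution up to multiples of 32 horizontally and 16 vertically, which makes 128 x 64 a convenient choice.

import picamera
import picamera.array

with picamera.PiCamera() as camera:
    camera.resolution = (128, 64)
    with picamera.array.PiRGBArray(camera) as output:
        camera.capture(output, format='rgb', use_video_port=True)
        pixels = output.array      # a 64 x 128 x 3 numpy array of RGB values
        print(pixels.shape)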
The Blob Class
The Python class for the blob detector is just named Blob (and available as with the rest of the code on github). In a nutshell it does the following:
Configure and start up the Raspberry Pi camera.
Take a picture as a record, storing it as a JPEG.
Process the image by iterating over each row, pixel-by-pixel, comparing the color distance (hue) of each pixel with the target color (pink). Each pixel’s color distance value is stored in a new array the same size as the image. We enumerate each color distance value and assign a color from a fixed palette, printing a small square character (0xAA) in that color. We do this row by row, displaying a grid-like image representing color distance. Note that there’s a configuration option to start and stop processing at specified rows, effectively ignoring the top and bottom of the image.
Sum the color distance values of each column, reducing the entire image to a single line array.
Enumerate the distribution of this array so that there are only ten distinct values, replacing each element with this enumerated value. We also use a low-pass filter to eliminate (set to zero) all elements where there wasn’t enough color match to reach that threshold. Like the color distance image we print this single line after the word “sum” in order to see what this process has turned up.
Find all the peaks by determining which of the values in this array are the highest of all, given a threshold, and again print this line after the word “peaks”.
Find the highest peak: If there are multiple peaks within 5% of the image width of each other we assume they’re coming from the same light source, so we average them together, again drawing a line of squares following the word “peak”.
Return a single value from the function (the index of the highest peak in the array), considered as the center of the light source. If there are too many peaks or they’re too far apart we can’t make any assumptions about the location of the beacon and the function returns -1 as an error value.
The result of this is shown below:
So if the image resolution is 128 x 64, the result will either be a -1 if we can’t determine the location of the beacon, or a value from 0 to 128.
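For anyone who wants the gist in code, the following is a condensed, illustrative sketch of the middle of that pipeline (hue distance, column sums, low-pass filter, peak selection), written from the description above rather than copied from the actual Blob class; the target colour, the thresholds and the omission of the multi-peak averaging are all simplifications.

import colorsys
import numpy as np

TARGET_RGB = (151, 55, 180)        # the pink sampled from the beacon images
THRESHOLD  = 0.5                   # placeholder low-pass cutoff (fraction of max)

def find_beacon_column(pixels, target_rgb=TARGET_RGB, threshold=THRESHOLD):
    # pixels: an H x W x 3 RGB array, e.g. 64 x 128 x 3 from the Pi camera.
    h, w, _ = pixels.shape
    target_hue = colorsys.rgb_to_hsv(*(c / 255.0 for c in target_rgb))[0]
    score = np.zeros(w)
    for row in pixels[h // 3 : 2 * h // 3]:           # ignore top and bottom rows
        for col, (r, g, b) in enumerate(row):
            hue = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)[0]
            d = abs(hue - target_hue)
            d = min(d, 1.0 - d)                       # hue wraps around the wheel
            score[col] += max(0.0, 1.0 - d / 0.5)     # closer hue, higher score
    if score.max() < 1.0:                             # nothing close to the target hue
        return -1
    score[score < threshold * score.max()] = 0.0      # low-pass filter
    return int(np.argmax(score))                      # column index of the peak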
So what to do with this value? If the robot is aiming straight at the beacon the value will be 64. A lower value means the beacon is off to the left (port); higher than 64 means the beacon is off to the right (starboard). I’m playing around right now with figuring out what these values mean in terms of turn angle, comparing what 0 and 128 represent against an image taken from the robot of a one meter rule at a distance of one meter (see above).
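As a rough first cut, and assuming the camera’s horizontal field of view is known (the v2 camera module is nominally about 62 degrees; my camera and lens may differ), the column index can be converted to an approximate bearing. This is a linear approximation; strictly the angle is atan(offset * tan(fov/2)), but at these angles the difference is small.

H_FOV_DEG   = 62.2                 # assumed horizontal field of view of the camera
IMAGE_WIDTH = 128

def bearing_from_column(column, width=IMAGE_WIDTH, h_fov=H_FOV_DEG):
    # Approximate bearing to the beacon in degrees:
    # negative to port, positive to starboard, 0.0 when dead ahead.
    offset = (column - width / 2.0) / (width / 2.0)   # -1.0 .. +1.0
    return offset * (h_fov / 2.0)

# e.g. bearing_from_column(64) -> 0.0, bearing_from_column(128) -> about +31 degrees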
One of the things we’ve talked about in the weekly DPRG videoconferences is the idea that the difference between two PID controllers’ velocities, as an integral, can be used for steering. Or something like that, I can’t pretend to understand the math yet. But it occurs to me that the value returned from the Blob function, as a difference from center, could be used to steer the robot towards the beacon.
Performance & Reliability
In my earlier tests there was clearly an issue of performance. While the Blob processing time is pretty quick the time the Pi camera was taking to create the image was around 600ms for that 128 x 64 pixel image, with of course more time for larger images. It turns out this was due to a bug in my code: I was creating and warming up the camera for each image. The Pi’s camera can stream images at video speed, at least 30 frames per second, so I’ve got a bit of work to figure out how to grab JPEG images from the camera at speed. So the code posted on github as of this writing is okay for a single image but can’t run at full speed. Until I fix the bug. I’ll update this blog post when I do.
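One likely fix, untested here but the standard picamera approach, is to create and warm up the camera once and then stream JPEG frames from the video port with capture_continuous, along these lines:

import io
import time
import picamera

with picamera.PiCamera(resolution=(128, 64), framerate=30) as camera:
    time.sleep(2)                              # warm up once, not once per frame
    stream = io.BytesIO()
    for _ in camera.capture_continuous(stream, format='jpeg', use_video_port=True):
        jpeg_bytes = stream.getvalue()         # hand this frame to the Blob processor
        stream.seek(0)                         # reset the stream for the next frame
        stream.truncate()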
And as to reliability, the pink LED beacon is not very bright. In high ambient light settings it’s more difficult for the robot to discern. Some of the DPRG’s competitions, which are usually held either outdoors or indoors under bright lighting conditions, use a beer can covered in fluorescent orange tape. Since the Blob color distance method is designed for hue, if the robot were running in a bright room perhaps an orange beer can would work better than a pink LED beacon. If we tuned that algorithm to also include brightness, perhaps a red LED laser dot might work. All room for experimentation.
I’d noted early on that it was somewhat easy to fool the image processor. My house has a wide variety of colors, rugs, books, all manner of things. That particular pink doesn’t show up much however, and if I threw away the times when multiple peaks showed up, basically giving up when I wasn’t sure, then the reliability was relatively high. If the beacon’s light is reflected on another surface (like my couch) the robot might think that was the beacon, so I need to make sure the beacon is not too near another object. I also noted that when placing the beacon on a wood floor the reflection on the floor would show up on camera, but this actually amplified the result, since the reflection will always be directly below the beacon and we’re only concerned with horizontals.
Next Steps
There’s plenty of room to both optimise and improve the Blob class. I’m still getting either too many false positives or abandoning images where the algorithm can’t make out the beacon.
It occurs to me also that we don’t have to train the Blob class to look for pink LEDs or orange cans. It would be relatively trivial to convert it to a line follower by aiming the camera down towards the front of the robot, altering the color distance method to deal solely with brightness, and using the returned value to tell the robot the location of the line.
While I’ve lately been mostly focused on my KR01 robot, I’ve also been planning a Mecanum-wheeled robot to be called the KRZ02. I long ago decided my robots would incorporate odometry, counting the rotations of each motor in order to track the robot’s location. This is accomplished by either an optical or Hall-effect encoder connected to the left and right motor shafts. On the KR01, since I know the gear ratio of the motor, the size of the wheel (and that it travels 218mm per rotation), and that the encoder sends 494 steps per rotation of the wheel, I can calculate there’ll be 2262 steps per meter.
To be honest, I didn’t actually come up with such a formula but simply measured this by repeatedly having the robot move exactly one meter forward. In science this is called an observation. Observing is generally easier than calculating, but is still a valid means of finding things out. You can make calculations without having a robot, but if your calculation (the model) is flawed or incomplete it won’t be accurate, whereas careful observation of the robot while in the actual working environment can be quite accurate. My guess is that if I had gone to the trouble to develop an odometric formula I wouldn’t have ended up with a value of 2262 steps per meter, as somewhere in the mix a concrete, physical system often introduces variables that haven’t been accounted for in the model.
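For what it’s worth, the naive calculation from the rounded figures above lands within a fraction of a percent of the measured value, with the difference presumably down to rounding of the wheel circumference and all those real-world variables:

STEPS_PER_ROTATION = 494           # encoder steps per wheel rotation
MM_PER_ROTATION    = 218           # wheel travel per rotation (rounded)

steps_per_meter = STEPS_PER_ROTATION * 1000.0 / MM_PER_ROTATION
print(round(steps_per_meter))      # ~2266, versus the 2262 observed on the floor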
But back to the point. With knowledge of how many steps the motor encoders return I can calculate both the velocity and distance traveled for each motor, and therefore with a reasonable degree of accuracy where the robot is located from its starting location. David Anderson of the DPRG has made odometry into a fine art, and his robots can travel great distances in both complex indoor environments and even across difficult terrain up in the mountains and return to within inches of their starting locations. I find David’s work very inspiring.
But all this clever odometry falls apart when using Mecanum wheels, which have a series of rubber rollers that each have an axis of rotation at 45° to the wheel plane and at 45° to the axle line.
Whereas a traditional wheel translates its energy to the ground in a rather predictable way, the rotation of a Mecanum wheel interacts in complex ways with the other Mecanum wheels. I’m sure there’s a mathematical formula for this, one that would involve the direction and rotational velocity of each wheel, the total weight and weight distribution of the robot, the hardness of and therefore how much of each roller is in contact with the ground, the traction and rolling friction of the wheel rollers, the friction due to contact with the ground, roller slippage, and probably another half dozen unknowns. Call me lazy but my life is too short to even consider trying to create that formula and develop sensors for the various parameters of that equation. And we’re back to that abstract model versus concrete observation issue. If the goal is accurately tracking the robot’s movement over the ground (odometry), then we need to come up with another method.
Flying drones can’t do motor-based odometry but instead use a specialised camera called an optical flow sensor, something we might call a camera-as-sensor. An optical flow sensor’s camera looks down at the ground, generating a series of image frames, calculating the distance the drone has moved across the landscape by tracking the differences in position between image frames. This is returned as an x,y value at the frame rate of the camera. With both a GPS unit and a LiDAR providing a distance to ground measurement it’s possible to accurately perform odometry in mid-air.
Now, my robot lives on the ground but there’s no reason we can’t try a similar trick. The actual sensor I’m using is from PixArt Imaging, designated the PMW3901MB-TXQT, and is all of 6 x 6 x 3 mm in size, using 6 milliamps of current. That’s including the camera and all of the electronics. This reminds me of the sensor used in the VL53L1X Time of Flight sensor, which is a LiDAR the size of a grain of rice.
These sensors are generally sold on a carrier board so that they can be integrated into a commercial or hobbyist application. The carrier board I’m using is from Pimoroni, called the PMW3901 Optical Flow Sensor Breakout, part of their Breakout Garden series of sensors. The cost is about what we pay for two meals at a local Indian restaurant. You can solder a 7 pin header to the carrier board or simply plug it into an SPI socket on a Breakout Garden board.
The PMW3901 has a frame rate of 121 frames per second and a minimum range of about 80mm. I’m hoping to mount it looking down from the underside of the robot’s upper board, which will be just above that 80mm minimum. The problem is twofold: how to provide the sensor’s camera with a clear view of the ground, and how to mount it in the center of the robot. This is necessary so that when the robot rotates around its center the sensor won’t register any absolute movement, just a rotational theta.
On the KRZ02 I’m using 48mm Mecanum wheels made by Nexus Robot. They’re a high quality steel framed wheel with some rather solid brass hubs and a load capacity of 3kg. When mounted on a Pololu Micro Metal Gearmotor the bottom of the robot’s 3mm thick Delrin plastic lower board is 30mm from the ground. This means the PMW3901 needs to be at least 47mm above that board.
I built a test rig frame out of a cheap nylon chopping board. A photo of the rig can be found at the top of this post. This holds the PMW3901 at a distance of 90mm from the ground, looking through a 50mm hole cut into a lower board. 50mm is the biggest hole cutter I have:
I used a Raspberry Pi Zero W and wrote a Python library and test file. In the test, the PMW3901 sensor returns positive and negative x and y values as it senses movement. Based on those values I’m setting the RGB LEDs to red for a movement to port, green for starboard, cyan for forward and yellow for aft/reverse.
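The test logic amounts to little more than the sketch below; the colour_for_motion() mapping is mine, the set_led() call is hypothetical, and which sign of x or y corresponds to port or forward depends entirely on how the sensor is mounted.

def colour_for_motion(x, y):
    # Map the sign of the sensor's x/y deltas to an indicator colour:
    # port=red, starboard=green, forward=cyan, aft=yellow (RGB tuples).
    if abs(x) > abs(y):
        return (255, 0, 0) if x < 0 else (0, 255, 0)
    elif y != 0:
        return (0, 255, 255) if y > 0 else (255, 255, 0)
    return (0, 0, 0)               # no motion: LEDs off

# in the test loop:
#   x, y = flo.get_motion()
#   set_led(colour_for_motion(x, y))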
My initial tests quickly proved faulty. The white plastic of the cutting board was providing all sorts of reflections of ambient light as well as the PMW3901’s illumination LEDs, and this confused the sensor to no end. Even at rest it would be indicating what seemed to be random motion. I taped some black matte paper to the board and this almost entirely eliminated the problem. I won’t be using white nylon on the robot but rather black Delrin plastic, which I can sand to a dull matte finish, so hopefully this will be sufficient. If not, there’s a rubberised black fabric used in photography that reflects almost no light.
The following video shows the test rig in operation.
As I noted in the video, the results indicate the test is a success, i.e., mounting a PMW3901 Optical Flow Sensor at about 90mm from the ground can provide odometry information for a ground-based robot.
With this important test out of the way I can now finish the plans and begin building the KRZ02 robot.
Credit where credit’s due: the beautiful fabric you can see in the photos and video is actually a reusable grocery bag designed by one of my favourite New Zealand artists, Michel Tuffery, as part of the Paper Rain Project.
This is just a little ditty that I will update as things progress. It comes about from conversations on the DPRG mailing list about one of their commonly held robot contests, called the Four Corners Competition. Here’s the actual definition from April of 2018:
Objective: The robot will travel a rectangular path around a square course. The corners of the course will be marked with a small marker or cone. Before the robot makes its run, a mark or sticker will be placed on the center front of the robot and on the floor of the course. The objective is to minimize the distance between the two marks at the end of the run.
David Anderson pointed out that this contest goes back to a 1994 University of Michigan Benchmark (called UMBmark), “A Method for Measuring, Comparing, and Correcting Dead-reckoning Errors in Mobile Robots“, developed by J. Borenstein and L. Feng. As David describes it, the “concept is to drive around a large square, clockwise and counter-clockwise, while tracking the robot’s position with odometry, and stop at the starting point and measure the difference between the stopping point and the starting point. This shows how much the odometry is in error and in which direction, and allows calibration of the odometry constants and also the potential difference in size between the two wheels of a differentially driven robot. The DPRG uses this calibration method as a contest.” He even wrote a paper about it.
Well, I keep maintaining the purpose of my robotics journey is not to engage in competition (and I swear that I’m not a competitive person, I’m really not, no I am not), but I do think that this benchmark is a good exercise for fine-tuning a robot’s odometry. Nuthin’ to do with competition, nope, just a challenge.
The key to this challenge seems to be twofold: 1. getting the odometry settings correct; and 2. being able to accurately point the robot at that first marker. As regards the latter, the contest permits the robot to be aimed at the first marker using any method, so long as the method is removed prior to the contest starting. Over the years various approaches at this have been tried: ultrasonics, aiming the robot using a laser pointer, etc. I tried creating a gap between two boards and seeing if my existing VL53L1X sensor could see the gap, but then realised its field of view is 27°, so it’s not going to see a narrow gap at a distance of several meters. I then contemplated mounting a different, more expensive LiDAR-like sensor with a 2-3° field of view, but at 8-15 feet (the typical size of the course) that’s still not accurate enough.
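A quick back-of-the-envelope check shows why a 27 degree field of view is hopeless for spotting a narrow gap at course distances; the footprint of the sensor’s view is well over a meter wide by the time it reaches the first marker:

import math

def fov_width(distance_m, fov_deg=27.0):
    # Width of the field of view at a given distance from the sensor.
    return 2.0 * distance_m * math.tan(math.radians(fov_deg / 2.0))

print(round(fov_width(3.0), 2))    # about 1.44 meters wide at 3 meters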
This challenge has somehow lodged itself in the back of my head, the buzzing sound of a mosquito in a darkened bedroom. As I may resort to aiming the robot using a laser pointer I’ve put in an order from somewhere in China for a tiny “tactical hunting super mini red dot laser” (which kinda says it all). I’ll in any case definitely install the tactical hunting super mini red dot laser just ’cause it will look so cool and dangerous. But a laser pointer feels a bit like cheating: it’s not the robot doing the hard work, it’s like aiming a diapered, blindfolded child towards grandma’s waiting knees and hoping she makes it there. Hardly autonomous.
A Possible Solution: The Pi Camera
I’ve been planning to install a Raspberry Pi camera on the front of the KR01 robot for a while, and since I was going to have a camera available I thought: heck, the robot will at the very least be facing that first marker, so why not use its camera to observe the direction and let it try to figure out its own way there? No hand-holding, no laser pointer aiming, no diapers, no grandma. Autonomous.
So what would I use for a target? How about an LED? What color is not common in nature? What color LED do I have in stock? Pink (or actually, magenta). So I mounted a pink LED onto a board with a potentiometer to adjust brightness, using a 9 volt battery for the power source. Simple enough.
I’m using the Raspberry Pi camera at a resolution of 640×480. I wrote a Python script to grab a snapshot from the camera as an x,y array of pixels. I’m actually processing only a subset of the rows near the center of the image, since the robot is likely to be looking for the target of the Four Corners Challenge somewhat near the vertical center of the image, not closer to the robot or up in the sky.
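The capture step itself is only a few lines. Here’s a simplified sketch of the idea (not the actual script; the particular band of rows is just a guess that would need tuning):

```python
import numpy as np
import picamera

WIDTH, HEIGHT = 640, 480

with picamera.PiCamera() as camera:
    camera.resolution = (WIDTH, HEIGHT)
    image = np.empty((HEIGHT, WIDTH, 3), dtype=np.uint8)
    camera.capture(image, 'rgb')      # capture directly into a numpy RGB array

# only process a horizontal band near the vertical center of the frame
band = image[200:280, :, :]
```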
I found an algorithm online to measure the color distance between two RGB values. The color distance is focused mostly on hue (the angle on the color wheel), so if that particular pink is sufficiently unusual in the camera image, the robot should be able to pick it out regardless of relative brightness. I took a screenshot of the camera’s output, opened it up in GIMP and captured the RGB color of the pink LED. I avoided the center of the LED, which showed up as either white (R255, G255, B255, hue=nil) or very close to white, and instead chose a pixel that really displayed the pinky hue (R151, G55, B180, hue=286).
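I won’t reproduce the algorithm I found, but the gist of a hue-weighted distance looks something like this sketch (the target values are the ones sampled in GIMP; the rest is illustrative):

```python
import colorsys

# the target pink sampled from the camera screenshot in GIMP
TARGET_RGB = (151, 55, 180)

def hue_distance(rgb, target=TARGET_RGB):
    # rough colour distance based mostly on hue (angle on the colour wheel),
    # returning 0.0 for the same hue up to 1.0 for the opposite hue
    h1, _, _ = colorsys.rgb_to_hsv(*[c / 255.0 for c in rgb])
    h2, _, _ = colorsys.rgb_to_hsv(*[c / 255.0 for c in target])
    diff = abs(h1 - h2)
    # hue wraps around the colour wheel, so take the shorter way around
    return min(diff, 1.0 - diff) * 2.0

print(hue_distance((160, 60, 190)))   # small: close to the target pink
print(hue_distance((60, 190, 60)))    # large: green is nowhere near magenta
```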
For each pixel in the array I calculated the color distance between its color and that of our target pink. To be able to see the results of the processing I printed out not the pixel array of the original image but an enumerated conversion of the color distance into just ten possible values: magenta is very close, red less close, yellow even less, et cetera, down to black (not close at all). So the image is what we might call a “color distance map”. I just printed each row to the console as it was processed, so what you’re seeing is a screen capture of the console, not a generated image.
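The console rendering itself is nothing fancy: quantise each distance into one of ten buckets and print a coloured block per pixel using ANSI escape codes. Something along these lines (a sketch of the idea, not the actual script):

```python
# ANSI 256-colour codes for the ten buckets, from "very close" (magenta)
# down to "not close at all" (black)
PALETTE = [201, 196, 208, 226, 190, 46, 36, 27, 19, 16]

def render_row(distances):
    # print one row of the colour-distance map to the console
    cells = []
    for d in distances:                    # d is assumed to be in 0.0..1.0
        bucket = min(int(d * 10), 9)       # quantise into ten buckets
        cells.append('\033[48;5;{}m '.format(PALETTE[bucket]))
    print(''.join(cells) + '\033[0m')      # reset colours at end of row

# e.g. for each row of the captured band:
# render_row([hue_distance(tuple(px)) for px in row])
```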
My first attempts were of just the LED against a dark background, enough to try out the color distance code. Since that seemed to work I tried it against a much more complicated background: the bookcase in my study (see photo). The distance from the camera to the pink LED was about 2 meters. Despite several objects on my bookcase being a fairly close match to the LED’s color, things seemed to still work: the LED showed up pretty clearly as you can see below:
That object just to the left of the LED with the 16 knobs is a metallic hot pink guitar pedal I built as a kit a few years ago. There’s another guitar pedal that same color on the shelf below. There’s enough difference between that hot pink and the magenta hue of the LED that it stands out alone on the shelf. Not bad.
Outdoors Experiments
So today, on a relatively bright day I tried this out on the front deck. There was a lot more ambient light than in my study and I was able to set the LED a full 3 meters (9 feet 10 inches) from the front of the robot. How would we fare in this very different environment?
The 3mm LED I’d been using turned out to be too small at that distance, so I replaced it with a larger 5mm pink LED and turned up the brightness. Surprisingly, the LED is clearly visible below:
This is a pretty happy result: the robot is able to discern a 5mm pink LED at a distance of 3 meters, using the default Raspberry Pi 640×480 camera. This required nothing but a camera I already had and less than a dollar’s worth of parts.
The Python code for this is called pink_led.py and is available in the scripts project on GitHub.
Next step: figure out how to convert that little cluster of pixels into an X coordinate (between 0 and 640), then use that to set the robot’s trajectory. It could be that converting that trajectory into a compass heading and then following that heading might get the robot reliably to that first course marker.
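A first stab at that conversion might simply take the centroid of the pixels whose colour distance falls under some threshold, then scale the horizontal offset from the image centre by the camera’s field of view. A sketch of the idea (the threshold and the field-of-view figure are assumptions that would need tuning):

```python
import numpy as np

THRESHOLD = 0.1      # how close a pixel's hue must be to count as "pink"
H_FOV_DEG = 62.2     # rough horizontal field of view of the Pi camera

def target_bearing(distance_map, width=640):
    # distance_map is a 2D array of colour distances for the processed band
    ys, xs = np.where(distance_map < THRESHOLD)
    if len(xs) == 0:
        return None                                 # no pink found
    x = xs.mean()                                   # centroid X in pixels
    bearing = (x - width / 2.0) / width * H_FOV_DEG
    return x, bearing                               # degrees left (-) or right (+)
```

Adding that bearing to the robot’s current compass heading would, in theory, give the heading to follow toward the marker.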
But I’m still going to install that tactical hunting super mini red dot laser.
Not that I’ve spent much time with my KRZ01 robot. I feel almost bad that I haven’t let the project develop much before making some significant changes. Like that high school girlfriend with braces.
It’s just that my main project, the KR01 robot, is where I’ve been devoting most of my time and energy, and the KRZ01 frankly isn’t that different, despite being rather petite: about a quarter of the size and 6% of the weight (160g with its battery). Both are wheeled robots intended to operate using the Robot Operating System (ROS) I’m writing in Python.
So what made the KRZ01 the target of a redesign was the purchase of four Mecanum wheels. This post describes the beginning of this project — I’m only at the design stages right now.
What are Mecanum Wheels?
There are a lot of descriptions (e.g., the YouTube video below) and demonstrations of Mecanum wheels on the web already (such as a Turkish Mecanum forklift!), so I won’t go into much detail here; suffice it to say that they allow a robot to travel in any direction without changing its compass heading. Well… not up or down. But crab travel, sure.
Running all four wheels in the same direction at the same speed results in forward/backward movement, as the longitudinal force vectors add up but the transverse vectors cancel each other out;
Running both wheels on one side in one direction and both wheels on the other side in the opposite direction (all at the same speed) results in a stationary rotation of the vehicle, as the transverse vectors cancel out but the longitudinal vectors couple to generate a torque around the central vertical axis of the vehicle;
Running the wheels on one diagonal in one direction and those on the other diagonal in the opposite direction (all at the same speed) results in a sideways movement, as the transverse vectors add up but the longitudinal vectors cancel out.
That kind of talk totally does my head in, but the concept obviously works, so I’m on board. When I get to the point of programming the motor controller for this I’m sure I’ll need a couple of shots of good bourbon to focus my mind appropriately on the task.
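For when that day comes, the arithmetic itself is mercifully short. The standard mixing equations look something like this sketch (wheel ordering and signs will depend entirely on how the motors end up mounted and wired):

```python
def mecanum_mix(vx, vy, rotation):
    # convert a desired motion (vx: forward, vy: strafe right, rotation:
    # clockwise) into speeds for the four wheels, each in -1.0..1.0
    front_left  = vx + vy + rotation
    front_right = vx - vy - rotation
    rear_left   = vx - vy + rotation
    rear_right  = vx + vy - rotation
    wheels = [front_left, front_right, rear_left, rear_right]
    # normalise so no wheel speed exceeds full throttle
    scale = max(1.0, max(abs(w) for w in wheels))
    return [w / scale for w in wheels]

print(mecanum_mix(1.0, 0.0, 0.0))   # all wheels forward: drive straight ahead
print(mecanum_mix(0.0, 1.0, 0.0))   # diagonal pairs oppose: crab to the right
print(mecanum_mix(0.0, 0.0, 1.0))   # sides oppose: rotate in place
```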
There are some design considerations regarding Mecanum wheels, and both David Anderson’s advice and my own experience with the KR01 suggest that I want the robot as balanced as possible, both in terms of weight and the position of the wheels relative to the center of the robot. With Mecanum wheels, weight distribution is even more critical than on a normal wheeled robot. Having too much weight on one wheel would significantly alter its behaviour, and not in a good way.
The Plans
So I have been planning this out. I can’t actually build anything just yet because I stupidly only bought two of the brass wheel hubs rather than four (“there were two in the photo” he says in his defense), so I’m waiting on another shipment from Canada.
Because it will be a fundamentally different robot the redesign will be called the KRZ02.
Update as of 2020-05-12: That first plan had a glaring error: while the centers of the wheels were on the circumference of a circle, that hardly meant they were equidistant from one another, i.e., that they sat on the four corners of a square. I’d drawn the diagram wrong. I tried a second time, this time also moving the motors as close towards the center of the robot as I dared, and including the positions of the Raspberry Pi Zero W, the two Picon Zero motor controllers, the Pimoroni Black Hat Hack3r expansion board, and the Breakout Garden Mini that holds one SPI and two I²C sensors. The robot got bigger but also more symmetrical. Witness version 2:
One thing seems pretty clear at the outset: the current KRZ01 has two “moon buggy” wheels and a ball caster, and its physical extent (i.e., how much space it takes up) is a circle 128mm in diameter — it’s a small robot. Plan 1 expands that to 210mm. Plan 2 above expands that extent to 227mm, almost double in size. It looks like it’d be a much larger robot than the KRZ01. Plan 1 used a 75mm chassis width, which is the current width of the KRZ01. Plan 2 has the motors as close together as seems reasonable but by fixing my design “bug” the robot is almost as big as David Anderson’s SR04 at 11″ (280mm). Not a small robot anymore.
I discussed the issue of symmetry with the guys at the DPRG and it seems that weight balance is far more important than symmetry. I’m not happy with Plan 2 being such a big robot, and from the plans there seems to be a fair bit of wasted space (i.e., it’s a lot longer than is strictly necessary), so I think I might try a third, shorter design.
This is another article in the series about the KR01 robot.
The title translates as “It is easier to do many things than one thing consecutively“, attributed to Quintilian, a Roman educator who lived about two thousand years ago. It sounds curiously like a motto for either multi-tasking or multi-threaded processing, but it also describes how I’ve approached designing and building the KR01 robot.
One thing I’ve learned about building a robot is that, at least for me, the hardware and software are ever-changing. I guess that’s what makes the journey enjoyable. In my last post I ended up with too much philosophising and not enough about the robot, so this one makes up for that and provides an update of where things are at right now.
But before we get into the hardware and software I thought I’d mention that I’ve been quite happily welcomed into the weekly videoconferences of the Dallas Personal Robotics Group (DPRG) and about a week ago did a presentation to them about the KR01:
The DPRG is one of the longest-standing personal robotics clubs in the US, with a great deal of experience across many aspects of robotics. They’re also some very friendly folks, and I’ve really been enjoying chatting with them. New friends!
Hardware
So… the biggest issue with the KR01 was imbalance. There was simply no room on the chassis for the big, heavy Makita 3.0Ah 18 volt power tool battery so I’d, at least temporarily, hung it off the back on a small perforated aluminum plate.
[You can click on any of the images on this page for a larger view.]
The KR01 without a battery weighed 1.9 kilograms (4 lbs 3 oz), so at 770 grams (1 lb 11 oz) the Makita battery and its holder added about 40% to the robot’s weight. By comparison, my little KRZ01 robot weighs 160 grams, including the battery. Since that photo was taken the robot has gained a bit of weight, now up to 2.1 kg. But that imbalance remained.
With the battery hanging off the back, when trying to spin in place the KR01 would typically sit on one back wheel and rotate around that wheel, and which wheel it chose was almost arbitrary. I kinda knew something like this might happen but I was willing to keep moving forward on other parts of the robot (“facilius est…“), because fixing that problem meant making some big hardware changes.
Since I’d gotten to the point where I was actually testing the robot’s movement, I finally bit the bullet and bought another piece of 3mm Delrin plastic. This time I positioned the battery as close to the robot’s physical center of gravity as possible. Then I spent a lot of time reorganising where things fit, as well as finally adding all of the sensors I’d been planning. I think I might have gone overboard a bit. The current design is shown below.
The copper shielding is my attempt at cutting down on the amount of high-pitched ambient noise put out by the speaker (hidden underneath the front Breakout Garden Mini, next to the servo). This didn’t seem to make much difference but it looks kinda cool and a bit NASA-like so I’ll leave it for now. Yes, the KR01 can now beep, bark, and make cricket sounds. It also has a small 240 x 240 pixel display screen (visible at top center) and two wire feelers to theoretically protect the upper part of the robot, basically an emergency stop. I have no idea how well that will work. The inside of my house is pretty hazardous for a small defenseless robot.
Earlier versions of the robot had a 15cm range infrared sensor at the center which, being digital, replied with a simple yes or no. It worked as advertised, but 15cm wasn’t enough distance to keep the robot from running into things, even at half speed, so I’ve since replaced it with a longer-range analog infrared sensor (the long horizontal black thing in the cutout in the plastic bumper, shown below) that I’ve coded to react to two separate ranges: “short” (less than 40cm) and “long” (triggered at about 52cm). This permits the robot to slow down rather than stop when it comes within the longer range of an object.
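The two-range logic is simple enough: convert the sensor’s analog voltage to an approximate distance and compare it against the two thresholds. Roughly like this (the power-curve constants below are placeholders; each Sharp sensor model has its own curve that needs calibrating):

```python
SHORT_RANGE_CM = 40.0    # closer than this: stop
LONG_RANGE_CM  = 52.0    # closer than this: slow down

def classify_range(volts):
    # placeholder power-curve fit; a real sensor needs its own calibration
    if volts <= 0.1:
        return 'clear'
    distance_cm = 27.86 * (volts ** -1.15)
    if distance_cm < SHORT_RANGE_CM:
        return 'short'
    if distance_cm < LONG_RANGE_CM:
        return 'long'
    return 'clear'
```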
The robot currently has a 15 cm range infrared sensor on each side but I’m planning to replace them with a pair of Sharp 10-150cm analog distance sensors, which hopefully will permit some kind of wall-following behaviour. I’m eagerly awaiting another package in the post…
The KR01 now sports a variety of sensors from Adafruit, Pimoroni’s Breakout Garden, Pololu and others, including:
a servo-mounted 4m ultrasonic sensor, or
a servo-mounted Time of Flight (ToF) laser rangefinder with a range of 4m and accuracy of 25mm
four Sharp digital 15cm range infrared distance sensors
a Sharp analog 80cm infrared ranging sensor
an infrared PIR motion detector (for detecting humans and cats)
an X-Band Bi-Static Doppler microwave motion detector (for detecting humans and cats through walls)
a 9 Degrees of Freedom (DoF) sensor package that includes Euler and Quaternion orientation (3 axis compass), 3 axis gyroscope, angular velocity vector, accelerometer, 3 axis magnetometer, gravity vector and ambient temperature
a 6 channel spectrometer
two 5×5 RGB LED matrix displays
one 11×7 white LED matrix display
an HDMI jack for an external monitor (part of the Raspberry Pi)
WiFi capability (part of the Raspberry Pi)
a microphone
a speaker with a 1 watt amplifier
This is all powered by a Makita 18V power tool battery. The clear polycarbonate bumper (inspired by David Anderson’s SR04 robot) has six lever switches, with two wire feelers protecting the upper part of the robot.
All that just for the territory of my lounge. Or maybe my front deck.
Software
So while working on hardware I’ve also been working on the software. I’ve been writing a Behaviour-Based System (BBS) based on a Subsumption Architecture as the operating system for the KR01 in Python. The concept of a BBS is hardly new. Rodney Brooks and his team at MIT were pioneering this area of research back in the 1980s; it’s an entire field of research in its own right. Here are a few links as a beginning:
The idea with a BBS is that each sensor triggers either a “servo” or a “ballistic” behaviour. Servo Behaviours (AKA “feedback and control systems”) immediately alter the robot’s movement or make a temporary change in its behaviour, such as speeding up or slowing down, in a relatively simple feedback loop. Ballistic Behaviours (AKA “finite state machines”) are small sub-programs that are (theoretically) meant to run from start to completion without interruption. The video below shows a ballistic behaviour that might occur should the robot find itself facing a wall: it backs up, scans the neighbourhood for a place where there’s no barrier, then attempts to drive in that direction. Yes, that is a sonar “ping” you hear.
My understanding (i.e., the way I’m writing the software) is that for every sensor there is an associated servo or ballistic behaviour, and that each of these behaviours is prioritised so that the messages sent by the sensors contend with each other (in the subsumption architecture), with the highest-priority message being the one the robot executes. It does this in a 20ms loop, with ballistic behaviours taking over the robot until they are completed or subsumed by a higher-priority ballistic behaviour. It’s the emergent behaviour arising from these programmed behaviours that gives the robot its personality. When the robot has nothing to do it could begin a “cruising around” behaviour, whistle a tune, or go into standby mode awaiting the presence of a cat.
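Stripped of all the messaging machinery, the core of that arbitration looks something like the sketch below: every 20ms each behaviour proposes an action (or stays quiet), and the highest-priority proposal wins. This is only an illustration of the idea, not the actual KR01 code; a real ballistic behaviour would also latch control until it finished.

```python
import random
import time

class Behaviour:
    # a behaviour checks its sensor and either proposes an action or returns None
    priority = 0
    def propose(self):
        return None

class CruiseBehaviour(Behaviour):
    priority = 1                        # lowest: only wins when nothing else fires
    def propose(self):
        return 'cruise forward'

class AvoidBehaviour(Behaviour):
    priority = 10                       # subsumes cruising when the sensor fires
    def propose(self):
        # stand-in for a real infrared read; fires about 10% of the time
        if random.random() < 0.1:
            return 'back up and scan'   # a ballistic behaviour would start here
        return None

def arbitrate(behaviours, cycles=50):
    # the 20ms loop: the highest-priority proposal wins each cycle
    for _ in range(cycles):
        proposals = [(b.priority, b.propose()) for b in behaviours]
        proposals = [(p, a) for (p, a) in proposals if a is not None]
        if proposals:
            _, action = max(proposals)  # highest priority subsumes the rest
            print(action)               # the real robot would drive its motors here
        time.sleep(0.02)

arbitrate([CruiseBehaviour(), AvoidBehaviour()])
```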
Not that my cat pays much attention to the robot. In the robot-plus-cat experiments that have been performed in our home laboratory he sniffs at the robot a bit and nonchalantly stays out of its way.