Cheap Blob Detection

[No, not detecting cheap blobs, but rather blob detection on the cheap.]

I’m still kinda blown away that Steve McQueen was in this movie… and the theme song was composed by Burt Bacharach!

Way back in mid-May (seems a very long time ago now), I’d done a bit of exploring using the default Raspberry Pi camera in search of a possible solution for the DPRG Four Corners Competition. The challenge is to have a robot trace out a large square on the ground marked with orange traffic cones/pylons. Even with well-tuned odometry it’s a difficult trick, as the requirements include that the square the robot traces must align with the four markers of the course. You can’t just travel off in any direction, make three 90 degree turns and come back to the start; you have to be able to aim at that first marker accurately. There’s the rub: how to aim at something.

So to address that challenge I’d mounted a Raspberry Pi camera to the front of my KR01 robot and posted a 5mm pink LED as a sentinel three meters in front of the robot. Would it be possible for the robot to aim at that pink LED? The preliminary results from that experiment were largely positive: the robot could at least see the LED at a distance of 3 meters, even in daylight. What I’d managed to achieve, using a deliberately very low resolution image, was detection of a blob of pixels closely matching the hue of the pink LED.

The front of the KR01 robot showing the Raspberry Pi camera and three analog infrared detectors

The next part would be how to aim the robot at that blob. This post further explores that solution, still just using the Raspberry Pi camera.

Building an LED Beacon

Photo of disassembled beacon

First, one thing about LEDs is that they’re largely directional, a bit like a small laser encased in plastic. Some use translucent rather than transparent plastic to diffuse the light a bit, but if the LED is facing away from the robot it will basically disappear from view, except for perhaps its light shining on something else, which wouldn’t be helpful (see more on floor reflections, below).

Photo of assembled beacon

As you do, there was this bit of plastic I’d been saving for (what, over ten years?) that I kept trying to find a use for. It was the diffuser off of a pretty useless LED camp light: useless as it was large, didn’t put out much light, its plastic case had gone sticky, and I was already carrying an LED headlamp that was smaller and brighter. So I discarded the rest and kept the omnidirectional diffuser. I also had this rather beautiful polished aluminum hub from an old hard drive, and it turns out the plastic diffuser fit exactly onto the hub.

So I wired up a small PC board with three pink LEDs and a potentiometer to control the voltage to the LEDs, and built a small frame out of 3mm black Delrin plastic to hold it and the USB hub providing the 5 volt power supply. Another thing just lying around the house…

I added a tiny bit of cotton wool inside to further diffuse the light, so it would show up on the Pi’s camera as a pink blob. I opened several images taken from the robot in a graphics program and noted the RGB values of various pixels within the pink of the beacon. One of these colors would be the “target color”.

Blob Detection

Now the bright idea I had was that we didn’t really need much in terms of image data to figure out where the pink blob was.

Given my robot is a carpet crawler, we can make some relatively safe assumptions about the environment it will be in, even if I let it out onto the front deck. Lighting and floor covering may vary, but my floors and the front deck are largely horizontal. Since the floor and the robot are mostly level, we can therefore expect to find the beacon near the vertical center of the image, and safely ignore the top and bottom of the image. All we care about is where, horizontally, that blob appears.

I’ll talk more about the Python class I wrote to handle the blob processing below.

I placed the beacon on the rug, plugged it into a power supply and pointed the robot at it. The Raspberry Pi camera’s resolution can be set in software. Set at High Definition (1920×1080) we see the view the robot sees, with the pink beacon exactly one meter away from the camera:

With the Pi camera set at 1920×1088 we can see the view from the robot
Actual size image

Now, for our purposes there’s no need for that kind of resolution. It’s instead set at what seems to be around the Pi camera’s minimum resolution of 128 x 64 pixels. While irritatingly small for humans, robots really don’t care. Or if they do care, they don’t complain.
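
For reference, configuring the camera at that tiny resolution with the picamera library looks roughly like the following. This is just a minimal sketch rather than the actual Blob configuration, and the two second warm-up is a typical value, not something from my code:

```python
from time import sleep
from picamera import PiCamera

# Illustrative only: the Blob class wraps its own camera configuration.
camera = PiCamera()
camera.resolution = (128, 64)     # very low resolution is all we need
camera.start_preview()
sleep(2)                          # give the sensor a moment to settle its exposure
camera.capture('blob-test.jpg')   # keep a JPEG as a record
camera.stop_preview()
camera.close()
```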

The Blob Class

The Python class for the blob detector is just named Blob (and available, along with the rest of the code, on github). In a nutshell it does the following (a rough sketch of the pipeline follows the list):

  • Configure and start up the Raspberry Pi camera.
  • Take a picture as a record, storing it as a JPEG.
  • Process the image by iterating over each row, pixel by pixel, comparing the color distance (hue) of each pixel with the target color (pink). Each pixel’s color distance value is stored in a new array the same size as the image. We enumerate each color distance value and assign a color from a fixed palette, printing a small square character (0xAA) in that color. We do this row by row, displaying a grid-like image representing color distance. Note that there’s a configuration option to start and stop processing at specified rows, effectively ignoring the top and bottom of the image.
  • Sum the color distance values of each column, reducing the entire image to a single line array.
  • Enumerate the distribution of this array so that there are only ten distinct values, replacing each element with this enumerated value. We also use a low-pass filter to eliminate (set to zero) all elements where there wasn’t enough color match to reach that threshold. Like the color distance image we print this single line after the word “sum” in order to see what this process has turned up.
  • Find all the peaks by determining which of the values in this array are the highest of all, given a threshold, and again print this line after the word “peaks”.
  • Find the highest peak: If there are multiple peaks within 5% of the image width of each other we assume they’re coming from the same light source, so we average them together, again drawing a line of squares following the word “peak”.
  • Return a single value from the function (the index of the highest peak in the array), considered as the center of the light source. If there are too many peaks or they’re too far apart we can’t make any assumptions about the location of the beacon and the function returns -1 as an error value.
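
To make that concrete, here’s a rough sketch of the pipeline in numpy, using a simple hue-only distance as a stand-in for the actual color distance formula. The row band, thresholds and function names are illustrative, not the real Blob class (which is on github):

```python
import colorsys
import numpy as np

TARGET_HUE = 286.0 / 360.0          # the pink/magenta hue noted from the beacon photos

def hue_distance(r, g, b, target_hue=TARGET_HUE):
    """Distance between a pixel's hue and the target hue: 0.0 (match) to 1.0 (opposite)."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    d = abs(h - target_hue)
    return min(d, 1.0 - d) * 2.0    # hue is circular, so wrap around the color wheel

def find_beacon_column(image, row_start=20, row_end=44, threshold=0.25, merge_pct=0.05):
    """image: H x W x 3 uint8 array (e.g. 64 x 128 x 3). Returns the beacon's column index, or -1."""
    height, width, _ = image.shape
    closeness = np.zeros((row_end - row_start, width))
    for y in range(row_start, row_end):             # ignore the top and bottom of the image
        for x in range(width):
            r, g, b = image[y, x]
            closeness[y - row_start, x] = 1.0 - hue_distance(r, g, b)
    column_sums = closeness.sum(axis=0)             # reduce the image to a single line array
    peak_value = column_sums.max()
    if peak_value <= 0.0:
        return -1
    # quantise to ten levels (0..9) and apply the low-pass threshold
    levels = np.minimum(np.floor(10.0 * column_sums / peak_value), 9).astype(int)
    levels[column_sums < threshold * peak_value] = 0
    if levels.max() == 0:
        return -1
    peaks = np.flatnonzero(levels == levels.max())  # columns holding the highest level
    # if all the peaks are close together, assume a single light source and average them
    if peaks.max() - peaks.min() <= merge_pct * width:
        return int(round(peaks.mean()))
    return -1                                       # too spread out: can't trust it
```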

The result of this is shown below:

The console output from Blob showing the 128 x 64 pixels, each colorised depending on how close it is in hue to pink (regardless of brightness or saturation). Click on the image for a full size view.

So if the image resolution is 128 x 64, the result will either be a -1 if we can’t determine the location of the beacon, or a value from 0 to 128.

An image taken from the Pi camera of a 1 meter rule at a distance of 1 meter

So what to do with this value? If the robot is aiming straight at the beacon the value will be 64. A value lower than 64 means the beacon is off to the left (port); higher than 64, off to the right (starboard). I’m playing around right now with figuring out what these values mean in terms of turn angle by comparing what 0 and 128 represent against an image, taken from the robot, of a one meter rule at a distance of one meter (see above).

One of the things we’ve talked about in the weekly DPRG videoconferences is the idea that the difference between two PID controllers’ velocities as an integral can be used for steering. Or something like that, I can’t pretend to understand the math yet. But it occurs to me that the value returned from the Blob function, as a difference from center, could be used to steer the robot towards the beacon.
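
As a very rough illustration of that idea (and definitely not code from the KR01), the peak index could be fed straight into a proportional steering loop, trimming the port and starboard speeds until the beacon sits at the image center:

```python
IMAGE_WIDTH = 128
CENTER = IMAGE_WIDTH // 2
KP = 0.01                       # steering gain, to be tuned on the carpet
CRUISE_SPEED = 0.4              # nominal forward speed, on a 0.0..1.0 scale

def steer_towards_beacon(peak_index):
    """Return (port, starboard) motor speeds given the index returned by Blob."""
    if peak_index < 0:
        return 0.0, 0.0                  # beacon not found: stop (or start a search behaviour)
    error = peak_index - CENTER          # negative: beacon to port; positive: to starboard
    correction = KP * error
    port = CRUISE_SPEED + correction     # beacon to starboard, so speed up the port side
    starboard = CRUISE_SPEED - correction
    return port, starboard
```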

Performance & Reliability

In my earlier tests there was clearly an issue of performance. While the Blob processing time is pretty quick, the time the Pi camera was taking to create the image was around 600ms for that 128 x 64 pixel image, with of course more time for larger images. It turns out this was due to a bug in my code: I was creating and warming up the camera for each image. The Pi’s camera can stream images at video speed, at least 30 frames per second, so I’ve got a bit of work to figure out how to grab JPEG images from the camera at speed. So the code posted on github as of this writing is okay for a single image but can’t run at full speed. Until I fix the bug. I’ll update this blog post when I do.
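
The likely fix, which I haven’t finished yet, is to create and warm up the camera once and then use picamera’s capture_continuous() with the video port, something along these lines (untested on the KR01):

```python
import io
from time import sleep
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (128, 64)
sleep(2)                                    # one warm-up, not one per image

stream = io.BytesIO()
# use_video_port=True trades a little image quality for much faster captures
for _ in camera.capture_continuous(stream, format='jpeg', use_video_port=True):
    jpeg_bytes = stream.getvalue()          # hand this to the Blob processing
    # ... process jpeg_bytes here ...
    stream.seek(0)                          # rewind and reuse the stream for the next frame
    stream.truncate()
```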

And as to reliability, the pink LED beacon is not very bright. In high ambient light settings it’s more difficult for the robot to discern. Some of the DPRG’s competitions, which are usually held either outdoors or indoors under bright lighting conditions, use a beer can covered in fluorescent orange tape. Since the Blob color distance method is designed for hue, if the robot were running in a bright room perhaps an orange beer can would work better than a pink LED beacon. If we tuned that algorithm to also include brightness, perhaps a red LED laser dot might work. All room for experimentation.

I’d noted early on that it was somewhat easy to fool the image processor. My house has a wide variety of colors, rugs, books, all manner of things. That particular pink doesn’t show up much however, and if I threw away the times when multiple peaks showed up, basically gave up when I wasn’t sure, then the reliability was relatively high. If the beacon’s light is reflected on another surface (like my couch) the robot might think that was the beacon, so I need to make sure the beacon is not too near another object. I also noted that when placing the beacon on a wood floor the reflection on the floor would show up on camera, but this actually amplified the result, since the reflection will always be directly below the beacon and we’re only concerned with horizontal position.

Next Steps

There’s plenty of room to both optimise and improve the Blob class. I’m still getting either too many false positives or abandoning images where the algorithm can’t make out the beacon.

It occurs to me also that we don’t have to train the Blob class to look for pink LEDs or orange cans. It would be relatively trivial to convert it to a line follower by aiming the camera down towards the front of the robot, altering the color distance method to deal solely with brightness, and using the returned value to tell the robot the location of the line.
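
The swap itself would be tiny, something like replacing the hue distance with a brightness distance and leaving the rest of the pipeline alone (again, only a sketch, not part of the current code):

```python
def brightness_distance(r, g, b):
    """0.0 for a bright (white) pixel, 1.0 for a dark one: drop this in place of the
    hue distance to follow a light-coloured line on a dark floor."""
    return 1.0 - (0.299 * r + 0.587 * g + 0.114 * b) / 255.0
```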

Maybe next time.

The Four Corners Challenge

This is just a little ditty that I will update as things progress. It comes about from conversations on the DPRG mailing list about one of their commonly held robot contests, called the Four Corners Competition. Here’s the actual definition from April of 2018:

Objective: The robot will travel a rectangular path around a square course. The corners of the course will be marked with a small marker or cone. Before the robot makes its run, a mark or sticker will be placed on the center front of the robot and on the floor of the course. The objective is to minimize the distance between the two marks at the end of the run.

David Anderson pointed out that this contest goes back to a 1994 University of Michigan Benchmark (called UMBmark), “A Method for Measuring, Comparing, and Correcting Dead-reckoning Errors in Mobile Robots“, developed by J. Borenstein and L. Feng. As David describes it, the “concept is to drive around a large square, clockwise and counter-clockwise, while tracking the robot’s position with odometry, and stop at the starting point and measure the difference between the stopping point and the starting point. This shows how much the odometry is in error and in which direction, and allows calibration of the odometry constants and also the potential difference in size between the two wheels of a differentially driven robot. The DPRG uses this calibration method as a contest.” He even wrote a paper about it.

Well, I keep maintaining the purpose of my robotics journey is not to engage in competition (and I swear that I’m not a competitive person, I’m really not, no I am not), but I do think that this benchmark is a good exercise for fine-tuning a robot’s odometry. Nuthin’ to do with competition, nope, just a challenge.

The key to this challenge seems to be twofold: 1. getting the odometry settings correct; and 2. being able to accurately point the robot at that first marker. As regards the latter, the contest permits the robot to be aimed at the first marker using any method, so long as the method is removed prior to the contest starting. Over the years various approaches to this have been tried: ultrasonics, aiming the robot using a laser pointer, etc. I tried creating a gap between two boards and seeing if my existing VL53L1X sensor could see the gap, but then realised its field of view is 27°, so it’s not going to see a narrow gap at a distance of several meters. I then contemplated mounting a different, more expensive LiDAR-like sensor with a 2-3° field of view, but at 8-15 feet (the typical size of the course) that’s still not accurate enough.
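
The back-of-the-envelope geometry makes the problem obvious: the width of the spot a sensor “sees” is roughly twice the distance times the tangent of half its field of view. My own quick check (nothing to do with the contest rules):

```python
import math

def spot_width(distance_m, fov_degrees):
    """Approximate width of the area covered by a sensor's field of view."""
    return 2.0 * distance_m * math.tan(math.radians(fov_degrees / 2.0))

print(spot_width(3.0, 27.0))    # VL53L1X at 3 m: about 1.4 m wide
print(spot_width(3.0, 3.0))     # even a 3 degree sensor at 3 m: about 16 cm wide
```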

Tactical Hunting Super Mini Red Dot Laser

This challenge has somehow lodged itself in the back of my head, the buzzing sound of a mosquito in a darkened bedroom. As I may resort to aiming the robot using a laser pointer I’ve put in an order from somewhere in China for a tiny “tactical hunting super mini red dot laser” (which kinda says it all). I’ll in any case definitely install the tactical hunting super mini red dot laser just ’cause it will look so cool and dangerous. But a laser pointer feels a bit like cheating: it’s not the robot doing the hard work, it’s like aiming a diapered, blindfolded child towards grandma’s waiting knees and hoping she makes it there. Hardly autonomous.

A Possible Solution: The Pi Camera

I’ve been planning to install a Raspberry Pi camera on the front of the KR01 robot for awhile, and since I was going to have a camera available I thought: heck, the robot will be at very least facing that first marker, so why not use its camera to observe the direction and let it try to figure its own way there? No hand-holding, no laser pointer aiming, no diapers, no grandma. Autonomous.

The Pink LED

So what would I use for a target? How about an LED? What color is not common in nature? What color LED do I have in stock? Pink (or actually, magenta). So I mounted a pink LED onto a board with a potentiometer to adjust brightness, using a 9 volt battery for the power source. Simple enough.

The Raspberry Pi camera’s resolution was set to 640×480. I wrote a Python script to grab a snapshot from the camera as an x,y array of pixels. I’m actually processing only a subset of the rows nearer the center of the image, since the robot is likely to be looking for the target of the Four Corners Challenge somewhat near the vertical center of the image, not closer to the robot or up in the sky.
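
Grabbing the snapshot as an array is straightforward with picamera; the following is a sketch of that step rather than the actual pink_led.py (the row band is illustrative):

```python
from time import sleep
from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera()
camera.resolution = (640, 480)
sleep(2)                                # let the exposure settle

output = PiRGBArray(camera)
camera.capture(output, format='rgb')
pixels = output.array                   # numpy array, shape (480, 640, 3)

# only bother with a band of rows around the vertical center of the image
band = pixels[200:280, :, :]
```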

The Four Corners Course with target LED at the 1st corner

I found an algorithm online to measure the color distance between two RGB values. The color distance is focused mostly on hue (the angle on the color wheel), so if that particular pink is sufficiently unusual in the camera image, the robot should be able to pick it out, regardless of relative brightness. I took a screenshot of the camera’s output, opened it up in gimp and captured the RGB color of the pink LED. I avoided the center of the LED, which showed up as either white (R255, G255, B255, hue=nil) or very close to white, and instead chose a pixel that really displayed the pinky hue (R151, G55, B180, hue=286).

For each pixel in the array I calculated the color distance between its color and that of our target pink. To be able to see the results of the processing I then printed out not the pixel array of the original image but an enumerated conversion of the color distance — just ten possible values. So magenta is very close, red less close, yellow even less, et cetera down to black (not close at all). So the image is what we might call a “color distance mapping”. I just printed out each row to the console as it was processed, so what you’re seeing is just a screen capture of the console, not a generated image.
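
The console “image” is nothing more sophisticated than printing one coloured character per pixel using ANSI escape codes. Something along these lines, where the palette, the bucket boundaries and the simple hue-only distance are all stand-ins for whatever pink_led.py actually does:

```python
import colorsys

TARGET_HUE = 286.0 / 360.0               # the magenta-ish hue picked out in gimp

# ten ANSI 256-color codes, from "very close" (magenta) down to "not close" (black)
PALETTE = [201, 196, 208, 226, 190, 118, 46, 37, 240, 16]

def hue_distance(r, g, b):
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    d = abs(h - TARGET_HUE)
    return min(d, 1.0 - d) * 2.0         # 0.0 = same hue, 1.0 = opposite hue

def print_distance_map(rows):
    """rows: an iterable of rows, each row an iterable of (r, g, b) pixels."""
    for row in rows:
        line = ''
        for r, g, b in row:
            bucket = min(int(hue_distance(r, g, b) * 10), 9)
            line += '\033[38;5;{}m\u25a0\033[0m'.format(PALETTE[bucket])
        print(line)
```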

What the Raspberry Pi camera sees (click to enlarge)

My first attempts were of just the LED against a dark background, enough to try out the color distance code. Since that seemed to work I tried it against a much more complicated background: the bookcase in my study (see photo). The distance from the camera to the pink LED was about 2 meters. Despite several objects on my bookcase being a fairly close match to the LED’s color, things seemed to still work: the LED showed up pretty clearly as you can see below:

The LED can be seen in the upper left quadrant on the bookcase shelf (click to enlarge)

That object just to the left of the LED with the 16 knobs is a metallic hot pink guitar pedal I built as a kit a few years ago. There’s another guitar pedal that same color on the shelf below. There’s enough difference between that hot pink and the magenta hue of the LED that it stands out alone on the shelf. Not bad.

Outdoors Experiments

So today, on a relatively bright day I tried this out on the front deck. There was a lot more ambient light than in my study and I was able to set the LED a full 3 meters (9 feet 10 inches) from the front of the robot. How would we fare in this very different environment?

The view from the robot

The 3mm LED I’d been using turned out to be too small at that distance, so I replaced it with a larger pink LED and turned up the brightness. Surprisingly, the LED is clearly visible below:

This is a pretty happy result: the robot is able to discern a 5mm pink LED at a distance of 3 meters, using the default Raspberry Pi 640×480 camera. This required nothing but a camera I already had and less than a dollar’s worth of parts.

The Python code for this is called pink_led.py and is available in the scripts project on github.

Next step: figure out how to convert that little cluster of pixels into an X coordinate (between 0 and 640), then use that to set the robot’s trajectory. It could be that converting that trajectory into a compass heading and then following that heading might get the robot reliably to that first course marker.
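
The pixel-to-bearing conversion itself is simple geometry: the offset from the image center, as a fraction of the image width, times the camera’s horizontal field of view (somewhere around 54 to 62 degrees depending on which Pi camera module it is; the figure below is an assumption). A linear approximation is probably close enough for aiming:

```python
IMAGE_WIDTH = 640
CAMERA_H_FOV = 62.2           # degrees; assumed value, depends on the camera module

def bearing_from_pixel(x):
    """Angle of the blob relative to straight ahead: negative = port, positive = starboard."""
    fraction = (x - IMAGE_WIDTH / 2.0) / IMAGE_WIDTH
    return fraction * CAMERA_H_FOV

print(bearing_from_pixel(480))    # a blob at x=480 is roughly 15 degrees to starboard
```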

But I’m still going to install that tactical hunting super mini red dot laser.

Facilius Est Multa Facere Quam Diu

KR01 obstacle avoidance

This is another article in the series about the KR01 robot.

The title translates as “It is easier to do many things than one thing consecutively“, attributed to Quintilian, a Roman educator about two thousand years ago. It sounds curiously like a motto for either multi-tasking or multi-threaded processing. But also for how I’ve approached designing and building the KR01 robot.

One thing I’ve learned about building a robot is that, at least for me, the hardware and software are ever-changing. I guess that’s what makes the journey enjoyable. In my last post I ended up with too much philosophising and not enough about the robot, so this one makes up for that and provides an update of where things are at right now.

But before we get into the hardware and software I thought to mention that I’ve been quite happily welcomed into the weekly videoconferences of the Dallas Personal Robotic Group (DPRG) and about a week ago did a presentation to them about the KR01:

The DPRG is one of the longest-standing and most experienced personal robotics clubs in the US, with expertise across many aspects of robotics. They’re also some very friendly folks, and I’ve really been enjoying chatting with them. New friends!

Hardware

So… the biggest issue with the KR01 was imbalance. There was simply no room on the chassis for the big, heavy Makita 3.0Ah 18 volt power tool battery so I’d, at least temporarily, hung it off the back on a small perforated aluminum plate.

[You can click on any of the images on this page for a larger view.]

Earlier design: the black platform at the aft end held the Makita battery

The KR01 without a battery weighed 1.9 kilograms (4 lbs 3oz), so at 770 grams (1 lb 11 oz) the Makita battery and its holder added about 40% to the robot’s weight. By comparison, my little KRZ01 robot weighs 160 grams, including the battery. Since that photo was taken the robot has gained a bit of weight, now up to 2.1 kg. But that imbalance remained.

With the battery hanging off the back, when trying to spin in place the KR01 would typically sit on one back wheel and rotate around that wheel, but which wheel was almost arbitrary. I kinda knew something like this might happen but I was willing to keep moving forward on other parts of the robot (“facilius est…“), because fixing that problem meant making some big hardware changes.

Since I’d gotten to the point where I was actually testing the robot’s movement, I finally bit the bullet and bought another piece of 3mm Delrin plastic. This time I positioned the battery as close to the physical center of gravity of the robot as possible. Then I spent a lot of time reorganising where things fit, as well as finally adding all of the sensors I’d been planning. I think I might have gone overboard a bit. The current design is shown below.

The redesigned KR01 with space for the battery closer to the center of gravity

The copper shielding is my attempt at cutting down on the amount of high-pitched ambient noise put out by the speaker (hidden underneath the front Breakout Garden Mini, next to the servo). This didn’t seem to make much difference but it looks kinda cool and a bit NASA-like so I’ll leave it for now. Yes, the KR01 can now beep, bark, and make cricket sounds. It also has a small 240 x 240 pixel display screen (visible at top center) and two wire feelers to theoretically protect the upper part of the robot, basically an emergency stop. I have no idea how well that will work. The inside of my house is pretty hazardous for a small defenseless robot.

Side view of the KR01 robot’s chassis, showing the power/enable switches and a Samsung 250GB SSD lodged underneath

Earlier versions of the robot had a 15cm range infrared sensor for the center which, being digital, replied with a yes or no. It worked as advertised, but 15cm wasn’t enough distance to keep the robot from running into things, even at half speed, so I’ve since replaced it with a longer range analog infrared sensor (the long horizontal black thing in the cutout in the plastic bumper, shown below) that I’ve coded to react to two separate ranges, “short” (less than 40cm) and “long” (triggered at about 52cm). This permits the robot to slow down rather than stop when it gets within the longer range of an object.
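
The two-range logic is nothing fancier than a pair of thresholds on the converted distance reading, roughly like this sketch (the 40cm and 52cm figures are the ones above; converting the raw analog value to centimeters depends on the particular sensor and ADC):

```python
def range_reaction(distance_cm):
    """Map an analog IR distance reading to a simple reaction."""
    if distance_cm < 40.0:        # "short" range: something is close, stop
        return 'stop'
    elif distance_cm < 52.0:      # "long" range: something is coming up, slow down
        return 'slow'
    return 'cruise'
```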

The robot currently has a 15 cm range infrared sensor on each side but I’m planning to replace them with a pair of Sharp 10-150cm analog distance sensors, which hopefully will permit some kind of wall-following behaviour. I’m eagerly awaiting another package in the post…

Front view of the KR01 robot’s chassis, with polycarbonate bumper-sensor and infrared distance sensors

The KR01 now sports a variety of sensors from Adafruit, Pimoroni’s Breakout Garden, Pololu and others, including:

  • a servo-mounted 4m ultrasonic sensor, or
  • a servo-mounted Time of Flight (ToF) laser rangefinder with a range of 4m and accuracy of 25mm
  • four Sharp digital 15cm range infrared distance sensors
  • a Sharp analog 80cm infrared ranging sensor
  • an infrared PIR motion detector (for detecting humans and cats)
  • an X-Band Bi-Static Doppler microwave motion detector (for detecting humans and cats through walls)
  • a 9 Degrees of Freedom (DoF) sensor package that includes Euler and Quaternion orientation (3 axis compass), 3 axis gyroscope, angular velocity vector, accelerometer, 3 axis magnetometer, gravity vector and ambient temperature
  • a 6 channel spectrometer
  • two 5×5 RGB LED matrix displays
  • one 11×7 white LED matrix display
  • an HDMI jack for an external monitor (part of the Raspberry Pi)
  • WiFi capability (part of the Raspberry Pi)
  • a microphone
  • a speaker with a 1 watt amplifier

This is all powered by a Makita 18V power tool battery. The clear polycarbonate bumper (inspired by David Anderson’s SR04 robot) has six lever switches, with two wire feelers protecting the upper part of the robot.

All that just for the territory of my lounge. Or maybe my front deck.

Software

So while working on hardware I’ve also been working on the software. I’ve been writing, in Python, a Behaviour-Based System (BBS) based on a Subsumption Architecture as the operating system for the KR01. The concept of a BBS is hardly new: Rodney Brooks and his team at MIT were pioneering this area of research back in the 1980s, and it’s an entire field of research in its own right. Here are a few links as a beginning:

The idea with a BBS is that each sensor triggers either a “servo” or a “ballistic” behaviour. Servo Behaviours (AKA “feedback and control systems”) immediately alter the robot’s movement or make a temporary change in its behaviour, such as speeding up or slowing down, in a relatively simple feedback loop. Ballistic Behaviours (AKA “finite state machines”) are small sub-programs that are (theoretically) meant to run from start to completion without interruption. The video below shows a ballistic behaviour that might occur should the robot find itself facing a wall: it backs up, scans the neighbourhood for a place where there’s no barrier, then attempts to drive in that direction. Yes, that is a sonar “ping” you hear.

This video shows a simple obstacle avoidance behaviour.

My understanding (i.e., the way I’m writing the software) is that for every sensor there is an associated servo or ballistic behaviour, and that each of these behaviours is prioritised so that the messages sent by the sensors contend with each other (in the subsumption architecture), the highest priority message being the one that the robot executes. It does this in a 20ms loop, with ballistic behaviours taking over the robot until they are completed or subsumed by a higher-priority ballistic behaviour. It’s the emergent behaviour arising from these programmed behaviours that gives the robot its personality. When the robot has nothing to do it could begin a “cruising around” behaviour, whistle a tune, or go into standby mode awaiting the presence of a cat.
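
A toy version of that arbitration loop might look like the following. It’s only a sketch of the idea, with a made-up motor interface and behaviour names (the real code has proper messaging and actual behaviours), but it shows the basic pattern: every cycle, whichever active behaviour has the highest priority gets the motors.

```python
import time

class Behaviour:
    """Base class: priority 0 is highest. A ballistic behaviour reports itself active
    until it finishes; a servo behaviour is active only while its sensor triggers."""
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority
    def is_active(self):
        raise NotImplementedError
    def act(self, motors):
        raise NotImplementedError

class Cruise(Behaviour):
    """Lowest-priority default: just keep wandering forward."""
    def __init__(self):
        super().__init__('cruise', priority=9)
    def is_active(self):
        return True
    def act(self, motors):
        motors.set_speeds(0.3, 0.3)     # 'motors' stands in for whatever motor interface exists

def arbitrate(behaviours, motors, loop_ms=20):
    """Run the highest-priority active behaviour on each cycle of the loop."""
    while True:
        active = [b for b in behaviours if b.is_active()]
        if active:
            winner = min(active, key=lambda b: b.priority)
            winner.act(motors)          # everything lower-priority is subsumed this cycle
        time.sleep(loop_ms / 1000.0)
```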

Not that my cat pays much attention to the robot. In the robot-plus-cat experiments that have been performed in our home laboratory he sniffs at the robot a bit and nonchalantly stays out of its way.

He wasn’t fooled for a moment by the barking.

After Despair, Some Joy

This article is the fourth in the multi-part series “Building the KR01 Robot” ( 1 | 2 | 3 | 4 ). Further articles can be found tagged “KR01“.

Hsü

Sometimes when you’re experimenting you fail. Sometimes over and over. I had two failures this week, one that had a solution and one that didn’t seem to. But as the I Ching says: perseverance furthers.

Tank Treads

Celebration of a tank failure…

After some deliberation I’d decided to base my robot on a tank. I’d considered the benefits of the “dual-differential drive configuration, balanced by a non-driven tail wheel caster” of David Anderson’s SR04 robot and thought the OSEPP Tank Kit might be able to provide a suitable drive platform for my robot. One of the requirements for being able to determine the location of your robot is to accurately know how far its left and right drive wheels have traveled. This is necessary in order to perform odometry.

The SR04 has a left drive wheel and a right drive wheel and can rotate in place if the wheels turn in opposite directions (David’s robot impressively can spin in place on a table without changing position). A tank is able to do the same but requires a lot of tread slippage, and this “odometry error” would need to be accounted for somehow, perhaps using another type of locational awareness.

I hadn’t thought of the other problem, and there was a bit of a delay in even finding out that there was another problem (one of those “unknown unknowns”). During the assembly of the robot I’d taken the silicone tank treads off a few times, and one morning one of the tread pieces had torn, and it’d been my last spare. I’d contacted OSEPP and they’d been very nice in sending out some replacements. Being that this is New Zealand, shipping things here from North America took a while.

By the time the replacement treads arrived I’d gotten the robot to the point where it could perform its first test drive, so I put on the tank treads, wrote a quick Python program to move forward, turn around (do a 360 degree turn) and come back.

I put it down on the carpet and executed the program. The robot moved forward just fine (baby steps!), but as soon as it began to turn around, to my horror the treads slipped and twisted up, partly came off and then caught under the robot. I hit the kill switch (actually, Control-C from the ssh session), put the treads back on and tried it again. Same result. It was better on a wooden floor, but if the rotation was too fast it still sometimes did the same thing.

NZ Tomtit
New Zealand Tomtit
(image: Graham Commins (CC))

Now, I can’t blame the OSEPP people for this. I measured my robot and without a battery it weighs 1.7kg. With the smaller 12 volt Makita power tool battery it’s up to 1.92kg, and with the Makita 18V 3.0Ah battery it tops the scale at 2.35kg (5.2 lbs). If one watches the OSEPP Tank promo video on YouTube, their little tank zips around with just an Arduino and a six AA cell battery pack, so it’s carrying nowhere near as much hardware. I’d be comparing a kererū with a tomtit. Unfair. I think a robot with a weight similar to the original kit would be fine.

The kererū, or New Zealand wood pigeon,
(image: NZ Forest and Bird)

My big fat kererū just couldn’t use the tank treads. I’d considered the tread slippage problem (but not the treads-falling-off disaster) and ordered four of the OSEPP silicone wheels, and after some floor and carpet testing found that they would work. There’s enough slippage on wood floor or carpet that the four wheels do a reasonably good in-place rotation.

Thank goodness. I didn’t want to have to go back to the drawing board.

The Other, Seemingly Intractable Problem

OSEPP Motor Encoder: four connections!

The other problem was with the motor encoders. These are tiny Hall Effect sensors mounted to the motor shaft (before the gearbox) and are meant to provide a pair of waveforms (labeled A and B) that permit a determination of motor direction, speed and the number of times the shaft has rotated.

I’d spent days debugging this. I have this cool old Iwatsu SS-5710 oscilloscope I bought at a local pawn shop. It’s everything you want in an oscilloscope and more: complicated, mysterious, lots of knobs, bright image, nice lines, good coloring, even engages in scintillating dinner conversation. Okay, maybe not that last one.

I’d tried everything to get a usable waveform off of the sensors. Tantalisingly, I was able to get some tiny waveforms, similar to those NASA receives from Mariner 9 at the edge of the universe. But not enough to peg a Raspberry Pi GPIO pin. I’d disassembled the robot, adjusted the encoders, tried adding a 74HC14 Schmitt trigger circuit (forgetting that the encoders already do this), nothing worked. Ugh.

This morning I was reconnecting the 6 pin IDC cable I’m using between the chassis and the platform holding the Raspberry Pi, and I remembered that there were six pins. Six pins. The left and right motors each had an A and a B (i.e., A1, B1, A2, B2). What had I used the other two pins for?

Oh yeah… power.

I hadn’t provided power to the motor encoders. As soon as I connected ground and 5 volts to the encoders and fired up the robot while connected to the Iwatsu, lo and behold: I had some square waves. Success! If it’d been later in the day I would have cracked open a beer.

Motor Encoder Output
Finally, some square waves!

I then rummaged through all manner of half-completed Python scripts to find one that, with some slight modification, was able to count up and count down as the motor drove forward and back. So… the robot now has functional motor encoders.
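
For the curious, the counting itself is standard quadrature decoding: on each edge of channel A, look at channel B to decide whether to count up or down. A minimal sketch using the RPi.GPIO library (the pin numbers are placeholders, and the actual script differs):

```python
import RPi.GPIO as GPIO

ENCODER_A = 17        # placeholder BCM pin numbers
ENCODER_B = 18

count = 0

def on_edge(channel):
    """Count up or down depending on the phase relationship between A and B."""
    global count
    a = GPIO.input(ENCODER_A)
    b = GPIO.input(ENCODER_B)
    count += 1 if a != b else -1      # which direction is "up" depends on the wiring

GPIO.setmode(GPIO.BCM)
GPIO.setup(ENCODER_A, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(ENCODER_B, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.add_event_detect(ENCODER_A, GPIO.BOTH, callback=on_edge)
```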

Next: to begin figuring out how to write a PID Controller.

The Wiring Begins

first KR01 prototype

This article is the third in the multi-part series “Building the KR01 Robot” ( 1 | 2 | 3 | 4 ), and describes beginning to design and build the hardware of the KR01 robot project.

With the robot chassis largely complete (at least for now) I began to plan out where I’d mount the Raspberry Pi, motor controller and other PC boards.

Shakey the Robot

Historically, robots seem to generally have mounted their drive systems on the bottom of a horizontal platform, with their control systems on the top. You can even see this on Shakey, the first autonomous robot, which was developed back in the late 1960s at Stanford Research Institute (SRI).

My modified OSEPP Tank Kit provided a horizontal area to mount parts, but I’d have to drill into the aluminum and that seemed rather inflexible, and the mounting holes of the various components didn’t match the OSEPP beams, which use a 16mm grid.

I wanted to mount my components on something lightweight and non-conductive, cheap, and easy to modify and/or replace. Some kind of plastic seemed right. I could have used acrylic (called “perspex” here in New Zealand) but it tends to be rather brittle and easy to crack or split, so I settled on Delrin (a trade name for polyoxymethylene plastic), which is a bit softer, tougher, and almost indestructible. Delrin is often used for making bearings.

The Lower Bits

One thing I learned long ago: it’s all very well to be able to build something, but you also need to be able to disassemble it easily. I figured that I needed some way to gather the various wires from the motors and motor encoders in such a way that I could use detachable cables to easily remove the top platform from the chassis. So one principle I’m using on the KR01 is to use jumper wires and single and dual header pins for the connections, so that things don’t have to be permanently soldered together.

chassis interface pinout
Chassis Interface Board pinouts

For what I decided to call the Chassis Interface Board I planned to use two 6 pin IDC cables for the connections to the upper part of the robot, and one of the Adafruit Perma-Proto boards to hold all the parts and organise the wiring, which just happened to fit into the area available. I mapped out the pin layout and then soldered some header pins to the board. I also cut a bit of 10mm aluminum “L” section to hold the SPST power switch, the DPDT motor kill switch, and a status LED (you can see this in the photo below).

chassis interface board
The Power Controls (left) and Chassis Interface Board (right)

I ended up drilling two small holes (the horror!) in the aluminum rails to hold some nylon standoffs, then mounted the Chassis Interface Board and wired things up.

Even with all my planning I didn’t get it right the first time: I’d made a wiring mistake. Apart from that, now that I’m done with these components, the nice thing is that because I’ve not soldered everything together (except in creating the components themselves) I can take it all apart when I decide to make a design change. And that’s bound to happen.

The Platform

I decided to mount my components onto a black Delrin platform using nylon standoffs, so I bought an assortment of 2.5mm black nylon standoffs from Adafruit so there’d be no issue with short circuits. A robot used for off-road or robot combat might need to use metal for strength, but the KR01 is strictly a domesticated house robot.

The closest plastics store in Petone didn’t carry sheet Delrin but Macplas up in Auckland did. After a brief phone conversation about which plastics were most appropriate for a small robot, I ordered some black 3mm Delrin for the platform and some clear 3mm polycarbonate for the front bumper. I find that when you involve people in the details of what you’re doing they can use their expertise to best help you.

component layout
Component Layout

Rather than start with the Delrin (which is kinda expensive) I prototyped the board first using a milky white nylon chopping board I bought at the Warehouse for $5. Yes, it occurred to me that I could have just used the nylon but the Delrin is thinner and much cooler. I mean, who makes a robot out of a chopping board?

I taped some paper to the plastic and laid out the various components, then drilled the holes. They say “measure twice, cut once” but I still made a mistake. So maybe it should be “measure thrice, cut once”.

Stuff Begins Arriving in the Post…

Early Prototype

This article is the second in the multi-part series “Building the KR01 Robot” ( 1 | 2 | 3 | 4 ), and describes beginning to design and build the hardware of the KR01 robot project.

Inspired by David Anderson’s SR04 robot (in particular, his YouTube video) I searched around for a suitable robot platform, the kind of chassis and motor that fit the scale of the design-in-my-head, and a few other factors. Having read David’s documentation of the project I rather liked his “very loose” design criteria:

  1. Survive in a wide range of (cluttered) human environments autonomously and continuously, without getting stuck.
  2. Provide a robust and reliable platform for developing navigation and behavior software.
  3. Be entertaining and aesthetic for the local human population.

I thought I’d have a go at updating what he’d done in 1998 to see what 22 years might have brought to the world of “personal robots”. I’d been perusing the Adafruit and Pimoroni websites and had seen all manner of pretty amazing sensors for prices I could afford. It was time to stop making Raspberry Pi night lights and try something more ambitious.

I admit to having strayed from one of David’s stronger design principles in the SR04, that being his “dual-differential drive platform with the geometry of an 11 inch circle”. That symmetry is valuable and I’m hoping that my tank-tread design (or four wheels if the treads don’t work out so well) won’t suffer. Watching the SR04 rotate continuously in place on a table without wandering is pretty impressive. But I have to start somewhere. I can always modify the design…

OSEPP Tank Kit
The OSEPP Tank

So, I settled on an OSEPP Tank Kit. It’s a bit like Lego or Meccano in that the kit is provided as a set of red-anodised aluminum beams, some accessory plates and connector bits, using 4mm nuts and bolts to hold things together. There’s some flexibility in this, and OSEPP sells accessory kits. I bought an extra set of beams, as I knew of one deficiency in the Tank Kit I wanted to immediately change: it has four wheels but only two motors: the port motor at the front, the starboard motor at the rear.

Since David’s design uses a PID Controller I knew I’d need to use motor encoders, which was one of the reasons I chose the OSEPP kit: they offer a pair of motor encoders using Hall Effect sensors. I’d seen an image of two OSEPP motors and encoders mounted along a single beam, quite an elegant design. It seemed prudent to have both of the encoders on the same pair of motors (either the front or the rear). The Tank would have to be wider and I also wanted four drive motors, not just two. Using tank treads is not very efficient so I figured there’d be insufficient horsepower to drive a robot with only two.

In New Zealand orders from overseas can take anywhere from a few days to weeks in waiting, so I started making decisions and putting in orders. Locally I bought some stainless 4mm hardware from Mitre 10 and Coastal Fasteners. (See Vendors on the NZPRG wiki.)

The Kit Arrives

I’m not going to do one of those ridiculous unboxing videos. Yes, the box arrived. I opened it. I didn’t keep track much with videos or photos. I was playing, not performing.

prototyping in the kitchen
Playing on the Kitchen Table

The OSEPP kit is well-designed, though it’s impossible not to leave a bit of rash on the red anodisation. If you simply built the Tank Kit as intended this wouldn’t be an issue so much, but I tried at least four or five different permutations before settling on one design, and then had to modify it several times when I tried adding things like the front bumper supports and the mount for the power switches.

Beautifully Machined Wheels

The hardware is fun to work with. Unlike Lego, where it can be a struggle to connect things securely, the OSEPP kit’s parts are held together by 4mm stainless steel nuts and bolts.

I locally sourced some stainless lock nuts (also called “nyloc nuts”) as I prefer them to the serrated flange nuts provided with the kit (though these work just fine too).

The Motor Encoder kit hadn’t arrived, so I built it without remembering that photo I’d seen of a single beam holding both motors and their encoders. The design as shown above on the kitchen table had no place to mount the encoders. The photo below shows each pair of motors mounted to a single beam, with the motor encoders attached to the front (top) pair.

Front and rear pairs of motors; you can see the encoders mounted on the motor shafts of the front pair. The left and right motors are wired together so they’ll appear as a left drive and a right drive.

When the motor encoders finally arrived I did another round of building and came up with what I thought was the final chassis, but even that had to change once I tried to mount the tank treads. As you can see, there’s not much clearance between the front bumper and the treads. And of course, the front bumper was only a stand-in until I could begin building the real bumper.

Next time: we begin the wiring and mounting the platform for the circuitry…

The KR01 Robot Project

This article is the first in the multi-part series “Building the KR01 Robot” ( 1 | 2 | 3 | 4 ), and describes some of the background leading up to the project. Further posts can be found under the KR01 tag.

About a month ago, when I started, I hadn’t really thought much about the future of my rather humble robot project, certainly not enough to consider where it might lead. Certainly not enough to think about starting a robotics group. At this point I have no idea where that group will lead (if anywhere), but I can at least blog about the project itself.

A Little History

Z80 Single Board Computer, ca. 1979

In my senior year in high school in 1979 I designed and built a robot. It used an 8 bit Zilog Z80 single board computer with 1K of RAM running at a whopping 2MHz. The PC board was about one square foot (30cm on a side), had a hexadecimal keypad and a six-digit red LED display. It sat on top of a chassis I built out of aluminum and some large circular PC boards I found in a surplus shop on the outskirts of Des Moines, Iowa, that apparently were from the insides of a missile. It used two large DC motors, a wheel caster and a 6V lead acid battery. It was an ambitious project for a high school student and I never quite got the ultrasonic sensors working properly (the schematic was from a fish finder), but it was a good learning experience, a lot of fun, and led eventually to an IT career.

Over that career I had the fortunate experience of working at NASA Headquarters for a few years, where, as a fellow Mac enthusiast, I met Dave Lavery, the head of the robotics division. At the time he had a prototype of the Mars Sojourner rover sitting on his desk. I remember marvelling at the beauty of the machining of the wheels, and wishing I had that kind of budget (and a machine shop). While helping to set up a public demo I also had the opportunity to pilot a telerobotics sled under the ice in Antarctica. Not surprisingly it was an amazing place to work.

NASA Sojourner Mars Rover
Photo courtesy NASA

Years have passed and I now live in New Zealand, where most of my creative energy has over the past few years been in music (I have an improvisational abstract band named Barkhausen; we just finished our second CD).

The combination of experimenting with some DIY microcontroller-based Eurorack synthesizer modules and the advances in the world of Raspberry Pi has drawn me back into robotics. For the past few months I’ve been purchasing various playthings from Pimoroni and Adafruit and doing some experimenting.

While browsing around doing research for the project I came upon a YouTube video “David Anderson demonstrates his method for creating autonomous robots“, where David showed a local group of people some of his robots:

David Anderson demonstrates his method for creating autonomous robots

Now, I wouldn’t say David’s robots are the most sophisticated ones I’ve seen, not walking around, not androids with faces that move, not SRI’s Shakey nor something from NASA. But they are remarkably clever designs. He also seems like a really nice, down-to-earth guy. What struck me was the fact that his robots were within the reach of normal people to build. Something I could build.

Following David’s trail led me to the Dallas Personal Robotics Group (DPRG), which claims to be “one of the oldest special interest groups in the world devoted to personal and hobby robotics”. Undoubtedly. They were founded in 1984, five years after I’d built my robot in high school.

I ended up joining the DPRG mailing list. When I replied to one of their members’ messages and mentioned I’d started building my own robot, he was very friendly and encouraged me to blog about it. Well, the only blog I had was devoted to my band and that didn’t seem particularly appropriate. Then, last night we had some friends over for dinner and I was surprised to learn that their 9 year old girl was quite keen to learn about my robot project. So I proposed the idea of starting a Pukerua Bay robotics group.

It turns out there isn’t any national robotics group in New Zealand, nor even a local one near Wellington, so when I went shopping for a domain name and found that “robots.org.nz” was available, I bought it. We’ve gone national!

So, if you’re interested, you’re very welcome to follow me on this journey to build a robot…

Next: stuff begins arriving in the post…