The Four Corners Challenge

This is just a little ditty that I will update as things progress. It comes about from conversations on the DPRG mailing list about one of their commonly held robot contests, called the Four Corners Competition. Here’s the actual definition from April of 2018:

Objective: The robot will travel a rectangular path around a square course. The corners of the course will be marked with a small marker or cone. Before the robot makes its run, a mark or sticker will be placed on the center front of the robot and on the floor of the course. The objective is to minimize the distance between the two marks at the end of the run.

David Anderson pointed out that this contest goes back to a 1994 University of Michigan Benchmark (called UMBmark), “A Method for Measuring, Comparing, and Correcting Dead-reckoning Errors in Mobile Robots”, developed by J. Borenstein and L. Feng. As David describes it, the “concept is to drive around a large square, clockwise and counter-clockwise, while tracking the robot’s position with odometry, and stop at the starting point and measure the difference between the stopping point and the starting point. This shows how much the odometry is in error and in which direction, and allows calibration of the odometry constants and also the potential difference in size between the two wheels of a differentially driven robot. The DPRG uses this calibration method as a contest.” He even wrote a paper about it.

Well, I keep maintaining the purpose of my robotics journey is not to engage in competition (and I swear that I’m not a competitive person, I’m really not, no I am not), but I do think that this benchmark is a good exercise for fine-tuning a robot’s odometry. Nuthin’ to do with competition, nope, just a challenge.

The key to this challenge seems to be twofold: 1. getting the odometry settings correct; and 2. being able to accurately point the robot at that first marker. As regards the latter, the contest permits the robot to be aimed at the first marker using any method, so long as the method is removed prior to the start of the run. Over the years various approaches to this have been tried: ultrasonics, aiming the robot with a laser pointer, and so on. I tried creating a gap between two boards and seeing whether my existing VL53L1X sensor could see the gap, but then realised its field of view is 27°, so it’s not going to resolve a narrow gap at a distance of several meters. I then contemplated mounting a different, more expensive LiDAR-like sensor with a 2-3° field of view, but at 8-15 feet (the typical size of the course) that’s still not accurate enough, as the quick arithmetic below shows.
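As a rough sanity check on those numbers (my own back-of-the-envelope sketch, not part of the contest rules), the width of the spot a cone-shaped sensor beam covers grows with distance as 2·d·tan(FOV/2):

    import math

    def footprint_width(distance_m: float, fov_deg: float) -> float:
        """Width of the area a cone-shaped sensor beam covers at a given distance."""
        return 2.0 * distance_m * math.tan(math.radians(fov_deg / 2.0))

    # VL53L1X with its 27 degree field of view, 3 metres from the target:
    print(footprint_width(3.0, 27.0))   # ~1.44 m: far too wide to resolve a narrow gap

    # A 2-3 degree "LiDAR-like" sensor at 15 feet (~4.6 m):
    print(footprint_width(4.6, 3.0))    # ~0.24 m: still coarse for aiming at a point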

Tactical Hunting Super Mini Red Dot Laser

This challenge has somehow lodged itself in the back of my head, like the buzzing of a mosquito in a darkened bedroom. Since I may end up aiming the robot with a laser pointer, I’ve put in an order from somewhere in China for a tiny “tactical hunting super mini red dot laser” (which kinda says it all). I’ll definitely install the tactical hunting super mini red dot laser in any case, just ’cause it will look so cool and dangerous. But a laser pointer feels a bit like cheating: it’s not the robot doing the hard work, it’s like aiming a diapered, blindfolded child towards grandma’s waiting knees and hoping she makes it there. Hardly autonomous.

A Possible Solution: The Pi Camera

I’ve been planning to install a Raspberry Pi camera on the front of the KR01 robot for a while, and since I was going to have a camera available I thought: heck, the robot will at the very least be facing that first marker, so why not use its camera to observe the target and let it try to figure out its own way there? No hand-holding, no laser pointer aiming, no diapers, no grandma. Autonomous.

The Pink LED

So what would I use for a target? How about an LED? What color is not common in nature? What color LED do I have in stock? Pink (or actually, magenta). So I mounted a pink LED onto a board with a potentiometer to adjust its brightness, with a 9 volt battery as the power source. Simple enough.

The Raspberry Pi camera’s resolution is 640×480. I wrote a Python script to grab a snapshot from the camera as an x,y array of pixels. I’m actually processing only a subset of the rows nearer the center of the image, since the robot is likely to be looking for the target of the Four Corners Challenge somewhat near the vertical center of the image, not closer to the robot or up in the sky.
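Something along these lines does the capture and row selection. This is a minimal sketch assuming the legacy picamera library, and BAND_HEIGHT is a made-up parameter, so the actual pink_led.py may well differ:

    import picamera
    import picamera.array

    WIDTH, HEIGHT = 640, 480
    BAND_HEIGHT = 60   # hypothetical: only scan this many rows around the vertical centre

    def grab_centre_band():
        """Capture one frame and return only the rows near the centre of the image."""
        with picamera.PiCamera() as camera:
            camera.resolution = (WIDTH, HEIGHT)
            with picamera.array.PiRGBArray(camera) as output:
                camera.capture(output, format='rgb')
                pixels = output.array            # numpy array, shape (480, 640, 3)
        top = HEIGHT // 2 - BAND_HEIGHT // 2
        return pixels[top:top + BAND_HEIGHT]     # shape (60, 640, 3)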

The Four Corners Course with target LED at the 1st corner

I found an algorithm online to measure the color distance between two RGB values. The color distance is focused mostly on hue (the angle on the color wheel), so if that particular pink is sufficiently unusual in the camera image, the robot should be able to pick it out regardless of relative brightness. I took a screenshot of the camera’s output, opened it up in GIMP and captured the RGB color of the pink LED. I avoided the center of the LED, which showed up as either white (R255, G255, B255, hue = nil) or very close to white, and instead chose a pixel that really displayed the pinky hue (R151, G55, B180, hue = 286).
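I won’t reproduce the exact formula I found, but a hue-weighted comparison along these lines captures the idea; the weights here are my own guesses, with colorsys doing the RGB-to-HSV conversion:

    import colorsys

    TARGET_RGB = (151, 55, 180)    # the sampled pink, hue ~286 degrees

    def to_hsv(rgb):
        """Convert an RGB triple (0-255 per channel) to (hue in degrees, saturation, value)."""
        r, g, b = (c / 255.0 for c in rgb)
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        return h * 360.0, s, v

    def colour_distance(rgb, target=TARGET_RGB):
        """Distance dominated by hue (the angle on the colour wheel); 0.0 means identical."""
        h1, s1, v1 = to_hsv(rgb)
        h2, s2, v2 = to_hsv(target)
        dh = min(abs(h1 - h2), 360.0 - abs(h1 - h2)) / 180.0   # wrap around the wheel
        return 0.8 * dh + 0.1 * abs(s1 - s2) + 0.1 * abs(v1 - v2)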

For each pixel in the array I calculated the color distance between its color and that of our target pink. To be able to see the results of the processing I then printed out, not the pixel array of the original image, but an enumerated conversion of the color distance into just ten possible values. So magenta is very close, red less close, yellow even less, et cetera, down to black (not close at all). The image is what we might call a “color distance mapping”. I printed each row to the console as it was processed, so what you’re seeing is simply a screen capture of the console, not a generated image.
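To get that console “color distance mapping”, each distance just needs to be bucketed into one of ten levels and echoed with a terminal colour. A rough sketch, with my own choice of ANSI colour codes rather than whatever the original script uses:

    # ANSI 256-colour codes, roughly ordered from "very close" to "not close at all":
    # magenta, red, orange, yellow, green, cyan, blue, grey, dark grey, black.
    PALETTE = [201, 196, 208, 220, 118, 51, 27, 245, 238, 232]

    def print_distance_row(distances):
        """Print one image row as coloured blocks, one block per pixel."""
        out = []
        for d in distances:               # d assumed to be normalised to 0.0 .. 1.0
            level = min(int(d * 10), 9)   # quantise to ten discrete levels
            out.append('\033[48;5;%dm ' % PALETTE[level])
        print(''.join(out) + '\033[0m')   # reset terminal colours at the end of the row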

What the Raspberry Pi camera sees (click to enlarge)

My first attempts were of just the LED against a dark background, enough to try out the color distance code. Since that seemed to work, I tried it against a much more complicated background: the bookcase in my study (see photo). The distance from the camera to the pink LED was about 2 meters. Despite several objects on my bookcase being a fairly close match to the LED’s color, things still worked: the LED showed up pretty clearly, as you can see below:

The LED can be seen in the upper left quadrant on the bookcase shelf (click to enlarge)

That object with the 16 knobs, just to the left of the LED, is a metallic hot pink guitar pedal I built as a kit a few years ago. There’s another guitar pedal in that same color on the shelf below. There’s enough difference between that hot pink and the magenta hue of the LED that the LED stands out alone on the shelf. Not bad.

Outdoors Experiments

So today, a relatively bright day, I tried this out on the front deck. There was a lot more ambient light than in my study, and I was able to set the LED a full 3 meters (9 feet 10 inches) from the front of the robot. How would we fare in this very different environment?

The view from the robot

The 3mm LED I’d been using turned out to be too small at that distance, so I replaced it with a larger 5mm pink LED and turned up the brightness. Surprisingly, the LED is clearly visible below:

This is a pretty happy result: the robot is able to discern a 5mm pink LED at a distance of 3 meters, using the Raspberry Pi camera at its default 640×480 resolution. This required nothing but a camera I already had and less than a dollar’s worth of parts.

The Python code for this is called pink_led.py and is available in the scripts project on GitHub.

Next step: figure out how to convert that little cluster of pixels into an X coordinate (between 0 and 640), then use that to set the robot’s trajectory. Converting that trajectory into a compass heading and then following that heading might be enough to get the robot reliably to that first course marker.
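A sketch of that conversion, under a couple of assumptions: the “hits” are pixels whose colour distance falls below some threshold, and the Pi camera’s horizontal field of view is roughly 54° (a guess on my part; it really should be measured or looked up for the actual camera module):

    WIDTH = 640
    HFOV_DEG = 54.0   # assumed horizontal field of view; depends on the camera module

    def target_bearing(hit_xs):
        """Given the x coordinates of matching pixels, return (centroid x, angular offset in degrees)."""
        if not hit_xs:
            return None                                   # target not seen in this frame
        x_centroid = sum(hit_xs) / float(len(hit_xs))     # 0 .. 640
        offset_deg = (x_centroid - WIDTH / 2.0) / WIDTH * HFOV_DEG
        return x_centroid, offset_deg                     # negative means the target is to the left

Adding that offset to the robot’s current compass heading would give a heading to hold while driving toward the first corner marker.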

But I’m still going to install that tactical hunting super mini red dot laser.
