Stuff Begins Arriving in the Post…

Early Prototype

This article is the second in the multi-part series “Building the KR01 Robot” ( 1 | 2 | 3 | 4 ), and describes beginning to design and build the hardware of the KR01 robot project.

Inspired by David Anderson’s SR04 robot (in particular, his YouTube video), I searched around for a suitable robot platform, considering the kind of chassis and motors that fit the scale of the design-in-my-head, and a few other factors. Having read David’s documentation of the project, I rather liked his “very loose” design criteria:

  1. Survive in a wide range of (cluttered) human environments autonomously and continuously, without getting stuck.
  2. Provide a robust and reliable platform for developing navigation and behavior software.
  3. Be entertaining and aesthetic for the local human population.

I thought I’d have a go at updating what he’d done in 1998, to see what 22 years of progress might have brought to the world of “personal robots”. I’d been perusing the Adafruit and Pimoroni websites and had seen all manner of pretty amazing sensors for prices I could afford. It was time to stop making Raspberry Pi night lights and try something more ambitious.

I admit to having strayed from one of David’s stronger design principles in the SR04, that being his “dual-differential drive platform with the geometry of an 11 inch circle”. That symmetry is valuable and I’m hoping that my tank-tread design (or four wheels, if the treads don’t work out so well) won’t suffer for the lack of it. Watching the SR04 rotate continuously in place on a table, without wandering, is pretty impressive. But I have to start somewhere. I can always modify the design…

OSEPP Tank Kit
The OSEPP Tank

So, I settled on an OSEPP Tank Kit. It’s a bit like Lego or Meccano in that the kit is provided as a set of red-anodised aluminum beams, some accessory plates and connector bits, using 4mm nuts and bolts to hold everything together. There’s some flexibility in this, and OSEPP sells accessory kits. I bought an extra set of beams, as I knew of one deficiency in the Tank Kit I wanted to change immediately: it has four wheels but only two motors (the port motor at the front, the starboard motor at the rear).

Since David’s design uses a PID controller I knew I’d need motor encoders, which was one of the reasons I chose the OSEPP kit: they offer a pair of motor encoders using Hall effect sensors. I’d seen an image of two OSEPP motors and their encoders mounted along a single beam, quite an elegant design. It seemed prudent to have both encoders on the same pair of motors (either the front or the rear). The Tank would have to be wider, and I also wanted four drive motors, not just two. Tank treads are not very efficient, so I figured two motors wouldn’t provide enough horsepower to drive the robot.
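
Just to give a flavour of what that entails, here’s a minimal PID loop sketched in Python (the names and gains are illustrative, not the eventual KR01 code): the encoder supplies a measured wheel velocity and the controller output nudges the motor power toward a target.

    class PID:
        """A minimal PID controller; gains and units are illustrative."""

        def __init__(self, kp, ki, kd, setpoint=0.0):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.setpoint = setpoint      # target velocity, e.g. encoder ticks/sec
            self._integral = 0.0
            self._last_error = 0.0

        def update(self, measured, dt):
            """Return a power correction from the encoder-measured velocity
               and the elapsed time dt (in seconds) since the last call."""
            error = self.setpoint - measured
            self._integral += error * dt
            derivative = (error - self._last_error) / dt if dt > 0 else 0.0
            self._last_error = error
            return (self.kp * error
                    + self.ki * self._integral
                    + self.kd * derivative)

    # e.g., called every 50ms from the motor loop:
    #   correction = left_pid.update(left_encoder_ticks_per_sec, 0.05)
    #   left_motor_power += correction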

In New Zealand, orders from overseas can take anywhere from a few days to several weeks to arrive, so I started making decisions and putting in orders. Locally I bought some stainless 4mm hardware from Mitre 10 and Coastal Fasteners. (See Vendors on the NZPRG wiki.)

The Kit Arrives

I’m not going to do one of those ridiculous unboxing videos. Yes, the box arrived. I opened it. I didn’t document much of it with videos or photos. I was playing, not performing.

Playing on the Kitchen Table

The OSEPP kit is well-designed, though it’s impossible not to leave a few scratches on the red anodisation. If you simply built the Tank Kit as intended this wouldn’t be much of an issue, but I tried at least four or five different permutations before settling on one design, and then had to modify it several times when I tried adding things like the front bumper supports and the mount for the power switches.

Beautifully Machined Wheels

The hardware is fun to work with. Unlike Lego, where it can be a struggle to connect things securely, the OSEPP kit’s parts are held together by 4mm stainless steel nuts and bolts.

I locally sourced some stainless lock nuts (also called “nyloc nuts”) as I prefer them to the serrated flange nuts provided with the kit (though these work just fine too).

The Motor Encoder kit hadn’t yet arrived, so I did my first build without remembering that photo I’d seen of a single beam holding both motors and their encoders. The design shown above on the kitchen table had no place to mount the encoders. The photo below shows each pair of motors mounted to a single beam, with the motor encoders attached to the front (top) pair.

Front and rear pairs of motors; you can see the encoders mounted on the motor shafts of the front pair. The left and right motors are wired together so they’ll appear as a left drive and a right drive.

When the motor encoders finally arrived I did another round of building and came up with what I thought was the final chassis, but even that had to change once I tried to mount the tank treads. As you can see, there’s not much clearance between the front bumper and the treads. And of course, the front bumper was only a stand-in until I could begin building the real bumper.

Next time: we begin the wiring and mounting the platform for the circuitry…

Some Notes on Artificial Intelligence

[These are some still-disorganised notes on Robotics, Artificial Intelligence, and Knowledge Representation that will likely be moved over to the wiki once it’s up and running. Likewise, at the bottom are some references, which will also end up on the wiki…]

“SMPA: the sense-model-plan-act framework. See section 3.6 for more details of how the SMPA framework influenced the manner in which robots were built over the following years, and how those robots in turn imposed restrictions on the ways in which intelligent control programs could be built for them.”

— Brooks 1985, p.2

From Brooks “Intelligence Without Reason” [Brooks 1991]:

“There are a number of key aspects characterizing this style of work.

  • Situatedness: The robots are situated in the world — they do not deal with abstract descriptions, but with the here and now of the world directly influencing the behavior of the system.
  • Embodiment: The robots have bodies and experience the world directly — their actions are part of a dynamic with the world and have immediate feedback on their own sensations.
  • Intelligence: They are observed to be intelligent — but the source of intelligence is not limited to just the computational engine. It also comes from the situation in the world, the signal transformations within the sensors, and the physical coupling of the robot with the world.
  • Emergence: The intelligence of the system emerges from the system’s interactions with the world and from sometimes indirect interactions between its components — it is sometimes hard to point to one event or place within the system and say that is why some external action was manifested.”

Brooks notes that the evolution of machine intelligence is somewhat similar to biological evolution, with “punctuated equilibria” as a norm, where “there have been long periods of incremental work within established guidelines, and occasionally a shift in orientation and assumptions causing a new subfield to branch off. The older work usually continues, sometimes remaining strong, and sometimes dying off gradually.”

He expands upon these four concepts starting on page 14:

  • The key idea from situatedness is: The world is its own best model.
  • The key idea from embodiment is: The world grounds regress.
  • The key idea from intelligence is: Intelligence is determined by the dynamics of interaction with the world.
  • The key idea from emergence is: Intelligence is in the eye of the observer.

I might note that Brooks’ criticisms of the field of Knowledge Representation reflect my own findings, observed during the four years of my doctoral research on KR at the Knowledge Media Institute.

“It is my opinion, and also Smith’s, that there is a fundamental problem still and one can expect continued regress until the system has some form of embodiment.”

— Brooks 1991

The lack of grounding of abstract representation is evident from the almost complete failure of KR researchers to even bother to definitively explicate the two terms in the field’s title: “Knowledge” and “Representation”. How can one rationally explore a field when one doesn’t yet know what knowledge is, or where there is no epistemologically sound definition of the word representation? The greatest related advances belong to the likes of C.S. Peirce, John Dewey, Wilfrid Sellars, Richard Rorty and Robert Brandom, but this work seems (at this point in time) to be still disconnected from the concept of “embodiment” as explored in robotics (though I’m hardly the person to judge that issue). So the field is grounded neither in mathematics nor in the real world.

I must agree with Brooks that embodiment is a necessary precondition for research into intelligence. Brooks’ paper was from 1991; my doctoral programme began in 2002. I wish I’d read his paper prior to 2002. I met Doug Lenat in 2000, and over dinner in Austin we discussed the idea of my working for his company, Cycorp (the corporate home of the Cyc Ontology). The whole thing is a giant chess set, a massive undertaking that as of 2020 is still essentially doing what it did when I saw it for the first time at SRI in 1979; as Brooks says, it has simply followed the advances in computing technology without providing any real breakthroughs.

Regarding scale or size:

“The limiting factor on the amount of portable computation is not weight of the computers directly, but the electrical power that is available to run them. Empirically we have observed that the amount of electrical power available is proportional to the weight of the robot.”

— Brooks 1991, p. 18

References

[Brooks 1985] Rodney A. Brooks, “A Robust Layered Control System for a Mobile Robot”, MIT AI Memo 864, 1985.
[Brooks 1991] Rodney A. Brooks, “Intelligence Without Reason”, MIT AI Memo 1293, 1991.


Some Goals

What has become the New Zealand Personal Robotics Group began not long ago as a robotics project. So this is all pretty new. Below is a jumbled collection of some of my goals for the robotics project. Perhaps these goals will be shared by other people?

Meta-Goals

  • Explore ideas: both the philosophical as well as experience-based research within the field of Artificial Intelligence, specifically as related to robotics
  • Explore robotics hardware: the latest hobbyist-level sensors, motors, platforms, etc.
  • Explore robotics software: where have we gone since Rodney Brooks’ 1985 ideas about subsumption architectures [Brooks 1985], or odometry using PID controllers [PID]?
  • Explore Self-Adaptive Software Systems
  • Learn the Python programming language

Goals

  • Try out various sensors and robot modules:
    • Time of Flight (ToF) laser distance sensors
    • infrared sensors (various distance ranges)
    • ultrasonic sensors (via PiBorg’s UltraBorg)
    • install Hall-effect motor encoders on the motor shafts, and write a Python-based PID motor controller
    • analog-to-digital converters
    • motion detector, to detect humans (and cats)
    • a robot front bumper, modeled after David Anderson’s SR04 robot [SR04]
    • motor control (via PiBorg’s ThunderBorg)
    • Uninterruptible Power Supply (UPS) and battery management (via PiJuice)
  • Try out several robotic hardware platforms, for a low-cost, entry-level robot
    • OSEPP Tank Kit
    • Adafruit CRICKIT for Circuit Playground Express
    • robot chassis available from Adafruit:
      • Purple Aluminum Chassis for TT Motors – 2WD
      • Mini Robot Rover Chassis Kit – 2WD with DC Motors
      • Mini Round Robot Chassis Kit – 2WD with DC Motors
      • Mini 3-Layer Round Robot Chassis Kit – 2WD with DC Motors
  • motors:
    • OSEPP 25mm motors (part of the Tank kit), with encoders
    • yellow plastic motors
    • continuous-rotation micro servos
    • Pololu micro gearmotors (298:1, with encoders)
  • Try out several robotic processor platforms, for a low-cost, entry-level robot
    • Raspberry Pi (Zero, Zero W)
    • Circuit Playground Express
    • Arduino
    • other microcontrollers, e.g., Adafruit ItsyBitsy M4 Express
    • Espruino WiFi (JavaScript-based microcontroller)
    • BBC micro:bit?
  • Explore use of the I2C bus (see the sketch below)
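
A quick way to see what is actually connected is to scan the bus from Python. This is a minimal sketch, assuming the smbus2 library and bus 1 (the default on recent Raspberry Pi models); the i2cdetect -y 1 command from i2c-tools does the same job from the shell.

    from smbus2 import SMBus

    # Probe each valid 7-bit address; devices that acknowledge are present on the bus.
    with SMBus(1) as bus:                  # bus 1 on most Raspberry Pi models
        for address in range(0x03, 0x78):
            try:
                bus.read_byte(address)
                print("device found at 0x{:02x}".format(address))
            except OSError:
                pass                       # no device at this address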

Also see the page on Artificial Intelligence.

It All Started With a Night Light…

VL53L1X Time of Flight Sensor

I’d been playing around with Arduinos and Raspberry Pis for quite a while, never with much of a purpose. The closest I came to a purpose was a few months ago, when I found what must be almost a dream for a robot enthusiast: something called a “Time of Flight” sensor, basically the operating part of a LIDAR (Light Detection and Ranging), one of the main sensors used in driverless cars. This one is smaller than a postage stamp, weighs a few grams, and uses a tiny laser to measure distance up to 4m with an accuracy of 20-25mm (or about an inch at 13 feet). How much does this thing cost? About US$19. Amazing.
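
Getting a reading out of it on a Raspberry Pi takes only a few lines of Python. Here’s a minimal sketch, assuming the Pimoroni vl53l1x library and the sensor wired to I2C bus 1 at its default address:

    import time
    import VL53L1X            # Pimoroni vl53l1x library (assumed installed via pip)

    tof = VL53L1X.VL53L1X(i2c_bus=1, i2c_address=0x29)    # 0x29 is the default address
    tof.open()
    tof.start_ranging(1)      # 1 = short range, 2 = medium, 3 = long (up to ~4m)

    try:
        while True:
            distance_mm = tof.get_distance()               # distance in millimetres
            print("{} mm".format(distance_mm))
            time.sleep(0.1)
    finally:
        tof.stop_ranging()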

To the consternation of my partner I ended up building a night light. We actually have a night light already, one of those plastic Japanese 100 yen things that work just fine, but I was having none of that.

The Pi Zero W RESTful Nightlight with Pew Pew Laser

It turned out to be a Raspberry Pi Zero W based nightlight with a RESTful web interface: three Red-Green-Blue (RGB) potentiometers, read via an ADS1015 analog-to-digital converter, control the colour of a 5 x 5 LED matrix, and the number of LEDs lit is a function of the distance the ToF sensor measures to your hand.
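
The distance-to-light logic is about as simple as it sounds. Here’s a minimal sketch of the mapping (the range constant and names are illustrative, not the actual nightlight code):

    MATRIX_LEDS  = 25       # 5 x 5 LED matrix
    MAX_RANGE_MM = 1000     # assumed useful hand-detection range

    def leds_to_light(distance_mm):
        """Map a ToF reading in millimetres to a number of LEDs to light:
           the closer your hand, the more LEDs come on."""
        clamped = max(0, min(distance_mm, MAX_RANGE_MM))
        return round(MATRIX_LEDS * (1.0 - clamped / MAX_RANGE_MM))

    print(leds_to_light(50))     # hand close: nearly all 25 LEDs
    print(leds_to_light(900))    # hand far away: only a couple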

Overkill, absolutely. My partner? Not impressed. She only barely accepts its presence in the bedroom.

The KR01 Robot Project

This article is the first in the multi-part series “Building the KR01 Robot” ( 1 | 2 | 3 | 4 ), and describes some of the background leading up to the project. Further posts can be found under the KR01 tag.

About a month ago, when I started, I hadn’t really thought much about the future of my rather humble robot project, certainly not enough to consider where it might lead, and certainly not enough to think about starting a robotics group. At this point I have no idea where that group will lead (if anywhere), but I can at least blog about the project itself.

A Little History

Z80 Single Board Computer, ca. 1979

In my senior year of high school, in 1979, I designed and built a robot. It used an 8-bit Zilog Z80 single board computer with 1K of RAM running at a whopping 2MHz. The PC board was about one foot (30cm) square, with a hexadecimal keypad and a six-digit red LED display. It sat on top of a chassis I built out of aluminum and some large circular PC boards I found in a surplus shop on the outskirts of Des Moines, Iowa, that apparently came from the insides of a missile. It used two large DC motors, a wheel caster and a 6V lead acid battery. It was an ambitious project for a high school student and I never quite got the ultrasonic sensors working properly (the schematic was from a fish finder), but it was a good learning experience, a lot of fun, and eventually led to an IT career.

Over that career I had the fortunate experience of working at NASA Headquarters for a few years, where as a fellow Mac enthusiast I met Dave Lavery, the head of the robotics division. At the time he had a prototype of the Mars Sojourner rover sitting on his desk. I remember marvelling at the beauty of the machining of the wheels, and wishing I had that kind of budget (and a machine shop). While helping to set up a public demo I also had the opportunity to pilot a telerobotics sled under the ice in Antarctica. Not surprisingly it was an amazing place to work.

NASA Sojourner Mars Rover
Photo courtesy NASA

Years have passed and I now live in New Zealand, where over the past few years most of my creative energy has gone into music (I have an improvisational abstract band named Barkhausen; we just finished our second CD).

The combination of experimenting with some DIY microcontroller-based Eurorack synthesizer modules and the advances in the world of the Raspberry Pi has drawn me back into robotics. For the past few months I’ve been purchasing various playthings from Pimoroni and Adafruit and doing some experimenting.

While browsing around doing research for the project I came upon a YouTube video, “David Anderson demonstrates his method for creating autonomous robots”, in which David showed a local group of people some of his robots.

Now, I wouldn’t say David’s robots are the most sophisticated ones I’ve seen: they don’t walk around, they’re not androids with faces that move, not SRI’s Shakey nor something from NASA. But they are remarkably clever designs. He also seems like a really nice, down-to-earth guy. What struck me was the fact that his robots were within the reach of normal people to build. Something I could build.

Following David’s trail led me to the Dallas Personal Robotics Group (DPRG), which claims to be “one of the oldest special interest groups in the world devoted to personal and hobby robotics”. Undoubtedly. They were founded in 1984, five years after I’d built my robot in high school.

I ended up joining the DPRG mailing list. When I replied to one of the members’ messages and mentioned I’d started building my own robot, he was very friendly and encouraged me to blog about it. Well, the only blog I had was devoted to my band, and that didn’t seem particularly appropriate. Then, last night we had some friends over for dinner and I was surprised to learn that their 9-year-old daughter was quite keen to learn about my robot project. So I proposed the idea of starting a Pukerua Bay robotics group.

It turns out there isn’t any national robotics group in New Zealand, nor even a local one near Wellington. When I went shopping for a domain name, “robots.org.nz” was available, so I bought it. We’ve gone national!

So, if you’re interested, you’re very welcome to follow me on this journey to build a robot…

Next: stuff begins arriving in the post…