Robotics

Divergence Mapping, Mark II

Mark II sounds technical. So is this post. In the last post, I described how divergence mapping works. Fundamentally, divergence mapping creates a 3D image using two cameras, much like human vision. That post goes into more detail on our high-level method; this one is about hardware.

Here, you'll learn how to modify a PS3 Eye for synchronized images. (Apologies, but I'm shy on pictures of the camera coming apart.)

First, remove all of the screws from the back of the camera. They're all hidden under the round rubber pads, which pry off with a slotted screwdriver. The screws themselves are all Phillips.

Next, gently pry the back off of the camera. The board and lens are attached to the front piece of plastic; the back will pull off. The two sides are attached with plastic hooks. I found that needling a slotted screwdriver between the layers of plastic and then twisting worked well. Start at the top, and save the round bottom for last (it's tough to get open).

Divergence Mapping

One of the most important sensors on the robot is a depth sensor, used to pick out obstacles blocking the robot's path. If the obstacles were uniform, color and pattern matching would suffice, but they're massively varied. The course includes garbage cans (tall, round, green or gray), traffic cones (short, conical, orange), construction barrels (tall, cylindrical, orange), and sawhorses (they look different from every side). Sophisticated computer vision could pick them out, but a depth sensor can easily separate foreground from background.
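
Once a depthmap exists, the separation itself is almost trivial. Here's a minimal sketch of the idea, assuming the depth image arrives as a NumPy array in meters (the 3-meter cutoff is a placeholder, not our actual number):

    import numpy as np

    def segment_obstacles(depth_m, max_range_m=3.0):
        # depth_m: 2D array of depths in meters, 0 where no depth was found
        valid = depth_m > 0              # ignore pixels with no depth estimate
        near = depth_m < max_range_m     # anything close is a candidate obstacle
        return valid & near              # boolean obstacle mask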

Most teams use LIDAR. These expensive sensors send a ping of light and time the echo. Time is converted to distance, and the ping is swept 180 degrees around the robot. We can't afford LIDAR. Our depth-sensing solution is divergence mapping. The sensing works in much the same way as human vision: two cameras, a small distance apart, capture images synchronously. The images are compared, and their small differences are used to find depth. People do it subconsciously; the computer searches for keypoints (features it can identify in both images). The matching fails on uniform surfaces, but works well on textured ones. (Like grass; isn't that convenient?)
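
For the curious, here's roughly what that comparison looks like in code. Treat it as a minimal sketch using OpenCV's block matcher on an already-rectified grayscale pair; the filenames and parameters are placeholders, not our settings:

    import cv2

    # Load a rectified left/right pair (placeholder filenames).
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block matching slides a small window from the left image along the
    # right image; it fails on uniform surfaces and thrives on texture.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right)

    # Bigger disparity means closer. With baseline b and focal length f,
    # depth = f * b / disparity.
    vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
    cv2.imwrite("disparity.png", vis.astype("uint8"))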

A depthmap, visualized.

The depthmap can only be generated when the images match closely. That means the cameras need to be fixed relative to each other, and the images need to be taken simultaneously.
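
On the software side, the closest you can get to simultaneous capture is grabbing both frames back to back. A minimal sketch with OpenCV, assuming the two cameras show up as devices 0 and 1 (those indices are an assumption):

    import cv2

    cam_left = cv2.VideoCapture(0)    # device indices are assumptions
    cam_right = cv2.VideoCapture(1)

    # grab() latches a frame on each camera before either is decoded,
    # keeping the two exposures as close together as software allows.
    cam_left.grab()
    cam_right.grab()

    ok_l, frame_left = cam_left.retrieve()
    ok_r, frame_right = cam_right.retrieve()

    if ok_l and ok_r:
        cv2.imwrite("left.png", frame_left)
        cv2.imwrite("right.png", frame_right)

    cam_left.release()
    cam_right.release()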

Approaches

Now that you know what the IGVC is about, I'll go into some detail about our high-level approach. The challenge provides some obvious requirements:

  • A reference frame relative to earth to navigate to waypoints; GPS provides position and compass provides heading
  • Line detection to stay within lanes; a color camera suffices
  • Obstacle detection; a 2D rangefinder is the logical choice
  • Precise measurement of velocity; provided by encoders, accelerometers, and gyros (a sketch of the encoder math follows this list)
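
To make that last bullet concrete, here's a minimal sketch of getting velocity out of wheel encoders on a differential-drive base. Every constant below is a placeholder, not our hardware:

    import math

    TICKS_PER_REV = 1024      # encoder resolution (placeholder)
    WHEEL_DIAMETER_M = 0.25   # wheel diameter in meters (placeholder)
    TRACK_WIDTH_M = 0.60      # distance between drive wheels (placeholder)

    def wheel_velocities(d_ticks_left, d_ticks_right, dt):
        # Linear and angular velocity from ticks accumulated over dt seconds.
        m_per_tick = math.pi * WHEEL_DIAMETER_M / TICKS_PER_REV
        v_left = d_ticks_left * m_per_tick / dt
        v_right = d_ticks_right * m_per_tick / dt
        v = (v_left + v_right) / 2.0                  # forward speed, m/s
        omega = (v_right - v_left) / TRACK_WIDTH_M    # turn rate, rad/s
        return v, omega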

I'll go through each requirement in a bit of depth, describing typical solutions and design constraints.

An absolute reference tells the robot exactly where in the world it is. GPS and compass provide that data. Unfortunately, GPS is only accurate to about 6', so it isn't useful for local navigation. A 2' difference in position is the difference between hitting an obstacle and avoiding it. The GPS is useful for providing long-range direction: the goal is 60' southwest.
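
That long-range direction is easy to compute from two GPS fixes. A minimal sketch, using a flat-earth approximation that's plenty accurate at course scale (the math is generic, not pulled from our code):

    import math

    EARTH_RADIUS_M = 6371000.0

    def range_and_bearing(lat1, lon1, lat2, lon2):
        # Distance in meters and bearing in degrees clockwise from north,
        # from one GPS fix to another, over short distances.
        lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
        dx = (lon2 - lon1) * math.cos((lat1 + lat2) / 2.0) * EARTH_RADIUS_M  # east
        dy = (lat2 - lat1) * EARTH_RADIUS_M                                  # north
        distance = math.hypot(dx, dy)
        bearing = math.degrees(math.atan2(dx, dy)) % 360.0
        return distance, bearing

    # Subtract the compass heading from the bearing to get how far to turn.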

Most teams improve their GPS accuracy with exotic antennas and correction services. (The speed of light varies through the atmosphere, distorting the measured travel time and therefore the computed position; a correction signal supplies the difference between true and measured time-of-flight, improving accuracy.) Although these units offer remarkable resolution (within 6"), they are prohibitively expensive ($20,000).
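
To see why the timing matters so much: a GPS range is just travel time multiplied by the speed of light, so nanoseconds turn into meters. A tiny worked example:

    SPEED_OF_LIGHT_M_PER_S = 299792458.0

    def range_error_m(timing_error_s):
        # Any error in the measured travel time becomes a distance error.
        return timing_error_s * SPEED_OF_LIGHT_M_PER_S

    print(range_error_m(20e-9))   # 20 ns of error is about 6 m (~20 feet)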

Lafayette's team, named Team Terminus, will use an inexpensive GPS receiver, accurate to roughly 6'. It costs less than $100. (Cost, by the way, will be a persistent theme. Most IGVC robots run $20,000 to $80,000; Lafayette budgeted Terminus $6000.)

IGVC Introduction

The robot, in profile.
Lookit it! It's our robot!

This semester, I was invited to work on a Mechanical Engineering Senior Project. (And yes, I need all of those capital letters.) The team is, with the exception of yours truly, entirely composed of MechE's. I was invited when the team leader spoke with Helen, who in turn suggested that I would be useful, what with my background in FIRST Robotics and Linux. (So many capital letters.)

Each year, the Mechanical Engineering department sponsors a half-dozen senior projects. Each gives a team of engineers a capstone challenge - SAE Aero Design, Formula Car, and many others. Most of these teams have been around for years; the one I worked on was brand-new.

IGVC. That's the competition. The Intelligent Ground Vehicle Competition challenges teams to design and build robots capable of autonomously navigating a course. Two courses are provided: basic and advanced.