Using MEMS and Visual Odometry in a GPS-Denied Environment

In a GPS-denied environment, how can you use visual odometry with a single camera?

*****
I’d like to point out that there are different mechanisms for robotic localization, each serving a different purpose; this post covers only one approach and is by no means the only answer.
*******

I will introduce you to localization using visual odometry with a single camera.
One could argue that in today's robotics one of the most common localization systems is Simultaneous Localization and Mapping (SLAM); we can find it in some advanced robotic vacuums, where it is used to avoid obstacles. However, SLAM requires a number of image-processing steps, which limits real-time data collection.

What is visual odometry? This technique allows a robot to determine its position and orientation by analyzing the images from its camera. When one camera is used, it is called Monocular Visual Odometry. When two or more cameras are used, it is known as Stereo Visual Odometry.

Fun Fact: 
The odometer was invented by the ancient Greeks, and the word literally means "route measure."

Traditional odometry methods use rotary encoders attached to the robot's wheels to measure rotations. Keep in mind that motion estimation from encoders alone is unreliable due to slippage errors.
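
For intuition, here is a minimal sketch of classic differential-drive dead reckoning from encoder ticks; the constants (wheel radius, wheel base, ticks per revolution) are made-up values for illustration, not from the referenced paper. Any wheel slip corrupts the tick counts, and the resulting pose error grows with every update:

```python
import math

# Illustrative robot constants (assumptions, not from the paper).
TICKS_PER_REV = 1024   # encoder ticks per wheel revolution
WHEEL_RADIUS = 0.05    # wheel radius in meters
WHEEL_BASE = 0.30      # distance between the two wheels in meters
METERS_PER_TICK = 2 * math.pi * WHEEL_RADIUS / TICKS_PER_REV

def update_pose(x, y, theta, left_ticks, right_ticks):
    """Dead-reckon a new pose (x, y, heading) from incremental encoder ticks.

    Slippage makes the ticks over-report true travel, and because each
    update builds on the last, the error accumulates without bound.
    """
    d_left = left_ticks * METERS_PER_TICK
    d_right = right_ticks * METERS_PER_TICK
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / WHEEL_BASE
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta
```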

Research results indicate that measurements from a single downward-pointing camera can provide useful localization information. The data would not be considered high-precision, but the results are accurate to a modest level.

When a GPS-denied environment is presented, vision-based sensors are a common alternative, and combined with an Inertial Measurement Unit (IMU) they help reset the errors generated by wheel drift. The Mars Exploration Rovers extract visual information from pairs of stereo images [1], [2]. Stereo-based image processing, however, requires CPU-intensive pre-processing that limits the real-time data rate. As an alternative, this post presents a Microelectromechanical systems (MEMS) based navigation system.

In contrast to stereo vision, using a single camera reduces computational requirements by eliminating the need for stereo-matching algorithms, but it provides only two-dimensional translational information.

The single downward-pointing camera is intended to replace or augment encoder-based odometry on steered robotic platforms. For visual odometry measurements, a frame-capture device is required, and to ensure proper calibration, the distance from the camera optics to the tracked surface must be held constant.
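
Because the camera-to-ground distance is fixed, a single scale factor converts pixel displacements into metric ones. Here is a minimal sketch of that conversion under a simple pinhole-camera assumption; the height and focal-length numbers are illustrative only:

```python
# Ground sample distance under a pinhole model: meters per pixel.
# The values below are illustrative assumptions, not from the paper.
CAMERA_HEIGHT_M = 0.20    # fixed distance from optics to the ground
FOCAL_LENGTH_PX = 800.0   # focal length in pixels, from calibration

METERS_PER_PIXEL = CAMERA_HEIGHT_M / FOCAL_LENGTH_PX

def pixels_to_meters(dx_px, dy_px):
    """Convert an image-plane displacement in pixels to ground meters."""
    return dx_px * METERS_PER_PIXEL, dy_px * METERS_PER_PIXEL
```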

***
Note: Feature tracking can be achieved using ground images from a variety of surfaces.
Expect gray-scale images.
***
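
As a concrete starting point, here is a minimal feature-detection sketch using OpenCV's Shi-Tomasi corner detector on a gray-scale ground frame; the file name and parameter values are assumptions for illustration:

```python
import cv2

# Load a ground image as gray-scale (the file name is a placeholder).
frame = cv2.imread("ground_frame.png", cv2.IMREAD_GRAYSCALE)

# Shi-Tomasi corners track well on textured surfaces such as carpet,
# concrete, or tile; the parameter values here are illustrative.
features = cv2.goodFeaturesToTrack(
    frame,
    maxCorners=200,     # upper bound on the number of tracked features
    qualityLevel=0.01,  # minimum accepted corner quality
    minDistance=8,      # minimum pixel spacing between corners
)
print("tracking", 0 if features is None else len(features), "features")
```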

To provide improved localization and navigation with a low-cost MEMS-based inertial sensor, first use visual odometry to determine the path traveled. A standard linear Kalman filter with position and velocity as states and two measurements is implemented; a Kalman filter routine is available in the OpenCV library.
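
Here is a minimal sketch of such a filter built on OpenCV's cv2.KalmanFilter, with position and velocity as the four states and the two optical-flow velocity components as measurements; the timestep and noise values are my assumptions, not the paper's tuning:

```python
import cv2
import numpy as np

DT = 0.1  # seconds between frames (illustrative value)

# State: [x, y, vx, vy]; measurement: [vx, vy] from optical flow.
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, DT, 0],
                                [0, 1, 0, DT],
                                [0, 0, 1,  0],
                                [0, 0, 0,  1]], dtype=np.float32)
kf.measurementMatrix = np.array([[0, 0, 1, 0],
                                 [0, 0, 0, 1]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-2
kf.errorCovPost = np.eye(4, dtype=np.float32)  # initial state uncertainty

def filter_step(vx_meas, vy_meas):
    """Predict, then correct with a velocity measurement; return (x, y)."""
    kf.predict()
    state = kf.correct(np.array([[vx_meas], [vy_meas]], dtype=np.float32))
    return float(state[0, 0]), float(state[1, 0])
```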

Now convert the optical flow measurements to velocity: 

Velocity = Optical flow displacement / Time between acquired frames

*****
Optical Flow: Use the tracked features to perform optical-flow calculations between two consecutive image frames.
******
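
A minimal sketch of this step with OpenCV's pyramidal Lucas-Kanade tracker, averaging the pixel displacement of the tracked features and converting it to a ground velocity; the scale factor comes from the pinhole sketch earlier, and all parameter choices are assumptions:

```python
import cv2
import numpy as np

def flow_velocity(prev_gray, curr_gray, features, dt, meters_per_pixel):
    """Track features between two gray-scale frames and return the mean
    ground velocity (vx, vy) in m/s, or None if flow is lost."""
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, features, None)
    good = status.ravel() == 1
    if not good.any():
        return None  # flow lost, e.g. an over- or under-exposed frame
    # Mean displacement of successfully tracked features, in pixels.
    disp = (curr_pts[good] - features[good]).reshape(-1, 2).mean(axis=0)
    # Displacement divided by the frame interval gives velocity;
    # the pinhole scale factor converts pixels to meters.
    vx, vy = disp * meters_per_pixel / dt
    return float(vx), float(vy)
```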

Assuming that an inertial unit is not being used, this visual odometry system senses translation within the image frame; it is NOT used to sense rotations.

While optical-flow measurements are useful for localization, potential designers should expect a few challenges, such as:

  • Valid optical-flow measurements are not available from the hardware at a constant rate.
  • Loss of optical-flow measurements will occur, most likely because of over- or under-exposed image frames (there are ways to reduce this loss).
  • Estimating position by simply adding up visual odometry displacement measurements accumulates error.
    A better technique is to convert each displacement to an instantaneous velocity measurement and use it within an estimation algorithm, as sketched after this list.
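
To make that last point concrete, here is a hedged sketch of the per-frame loop that feeds optical-flow velocity into the Kalman filter instead of naively summing displacements. It reuses the illustrative flow_velocity, filter_step, and METERS_PER_PIXEL sketches above, and grab_frame() is a hypothetical capture helper:

```python
# Per-frame update loop (illustrative; grab_frame() is hypothetical
# and is assumed to return a gray-scale image and its timestamp).
prev_gray, t_prev = grab_frame()
while True:
    curr_gray, t_curr = grab_frame()
    features = cv2.goodFeaturesToTrack(prev_gray, 200, 0.01, 8)
    if features is not None:
        vel = flow_velocity(prev_gray, curr_gray, features,
                            t_curr - t_prev, METERS_PER_PIXEL)
        if vel is not None:            # skip dropped or over-exposed frames
            x, y = filter_step(*vel)   # Kalman predict + velocity correct
    prev_gray, t_prev = curr_gray, t_curr
```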

As with any visual-processing algorithm, the computational intensity of the routines is always a concern.

If the system is only capable of a one-frame-per-second acquisition rate, then the robot must not travel beyond the total field of view of the camera during that one-second time frame.
One way to approach this problem is to move the camera farther from the target: the allowable rate of displacement increases, but at the cost of reduced resolution and hence fewer tracked features.
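
As a rough worked example of that trade-off (every number below is an illustrative assumption): under a pinhole model the ground footprint grows linearly with camera height, which in turn caps the speed the system can tolerate at a given frame rate:

```python
import math

# Illustrative numbers, not from the paper.
CAMERA_HEIGHT_M = 0.20   # distance from the optics to the ground
FOV_DEG = 60.0           # horizontal field of view of the lens
FPS = 1.0                # acquisition rate in frames per second
OVERLAP = 0.5            # fraction of the frame that must stay in view

# Ground footprint of the image along the direction of travel.
footprint_m = 2 * CAMERA_HEIGHT_M * math.tan(math.radians(FOV_DEG) / 2)

# Maximum speed that keeps enough overlap between consecutive frames.
max_speed = footprint_m * (1 - OVERLAP) * FPS
print(f"footprint {footprint_m:.3f} m -> max speed {max_speed:.3f} m/s")
```

With these example numbers, even a generous 60-degree lens mounted 20 cm off the ground only tolerates roughly 0.1 m/s at one frame per second, which is why frame rate and mounting height must be chosen together.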

***
An HD camera might exhibit better performance.
***

To recap, the technique uses feature detection and optical-flow measurements to provide sensor information to localization algorithms.

The application is specifically targeted to robotic platforms in GPS-denied environments.

*********

Reference:

NASA:
Localization Using Visual Odometry and a Single Downward-Pointing Camera

Please read the paper for further information on methodology, definitions, and results.
