Autonomous Navigation

Software kernels, intelligent algorithms, perception systems, and more

It can be argued that the "holy grail" of unmanned systems is autonomous operation; that is, a robot that can complete tasks without direct control by a human. Autonomous operation would fundamentally revolutionize our way of life, with robots finding widespread application in areas far beyond bomb-defusing and vacuum cleaning.

One of the most fundamental autonomous behaviors is navigation: getting from point A to point B without human intervention. While this may sound very simple (follow GPS breadcrumbs, right? Ha!) it is actually a very difficult problem. In real-world applications, one must contend with a variety of terrain, avoid obstacles, maintain rollover stability, manage power, and much more.

Solving the autonomous navigation problem will require substantial engineering effort and technical breakthroughs in sensing and control, among other areas. Quantum Signal is developing math-based technologies to address these challenges and make autonomy a reality. Some of our current work includes:

  • Visual Odometry – a technology for estimating robot position and orientation based on analysis of camera (vision) data
  • Terrain Sensing – technologies for autonomously sensing and analyzing soil and terrain characteristics
  • Mobility Prediction – leveraging sensed soil and terrain information and new, advanced algorithms to analyze robotic traversability

These capabilities are being integrated on a variety of robotic platforms to achieve real-world autonomous navigation. For more information, please contact us.

Visual Odometry

A longstanding difficulty in robotics is imbuing machines with vision capabilities similar to those of human beings. The ability to recognize objects, navigate through environments, and perform related tasks would greatly increase the breadth of applications for robots and robotic systems. While the dream of robust, human-like robotic vision has not yet been realized, much progress has been made and specific challenges have been overcome.

Visual odometry (VO) is the process of estimating location based on analysis of data from vision sensors. VO helps unmanned systems track changes in location in environments where GPS or other localization methods may be unreliable. In monocular visual odometry, data from a single camera is used for analysis. Monocular VO is an area of increasing interest in military circles because it can be used onboard UGVs with low-cost, non-emissive cameras.

QS has developed proprietary, cutting-edge VO algorithms and software that enable real-time estimation of position onboard manned or unmanned vehicles. Targeted specifically at military UGVs, QS VO operates in real time in a variety of outdoor and indoor environments from a monocular video feed. The key component of this system is a novel set of proprietary algorithms developed by QS, termed the "Fast Scene Comparison Framework." The framework can compare any two video frames (not necessarily consecutive) and find the common scene elements independent of their positions and scale in their respective frames. By leveraging this technology, QS VO overcomes many of the issues that plague other VO algorithms and performs robustly under difficult circumstances.
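The Fast Scene Comparison Framework itself is proprietary, but the core bookkeeping of any frame-to-frame VO pipeline can be sketched generically: estimate the relative motion implied by matched scene elements, then chain those estimates into a running pose. The example below is an illustrative simplification, not QS's algorithm; it assumes feature correspondences between two frames have already been found, and recovers a planar rigid-body motion via a least-squares (Kabsch/Procrustes) fit.

```python
import numpy as np

def estimate_rigid_transform(p, q):
    """Estimate a 2D rotation R and translation t such that q ~ p @ R.T + t,
    given matched point sets p and q of shape (N, 2)."""
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    H = (p - cp).T @ (q - cq)               # 2x2 cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection solutions
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

def chain(pose_R, pose_t, step_R, step_t):
    """Accumulate a frame-to-frame motion estimate into the global pose.
    This is simple dead reckoning: error drifts without loop closure."""
    return pose_R @ step_R, pose_R @ step_t + pose_t
```

A full monocular system adds feature detection, matching, outlier rejection, and 3D geometry (essential-matrix or bundle-adjustment machinery); the planar fit above only illustrates the motion-estimation step.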

If your application could benefit from Visual Odometry, please contact us. You won't regret it!

Terrain Sensing

Terrain sensing is an important technology if unmanned ground vehicles (UGVs) are to autonomously navigate through difficult and varied outdoor terrain. One key limitation on autonomous terrain analysis arises from current approaches to sensor data processing. A significant amount of research in the robotics community has been devoted to interpreting data from common UGV sensors. Most current approaches to non-geometric terrain property analysis rely on classification: analyzing remotely observable "features" (such as color, texture, and/or elevation) to assign a terrain region to a pre-specified semantic class, such as "rock," "loam," or "asphalt." The purpose of this STTR research program is to develop a method for terrain characterization for UGVs operating in outdoor, unstructured environments. The methods generalize locally-sensed physical terrain features to remotely-sensed data to infer properties about UGV mobility through its surroundings.

Figure: sand, beach grass, and rock (raw image and analyzed result)

In the current concept, "local" (i.e., proprioceptive) sensors measure signals related to physical UGV-terrain interaction, including wheel torque, sinkage, angular velocity, and/or vibration signatures. Local sensor feedback is analyzed to identify terrain patches that possess distinct mobility characteristics, and visual features associated with these patches are correlated with remote data, thereby creating a "mobility map" of the surrounding environment. In contrast to classical terrain classification methods, this map will not delineate semantically-labeled terrain boundaries (e.g., "rock," "loam"), but rather mobility-labeled terrain boundaries (e.g., "easily traversable terrain," "poorly traversable terrain"). The technology is being developed into an on-board capability package for military UGVs.

This work is a collaboration between researchers at the Massachusetts Institute of Technology's Robotics Mobility Group, US Army ERDC, and Quantum Signal, LLC. For more information, please contact us.

Figure: concept diagram

Mobility Prediction

The unmanned ground vehicle community has substantial interest in understanding and predicting the mobility of military vehicles in natural terrain. For example, future US Army operations (under the Future Combat Systems program) will employ small (i.e., sub-500 kg) autonomous or semi-autonomous UGVs in both cross-country and urban environments, and a fundamental requirement of these UGVs is to quickly and robustly predict their ability to negotiate terrain regions and surmount obstacles.


This mobility prediction capability is critical to the successful deployment of UGVs that can operate effectively in challenging terrain with minimal or no human supervision. QS and the MIT Robotics Mobility Laboratory are developing a robust, efficient method for UGV mobility prediction on behalf of US Army ERDC. This method exploits recent advances in statistical simulation to yield a fundamentally new approach to mobility prediction for small UGVs. By coupling rigorous statistical techniques with physics-based UGV and terrain models, the methods will yield accurate predictions of mobility in general 3D terrain without relying on idealized obstacle "primitives". In essence, the work will allow a UGV, or its human supervisor, to answer the question: what paths can I take through this terrain that will not get me stuck?
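The flavor of a statistical-simulation approach can be illustrated with a deliberately tiny example. The sketch below is not the QS/MIT/ERDC method: it replaces the physics-based vehicle and terrain models with a single quasi-static slope-climbing criterion (a wheeled vehicle can climb when tan(slope) is less than the ground friction coefficient), treats the friction coefficient as an uncertain Gaussian parameter, and Monte Carlo samples it to estimate a probability of successful traversal.

```python
import math
import random

def traversal_probability(slope_deg, mu_mean, mu_std, n=20000, seed=0):
    """Monte Carlo estimate of the probability a wheeled UGV can climb a
    slope, under the simple quasi-static criterion tan(slope) < mu, where
    the friction coefficient mu is sampled from a Gaussian to represent
    uncertainty in terrain properties."""
    rng = random.Random(seed)
    required = math.tan(math.radians(slope_deg))
    successes = sum(1 for _ in range(n)
                    if rng.gauss(mu_mean, mu_std) > required)
    return successes / n
```

A real system would sample many more uncertain parameters (soil strength, geometry, vehicle state) and run a full dynamic model per sample; the statistical machinery of sampling, simulating, and aggregating success rates is the same in spirit.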

