
Change parameters


Introduction

EDIT Newest versions of RTAB-Map have most of these parameters set by default. I recommend starting with the default values, then looking at this page for fine tuning.

In this page, I will show how to speed up RTAB-Map for small environments and computers with less computing power. This example also uses only code under the BSD License, so SIFT and SURF are not used here.

The configuration referred to in this page can be downloaded here: config.ini. You must have at least RTAB-Map 0.7. You can load this configuration file in RTAB-Map with the Preferences->General settings (GUI)->"Load settings..." button, or see the screenshots below to adjust the parameters manually. Important: This configuration may use different descriptors (size/type) from your actual configuration. If you have already created a database, you must delete it first (Edit -> Delete memory) to start with a clean database, or you will get an error like this:

[ERROR] (2014-10-02 15:01:07) VWDictionary.cpp:399::rtabmap::VWDictionary::addNewWords() Descriptors (size=16) are not the same size as already added words in dictionary(size=64)

Parameters

I will show each panel from the Preferences dialog with the corresponding parameters of the configuration file (*.ini). Important parameters are identified on the screenshots.

General settings (GUI)

  1. This parameter should be true to update the 3D map.
  2. When odometry inliers between consecutive images are below this threshold, the background of the 3D map becomes yellow to warn the user that the scanned area doesn't have many discriminative features. Staying in this zone would cause odometry to lose tracking (red background).

3D Rendering

  1. Don't show multiple clouds for the same scanned area. Clouds taken from the same point of view will be filtered to keep only the most recent one. Fewer point clouds shown will increase display performance.
  • Radius of the filtering zone.
  • Angle of the filtering zone.
  2. On a computer with limited graphics resources, high decimation of the point clouds will increase display performance.

Source

  1. Increase input rate to 20/30 Hz to feed the fast odometry approach used below.

RTAB-Map settings

  1. Activate SLAM mode if you want to extend the current map. Deactivate it to only localize in a previously built map.
  2. Increasing the detection rate will increase the number of loop closures found when returning to a previous area, so more constraints are used for the map optimization and possibly a better map quality. Increasing the detection rate will also increase the map size, so more processing power is required. At 2 Hz, the visual dictionary will grow twice as fast as at 1 Hz.
  3. The data buffer size should be set to 1 to only use the most recent odometry data available.
  4. In this example, we assume that we are scanning a small area for which the real-time limit will not be reached, so we deactivate memory management.
  5. The same goes for the maximum working memory size: set it to infinite.
  6. "Publish statistics" and "Publish raw sensor data" should be true to send map data to the GUI. "Publish pdf" and "Publish likelihood" can be set to false if you don't look at loop closure hypothesis values and likelihood. The corresponding config.ini entries are sketched after this list.

Bayes filter

  1. Don't regenerate the whole prediction matrix of the Bayes filter; just update it incrementally for removed/added nodes.
  2. Use the tf-idf approach to compute the likelihood, which is faster than comparing images directly to each other (see the snippet after this list).
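
The two Bayes filter settings above might map to config.ini entries like these (names from recent RTAB-Map versions; treat them as an illustrative assumption):

```ini
# Sketch only: incremental prediction update + tf-idf likelihood.
Bayes/FullPredictionUpdate = false   # don't rebuild the whole prediction matrix on each update
Kp/TfIdfLikelihoodUsed = true        # tf-idf likelihood instead of image-to-image comparison
```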

Memory

  1. Bad signatures should never be ignored in RGB-D SLAM mode (see the snippet below).
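
A possible config.ini entry for this setting (parameter name from recent RTAB-Map versions, given as an assumption):

```ini
# Sketch only: keep bad signatures in RGB-D SLAM mode.
Mem/BadSignaturesIgnored = false
```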

Database

  1. Keep raw sensor data in the database to be able to regenerate the clouds from the resulting database.
  2. Keeping rehearsed locations is only helpful for debugging, so don't keep them.
  3. If you have enough RAM, use the database in memory to increase database access speed. Otherwise, if you are limited in RAM, set it to false. The corresponding config.ini entries are sketched after this list.
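
These database settings could look like the following in config.ini (names from recent RTAB-Map versions; an illustrative assumption, not a copy of the provided file):

```ini
# Sketch only: database-related settings discussed above.
Mem/BinDataKept = true          # keep raw sensor data to regenerate clouds later
Mem/RehearsedNodesKept = false  # rehearsed locations are only useful for debugging
DbSqlite3/InMemory = true       # database in RAM for faster access (set false if RAM is limited)
```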

Visual word

  1. Publishing visual words is only used to show features on the images in the GUI.
  2. Here we use the GFTT+BRIEF visual dictionary. This binary descriptor can be used under the BSD License. Though less descriptive than SIFT/SURF, it works well enough for small environments. On large-scale environments, SURF/SIFT is preferred. When changing the feature type, you must restart with a clean database (Edit->Delete memory).
  3. Maximum depth is set to be under the Kinect's maximum precision distance.
  4. The maximum words per image is set to 0 because GFTT already has this parameter set below. EDIT GFTT now uses this parameter to limit the number of extracted features, so set it to the value used below (400).
  5. NNDR (Nearest Neighbor Distance Ratio): Default 0.8.
  6. We use binary descriptors, so use Brute Force matching (kd-tree is only for float descriptors like SURF/SIFT). A sketch of the corresponding config.ini entries follows this list.
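
A sketch of the visual dictionary settings above, using parameter names from recent RTAB-Map versions (the enum values and the 0.7-era names may differ, so double-check the tooltips before copying these):

```ini
# Sketch only: visual word / dictionary settings discussed above.
Kp/DetectorStrategy = 6   # assumed enum value for GFTT+BRIEF; check the tooltip in your version
Kp/MaxFeatures = 400      # max words per image (per the EDIT note, this now limits GFTT features)
Kp/MaxDepth = 4.0         # example value; keep it under the Kinect's reliable depth range
Kp/NndrRatio = 0.8        # nearest neighbor distance ratio (default)
# Brute Force matching is selected in the GUI; the corresponding Kp/NNStrategy enum value
# depends on the version, so it is not hard-coded here.
```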

BRIEF

  1. Use the smallest BRIEF descriptor. On medium-sized environments, more discriminative descriptors (size 32/64) may be better for loop closure detection (better likelihood for the Bayes filter); see the snippet below.
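
In config.ini, the BRIEF descriptor size is controlled by a single parameter (name from recent RTAB-Map versions, shown as an assumption):

```ini
# Sketch only: smallest BRIEF descriptor (16 bytes); 32 or 64 would be more discriminative.
BRIEF/Bytes = 16
```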

GFTT

  1. Set a maximum of 400 features to track. Increasing this number would increase the processing time for odometry and visual dictionary updates, though also increasing the quality of odometry and of detected loop closures. EDIT This should now be set with the maximum words per image parameter above.

RGBD-SLAM

  1. Must be checked for RGB-D SLAM.
  2. We don't refine loop closure constraint transformations (computed visually), to save CPU time. While 2D ICP is useful on a robot with a 2D laser, 3D ICP suffers from problems that are difficult to detect (see the ICP page for a discussion of why 3D ICP is not used by default).
  3. When checked, the referential of the map is set according to the last node added to the map's graph, i.e., the latest map created.
  4. I disabled local loop closure detection because I use the local history approach for odometry. I've found that activating both at the same time results in lower map quality. Local loop closure detection also adds more constraints to the graph, so more time is required to optimize it (not ideal when RTAB-Map's update rate is fast). A sketch of the corresponding config.ini entries follows this list.
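
A rough config.ini sketch for this panel, using parameter names from recent RTAB-Map versions (older versions used different names for the ICP refinement and local loop closure options, so treat these as assumptions):

```ini
# Sketch only: RGBD-SLAM settings discussed above.
RGBD/Enabled = true               # RGB-D SLAM mode
Reg/Strategy = 0                  # 0 = visual transform only, no ICP refinement (assumed enum)
RGBD/OptimizeFromGraphEnd = true  # map referential follows the last node added to the graph
RGBD/ProximityByTime = false      # local loop closure detection in time disabled (assumed name)
```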

Loop closure constraint

  1. The higher this threshold, the higher the quality of the computed loop closure transformation. However, fewer loop closures will be accepted.
  2. Increasing this threshold to 2 cm (default 1 cm) doesn't seem to decrease the quality of the loop closure transform much, because more visual word correspondences are kept.
  3. We just want good visual words with reliable depth values, which are under 3 m for Kinect-like sensors.
  4. Binary descriptors are less discriminative than float descriptors, so generally there are fewer matching visual words between the images of a loop closure. Because there are fewer corresponding visual words, the rejection rate of loop closures is higher. Here, we re-extract features from both images and compare them together (without using the visual dictionary), increasing the number of correspondences between the images.
  5. To avoid adding too much overhead, we use the FAST keypoint detector instead of the GFTT detector used for the visual dictionary and odometry. The quality of FAST keypoints is lower than GFTT's, but it is much faster (under 10 ms to extract keypoints).
  6. Don't limit the number of features extracted; we want the maximum number of correspondences between the images.
  7. Binary descriptors: use Brute Force matching here too.
  8. The nearest neighbor distance ratio (NNDR) used for matching features. The matching config.ini entries are sketched below.
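
The loop closure constraint settings could correspond to entries like these (parameter names from recent RTAB-Map versions; older releases used a different prefix for them, so this is only an illustrative sketch):

```ini
# Sketch only: loop closure transform estimation settings discussed above.
Vis/MinInliers = 20                       # example value; higher = better transforms, fewer accepted loop closures
Vis/InlierDistance = 0.02                 # 2 cm inlier distance (default 1 cm)
Vis/MaxDepth = 3.0                        # keep only words with reliable depth (< 3 m for Kinect-like sensors)
RGBD/LoopClosureReextractFeatures = true  # re-extract features from both images instead of using the dictionary
Vis/MaxFeatures = 0                       # 0 = don't limit the number of re-extracted features
```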

Odometry

  1. With some tests, I've found that GFTT+BRIEF gives the highest reliability among the available binary feature detectors. The FAST/ORB detectors are faster than GFTT, but they generate fewer good features to track. The FREAK and BRIEF descriptors give similar results.
  2. The local history size is used to keep a local map of features instead of using only frame-to-frame odometry. The local map represents the 1000 most discriminative features across the past frames. A big advantage of using the local map is that it limits camera drift when the camera is not physically moving. Another advantage is that, when the camera moves into an area with fewer features and odometry loses tracking, it is easier to track back with the local map (with its 1000 features) than with only the last frame (which could have very few features, making it more difficult, or even impossible, to recover lost odometry). Local loop closure in time should also be disabled when the local history is enabled.
  3. Increasing this threshold to 2 cm (default 1 cm) doesn't seem to decrease the quality of the computed transform much, because more visual word correspondences are kept.
  4. The higher this threshold, the higher the quality of the computed transformation. However, there are more chances of losing odometry.
  5. We just want good visual words with reliable depth values, which are under 3 m for Kinect-like sensors.
  6. Binary descriptors: use Brute Force matching here too.
  7. The nearest neighbor distance ratio (NNDR) used for matching features. A sketch of the local-map odometry entries follows this list.
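
For odometry, the local-map (local history) approach roughly maps to the following recent-version parameters (the 0.7-era odometry parameters were named differently, so this is an assumption to illustrate the idea):

```ini
# Sketch only: odometry with a local feature map instead of pure frame-to-frame tracking.
Odom/Strategy = 0       # 0 = frame-to-map odometry (assumed enum), which keeps a local feature map
OdomF2M/MaxSize = 1000  # local map of the 1000 most discriminative features from past frames
```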