International Journal on Advanced Science, Engineering and Information Technology, Vol. 8 (2018) No. 4-2: Special Issue on Empowering the Nation via 4IR (The Fourth Industrial Revolution), pages: 1423-1430. Chief Editor: Khairuddin Omar | Editorial Boards: Shahnorbanun Sahran Hassan, Nor Samsiah Sani, Heuiseok Lim & Danial Hoosyar. DOI:10.18517/ijaseit.8.4-2.6834

Multiple Descriptors for Visual Odometry Trajectory Estimation

Mohammed Salameh, Azizi Abdullah, Shahnorbanun Sahran

Abstract

Visual Simultaneous Localization and Mapping (VSLAM) systems are widely used in mobile robots for autonomous navigation. One important part of VSLAM is trajectory estimation, a component of the localisation task in which a robot must estimate the camera pose in order to precisely align the images of visited locations. The poses are estimated using Visual Odometry Trajectory Estimation (VOTE) by extracting distinctive and trackable keypoints from the sequence of images captured at locations visited by the robot. One of the most popular solutions for visual trajectory estimation is arguably the PnP-RANSAC function, a common approach that uses a feature descriptor such as SURF to extract keypoints and match them in pairs based on their descriptors. However, sensor noise and highly fluctuating scenes are an inevitable shortcoming that reduces the performance of a single visual descriptor in extracting distinctive and trackable keypoints. This paper therefore proposes a method that uses a random sampling scheme to combine the results of multiple keypoint descriptors. The scheme extracts the best keypoints from the SIFT, SURF and ORB keypoint detectors based on their keypoint response values. These keypoints are then combined and refined based on Euclidean distances, and the resulting keypoints with their corresponding visual descriptors are used in VOTE, which reduces the trajectory estimation errors. The proposed algorithm is evaluated on the widely used KITTI benchmark dataset, from which the three longest sequences are selected: 00 with 4541 images, 02 with 2761 images and 05 with 1101 images. In the trajectory estimation experiment, the proposed algorithm reduces the trajectory error by 44%, 8% and 13% on KITTI sequences 00, 02 and 05 respectively, based on translational and rotational errors. The proposed algorithm also succeeds in reducing the number of keypoints used in VOTE when combined with the state-of-the-art RTAB-Map system.
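To illustrate the kind of combination scheme the abstract describes, the following is a minimal sketch, not the authors' implementation: keypoints from several detectors, each carrying an (x, y) position and a response value, are pooled, ranked by response, and near-duplicates within a Euclidean distance threshold are discarded. The function name, the distance threshold, and the toy input values are illustrative assumptions; in practice, response values from different detectors (e.g. SIFT vs. ORB) are on different scales and would likely need normalisation before ranking.

```python
import math

def combine_keypoints(detector_outputs, dist_thresh=3.0, max_keypoints=500):
    """Pool keypoints from several detectors and keep the strongest,
    discarding near-duplicates closer than dist_thresh pixels.

    detector_outputs: dict mapping detector name -> list of
    (x, y, response) tuples, e.g. as extracted from OpenCV keypoints.
    Returns a list of (x, y, response, detector) tuples.
    NOTE: illustrative sketch only; the threshold and ranking are
    assumptions, not the paper's exact parameters.
    """
    # Pool all keypoints, remembering which detector produced each one.
    pooled = [(x, y, r, name)
              for name, kps in detector_outputs.items()
              for (x, y, r) in kps]
    # Strongest responses first, so the best keypoint at a location wins.
    pooled.sort(key=lambda kp: kp[2], reverse=True)

    kept = []
    for x, y, r, name in pooled:
        # Refinement step: reject a keypoint that lies too close
        # (in Euclidean distance) to one already kept.
        if all(math.hypot(x - kx, y - ky) >= dist_thresh
               for kx, ky, _, _ in kept):
            kept.append((x, y, r, name))
        if len(kept) == max_keypoints:
            break
    return kept

# Toy example: two detectors report partially overlapping keypoints.
outputs = {
    "SIFT": [(10.0, 10.0, 0.9), (50.0, 40.0, 0.4)],
    "ORB":  [(11.0, 10.5, 0.7), (120.0, 80.0, 0.6)],
}
combined = combine_keypoints(outputs)
# The ORB point at (11.0, 10.5) is dropped as a near-duplicate of the
# stronger SIFT point at (10.0, 10.0); three keypoints survive.
```

The surviving keypoints, together with the descriptors computed by their originating detectors, would then be fed to the matching and PnP-RANSAC stages of the trajectory estimation pipeline.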

Keywords:

Visual Odometry, Trajectory Estimation, Structure from Motion, RANSAC, Selection scheme, Feature Matching.

