Visual Odometry Based on Attention and LSTM

Abstract: In recent years, estimating the camera's pose from visual information to localize unmanned vehicles has become a research hotspot, and visual odometry is a key component of this task. Traditional visual odometry relies on a complex pipeline of feature extraction, feature matching, and back-end optimization, which makes it difficult to reach an optimal solution. This paper therefore proposes a visual odometry method that combines attention and a long short-term memory (LSTM) network: a convolutional network enhanced by an attention mechanism extracts motion features from inter-frame changes, and an LSTM then models the temporal sequence, so that the model takes a sequence of RGB images as input and outputs poses end to end. Experiments were conducted on the public KITTI autonomous-driving dataset and compared with other algorithms. The results show that the method's pose-estimation error is lower than that of other monocular algorithms, and qualitative analysis indicates good generalization ability.
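
The paper itself provides no implementation, but the pipeline the abstract describes (an attention-enhanced CNN extracting motion features from consecutive frame pairs, followed by an LSTM over the feature sequence, with end-to-end 6-DoF pose output) can be sketched as follows. This is a minimal sketch assuming a PyTorch implementation; the attention variant (squeeze-and-excitation style channel attention), channel counts, hidden sizes, and all module names are illustrative assumptions, not the authors' actual network.

```python
# Minimal sketch of the architecture described in the abstract.
# Assumption: PyTorch; all hyperparameters below are illustrative.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (one common choice)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)  # reweight feature channels

class AttentionVO(nn.Module):
    """Attention-enhanced CNN extracts motion features from stacked frame
    pairs; an LSTM models the temporal sequence; a linear head outputs a
    6-DoF pose (3 translation + 3 rotation) per frame pair."""
    def __init__(self, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            ChannelAttention(32),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            ChannelAttention(64),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(128, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 6)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W) RGB sequence
        b, t = frames.shape[:2]
        # stack each consecutive frame pair channel-wise: (b, t-1, 6, H, W)
        pairs = torch.cat([frames[:, :-1], frames[:, 1:]], dim=2)
        feats = self.encoder(pairs.flatten(0, 1)).flatten(1)  # per-pair motion features
        out, _ = self.lstm(feats.view(b, t - 1, -1))          # temporal modeling
        return self.head(out)                                 # (b, t-1, 6) poses

poses = AttentionVO()(torch.randn(2, 5, 3, 64, 64))  # -> torch.Size([2, 4, 6])
```

Stacking consecutive frames channel-wise so the CNN observes inter-frame change directly follows the common DeepVO-style design for learned monocular odometry; the paper's actual feature extractor and training losses may differ.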
