CN112461228B - A secondary loop-closing detection and positioning method based on IMU and vision in a similar environment - Google Patents


Info

Publication number
CN112461228B
CN112461228B (application CN202011206955.4A)
Authority
CN
China
Prior art keywords
imu
image
loop
pose
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011206955.4A
Other languages
Chinese (zh)
Other versions
CN112461228A (en)
Inventor
吕婧
邹霞
岳定春
应旻
涂良辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanchang Hangkong University
Original Assignee
Nanchang Hangkong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanchang Hangkong University filed Critical Nanchang Hangkong University
Priority to CN202011206955.4A priority Critical patent/CN112461228B/en
Publication of CN112461228A publication Critical patent/CN112461228A/en
Application granted granted Critical
Publication of CN112461228B publication Critical patent/CN112461228B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/005: Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/10: Navigation by using measurements of speed or acceleration
    • G01C21/12: Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16: Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165: Inertial navigation combined with non-inertial navigation instruments
    • G01C21/20: Instruments for performing navigational calculations
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33: Image registration using feature-based methods
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a secondary loop-closure detection and positioning method based on an IMU and vision for similar environments, comprising the following steps: step one, calibrating and synchronizing the parameters of a binocular camera and an IMU; step two, extracting and matching image features; step three, pose estimation and movement-trajectory formation; step four, loop-closure detection; step five, a secondary loop-closure detection mechanism; and step six, repositioning. The method applies a coarse comparison constraint to the pose of the current image frame using the IMU pose information: the IMU's current position and heading are compared with those of the primary closed-loop image, and a direction-consistency pre-judgment is made. This prevents false loop closures in which the surrounding scenes look alike, so that image similarity alone would indicate a closed loop even though, according to the IMU's current position, the vehicle cannot possibly be at the same place. The IMU direction pre-judgment thus guards against positioning errors caused by similar images in similar environments. After repositioning, the updated high-precision pose is used to correct the IMU's accumulated drift error, improving the robustness of the secondary loop-closure detection and positioning method.

Description

A secondary loop-closure detection and positioning method based on an IMU and vision in a similar environment

Technical Field

The invention relates to secondary loop-closure detection and positioning methods, and in particular to a secondary loop-closure detection and positioning method based on an IMU and vision in similar environments.

Background

UAVs are developing toward intelligent aircraft and intelligent systems, and autonomous positioning and navigation capability determines a UAV's level of intelligence. Fields such as military reconnaissance, power-line inspection, geological exploration and forest fire prevention place urgent demands on autonomous positioning, especially since the emergence of satellite-navigation jamming and spoofing: UAVs need fully autonomous positioning and navigation over large scenes, free of any dependence on satellite navigation signals. SLAM (Simultaneous Localization and Mapping) combining vision with an inertial measurement unit (IMU) provides new technical support and an effective route to autonomous positioning and navigation without GPS. Because the two sensors are complementary, the IMU can correct and compensate errors in the visual positioning information, improving the UAV's real-time localization. Over time, however, the IMU itself accumulates drift error during long operation; without effective countermeasures, the UAV will still eventually lose its position fix.
At the same time, in combined vision-IMU positioning, existing back-end loop-closure detection methods optimize the poses of earlier related image frames by judging whether the current position is a previously visited area, and they are prone to misjudgment. Real environments contain many similar scenes, for example groups of similar-looking buildings, rolling sand dunes or dense forest, and in a GPS-denied setting the large number of similar images makes the loop-closure decision difficult: an actual loop may be rejected, or a non-loop accepted, producing a wrong loop closure and a positioning failure. In GPS-denied similar environments, the key problem that urgently needs solving is therefore how to prevent the false loop closures, and the resulting positioning failures, caused by large numbers of similar scene images.

Summary of the Invention

The technical problem to be solved by the present invention is to provide a secondary loop-closure detection and positioning method based on an IMU and vision for similar environments, capable of correcting both the accumulated visual error and the accumulated IMU drift error in GPS-denied similar environments, and of closing loops correctly.

To achieve the above object, the present invention provides the following technical solution:

A secondary loop-closure detection and positioning method based on an IMU and vision in a similar environment comprises the following steps:

Step one, binocular camera and IMU parameter calibration and synchronization:

Unify the timestamps of the binocular camera and the IMU, and calibrate their parameters, including the camera start-up delay, the IMU start-up delay, and the accelerometer and gyroscope biases (the bias is the non-zero initial output when the input is zero, i.e. when the IMU is stationary). Synchronize the camera and IMU clocks, and read the images acquired by the binocular camera together with the IMU pose information.
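
The synchronization in step one can be sketched as follows. This is an illustrative routine under assumed, made-up calibration values (`cam_delay`, `imu_delay` are hypothetical parameters); the patent does not specify an algorithm, so nearest-neighbour timestamp association is used here as one plausible choice:

```python
from bisect import bisect_left

def align_imu_to_image(image_stamps, imu_stamps, imu_delay, cam_delay):
    """For each image timestamp, find the index of the nearest IMU sample
    after removing each sensor's calibrated start-up delay."""
    pairs = []
    for t_img in image_stamps:
        t = t_img - cam_delay + imu_delay  # shift into the IMU clock
        i = bisect_left(imu_stamps, t)
        # pick whichever neighbour is closer in time
        if i > 0 and (i == len(imu_stamps) or t - imu_stamps[i - 1] < imu_stamps[i] - t):
            i -= 1
        pairs.append(i)
    return pairs
```

With a 200 Hz IMU and a 20 Hz camera this simply tags each image with the closest IMU reading; a real system would additionally interpolate between the two bracketing IMU samples.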

Step two, image feature extraction and image matching:

Detect corner points in the images acquired by the binocular camera as feature points, and describe the image patch around each corner to form a feature descriptor. Match the descriptors of two consecutive images using the Hamming distance, and screen the matched feature point pairs to reject mismatches.
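
A minimal sketch of the Hamming-distance matching with mismatch screening (the twice-the-minimum-distance rule appears in claim 2). The descriptors here are arbitrary byte strings standing in for real binary corner descriptors such as ORB:

```python
def hamming(d1: bytes, d2: bytes) -> int:
    """Bit-level Hamming distance between two binary descriptors."""
    return sum(bin(a ^ b).count("1") for a, b in zip(d1, d2))

def match_descriptors(desc_a, desc_b):
    """Greedy nearest-neighbour matching by Hamming distance, then discard
    matches whose distance exceeds twice the best (minimum) distance,
    the screening condition stated in claim 2."""
    raw = []
    for i, da in enumerate(desc_a):
        j, d = min(((j, hamming(da, db)) for j, db in enumerate(desc_b)),
                   key=lambda pair: pair[1])
        raw.append((i, j, d))
    d_min = min(d for _, _, d in raw)
    # max(d_min, 1) avoids a zero threshold when a perfect match exists
    return [(i, j) for i, j, d in raw if d <= 2 * max(d_min, 1)]
```

In practice one would also enforce mutual (cross-check) consistency between the two images before accepting a pair.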

Step three, pose estimation and trajectory formation:

From the 3D information of the matched groups of feature point pairs, use the least-squares method to compute the rotation matrix R and translation vector t that minimize the sum of squared errors over the matched pairs, obtaining the UAV's pose and the trajectory of the moving map points.

When the camera moves too fast, blurring the image and losing the positioning information, obtain the current binocular camera pose by integrating the IMU pose information under the unified timestamp, and update the associated map points.
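
The least-squares pose problem of step three has a standard closed-form solution (the Arun/Kabsch SVD method). The patent does not name the solver, so the following is a sketch of one common choice:

```python
import numpy as np

def estimate_pose(p, p_prev):
    """Closed-form least-squares R, t aligning matched 3D points:
    minimises sum_i || p_i - (R p'_i + t) ||^2 via SVD (Arun/Kabsch).
    p, p_prev: (N, 3) arrays of matched points from consecutive frames."""
    c, c_prev = p.mean(axis=0), p_prev.mean(axis=0)
    H = (p_prev - c_prev).T @ (p - c)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # reflection guard so that det(R) = +1 (proper rotation)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = c - R @ c_prev
    return R, t
```

A robust pipeline would wrap this solver in RANSAC to tolerate the residual mismatches that survive descriptor screening.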

Step four, loop-closure detection:

Judge from the similarity of two images whether a previously visited position has been reached, so as to determine the loop-detection relation, and measure inter-frame similarity by building a tree-structured clustering database to decide whether the positions should coincide and a loop be closed. The feature points and descriptors extracted during image matching are clustered into a tree-branch structure so that image features can be screened quickly, candidate images output, and the similarity-judgment time reduced. Compute the class-vector similarity between the current image and the non-connected but adjacent keyframes that share feature classes with it, and judge whether a loop has occurred.
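
A toy sketch of the class-vector scoring and candidate screening in step four. The L1-based score is an assumption borrowed from common bag-of-words practice (DBoW-style); the threshold value is invented for the example:

```python
import numpy as np

def bow_similarity(v1, v2):
    """DBoW-style L1 score between two image class (bag-of-words) vectors,
    assumed here as the patent's 'class-vector similarity': 1 for identical
    direction, 0 for images sharing no feature classes."""
    n1 = v1 / np.abs(v1).sum()
    n2 = v2 / np.abs(v2).sum()
    return 1.0 - 0.5 * np.abs(n1 - n2).sum()

def loop_candidates(current, keyframes, threshold=0.6):
    """Return indices of keyframes sharing at least one feature class with
    the current image whose similarity score passes the threshold."""
    shared = [(i, bow_similarity(current, v)) for i, v in enumerate(keyframes)
              if np.any((current > 0) & (v > 0))]
    return [i for i, s in shared if s >= threshold]
```

The shared-class pre-filter mirrors the patent's use of the cluster tree to discard most keyframes before any similarity score is computed.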

Step five, the secondary loop-closure detection mechanism:

After the loop is confirmed in the preceding step, do not close it immediately. Instead, apply a coarse comparison constraint to the current image frame's pose using the IMU pose information, i.e. compare the IMU's current position and heading with those of the primary closed-loop image and perform a direction-consistency pre-judgment, so that the correctness of the loop is confirmed a second time, forming a secondary detection mechanism.
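
The direction-consistency pre-judgment can be illustrated as a simple gate. The distance and heading thresholds below are invented for the example, since the patent leaves them unspecified:

```python
import numpy as np

def confirm_loop(imu_pos, imu_yaw, kf_pos, kf_yaw,
                 max_dist=5.0, max_yaw_deg=30.0):
    """Secondary loop check (a sketch; thresholds are hypothetical):
    accept the visually detected loop only if the IMU's current position
    and heading are roughly consistent with the candidate keyframe's pose."""
    dist_ok = np.linalg.norm(np.asarray(imu_pos) - np.asarray(kf_pos)) <= max_dist
    dyaw = (imu_yaw - kf_yaw + 180.0) % 360.0 - 180.0   # wrap to (-180, 180]
    return dist_ok and abs(dyaw) <= max_yaw_deg
```

Because the IMU drift is bounded between successful loop closures, even a coarse gate like this is enough to reject a visually convincing but physically impossible match between look-alike buildings.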

Step six, repositioning:

Using the error obtained after loop confirmation, apply global nonlinear optimization to distribute the loop error evenly over all key image frames, optimizing and updating their poses in the world coordinate system together with the associated map points, and thereby obtain the repositioning result. At the same time, differentiate the updated current map points to correct the IMU's current accumulated drift error.
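
As a simplified stand-in for the global nonlinear optimization, the sketch below distributes the loop-closure translation error linearly along the keyframe chain; a real implementation would optimize a full pose graph rather than positions alone:

```python
import numpy as np

def distribute_loop_error(poses, loop_error):
    """Evenly spread the loop-closure translation error over the keyframe
    positions: keyframe k receives the fraction k/(N-1) of the correction,
    so the first frame is untouched and the last absorbs the full error."""
    poses = np.asarray(poses, dtype=float)
    weights = np.linspace(0.0, 1.0, len(poses)).reshape(-1, 1)
    return poses - weights * np.asarray(loop_error, dtype=float)
```

The corrected final pose is then what the text refers to as the high-precision pose used to re-anchor the IMU and cancel its accumulated drift.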

Preferably, the least-squares formula in step three is:

where R is the rotation matrix, t is the translation vector, p_i is a feature point in the current image, and p_i' is the corresponding matched feature point in the previous image.
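
The formula itself is rendered as an image in the original and lost in this extraction; from the surrounding description (minimizing the sum of squared matching errors over R and t), it is presumably the standard form, with p_i' denoting the matched point in the previous image:

```latex
(R^{*}, t^{*}) = \arg\min_{R,\,t} \; \frac{1}{2} \sum_{i=1}^{n} \left\lVert p_i - \left( R\, p_i' + t \right) \right\rVert_2^{2}
```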

Preferably, the class-vector similarity formula in step four is:

where v_1 and v_2 are the feature class vectors of the two images.
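
The similarity formula is likewise an equation image lost in extraction. A common bag-of-words score consistent with the text (bounded, with identical class vectors giving the highest value) is the DBoW-style L1 score, given here as an assumption:

```latex
s(v_1, v_2) = 1 - \frac{1}{2} \left\lVert \frac{v_1}{\lVert v_1 \rVert} - \frac{v_2}{\lVert v_2 \rVert} \right\rVert_1
```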

Beneficial effects of the invention:

1. The visual image acquisition rate is low while the IMU sampling rate is high; after timestamp synchronization, and before the IMU has accumulated significant drift, the IMU provides more accurate positioning information than the visual images, so IMU information can be used to correct the accumulated visual error.

2. With the combined IMU, when the camera moves too fast and the blurred images lose the positioning information, the current pose can be obtained by integrating the IMU pose information under the synchronized timestamps and the associated map points updated, compensating for the positioning failure caused by the loss of usable imagery.

3. The secondary loop-closure detection and positioning method applies a coarse comparison constraint to the current image frame's pose using the IMU pose information: the IMU's current position and heading are compared with those of the primary closed-loop image as a direction-consistency pre-judgment. This prevents false loops in scenes such as similar-looking buildings, where image similarity alone would indicate a closed loop even though the IMU's actual position shows the vehicle cannot be at the same place; the IMU direction pre-judgment thus avoids positioning errors caused by similar images in similar environments.

4. The high-precision pose updated after repositioning is used to correct the IMU's accumulated drift error, improving the robustness of the IMU-and-vision secondary loop-closure detection and positioning method.

Brief Description of the Drawings

Fig. 1 is a flow chart of the secondary loop-closure detection and positioning method based on an IMU and vision in similar environments according to the present invention.

Detailed Description

The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art on the basis of these embodiments, without creative effort, fall within the protection scope of the present invention.

Referring to Fig. 1, an embodiment of the present invention provides a secondary loop-closure detection and positioning method based on an IMU and vision in a similar environment, comprising the following steps:

Step one, binocular camera and IMU parameter calibration and synchronization:

Unify the timestamps of the binocular camera and the IMU, and calibrate their parameters, including the camera start-up delay, the IMU start-up delay, and the accelerometer and gyroscope biases (the bias is the non-zero initial output when the input is zero, i.e. when the IMU is stationary). Synchronize the camera and IMU clocks, and read the binocular camera images and the IMU pose information.

Step two, image feature extraction and image matching:

Detect corner points in the image as feature points, and describe the image patch around each corner to form a feature descriptor. Match the descriptors of two consecutive images using the Hamming distance, and screen the matched feature point pairs to reject mismatches.

Step three, pose estimation and trajectory formation:

From the 3D information of the matched groups of feature point pairs, use the least-squares method to compute the rotation matrix R and translation vector t that minimize the sum of squared errors over the matched pairs, obtaining the binocular camera's pose and the trajectory of the moving map points.

When the camera moves too fast, blurring the image and losing the positioning information, obtain the current binocular camera pose by integrating the IMU pose information under the unified timestamp, and update the associated map points.

Step four, loop-closure detection:

Judge from the similarity of two images whether a previously visited position has been reached, so as to determine the loop-detection relation, and measure inter-frame similarity by building a tree-structured clustering database to decide whether the positions should coincide and a loop be closed. The feature points and descriptors extracted during image matching are clustered into a tree-branch structure so that image features can be screened quickly, candidate images output, and the similarity-judgment time reduced. Compute the class-vector similarity between the current image and the non-connected but adjacent keyframes that share feature classes with it, and judge whether a loop has occurred.

Step five, the secondary loop-closure detection mechanism:

After the loop is confirmed in the preceding step, do not close it immediately. Instead, apply a coarse comparison constraint to the current image frame's pose using the IMU pose information, i.e. compare the IMU's current position and heading with those of the primary closed-loop image and perform a direction-consistency pre-judgment, so that the correctness of the loop is confirmed a second time, forming a secondary detection mechanism.

Step six, repositioning:

Using the error obtained after loop confirmation, apply global nonlinear optimization to distribute the loop error evenly over all key image frames, optimizing and updating their poses in the world coordinate system together with the associated map points, and thereby obtain the repositioning result. At the same time, differentiate the updated current map points to correct the IMU's current accumulated drift error.

In this embodiment, the least-squares formula in step three is:

where R is the rotation matrix, t is the translation vector, p_i is a feature point in the current image, and p_i' is the corresponding matched feature point in the previous image.

In this embodiment, the class-vector similarity formula in step four is:

where v_1 and v_2 are the feature class vectors of the two images.

In summary, the embodiments of the present invention provide a secondary loop-closure detection and positioning method based on an IMU and vision in similar environments. A coarse comparison constraint is applied to the current image frame's pose using the IMU pose information: the IMU's current position and heading are compared with those of the primary closed-loop image as a direction-consistency pre-judgment, preventing false loops where similar surroundings, such as near-identical buildings, would be judged a closed loop from image similarity even though the IMU's actual position rules out being at the same place. The IMU direction pre-judgment thus avoids positioning errors caused by similar images in similar environments, and the high-precision pose updated after repositioning corrects the IMU's accumulated drift error, improving the robustness of the secondary loop-closure detection and positioning method.

It will be apparent to those skilled in the art that the invention is not limited to the details of the above exemplary embodiments and can be embodied in other specific forms without departing from its spirit or essential characteristics. The embodiments should therefore be regarded in every respect as exemplary and non-restrictive, the scope of the invention being defined by the appended claims rather than by the foregoing description; all changes falling within the meaning and range of equivalency of the claims are intended to be embraced therein. No reference sign in a claim shall be construed as limiting the claim concerned.

Claims (4)

1. A secondary loop-closure detection and positioning method based on an IMU and vision in a similar environment, characterized by comprising the following steps:
step one, calibrating and synchronizing the parameters of a binocular camera and an IMU:
unifying the timestamps of the binocular camera and the IMU; calibrating the parameters of the binocular camera and the IMU, including the binocular camera start-up delay, the IMU start-up delay, the accelerometer bias and the gyroscope bias; synchronizing the binocular camera and IMU clocks; and reading the images acquired by the binocular camera and the IMU pose information;
step two, image feature extraction and image matching:
detecting corner points in the images acquired by the binocular camera as feature points, describing the image around each corner point to form feature descriptors, matching the feature descriptors of two consecutive images using the Hamming distance, and screening the matched feature point pairs to prevent mismatches;
step three, pose estimation and movement track formation:
calculating, by the least-squares method from the 3D information of the matched groups of feature point pairs, the rotation matrix R and the translation vector t that minimize the sum of squared errors of the feature matching point pairs, and obtaining the pose of the binocular camera and the locus of the moving map points;
when the camera moves too fast, causing image blur and loss of the positioning information, acquiring the current binocular camera pose by integrating the IMU pose information under the unified timestamp, and updating the associated map points;
step four, loop-closure detection:
judging from the similarity of two images whether a previously passed position has been reached, so as to determine the loop-detection relation, and measuring the similarity between image frames by constructing a tree-structured clustering database to judge whether the positions should coincide so that a loop is closed; clustering the feature points and feature descriptions extracted during image matching into a tree-branch structure, so as to screen image features rapidly, output candidate images and reduce the similarity-judgment time; and calculating the class-vector similarity between the current image and the non-connected but adjacent keyframes that share feature classes with the current image, to judge whether to close the loop;
step five, a secondary loop-closure detection mechanism:
after the loop is confirmed in the above steps, not closing it immediately, but applying a coarse comparison constraint to the pose of the current image frame through the IMU pose information, namely comparing the IMU's current position and heading with those of the primary closed-loop image and performing a direction-consistency pre-judgment, so that the correctness of the loop is confirmed again, forming a secondary detection mechanism;
step six, repositioning:
using the error result after loop confirmation, adopting global nonlinear optimization to distribute the loop error evenly over all key image frames, so as to optimize and update the poses of all key image frames in the world coordinate system and the associated map points, thereby obtaining the repositioning; and meanwhile differentiating the updated current map points to correct the current accumulated drift error of the IMU.
2. The method for secondary loop-closure detection and positioning based on an IMU and vision according to claim 1, wherein the condition for screening a feature matching point pair in step two is that the Hamming distance of the descriptors is less than twice the minimum distance.
3. The method for secondary loop-closure detection and positioning based on an IMU and vision in a similar environment according to claim 1, wherein the calculation formula of the least-squares method in step three is:
Figure FDA0004165114380000021
wherein R is a rotation matrix, t is a translation vector, p_i is a feature point in the current image, and
Figure FDA0004165114380000022
is the corresponding matched feature point in the previous image.
4. The method according to claim 1, wherein the calculation formula of the class-vector similarity in step four is:
Figure FDA0004165114380000023
wherein v_1 and v_2 are the feature class vectors of the two images.
CN202011206955.4A 2020-11-03 2020-11-03 A secondary loop-closing detection and positioning method based on IMU and vision in a similar environment Active CN112461228B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011206955.4A CN112461228B (en) 2020-11-03 2020-11-03 A secondary loop-closing detection and positioning method based on IMU and vision in a similar environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011206955.4A CN112461228B (en) 2020-11-03 2020-11-03 A secondary loop-closing detection and positioning method based on IMU and vision in a similar environment

Publications (2)

Publication Number Publication Date
CN112461228A CN112461228A (en) 2021-03-09
CN112461228B true CN112461228B (en) 2023-05-09

Family

ID=74834896

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011206955.4A Active CN112461228B (en) 2020-11-03 2020-11-03 A secondary loop-closing detection and positioning method based on IMU and vision in a similar environment

Country Status (1)

Country Link
CN (1) CN112461228B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113506342B (en) * 2021-06-08 2024-01-02 北京理工大学 SLAM omni-directional loop correction method based on multi-camera panoramic vision
CN113900517B (en) * 2021-09-30 2022-12-20 北京百度网讯科技有限公司 Route navigation method and device, electronic equipment and computer readable medium
CN115631319B (en) * 2022-11-02 2023-06-23 北京科技大学 A Loop Closure Detection Method Based on Intersection Attention Network
CN117291981B (en) * 2023-10-09 2025-02-18 中国船舶科学研究中心 Binocular vision synchronous positioning method and system

Citations (7)

Publication number Priority date Publication date Assignee Title
CN110044354A (en) * 2019-03-28 2019-07-23 东南大学 A binocular vision indoor positioning and mapping method and device
CN110533722A (en) * 2019-08-30 2019-12-03 的卢技术有限公司 A fast robot relocation method and system based on a visual dictionary
CN110986968A (en) * 2019-10-12 2020-04-10 清华大学 Method and device for real-time global optimization and error loop judgment in three-dimensional reconstruction
CN111060101A (en) * 2018-10-16 2020-04-24 深圳市优必选科技有限公司 Vision-assisted distance SLAM method and device and robot
CN111462231A (en) * 2020-03-11 2020-07-28 华南理工大学 Positioning method based on RGBD sensor and IMU sensor
CN111693047A (en) * 2020-05-08 2020-09-22 中国航空工业集团公司西安航空计算技术研究所 Visual navigation method for micro unmanned aerial vehicle in high-dynamic scene
CN111767905A (en) * 2020-09-01 2020-10-13 南京晓庄学院 An Improved Image Method Based on Landmark-Convolution Features

Non-Patent Citations (3)

Title
Optimized LOAM using ground plane constraints and SegMatch-based loop detection; Liu X et al.; Sensors, Vol. 19, No. 24, pp. 1-19 *
Research on SLAM technology based on monocular vision and an inertial measurement unit; Yu Wei; China Master's Theses Full-text Database, Information Science and Technology Series, No. 02, pp. I138-2043 *
A deep-learning-based loop closure detection method for visual SLAM; Yu Yu et al.; Computer Engineering and Design, Vol. 40, No. 02, pp. 529-536 *

Also Published As

Publication number Publication date
CN112461228A (en) 2021-03-09

Similar Documents

Publication Publication Date Title
CN112461228B (en) A secondary loop-closing detection and positioning method based on IMU and vision in a similar environment
CN111561923B (en) SLAM (simultaneous localization and mapping) mapping method and system based on multi-sensor fusion
CN112734852B (en) Robot mapping method and device and computing equipment
Sola et al. Fusing monocular information in multicamera SLAM
CN105865454B (en) A kind of Navigation of Pilotless Aircraft method generated based on real-time online map
CN112556719B (en) Visual inertial odometer implementation method based on CNN-EKF
AU2013343222A1 (en) Cloud feature detection
Hide et al. Low cost vision-aided IMU for pedestrian navigation
CN113432604B (en) An IMU/GPS integrated navigation method capable of sensitive fault detection
CN108196285A 2018-06-22 A precise positioning system based on multi-sensor fusion
CN105352509A (en) Unmanned aerial vehicle motion target tracking and positioning method under geographic information space-time constraint
CN110542916A (en) Satellite and vision tightly coupled positioning method, system and medium
US20120218409A1 (en) Methods and apparatus for automated assignment of geodetic coordinates to pixels of images of aerial video
CN108446710A 2018-08-24 Fast indoor floor-plan reconstruction method and reconstruction system
Li et al. Fast vision‐based autonomous detection of moving cooperative target for unmanned aerial vehicle landing
CN103411587A (en) Positioning and attitude-determining method and system
CN116380079A (en) Underwater SLAM method for fusing front-view sonar and ORB-SLAM3
CN114485640A (en) Monocular visual-inertial synchronous positioning and mapping method and system based on point and line features
Wang et al. GIVE: A tightly coupled RTK-inertial–visual state estimator for robust and precise positioning
Andert et al. On the safe navigation problem for unmanned aircraft: Visual odometry and alignment optimizations for UAV positioning
Choi et al. Federated‐filter‐based unmanned ground vehicle localization using 3D range registration with digital elevation model in outdoor environments
CN117760427A (en) Inertial navigation-map fusion positioning method based on environment landmark detection
Fong et al. Computer vision centric hybrid tracking for augmented reality in outdoor urban environments
WO2022179047A1 (en) State information estimation method and apparatus
CN114842224A (en) An absolute visual matching positioning scheme for monocular UAV based on geographic basemap

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant