CN110070577B - Visual SLAM key frame and feature point selection method based on feature point distribution - Google Patents

Info

Publication number
CN110070577B
CN110070577B (application CN201910363058.5A)
Authority
CN
China
Prior art keywords
feature point
frame
key frame
points
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910363058.5A
Other languages
Chinese (zh)
Other versions
CN110070577A (en)
Inventor
朱策
徐榕键
赵希
冯家琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201910363058.5A
Publication of CN110070577A
Application granted
Publication of CN110070577B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras

Abstract

The invention discloses a visual SLAM key frame and feature point selection method based on feature point distribution, relating to the field of visual simultaneous localization and mapping. Specifically, the invention provides a method for selecting key frames and feature points during visual simultaneous localization and mapping: when large parts near the edge of a frame are not yet stored in the map, that frame is defined as a key frame, and within a frame the selected feature points should cover the whole image as far as possible. Without affecting localization and mapping accuracy, the invention improves tracking stability and reduces the number of key frames and map points, meeting the requirements of engineering applications.

Description

Visual SLAM key frame and feature point selection method based on feature point distribution
Technical Field
The invention belongs to the technical field of visual simultaneous localization and mapping (visual SLAM) and relates to a visual SLAM key frame and feature point selection method based on feature point distribution.
Background
Visual simultaneous localization and mapping (visual SLAM) has been a major research hotspot in recent years. Traditional SLAM mainly used lidar as its sensor, but with the development of computer vision, visual localization and mapping is now also applied to unmanned aerial vehicles, mobile robots, and AR wearable devices. It is one of the core technologies that enable unmanned aerial vehicles, mobile robots, and similar equipment to move autonomously.
A visual localization and mapping system mainly comprises sensor-data preprocessing, a front-end visual odometer, back-end non-linear optimization, loop closure detection, and mapping. The sensors mainly include inertial measurement units and cameras; common cameras are monocular, binocular (stereo), and RGB-D. A monocular camera is cheap and not limited in range, but suffers from scale ambiguity; a binocular camera can compute depth and is likewise not range-limited, but its configuration is complex and its computational load is large; an RGB-D camera measures depth actively and reconstructs scenes well, but its measurement range is small.
The front-end visual odometer quantitatively estimates camera motion from adjacent frames: integrating these adjacent-frame motions yields the camera trajectory, realizing localization, and once the camera pose at each moment is estimated, the spatial positions of pixels can be recovered, yielding the map. The front-end visual odometer is therefore the key to accurate localization and mapping. Because it only estimates motion between adjacent frames, the error of each estimate accumulates and produces considerable trajectory drift over time. To reduce drift, back-end optimization and loop closure detection are required. Back-end optimization refines the results of the front-end visual odometer to obtain the best pose estimates; it mainly follows two approaches, optimization based on filtering theory and non-linear (graph) optimization. Because a SLAM system is non-linear and non-Gaussian, graph optimization has become the mainstream approach, and traditional filtering theory is used less and less. Loop closure detection is another way to reduce drift: by judging the similarity of images, it recognizes previously visited places and re-adjusts the map and trajectory to eliminate accumulated error.
Simultaneous localization and mapping can build different types of maps as needed; common ones are 2D grid maps, 2D topological maps, and 3D point cloud maps. The 2D grid map is the most common and captures the information of a two-dimensional plane. The 2D topological map emphasizes connectivity between elements, discards many details, and is more compact. The 3D point cloud map can visually reconstruct a real scene, but it occupies too much space and contains too much redundant information.
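To make the pipeline described above concrete, the following is a schematic Python sketch of how its parts fit together, with every component stubbed out. All names here (FrontEndVO, BackEnd, LoopDetector, slam_loop) are illustrative placeholders, not the patent's method and not any real library's API.

from typing import List, Tuple

Pose = Tuple[float, float, float]  # toy pose: (x, y, heading)

class FrontEndVO:
    """Front end: estimates camera motion between adjacent frames."""
    def estimate(self, prev_frame, frame) -> Pose:
        return (0.0, 0.0, 0.0)  # stub; a real VO matches feature points here

class BackEnd:
    """Back end: non-linear (graph) optimization over key-frame poses."""
    def __init__(self) -> None:
        self.poses: List[Pose] = []
    def add_keyframe(self, pose: Pose) -> None:
        self.poses.append(pose)
    def optimize(self) -> None:
        pass  # stub; a real system would run graph optimization here

class LoopDetector:
    """Loop closure: recognizes revisited places by image similarity."""
    def detect(self, frame) -> bool:
        return False  # stub; a real detector compares image descriptors

def slam_loop(frames, keyframe_test):
    """One pass of the pipeline: VO, key-frame decision, back end, map."""
    vo, backend, loops = FrontEndVO(), BackEnd(), LoopDetector()
    prev = None
    for frame in frames:                 # pre-processed sensor data
        pose = vo.estimate(prev, frame)  # front-end visual odometry
        if keyframe_test(frame):         # key-frame selection (the subject of this patent)
            backend.add_keyframe(pose)
            loops.detect(frame)          # a loop closure would add a constraint here
            backend.optimize()           # drift reduction via graph optimization
        prev = frame
    return backend.poses                 # optimized poses give trajectory and map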
All parts of visual localization and mapping must interlock and cooperate closely, and among them the front-end visual odometer is the key link for accurate localization and mapping. Existing methods, even when their accuracy reaches the required standard, match too few feature points and do not track stably enough, which makes them difficult to apply in engineering.
Disclosure of Invention
The invention aims to solve the problems that existing algorithms do not track stably enough and store too much redundant information in the map. In engineering practice, key frame judgment and feature point selection must reach a certain accuracy while keeping tracking stable; for example, when the map is used for visual localization and navigation of a robot, more feature points must be matched to keep tracking stable. To this end, the invention provides a visual SLAM key frame and feature point selection method based on feature point distribution.
The technical scheme of the invention is as follows:
As shown in fig. 1, the visual SLAM key frame and feature point selection method based on feature point distribution comprises the following steps (a hedged code sketch of steps S3-S12 follows the list):
S1: input a frame of image captured by a camera;
S2: extract the feature points of this frame and match them against the latest (most recent) key frame to obtain matching points;
S3: judge whether the camera pose has changed too much; if yes, execute S7; if no, divide the current frame into several regions a_1, a_2, …, a_n; the division method includes but is not limited to the one shown in fig. 2;
S4: count the number of feature points in each region;
S5: from the result of S4, count the number of valid regions in the whole image; a valid region is a region whose number of feature points exceeds a threshold;
S6: judge whether the number of valid regions containing no matching points is at least 2; if yes, execute S7; if no, the frame is a non-key frame and the procedure ends;
S7: select the frame as a key frame;
S8: set a_1 as the current region;
S9: judge whether the current region contains feature points; if yes, execute S10; if no, set the next region as the current region;
S10: select the feature point with the minimum depth in the current region, store it in the map, and remove it from the current region;
S11: judge whether all feature points have been stored in the map or the number of stored feature points has reached a threshold; if yes, the storage of key frame information is complete; if no, execute S12;
S12: set the next region as the current region, where the region after a_n is a_1, and execute S9.
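The following is a minimal, runnable Python sketch of steps S3-S12 under stated assumptions. The patent fixes neither the region division (fig. 2 shows only one possibility) nor the data structures or threshold values, so the grid division, the FeaturePoint record, and the numbers valid_thresh and max_points below are illustrative choices, not part of the claimed method.

from dataclasses import dataclass

@dataclass
class FeaturePoint:
    x: float          # pixel column
    y: float          # pixel row
    depth: float      # estimated depth of the point
    has_match: bool   # matched against the last key frame (S2)?

def region_index(p, width, height, n_cols, n_rows):
    """Map a point to its grid cell a_1..a_n (one possible division, cf. fig. 2)."""
    col = min(int(p.x * n_cols / width), n_cols - 1)
    row = min(int(p.y * n_rows / height), n_rows - 1)
    return row * n_cols + col

def is_key_frame(points, width, height, n_cols=4, n_rows=3,
                 valid_thresh=5, pose_change_large=False):
    """S3-S7: decide whether the current frame becomes a key frame."""
    if pose_change_large:                 # S3: large pose change -> key frame
        return True
    n = n_cols * n_rows
    counts = [0] * n                      # S4: feature points per region
    matches = [0] * n
    for p in points:
        i = region_index(p, width, height, n_cols, n_rows)
        counts[i] += 1
        if p.has_match:
            matches[i] += 1
    # S5: a region is valid if its feature-point count exceeds the threshold.
    # S6: key frame iff at least 2 valid regions contain no matching points,
    # i.e. enough of the image is not yet represented in the map.
    unmatched_valid = sum(1 for i in range(n)
                          if counts[i] > valid_thresh and matches[i] == 0)
    return unmatched_valid >= 2

def select_map_points(points, width, height, n_cols=4, n_rows=3, max_points=200):
    """S8-S12: round-robin over the regions, each turn storing the remaining
    point of minimum depth, until all points are stored or max_points is hit."""
    n = n_cols * n_rows
    buckets = [[] for _ in range(n)]
    for p in points:
        buckets[region_index(p, width, height, n_cols, n_rows)].append(p)
    for b in buckets:
        b.sort(key=lambda q: q.depth)     # minimum-depth point first (S10)
    selected, i = [], 0                   # S8: start from region a_1
    remaining = len(points)
    while remaining > 0 and len(selected) < max_points:   # S11
        if buckets[i]:                    # S9: current region still has points?
            selected.append(buckets[i].pop(0))            # S10
            remaining -= 1
        i = (i + 1) % n                   # S12: the region after a_n is a_1
    return selected

# Toy usage: three points in a 640x480 frame on a 2x2 grid with low thresholds.
pts = [FeaturePoint(10, 10, 2.0, False), FeaturePoint(620, 20, 1.5, False),
       FeaturePoint(320, 400, 3.0, True)]
if is_key_frame(pts, 640, 480, n_cols=2, n_rows=2, valid_thresh=0):
    stored = select_map_points(pts, 640, 480, n_cols=2, n_rows=2)

The round-robin of S8-S12 is what spreads the stored points over the whole image: every region contributes its nearest remaining point in turn before any region contributes a second one.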
The beneficial effects of the method are as follows. Existing methods mainly aim at improving accuracy when a mobile robot performs visual localization and mapping, so the related algorithms cannot maintain tracking stability and cannot be put to effective practical use in engineering. With accuracy essentially equal to that of existing algorithms, the invention selects more representative key frames and higher-quality feature points, so that markedly fewer key frames need to be selected while the number of matched feature points increases markedly. The invention improves tracking quality, reduces storage-space occupation, and is better suited than existing algorithms to engineering use.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is one possible implementation of the region division method used in the present invention;
FIG. 3 is a graph comparing the number of key frames obtained in experiments with the present invention on the EuRoC dataset;
FIG. 4 is a graph comparing the number of matched feature points obtained in experiments with the present invention on the EuRoC dataset.
Detailed Description
The embodiments of the present invention have been described in detail in the summary section, and are not further described herein.
Figures 3 and 4 compare the method of the invention with the conventional method under the same conditions. The number of key frames required by the method of the invention is significantly reduced, while the number of matched feature points is significantly increased; the required storage space shrinks while tracking quality improves, demonstrating the practical utility of the invention.

Claims (1)

1. A visual SLAM key frame and feature point selection method based on feature point distribution, characterized by comprising the following steps:
S1: input a frame of image captured by a camera;
S2: extract the feature points of this frame image and match them against the previous key frame to obtain matching points;
S3: judge whether the camera pose has changed too much; if yes, execute S7; if no, divide the current frame into several regions a_1, a_2, …, a_n;
S4: count the number of feature points in each region;
S5: from the result of S4, count the number of valid regions in the whole image, where a valid region is a region whose number of feature points exceeds a threshold;
S6: judge whether the number of valid regions containing no matching points is at least 2; if yes, execute S7; if no, mark the frame as a non-key frame and end;
S7: select the frame as a key frame;
S8: set a_1 as the current region;
S9: judge whether the current region contains feature points; if yes, execute S10; if no, set the next region as the current region;
S10: select the feature point with the minimum depth in the current region, store it in the map, and remove that minimum-depth feature point from the current region;
S11: judge whether all feature points have been stored in the map or the number of feature points stored in the map has reached a threshold; if yes, the storage of key frame information is complete; if no, execute S12;
S12: set the next region as the current region, where the region after a_n is a_1, and execute S9.
CN201910363058.5A 2019-04-30 2019-04-30 Visual SLAM key frame and feature point selection method based on feature point distribution Active CN110070577B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910363058.5A CN110070577B (en) 2019-04-30 2019-04-30 Visual SLAM key frame and feature point selection method based on feature point distribution


Publications (2)

Publication Number Publication Date
CN110070577A CN110070577A (en) 2019-07-30
CN110070577B (en) 2023-04-28

Family

ID=67370135

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910363058.5A Active CN110070577B (en) 2019-04-30 2019-04-30 Visual SLAM key frame and feature point selection method based on feature point distribution

Country Status (1)

Country Link
CN (1) CN110070577B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11725944B2 (en) 2020-03-02 2023-08-15 Apollo Intelligent Driving Technology (Beijing) Co, Ltd. Method, apparatus, computing device and computer-readable storage medium for positioning
CN111538855B (en) * 2020-04-29 2024-03-08 浙江商汤科技开发有限公司 Visual positioning method and device, electronic equipment and storage medium
CN113219440A (en) * 2021-04-22 2021-08-06 电子科技大学 Laser radar point cloud data correction method based on wheel type odometer

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2506338A (en) * 2012-07-30 2014-04-02 Sony Comp Entertainment Europe A method of localisation and mapping
WO2014154533A1 (en) * 2013-03-27 2014-10-02 Thomson Licensing Method and apparatus for automatic keyframe extraction
WO2017166089A1 (en) * 2016-03-30 2017-10-05 Intel Corporation Techniques for determining a current location of a mobile device

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104537709A (en) * 2014-12-15 2015-04-22 西北工业大学 Real-time three-dimensional reconstruction key frame determination method based on position and orientation changes
CN107437258A (en) * 2016-05-27 2017-12-05 株式会社理光 Feature extracting method, estimation method of motion state and state estimation device
CN106296812A (en) * 2016-08-18 2017-01-04 宁波傲视智绘光电科技有限公司 Synchronize location and build drawing method
KR20180113060A (en) * 2017-04-05 2018-10-15 충북대학교 산학협력단 Keyframe extraction method for graph-slam and apparatus using thereof
CN108961385A (en) * 2017-05-22 2018-12-07 中国人民解放军信息工程大学 A kind of SLAM patterning process and device
CN108780577A (en) * 2017-11-30 2018-11-09 深圳市大疆创新科技有限公司 Image processing method and equipment
CN108133493A (en) * 2018-01-10 2018-06-08 电子科技大学 A kind of heterologous image registration optimization method mapped based on region division and gradual change
CN108648215A (en) * 2018-06-22 2018-10-12 南京邮电大学 SLAM motion blur posture tracking algorithms based on IMU
CN109509211A (en) * 2018-09-28 2019-03-22 北京大学 Positioning simultaneously and the feature point extraction and matching process and system built in diagram technology
CN109583409A (en) * 2018-12-07 2019-04-05 电子科技大学 A kind of intelligent vehicle localization method and system towards cognitive map

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Jhen-Wei Ruan et al. Cooperative Visual Simultaneous Localization and Mapping by Ordering Keyframes Similarity. 2018 IEEE International Conference on Consumer Electronics-Taiwan, 2018, 1-2. *
刘浩敏 et al. Monocular simultaneous localization and mapping for large-scale scenes. Scientia Sinica Informationis (中国科学: 信息科学), 2016, 14. *

Also Published As

Publication number Publication date
CN110070577A (en) 2019-07-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant