CN108682027A - VSLAM realization method and systems based on point, line Fusion Features - Google Patents


Info

Publication number
CN108682027A
CN108682027A
Authority
CN
China
Prior art keywords
frame
line
features
feature
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810449541.0A
Other languages
Chinese (zh)
Inventor
王行
周晓军
杨淼
李朔
李骊
盛赞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing HJIMI Technology Co Ltd
Original Assignee
Beijing HJIMI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing HJIMI Technology Co Ltd filed Critical Beijing HJIMI Technology Co Ltd
Priority to CN201810449541.0A priority Critical patent/CN108682027A/en
Publication of CN108682027A publication Critical patent/CN108682027A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/38Registration of image sequences
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a vSLAM realization method and system based on point and line feature fusion, including: step S110, acquiring an image frame sequence of a target scene; step S120, preprocessing each frame image; step S130, initializing an environment map according to the successfully matched point features and line features; step S140, tracking based on the environment map, and estimating the pose of the current frame image; step S150, judging whether the current frame image meets the key frame condition, and if so, executing step S160, otherwise repeating steps S110 to S150; step S160, executing the local map thread; step S170, executing the closed-loop detection thread; step S180, executing the global optimization thread, so as to obtain the optimized environment map and complete synchronous positioning and map construction. The extraction and matching of line features are improved to raise the accuracy of front-end data association, so the existing deficiencies of vSLAM in complex, low-texture scenes can be effectively overcome.

Description

vSLAM realization method and system based on point and line feature fusion
Technical Field
The invention relates to the field of visual synchronous positioning and map construction (SLAM), in particular to a vSLAM implementation method based on point and line feature fusion and a vSLAM implementation system based on point and line feature fusion.
Background
Simultaneous localization and mapping (SLAM) originated in the field of robotics; its aim is to reconstruct the three-dimensional structure of an unknown environment in real time while simultaneously localizing the robot itself. Early SFM (structure from motion) techniques were generally processed off-line; real-time SFM techniques appeared later with the development of the technology and may be considered to fall within the scope of SLAM. V-SLAM technology infers the position of a camera in an unknown environment from the captured video and simultaneously constructs an environment map; its basic principle is multi-view geometry. The aim of V-SLAM is to simultaneously recover the camera motion parameters C_1...C_m corresponding to each frame of the image and the three-dimensional structure X_1...X_n of the scene, wherein each camera motion parameter C_i contains camera position and orientation information, typically expressed as a 3x3 rotation matrix R_i and a three-dimensional position variable P_i.
Since feature-based V-SLAM must match image features, its stability depends heavily on the richness of scene features. When texture information in a scene is lacking, or an image is blurred by rapid camera motion, the number of point features is often small, which degrades pose estimation precision. Although direct tracking methods alleviate the feature-dependence problem to some extent, dense and semi-dense direct tracking is computationally expensive and cannot run on platforms with limited computing power. Man-made, structured environments contain structured features such as line features and plane features. Line-segment features and point features are complementary: point features can hardly be extracted on the ground or on wall surfaces (for example, when a camera shoots a pure white wall, the camera motion cannot be recovered from the image alone), yet abundant line-segment features exist at the junction of the ground and the wall. Compared with point features, line-segment features are higher-level features; an environment map constructed with line-segment features carries more intuitive geometric information and can improve the precision and robustness of a SLAM system.
Chinese patent application CN104077809A discloses a "visual SLAM method based on structural lines", which provides a camera device for collecting images of the surrounding environment and uses the structural lines of a building as feature lines to achieve real-time localization and mapping (SLAM). The method comprises: SLAM initialization, in which a leading direction is selected, lines in the leading direction are collected as feature lines, and the newly added feature lines are parameterized; and the SLAM process, in which, for each frame, the motion of the camera device is predicted, the position of each feature line in the next frame is predicted from that motion, a matching line is searched for near the predicted position to obtain the actual position of the feature line in the next frame, the deviation between the predicted and actual positions is computed, and the positions of the feature lines and the pose of the camera device are updated with a Kalman filter. By exploiting the leading-direction information of the structural lines, the method can constrain the predicted direction globally and greatly improve the precision of the trajectory and the map. The method has the following defects: 1. optimizing the pose with a Kalman filter has limitations, because the state variable keeps only the pose at the current moment and the poses at past moments are no longer updated, so inaccurately estimated prior information is propagated to the next moment and accumulates as error; 2. the method does not implement closed-loop detection and is therefore limited in extensibility.
Chinese patent application CN107392964A discloses an "indoor SLAM method based on the combination of indoor feature points and structure lines". The visual SLAM algorithm combining indoor feature points and structure lines includes: calibrating the internal parameters of the camera; extracting feature points and structure lines from the video frame images acquired by the camera; tracking the feature points and structure lines and selecting keyframes; mapping the spatial points and spatial lines of the surrounding environment from the obtained tracking information and optimizing the platform localization; and judging whether the motion trajectory of the platform forms a closed loop, acquiring the correct closed-loop keyframe, and globally optimizing the overall image poses and the map. The method is real-time and efficient: the image poses and the surrounding environment are mapped using the matched feature points and structure lines, loop detection is performed, the structure lines are fully exploited to reduce drift error, and loop detection finally yields a better localization result for the mobile robot platform together with the structural characteristics of the surrounding environment. The method has the following defects: 1. the loop detection part builds a DBoW2 dictionary that describes only feature points, not feature lines, so the accuracy of loop detection is low; 2. calibration of the binocular camera and computation of the pixel depth are complex.
Therefore, how to overcome the above deficiencies of the prior art, and truly realize a front end based on point and line feature fusion extracted from a depth camera together with graph-model back-end optimization and a reliable loop detection function, so as to form a complete SLAM system, has become a technical problem to be solved in the field.
Disclosure of Invention
The invention aims to at least solve one of the technical problems in the prior art and provides a method and a system for realizing vSLAM based on point and line feature fusion.
In order to achieve the above object, a first aspect of the present invention provides a vSLAM implementation method based on point and line feature fusion, including:
step S110, obtaining an image frame sequence of a target scene, wherein the image frame sequence comprises a plurality of frame images;
step S120, preprocessing each frame of the image, wherein the preprocessing comprises the steps of carrying out distortion removal on each frame of the image according to a preset calibrated depth camera parameter matrix and a distortion parameter, extracting point features and line features in each frame of the image, carrying out feature matching on the point features and the line features, and extracting the successfully matched point features and line features; the feature matching comprises feature matching of adjacent frame images and local map feature matching;
step S130, initializing an environment map according to the successfully matched point characteristics and line characteristics;
step S140, tracking based on the environment map, and estimating the pose of the current frame image;
step S150, judging whether the current frame image meets the key frame condition, if so, executing step S160, otherwise, repeatedly executing step S110 to step S150;
step S160, executing a local map thread;
step S170, executing a closed loop detection thread;
and step S180, executing a global optimization thread to obtain an optimized environment map, and completing synchronous positioning and map construction.
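The control flow of steps S110 to S180 can be sketched as a minimal runnable loop. All function names and the toy stub logic below are illustrative assumptions, not the patent's implementation; tracking, mapping, closed-loop detection and global optimization are reduced to placeholders so that only the loop structure is shown.

```python
def preprocess(frame):
    # S120 stand-in: assume features are already extracted and matched
    return {"points": frame["points"], "lines": frame["lines"]}

def is_keyframe(feats, frames_since_kf):
    # S150 stand-in for the full keyframe conditions
    return frames_since_kf >= 20 or len(feats["points"]) < 50

def run_vslam(frames):
    env_map, keyframes, since_kf = {"points": [], "lines": []}, [], 0
    for i, frame in enumerate(frames):                 # S110: acquire frames
        feats = preprocess(frame)                      # S120: preprocessing
        if i == 0:
            env_map["points"] = list(feats["points"])  # S130: init map
            continue
        since_kf += 1                                  # S140: tracking omitted
        if is_keyframe(feats, since_kf):               # S150: keyframe decision
            keyframes.append(i)                        # S160-S180: threads omitted
            since_kf = 0
    return keyframes

frames = [{"points": list(range(60)), "lines": []} for _ in range(45)]
print(run_vslam(frames))  # [20, 40]
```

With 45 synthetic frames of 60 tracked points each, only the 20-frame interval rule fires, so keyframes are declared at frames 20 and 40.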
Optionally, the step S120 includes:
detecting and describing the point features in each frame image by adopting ORB (Oriented FAST and Rotated BRIEF), and measuring the similarity of the point features by using the Hamming distance between the feature vectors corresponding to the point features in adjacent frame images;
detecting the line features in each frame image by adopting LSD (Line Segment Detector) and describing the line features in each frame image by adopting LBD (Line Band Descriptor), wherein matched line features must satisfy a predetermined geometric constraint; wherein the predetermined geometric constraint satisfies:
the included angle of the direction vectors of the characteristics of the two lines is less than
Length ratio of two-segment line characteristics
Overlap region length of two segment line feature
The distance of LBD characteristic vectors corresponding to the characteristics of the two lines is smaller than a set threshold value rhoTAnd search forThe minimum in the cable area is considered as successful matching;
wherein,the maximum threshold value of the included angle of the direction vectors corresponding to the characteristics of the two lines;
min(l1.l2)~max(l1.l2) The interval range satisfied by the geometric distance of the characteristic space of the two lines;
tau is a threshold value of the ratio of the minimum distance to the maximum distance in the geometrical distances of the feature spaces of the two lines;
loverlapthe length of the overlap region for two line features;
β is a threshold for the length of the overlap region of the two line features;
ρTthe hamming distance maximum between the feature LBD vectors corresponding to the two line features is taken.
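The geometric constraints above can be sketched as a single predicate. The threshold values (10 degrees, τ = 0.8, β = 0.6, ρ_T = 50) are illustrative assumptions, not the patent's values, and the "minimum in the search area" check is left to the caller:

```python
import math

ANGLE_MAX = math.radians(10.0)   # assumed max angle between direction vectors
TAU = 0.8                        # assumed min length ratio min(l1,l2)/max(l1,l2)
BETA = 0.6                       # assumed min overlap fraction
RHO_T = 50                       # assumed max Hamming distance between LBD descriptors

def lines_match(dir1, dir2, len1, len2, overlap, lbd_dist):
    """Return True if two line segments pass all four matching gates."""
    dot = abs(dir1[0] * dir2[0] + dir1[1] * dir2[1])
    angle = math.acos(min(1.0, dot))                 # angle between unit directions
    ratio = min(len1, len2) / max(len1, len2)        # length-ratio gate
    overlap_ok = overlap / min(len1, len2) >= BETA   # overlap-length gate
    return (angle < ANGLE_MAX and ratio >= TAU and
            overlap_ok and lbd_dist < RHO_T)

# nearly parallel, well-overlapping lines with close descriptors -> match
print(lines_match((1, 0), (0.996, 0.087), 100, 95, 90, 20))  # True
# perpendicular lines fail the angle gate
print(lines_match((1, 0), (0, 1), 100, 95, 90, 20))          # False
```

Each gate is cheap, so candidates can be filtered before the more expensive descriptor-distance minimum is resolved over the search area.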
Optionally, the step S130 includes:
computing the disparity from the matched point-pair coordinates of two adjacent frame images and triangulating it to obtain the corresponding spatial point coordinates;
taking the two end points of the line feature in the previous frame image as references, drawing a parallel line through each end point to intersect the line feature in the current frame image, so as to obtain the depths of the two end points of the line feature in the previous frame image; and
computing the corresponding Plücker coordinates from the depths of the two end points, so as to complete the initialization of the environment map.
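Once the two endpoint depths are known, the 3D endpoints determine the line's Plücker coordinates (d, m), with direction d = B - A and moment m = A x B. This is a standard construction (normalization conventions vary); the endpoints below are hypothetical:

```python
def plucker_from_endpoints(A, B):
    """Plücker coordinates of the 3D line through endpoints A and B."""
    d = [B[i] - A[i] for i in range(3)]      # direction vector d = B - A
    m = [A[1] * B[2] - A[2] * B[1],          # moment m = A x B
         A[2] * B[0] - A[0] * B[2],
         A[0] * B[1] - A[1] * B[0]]
    return d, m

A, B = (1.0, 0.0, 2.0), (1.0, 1.0, 2.0)  # hypothetical endpoints at depth 2 m
d, m = plucker_from_endpoints(A, B)
print(d, m)  # [0.0, 1.0, 0.0] [-2.0, 0.0, 1.0]
# Plücker constraint: d . m == 0 for any valid line
assert sum(di * mi for di, mi in zip(d, m)) == 0
```

The d . m = 0 identity is a useful sanity check after triangulation, since noisy endpoint depths can only move the line, never violate the constraint when computed this way.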
Optionally, the step S140 includes:
respectively obtaining the matching relations between the spatial point and line features and the image point and line features, based on the feature matching of adjacent frame images and the feature matching of the local map;
according to the matching relation of adjacent frame images, the coordinates of the tracked spatial points and spatial lines are taken as determined, a graph model is constructed with the pose of the current frame as the state variable to be optimized, and the pose is solved by minimizing a cost function of the form:

ξ* = argmin_ξ [ ∑_{p_i ∈ χ_c} ρ( e_{p_i}^T Σ_{p_i}^{-1} e_{p_i} ) + ∑_{l_j ∈ χ_c} ρ( e_{l_j}^T Σ_{l_j}^{-1} e_{l_j} ) ]

according to the matching relation of the local map, assuming that the coordinates of the spatial points and spatial lines in the local map are determined, the pose of the current frame is optimized;
wherein χ_c denotes the set of matching pairs between adjacent frames, the first sum on the right-hand side is the point-feature term and the second is the line-feature term, ρ is the Huber cost function, Σ is a covariance matrix, e is the projection error, and p_i and l_j index the point and line features.
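The Huber cost function ρ used above can be computed as follows; it keeps inlier residuals quadratic while growing only linearly for outliers, so mismatched features do not dominate the pose estimate. The delta parameter and the error values are illustrative:

```python
def huber(r2, delta=1.0):
    """Huber cost of a squared (covariance-weighted) error r2."""
    if r2 <= delta * delta:
        return r2                                    # quadratic region: inliers
    return 2 * delta * (r2 ** 0.5) - delta * delta   # linear region: outliers

def total_cost(point_errors, line_errors, delta=1.0):
    # Each entry stands for one e^T Sigma^-1 e term of a matched point/line.
    return (sum(huber(e, delta) for e in point_errors) +
            sum(huber(e, delta) for e in line_errors))

print(huber(0.25))   # 0.25 -> inlier, stays quadratic
print(huber(100.0))  # 19.0 -> outlier, grows only linearly
print(total_cost([0.25, 0.5], [100.0]))  # 19.75
```

Without the robust cost, the single outlier term (100.0) would be two orders of magnitude larger than the inlier terms and would pull the optimized pose away from the true value.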
Optionally, the key frame condition satisfies:
the insertion of the key frame has passed 20 frames or the local map building thread is idle since the last time; and the number of the first and second groups,
the current frame at least tracks 50 characteristic points and 15 space straight lines; and the number of the first and second groups,
the current frame contains less than 75% of the features in the reference key frame.
Optionally, the step S160 includes:
local map management, including the addition, deletion and updating of spatial points and spatial lines, and the removal of redundant keyframes; and
local map optimization, namely extracting a subset of the poses and landmarks from the environment map and optimizing the graph model formed by these poses and landmarks.
Optionally, the step S170 includes:
closed-loop detection: specifically, for each inserted keyframe, converting the keyframe into a bag-of-words vector using a visual dictionary obtained by offline training, and building an online database from the bag-of-words vectors to serve as an inverted index;
closed-loop correction: specifically, calculating the pose transformation matrix between the current frame and the closed-loop frame according to the information of the two frames; and
establishing a pose graph model with the keyframes contained in the minimum spanning tree of the environment map as vertices and the relative pose transformations between keyframes as edges, so as to perform closed-loop correction.
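The online database with inverted index used for closed-loop detection can be sketched as follows: each visual word maps back to the keyframes containing it, and a query votes over shared words. The word IDs and the voting threshold are toy assumptions, not the patent's dictionary:

```python
from collections import defaultdict

inverted_index = defaultdict(set)  # visual word id -> set of keyframe ids

def insert_keyframe(kf_id, bow_words):
    """Register a keyframe's bag-of-words vector in the inverted index."""
    for word in bow_words:
        inverted_index[word].add(kf_id)

def closure_candidates(query_words, min_shared=2):
    """Keyframes sharing at least min_shared visual words with the query."""
    votes = defaultdict(int)
    for word in query_words:
        for kf_id in inverted_index[word]:
            votes[kf_id] += 1          # one vote per shared visual word
    return {kf for kf, v in votes.items() if v >= min_shared}

insert_keyframe(0, [3, 7, 11, 19])
insert_keyframe(1, [2, 5, 13])
print(closure_candidates([3, 11, 19]))  # {0}: keyframe 0 shares three words
```

The inverted index makes lookup proportional to the number of words in the query rather than the number of stored keyframes, which is what keeps closed-loop detection cheap as the map grows.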
optionally, the step S180 includes:
and optimizing all the road signs and poses in the environment map to obtain the optimized environment map, and completing synchronous positioning and map construction.
In a second aspect of the present invention, a vSLAM implementation system based on point and line feature fusion is provided, which includes:
the device comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring an image frame sequence of a target scene, and the image frame sequence comprises a plurality of frame images;
the preprocessing module is used for preprocessing each frame of the image, and the preprocessing comprises the steps of carrying out distortion removal on each frame of the image according to a preset calibrated depth camera parameter matrix and distortion parameters, extracting point features and line features in each frame of the image, carrying out feature matching on the point features and the line features, and extracting the successfully matched point features and line features; the feature matching comprises feature matching of adjacent frame images and local map feature matching;
the initialization module is used for initializing the environment map according to the successfully matched point characteristics and line characteristics;
the pose calculation module is used for tracking based on the environment map and estimating the pose of the current frame image;
the judging module is used for judging whether the current frame image meets the key frame condition;
the local map thread module is used for executing a local map thread;
the closed loop detection thread module is used for executing a closed loop detection thread;
and the global optimization thread module is used for executing the steps of the global optimization thread to obtain the optimized environment map and finish synchronous positioning and map construction.
Optionally, the preprocessing module is configured to:
detecting and describing the point features in each frame image by adopting ORB (Oriented FAST and Rotated BRIEF), and measuring the similarity of the point features by using the Hamming distance between the feature vectors corresponding to the point features in adjacent frame images;
detecting the line features in each frame image by adopting LSD (Line Segment Detector) and describing the line features in each frame image by adopting LBD (Line Band Descriptor), wherein matched line features must satisfy a predetermined geometric constraint; wherein the predetermined geometric constraint satisfies:
the included angle of the direction vectors of the characteristics of the two lines is less than
Length ratio of two-segment line characteristics
Overlap region length of two segment line feature
The distance of LBD characteristic vectors corresponding to the characteristics of the two lines is smaller than a set threshold value rhoTAnd the minimum in the search area is considered as successful matching;
wherein,the maximum threshold value of the included angle of the direction vectors corresponding to the characteristics of the two lines;
min(l1.l2)~max(l1.l2) The interval range satisfied by the geometric distance of the characteristic space of the two lines;
tau is a threshold value of the ratio of the minimum distance to the maximum distance in the geometrical distances of the feature spaces of the two lines;
loverlapthe length of the overlap region for two line features;
β is a threshold for the length of the overlap region of the two line features;
ρTthe hamming distance maximum between the feature LBD vectors corresponding to the two line features is taken.
By extracting point and line features from the image frame sequence and performing feature matching, the method and system for realizing vSLAM based on point and line feature fusion can complete the steps of initializing the environment map, performing tracking and pose estimation against the environment map, and sequentially executing the local map thread, the closed-loop detection thread and the global optimization thread, thereby completing synchronous positioning and map construction. The extraction and matching of line features are improved so as to raise the accuracy of front-end data association; the deficiencies of vSLAM in complex, low-texture scenes are thus effectively overcome, a front end based on point and line feature fusion extracted from a depth camera is truly combined with graph-model back-end optimization and a reliable loop detection function, and a complete SLAM system is formed.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of a vSLAM implementation method based on point and line feature fusion in an embodiment of the present invention;
fig. 2 is a flowchart of a vSLAM implementation method based on point and line feature fusion in an embodiment of the present invention;
FIG. 3 is a flow chart of image feature processing according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a vSLAM implementation system with point and line feature fusion according to an embodiment of the present invention.
Description of the reference numerals
100: a vSLAM implementation system with point and line feature fusion;
110: an acquisition module;
120: a preprocessing module;
130: initializing a module;
140: a pose calculation module;
150: a judgment module;
160: a local map thread module;
170: a closed loop detection thread module;
180: and a global optimization thread module.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present invention, are given by way of illustration and explanation only, not limitation.
As shown in fig. 1, the vSLAM implementation method based on the point and line feature fusion mainly includes a tracking thread, a local map thread, a closed-loop detection thread, and a global optimization thread.
Tracking thread: the input is a sequence of image frames collected by a depth camera, divided into color images and depth images; the images captured at the same instant are called a frame. The image preprocessing part comprises distortion correction of the images, detection and description of feature points and feature line segments, and feature matching. Tracking is divided into two stages, tracking between adjacent frames and tracking against the local map, and the camera pose is obtained by minimizing the reprojection error. Finally, it is judged whether the current frame is a keyframe.
Local map thread: after the tracking thread inserts a keyframe, the points, lines and poses in the local map are optimized. Meanwhile, spatial points and spatial lines are removed from the map according to statistical information, the stably tracked part is retained, and keyframes with redundant information are removed from the map. After a keyframe is inserted, new map points and lines are created in conjunction with other frames within the local map.
Closed-loop detection thread: closed-loop detection is performed through the dictionary tree; when a closed loop is detected, the SE(3) transformation between the closed-loop frame and the current frame is calculated, and through pose-graph optimization the accumulated error is corrected and the poses of map points and lines are adjusted.
Global optimization thread: in the closed-loop thread, first optimizing the camera poses and then adjusting the positions of the spatial points and lines cannot guarantee a globally optimal result, so a global optimization needs to be carried out.
Besides, a scene recognition module is constructed based on the point-line features and used for closed-loop detection. Meanwhile, the system maintains the elements in the environment map, including map points, map lines, keyframes, and the connection relations established among the keyframes, namely the covisibility graph and the minimum-spanning-tree subgraph. If two frames observe common features, the two frames are taken as vertices in the graph and the number of commonly observed features as the edge weight; an undirected graph is thus constructed, finally forming the covisibility graph, and the minimum spanning tree is a subgraph built from the higher-weight edges of the covisibility graph. By querying the covisibility graph, a window of keyframes connected to the current frame can be obtained to form the local map.
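The covisibility structure described above can be sketched directly: keyframes are vertices, edge weights count commonly observed features, and the neighbors of the current frame form the local-map window. The observation sets below are toy data:

```python
from itertools import combinations

observations = {          # keyframe id -> ids of the features it observes
    0: {1, 2, 3, 4},
    1: {3, 4, 5, 6},
    2: {6, 7, 8},
}

def covisibility_edges(obs, min_shared=1):
    """Undirected edges weighted by the number of commonly observed features."""
    edges = {}
    for a, b in combinations(obs, 2):
        w = len(obs[a] & obs[b])      # number of commonly observed features
        if w >= min_shared:
            edges[(a, b)] = w
    return edges

def local_window(edges, frame):
    """Keyframes connected to `frame` in the covisibility graph."""
    return {a if b == frame else b for a, b in edges if frame in (a, b)}

edges = covisibility_edges(observations)
print(edges)                   # {(0, 1): 2, (1, 2): 1}
print(local_window(edges, 1))  # {0, 2}
```

A minimum spanning tree over these weighted edges (keeping the strongest connections) would give the subgraph used for pose-graph correction.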
As will be described in detail below, steps S110 to S150 are the steps of the tracking thread.
A first aspect of the present invention, as shown in fig. 1 and fig. 2, relates to a vSLAM implementation method S100 based on point and line feature fusion, including:
step S110, obtaining an image frame sequence of a target scene, wherein the image frame sequence comprises a plurality of frame images.
Step S120, preprocessing each frame of the image, wherein the preprocessing comprises the steps of carrying out distortion removal on each frame of the image according to a preset calibrated depth camera parameter matrix and a distortion parameter, extracting point features and line features in each frame of the image, carrying out feature matching on the point features and the line features, and extracting the successfully matched point features and line features; wherein the feature matching comprises feature matching of adjacent frame images and local map feature matching.
And S130, initializing an environment map according to the successfully matched point feature and line feature.
And step S140, tracking based on the environment map, and estimating the pose of the current frame image.
Step S150, determining whether the current frame image satisfies the key frame condition, if yes, performing step S160, otherwise, repeating steps S110 to S150.
And step S160, executing the local map thread.
And step S170, executing a closed loop detection thread.
And step S180, executing a global optimization thread to obtain an optimized environment map, and completing synchronous positioning and map construction.
In the vSLAM implementation method S100 based on point and line feature fusion in this embodiment, by extracting point and line features from the image frame sequence and performing feature matching, the steps of initializing the environment map, performing tracking and pose estimation against the environment map, and sequentially executing the local map thread, the closed-loop detection thread and the global optimization thread can be completed, thereby completing synchronous positioning and map construction. The extraction and matching of line features are improved so as to raise the accuracy of front-end data association; the deficiencies of vSLAM in complex, low-texture scenes can thus be effectively overcome, a front end based on point and line feature fusion extracted from a depth camera is truly combined with graph-model back-end optimization and a reliable loop detection function, and a complete SLAM system is formed.
Alternatively, as shown in fig. 1 and 3, the step S120 includes:
detecting and describing the point features in each frame image by adopting ORB (Oriented FAST and Rotated BRIEF), and measuring the similarity of the point features by using the Hamming distance between the feature vectors corresponding to the point features in adjacent frame images; if the distance between the corresponding feature vectors along the epipolar line is smaller than a set threshold and is the minimum in the search area, the match is considered successful; finally, the Hamming distances of all matching pairs are sorted by size, a threshold is selected adaptively, and matching pairs with larger distances are rejected;
detecting the line features in each frame image by adopting LSD (Line Segment Detector) and describing the line features in each frame image by adopting LBD (Line Band Descriptor), wherein matched line features must satisfy a predetermined geometric constraint; wherein the predetermined geometric constraint satisfies:
the included angle between the direction vectors of the two line features is less than a maximum angle threshold;
the length ratio of the two line features, min(l1, l2)/max(l1, l2), is greater than the threshold τ;
the length of the overlap region of the two line features, l_overlap, is greater than the threshold β; and
the distance between the LBD feature vectors corresponding to the two line features is smaller than the set threshold ρ_T and is the minimum in the search area, in which case the match is considered successful;
wherein the maximum angle threshold is the upper bound on the included angle between the direction vectors of the two line features;
min(l1, l2) and max(l1, l2) are the shorter and longer of the two line-feature lengths;
τ is the threshold on the ratio of the minimum to the maximum of the two line-feature lengths;
l_overlap is the length of the overlap region of the two line features;
β is the threshold on the length of the overlap region of the two line features;
ρ_T is the maximum Hamming distance between the LBD feature vectors corresponding to the two line features.
The ORB descriptor and the LBD descriptor are both 256-bit binary descriptors with the same storage structure, which is convenient for operations such as building an offline dictionary that integrates point and line features and querying the image database.
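Because ORB and LBD descriptors share the 256-bit binary layout, one Hamming-distance matcher serves both. A minimal sketch of nearest-neighbor matching with the threshold-plus-minimum rule described above, using short toy integers in place of real 256-bit descriptors:

```python
def hamming(a, b):
    """Hamming distance of two binary descriptors stored as integers."""
    return bin(a ^ b).count("1")  # popcount of the XOR

def match(desc_prev, desc_curr, max_dist=64):
    """For each previous descriptor, keep the nearest current descriptor
    if its distance is below max_dist (threshold + minimum-in-search-area)."""
    matches = []
    for i, d1 in enumerate(desc_prev):
        j, dist = min(((j, hamming(d1, d2)) for j, d2 in enumerate(desc_curr)),
                      key=lambda t: t[1])
        if dist < max_dist:
            matches.append((i, j, dist))
    return matches

prev = [0b1111000011110000, 0b0000111100001111]  # toy "descriptors"
curr = [0b1111000011110001, 0b1010101010101010]
print(match(prev, curr))  # [(0, 0, 1), (1, 1, 8)]
```

The adaptive rejection step mentioned above would then sort the returned distances and drop the tail of largest-distance pairs; max_dist here is an assumed value, not the patent's threshold.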
Optionally, the step S130 includes:
for point characteristics, calculating the parallax and triangularization of the point coordinates after matching of two adjacent frames of images so as to calculate corresponding space point coordinates;
for line features, because the endpoints carry large uncertainty, the endpoints of matched line segments may not lie on the same horizontal line, so the coordinates of the two endpoints cannot be recovered directly by triangulation. Instead, taking the two endpoints of the line feature in the previous frame image as references, a parallel line is drawn through each endpoint to intersect the line feature in the current frame image, thereby obtaining the depths of the two endpoints of the line feature in the previous frame image;
and computing the corresponding Plücker coordinates from the depths of the two endpoints to complete the initialization of the environment map. Although the endpoints carry large noise, they play an important role in inter-frame matching and interface display. During line-feature initialization, if a line segment is nearly parallel to the epipolar line, the disparity computation incurs a large error, so such segments are not initialized in the method.
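Once the two endpoint depths are known, the spatial line's Plücker coordinates follow directly from the two 3D endpoints. A minimal sketch (the function names are illustrative): the direction vector is the endpoint difference, and the moment vector is the cross product of the endpoints, which by construction is orthogonal to the direction.

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def plucker_from_endpoints(p1, p2):
    """Plücker coordinates (n, v) of the line through 3D points p1, p2:
    v is the direction, n = p1 x p2 is the moment (normal of the plane
    through the line and the origin)."""
    v = tuple(b - a for a, b in zip(p1, p2))
    n = cross(p1, p2)
    return n, v
```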
Optionally, the step S140 includes:
obtaining the matching relations between spatial points/lines and image-plane points/lines based on the feature matching of adjacent frame images and the feature matching of the local map, respectively;
according to the matching relation of adjacent frame images, treating the coordinates of the tracked spatial points and spatial lines as fixed, constructing a graph model with the pose of the current frame as the state variable to be optimized, and solving the pose by minimizing the following cost function:

min Σ_{p_i ∈ x_c} ρ(e_{p_i}^T Σ_{p_i}^{-1} e_{p_i}) + Σ_{l_j ∈ x_c} ρ(e_{l_j}^T Σ_{l_j}^{-1} e_{l_j})
and according to the matching relation of the local map, assuming the coordinates of the spatial points and spatial lines in the local map are fixed, optimizing the pose of the current frame. The pose solved between adjacent frames can serve as the initial value of this optimization; a good initial value reduces the number of optimization iterations. During the optimization, the reprojection errors of points and lines are recomputed, and mismatched pairs are rejected by a chi-square test.
wherein x_c denotes the set of matching pairs between adjacent frames; on the right-hand side of the cost function, the first half is the point-feature term and the second half is the line-feature term; ρ is the Huber cost function, Σ is the covariance matrix, e is the reprojection error, and p_i, l_j index the point and line features.
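The Huber-robustified cost can be sketched for scalar weighted errors. This is a minimal illustration of the robust cost shape only (the function names and delta value are assumptions), not the patent's optimizer: the Huber function is quadratic near zero and linear for large errors, limiting the influence of outliers.

```python
def huber(s, delta=1.0):
    """Huber cost of a squared error s (s plays the role of e^T Sigma^-1 e):
    quadratic below delta^2, linear above."""
    return s if s <= delta ** 2 else 2 * delta * s ** 0.5 - delta ** 2

def total_cost(point_sq_errors, line_sq_errors, delta=1.0):
    """Sum of robustified point-feature and line-feature terms."""
    return (sum(huber(s, delta) for s in point_sq_errors) +
            sum(huber(s, delta) for s in line_sq_errors))
```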
Optionally, the key frame condition satisfies:
at least 20 frames have passed since the last key-frame insertion, or the local map building thread is idle; and
the current frame tracks at least 50 feature points and 15 spatial straight lines; and
the current frame contains less than 75% of the features in the reference key frame.
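The three conditions above combine into a single decision, sketched below with the thresholds from the text (the function and argument names are illustrative):

```python
def is_keyframe(frames_since_last, mapping_idle,
                tracked_points, tracked_lines, ratio_to_ref):
    """Key-frame test: enough frames elapsed (or mapping idle), enough
    tracked features, and sufficiently different from the reference."""
    return ((frames_since_last >= 20 or mapping_idle)
            and tracked_points >= 50
            and tracked_lines >= 15
            and ratio_to_ref < 0.75)
```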
Optionally, the step S160 includes:
local map management, including addition and deletion of spatial points and spatial lines, updating, and key-frame culling.
Spatial point addition: new spatial points are recovered by triangulation from matching pairs between unmatched feature points in the current frame and adjacent key frames, and are added to the environment map only if they satisfy conditions such as parallax, reprojection error, and epipolar constraints.
Spatial point/line culling: mismatches may cause false triangulation, and some added landmarks are observed by only a few key frames and never in subsequent frames. Such landmarks increase the dimensionality of the system, and mismatches increase its error. Newly added landmarks are therefore strictly screened over consecutive observations to determine whether they are of high quality: a stable landmark must be observed by at least 3 key frames.
Key-frame culling: to keep the graph model compact, key frames with redundant information need to be detected. If most of the features tracked by one key frame are also tracked by other key frames, that key frame is considered redundant and is culled.
Maintenance of spatial line endpoints: in all optimizations the spatial straight line is represented as an infinitely extended line, so its endpoints have no influence on the final optimization result. The projected endpoints of the spatial line, however, limit the matching search range, and the endpoints are also important for visualizing the environment map, so the system maintains the two endpoints of each spatial straight line.
Local map optimization: a subset of poses and landmarks is extracted from the environment map, and the graph model they form is optimized. The nearest n key frames and their associated landmarks are commonly taken as the state variables to be optimized, but such a fixed window is inflexible and cannot judge the relation between the selected key frames and the current frame. The covisibility graph records how many landmarks each frame in the map observes in common with the current frame. The local map is therefore taken as the current key frame f_i, the key frames f_c connected to f_i in the covisibility graph, and the landmarks v observed by these key frames. Key frames that observe a landmark v but belong to neither f_i nor f_c are included as fixed (non-optimized) nodes, which stabilizes the optimization result. Assuming the poses and landmarks outside the local map are accurate, the variables inside the local map are optimized by minimizing the cost function.
Optionally, the step S170 includes:
Closed-loop detection: each inserted key frame is converted into a bag-of-words vector using a visual dictionary obtained by offline training, and an online database is built from these vectors as an inverted index. Through the inverted index, all key frames containing a given visual word can be retrieved quickly. A similarity score between the current frame and a key frame in the environment map is computed when the two share common vocabulary. With line features added, similarity scores for points and lines are computed separately and summed with suitable weights; in indoor scenes rich in line features, the line weight is larger.
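The inverted index described above can be sketched as a map from visual word to the key frames containing it; the class and method names are illustrative assumptions, and the shared-word count stands in for the proper bag-of-words similarity score.

```python
from collections import defaultdict

class InvertedIndex:
    """Visual word -> set of key frames whose bag-of-words vector contains it."""

    def __init__(self):
        self.word2frames = defaultdict(set)

    def add(self, frame_id, words):
        """Register a key frame under each of its visual words."""
        for w in words:
            self.word2frames[w].add(frame_id)

    def candidates(self, query_words):
        """Key frames sharing at least one visual word with the query;
        the number of shared words serves as a crude similarity score."""
        scores = defaultdict(int)
        for w in query_words:
            for f in self.word2frames.get(w, ()):
                scores[f] += 1
        return dict(scores)
```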
Closed-loop correction: the pose transformation matrix between the current frame and the closed-loop frame is computed from the information of the two frames. Since the invention uses a depth camera, the scale of the constructed map is determined, so SE(3) between the two frames is computed. First, feature matching is performed between the current frame and the closed-loop frame; using the constructed visual dictionary, line features are assigned to a certain level of the dictionary tree, and brute-force matching is performed only among line features belonging to the same cluster center, which accelerates line-feature matching. After the point and line matching pairs are obtained, the pose is solved in a 3D-2D manner, and combining this with RANSAC effectively removes the influence of erroneous data. A minimum of 3 matching pairs, i.e. 3 point pairs or 3 line pairs, is required for the solution. Point-feature matching pairs are solved by EPnP; line-feature matching pairs are solved through the trifocal tensor relation between the images of the two frames. When point features are abundant, the invention preferentially uses point features to compute the pose, computes the errors of all feature matching pairs, and treats a pair as an inlier if its error is below a certain threshold. If the solved pose has enough inliers, nonlinear optimization is performed on all inliers.
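The RANSAC loop described above can be sketched generically: a minimal set of 3 matches is sampled, a candidate pose is solved, and matches with reprojection error below a threshold are counted as inliers. The `solve` and `error` callables are hypothetical stand-ins for the EPnP / trifocal-tensor solvers; the loop structure is what the text describes.

```python
import random

def ransac_best(matches, solve, error, iters=100, thresh=2.0, min_set=3):
    """Return the candidate pose with the most inliers and its inlier set."""
    best_pose, best_inliers = None, []
    for _ in range(iters):
        sample = random.sample(matches, min_set)   # minimal set (3 pairs)
        pose = solve(sample)                       # e.g. EPnP on 3 point pairs
        inliers = [m for m in matches if error(pose, m) < thresh]
        if len(inliers) > len(best_inliers):
            best_pose, best_inliers = pose, inliers
    return best_pose, best_inliers
```

In the full system the surviving inliers would then be refined by nonlinear optimization, as the text states.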
A pose graph model is then built with the key frames contained in the minimum spanning tree of the environment map as vertices and the relative pose transformations between key frames as edges, and closed-loop correction is carried out on it. This approach enables fast closed-loop correction, with the error evenly distributed across all key frames. Pose graph optimization is essentially a least-squares problem: the optimization variables are the poses of the vertices, and the edges come from pose observation constraints.
Optionally, the step S180 includes:
and optimizing all the road signs and poses in the environment map to obtain the optimized environment map, and completing synchronous positioning and map construction.
In a second aspect of the present invention, as shown in fig. 4, there is provided a vSLAM implementation system 100 based on point-line feature fusion, including:
an obtaining module 110, configured to obtain an image frame sequence of a target scene, where the image frame sequence includes multiple frames of images;
a preprocessing module 120, configured to preprocess the image of each frame, where the preprocessing includes performing distortion removal on the image of each frame according to a pre-calibrated depth camera parameter matrix and distortion parameters, extracting point features and line features in the image of each frame, performing feature matching on the point features and the line features, and extracting successfully matched point features and line features; the feature matching comprises feature matching of adjacent frame images and local map feature matching;
an initialization module 130, configured to initialize an environment map according to the successfully matched point feature and line feature;
a pose calculation module 140, configured to track based on the environment map and estimate a pose of the current frame image;
the judging module 150 is configured to judge whether the current frame image meets the key frame condition;
a local map thread module 160 for executing local map threads;
a closed loop detection thread module 170 for executing a closed loop detection thread;
and the global optimization thread module 180 is configured to execute the steps of the global optimization thread to obtain an optimized environment map, and complete synchronous positioning and map construction.
In the vSLAM implementation system 100 based on point-line feature fusion of this embodiment, point and line features are extracted and matched in the image frame sequence, the environment map is initialized, tracking and pose estimation are performed on the environment map, and the local map thread, closed-loop detection thread, and global optimization thread are executed in turn, completing simultaneous localization and mapping. The system improves the extraction and matching of line features to raise the accuracy of data association in the front end, effectively overcoming the weaknesses of vSLAM in complex and low-texture scenes, and realizes a complete SLAM system with a front end based on depth-camera extraction of fused point-line features, graph-model back-end optimization, and reliable loop detection.
Optionally, the preprocessing module 120 is configured to:
detecting and describing point features in each frame image with ORB (Oriented FAST and Rotated BRIEF), and measuring the similarity of point features by the Hamming distance between the feature vectors corresponding to the point features in adjacent frame images;
detecting line features in each frame image with the LSD (Line Segment Detector) and describing them with the LBD (Line Band Descriptor), wherein matched line features in each frame image must satisfy a predetermined geometric constraint; wherein the predetermined geometric constraint is:
the included angle between the direction vectors of the two line features is smaller than Φ;
the length ratio min(l1, l2)/max(l1, l2) of the two line features exceeds τ;
the overlap region length l_overlap of the two line features exceeds β;
the distance between the LBD feature vectors corresponding to the two line features is smaller than a set threshold ρ_T and is the minimum within the search area, in which case the match is considered successful;
wherein,included angle of direction vector corresponding to two-segment line characteristicA maximum threshold;
min(l1.l2)~max(l1.l2) The interval range satisfied by the geometric distance of the characteristic space of the two lines;
tau is a threshold value of the ratio of the minimum distance to the maximum distance in the geometrical distances of the feature spaces of the two lines;
loverlapthe length of the overlap region for two line features;
β is a threshold for the length of the overlap region of the two line features;
ρTthe hamming distance maximum between the feature LBD vectors corresponding to the two line features is taken.
In addition, the vSLAM implementation system 100 based on point-line feature fusion of the present invention is further configured to execute the remaining steps of the vSLAM implementation method based on point-line feature fusion described above; reference may be made to the relevant description above, which is not repeated here.
It will be understood that the above embodiments are merely exemplary embodiments taken to illustrate the principles of the present invention, which is not limited thereto. It will be apparent to those skilled in the art that various modifications and improvements can be made without departing from the spirit and substance of the invention, and these modifications and improvements are also considered to be within the scope of the invention.

Claims (10)

1. A vSLAM realization method based on point and line feature fusion is characterized by comprising the following steps:
step S110, obtaining an image frame sequence of a target scene, wherein the image frame sequence comprises a plurality of frame images;
step S120, preprocessing each frame of the image, wherein the preprocessing comprises the steps of carrying out distortion removal on each frame of the image according to a preset calibrated depth camera parameter matrix and a distortion parameter, extracting point features and line features in each frame of the image, carrying out feature matching on the point features and the line features, and extracting the successfully matched point features and line features; the feature matching comprises feature matching of adjacent frame images and local map feature matching;
step S130, initializing an environment map according to the successfully matched point characteristics and line characteristics;
step S140, tracking based on the environment map, and estimating the pose of the current frame image;
step S150, judging whether the current frame image meets the key frame condition, if so, executing step S160, otherwise, repeatedly executing step S110 to step S150;
step S160, executing a local map thread;
step S170, executing a closed loop detection thread;
and step S180, executing a global optimization thread to obtain an optimized environment map, and completing synchronous positioning and map construction.
2. The vSLAM implementation method of claim 1, wherein the step S120 comprises:
detecting and describing point features in each frame image with ORB (Oriented FAST and Rotated BRIEF), and measuring the similarity of point features by the Hamming distance between the feature vectors corresponding to the point features in adjacent frame images;
detecting line features in each frame image with the LSD (Line Segment Detector) and describing them with the LBD (Line Band Descriptor), wherein matched line features in each frame image must satisfy a predetermined geometric constraint; wherein the predetermined geometric constraint is:
the included angle between the direction vectors of the two line features is smaller than Φ;
the length ratio min(l1, l2)/max(l1, l2) of the two line features exceeds τ;
the overlap region length l_overlap of the two line features exceeds β;
the distance between the LBD feature vectors corresponding to the two line features is smaller than a set threshold ρ_T and is the minimum within the search area, in which case the match is considered successful;
wherein Φ is the maximum threshold of the included angle between the direction vectors of the two line features;
min(l1, l2) and max(l1, l2) are the minimum and maximum of the lengths of the two line features;
τ is the threshold of the ratio of the minimum length to the maximum length of the two line features;
l_overlap is the length of the overlap region of the two line features;
β is the threshold of the overlap region length of the two line features;
ρ_T is the maximum Hamming distance between the LBD feature vectors corresponding to the two line features.
3. The vSLAM implementation method of claim 1, wherein the step S130 comprises:
calculating the disparity from the matched point-pair coordinates of two adjacent frame images and triangulating it to recover the corresponding spatial point coordinates;
respectively taking two end points of the line feature in the previous frame image as references, and making a parallel line through the end points of the line feature to intersect with the line feature in the current frame image so as to obtain the depths of the two end points of the line feature in the previous frame image;
and calculating the corresponding Plücker coordinates from the depths of the two endpoints to complete the initialization of the environment map.
4. The vSLAM implementation method of claim 1, wherein the step S140 comprises:
obtaining the matching relations between spatial points/lines and image-plane points/lines based on the feature matching of adjacent frame images and the feature matching of the local map, respectively;
according to the matching relation of adjacent frame images, treating the coordinates of the tracked spatial points and spatial lines as fixed, constructing a graph model with the pose of the current frame as the state variable to be optimized, and solving the pose by minimizing the following cost function:

min Σ_{p_i ∈ x_c} ρ(e_{p_i}^T Σ_{p_i}^{-1} e_{p_i}) + Σ_{l_j ∈ x_c} ρ(e_{l_j}^T Σ_{l_j}^{-1} e_{l_j})
according to the matching relation of the local map, assuming that the coordinates of the space points and the space lines in the local map are determined, and optimizing the pose of the current frame;
wherein x_c denotes the set of matching pairs between adjacent frames; on the right-hand side of the cost function, the first half is the point-feature term and the second half is the line-feature term; ρ is the Huber cost function, Σ is the covariance matrix, e is the reprojection error, and p_i, l_j index the point and line features.
5. The vSLAM implementation method of claim 1, wherein the key frame condition satisfies:
at least 20 frames have passed since the last key-frame insertion, or the local map building thread is idle; and
the current frame tracks at least 50 feature points and 15 spatial straight lines; and
the current frame contains less than 75% of the features in the reference key frame.
6. The vSLAM implementation method of any of claims 1 to 5, wherein the step S160 comprises:
local map management, including addition and deletion of spatial points and spatial lines, updating, and key-frame culling;
and (4) local map optimization, namely extracting a part of poses and landmarks from the environment map, and optimizing a graph model formed by the poses and the landmarks.
7. The vSLAM implementation method of any of claims 1 to 5, wherein the step S170 comprises:
closed-loop detection, specifically, converting each inserted key frame into a bag-of-words vector using a visual dictionary obtained by offline training, and building an online database from these vectors as an inverted index;
closed-loop correction, specifically, calculating a pose transformation matrix between a current frame and a closed-loop frame according to the information of the two frames; and the number of the first and second groups,
and building a pose graph model with the key frames contained in the minimum spanning tree of the environment map as vertices and the relative pose transformations between key frames as edges, so as to carry out closed-loop correction.
8. The vSLAM implementation method of any of claims 1 to 5, wherein the step S180 comprises:
and optimizing all the road signs and poses in the environment map to obtain the optimized environment map, and completing synchronous positioning and map construction.
9. A vSLAM implementation system based on point and line feature fusion is characterized by comprising:
the device comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring an image frame sequence of a target scene, and the image frame sequence comprises a plurality of frame images;
the preprocessing module is used for preprocessing each frame of the image, and the preprocessing comprises the steps of carrying out distortion removal on each frame of the image according to a preset calibrated depth camera parameter matrix and distortion parameters, extracting point features and line features in each frame of the image, carrying out feature matching on the point features and the line features, and extracting the successfully matched point features and line features; the feature matching comprises feature matching of adjacent frame images and local map feature matching;
the initialization module is used for initializing the environment map according to the successfully matched point characteristics and line characteristics;
the pose calculation module is used for tracking based on the environment map and estimating the pose of the current frame image;
the judging module is used for judging whether the current frame image meets the key frame condition;
the local map thread module is used for executing a local map thread;
the closed loop detection thread module is used for executing a closed loop detection thread;
and the global optimization thread module is used for executing the steps of the global optimization thread to obtain the optimized environment map and finish synchronous positioning and map construction.
10. The vSLAM implementation system of claim 9, wherein the preprocessing module is configured to:
detecting and describing point features in each frame image with ORB (Oriented FAST and Rotated BRIEF), and measuring the similarity of point features by the Hamming distance between the feature vectors corresponding to the point features in adjacent frame images;
detecting line features in each frame image with the LSD (Line Segment Detector) and describing them with the LBD (Line Band Descriptor), wherein matched line features in each frame image must satisfy a predetermined geometric constraint; wherein the predetermined geometric constraint is:
the included angle between the direction vectors of the two line features is smaller than Φ;
the length ratio min(l1, l2)/max(l1, l2) of the two line features exceeds τ;
the overlap region length l_overlap of the two line features exceeds β;
the distance between the LBD feature vectors corresponding to the two line features is smaller than a set threshold ρ_T and is the minimum within the search area, in which case the match is considered successful;
wherein Φ is the maximum threshold of the included angle between the direction vectors of the two line features;
min(l1, l2) and max(l1, l2) are the minimum and maximum of the lengths of the two line features;
τ is the threshold of the ratio of the minimum length to the maximum length of the two line features;
l_overlap is the length of the overlap region of the two line features;
β is the threshold of the overlap region length of the two line features;
ρ_T is the maximum Hamming distance between the LBD feature vectors corresponding to the two line features.
CN201810449541.0A 2018-05-11 2018-05-11 VSLAM realization method and systems based on point, line Fusion Features Pending CN108682027A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810449541.0A CN108682027A (en) 2018-05-11 2018-05-11 VSLAM realization method and systems based on point, line Fusion Features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810449541.0A CN108682027A (en) 2018-05-11 2018-05-11 VSLAM realization method and systems based on point, line Fusion Features

Publications (1)

Publication Number Publication Date
CN108682027A true CN108682027A (en) 2018-10-19

Family

ID=63805964

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810449541.0A Pending CN108682027A (en) 2018-05-11 2018-05-11 VSLAM realization method and systems based on point, line Fusion Features

Country Status (1)

Country Link
CN (1) CN108682027A (en)

Cited By (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109341694A (en) * 2018-11-12 2019-02-15 哈尔滨理工大学 A kind of autonomous positioning air navigation aid of mobile sniffing robot
CN109509230A (en) * 2018-11-13 2019-03-22 武汉大学 A kind of SLAM method applied to more camera lens combined type panorama cameras
CN109522832A (en) * 2018-11-06 2019-03-26 浙江工业大学 It is a kind of based on order cloud sheet section matching constraint and track drift optimization winding detection method
CN109540148A (en) * 2018-12-04 2019-03-29 广州小鹏汽车科技有限公司 Localization method and system based on SLAM map
CN109579840A (en) * 2018-10-25 2019-04-05 中国科学院上海微系统与信息技术研究所 A kind of close coupling binocular vision inertia SLAM method of dotted line Fusion Features
CN109682385A (en) * 2018-11-05 2019-04-26 天津大学 A method of instant positioning and map structuring based on ORB feature
CN109712170A (en) * 2018-12-27 2019-05-03 广东省智能制造研究所 Environmental objects method for tracing, device, computer equipment and storage medium
CN109978919A (en) * 2019-03-22 2019-07-05 广州小鹏汽车科技有限公司 A kind of vehicle positioning method and system based on monocular camera
CN110039536A (en) * 2019-03-12 2019-07-23 广东工业大学 The auto-navigation robot system and image matching method of indoor map construction and positioning
CN110132278A (en) * 2019-05-14 2019-08-16 驭势科技(北京)有限公司 A kind of instant method and device for positioning and building figure
CN110288650A (en) * 2019-05-27 2019-09-27 盎锐(上海)信息科技有限公司 Data processing method and end of scan for VSLAM
CN110349207A (en) * 2019-07-10 2019-10-18 国网四川省电力公司电力科学研究院 A kind of vision positioning method under complex environment
CN110490085A (en) * 2019-07-24 2019-11-22 西北工业大学 The quick pose algorithm for estimating of dotted line characteristic visual SLAM system
CN110570474A (en) * 2019-09-16 2019-12-13 北京华捷艾米科技有限公司 Pose estimation method and system of depth camera
CN110570473A (en) * 2019-09-12 2019-12-13 河北工业大学 weight self-adaptive posture estimation method based on point-line fusion
CN110647609A (en) * 2019-09-17 2020-01-03 上海图趣信息科技有限公司 Visual map positioning method and system
CN110782494A (en) * 2019-10-16 2020-02-11 北京工业大学 Visual SLAM method based on point-line fusion
CN110852356A (en) * 2019-10-24 2020-02-28 华南农业大学 Method for extracting characteristic points of V-SLAM dynamic threshold image of mobile robot
CN110866497A (en) * 2019-11-14 2020-03-06 合肥工业大学 Robot positioning and image building method and device based on dotted line feature fusion
CN111060113A (en) * 2019-12-31 2020-04-24 歌尔股份有限公司 Map updating method and device
CN111091621A (en) * 2019-12-11 2020-05-01 东南数字经济发展研究院 Binocular vision synchronous positioning and composition method, device, equipment and storage medium
CN111311742A (en) * 2020-03-27 2020-06-19 北京百度网讯科技有限公司 Three-dimensional reconstruction method, three-dimensional reconstruction device and electronic equipment
CN111368015A (en) * 2020-02-28 2020-07-03 北京百度网讯科技有限公司 Method and device for compressing map
CN111390975A (en) * 2020-04-27 2020-07-10 浙江库科自动化科技有限公司 Inspection intelligent robot with air pipe removing function and inspection method thereof
CN111435244A (en) * 2018-12-26 2020-07-21 沈阳新松机器人自动化股份有限公司 Loop closing method and device and robot
CN111462210A (en) * 2020-03-31 2020-07-28 华南理工大学 Monocular line feature map construction method based on epipolar constraint
CN111489393A (en) * 2019-01-28 2020-08-04 速感科技(北京)有限公司 VS L AM method, controller and mobile device
CN111506687A (en) * 2020-04-09 2020-08-07 北京华捷艾米科技有限公司 Map point data extraction method, device, storage medium and equipment
CN111796600A (en) * 2020-07-22 2020-10-20 中北大学 Object recognition and tracking system based on quadruped robot
CN111815684A (en) * 2020-06-12 2020-10-23 武汉中海庭数据技术有限公司 Space multivariate feature registration optimization method and device based on unified residual error model
CN111899334A (en) * 2020-07-28 2020-11-06 北京科技大学 Visual synchronous positioning and map building method and device based on point-line characteristics
CN112037261A (en) * 2020-09-03 2020-12-04 北京华捷艾米科技有限公司 Method and device for removing dynamic features of image
CN112240768A (en) * 2020-09-10 2021-01-19 西安电子科技大学 Visual inertial navigation fusion SLAM method based on Runge-Kutta4 improved pre-integration
CN112507778A (en) * 2020-10-16 2021-03-16 天津大学 Loop detection method of improved bag-of-words model based on line characteristics
CN112634395A (en) * 2019-09-24 2021-04-09 杭州海康威视数字技术股份有限公司 Map construction method and device based on SLAM
CN112734839A (en) * 2020-12-31 2021-04-30 浙江大学 Monocular vision SLAM initialization method for improving robustness
CN112880687A (en) * 2021-01-21 2021-06-01 深圳市普渡科技有限公司 Indoor positioning method, device, equipment and computer readable storage medium
CN113160130A (en) * 2021-03-09 2021-07-23 北京航空航天大学 Loop detection method and device and computer equipment
CN113298014A (en) * 2021-06-09 2021-08-24 安徽工程大学 Closed loop detection method, storage medium and equipment based on reverse index key frame selection strategy
CN113344980A (en) * 2021-06-29 2021-09-03 北京搜狗科技发展有限公司 Target tracking method and device for target tracking
CN113450412A (en) * 2021-07-15 2021-09-28 北京理工大学 Visual SLAM method based on linear features
CN113465617A (en) * 2021-07-08 2021-10-01 上海汽车集团股份有限公司 Map construction method and device and electronic equipment
CN113524216A (en) * 2021-07-20 2021-10-22 成都朴为科技有限公司 Fruit and vegetable picking robot based on multi-frame fusion and control method thereof
CN113576780A (en) * 2021-08-04 2021-11-02 北京化工大学 Intelligent wheelchair based on semantic vision SLAM
CN113970974A (en) * 2020-07-22 2022-01-25 福建天泉教育科技有限公司 Line track prediction method and terminal
WO2022016320A1 (en) * 2020-07-20 2022-01-27 深圳元戎启行科技有限公司 Map update method and apparatus, computer device, and storage medium
CN115727854A (en) * 2022-11-28 2023-03-03 同济大学 VSLAM positioning method based on BIM structure information
CN115982399A (en) * 2023-03-16 2023-04-18 北京集度科技有限公司 Image searching method, mobile device, electronic device and computer program product
US11629965B2 (en) 2019-01-28 2023-04-18 Qfeeltech (Beijing) Co., Ltd. Methods, apparatus, and systems for localization and mapping
CN116030136A (en) * 2023-03-29 2023-04-28 中国人民解放军国防科技大学 Cross-view visual positioning method and device based on geometric features and computer equipment
US11670047B2 (en) 2019-07-02 2023-06-06 Tata Consultancy Services Limited System and method for integrating objects in monocular slam
WO2023184968A1 (en) * 2022-04-02 2023-10-05 华南理工大学 Structured scene visual slam method based on point line surface features
CN117170501A (en) * 2023-08-24 2023-12-05 北京自动化控制设备研究所 Visual tracking method based on point-line fusion characteristics
CN117649536A (en) * 2024-01-29 2024-03-05 华东交通大学 Visual simultaneous localization and mapping method fusing point-line and structural line features

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170161901A1 (en) * 2015-12-08 2017-06-08 Mitsubishi Electric Research Laboratories, Inc. System and Method for Hybrid Simultaneous Localization and Mapping of 2D and 3D Data Acquired by Sensors from a 3D Scene
CN106909877A (en) * 2016-12-13 2017-06-30 浙江大学 A visual simultaneous localization and mapping method based on integrated point-line features
CN107392964A (en) * 2017-07-07 2017-11-24 武汉大学 An indoor SLAM method combining indoor feature points and structural lines
CN107909612A (en) * 2017-12-01 2018-04-13 驭势科技(北京)有限公司 A method and system for visual simultaneous localization and mapping based on 3D point clouds


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ALBERT PUMAROLA 等: "PL-SLAM: Real-time monocular visual SLAM with points and lines", 《2017 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION》 *
JIANWEN YIN 等: "Mobile Robot Loop Closure Detection Using Endpoint and Line Feature Visual Dictionary", 《2017 2ND INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION ENGINEERING》 *
XINGXING ZUO 等: "Robust visual SLAM with point and line features", 《2017 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS》 *
XIE XIAOJIA: "Binocular Vision Based on Integrated Point-Line Features", 《CHINA MASTER'S THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY》 *

Cited By (83)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109579840A (en) * 2018-10-25 2019-04-05 中国科学院上海微系统与信息技术研究所 A tightly-coupled binocular visual-inertial SLAM method with point-line feature fusion
CN109682385A (en) * 2018-11-05 2019-04-26 天津大学 A simultaneous localization and mapping method based on ORB features
CN109522832A (en) * 2018-11-06 2019-03-26 浙江工业大学 A loop detection method based on point cloud segment matching constraints and trajectory drift optimization
CN109522832B (en) * 2018-11-06 2021-10-26 浙江工业大学 Loop detection method based on point cloud segment matching constraint and track drift optimization
CN109341694A (en) * 2018-11-12 2019-02-15 哈尔滨理工大学 An autonomous localization and navigation method for a mobile detection robot
CN109509230A (en) * 2018-11-13 2019-03-22 武汉大学 A SLAM method for multi-lens combined panoramic cameras
CN109509230B (en) * 2018-11-13 2020-06-23 武汉大学 SLAM method applied to multi-lens combined panoramic camera
CN109540148B (en) * 2018-12-04 2020-10-16 广州小鹏汽车科技有限公司 Positioning method and system based on SLAM map
CN109540148A (en) * 2018-12-04 2019-03-29 广州小鹏汽车科技有限公司 Localization method and system based on SLAM map
CN111435244A (en) * 2018-12-26 2020-07-21 沈阳新松机器人自动化股份有限公司 Loop closing method and device and robot
CN111435244B (en) * 2018-12-26 2023-05-30 沈阳新松机器人自动化股份有限公司 Loop closing method and device and robot
CN109712170B (en) * 2018-12-27 2021-09-07 广东省智能制造研究所 Environmental object tracking method and device based on visual inertial odometer
CN109712170A (en) * 2018-12-27 2019-05-03 广东省智能制造研究所 Environmental object tracking method, apparatus, computer equipment and storage medium
CN111489393A (en) * 2019-01-28 2020-08-04 速感科技(北京)有限公司 VSLAM method, controller and mobile device
US11629965B2 (en) 2019-01-28 2023-04-18 Qfeeltech (Beijing) Co., Ltd. Methods, apparatus, and systems for localization and mapping
CN110039536A (en) * 2019-03-12 2019-07-23 广东工业大学 Autonomous navigation robot system and image matching method for indoor map construction and localization
CN109978919A (en) * 2019-03-22 2019-07-05 广州小鹏汽车科技有限公司 A vehicle localization method and system based on a monocular camera
CN110132278A (en) * 2019-05-14 2019-08-16 驭势科技(北京)有限公司 A simultaneous localization and mapping method and device
CN110288650A (en) * 2019-05-27 2019-09-27 盎锐(上海)信息科技有限公司 Data processing method and scanning terminal for VSLAM
CN110288650B (en) * 2019-05-27 2023-02-10 上海盎维信息技术有限公司 Data processing method and scanning terminal for VSLAM
US11670047B2 (en) 2019-07-02 2023-06-06 Tata Consultancy Services Limited System and method for integrating objects in monocular slam
CN110349207B (en) * 2019-07-10 2022-08-05 国网四川省电力公司电力科学研究院 Visual positioning method in complex environment
CN110349207A (en) * 2019-07-10 2019-10-18 国网四川省电力公司电力科学研究院 A visual localization method in complex environments
CN110490085B (en) * 2019-07-24 2022-03-11 西北工业大学 Quick pose estimation algorithm of dotted line feature vision SLAM system
CN110490085A (en) * 2019-07-24 2019-11-22 西北工业大学 Fast pose estimation algorithm for a point-line feature visual SLAM system
CN110570473A (en) * 2019-09-12 2019-12-13 河北工业大学 weight self-adaptive posture estimation method based on point-line fusion
CN110570474B (en) * 2019-09-16 2022-06-10 北京华捷艾米科技有限公司 Pose estimation method and system of depth camera
CN110570474A (en) * 2019-09-16 2019-12-13 北京华捷艾米科技有限公司 Pose estimation method and system of depth camera
CN110647609B (en) * 2019-09-17 2023-07-18 上海图趣信息科技有限公司 Visual map positioning method and system
CN110647609A (en) * 2019-09-17 2020-01-03 上海图趣信息科技有限公司 Visual map positioning method and system
CN112634395B (en) * 2019-09-24 2023-08-25 杭州海康威视数字技术股份有限公司 Map construction method and device based on SLAM
CN112634395A (en) * 2019-09-24 2021-04-09 杭州海康威视数字技术股份有限公司 Map construction method and device based on SLAM
CN110782494A (en) * 2019-10-16 2020-02-11 北京工业大学 Visual SLAM method based on point-line fusion
CN110852356B (en) * 2019-10-24 2023-05-23 华南农业大学 Method for extracting V-SLAM dynamic threshold image feature points of mobile robot
CN110852356A (en) * 2019-10-24 2020-02-28 华南农业大学 Method for extracting V-SLAM dynamic threshold image feature points of a mobile robot
CN110866497B (en) * 2019-11-14 2023-04-18 合肥工业大学 Robot positioning and mapping method and device based on point-line feature fusion
CN110866497A (en) * 2019-11-14 2020-03-06 合肥工业大学 Robot positioning and mapping method and device based on point-line feature fusion
CN111091621A (en) * 2019-12-11 2020-05-01 东南数字经济发展研究院 Binocular vision synchronous positioning and composition method, device, equipment and storage medium
US12031837B2 (en) 2019-12-31 2024-07-09 Goertek Inc. Method and device for updating map
CN111060113A (en) * 2019-12-31 2020-04-24 歌尔股份有限公司 Map updating method and device
CN111060113B (en) * 2019-12-31 2022-04-08 歌尔股份有限公司 Map updating method and device
CN111368015B (en) * 2020-02-28 2023-04-07 北京百度网讯科技有限公司 Method and device for compressing map
CN111368015A (en) * 2020-02-28 2020-07-03 北京百度网讯科技有限公司 Method and device for compressing map
CN111311742A (en) * 2020-03-27 2020-06-19 北京百度网讯科技有限公司 Three-dimensional reconstruction method, three-dimensional reconstruction device and electronic equipment
CN111462210B (en) * 2020-03-31 2023-06-16 华南理工大学 Monocular line feature map construction method based on epipolar constraint
CN111462210A (en) * 2020-03-31 2020-07-28 华南理工大学 Monocular line feature map construction method based on epipolar constraint
CN111506687A (en) * 2020-04-09 2020-08-07 北京华捷艾米科技有限公司 Map point data extraction method, device, storage medium and equipment
CN111506687B (en) * 2020-04-09 2023-08-08 北京华捷艾米科技有限公司 Map point data extraction method, device, storage medium and equipment
CN111390975A (en) * 2020-04-27 2020-07-10 浙江库科自动化科技有限公司 Inspection intelligent robot with air pipe removing function and inspection method thereof
CN111815684A (en) * 2020-06-12 2020-10-23 武汉中海庭数据技术有限公司 Space multivariate feature registration optimization method and device based on unified residual error model
CN111815684B (en) * 2020-06-12 2022-08-02 武汉中海庭数据技术有限公司 Space multivariate feature registration optimization method and device based on unified residual error model
WO2022016320A1 (en) * 2020-07-20 2022-01-27 深圳元戎启行科技有限公司 Map update method and apparatus, computer device, and storage medium
CN111796600A (en) * 2020-07-22 2020-10-20 中北大学 Object recognition and tracking system based on quadruped robot
CN113970974B (en) * 2020-07-22 2023-04-28 福建天泉教育科技有限公司 Line track prediction method and terminal
CN113970974A (en) * 2020-07-22 2022-01-25 福建天泉教育科技有限公司 Line track prediction method and terminal
CN111899334A (en) * 2020-07-28 2020-11-06 北京科技大学 Visual synchronous positioning and map building method and device based on point-line characteristics
CN111899334B (en) * 2020-07-28 2023-04-18 北京科技大学 Visual synchronous positioning and map building method and device based on point-line characteristics
CN112037261A (en) * 2020-09-03 2020-12-04 北京华捷艾米科技有限公司 Method and device for removing dynamic features of image
CN112240768A (en) * 2020-09-10 2021-01-19 西安电子科技大学 Visual-inertial navigation fusion SLAM method based on fourth-order Runge-Kutta improved pre-integration
CN112507778A (en) * 2020-10-16 2021-03-16 天津大学 Loop detection method of improved bag-of-words model based on line characteristics
CN112734839A (en) * 2020-12-31 2021-04-30 浙江大学 Monocular vision SLAM initialization method for improving robustness
CN112880687B (en) * 2021-01-21 2024-05-17 深圳市普渡科技有限公司 Indoor positioning method, device, equipment and computer readable storage medium
CN112880687A (en) * 2021-01-21 2021-06-01 深圳市普渡科技有限公司 Indoor positioning method, device, equipment and computer readable storage medium
CN113160130A (en) * 2021-03-09 2021-07-23 北京航空航天大学 Loop detection method and device and computer equipment
CN113298014A (en) * 2021-06-09 2021-08-24 安徽工程大学 Closed-loop detection method based on inverted index key frame selection strategy, storage medium and device
US11645846B2 (en) * 2021-06-09 2023-05-09 Anhui Polytechnic University Closed-loop detecting method using inverted index-based key frame selection strategy, storage medium and device
US20220406059A1 (en) * 2021-06-09 2022-12-22 Anhui Polytechnic University Closed-loop detecting method using inverted index-based key frame selection strategy, storage medium and device
CN113344980A (en) * 2021-06-29 2021-09-03 北京搜狗科技发展有限公司 Target tracking method and device for target tracking
CN113465617A (en) * 2021-07-08 2021-10-01 上海汽车集团股份有限公司 Map construction method and device and electronic equipment
CN113465617B (en) * 2021-07-08 2024-03-19 上海汽车集团股份有限公司 Map construction method and device and electronic equipment
CN113450412A (en) * 2021-07-15 2021-09-28 北京理工大学 Visual SLAM method based on linear features
CN113524216B (en) * 2021-07-20 2022-06-28 成都朴为科技有限公司 Fruit and vegetable picking robot based on multi-frame fusion and control method thereof
CN113524216A (en) * 2021-07-20 2021-10-22 成都朴为科技有限公司 Fruit and vegetable picking robot based on multi-frame fusion and control method thereof
CN113576780A (en) * 2021-08-04 2021-11-02 北京化工大学 Intelligent wheelchair based on semantic vision SLAM
WO2023184968A1 (en) * 2022-04-02 2023-10-05 华南理工大学 Structured scene visual slam method based on point line surface features
CN115727854A (en) * 2022-11-28 2023-03-03 同济大学 VSLAM positioning method based on BIM structure information
CN115982399B (en) * 2023-03-16 2023-05-16 北京集度科技有限公司 Image searching method, mobile device, electronic device and computer program product
CN115982399A (en) * 2023-03-16 2023-04-18 北京集度科技有限公司 Image searching method, mobile device, electronic device and computer program product
CN116030136A (en) * 2023-03-29 2023-04-28 中国人民解放军国防科技大学 Cross-view visual positioning method and device based on geometric features and computer equipment
CN117170501A (en) * 2023-08-24 2023-12-05 北京自动化控制设备研究所 Visual tracking method based on point-line fusion characteristics
CN117170501B (en) * 2023-08-24 2024-05-03 北京自动化控制设备研究所 Visual tracking method based on point-line fusion characteristics
CN117649536A (en) * 2024-01-29 2024-03-05 华东交通大学 Visual simultaneous localization and mapping method fusing point-line and structural line features
CN117649536B (en) * 2024-01-29 2024-04-16 华东交通大学 Visual simultaneous localization and mapping method fusing point-line and structural line features

Similar Documents

Publication Publication Date Title
CN108682027A (en) VSLAM implementation method and system based on point and line feature fusion
CN109631855B (en) ORB-SLAM-based high-precision vehicle positioning method
CN109166149B (en) Positioning and three-dimensional wireframe structure reconstruction method and system integrating binocular camera and IMU
CN110335319B (en) Semantic-driven camera positioning and map reconstruction method and system
CN108986037B (en) Monocular vision odometer positioning method and positioning system based on semi-direct method
CN110555901B (en) Method, device, equipment and storage medium for positioning and mapping dynamic and static scenes
CN114862949B (en) Structured scene visual SLAM method based on point line surface features
CN108010081B (en) RGB-D visual odometer method based on Census transformation and local graph optimization
CN110782494A (en) Visual SLAM method based on point-line fusion
CN109974743B (en) Visual odometer based on GMS feature matching and sliding window pose graph optimization
CN108615246B (en) Method for improving robustness of visual odometer system and reducing calculation consumption of algorithm
WO2019057179A1 (en) Visual SLAM method and apparatus based on point and line features
CN111462207A (en) RGB-D simultaneous positioning and map creation method integrating direct method and feature method
CN110717927A (en) Indoor robot motion estimation method based on deep learning and visual inertial fusion
CN110726406A (en) Improved nonlinear optimization monocular inertial navigation SLAM method
CN111707281A (en) SLAM system based on photometric information and ORB features
CN103646391A (en) Real-time camera tracking method for dynamically-changed scene
CN111882602B (en) Visual odometer implementation method based on ORB feature points and GMS matching filter
CN112419497A (en) Monocular vision-based SLAM method combining feature method and direct method
CN110570474B (en) Pose estimation method and system of depth camera
CN113888603A (en) Loop detection and visual SLAM method based on optical flow tracking and feature matching
Kim et al. Edge-based visual odometry with stereo cameras using multiple oriented quadtrees
WO2023130842A1 (en) Camera pose determining method and apparatus
CN116563341A (en) Visual positioning and mapping method for processing dynamic object in complex environment
CN115855018A (en) Improved simultaneous localization and mapping method based on integrated point-line features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181019