CN104374395A - Graph-based vision SLAM (simultaneous localization and mapping) method - Google Patents
Publication number: CN104374395A
Application number: CN201410125956.4A
Authority: CN (China)
Classifications

 G—PHYSICS
 G01—MEASURING; TESTING
 G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
 G01C21/00—Navigation; Navigational instruments not provided for in preceding groups G01C1/00-G01C19/00
 G01C21/26—Navigation; Navigational instruments not provided for in preceding groups G01C1/00-G01C19/00 specially adapted for navigation in a road network
 G01C21/28—Navigation; Navigational instruments not provided for in preceding groups G01C1/00-G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
 G01C21/30—Map- or contour-matching
 G01C21/32—Structuring or formatting of map data
Abstract
Description
Technical field
The present invention relates to a graph-based vision SLAM method.
Background technology
Most vision SLAM algorithms adopt feature tracking: visual features serve as natural landmarks for constructing an environment feature map, and at the same time the robot's current pose is estimated, and localization achieved, by matching against the landmarks in the previously created environment map (the natural-landmark library). Because both map construction and pose estimation are noisy processes, the whole SLAM process carries uncertainty.
The two most popular algorithms at present, EKF-SLAM and PF-SLAM, describe the uncertainty of information probabilistically. Siegwart et al. proposed an EKF-SLAM algorithm based on a single camera that localizes by multi-feature matching to reduce positioning error; its shortcomings are that it must establish motion and observation models, its computational complexity is very high, and it lacks loop-closing ability, so a single data-association error is carried into the whole SLAM state estimate and may cause the entire prediction process to diverge. Murphy, Russell et al. proposed a PF-SLAM algorithm whose complexity grows logarithmically with the number of features, rather than quadratically as in EKF-SLAM, and which does not require linearizing the motion model.
Summary of the invention
The present invention adopts a graph-based SLAM method, also referred to as network-based. Unlike EKF-SLAM and PF-SLAM, it does not need to establish strict motion and observation models; it only needs the spatial constraint relations given by all the observations and by the inter-frame correspondences, and finally constructs one complete robot pose-path map. Nodes in the map represent robot poses; edges represent the constraints between poses. Associations exist between the robot's poses at different moments; an association is in fact a constraint, and the constraints define the structure of the graph of nodes and edges.
The present invention solves the local data-association problem by using the correspondences between consecutive image frames. The environment of the present invention is a natural-feature environment, and the natural features are obtained by extracting visual features from the acquired image sequence. The graph-based SLAM method needs no explicit camera motion model; it only needs to batch-process the sequentially acquired images, and by using the multi-view geometric constraints inherent between the images it can estimate feature locations and camera poses simultaneously. The map-building part uses the spatial constraint relations of the observations and of the inter-frame correspondences to build a camera pose map; the map-optimization part obtains the optimal pose estimate by minimizing the constraint error with a maximum-likelihood method; the present invention obtains the maximum-likelihood map by minimizing the constraint error with stochastic gradient descent.
The technical solution of the present invention is a graph-based vision SLAM method:
Visual features are extracted from each image frame obtained by the camera; each frame is represented by a probabilistic model of natural visual words; the matched feature-point pairs are obtained from the probability vectors of the images; the same feature points are used to obtain the relative pose relation of the robot at the two corresponding moments; the relative pose between the image coordinates at different moments, represented by each frame, is computed by stereo-vision ranging techniques;
Associations exist between the robot's poses at different moments; these are converted into the data association of visual odometry, and the spatial constraint relations between image frames are obtained from the poses between consecutive images in the sequence;
Camera relative poses are regarded as nodes in the map and the spatial constraint relations between image frames are expressed as edges, building a trajectory map based on camera relative pose estimation;
The optimal pose sequence is estimated by adjusting the positions of the nodes in the graph, and the optimal pose map is obtained by minimizing the constraint error with stochastic gradient descent.
Preferably, to obtain the pose set x_{0:n} = {x_0, ..., x_n} of all cameras at the current time, each current pose is estimated relative to the camera local coordinate system at the initial time k = 0;
Since connection relations exist between the transformation matrices T_k (k = 1, ..., n) at different moments, the current pose is x_n = x_{n-1} T_n, where x_0 is the camera pose at the initial time; thus, by computing the transformation T_k between images I_{k-1} and I_k, all the transformations can be chained incrementally to obtain the camera motion trajectory X_{1:n}.
Preferably, the camera relative motion T_k is computed as follows:
First estimate the relative pose and the direction of motion of the two camera frames, then recover the poses of the feature points observed in the two frames, and finally compute the correct scale of these landmarks by matching the landmark poses, thereby estimating the camera motion.
Preferably, decomposing the essential matrix E yields the rotation matrix R and a translation vector that differs from the actual translation vector t by a scale factor; the camera pose at time k in the local coordinate system of the time k-1 camera is finally expressed as x_k = x_{k-1} T_k; after the relative pose of each moment is obtained, the relative transformations of all moments are placed into one structure to represent the associations between consecutive image nodes.
Preferably, supposing that at the j-th moment the present image is successfully detected to form a closed loop with the image of the i-th moment, the relative pose between the two images is expressed as T_{ij} with information matrix Ω_{ij}; this closed-loop data association is added to the data structure of the successive frames, forming a new structure.
Preferably, supposing a constraint exists between node i and node j and at some moment another constraint between the same two nodes is obtained, the two constraints can be fused into one; the standard of fusion is as follows:
Preferably, the image frames are regarded as the vertex set V and the pose associations between image frames as the edge set E; each path connecting two image frames represents a related pose estimate; the camera pose trajectory map G = (V, E) is thus built.
Preferably, stochastic gradient descent is adopted for map optimization, implemented by iteratively selecting a constraint and moving the nodes in the graph; the node update is:

x^{m+1} = x^m + λ H^{-1} J_{ij}^T Ω_{ij} r_{ij}

where x is the set of camera pose variables in the graph; m is the iteration count; J_{ij} is the Jacobian of the constraint, Ω_{ij} the information matrix of the observation, and r_{ij} the constraint error (residual); H^{-1} is a preconditioning matrix, H being the invertible Hessian matrix of the system, which represents the curvature of the constraint-error function and for which an approximation is used.
Preferably, a variable x* is found that makes the cost function F_{ij}(x) minimum; starting from an initial value x_1, a new node sequence is computed by iteration; under certain conditions the sequence converges and makes F_{ij}(x) minimum, thereby solving for x*. The gradient ∇F_{ij}(x) of F_{ij}(x) with respect to x, taken with a negative sign, −∇F_{ij}(x), is the direction in which F_{ij}(x) decreases fastest; it is thus obtained that x^{m+1} = x^m − λ ∇F_{ij}(x^m), where λ is defined as λ_{ij} = 1/(κ_{ij} m), m representing the current iteration count and κ_{ij} a parameter.
Preferably, the iterative steps of the map-optimization process are:
Step 1: choose the start node x_1, given admissible constraint errors a > 0 and b > 0, and set m = 0;
Step 2: compute the gradient-descent direction vector;
Step 3: judge whether the constraint condition is met; if satisfied, go to step 8, otherwise continue;
Step 4: calculate the best learning rate and obtain the new node x_{m+1};
Step 5: all nodes form a pose sequence x = (x_1, x_2, ..., x_k, x_{k+1});
Step 6: judge whether the cost function satisfies its condition; if satisfied, go to step 8, otherwise continue;
Step 7: increase the iteration count, m = m + 1, and go to step 2;
Step 8: output the result and terminate.
The beneficial effects of the invention are as follows: the invention provides a monocular vision SLAM algorithm for natural-feature environments. The natural-feature probability vectors of the images make the matching relations between image visual features easy to obtain, and the spatial geometric relations between images are used to compute the relative pose between two frames. The correspondences between consecutive images yield the data association of visual odometry, from which all the constraints in the image sequence are obtained. Camera relative poses are regarded as nodes in the map and the spatial constraint relations between image frames are expressed as edges, building a trajectory map based on camera relative pose estimation. Finally the map is optimized by the maximum-likelihood method, and the optimized pose estimate is obtained by stochastic gradient descent. Experiments on the proposed method were carried out in a laboratory environment and the running trajectory of the robot is illustrated; the results verify the validity of the algorithm of the present invention.
Brief description of the drawings
Fig. 1 is a schematic illustration of the graph structure of the embodiment;
Fig. 2 is a schematic diagram of the overall framework of the monocular vision SLAM algorithm for natural-feature environments;
Fig. 3 is a schematic diagram illustrating the relative pose of the camera expressed as a rotation matrix R and a translation vector t;
Fig. 4 shows the data association and pose adjustment of loop-closure detection;
Fig. 5 illustrates optimizing the initial node structure to obtain the maximum-likelihood structure;
Fig. 6 is a schematic diagram of the laboratory scene of the embodiment;
Fig. 7 shows the movement trajectory after the robot has run the first two laps;
Fig. 8 is the final pose-estimation map of the robot;
Fig. 9 is a schematic diagram illustrating the data association between consecutive image frames in the laboratory environment;
Fig. 10 is a schematic diagram illustrating the data association of loop-closure detection in the laboratory environment;
Fig. 11 shows the robot's motion in the corridor environment;
Fig. 12 is an enlarged view (a) of the movement trajectory at frame 369 in Fig. 11;
Fig. 13 is an enlarged view (b) of the movement trajectory at frame 670 in Fig. 11;
Fig. 14 is an enlarged view (c) of the movement trajectory at frame 890 in Fig. 11;
Fig. 15 is an enlarged view (d) of the movement trajectory at frame 1050 in Fig. 11;
Fig. 16 is an enlarged view (e) of the movement trajectory at frame 1550 in Fig. 11;
Fig. 17 is an enlarged view (f) of the movement trajectory at frame 2150 in Fig. 11.
Detailed description of embodiments
The preferred embodiments of the present invention are described in detail below in conjunction with the accompanying drawings.
Grisetti proposed dividing the graph-based SLAM method into two parts: construction of the graph and optimization of the graph. The graph is built from the data associations and the constraint relations between graph nodes. Data association comprises local data association and global data association: local data association refers to the matching between consecutive image frames, used to solve the relative pose estimation problem, while global data association is obtained through loop-closure detection. Graph-based SLAM can be understood as a sparse graph of constraints between nodes. The nodes of the graph are the robot positions x_0, x_1, ..., x_t and the n features m_0, m_1, ..., m_{n-1} in the map. The constraints are the relative positions between successive robot poses x_i, x_{i-1}, the relative positions between robot locations, and the features detected from those positions. The constraints in graph-based SLAM are soft constraints: as in Fig. 1, the constraints between nodes of the graph are "soft". By relaxing these constraints, the best estimate of the robot path and the environment map can be obtained. Maximum-likelihood estimation is then used to optimize the generated map.
The present embodiment adopts the graph-based SLAM method and designs a monocular vision SLAM algorithm for natural-feature environments. The whole algorithm is organized around image frames: the input is each frame obtained by the camera, and the output is a global camera pose map; the overall framework of the algorithm is shown in Fig. 2.
Visual features are extracted from each image frame obtained by the camera; each frame is represented by a probabilistic model of natural visual words, and the probability vectors of the images make the matched feature-point pairs easy to obtain; the same feature points are used to obtain the relative pose relation of the robot at the two corresponding moments, and the relative pose between the image coordinates at different moments, represented by each frame, is computed by stereo-vision ranging techniques. Associations exist between the robot's poses at different moments, and an association is in fact a constraint; the present invention converts the poses between consecutive images in the sequence into the data association of visual odometry, thereby obtaining the spatial constraint relations between image frames. Camera relative poses are regarded as nodes in the map and the spatial constraint relations between image frames are expressed as edges, building a trajectory map based on camera relative pose estimation. The optimal pose sequence is estimated by adjusting the positions of the nodes in the graph, and an optimal pose map is obtained by minimizing the constraint error with stochastic gradient descent.
Data association between image frames
Associations exist between the robot's poses at different moments; they govern the transformation relations between poses and are obtained by computing the camera relative poses. In the graph-based vision SLAM algorithm, the data association between images comprises two classes: one is the data association of visual odometry, the other the data association of loop-closure detection.
Data association of visual odometry
Before computing the camera relative poses, the camera must be calibrated to obtain its intrinsic parameters, including focal length, aspect ratio, distortion coefficients, and principal point; to this end the camera is first calibrated offline with Zhang Zhengyou's calibration method. The concrete method and steps are well established and are not detailed here.
The object of camera relative pose computation is to estimate the camera motion incrementally and so obtain the camera trajectory. Suppose the image sequence obtained up to the current time is I_{0:n} = {I_0, ..., I_n} and the robot moves in the plane; the relative pose of the cameras at any two adjacent moments k-1 and k can then be expressed as a transformation matrix T_{k,k-1} ∈ R^{3×3}:

T_{k,k-1} = [ R_{k,k-1}  t_{k,k-1} ; 0  1 ]

where R_{k,k-1} is the rotation matrix, containing one parameter, and t_{k,k-1} ∈ R^{2×1} is the translation vector. The set T_{1:n} = {T_1, ..., T_n} contains the camera motion of all subsequences. To simplify notation, T_k is written below for T_{k,k-1}. Our final object is to obtain the pose set x_{0:n} = {x_0, ..., x_n} of all cameras at the current time; each current pose is estimated relative to the camera local coordinate system at the initial time k = 0. Since connection relations exist between the transformation matrices T_k (k = 1, ..., n) at different moments, the current pose is x_n = x_{n-1} T_n, where x_0 is the camera pose at the initial time; thus, by computing the transformation T_k between images I_{k-1} and I_k, all the transformations can be chained incrementally to obtain the camera motion trajectory X_{1:n}.
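The incremental chaining x_n = x_{n-1} T_n can be sketched in Python. Under the planar-motion assumption each pose and each relative transform is a 3×3 matrix built from one rotation angle and a 2-D translation; the frame-to-frame motions below are hypothetical values, not data from the patent.

```python
import math

def se2(theta, tx, ty):
    """3x3 planar transformation T = [R t; 0 1] with one rotation parameter."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx],
            [s,  c, ty],
            [0,  0,  1]]

def matmul(A, B):
    """3x3 matrix product used to chain relative transforms."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# x_0 is the camera pose at the initial time (identity: world frame = first camera frame).
x = se2(0.0, 0.0, 0.0)
trajectory = [x]

# Hypothetical frame-to-frame transforms T_k recovered by visual odometry:
# 1 m forward, then 1 m forward while turning 90 degrees.
relative_transforms = [se2(0.0, 1.0, 0.0), se2(math.pi / 2, 1.0, 0.0)]

for T_k in relative_transforms:
    x = matmul(x, T_k)      # x_k = x_{k-1} T_k
    trajectory.append(x)

print(round(x[0][2], 6), round(x[1][2], 6))   # prints: 2.0 0.0
```

Chaining in this way accumulates drift, which is exactly why the loop-closure constraints and graph optimization described later are needed.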
The next key task is to compute the camera relative motion T_k. The present invention adopts feature-point matching: several feature points are extracted from the images, and from the image points of these features the pose parameters representing the camera motion are computed and the camera pose is updated in real time. The algorithm first estimates the relative pose and the direction of motion of the two camera frames, then recovers the poses of the feature points observed in the two frames, and finally computes the correct scale of these landmarks by matching the landmark poses, thereby estimating the camera motion. The three-dimensional coordinates of a landmark can only be obtained by three-dimensional reconstruction from features matched at two different moments, and compared with a map based on stereo vision there is a gap in precision. To improve positioning precision, the image coordinates of the features are fully used: from the image coordinates of the features observed by the camera at time k and the image coordinates of the map features at some time j, the relative pose relation of the camera at the two moments can be computed, achieving relative localization.
Let P = [u, v]^T be the homogeneous coordinates of a feature point in the image plane and X = [x, y, z]^T its coordinates in the camera coordinate system; then, according to the camera projection model:

z [u, v, 1]^T = M_1 X

where M_1 is the camera intrinsic matrix, comprising the four intrinsic parameters α_x = f/d_x, α_y = f/d_y, u_0, v_0 given by the calibration above. Let p̂ denote the normalized image coordinates of a feature point. Given the matched feature pairs p_k and p_{k-1} at moments k and k-1, the transformation T_k between the two frames I_k and I_{k-1} can be computed; as in Fig. 3, the transformation is the camera relative pose, comprising a rotation matrix R_k and a translation direction vector t_k, and this geometric relation is called the essential matrix E_k. The essential matrix E_k is composed of the 3-dimensional rotation matrix R_k and the antisymmetric matrix of t_k; it is a matrix with 5 degrees of freedom and rank 2:

E_k = [t_k]_× R_k

where t_k = [t_x, t_y, 1]^T is the translation vector from the camera coordinate system C_{k-1} at moment k-1 to the camera coordinate system C_k at moment k, and [t_k]_× is the antisymmetric matrix of t_k; in fact t_k differs from the true translation vector by a scale factor.
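The relation E_k = [t_k]_× R_k and the epipolar identity it induces can be checked numerically with plain Python; the rotation angle and translation below are arbitrary illustrative choices, not values from the patent.

```python
import math

def rot_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def skew(t):
    """Antisymmetric matrix [t]_x such that [t]_x v = t x v."""
    return [[0, -t[2], t[1]],
            [t[2], 0, -t[0]],
            [-t[1], t[0], 0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Arbitrary relative motion chosen for illustration.
R = rot_z(0.3)
t = [0.2, -0.1, 1.0]           # defined only up to scale
E = matmul(skew(t), R)          # essential matrix E = [t]_x R

# A 3-D point seen in both cameras: X in frame k-1, X' = R X + t in frame k.
X = [1.0, 2.0, 5.0]
Xp = [a + b for a, b in zip(matvec(R, X), t)]

# Epipolar constraint: x'^T E x = 0 (scale-invariant, so normalization is optional).
residual = dot(Xp, matvec(E, X))
print(abs(residual) < 1e-9)     # prints: True
```

Because the identity holds for any scaling of t, the check also illustrates why monocular vision recovers the translation only up to a scale factor.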
Given matched point pairs in the two images, with corresponding normalized image coordinates p̂_{k-1} and p̂_k in the two images respectively, the essential matrix E can be computed from the epipolar constraint p̂_k^T E p̂_{k-1} = 0. The classical algorithm is the eight-point algorithm proposed by Longuet-Higgins: each match represents one epipolar constraint, and from the epipolar constraints an equation AE = 0 is established, where A is formed from the coordinates of the matched points.
AE = 0 is a system of homogeneous linear equations with 9 unknowns, so at least 8 matched correspondences are needed to obtain the essential matrix E up to a scale factor.
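One row of the matrix A per match can be built directly from the expanded epipolar constraint. The sketch below generates synthetic matches from a known motion (illustrative values) and verifies that A·vec(E) vanishes; it shows only the constraint construction, not the SVD solve of the eight-point algorithm.

```python
import math

def rot_z(th):
    c, s = math.cos(th), math.sin(th)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def skew(t):
    return [[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def constraint_row(p, q):
    """One epipolar constraint q^T E p = 0 written as a linear row in vec(E)."""
    u, v = p
    up, vp = q
    return [up * u, up * v, up, vp * u, vp * v, vp, u, v, 1.0]

# Known motion (illustrative values) and the resulting essential matrix.
R, t = rot_z(0.2), [0.1, 0.3, 1.0]
E = matmul(skew(t), R)
e = [E[i][j] for i in range(3) for j in range(3)]   # vec(E), the 9 unknowns

# Eight synthetic 3-D points observed in both views as normalized image coordinates.
points = [[x, y, 4.0 + 0.5 * x] for x, y in
          [(-2, -1), (-1, 1), (0, -2), (0.5, 2), (1, -1), (1.5, 0.5), (2, 1), (2.5, -0.5)]]
A = []
for X in points:
    Xp = [a + b for a, b in zip(matvec(R, X), t)]
    p = (X[0] / X[2], X[1] / X[2])        # normalized coordinates in frame k-1
    q = (Xp[0] / Xp[2], Xp[1] / Xp[2])    # normalized coordinates in frame k
    A.append(constraint_row(p, q))

# Every row of the homogeneous system A e = 0 vanishes for the true E.
residuals = [abs(sum(a * b for a, b in zip(row, e))) for row in A]
print(max(residuals) < 1e-9)   # prints: True
```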
After the essential matrix E is obtained, decomposing E yields the rotation matrix R and a translation vector that differs from the actual translation vector t by a scale factor; there are four possible decompositions of R and t. Finally the camera pose of moment k in the local coordinate system of the moment k-1 camera, i.e. the pose of the robot, is expressed as x_k = x_{k-1} T_k, and the relative pose information matrix Ω_{k-1,k} is solved. After the relative pose of each moment is obtained, the relative transformations of all moments are placed into one structure to represent the associations between consecutive image nodes.
Data association of loop-closure detection
When a closed loop is detected, the current pose and the pose of a much earlier frame produce a global association; this association is called a revisiting constraint and appears as an added edge in the structure of the graph. The computation of the data association between the detected closed-loop images is identical to that of the local constraints between consecutive image frames and is not repeated here. Supposing that at the j-th moment the present image is successfully detected to form a closed loop with the image of the i-th moment, the relative pose between the two images is expressed as T_{ij} with information matrix Ω_{ij}; this closed-loop data association is added to the data structure of the successive frames, forming a new structure.
In addition, to avoid too many similar constraints being merged into this structure, an update standard is designed. Supposing a constraint exists between node i and node j and at some moment another constraint between the same two nodes is obtained, the two constraints can be fused into one; the standard of fusion is as follows:
With this structure the number of constraints is kept within a certain range and does not grow with repeated detections of the same closed loop; this constraint-reduction technique accelerates the convergence of the optimization algorithm.
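The fusion formula itself is not reproduced in this text. As one plausible realization (an assumption, not the patent's own standard), two Gaussian measurements of the same relative displacement can be fused in information form, where information matrices add and the means are information-weighted; the sketch below shows this in 1-D.

```python
def fuse_constraints(z1, omega1, z2, omega2):
    """Fuse two relative-pose measurements between the same pair of nodes.

    Standard information-form (inverse-covariance) fusion of two Gaussian
    estimates; shown in 1-D for clarity. This is an illustrative assumption,
    not the patent's fusion standard.
    """
    omega = omega1 + omega2                    # information adds
    z = (omega1 * z1 + omega2 * z2) / omega    # information-weighted mean
    return z, omega

# Two measurements of the same i-j displacement, the second twice as certain.
z, omega = fuse_constraints(1.0, 1.0, 1.3, 2.0)
print(round(z, 3), omega)   # the fused estimate leans toward the more certain one
```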
Closed-loop constraints are added to the local constraints between adjacent image frames, and the two kinds of constraints together govern the structure of the nodes, as shown in Fig. 4. Based on the constraint relation between the current pose and a particular frame pose brought by global data association, the current frame pose is updated, and the poses of the other frames associated with the current frame are updated accordingly.
After the spatial constraint relations between image frames are obtained, a complete camera pose trajectory map can be created. In the graph-based formalization, the robot pose x_i is regarded as a random variable and the constraints between poses as observations related to the random variables. The image frames are regarded as the vertex set V and the pose associations between image frames as the edge set E; each path connecting two image frames represents a related pose estimate; the camera pose trajectory map G = (V, E) is thus built.
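The trajectory map G = (V, E) can be sketched as a minimal data structure; the field names and layout below are illustrative choices, not prescribed by the patent.

```python
from dataclasses import dataclass, field

@dataclass
class PoseGraph:
    """Trajectory map G = (V, E): vertices are image-frame poses, edges carry
    a relative-pose constraint together with its information weight."""
    vertices: dict = field(default_factory=dict)   # frame id -> pose (theta, x, y)
    edges: list = field(default_factory=list)      # (i, j, relative pose, information)

    def add_vertex(self, frame_id, pose):
        self.vertices[frame_id] = pose

    def add_edge(self, i, j, relative_pose, information):
        self.edges.append((i, j, relative_pose, information))

g = PoseGraph()
g.add_vertex(0, (0.0, 0.0, 0.0))
g.add_vertex(1, (0.0, 1.0, 0.0))
g.add_edge(0, 1, (0.0, 1.0, 0.0), 1.0)   # visual-odometry constraint
g.add_edge(0, 1, (0.0, 1.1, 0.0), 0.5)   # a loop-closure (revisiting) constraint
print(len(g.vertices), len(g.edges))     # prints: 2 2
```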
Map optimization
Because of observation noise and constraint errors, the pose graph created by visual odometry alone is usually not consistent; the positions of the nodes in the graph must be adjusted afterwards so that the nodes satisfy the constraint relations better, finally producing an optimal pose map. In order to generate a globally consistent map, the updated whole is optimized each time a constraint is added.
The map-optimization part does not consider the observation information; it only performs maximum-likelihood estimation on the whole pose sequence. It does not need to know the absolute camera poses; it only uses the relative transformation relations between poses to minimize the constraint error and obtain the optimal estimate of the relative pose transformations, so the essence of the algorithm is a maximum-likelihood estimation problem. Most current research finds the optimal map by the maximum-likelihood method; the present invention also adopts the maximum-likelihood method for map optimization, solving for the optimal pose sequence according to the established constraint relations to obtain the most probable node structure.
The present embodiment adopts stochastic gradient descent for map optimization, implemented by iteratively selecting a constraint and moving the nodes in the graph; the nodes are moved to reduce the error of the selected constraint. Unlike the standard stochastic gradient-descent formulation, the constraints are treated independently rather than optimized as a whole. The node update is:

x^{m+1} = x^m + λ H^{-1} J_{ij}^T Ω_{ij} r_{ij}

where x is the set of camera pose variables in the graph; m is the iteration count; H^{-1} is a preconditioning matrix; J_{ij} is the Jacobian of the constraint, Ω_{ij} the information matrix of the observation, and r_{ij} the constraint error (residual).
The final purpose of the algorithm is to find a variable x* that makes the cost function F_{ij}(x) minimum; starting from an initial value x_1, a new node sequence is computed by iteration; under certain conditions the sequence converges and makes F_{ij}(x) minimum, thereby solving for x*. The gradient ∇F_{ij}(x) of F_{ij}(x) with respect to x, taken with a negative sign, −∇F_{ij}(x), gives the direction in which F_{ij}(x) decreases fastest. It is thus obtained that:

x^{m+1} = x^m − λ ∇F_{ij}(x^m)

H is the Hessian matrix of the system and represents the curvature of the constraint-error function; considering that the computation of H is rather complicated, an approximation of H is generally used: with H invertible, it can be approximated as H^{-1} ≈ diag(H)^{-1}.
The key issue of the algorithm is how to compute λ, the learning rate, which controls the convergence speed of the SGD. If it is chosen too small, the algorithm converges slowly; if chosen too large, the final result diverges. Here the method of Robbins and Monro is adopted, defining λ as λ_{ij} = 1/(κ_{ij} m), where m represents the current iteration count and κ_{ij} is a parameter.
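A toy 1-D illustration of the per-constraint stochastic update with the Robbins-Monro learning rate λ_ij = 1/(κ_ij m); the two constraint values and the choice κ_ij = 1 are hypothetical.

```python
# A single unknown x pulled by two conflicting constraints; one constraint is
# selected per iteration and applied with a decreasing learning rate.
constraints = [(1.0, 1.0), (1.4, 2.0)]    # (measurement z_ij, information omega_ij)
kappa = 1.0                                # schedule parameter (illustrative choice)
x = 0.0

for m in range(1, 201):
    z, omega = constraints[(m - 1) % 2]    # select one constraint per iteration
    r = z - x                              # constraint error (residual)
    lam = 1.0 / (kappa * m)                # Robbins-Monro decreasing learning rate
    x += lam * omega * r                   # x^{m+1} = x^m + lambda * omega * r

# The iterates settle near the information-weighted mean of the two constraints.
weighted_mean = (1.0 * 1.0 + 2.0 * 1.4) / 3.0
print(abs(x - weighted_mean) < 0.01)       # prints: True
```

The decreasing schedule lets early iterations make large corrections while later ones settle near the information-weighted compromise between conflicting constraints.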
The iterative steps of the whole optimization process are:
Step 1: choose the start node x_1, given admissible constraint errors a > 0 and b > 0, and set m = 0;
Step 2: compute the gradient-descent direction vector;
Step 3: judge whether the constraint condition is met; if satisfied, go to step 8, otherwise continue;
Step 4: calculate the best learning rate and obtain the new node x_{m+1};
Step 5: all nodes form a pose sequence x = (x_1, x_2, ..., x_k, x_{k+1})^T;
Step 6: judge whether the cost function satisfies its condition; if satisfied, go to step 8, otherwise continue;
Step 7: increase the iteration count, m = m + 1, and go to step 2;
Step 8: output the result and terminate.
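The eight steps can be sketched as a plain gradient-descent loop; poses are simplified to 1-D positions, a fixed step replaces the computed learning rate, and all constraint values (three odometry steps of 1.0 against a loop closure of 2.7) are illustrative.

```python
constraints = [(0, 1, 1.0, 1.0), (1, 2, 1.0, 1.0), (2, 3, 1.0, 1.0),
               (0, 3, 2.7, 2.0)]        # (i, j, z_ij, omega_ij); last is a loop closure

def cost(x):
    return sum(w * (z - (x[j] - x[i])) ** 2 for i, j, z, w in constraints)

def gradient(x):
    g = [0.0] * len(x)
    for i, j, z, w in constraints:
        r = z - (x[j] - x[i])
        g[j] += -2.0 * w * r            # dF/dx_j
        g[i] += 2.0 * w * r             # dF/dx_i
    g[0] = 0.0                          # anchor the start node x_1
    return g

# Step 1: start node and admissible errors a (gradient) and b (cost decrease).
x = [0.0, 1.0, 2.0, 3.0]                # initial guess from odometry
a, b = 1e-8, 1e-12
m, rate = 0, 0.05
prev = cost(x)
while True:
    g = gradient(x)                                 # step 2: descent direction is -g
    if max(abs(v) for v in g) < a:                  # step 3: gradient small enough
        break
    x = [xi - rate * gi for xi, gi in zip(x, g)]    # step 4: new nodes
    c = cost(x)                                     # step 5: updated pose sequence
    if abs(prev - c) < b:                           # step 6: cost has plateaued
        break
    prev, m = c, m + 1                              # step 7: next iteration
# Step 8: output. The loop closure pulls the chain slightly shorter than 3.0.
print(2.7 < x[3] < 3.0)                             # prints: True
```

The loop closure contradicts the accumulated odometry, so the optimized chain ends slightly short of 3.0, mirroring how the constraint error is distributed over the trajectory.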
The whole algorithm obtains the maximum-likelihood map by iteratively reducing the constraint error under the known constraints, thereby obtaining the maximum-likelihood structure of the nodes and building a globally constrained map, as in Fig. 5.
Experimental results
The monocular vision SLAM algorithm for natural-feature environments proposed in the present embodiment was tested by corresponding experiments in laboratory and corridor environments. The laboratory scene is shown in Fig. 6; the experiments use a p3dx autonomous mobile robot carrying a Logitech QuickCam monocular vision sensor. Besides the vision sensor, the robot is also equipped with laser, sonar, infrared, and other sensors. In the experiments the robot's running trajectory is controlled by wireless-Ethernet remote monitoring, images are collected in real time by the camera mounted on the robot, and the algorithm is verified offline; the image-processing unit is an Intel 4-core i7-2600 3.4 GHz processor with 4 GB of memory. Each experiment analyzes the results with respect to the data association between image frames and the camera motion trajectory, verifying the validity of the algorithm of the present invention in different environments.
The robot first takes one lap along the indoor environment and then continues forward, accurately detecting the closed loop at the position of the 1000th frame, as shown in Fig. 7. After walking two laps in the laboratory environment the robot continues forward; because loop-closure detection produces data associations between global image frames, potential closed-loop candidates are generated at the same time, so after the historical frame matching the current frame is obtained, the frames associated with the current frame are kept in short-term memory; the associations between these frames can be used to adjust the robot's current pose sequence, thereby obtaining the final pose-estimation map, as shown in Fig. 8.
Next, tests were made in the laboratory's corridor environment. Because the corridor has a large range and is a structured environment, the experiment based on the algorithm of the present invention processed more than 2200 images and obtained good results; the whole motion process is shown in Fig. 11, with details shown in Figs. 12 to 17. In the figures, blue arrows represent the robot's relative poses at different moments, the current pose is represented by a red arrow, and the associations between poses are connected by green lines.
In the experiment the robot keeps moving forward after walking some distance from the starting point; because the movement speed is steady, the robot walks essentially in straight lines when not turning. As can be seen from Fig. 11(c) and Fig. 17, the robot's movement trajectory at the four corners is also rather smooth, because the feature points detected in the images obtained by the camera during turns are abundant, which favors accurate computation of the relative poses. Before the closed loop is detected the robot's motion trajectory does not close; the closed loop begins to be detected after about 1000 frames. It is not hard to see from Fig. 11(d) and Fig. 15 that after the closed loop is detected the robot obtains a global data association, the current pose matches the historical pose at the start, and the subsequent movement trajectory again follows the path from the first starting position. Although the trajectory of the first lap is not yet consistent with the real trajectory, the associations between the poses of each moment are accurate, which benefits the later pose adjustment. After more than 1200 frames the trajectory begins to form a closed loop; afterwards, based on the global data associations brought by loop-closure detection, the robot's pose sequence is adjusted accordingly and the map comes closer and closer to the real trajectory. After more than 2000 frames the robot motion trajectory obtained by the algorithm of the present invention is essentially consistent with the structure of the environment. The final pose map of the robot in the corridor environment is shown in Fig. 11(f) and Fig. 17: the whole pose sequence is rather smooth, the data association is accurate, and the constructed pose trajectory map is well optimized.
Because the computational complexity of map optimization depends on the pose length and not the environment size, obtaining the data associations between image sequences is beneficial. The robot performs map optimization based on the data associations and constraint relations between image frames while continuously running forward; map building and map optimization are carried out simultaneously. The constraint relations between image frames comprise two parts, the data association of visual odometry and the data association of loop-closure detection, shown in Fig. 9 and Fig. 10 respectively. The common feature points between consecutive image frames represent the correspondences between images and are used to compute the camera relative poses, thereby recovering the motion process of the camera. When the camera motion speed is stable, a new pose estimate need not be computed when returning to a historical position; pose updates start only when the camera motion is effective, so that the graph can be kept and resolved when the same place is visited again, which benefits long-term navigation in structured environments. After the graph is built, map optimization is carried out by minimizing the constraint error, finally obtaining a globally optimal maximum-likelihood map, as shown in Fig. 8. The experimental results show that accurate data association benefits the completeness and consistency of pose-map construction.
Claims (10)
Priority Applications (1)
Application Number  Priority Date  Filing Date  Title 

CN201410125956.4A CN104374395A (en)  2014-03-31  2014-03-31  Graph-based vision SLAM (simultaneous localization and mapping) method 
Publications (1)
Publication Number  Publication Date 

CN104374395A true CN104374395A (en)  20150225 
Family
ID=52553450
Family Applications (1)
Application Number  Title  Priority Date  Filing Date 

CN201410125956.4A CN104374395A (en)  2014-03-31  2014-03-31  Graph-based vision SLAM (simultaneous localization and mapping) method 
Country Status (1)
Country  Link 

CN (1)  CN104374395A (en) 
Citations (1)
Publication number  Priority date  Publication date  Assignee  Title 

CN102831446A (en) *  2012-08-20  2012-12-19  南京邮电大学  Image appearance based loop closure detecting method in monocular vision SLAM (simultaneous localization and mapping) 

2014
 2014-03-31 CN CN201410125956.4A patent/CN104374395A/en not_active Application Discontinuation
NonPatent Citations (2)
Title 

LIANG Zhiwei et al.: "Simultaneous Localization and Map Building for Mobile Robots Based on Distributed Perception", Robot (《机器人》) * 
CHEN Yanyan: "Monocular Vision SLAM in Environments with Mixed Artificial and Natural Features", China Masters' Theses Full-text Database, Information Science and Technology (《中国优秀硕士论文全文数据库信息科技辑》) * 
Cited By (28)
Publication number  Priority date  Publication date  Assignee  Title 

CN106289181A (en) *  2015-05-22  2017-01-04  北京雷动云合智能技术有限公司  A real-time SLAM method based on visual measurement 
CN106289181B (en) *  2015-05-22  2018-12-18  北京雷动云合智能技术有限公司  A real-time SLAM method based on visual measurement 
CN105631017A (en) *  2015-12-29  2016-06-01  福州华鹰重工机械有限公司  Method and device for achieving off-line coordinate calibration and map building 
CN106352877A (en) *  2016-08-10  2017-01-25  纳恩博（北京）科技有限公司  Moving device and positioning method thereof 
CN106154287A (en) *  2016-09-28  2016-11-23  深圳市普渡科技有限公司  A map construction method based on a two-wheel odometer and laser radar 
CN106780699A (en) *  2017-01-09  2017-05-31  东南大学  A vision SLAM method aided by SINS/GPS and an odometer 
CN107063190B (en) *  2017-03-02  2019-07-30  辽宁工程技术大学  High-precision direct pose estimation for calibrated area-array camera images 
CN107063190A (en) *  2017-03-02  2017-08-18  辽宁工程技术大学  High-precision direct pose estimation for calibrated area-array camera images 
CN107036594A (en) *  2017-05-07  2017-08-11  郑州大学  Localization and multi-granularity environment perception for intelligent power-station inspection agents 
CN107160395A (en) *  2017-06-07  2017-09-15  中国人民解放军装甲兵工程学院  Map construction method and robot control system 
CN107291093A (en) *  2017-07-04  2017-10-24  西北工业大学  Autonomous landing area selection method for unmanned aerial vehicles in complex environments based on visual SLAM 
CN107515891A (en) *  2017-07-06  2017-12-26  杭州南江机器人股份有限公司  A robot mapping method, apparatus and storage medium 
CN107516326A (en) *  2017-07-14  2017-12-26  中国科学院计算技术研究所  Robot localization method and system fusing monocular vision and encoder information 
CN107516326B (en) *  2017-07-14  2020-04-03  中国科学院计算技术研究所  Robot positioning method and system fusing monocular vision and encoder information 
CN109389645A (en) *  2017-08-02  2019-02-26  珊口（上海）智能科技有限公司  Camera self-calibration method, system, camera, robot and cloud server 
CN107562660A (en) *  2017-08-29  2018-01-09  深圳普思英察科技有限公司  A vision SLAM system-on-chip and data processing method 
CN109426832A (en) *  2017-08-30  2019-03-05  湖南拓视觉信息技术有限公司  Loop closure detection method, storage medium and electronic device for three-dimensional scene modeling 
CN107607111A (en) *  2017-09-07  2018-01-19  驭势科技（北京）有限公司  Acceleration bias estimation method and device, visual-inertial odometer and application thereof 
CN108332752A (en) *  2018-01-09  2018-07-27  深圳市沃特沃德股份有限公司  Method and device for robot indoor positioning 
CN108363387A (en) *  2018-01-11  2018-08-03  驭势科技（北京）有限公司  Sensor control method and device 
CN108256060A (en) *  2018-01-16  2018-07-06  广州视源电子科技股份有限公司  A loop closure detection method, device, terminal and storage medium 
CN108564625A (en) *  2018-04-27  2018-09-21  百度在线网络技术（北京）有限公司  Graph optimization method, device, electronic device and storage medium 
WO2019233299A1 (en) *  2018-06-05  2019-12-12  杭州海康机器人技术有限公司  Mapping method and apparatus, and computer readable storage medium 
WO2019242628A1 (en) *  2018-06-19  2019-12-26  Beijing Didi Infinity Technology And Development Co., Ltd.  Systems and methods for pose determination 
CN109544629A (en) *  2018-11-29  2019-03-29  南京人工智能高等研究院有限公司  Camera pose determination method and apparatus, and electronic device 
CN109540148A (en) *  2018-12-04  2019-03-29  广州小鹏汽车科技有限公司  Positioning method and system based on a SLAM map 
CN109848996A (en) *  2019-03-19  2019-06-07  西安交通大学  Large-scale three-dimensional environment map creation method based on graph optimization theory 
CN110458897A (en) *  2019-08-13  2019-11-15  北京积加科技有限公司  Multi-camera automatic calibration method and system, and monitoring method and system 
Legal Events
Date  Code  Title  Description 

PB01  Publication  
C06  Publication  
SE01  Entry into force of request for substantive examination  
C10  Entry into substantive examination  
RJ01  Rejection of invention patent application after publication 
Application publication date: 2015-02-25 

RJ01  Rejection of invention patent application after publication 