CN107680133A - Mobile robot visual SLAM method based on an improved closed-loop detection algorithm - Google Patents

Mobile robot visual SLAM method based on an improved closed-loop detection algorithm

Info

Publication number
CN107680133A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710839954.5A
Other languages
Chinese (zh)
Inventor
胡章芳
鲍合章
罗元
孙林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN201710839954.5A priority Critical patent/CN107680133A/en
Publication of CN107680133A publication Critical patent/CN107680133A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/35Determination of transform parameters for the alignment of images, i.e. image registration using statistical methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20164Salient point detection; Corner detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Probability & Statistics with Applications (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The present invention claims a mobile robot visual SLAM method based on an improved closed-loop detection algorithm, comprising the steps: S1, calibrating the Kinect using the Zhang Zhengyou calibration method; S2, performing ORB feature extraction on the acquired RGB images and carrying out feature matching with FLANN; S3, deleting mismatches, obtaining the spatial coordinates of the matched points, and then estimating the inter-frame pose transformation (R, t) with the PnP algorithm; S4, performing structureless iterative optimization on the pose transformation obtained by PnP; S5, preprocessing the image frames, describing the images with a visual bag of words, and matching images with an improved similarity score matching method to obtain candidate closed loops, from which the correct closed loops are screened out; S6, optimizing the poses and landmarks with a graph optimization method centered on bundle adjustment, obtaining more accurate camera poses and landmarks through continuous iterative optimization. The invention can obtain more accurate pose estimation and a better three-dimensional reconstruction effect in indoor environments.

Description

Mobile robot vision SLAM method based on improved closed-loop detection algorithm
Technical Field
The invention belongs to the field of mobile robot navigation, and particularly relates to a vision-based simultaneous localization and mapping (SLAM) method.
Background
Simultaneous Localization and Mapping (SLAM) is a key technology for fully autonomous robot motion: sensors carried by the robot acquire information about the environment, from which the robot's pose is estimated and an environment map is constructed. Visual SLAM is also central to current VR technology and has broad application prospects in the virtual reality industry.
A visual sensor can acquire rich image information similar to what human eyes perceive, and this rich environment information provides more detail for map construction, enabling better mapping results. Visual SLAM has made great progress. In 2007, Klein et al. proposed PTAM, which parallelized tracking and map building and was the first system to use nonlinear optimization as the back end. In 2014, Endres et al. proposed RGBD-SLAM based on a depth camera, integrating image features, closed-loop detection, point clouds, and related techniques to achieve three-dimensional mapping of indoor scenes. In 2015, Raul Mur-Artal et al. built on the PTAM framework, adding map initialization and closed-loop detection and optimizing keyframe selection, achieving good results in both processing speed and map accuracy.
Although visual SLAM has achieved good processing speed and map accuracy in recent years, there is still room for improvement given the stringent requirements on both. Long-term operation of a mobile robot accumulates error, which causes inconsistency between localization and mapping; introducing closed-loop detection reduces this accumulated error. However, the traditional closed-loop detection algorithm has low detection efficiency and accuracy. The invention therefore introduces an improved closed-loop detection algorithm to raise the accuracy and efficiency of closed-loop detection. Meanwhile, to reduce system error, a nonlinear optimization method locally optimizes the front-end pose estimate, after which global optimization is applied to the closed-loop information and the front-end estimates. The proposed mobile robot visual SLAM method based on the improved closed-loop detection algorithm thus meets real-time requirements while obtaining more accurate pose estimation and a better three-dimensional reconstruction.
Disclosure of Invention
The present invention is directed to solving the above problems of the prior art. A mobile robot vision SLAM method based on an improved closed-loop detection algorithm and capable of estimating pose more accurately is provided. The technical scheme of the invention is as follows: a mobile robot vision SLAM method based on an improved closed-loop detection algorithm comprises the following steps:
s1, calibrating the Kinect using the Zhang Zhengyou calibration method to obtain the camera intrinsic parameters;
s2, carrying out ORB feature extraction on the obtained RGB image, and carrying out feature matching by adopting a FLANN (fast approximate nearest neighbor) algorithm, wherein the feature matching comprises a correct matching result and a mismatching result;
s3, deleting the mismatching by using RANSAC (random sample consensus) according to the mismatching result of the step S2; acquiring spatial coordinates of matching points through a Kinect camera model, and estimating pose transformation (R, t) between frames by adopting a PnP (n-point perspective problem) algorithm;
s4, carrying out structureless iterative optimization on the pose transformation obtained by adopting the PnP algorithm in the step S3;
s5, preprocessing the image frames obtained in the step S2, describing the images through visual word bags, matching the images by using an improved similarity score matching method, and obtaining matching which is more consistent with the actual similarity between the images so as to obtain candidate closed loops;
and S6, optimizing the pose and the landmarks by adopting a graph optimization method with bundle adjustment (BA) at its core, acquiring more accurate camera poses and landmarks through continuous iterative optimization.
Further, the ORB feature extraction step in step S2 is:
first, performing FAST corner extraction on the RGB image, and giving the ORB feature points scale and rotation invariance by means of an image pyramid and the gray centroid method (a sketch of the centroid computation follows this list);
then, describing the image around each extracted feature point in binary form with BRIEF.
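As illustration only, a minimal sketch of the gray (intensity) centroid orientation follows: the keypoint orientation is taken as the angle of the vector from the patch center to its intensity centroid, θ = atan2(m01, m10). The square patch is a simplifying assumption (ORB itself uses a circular patch).

```python
import numpy as np

def intensity_centroid_angle(patch):
    """Keypoint orientation via the gray (intensity) centroid:
    theta = atan2(m01, m10), where m10 and m01 are the first-order
    moments of the patch intensities about the patch center."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    xs -= (w - 1) / 2.0  # center the coordinates on the keypoint
    ys -= (h - 1) / 2.0
    m10 = np.sum(xs * patch)
    m01 = np.sum(ys * patch)
    return np.arctan2(m01, m10)
```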
Further, in step S3, the spatial coordinates of the matching points are obtained through a Kinect camera model, and the spatial coordinate solving method includes:
x_o = (z / f_x)(u − c_x)
y_o = (z / f_y)(v − c_y)
z_o = z
where z is the depth value corresponding to pixel coordinate (u, v) in the image obtained by the Kinect, so that the spatial coordinate (x_o, y_o, z_o) follows from the Kinect camera model, with (f_x, f_y) the focal lengths and (c_x, c_y) the principal point.
Further, the step of optimizing the pose transformation obtained by PnP in step S4 is:
first, an optimization problem is established with the camera pose as the optimization variable, minimizing the reprojection error;
then, g2o is called with the PnP result as the initial value to optimize the original PnP estimate.
Further, the step S5 of detecting the candidate closed loop includes:
first, the acquired image is preprocessed: an image similarity score is obtained with the similarity calculation function
H = (V_l · V_c) / (||V_l|| · ||V_c||)
where V_l denotes the weight vector of the previous frame image, V_c the weight vector of the current frame, and H the image similarity. The current frame is discarded when the score is greater than a set threshold; otherwise, the current frame represents a new scene and participates in closed-loop detection;
next, the image is described with a hierarchical K-means clustering algorithm, and image similarity score matching is performed with the improved single-node score function defined below.
Repeated accumulation of similarity is avoided with a top-down similarity increment; the l-th layer similarity score increment is defined as:
ΔS^l(X, Y) = S^l(X, Y) − S^{l+1}(X, Y) for 1 ≤ l < d, and ΔS^d(X, Y) = S^d(X, Y) for l = d
The matching kernel is defined as:
K(X, Y) = Σ_{l=1}^{d} η_l · ΔS^l(X, Y)
where η_l denotes the matching strength factor of the l-th layer.
Further, the improved single-node similarity score function is defined as:
S_i^l(X, Y) = min{ω_i^l(X), ω_i^l(Y)} + 1 / (1 + |ω_i^l(X) − ω_i^l(Y)|)
Based on this new single-node similarity score function, the similarity score at the l-th layer is:
S^l(X, Y) = Σ_{i=1}^{k^l} S_i^l(X, Y) = Σ_{i=1}^{k^l} min{ω_i^l(X), ω_i^l(Y)} + Σ_{i=1}^{k^l} 1 / (1 + |ω_i^l(X) − ω_i^l(Y)|)
where ω_i^l(X) denotes the score of image X on the i-th node of the l-th layer of the tree, and ω_i^l(Y) the score of image Y on the i-th node of the l-th layer.
Further, optimizing the pose and the landmarks with the graph optimization method centered on bundle adjustment (BA) specifically comprises the following steps: first, the cost function over the poses and all landmarks, i.e. the objective function, is constructed as:
(1/2)·Σ_{i=1}^{m} Σ_{j=1}^{n} ||e_ij||² = (1/2)·Σ_{i=1}^{m} Σ_{j=1}^{n} ||z_ij − h(ξ_i, p_j)||²
e = z − h(ξ, p)
where ξ is the Lie algebra element corresponding to the camera pose (the extrinsic parameters (R, t)), p is the three-dimensional landmark point, z is the pixel coordinate data observed by the Kinect, e is the observation error, and z_ij denotes the pixel coordinate data generated by observing landmark p_j at pose ξ_i;
this least squares method is then solved, defining the independent variables as all the variables to be optimized:
x=[ξ1,...,ξm,p1,...,pn]
and continuously searching a descending direction △ x to find the optimal solution of the cost function according to the idea of nonlinear optimization, wherein when an increment is given to the independent variable, the objective function is changed into:
wherein, Fij、EijRespectively representing the partial derivative of the camera pose in the current state and the partial derivative of the position of the landmark point;
the following incremental equations are finally solved:
h △ x is g, g being a constant equivalent.
Wherein,
H=JTJ+λI
j ═ F E, and matrices E and F represent the derivatives of the global objective function to the global variable.
The invention has the following advantages and beneficial effects:
the invention provides a mobile robot vision SLAM method based on an improved closed-loop detection algorithm, which uses ORB characteristics to improve the speed of characteristic extraction and matching and improve the real-time performance of a system; the closed loop detection accuracy is improved through an improved closed loop detection algorithm, the problem of perception ambiguity is avoided, and the system accumulated error is reduced; and errors caused by system noise and the like are reduced through local optimization of the pose and global optimization of the whole system data, so that more accurate pose estimation and a better three-dimensional reconstruction effect are obtained.
Drawings
Fig. 1 is a flow chart of the mobile robot vision SLAM method based on the improved closed-loop detection algorithm of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail and clearly with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
as shown in fig. 1, the present invention provides a mobile robot vision SLAM method based on an improved closed-loop detection algorithm, which is characterized by comprising the following steps:
s1, calibrating the Kinect by using a Zhang-Daoyou calibration method; correcting the distortion of the camera and acquiring camera internal parameters; the Zhang DaoYong calibration method is a calibration method which can obtain the internal parameters of the camera by only using one printed checkerboard.
S2, performing ORB feature extraction on the RGB image using OpenCV; ORB features offer good robustness at low computational cost and are very well suited to real-time SLAM. The fast approximate nearest neighbor (FLANN) algorithm suits cases with very large numbers of matching points and completes feature matching efficiently.
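A minimal sketch of this step with OpenCV's Python bindings follows; the file names, feature count, LSH index parameters, and the 0.7 ratio-test threshold are illustrative assumptions.

```python
import cv2

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # placeholder names
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# ORB descriptors are binary, so FLANN is used with its LSH index (algorithm 6)
index_params = dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1)
flann = cv2.FlannBasedMatcher(index_params, dict(checks=50))
matches = flann.knnMatch(des1, des2, k=2)

# Lowe's ratio test keeps plausible matches; RANSAC in step S3 prunes the rest
good = [pair[0] for pair in matches
        if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance]
```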
S3, for mismatches, random sample consensus (RANSAC) is used to delete them; the spatial coordinates of the matched points are acquired through the Kinect camera model, and the inter-frame pose transformation (R, t) is estimated with the PnP algorithm. The spatial coordinates are solved as follows:
x_o = (z / f_x)(u − c_x)
y_o = (z / f_y)(v − c_y)
z_o = z
where z is the depth value corresponding to pixel coordinate (u, v) in the image obtained by the Kinect, so that the spatial coordinate (x_o, y_o, z_o) follows from the Kinect camera model, with (f_x, f_y) the focal lengths and (c_x, c_y) the principal point.
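This back-projection is a direct pinhole-model computation; a small sketch follows, in which the intrinsic values are hypothetical and used only for illustration.

```python
def pixel_to_point(u, v, z, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth z into the 3D camera-frame
    point (x_o, y_o, z_o), following the Kinect pinhole model above."""
    x = z * (u - cx) / fx
    y = z * (v - cy) / fy
    return x, y, z

# hypothetical intrinsics, for illustration only
print(pixel_to_point(320, 240, 1.5, fx=525.0, fy=525.0, cx=319.5, cy=239.5))
```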
And S4, performing structureless iterative optimization on the pose transformation obtained by PnP, in order to provide a more stable and accurate pose estimate for the back-end global optimization. The pose transformation obtained by PnP is optimized as follows:
first, an optimization problem is established with the camera pose as the optimization variable, minimizing the reprojection error;
then, g2o is called with the PnP result as the initial value to optimize the original PnP estimate.
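To make the two steps concrete, the sketch below estimates (R, t) with OpenCV's RANSAC-based PnP and then refines it by minimizing the reprojection error; SciPy's Levenberg-Marquardt solver stands in here for the g2o call described above, and pts3d, pts2d, and K are assumed to be given by steps S1-S3.

```python
import cv2
import numpy as np
from scipy.optimize import least_squares

# pts3d (N x 3), pts2d (N x 2) and the intrinsic matrix K are assumed given.
# RANSAC-based PnP yields the initial pose (rvec, tvec) and the inlier set.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d, pts2d, K, None)
if inliers is not None:
    pts3d, pts2d = pts3d[inliers.ravel()], pts2d[inliers.ravel()]

def reproj_error(x):
    """Reprojection residuals for the pose x = (rvec, tvec)."""
    proj, _ = cv2.projectPoints(pts3d, x[:3], x[3:], K, None)
    return (proj.reshape(-1, 2) - pts2d).ravel()

# Levenberg-Marquardt refinement of the PnP result, standing in for g2o
x0 = np.hstack([rvec.ravel(), tvec.ravel()])
res = least_squares(reproj_error, x0, method="lm")
rvec_opt, tvec_opt = res.x[:3], res.x[3:]
```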
S5, first, the acquired image is preprocessed: an image similarity score is computed with the similarity calculation function
H = (V_l · V_c) / (||V_l|| · ||V_c||)
where V_l is the weight vector of the previous frame image and V_c that of the current frame. When the score is greater than a set threshold, the current frame is discarded; otherwise, the current frame represents a new scene and participates in closed-loop detection. The threshold in this preprocessing should be set reasonably according to the quality of the acquired images and the image acquisition rate: the smaller the threshold, the more image frames are deleted and the more reliably a retained frame represents a new scene. However, if it is set too low, the closed loop cannot be detected even when the robot returns to a previously visited region, so the threshold should be set appropriately for the situation.
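A minimal sketch of this preprocessing test follows; the weight vectors are assumed to be given by the bag-of-words stage, and the 0.9 threshold is an illustrative assumption (the text only requires the threshold to be tuned to the image quality and acquisition rate).

```python
import numpy as np

def is_new_scene(v_last, v_cur, threshold=0.9):
    """Preprocessing test of step S5: H is the cosine similarity between
    the weight vectors of the previous and current frames. A frame too
    similar to the previous one (H > threshold) is discarded."""
    h = np.dot(v_last, v_cur) / (np.linalg.norm(v_last) * np.linalg.norm(v_cur))
    return h <= threshold
```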
Next, the image is described with a hierarchical K-means clustering algorithm, and similarity score matching is performed with the improved single-node score function below.
The improved single-node similarity score function is defined as:
S_i^l(X, Y) = min{ω_i^l(X), ω_i^l(Y)} + 1 / (1 + |ω_i^l(X) − ω_i^l(Y)|)
where ω_i^l(X) and ω_i^l(Y) denote the scores of images X and Y on the i-th node of the l-th layer of the tree. Based on this single-node score, the similarity score at the l-th layer is:
S^l(X, Y) = Σ_{i=1}^{k^l} S_i^l(X, Y) = Σ_{i=1}^{k^l} min{ω_i^l(X), ω_i^l(Y)} + Σ_{i=1}^{k^l} 1 / (1 + |ω_i^l(X) − ω_i^l(Y)|)
Repeated accumulation of similarity is avoided with a top-down similarity increment; the l-th layer similarity score increment is defined as:
ΔS^l(X, Y) = S^l(X, Y) − S^{l+1}(X, Y) for 1 ≤ l < d, and ΔS^d(X, Y) = S^d(X, Y) for l = d
The matching kernel is defined as:
K(X, Y) = Σ_{l=1}^{d} η_l · ΔS^l(X, Y)
where η_l denotes the matching strength factor of the l-th layer.
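The sketch below implements the improved single-node score, the per-layer score, the top-down increments, and the matching kernel as defined above; the per-layer weight arrays (one array of k^l node weights per layer) and the η factors are assumed inputs.

```python
import numpy as np

def layer_score(wx, wy):
    """S^l(X, Y): per node, the minimum weight plus a term rewarding
    weights of similar magnitude, summed over the k^l nodes of layer l."""
    return np.sum(np.minimum(wx, wy) + 1.0 / (1.0 + np.abs(wx - wy)))

def matching_kernel(w_x, w_y, eta):
    """K(X, Y) = sum_l eta_l * dS^l, with top-down increments
    dS^l = S^l - S^(l+1) for l < d and dS^d = S^d."""
    d = len(w_x)  # number of layers; index 0 holds layer 1
    s = [layer_score(w_x[l], w_y[l]) for l in range(d)]
    inc = [s[l] - s[l + 1] for l in range(d - 1)] + [s[d - 1]]
    return sum(eta[l] * inc[l] for l in range(d))
```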
Candidate closed loops are then screened by comparing the similarity score against a set threshold, and correct closed loops are selected from the candidates using temporal continuity and spatial consistency.
S6, first, in order to optimize the poses and landmark points calculated by the visual odometry, an overall cost function (also called the objective function) is constructed as follows:
(1/2)·Σ_{i=1}^{m} Σ_{j=1}^{n} ||e_ij||² = (1/2)·Σ_{i=1}^{m} Σ_{j=1}^{n} ||z_ij − h(ξ_i, p_j)||²
e = z − h(ξ, p)
where ξ is the Lie algebra element corresponding to the camera pose (the extrinsic parameters (R, t)), p is the three-dimensional landmark point, z is the pixel coordinate data observed by the Kinect, e is the observation error, and z_ij denotes the pixel coordinate data generated by observing landmark p_j at pose ξ_i.
This least-squares problem is then solved. The independent variable is defined as all variables to be optimized:
x = [ξ_1, ..., ξ_m, p_1, ..., p_n]
Following the idea of nonlinear optimization, a descent direction Δx is continuously searched for to find the optimal solution of the cost function. When an increment is given to the independent variable, the objective function becomes:
(1/2)·||f(x + Δx)||² ≈ (1/2)·Σ_{i=1}^{m} Σ_{j=1}^{n} ||e_ij + F_ij·Δξ_i + E_ij·Δp_j||²
where F_ij and E_ij denote the partial derivatives of the objective function with respect to the camera pose and the landmark position in the current state, respectively.
The following incremental equation is finally solved:
H·Δx = g
where g is the constant term,
H = JᵀJ + λI
J = [F E]
Finally, through multiple iterations, the state variable x converges to a value satisfying the objective equation, yielding the optimized poses and landmarks.
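A minimal sketch of one such increment step follows, assuming the stacked Jacobian J and residual vector e have already been assembled; production BA solvers such as g2o additionally exploit the sparsity of H (for example via the Schur complement), which this dense version omits.

```python
import numpy as np

def lm_step(J, e, lam):
    """One Levenberg-Marquardt increment for the BA problem: solve
    (J^T J + lam*I) dx = -J^T e, i.e. H dx = g with g the constant term.
    J stacks the pose blocks F and the landmark blocks E."""
    H = J.T @ J + lam * np.eye(J.shape[1])
    g = -J.T @ e
    return np.linalg.solve(H, g)
```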
The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.

Claims (7)

1. A mobile robot vision SLAM method based on an improved closed-loop detection algorithm is characterized by comprising the following steps:
s1, calibrating the Kinect using the Zhang Zhengyou calibration method to obtain the camera intrinsic parameters;
s2, carrying out ORB feature extraction on the obtained RGB image, and carrying out feature matching by adopting a FLANN fast approximate nearest neighbor algorithm, wherein the feature matching comprises a correct matching result and a wrong matching result;
s3, aiming at the mismatching result in the step S2, the random sample consensus is utilized to delete the mismatching; acquiring spatial coordinates of matching points through a Kinect camera model, and estimating interframe pose transformation (R, t) by adopting an n-point perspective problem algorithm PnP;
s4, carrying out structureless iterative optimization on the pose transformation obtained by adopting the PnP algorithm in the step S3;
s5, preprocessing the image frames obtained in the step S2, describing the images through visual word bags, matching the images by using an improved similarity score matching method, and obtaining matching which is more consistent with the actual similarity between the images so as to obtain candidate closed loops;
and S6, optimizing the pose and the landmarks by adopting a graph optimization method with bundle adjustment (BA) at its core, and acquiring more accurate camera poses and landmarks through continuous iterative optimization.
2. The mobile robot visual SLAM method of claim 1, wherein the ORB feature extraction step in step S2 is:
firstly, performing FAST corner extraction on an RGB image, and enabling ORB feature points to have invariance of scale and rotation by utilizing an image pyramid construction method and a gray centroid method;
then, binary description is carried out on the image around each feature point extracted in the previous step through BRIEF.
3. The mobile robot visual SLAM method as claimed in claim 2, wherein the spatial coordinates of the matching points are obtained through a Kinect camera model in step S3, and the spatial coordinate solving method comprises:
x_o = (z / f_x)(u − c_x)
y_o = (z / f_y)(v − c_y)
z_o = z
where z is the depth value corresponding to pixel coordinate (u, v) in the image obtained by the Kinect, so that the spatial coordinate (x_o, y_o, z_o) can be obtained according to the Kinect camera model.
4. The mobile robot vision SLAM method based on the improved closed-loop detection algorithm of claim 3, wherein the step of optimizing the pose transformation found by PnP in step S4 comprises:
firstly, establishing an optimization problem by minimizing a reprojection error by taking a camera pose as an optimization variable;
then, the original PnP function is optimized by calling g2o with the PnP value as an initial value.
5. The visual SLAM method for mobile robots based on improved closed loop detection algorithm as claimed in claim 4 wherein the step S5 detection steps of candidate closed loop are:
firstly, preprocessing an acquired image, acquiring an image similarity score by using the following similarity calculation function, and discarding a current frame when the score is greater than a set threshold value; if not, the current frame represents a new scene for participating in closed loop detection;
H = (V_l · V_c) / (||V_l|| · ||V_c||)
where V_l denotes the weight vector of the previous frame image, V_c the weight vector of the current frame, and the value of H the image similarity;
next, describing the image by adopting a hierarchical K-means clustering algorithm; then adopting the following improved single-node score function to carry out image similarity score matching;
the repeated accumulation of similarity can be avoided by using a top-down similarity increment method, and the l-th layer similarity score increment is defined as:
ΔS^l(X, Y) = S^l(X, Y) − S^{l+1}(X, Y) for 1 ≤ l < d, and ΔS^d(X, Y) = S^d(X, Y) for l = d
defining the matching kernel as:
K(X, Y) = Σ_{l=1}^{d} η_l · ΔS^l(X, Y)
where η_l denotes the matching strength factor of the l-th layer.
6. The visual SLAM method for mobile robots based on an improved closed loop detection algorithm as claimed in claim 5 wherein the function of the improved similarity score is defined as:
S_i^l(X, Y) = min{ω_i^l(X), ω_i^l(Y)} + 1 / (1 + |ω_i^l(X) − ω_i^l(Y)|)
based on the new single-node similarity score function proposed by the above formula, the similarity score at the l-th layer is:
S^l(X, Y) = Σ_{i=1}^{k^l} S_i^l(X, Y) = Σ_{i=1}^{k^l} min{ω_i^l(X), ω_i^l(Y)} + Σ_{i=1}^{k^l} 1 / (1 + |ω_i^l(X) − ω_i^l(Y)|)
where ω_i^l(X) denotes the score of image X on the i-th node of the l-th layer of the tree, and ω_i^l(Y) the score of image Y on the i-th node of the l-th layer.
7. The mobile robot vision SLAM method based on the improved closed-loop detection algorithm as claimed in claim 5 or 6, wherein optimizing the pose and landmarks with a graph optimization method centered on bundle adjustment (BA) specifically comprises: firstly, constructing the cost function over the poses and all landmarks, namely the objective function, as follows:
(1/2)·Σ_{i=1}^{m} Σ_{j=1}^{n} ||e_ij||² = (1/2)·Σ_{i=1}^{m} Σ_{j=1}^{n} ||z_ij − h(ξ_i, p_j)||²
e = z − h(ξ, p)
where ξ is the Lie algebra element corresponding to the camera pose (the extrinsic parameters (R, t)), p is the three-dimensional landmark point, z is the pixel coordinate data observed by the Kinect, e is the observation error, and z_ij denotes the pixel coordinate data generated by observing landmark p_j at pose ξ_i;
this least squares method is then solved, defining the independent variables as all the variables to be optimized:
x = [ξ_1, ..., ξ_m, p_1, ..., p_n]
and, following the idea of nonlinear optimization, continuously searching a descent direction Δx to find the optimal solution of the cost function, wherein when an increment is given to the independent variable, the objective function becomes:
(1/2)·||f(x + Δx)||² ≈ (1/2)·Σ_{i=1}^{m} Σ_{j=1}^{n} ||e_ij + F_ij·Δξ_i + E_ij·Δp_j||²
where F_ij and E_ij denote the partial derivatives of the objective function with respect to the camera pose and the landmark position in the current state, respectively;
the following incremental equations are finally solved:
H·Δx = g, where g is the constant term,
H = JᵀJ + λI
J = [F E]
and matrices E and F represent the derivatives of the overall objective function with respect to the landmark and pose variables.
CN201710839954.5A 2017-09-15 2017-09-15 A kind of mobile robot visual SLAM methods based on improvement closed loop detection algorithm Pending CN107680133A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710839954.5A CN107680133A (en) 2017-09-15 2017-09-15 A kind of mobile robot visual SLAM methods based on improvement closed loop detection algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710839954.5A CN107680133A (en) 2017-09-15 2017-09-15 A kind of mobile robot visual SLAM methods based on improvement closed loop detection algorithm

Publications (1)

Publication Number Publication Date
CN107680133A true CN107680133A (en) 2018-02-09

Family

ID=61135809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710839954.5A Pending CN107680133A (en) 2017-09-15 2017-09-15 A kind of mobile robot visual SLAM methods based on improvement closed loop detection algorithm

Country Status (1)

Country Link
CN (1) CN107680133A (en)

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108648274A (en) * 2018-05-10 2018-10-12 华南理工大学 A kind of cognition point cloud map creation system of vision SLAM
CN108830191A (en) * 2018-05-30 2018-11-16 上海电力学院 Based on the mobile robot SLAM method for improving EMM and ORB algorithm
CN109509211A (en) * 2018-09-28 2019-03-22 北京大学 Positioning simultaneously and the feature point extraction and matching process and system built in diagram technology
CN109556611A (en) * 2018-11-30 2019-04-02 广州高新兴机器人有限公司 A kind of fusion and positioning method based on figure optimization and particle filter
CN109676604A (en) * 2018-12-26 2019-04-26 清华大学 Robot non-plane motion localization method and its motion locating system
CN109766758A (en) * 2018-12-12 2019-05-17 北京计算机技术及应用研究所 A kind of vision SLAM method based on ORB feature
CN109800692A (en) * 2019-01-07 2019-05-24 重庆邮电大学 A kind of vision SLAM winding detection method based on pre-training convolutional neural networks
CN109902619A (en) * 2019-02-26 2019-06-18 上海大学 Image closed loop detection method and system
CN109934857A (en) * 2019-03-04 2019-06-25 大连理工大学 A kind of winding detection method based on convolutional neural networks Yu ORB feature
CN109974707A (en) * 2019-03-19 2019-07-05 重庆邮电大学 A kind of indoor mobile robot vision navigation method based on improvement cloud matching algorithm
CN110119144A (en) * 2019-04-19 2019-08-13 苏州大学 Based on the matched multirobot SLAM algorithm of sub- map feature
WO2019169540A1 (en) * 2018-03-06 2019-09-12 斯坦德机器人(深圳)有限公司 Method for tightly-coupling visual slam, terminal and computer readable storage medium
CN110243370A (en) * 2019-05-16 2019-09-17 西安理工大学 A kind of three-dimensional semantic map constructing method of the indoor environment based on deep learning
CN110322549A (en) * 2019-06-12 2019-10-11 清华大学 A kind of method and system of the three-dimensional reconstruction based on image
CN110335315A (en) * 2019-06-27 2019-10-15 Oppo广东移动通信有限公司 A kind of image processing method and device, computer readable storage medium
CN110827353A (en) * 2019-10-18 2020-02-21 天津大学 Robot positioning method based on monocular camera assistance
CN111024078A (en) * 2019-11-05 2020-04-17 广东工业大学 Unmanned aerial vehicle vision SLAM method based on GPU acceleration
CN111044039A (en) * 2019-12-25 2020-04-21 中航华东光电有限公司 Monocular target area self-adaptive high-precision distance measuring device and method based on IMU
CN111160373A (en) * 2019-12-30 2020-05-15 重庆邮电大学 Method for extracting, detecting and classifying defect image features of variable speed drum parts
CN111275763A (en) * 2020-01-20 2020-06-12 深圳市普渡科技有限公司 Closed loop detection system, multi-sensor fusion SLAM system and robot
CN111322993A (en) * 2018-12-13 2020-06-23 杭州海康机器人技术有限公司 Visual positioning method and device
CN111582123A (en) * 2020-04-29 2020-08-25 华南理工大学 AGV positioning method based on beacon identification and visual SLAM
CN111882663A (en) * 2020-07-03 2020-11-03 广州万维创新科技有限公司 Visual SLAM closed-loop detection method achieved by fusing semantic information
WO2020259185A1 (en) * 2019-06-25 2020-12-30 京东方科技集团股份有限公司 Method and apparatus for implementing visual odometer
CN112304311A (en) * 2019-07-29 2021-02-02 南京理工大学 Method for solving BA problem based on differential evolution algorithm in SLAM process
CN112364881A (en) * 2020-04-01 2021-02-12 武汉理工大学 Advanced sampling consistency image matching algorithm
CN112560648A (en) * 2020-12-09 2021-03-26 长安大学 SLAM method based on RGB-D image
CN113010724A (en) * 2021-04-29 2021-06-22 山东新一代信息产业技术研究院有限公司 Robot map selection method and system based on visual feature point matching
CN113743413A (en) * 2021-07-30 2021-12-03 的卢技术有限公司 Visual SLAM method and system combining image semantic information
CN114111803A (en) * 2022-01-26 2022-03-01 中国人民解放军战略支援部队航天工程大学 Visual navigation method of indoor satellite platform
CN114418927A (en) * 2021-11-09 2022-04-29 四川大学 Closed loop detection method and system based on spatial relationship feature matching
CN114603555A (en) * 2022-02-24 2022-06-10 江西省智能产业技术创新研究院 Mobile robot initial pose estimation method and system, computer and robot
CN114694013A (en) * 2022-04-11 2022-07-01 北京理工大学 Distributed multi-machine cooperative vision SLAM method and system
CN115235505A (en) * 2022-07-12 2022-10-25 重庆邮电大学 Visual odometer method based on nonlinear optimization
CN115272494A (en) * 2022-09-29 2022-11-01 腾讯科技(深圳)有限公司 Calibration method and device for camera and inertial measurement unit and computer equipment
CN117115238A (en) * 2023-04-12 2023-11-24 荣耀终端有限公司 Pose determining method, electronic equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102706342A (en) * 2012-05-31 2012-10-03 重庆邮电大学 Location and environment modeling method of intelligent movable robot
CN104374395A (en) * 2014-03-31 2015-02-25 南京邮电大学 Graph-based vision SLAM (simultaneous localization and mapping) method
CN105843223A (en) * 2016-03-23 2016-08-10 东南大学 Mobile robot three-dimensional mapping and obstacle avoidance method based on space bag of words model
CN105856230A (en) * 2016-05-06 2016-08-17 简燕梅 ORB key frame closed-loop detection SLAM method capable of improving consistency of position and pose of robot
CN106127739A (en) * 2016-06-16 2016-11-16 华东交通大学 A kind of RGB D SLAM method of combination monocular vision
CN106556395A (en) * 2016-11-17 2017-04-05 北京联合大学 A kind of air navigation aid of the single camera vision system based on quaternary number
CN106909877A (en) * 2016-12-13 2017-06-30 浙江大学 A kind of vision based on dotted line comprehensive characteristics builds figure and localization method simultaneously

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102706342A (en) * 2012-05-31 2012-10-03 重庆邮电大学 Location and environment modeling method of intelligent movable robot
CN104374395A (en) * 2014-03-31 2015-02-25 南京邮电大学 Graph-based vision SLAM (simultaneous localization and mapping) method
CN105843223A (en) * 2016-03-23 2016-08-10 东南大学 Mobile robot three-dimensional mapping and obstacle avoidance method based on space bag of words model
CN105856230A (en) * 2016-05-06 2016-08-17 简燕梅 ORB key frame closed-loop detection SLAM method capable of improving consistency of position and pose of robot
CN106127739A (en) * 2016-06-16 2016-11-16 华东交通大学 A kind of RGB D SLAM method of combination monocular vision
CN106556395A (en) * 2016-11-17 2017-04-05 北京联合大学 A kind of air navigation aid of the single camera vision system based on quaternary number
CN106909877A (en) * 2016-12-13 2017-06-30 浙江大学 A kind of vision based on dotted line comprehensive characteristics builds figure and localization method simultaneously

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
ADRIEN ANGELI ET AL: "Fast and Incremental Method for Loop-Closure Detection Using Bags of Visual Words", 《IEEE TRANSACTIONS ON ROBOTICS》 *
FELIX ENDRES ET AL: "3-D Mapping With an RGB-D Camera", 《IEEE TRANSACTIONS ON ROBOTICS》 *
刘国忠 et al.: "Fast Closed-Loop Detection Based on SURF and ORB Global Features", 《机器人》 *
张毅 et al.: "Graph Optimization-Based Visual SLAM for Mobile Robots", 《智能系统学报》 *
曹明伟 et al.: "A Fast and Robust Bundle Adjustment Algorithm for Large-Scale 3D Reconstruction", 《计算机辅助设计与图形学学报》 *
李永锋 et al.: "An Improved Closed-Loop Detection Algorithm Based on a Set of Historical Models", 《机器人》 *
陈超 et al.: "Research on Visual Closed-Loop Detection for Mobile Robots Based on Scene Appearance Modeling", 《产业与科技论坛》 *

Cited By (59)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110462683B (en) * 2018-03-06 2022-04-12 斯坦德机器人(深圳)有限公司 Method, terminal and computer readable storage medium for tightly coupling visual SLAM
CN110462683A (en) * 2018-03-06 2019-11-15 斯坦德机器人(深圳)有限公司 Method, terminal and the computer readable storage medium of close coupling vision SLAM
WO2019169540A1 (en) * 2018-03-06 2019-09-12 斯坦德机器人(深圳)有限公司 Method for tightly-coupling visual slam, terminal and computer readable storage medium
CN108648274A (en) * 2018-05-10 2018-10-12 华南理工大学 A kind of cognition point cloud map creation system of vision SLAM
CN108648274B (en) * 2018-05-10 2020-05-22 华南理工大学 Cognitive point cloud map creating system of visual SLAM
CN108830191A (en) * 2018-05-30 2018-11-16 上海电力学院 Based on the mobile robot SLAM method for improving EMM and ORB algorithm
CN108830191B (en) * 2018-05-30 2022-04-01 上海电力学院 Mobile robot SLAM method based on improved environment measurement module EMM and ORB algorithm
CN109509211A (en) * 2018-09-28 2019-03-22 北京大学 Positioning simultaneously and the feature point extraction and matching process and system built in diagram technology
CN109509211B (en) * 2018-09-28 2021-11-16 北京大学 Feature point extraction and matching method and system in simultaneous positioning and mapping technology
CN109556611A (en) * 2018-11-30 2019-04-02 广州高新兴机器人有限公司 A kind of fusion and positioning method based on figure optimization and particle filter
CN109556611B (en) * 2018-11-30 2020-11-10 广州高新兴机器人有限公司 Fusion positioning method based on graph optimization and particle filtering
CN109766758A (en) * 2018-12-12 2019-05-17 北京计算机技术及应用研究所 A kind of vision SLAM method based on ORB feature
CN111322993B (en) * 2018-12-13 2022-03-04 杭州海康机器人技术有限公司 Visual positioning method and device
CN111322993A (en) * 2018-12-13 2020-06-23 杭州海康机器人技术有限公司 Visual positioning method and device
CN109676604B (en) * 2018-12-26 2020-09-22 清华大学 Robot curved surface motion positioning method and motion positioning system thereof
CN109676604A (en) * 2018-12-26 2019-04-26 清华大学 Robot non-plane motion localization method and its motion locating system
CN109800692A (en) * 2019-01-07 2019-05-24 重庆邮电大学 A kind of vision SLAM winding detection method based on pre-training convolutional neural networks
CN109800692B (en) * 2019-01-07 2022-12-27 重庆邮电大学 Visual SLAM loop detection method based on pre-training convolutional neural network
CN109902619A (en) * 2019-02-26 2019-06-18 上海大学 Image closed loop detection method and system
CN109934857A (en) * 2019-03-04 2019-06-25 大连理工大学 A kind of winding detection method based on convolutional neural networks Yu ORB feature
CN109934857B (en) * 2019-03-04 2021-03-19 大连理工大学 Loop detection method based on convolutional neural network and ORB characteristics
CN109974707A (en) * 2019-03-19 2019-07-05 重庆邮电大学 A kind of indoor mobile robot vision navigation method based on improvement cloud matching algorithm
CN110119144B (en) * 2019-04-19 2022-04-22 苏州大学 Multi-robot SLAM algorithm based on sub-map feature matching
CN110119144A (en) * 2019-04-19 2019-08-13 苏州大学 Based on the matched multirobot SLAM algorithm of sub- map feature
CN110243370A (en) * 2019-05-16 2019-09-17 西安理工大学 A kind of three-dimensional semantic map constructing method of the indoor environment based on deep learning
CN110322549A (en) * 2019-06-12 2019-10-11 清华大学 A kind of method and system of the three-dimensional reconstruction based on image
WO2020259185A1 (en) * 2019-06-25 2020-12-30 京东方科技集团股份有限公司 Method and apparatus for implementing visual odometer
CN110335315A (en) * 2019-06-27 2019-10-15 Oppo广东移动通信有限公司 A kind of image processing method and device, computer readable storage medium
CN110335315B (en) * 2019-06-27 2021-11-02 Oppo广东移动通信有限公司 Image processing method and device and computer readable storage medium
CN112304311A (en) * 2019-07-29 2021-02-02 南京理工大学 Method for solving BA problem based on differential evolution algorithm in SLAM process
CN112304311B (en) * 2019-07-29 2023-08-22 南京理工大学 Method for solving BA problem based on differential evolution algorithm for SLAM process
CN110827353B (en) * 2019-10-18 2023-03-28 天津大学 Robot positioning method based on monocular camera assistance
CN110827353A (en) * 2019-10-18 2020-02-21 天津大学 Robot positioning method based on monocular camera assistance
CN111024078B (en) * 2019-11-05 2021-03-16 广东工业大学 Unmanned aerial vehicle vision SLAM method based on GPU acceleration
CN111024078A (en) * 2019-11-05 2020-04-17 广东工业大学 Unmanned aerial vehicle vision SLAM method based on GPU acceleration
CN111044039B (en) * 2019-12-25 2024-03-19 中航华东光电有限公司 Monocular target area self-adaptive high-precision distance measurement device and method based on IMU
CN111044039A (en) * 2019-12-25 2020-04-21 中航华东光电有限公司 Monocular target area self-adaptive high-precision distance measuring device and method based on IMU
CN111160373A (en) * 2019-12-30 2020-05-15 重庆邮电大学 Method for extracting, detecting and classifying defect image features of variable speed drum parts
WO2021147549A1 (en) * 2020-01-20 2021-07-29 深圳市普渡科技有限公司 Closed-loop detection method and system, multi-sensor fusion slam system, robot, and medium
CN111275763A (en) * 2020-01-20 2020-06-12 深圳市普渡科技有限公司 Closed loop detection system, multi-sensor fusion SLAM system and robot
CN111275763B (en) * 2020-01-20 2023-10-13 深圳市普渡科技有限公司 Closed loop detection system, multi-sensor fusion SLAM system and robot
CN112364881A (en) * 2020-04-01 2021-02-12 武汉理工大学 Advanced sampling consistency image matching algorithm
CN112364881B (en) * 2020-04-01 2022-06-28 武汉理工大学 Advanced sampling consistency image matching method
CN111582123B (en) * 2020-04-29 2023-02-14 华南理工大学 AGV positioning method based on beacon identification and visual SLAM
CN111582123A (en) * 2020-04-29 2020-08-25 华南理工大学 AGV positioning method based on beacon identification and visual SLAM
CN111882663A (en) * 2020-07-03 2020-11-03 广州万维创新科技有限公司 Visual SLAM closed-loop detection method achieved by fusing semantic information
CN112560648B (en) * 2020-12-09 2023-04-07 长安大学 SLAM method based on RGB-D image
CN112560648A (en) * 2020-12-09 2021-03-26 长安大学 SLAM method based on RGB-D image
CN113010724A (en) * 2021-04-29 2021-06-22 山东新一代信息产业技术研究院有限公司 Robot map selection method and system based on visual feature point matching
CN113743413B (en) * 2021-07-30 2023-12-01 的卢技术有限公司 Visual SLAM method and system combining image semantic information
CN113743413A (en) * 2021-07-30 2021-12-03 的卢技术有限公司 Visual SLAM method and system combining image semantic information
CN114418927A (en) * 2021-11-09 2022-04-29 四川大学 Closed loop detection method and system based on spatial relationship feature matching
CN114111803A (en) * 2022-01-26 2022-03-01 中国人民解放军战略支援部队航天工程大学 Visual navigation method of indoor satellite platform
CN114603555A (en) * 2022-02-24 2022-06-10 江西省智能产业技术创新研究院 Mobile robot initial pose estimation method and system, computer and robot
CN114603555B (en) * 2022-02-24 2023-12-08 江西省智能产业技术创新研究院 Mobile robot initial pose estimation method and system, computer and robot
CN114694013A (en) * 2022-04-11 2022-07-01 北京理工大学 Distributed multi-machine cooperative vision SLAM method and system
CN115235505A (en) * 2022-07-12 2022-10-25 重庆邮电大学 Visual odometer method based on nonlinear optimization
CN115272494A (en) * 2022-09-29 2022-11-01 腾讯科技(深圳)有限公司 Calibration method and device for camera and inertial measurement unit and computer equipment
CN117115238A (en) * 2023-04-12 2023-11-24 荣耀终端有限公司 Pose determining method, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN107680133A (en) A kind of mobile robot visual SLAM methods based on improvement closed loop detection algorithm
CN109307508B (en) Panoramic inertial navigation SLAM method based on multiple key frames
CN111156984B (en) Monocular vision inertia SLAM method oriented to dynamic scene
CN109631855B (en) ORB-SLAM-based high-precision vehicle positioning method
CN109387204B (en) Mobile robot synchronous positioning and composition method facing indoor dynamic environment
CN104732518B (en) A kind of PTAM improved methods based on intelligent robot terrain surface specifications
CN109949375B (en) Mobile robot target tracking method based on depth map region of interest
CN105856230B (en) A kind of ORB key frames closed loop detection SLAM methods for improving robot pose uniformity
CN110223348A (en) Robot scene adaptive bit orientation estimation method based on RGB-D camera
CN109166149A (en) A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU
CN108229416B (en) Robot SLAM method based on semantic segmentation technology
CN109579825B (en) Robot positioning system and method based on binocular vision and convolutional neural network
CN107590827A (en) A kind of indoor mobile robot vision SLAM methods based on Kinect
CN109671120A (en) A kind of monocular SLAM initial method and system based on wheel type encoder
CN107193279A (en) Robot localization and map structuring system based on monocular vision and IMU information
CN103413352A (en) Scene three-dimensional reconstruction method based on RGBD multi-sensor fusion
CN111998862B (en) BNN-based dense binocular SLAM method
CN109087394A (en) A kind of real-time indoor three-dimensional rebuilding method based on inexpensive RGB-D sensor
CN107220996B (en) One kind is based on the consistent unmanned plane linear array of three-legged structure and face battle array image matching method
CN110322507A (en) A method of based on depth re-projection and Space Consistency characteristic matching
JP2017117386A (en) Self-motion estimation system, control method and program of self-motion estimation system
CN111368759A (en) Monocular vision-based semantic map construction system for mobile robot
CN108680177A (en) Synchronous superposition method and device based on rodent models
CN112509051A (en) Bionic-based autonomous mobile platform environment sensing and mapping method
CN117367427A (en) Multi-mode slam method applicable to vision-assisted laser fusion IMU in indoor environment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20180209