CN111076733A - Robot indoor map building method and system based on vision and laser slam - Google Patents
Robot indoor map building method and system based on vision and laser slam
- Publication number
- CN111076733A (application CN201911257094.XA)
- Authority
- CN
- China
- Prior art keywords
- robot
- pose
- laser
- feature points
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
- G01C21/32—Structuring or formatting of map data
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a robot indoor map building method based on vision and laser slam, which comprises the following steps: a vision sensor acquires a left view and a right view and generates a feature point cloud; a vision-based pose transformation estimate of the robot is calculated; laser data are obtained, and the laser-based pose transformation estimate of the robot and the confidence update of the map grids are calculated; subgraphs are generated from continuous laser point cloud data using a tile-type mapping method; the obtained feature points are matched against each subgraph; and the pose of the robot in each subgraph is updated through graph optimization to form the final mapping result. By adopting a mapping method that combines vision and laser, the invention mitigates, on the one hand, the defect that vision algorithms are strongly affected by illumination; on the other hand, it detects closed loops using the descriptor attributes of the feature points, solving the problem of the low success rate of laser closed-loop detection and improving closed-loop detection efficiency.
Description
Technical Field
The invention relates to the field of inspection robots, and in particular to a robot indoor map building method and system based on vision and laser slam.
Background
The quality of the indoor navigation environment map directly affects the robot's ability to complete its various inspection tasks. To create a map, information about the surroundings must first be obtained, which is done with sensors. Current research on robot mapping concentrates mainly on two kinds of sensor. The first is the laser sensor, whose advantages are high resolution, high ranging precision, strong resistance to active interference, good detection performance, and insensitivity to lighting; its drawbacks are that the acquired information carries no semantic content, registration precision is limited, and closed-loop detection often fails during mapping. The second, the vision sensor, can obtain rich texture information and matches accurately, but has the defect of being strongly affected by illumination changes.
In order to overcome the defects of a laser sensor and a vision sensor, the invention provides a robot indoor map building method and system based on vision and laser slam.
Disclosure of Invention
In order to solve the above problems, the present invention provides a robot indoor map building method based on vision and laser slam, comprising the following steps:
the visual sensor acquires a left view and a right view, and generates a feature point cloud, which specifically comprises the following steps:
extracting feature points of the left view and the right view,
the obtained feature points are matched with each other,
calculating the space coordinates of the matched characteristic points;
calculating a vision-based pose transformation estimate of the robot, namely performing secondary matching between the feature points matched across the left and right views at the current moment and those matched at the previous moment, obtaining the pose transformation relation of the robot from the coordinate changes and principal direction changes of the secondarily matched feature points, projecting the descriptors of the feature points onto the corresponding map grids, and defining the feature point descriptor attribute of those grids;
obtaining laser data, and calculating the laser-based pose transformation estimate of the robot and the confidence update of the map grids;
generating subgraphs from continuous laser point cloud data using a tile-type mapping method;
closed-loop detection, namely matching the descriptors of the obtained feature points against the feature point descriptor attribute of each grid of each subgraph;
graph optimization, which specifically comprises: representing the poses of the robot by the nodes of a graph and the transformation relations between poses by the edges connecting the nodes; calculating the error between each robot pose and the estimated pose obtained through the transformation matrix; calculating the error objective function of the graph with its several edges; obtaining, through multiple iterations, the robot poses that make the objective function converge; and updating the pose of the robot in each subgraph with these poses to form the final mapping result.
Further, the extracting feature points of the left view and the right view includes:
performing FAST corner detection on the left view and the right view by using an ORB algorithm, and extracting a characteristic point P;
establishing a coordinate system with the feature point P as the origin, calculating the centroid position C in the neighborhood S, and constructing the vector PC with the feature point P as the starting point and the centroid C as the end point; the moments of the neighborhood S can be expressed as:
m_pq = Σ_{x,y ∈ S} x^p y^q Y(x,y)
wherein Y(x,y) is the gray value of the image, x, y ∈ [−r, r], r is the radius of the neighborhood S, and p and q take the value 0 or 1; the centroid position of the neighborhood is then:
C = (m_10 / m_00, m_01 / m_00);
the principal direction θ of the feature point P is:
θ = arctan2(m_01, m_10);
generating a characteristic point descriptor:
within the neighborhood S of the feature point P, a random set of n point pairs is generated and represented by a 2 × n matrix:
S = [i_1, …, i_n; j_1, …, j_n]
using the principal direction θ of the feature point and the corresponding rotation matrix R_θ, the rotated 2 × n matrix is calculated:
S_θ = R_θ S;
the binary test criterion in the neighborhood S of the feature point P may be defined as:
τ(Y; i_kθ, j_kθ) = 1 if Y(i_kθ) < Y(j_kθ), and 0 otherwise,
wherein i_θ = i_1θ, …, i_nθ, j_θ = j_1θ, …, j_nθ, τ denotes the binary test criterion, Y(i_kθ) denotes the gray value at point i_kθ, and Y(j_kθ) the gray value at point j_kθ;
a rotation-aware descriptor of the feature point P is now available:
f_k'(P) = Σ_{k=1}^{k'} 2^(k−1) τ(Y; i_kθ, j_kθ)
k' takes 128, 256, or 512, with corresponding descriptor sizes of 16, 32, or 64 bytes respectively, selected according to the real-time performance, storage space, and recognition efficiency requirements of the method.
Further, the matching of the obtained feature points specifically includes:
calculating the Hamming distance D between the descriptors of the feature points of the left and right views for matching:
D(f_L, f_R) = Σ_{k=1}^{k'} f_L(k) ⊕ f_R(k)
wherein f_L and f_R are the descriptors of a feature point in the left view and the right view respectively; when two feature points in the two images have a Hamming distance smaller than the threshold, the two feature points are successfully matched.
Further, the calculating the spatial coordinates of the matched feature points specifically includes:
z = f·d / (u_L − u_R), x = u_L·z / f, y = v_L·z / f
wherein (x, y, z) are the coordinates of the feature point P in space, (u_L, v_L) are the image point coordinates of the feature point P in the left camera, (u_R, v_R) are the image point coordinates in the right camera, f is the focal length of the binocular camera, and d is the baseline distance between the two cameras.
Further, the tile-type map building method, which generates sub-maps by using continuous laser point cloud data, specifically includes:
generating subgraphs from continuous laser point cloud data, wherein every consecutive T seconds of data generates one subgraph, and each subgraph after the first comprises the data from the last t (t < T) seconds of the previous subgraph.
Further, the graph optimization specifically includes:
the poses of the robot are represented as nodes in the graph; the observation information is processed into constraint relations between robot poses, which are represented by the edges connecting the nodes; each node of the graph carries a description of the robot pose, and the state equation of all nodes is X = [x_1, x_2, …, x_n]^T, wherein x_i describes the pose of the robot in the global coordinate system and the edges describe the transformation relations between the poses; the observation equation between node P_i and node P_j can be expressed as:
z_j = x_i T_{i,j}
wherein x_i is the pose, in the global coordinate system, of the robot corresponding to node P_i, x_j is the pose of the robot corresponding to node P_j, z_j is the observed value of node P_j, i.e. the estimated pose obtained by transforming x_i with the transformation matrix, and T_{i,j} is the transformation relation between node P_i and node P_j;
between the robot pose x_j and the estimated pose z_j obtained through the transformation matrix there is an error e(z_j, x_j), calculated as:
e(z_j, x_j) = x_i T_{i,j} − x_j
the error objective function E(x) of the graph with its K edges is:
E(x) = Σ_{k=1}^{K} e(z_k, x_k)^T Ω_k e(z_k, x_k)
wherein the information matrix Ω_k is the inverse of the covariance matrix and is a symmetric matrix; each of its elements ω_{i,j}, acting as an error coefficient, weights the correlation between the errors e(z_i, x_i) and e(z_j, x_j);
an increment Δx is added to x_k; through multiple iterations a convergence value satisfying the objective equation E(X) is finally obtained, the state variable X is updated with the obtained ΔX, and the poses of the nodes in X form the final mapping result.
A robot indoor mapping system based on vision and laser slam comprises a front end and a back end. The front end acquires left and right views of the environment with a binocular vision sensor, extracts feature points from the two views with an ORB algorithm, matches them, and then calculates the coordinates of each feature point in space; it matches the feature points obtained at adjacent moments to solve the pose transformation relation of the robot between those moments, projects the feature points onto the map grid, and defines that grids holding feature point projections have a descriptor attribute. The front end also obtains laser point cloud data of the environment through a laser sensor, denoises it, obtains the pose transformation estimate of the robot by computing an initial pose value, defining a scanning window, and computing confidences, projects the laser point cloud onto the grid map, and updates the confidence of each map grid. The back end performs closed-loop detection and graph optimization on the obtained map.
Compared with the prior art, the invention has the following beneficial effects:
By adopting a mapping method that combines vision and laser, the defect that vision algorithms are strongly affected by illumination is mitigated; and by detecting closed loops with the descriptor attributes of the feature points, the problem of the low success rate of laser closed-loop detection is solved and closed-loop detection efficiency is improved.
Drawings
Fig. 1 shows the robot indoor positioning and mapping process based on vision and laser slam.
FIG. 2 is a schematic diagram of the M16 template.
Detailed Description
The following describes a mobile robot indoor positioning and mapping method based on vision and laser slam in detail with reference to the accompanying drawings.
A mobile robot indoor positioning and mapping method based on vision and laser slam, whose flow is shown in Figure 1. Specifically, the mobile robot indoor positioning and mapping system based on vision and laser slam is divided into a front end and a back end. The front end acquires left and right views of the environment with a binocular vision sensor, extracts feature points from the two views with an ORB algorithm, matches them, and then obtains the coordinates of the feature points in space; it then matches the feature points obtained at adjacent moments to solve the pose transformation relation of the robot between those moments, projects the feature points onto the map grid, and defines that grids onto which feature points project have a descriptor attribute. Meanwhile, the front end obtains laser point cloud data of the environment through a two-dimensional laser sensor, denoises it, obtains the pose transformation estimate of the robot by computing an initial pose value, defining a scanning window, computing confidences, and so on, projects the laser point cloud onto the grid map on this basis, and updates the confidence of each map grid. Finally, to reduce accumulated sensor error, the back end performs closed-loop detection and graph optimization on the obtained map.
First, system front end
1. ORB-based feature point cloud generation
(1) Extracting feature points: the binocular vision sensor simultaneously obtains a left view I_L and a right view I_R of the environment. The ORB (Oriented FAST and Rotated BRIEF) algorithm, a fast feature point extraction and description algorithm, performs FAST (Features from Accelerated Segment Test) corner detection on the left and right views, extracts the feature points, and computes their principal directions to produce oFAST (oriented FAST) features. The specific flow is as follows:
① Gray the left and right views: Y = 0.39R + 0.5G + 0.11B, where Y is the gray value of a grayed pixel and R, G, B are the red, green, and blue color components of the pixel.
② In the left view, take a pixel point P as the center and draw a circle of radius 3; there are 16 pixel points on this circle (the M16 template), as shown in Figure 2. If among these 16 points there exist N consecutive points (9 ≤ N ≤ 12) whose gray values are all greater than Y_P + t or all less than Y_P − t (where Y_P is the gray value of point P and t is a threshold), point P is judged to be a feature point; N is typically set to 12.
③ Establish a coordinate system with the feature point P as the origin, calculate the centroid position C in the neighborhood S, and form the vector PC with the feature point P as the starting point and the centroid C as the end point. The moments of the neighborhood S can be expressed as:
m_pq = Σ_{x,y ∈ S} x^p y^q Y(x,y)
wherein Y(x,y) is the gray value of the image, x, y ∈ [−r, r], r is the radius of the neighborhood S, and p and q take the value 0 or 1; the centroid position of the neighborhood is then:
C = (m_10 / m_00, m_01 / m_00)
The principal direction θ of the feature point P is:
θ = arctan2(m_01, m_10)
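For illustration, a minimal Python sketch of the intensity-centroid orientation computed by the moment formulas above; the function name, the window handling, and the default radius are our own assumptions, not taken from the patent:

```python
import numpy as np

def principal_direction(img, px, py, r=15):
    """Intensity-centroid orientation of a feature point, following
    m_pq = sum of x^p * y^q * Y(x, y) over the circular neighborhood S.
    The point (px, py) must lie at least r pixels from the image border."""
    m01 = m10 = 0.0
    for y in range(-r, r + 1):
        for x in range(-r, r + 1):
            if x * x + y * y > r * r:   # restrict to the circular region S
                continue
            val = float(img[py + y, px + x])
            m10 += x * val              # p = 1, q = 0
            m01 += y * val              # p = 0, q = 1
    return np.arctan2(m01, m10)         # theta = arctan2(m01, m10)
```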
(2) Generating a feature point descriptor: within the neighborhood S of the feature point P, a random set of n point pairs is generated and represented by a 2 × n matrix:
S = [i_1, …, i_n; j_1, …, j_n]
With the principal direction θ obtained in step ③ and the corresponding rotation matrix R_θ, the rotated 2 × n matrix is calculated:
S_θ = R_θ S
The binary test criterion in the neighborhood S of the feature point P may be defined as:
τ(Y; i_kθ, j_kθ) = 1 if Y(i_kθ) < Y(j_kθ), and 0 otherwise,
wherein i_θ = i_1θ, …, i_nθ and j_θ = j_1θ, …, j_nθ, τ denotes the binary test criterion, Y(i_kθ) denotes the gray value at point i_kθ, and Y(j_kθ) the gray value at point j_kθ. A rotation-aware descriptor of the feature point P is now available:
f_k'(P) = Σ_{k=1}^{k'} 2^(k−1) τ(Y; i_kθ, j_kθ)
k' may be 128, 256, or 512, with corresponding descriptor sizes of 16, 32, or 64 bytes respectively, selected according to the real-time performance, storage space, and recognition efficiency requirements of the algorithm.
(3) Matching the feature points: after the descriptors of the feature points of the left and right views are obtained as above, their Hamming distance D is calculated for matching:
D(f_L, f_R) = Σ_{k=1}^{k'} f_L(k) ⊕ f_R(k)
wherein f_L and f_R are the descriptors of a feature point in the left view and the right view respectively. When two feature points in the two images have a Hamming distance smaller than the threshold, the two feature points are successfully matched.
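The oFAST/rBRIEF extraction and the Hamming-distance matching described above correspond closely to the ORB implementation in OpenCV; a sketch (file names and the distance threshold of 40 are assumptions):

```python
import cv2

# Grayscale left and right views (file names are placeholders)
img_left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
img_right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)   # oFAST detection + rBRIEF description
kp_l, des_l = orb.detectAndCompute(img_left, None)
kp_r, des_r = orb.detectAndCompute(img_right, None)

# Brute-force matching on the Hamming distance D; a pair matches when
# D is below a threshold (the value 40 here is an assumption)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = [m for m in bf.match(des_l, des_r) if m.distance < 40]
```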
(4) Calculating the spatial position of the feature points: suppose the image point coordinates of the feature point P are (u_L, v_L) in the left camera and (u_R, v_R) in the right camera; the coordinates (x, y, z) of the feature point P in space are then calculated as:
z = f·d / (u_L − u_R), x = u_L·z / f, y = v_L·z / f
wherein f is the focal length of the binocular camera and d is the baseline distance between the two cameras. The coordinates of all matched feature points are calculated; each feature point then has a unique spatial coordinate and a unique descriptor.
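A small sketch of this triangulation under the parallel binocular model assumed above; the helper name and the optional principal point (cx, cy) are our additions:

```python
def triangulate(uL, vL, uR, vR, f, d, cx=0.0, cy=0.0):
    """Spatial coordinates of a matched feature point: f is the focal
    length in pixels, d the baseline between the two cameras, (cx, cy)
    an optional principal point (zero if coordinates are pre-centered)."""
    disparity = uL - uR            # must be positive for a valid match
    z = f * d / disparity          # depth from disparity
    x = (uL - cx) * z / f
    y = (vL - cy) * z / f
    return x, y, z
```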
2. Robot pose transformation estimation
The laser data are processed to obtain laser point cloud data, which are then filtered: the noise points among the laser reflection points (points that are too close or too far away) are removed, and the remaining points are kept as valid points. The pose of the valid laser point cloud in the laser sensor coordinate system is then calculated.
(1) Pose transformation estimation of the robot and map grid confidence update when the observation is laser
① robot pose initial value estimation
The time point at which each frame of laser data is obtained is taken as the time point of pose calculation. Suppose the pose of the robot at time t is P(t); the previous pose calculation time is t−1 with corresponding pose P(t−1), and the next pose calculation time is t+1 with corresponding pose P(t+1). The linear velocity V(x, y) and angular velocity W of the robot motion are calculated from the pose difference (displacement and rotation angle) between times t−1 and t:
V(x, y) = (displacement between t−1 and t) / (time difference)
W = (rotation angle between t−1 and t) / (time difference)
The initial predicted pose is then corrected with the most recently received odometer and inertial navigation data to obtain the pose estimate (pose_estimated);
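A possible constant-velocity prediction corresponding to the velocity formulas above, before correction with odometer and inertial data; the (x, y, θ) pose representation and the function signature are assumptions:

```python
import numpy as np

def predict_pose(pose_t, pose_t_minus_1, dt_prev, dt_next):
    """Constant-velocity initial pose estimate: velocities come from the
    pose difference between t-1 and t, extrapolated over the next step.
    The result still has to be corrected with odometer / inertial data."""
    v = (np.array(pose_t[:2]) - np.array(pose_t_minus_1[:2])) / dt_prev  # V(x, y)
    w = (pose_t[2] - pose_t_minus_1[2]) / dt_prev                        # W
    x, y = np.array(pose_t[:2]) + v * dt_next
    theta = pose_t[2] + w * dt_next
    return (x, y, theta)  # initial value for pose_estimated
```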
② locating the scanning window
The displacement scanning parameters limit the displacement range of the positioning scanning window: a square extending l cm up, down, left, and right of the pose estimate (pose_estimated). The angle scanning parameters limit the angular range of the window: deviations of up to w degrees on either side of the estimated heading.
③ Determining candidate poses
According to the size of the positioning scanning window, the pose at every scanning angle on every map grid within the window is determined and taken as a candidate pose (pose_candidates).
④ confidence estimation
According to the confidence of each map grid corresponding to each candidate pose (a map grid's confidence value is determined during mapping and is a fixed value during positioning), the confidence candidate_probability of each candidate pose is calculated, wherein m is the total number of map grids hit by the discretized scan data at the scanning angle of the candidate pose, the map coordinate of the nth grid is (x_n, y_n), and the confidence of that grid takes values in the range 0.1 to 0.9.
The confidence weight candidate_weight of each candidate pose is calculated from the pose difference between that candidate pose (pose_candidate) and the pose estimate (pose_estimated), wherein x_offset is the displacement along the x-axis between the candidate pose and pose_estimated, y_offset is the displacement along the y-axis between the candidate pose and pose_estimated, trans_weight is the displacement weight, rotation is the rotation angle between the candidate pose and pose_estimated, and rot_weight is the rotation angle weight;
The product of the confidence candidate_probability and the confidence weight candidate_weight of each candidate pose is taken as the confidence score of that pose:
score = candidate_probability × candidate_weight
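For illustration, one plausible realization of the candidate scoring loop; since the patent's candidate_probability and candidate_weight formulas are given only as figures, the mean-grid-confidence and the exponential deviation penalty below are assumed stand-ins:

```python
import numpy as np

def best_candidate(pose_candidates, grid, pose_estimated,
                   trans_weight=0.1, rot_weight=0.1):
    """Score each candidate pose in the scanning window and return the
    highest-scoring one; grid is a 2D array of cell confidences."""
    best_pose, best_score = None, float("-inf")
    for pose, hit_cells in pose_candidates:
        # candidate_probability: aggregate confidence of the grids hit by the scan
        candidate_probability = np.mean([grid[ix, iy] for ix, iy in hit_cells])
        x_offset = pose[0] - pose_estimated[0]
        y_offset = pose[1] - pose_estimated[1]
        rotation = pose[2] - pose_estimated[2]
        # candidate_weight: penalize deviation from pose_estimated (assumed form)
        candidate_weight = np.exp(-trans_weight * np.hypot(x_offset, y_offset)
                                  - rot_weight * abs(rotation))
        score = candidate_probability * candidate_weight   # as in the text above
        if score > best_score:
            best_pose, best_score = pose, score
    return best_pose, best_score
```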
The candidate pose with the highest confidence score is selected to update the pose estimate (pose_estimated) and is taken as the optimal pose estimate P(t+1). From the obtained optimal pose estimate P(t+1) and the current pose P(t) of the robot, the robot pose transformation T (comprising a rotation matrix and a displacement matrix) is solved; the robot moves to the pose P(t+1) according to the transformation relation T, and on this basis the current point cloud data are projected into the map grids at the corresponding positions. If several points fall repeatedly on the same map grid position, only the map grid coordinate corresponding to one of them in the map coordinate system is taken, and the probability of the grid is updated according to the projection of the laser point cloud. After the first frame of laser points is projected onto the map grids, the confidence of each hit grid is updated.
When the second frame of laser data falls on the two-dimensional grid plane, the confidence of each grid is updated again, each grid having its own state update coefficient. When the nth frame of laser point cloud data falls into the two-dimensional plane grid, the confidence of each grid is updated recursively from the previous frame, wherein P_{n−1}(x_i, y_i) denotes the confidence, when the (n−1)th frame of laser point cloud data falls into the two-dimensional plane grid, of the grid whose coordinate is (x_i, y_i).
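The per-frame update formulas are reproduced in the patent only as figures; a commonly used odds-based update that respects the stated 0.1 to 0.9 clamp, offered purely as an assumed stand-in, is:

```python
def odds(p):
    return p / (1.0 - p)

def update_cell(p_old, p_hit=0.55):
    """Assumed confidence update for one hit grid cell: multiply the
    cell's odds by the odds of a hypothetical hit probability p_hit,
    then clamp the result to the stated 0.1-0.9 range."""
    o = odds(p_old) * odds(p_hit)
    p_new = o / (1.0 + o)
    return min(0.9, max(0.1, p_new))
```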
(2) Pose transformation estimation of the robot when the observations are feature points
Suppose the matched feature point sequence obtained from the left and right views at time t−1 is S = [s_1, s_2, …, s_g], and the matched feature point sequence obtained from the left and right views at time t is P = [p_1, p_2, …, p_h]. The feature point sequences S and P of times t−1 and t are matched, and the pose transformation relation T' of the robot is then obtained from the coordinate changes of the matched feature points and the changes of their principal directions. On this basis the descriptors of the feature points are projected onto the corresponding map grids, and a second attribute of the grids, the feature point descriptor attribute, is defined. A grid of the map then carries both the laser confidence and the feature point descriptor, and the first and second attributes are independent of each other.
3. Generating subgraphs
Subgraphs are generated from continuous laser point cloud data using the tile-type mapping method: every consecutive T seconds of data generates one subgraph, and each subgraph after the first also contains the data from the last t (t < T) seconds of the previous subgraph. For example: the data within 1 s to 40 s are registered to form subgraph A1, the data within 21 s to 60 s form subgraph A2, the data within 41 s to 80 s form subgraph A3, and so on.
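A small Python sketch of this window arithmetic (the function and its parameters are our own illustration), reproducing the example above with T = 40 and t = 20:

```python
def subgraph_windows(total_s, T=40, t=20):
    """Tile-type subgraph intervals: each subgraph covers T seconds and
    re-uses the last t (t < T) seconds of its predecessor."""
    start, windows = 1, []
    while start + T - 1 <= total_s:
        windows.append((start, start + T - 1))
        start += T - t                 # advance by the non-overlapping part
    return windows

print(subgraph_windows(80))            # [(1, 40), (21, 60), (41, 80)]
```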
Second, the back end of the system
1. Closed loop detection
Feature points are extracted with the ORB algorithm from the currently obtained left and right views, and their descriptors are calculated. The obtained descriptors are stored in a set Q = {Q_1, Q_2, …, Q_N}. Assuming M subgraphs have been generated so far, each descriptor in Q is matched against the second-layer attribute, the grid feature point descriptor, of each grid in each subgraph. When the matching success rate between the descriptors in Q and the grid feature point descriptors of some subgraph reaches 90% or more, closed-loop detection succeeds.
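A hedged sketch of this closed-loop test, assuming the subgraph's grid descriptors have been gathered into one matrix; the Hamming threshold is our assumption, while the 90% ratio comes from the text above:

```python
import cv2

def loop_closed(query_des, subgraph_des, max_hamming=40, min_ratio=0.9):
    """Match the current ORB descriptors (set Q) against the feature
    point descriptors collected from one subgraph's grids; report a
    closed loop when at least min_ratio of them match."""
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    good = [m for m in bf.match(query_des, subgraph_des)
            if m.distance < max_hamming]
    return len(good) / max(len(query_des), 1) >= min_ratio
```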
2. Graph optimization
In graph optimization, the poses of the robot are represented as nodes in the graph; the observation information is processed into constraint relations between robot poses, represented by the edges connecting the nodes. Each node of the graph carries a description of the robot pose, and the state equation of all nodes is X = [x_1, x_2, …, x_n]^T, wherein x_i is the pose of the robot in the global coordinate system and the edges describe the transformation relations between the poses. The observation equation between node P_i and node P_j can be expressed as z_j = x_i T_{i,j}, i.e. the robot moves from P_i to P_j through the motion T_{i,j}, where x_i is the pose, in the global coordinate system, of the robot corresponding to node P_i, x_j is the pose of the robot corresponding to node P_j, z_j is the observed value of node P_j, i.e. the estimated pose obtained by transforming x_i with the transformation matrix, and T_{i,j} is the transformation relation between nodes P_i and P_j.
During optimization each node has an estimated value, so the two vertices connected by an edge each carry an estimate, and an error exists between these estimates and the constraint of the edge. That is, ideally we have:
z_j = x_i T_{i,j}
Between the robot pose x_j and the estimated pose z_j obtained through the transformation matrix there is an error e(z_j, x_j):
e(z_j, x_j) = x_i T_{i,j} − x_j
Since every edge contributes a small error term, for a graph with K edges the objective function can be written as:
E(x) = Σ_{k=1}^{K} e(z_k, x_k)^T Ω_k e(z_k, x_k)
wherein the information matrix Ω_k is the inverse of the covariance matrix and is a symmetric matrix. Each of its elements ω_{i,j}, acting as an error coefficient, weights the correlation between the errors e(z_i, x_i) and e(z_j, x_j). The simplest choice is to set Ω_k to a diagonal matrix, the size of whose diagonal elements expresses how much attention is paid to the corresponding error component; other covariance matrices may also be used.
Adding an increment Δx to x_k changes the error value from e(z_k, x_k) to e(z_k, x_k + Δx); the first-order expansion of the error term is:
e(z_k, x_k + Δx) ≈ e(z_k, x_k) + J_k Δx
where J_k is the derivative of e(z_k, x_k) with respect to x_k, in the form of a Jacobian matrix. For the objective function term of the kth edge, further expansion gives:
E(x_k + Δx) = e(z_k, x_k + Δx)^T Ω_k e(z_k, x_k + Δx)
≈ [e(z_k, x_k) + J_k Δx]^T Ω_k [e(z_k, x_k) + J_k Δx]
= e(z_k, x_k)^T Ω_k e(z_k, x_k) + 2 e(z_k, x_k)^T Ω_k J_k Δx + Δx^T J_k^T Ω_k J_k Δx
= C_k + 2 b_k Δx + Δx^T H_k Δx
The terms independent of Δx are collected into the constant term C_k; the first-order coefficient of Δx is written as b_k and the second-order coefficient as H_k.
Rewriting the objective function:
ΔE(x) = 2 b Δx + Δx^T H Δx
To minimize ΔE(x), its derivative with respect to Δx is set to zero, giving:
H Δx = −b, i.e. Δx = −H^{−1} b
The state variable X is updated with the obtained ΔX; through multiple iterations a convergence value of the state variable X satisfying the objective equation is finally obtained, the poses of the robot over the whole motion process are optimized, and the poses of the nodes in X form the final mapping result.
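To make the H Δx = −b iteration concrete, a minimal Gauss-Newton sketch over a simplified pose graph; note that it replaces the patent's multiplicative x_i T_{i,j} with an additive stand-in and ignores angle wrap-around, so it is illustrative only:

```python
import numpy as np

def optimize_graph(x, edges, iters=10):
    """Gauss-Newton for poses flattened into x = [x1, y1, th1, x2, ...].
    Each edge (i, j, t_ij, omega) constrains x_j ~ x_i + t_ij, with
    omega a 3x3 information matrix; the first pose is held fixed."""
    n = len(x)
    for _ in range(iters):
        H = np.zeros((n, n))
        b = np.zeros(n)
        for i, j, t_ij, omega in edges:
            si, sj = slice(3 * i, 3 * i + 3), slice(3 * j, 3 * j + 3)
            e = x[si] + t_ij - x[sj]       # e(z_j, x_j), additive stand-in
            Ji, Jj = np.eye(3), -np.eye(3) # Jacobians of e w.r.t. x_i, x_j
            H[si, si] += Ji.T @ omega @ Ji
            H[si, sj] += Ji.T @ omega @ Jj
            H[sj, si] += Jj.T @ omega @ Ji
            H[sj, sj] += Jj.T @ omega @ Jj
            b[si] += Ji.T @ omega @ e
            b[sj] += Jj.T @ omega @ e
        H[:3, :3] += np.eye(3) * 1e6       # gauge fix: anchor the first pose
        dx = np.linalg.solve(H, -b)        # solve H * dx = -b
        x = x + dx
    return x
```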
The invention adopts a mapping method combining vision and laser: on the one hand it mitigates the defect that vision algorithms are strongly affected by illumination, and on the other hand it detects closed loops using the descriptor attributes of the feature points, solving the problem of the low success rate of laser closed-loop detection and improving closed-loop detection efficiency.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the present invention shall be included within its scope of protection.
Claims (7)
1. A robot indoor map building method based on vision and laser slam is characterized by comprising the following steps:
the visual sensor acquires a left view and a right view, and generates a feature point cloud, which specifically comprises the following steps:
extracting feature points of the left view and the right view,
the obtained feature points are matched with each other,
calculating the space coordinates of the matched characteristic points;
calculating a vision-based pose transformation estimate of the robot, namely performing secondary matching between the feature points matched across the left and right views at the current moment and those matched at the previous moment, obtaining the pose transformation relation of the robot from the coordinate changes and principal direction changes of the secondarily matched feature points, projecting the descriptors of the feature points onto the corresponding map grids, and defining the feature point descriptor attribute of those grids;
obtaining laser data, and calculating the laser-based pose transformation estimate of the robot and the confidence update of the map grids;
generating subgraphs from continuous laser point cloud data using a tile-type mapping method;
closed-loop detection, namely matching the descriptors of the obtained feature points against the feature point descriptor attribute of each grid of each subgraph;
graph optimization, which specifically comprises: representing the poses of the robot by the nodes of a graph and the transformation relations between poses by the edges connecting the nodes; calculating the error between each robot pose and the estimated pose obtained through the transformation matrix; calculating the error objective function of the graph with its several edges; obtaining, through multiple iterations, the robot poses that make the objective function converge; and updating the pose of the robot in each subgraph with these poses to form the final mapping result.
2. The vision and laser slam-based robot indoor mapping method of claim 1, wherein the extracting feature points of the left view and the right view comprises:
performing FAST corner detection on the left view and the right view by using an ORB algorithm, and extracting a characteristic point P;
establishing a coordinate system with the feature point P as the origin, calculating the centroid position C in the neighborhood S, and constructing the vector PC with the feature point P as the starting point and the centroid C as the end point; the moments of the neighborhood S can be expressed as:
m_pq = Σ_{x,y ∈ S} x^p y^q Y(x,y)
wherein Y(x,y) is the gray value of the image, x, y ∈ [−r, r], r is the radius of the neighborhood S, and p and q take the value 0 or 1; the centroid position of the neighborhood is then:
C = (m_10 / m_00, m_01 / m_00);
the principal direction θ of the feature point P is:
θ = arctan2(m_01, m_10);
generating a characteristic point descriptor:
within the neighborhood S of the feature point P, a random set of n point pairs is generated and represented by a 2 × n matrix:
S = [i_1, …, i_n; j_1, …, j_n]
using the principal direction θ of the feature point and the corresponding rotation matrix R_θ, the rotated 2 × n matrix is calculated:
S_θ = R_θ S;
the binary test criterion in the neighborhood S of the feature point P may be defined as:
τ(Y; i_kθ, j_kθ) = 1 if Y(i_kθ) < Y(j_kθ), and 0 otherwise,
wherein i_θ = i_1θ, …, i_nθ, j_θ = j_1θ, …, j_nθ, τ denotes the binary test criterion, Y(i_kθ) denotes the gray value at point i_kθ, and Y(j_kθ) the gray value at point j_kθ;
a rotation-aware descriptor of the feature point P is now available:
f_k'(P) = Σ_{k=1}^{k'} 2^(k−1) τ(Y; i_kθ, j_kθ)
k' takes 128, 256, or 512, with corresponding descriptor sizes of 16, 32, or 64 bytes respectively, selected according to the real-time performance, storage space, and recognition efficiency requirements of the method.
3. The robot indoor mapping method based on vision and laser slam according to claim 2, wherein the matching of the obtained feature points is specifically:
calculating the Hamming distance D between the descriptors of the feature points of the left and right views for matching:
D(f_L, f_R) = Σ_{k=1}^{k'} f_L(k) ⊕ f_R(k)
wherein f_L and f_R are the descriptors of a feature point in the left view and the right view respectively; when two feature points in the two images have a Hamming distance smaller than the threshold, the two feature points are successfully matched.
4. The robot indoor mapping method based on vision and laser slam according to claim 3, wherein the calculating the spatial coordinates of the matched feature points is specifically as follows:
z = f·d / (u_L − u_R), x = u_L·z / f, y = v_L·z / f
wherein (x, y, z) are the coordinates of the feature point P in space, (u_L, v_L) are the image point coordinates of the feature point P in the left camera, (u_R, v_R) are the image point coordinates in the right camera, f is the focal length of the binocular camera, and d is the baseline distance between the two cameras.
5. The robot indoor mapping method based on vision and laser slam according to claim 4, wherein the tile-type mapping method is adopted, and the generation of subgraphs by using continuous laser point cloud data specifically comprises:
generating subgraphs from continuous laser point cloud data, wherein every consecutive T seconds of data generates one subgraph, and each subgraph after the first comprises the data from the last t (t < T) seconds of the previous subgraph.
6. The vision and laser slam-based robot indoor mapping method of claim 5, wherein the mapping optimization specifically is:
the poses of the robot are represented as nodes in the graph; the observation information is processed into constraint relations between robot poses, which are represented by the edges connecting the nodes; each node of the graph carries a description of the robot pose, and the state equation of all nodes is X = [x_1, x_2, …, x_n]^T, wherein x_i is the pose of the robot in the global coordinate system and the edges describe the transformation relations between the poses; the observation equation between node P_i and node P_j can be expressed as:
z_j = x_i T_{i,j}
wherein x_i is the pose, in the global coordinate system, of the robot corresponding to node P_i, x_j is the pose of the robot corresponding to node P_j, z_j is the observed value of node P_j, i.e. the estimated pose obtained by transforming x_i with the transformation matrix, and T_{i,j} is the transformation relation between node P_i and node P_j;
between the robot pose x_j and the estimated pose z_j obtained through the transformation matrix there is an error e(z_j, x_j), calculated as:
e(z_j, x_j) = x_i T_{i,j} − x_j
the error objective function E(x) of the graph with its K edges is:
E(x) = Σ_{k=1}^{K} e(z_k, x_k)^T Ω_k e(z_k, x_k)
wherein the information matrix Ω_k is the inverse of the covariance matrix and is a symmetric matrix; each of its elements ω_{i,j}, acting as an error coefficient, weights the correlation between the errors e(z_i, x_i) and e(z_j, x_j);
an increment Δx is added to x_k; through multiple iterations a convergence value satisfying the objective equation E(X) is finally obtained, the state variable X is updated with the obtained ΔX, and the poses of the nodes in X form the final mapping result.
7. A robot indoor mapping system based on vision and laser slam, characterized in that the system is divided into a front end and a back end; the front end acquires left and right views of the environment with a binocular vision sensor, extracts feature points from the two views with an ORB algorithm, matches them, and then obtains the coordinates of the feature points in space; it matches the feature points obtained at adjacent moments to solve the pose transformation relation of the robot between those moments, projects the feature points onto the map grid, and defines that grids holding feature point projections have a descriptor attribute; the front end obtains laser point cloud data of the environment through a laser sensor, denoises it, obtains the pose transformation estimate of the robot by computing an initial pose value of the robot, defining a scanning window, and computing confidences, projects the laser point cloud onto the grid map, and updates the confidence of each map grid; and the back end performs closed-loop detection and graph optimization on the obtained map.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911257094.XA CN111076733B (en) | 2019-12-10 | 2019-12-10 | Robot indoor map building method and system based on vision and laser slam |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911257094.XA CN111076733B (en) | 2019-12-10 | 2019-12-10 | Robot indoor map building method and system based on vision and laser slam |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111076733A true CN111076733A (en) | 2020-04-28 |
CN111076733B CN111076733B (en) | 2022-06-14 |
Family
ID=70313690
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911257094.XA Active CN111076733B (en) | 2019-12-10 | 2019-12-10 | Robot indoor map building method and system based on vision and laser slam |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111076733B (en) |
Patent Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2179600A1 (en) * | 2007-08-06 | 2010-04-28 | TRX Systems, Inc. | Locating, tracking, and/or monitoring personnel and/or assets both indoors and outdoors |
AU2016348568A1 (en) * | 2015-11-02 | 2018-06-14 | Starship Technologies Oü | Device and method for autonomous localisation |
CN105953796A (en) * | 2016-05-23 | 2016-09-21 | 北京暴风魔镜科技有限公司 | Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone |
CN106909877A (en) * | 2016-12-13 | 2017-06-30 | 浙江大学 | A kind of vision based on dotted line comprehensive characteristics builds figure and localization method simultaneously |
US20180204338A1 (en) * | 2017-01-13 | 2018-07-19 | Otsaw Digital Pte. Ltd. | Three-dimensional mapping of an environment |
CN106793086A (en) * | 2017-03-15 | 2017-05-31 | 河北工业大学 | A kind of indoor orientation method |
CN107356252A (en) * | 2017-06-02 | 2017-11-17 | 青岛克路德机器人有限公司 | A kind of Position Method for Indoor Robot for merging visual odometry and physics odometer |
WO2019040997A1 (en) * | 2017-09-04 | 2019-03-07 | Commonwealth Scientific And Industrial Research Organisation | Method and system for use in performing localisation |
CN109489660A (en) * | 2018-10-09 | 2019-03-19 | 上海岚豹智能科技有限公司 | Robot localization method and apparatus |
CN109857123A (en) * | 2019-03-21 | 2019-06-07 | 郑州大学 | A kind of fusion method of view-based access control model perception and the indoor SLAM map of laser acquisition |
CN110084272A (en) * | 2019-03-26 | 2019-08-02 | 哈尔滨工业大学(深圳) | A kind of cluster map creating method and based on cluster map and the matched method for relocating of location expression |
CN109974712A (en) * | 2019-04-22 | 2019-07-05 | 广东亿嘉和科技有限公司 | It is a kind of that drawing method is built based on the Intelligent Mobile Robot for scheming optimization |
CN110196044A (en) * | 2019-05-28 | 2019-09-03 | 广东亿嘉和科技有限公司 | It is a kind of based on GPS closed loop detection Intelligent Mobile Robot build drawing method |
CN110361027A (en) * | 2019-06-25 | 2019-10-22 | 马鞍山天邦开物智能商务管理有限公司 | Robot path planning method based on single line laser radar Yu binocular camera data fusion |
CN110533722A (en) * | 2019-08-30 | 2019-12-03 | 的卢技术有限公司 | A kind of the robot fast relocation method and system of view-based access control model dictionary |
Non-Patent Citations (2)
Title |
---|
李扬宇: "基于深度传感器的移动机器人视觉SLAM研究" (Research on Visual SLAM of Mobile Robots Based on Depth Sensors), 《中国优秀硕士学位论文全文数据库 信息科技辑》 (China Master's Theses Full-text Database, Information Science and Technology), no. 6, 15 June 2018, pages 140-264 *
林欢等: "基于双三次插值的巡检机器人初始位姿优化" (Initial Pose Optimization of Inspection Robots Based on Bicubic Interpolation), 《机械设计与制造工程》 (Machine Design and Manufacturing Engineering), vol. 47, no. 5, 31 May 2018, pages 56-60 *
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111664843A (en) * | 2020-05-22 | 2020-09-15 | 杭州电子科技大学 | SLAM-based intelligent storage checking method |
CN113835422B (en) * | 2020-06-08 | 2023-09-29 | 杭州海康机器人股份有限公司 | Visual map construction method and mobile robot |
CN113835422A (en) * | 2020-06-08 | 2021-12-24 | 杭州海康机器人技术有限公司 | Visual map construction method and mobile robot |
CN111795687A (en) * | 2020-06-29 | 2020-10-20 | 深圳市优必选科技股份有限公司 | Robot map updating method and device, readable storage medium and robot |
CN111862162A (en) * | 2020-07-31 | 2020-10-30 | 湖北亿咖通科技有限公司 | Loop detection method and system, readable storage medium and electronic device |
CN112116656A (en) * | 2020-08-03 | 2020-12-22 | 歌尔股份有限公司 | Incremental mapping method and device in synchronous positioning and mapping slam |
CN112116656B (en) * | 2020-08-03 | 2024-05-31 | 歌尔股份有限公司 | Incremental mapping method and device in synchronous positioning and map construction slam |
CN112102400A (en) * | 2020-09-15 | 2020-12-18 | 上海云绅智能科技有限公司 | Distance-based closed loop detection method and device, electronic equipment and storage medium |
CN112197773A (en) * | 2020-09-30 | 2021-01-08 | 江苏集萃未来城市应用技术研究所有限公司 | Visual and laser positioning mapping method based on plane information |
CN113126621A (en) * | 2020-10-14 | 2021-07-16 | 中国安全生产科学研究院 | Automatic navigation method of subway carriage disinfection robot |
CN112325872B (en) * | 2020-10-27 | 2022-09-30 | 上海懒书智能科技有限公司 | Positioning method of mobile equipment based on multi-sensor coupling |
CN112325872A (en) * | 2020-10-27 | 2021-02-05 | 上海懒书智能科技有限公司 | Positioning method of mobile equipment based on multi-sensor coupling |
CN112665575A (en) * | 2020-11-27 | 2021-04-16 | 重庆大学 | SLAM loop detection method based on mobile robot |
CN112665575B (en) * | 2020-11-27 | 2023-12-29 | 重庆大学 | SLAM loop detection method based on mobile robot |
CN112650255A (en) * | 2020-12-29 | 2021-04-13 | 杭州电子科技大学 | Robot indoor and outdoor positioning navigation system method based on vision and laser radar information fusion |
CN113189613A (en) * | 2021-01-25 | 2021-07-30 | 广东工业大学 | Robot positioning method based on particle filtering |
CN113112478A (en) * | 2021-04-15 | 2021-07-13 | 深圳市优必选科技股份有限公司 | Pose recognition method and terminal equipment |
CN113112478B (en) * | 2021-04-15 | 2023-12-15 | 深圳市优必选科技股份有限公司 | Pose recognition method and terminal equipment |
CN113624221A (en) * | 2021-06-30 | 2021-11-09 | 同济人工智能研究院(苏州)有限公司 | 2.5D map construction method fusing vision and laser |
CN113624221B (en) * | 2021-06-30 | 2023-11-28 | 同济人工智能研究院(苏州)有限公司 | 2.5D map construction method integrating vision and laser |
CN115880364A (en) * | 2023-02-09 | 2023-03-31 | 广东技术师范大学 | Robot pose estimation method based on laser point cloud and visual SLAM |
CN115880364B (en) * | 2023-02-09 | 2023-05-16 | 广东技术师范大学 | Robot pose estimation method based on laser point cloud and visual SLAM |
Also Published As
Publication number | Publication date |
---|---|
CN111076733B (en) | 2022-06-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111076733B (en) | Robot indoor map building method and system based on vision and laser slam | |
CN113436260B (en) | Mobile robot pose estimation method and system based on multi-sensor tight coupling | |
CN112985416B (en) | Robust positioning and mapping method and system based on laser and visual information fusion | |
CN105856230B (en) | A kind of ORB key frames closed loop detection SLAM methods for improving robot pose uniformity | |
Park et al. | Elastic lidar fusion: Dense map-centric continuous-time slam | |
CN110146099B (en) | Synchronous positioning and map construction method based on deep learning | |
CN112258600A (en) | Simultaneous positioning and map construction method based on vision and laser radar | |
CN112785702A (en) | SLAM method based on tight coupling of 2D laser radar and binocular camera | |
CN110223348A (en) | Robot scene adaptive bit orientation estimation method based on RGB-D camera | |
CN108332752B (en) | Indoor robot positioning method and device | |
CN106153048A (en) | A kind of robot chamber inner position based on multisensor and Mapping System | |
CN104732518A (en) | PTAM improvement method based on ground characteristics of intelligent robot | |
WO2019136613A1 (en) | Indoor locating method and device for robot | |
CN112183171A (en) | Method and device for establishing beacon map based on visual beacon | |
WO2023273169A1 (en) | Vision and laser-fused 2.5d map construction method | |
CN109087394A (en) | A kind of real-time indoor three-dimensional rebuilding method based on inexpensive RGB-D sensor | |
CN108680177B (en) | Synchronous positioning and map construction method and device based on rodent model | |
CN111161334B (en) | Semantic map construction method based on deep learning | |
CN112767546B (en) | Binocular image-based visual map generation method for mobile robot | |
CN116222543B (en) | Multi-sensor fusion map construction method and system for robot environment perception | |
CN111998862A (en) | Dense binocular SLAM method based on BNN | |
CN112752028A (en) | Pose determination method, device and equipment of mobile platform and storage medium | |
CN113674412A (en) | Pose fusion optimization-based indoor map construction method and system and storage medium | |
CN115830070A (en) | Infrared laser fusion positioning method for inspection robot of traction substation | |
CN112767481B (en) | High-precision positioning and mapping method based on visual edge features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |