CN109323697B - Method for rapidly converging particles during starting of indoor robot at any point - Google Patents

Method for rapidly converging particles during starting of indoor robot at any point

Info

Publication number
CN109323697B
Authority
CN
China
Prior art keywords
image
particles
laser
point
points
Prior art date
Legal status
Active
Application number
CN201811349279.9A
Other languages
Chinese (zh)
Other versions
CN109323697A (en)
Inventor
于世宽
闫飞
庄严
于海晨
Current Assignee
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date
Filing date
Publication date
Application filed by Dalian University of Technology
Priority to CN201811349279.9A
Publication of CN109323697A
Application granted
Publication of CN109323697B
Legal status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method for quickly converging particles when an indoor robot is started at an arbitrary point, and belongs to the field of indoor mobile robot positioning. The method constructs a prior map with the Cartographer algorithm, performs grayscale processing on the image formed by the prior map, and extracts the contour of the prior map from the grayscale image with an image gradient algorithm. When the robot starts, the laser data from the first sampling is collected and stored as an image. The laser contour image is matched against the prior-map contour with a linear k-nearest-neighbor (kNN) search algorithm, and regions whose matching degree exceeds a set threshold are taken as candidate regions. Matching yields a list of local candidate regions; when the particles are initialized, the candidate regions are traversed and particles are initialized within them according to mathematical-statistical characteristics. When particles are initialized in a candidate region, their orientations are constrained by the laser data contour rather than randomized.

Description

Method for rapidly converging particles during starting of indoor robot at any point
Technical Field
The invention belongs to the technical field of indoor mobile robot positioning, and relates to a method for quickly converging particles when an indoor robot is started at any point.
Background
Positioning is a prerequisite for indoor robot navigation and a core technology of robot-environment interaction. The most common method for indoor robot positioning is the particle filter, which realizes recursive Bayesian filtering through non-parametric Monte Carlo simulation: a particle swarm is initialized, propagated, scored, and resampled, and the position of the indoor robot, which follows a Gaussian probability distribution, is computed from it. Particle filtering is applicable to any nonlinear system that can be described by a state-space model, such as an indoor dynamic environment.
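To make this loop concrete, the following Python sketch outlines one Monte Carlo localization iteration (predict, score, resample). It is a minimal illustration rather than the patented method: the motion_model and measurement_model callables are placeholders standing in for the robot's odometry and laser-likelihood models.

    import numpy as np

    def mcl_step(particles, weights, control, scan, motion_model, measurement_model):
        # particles: (N, 3) array of (x, y, theta) pose hypotheses.
        # Predict: propagate every particle through the (noisy) motion model.
        particles = motion_model(particles, control)
        # Score: weight each particle by the likelihood of the laser scan.
        weights = weights * measurement_model(scan, particles)
        weights /= weights.sum()
        # Resample when the effective sample size collapses.
        n_eff = 1.0 / np.sum(weights ** 2)
        if n_eff < 0.5 * len(particles):
            idx = np.random.choice(len(particles), size=len(particles), p=weights)
            particles = particles[idx]
            weights = np.full(len(particles), 1.0 / len(particles))
        return particles, weights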
Existing particle filter algorithms use a laser as the ranging sensor. When the robot starts at an arbitrary point, it may be anywhere in the indoor environment, so the traditional particle filter distributes particles representing the robot pose across the entire prior map. Observation data are updated through the robot's own motion (rotation or straight-line travel); the particles are scored and resampled by the particle filter, and the global particle set eventually converges into a single cluster whose variance and mean fall below given thresholds, at which point the robot pose (position and orientation) is computed from the cluster mean. Because the prior map is the full set of positions where the robot may appear, a traditional particle filter applied to a prior map that represents a large real environment must distribute many particles, and resampling them consumes substantial computational resources and time. It is therefore necessary to find a method that lets the robot distribute particles locally when starting at an arbitrary point in the prior map, so that the particles converge quickly.
The literature (Thrun S, Fox D, Burgard W, et al. Robust Monte Carlo localization for mobile robots [J]. Artificial Intelligence, 2001, 128(1): 99-141.) proposes a modified particle-generation algorithm that overturns the conventional pose-sampling process of the particle filter, using a dual sampling approach that mixes two particle filters: the latest sensor measurements generate the particle samples, while odometry data evaluate the consistency of each sample pose with the robot's actual pose. Its performance is superior to that of a conventional particle filter, the importance-weight distribution of the particles is more accurate, and the convergence rate improves accordingly. However, the algorithm demands high sensor accuracy, and common ranging sensors cannot maintain high-accuracy data under high-frequency acquisition.
The literature (ZHANG Heng, FAN Xiao-ping, QU Zhi-hua, et al. Mobile Robot Adaptive Monte Carlo Localization Based on Multiple Hypothesis Tracking. 2007, 33(9): 941-946.) proposes an adaptive dynamic particle-clustering algorithm to optimize the conventional particle filter. The algorithm clusters particles according to their spatial similarity, and the mean of each cluster represents one robot-pose hypothesis. During iteration, each cluster runs an independent particle filter, and when a set condition is met, equally weighted particles are drawn from the global set for re-clustering. The method no longer normalizes particle weights over the global scope, and particles converge faster locally than they would globally. However, the number of particle clusters is unbounded: if the free area of the prior map is too large and many particles are distributed initially, the system must maintain a correspondingly large number of particle filters. The time complexity is too high and the computational resources occupied are too large.
In summary, it is very important to find a method that initializes particles reasonably when an indoor robot starts at an arbitrary point in an indoor scene, so that the particles converge quickly and accurately to the robot's true position.
Disclosure of Invention
Aiming at the slow convergence of particles when the robot starts at an arbitrary point, the invention provides a method for initializing particles locally. The method uses a prior map constructed by the Cartographer algorithm as the observation basis for navigation and positioning. The image formed by the prior map is converted to grayscale, and the contour of the prior map is extracted with an image gradient algorithm. When the robot starts, the laser data from the first sampling is collected and stored as an image, and the contour features of the laser data are extracted, including environmental information such as corner points, tangent points, and inflection points obtained by scanning. The laser contour image is matched against the prior-map contour with a linear k-nearest-neighbor (kNN) search algorithm, and regions whose matching degree exceeds a set threshold are taken as candidate regions. Matching yields a list of local candidate regions; when the particles are initialized, the candidate regions are traversed and particles are initialized within them according to certain mathematical-statistical characteristics.
When particles are initialized in a candidate region, their orientations are constrained by the laser data contour rather than randomized. The orientation component of the robot's true pose is roughly computed from the laser data contour, and the computed orientations are stored in a list. The list is traversed as the particles are distributed: for each orientation in the list, a number of particles are initialized whose orientations follow a Gaussian distribution with that orientation as the mean and a set threshold as the variance. The candidate region therefore contains several particle groups, each conforming to a Gaussian distribution.
The technical scheme of the invention is as follows:
a method for fast convergence of particles when an indoor robot is started at any point comprises the following steps:
step one, constructing a prior map
An indoor two-dimensional prior map is constructed with a mapping algorithm such as the Hector, Gmapping, or Cartographer algorithm; the Cartographer algorithm is preferred. During map construction, Cartographer eliminates the odometry errors introduced in mapping through closed-loop detection. The subgraph is the basic unit of loop detection: a certain amount of laser observation data forms a subgraph, each new observation is inserted at its optimal position in the subgraph based on the existing laser data, local loop closure is performed after a subgraph is completed, and global loop closure is performed once all subgraphs have finished local loop closure. A prior map built by Cartographer remains accurate even in self-similar environments; the mapping result is shown in fig. 1.
Step two, extracting prior map contour and key feature points
(2.1) Perform grayscale processing on the prior map obtained in step one, i.e., convert the prior map image from an RGB image into a grayscale image. Each pixel of the RGB image consists of three channel components R, G, B, while each pixel value of the grayscale image is a single-channel component Y. The formula for converting an RGB image into a grayscale image is as follows:
Y=α*R+β*G+γ*B (1)
where R, G, B denotes the values of the three channels of the image, α, β, γ denote the weights of the three channel values in the gray scale image pixel values, and Y denotes the pixel value of the gray scale image.
(2.2) Converting the prior map into a grayscale image yields a two-dimensional discrete function f(x, y), where (x, y) are the coordinates of a pixel in the grayscale image and f(x, y) is the pixel value at those coordinates. The gradient [Gx, Gy]^T of the two-dimensional discrete function in the x and y directions is obtained:
[Gx, Gy]^T = [∂f(x, y)/∂x, ∂f(x, y)/∂y]^T (2)
Approximating formula (2) by finite differences gives the image gradient expressions:
Gx = f(x+1, y) - f(x, y), Gy = f(x, y+1) - f(x, y) (3)
the contour of the prior map image is extracted by using formula (3) and an image gradient method (as shown in fig. 2).
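As a concrete illustration of formulas (1)-(3), the Python sketch below converts an RGB map image to grayscale and extracts a contour mask by thresholding the gradient magnitude. The channel weights (the common ITU-R BT.601 values) and the threshold are assumptions; the patent leaves α, β, γ and the contour threshold unspecified.

    import numpy as np

    def to_gray(rgb):
        # Formula (1): Y = alpha*R + beta*G + gamma*B (BT.601 weights assumed).
        return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

    def contour_by_gradient(gray, threshold=30.0):
        gx = np.zeros_like(gray, dtype=float)
        gy = np.zeros_like(gray, dtype=float)
        # Formula (3): forward differences approximate the gradient.
        gx[:, :-1] = gray[:, 1:] - gray[:, :-1]   # Gx = f(x+1, y) - f(x, y)
        gy[:-1, :] = gray[1:, :] - gray[:-1, :]   # Gy = f(x, y+1) - f(x, y)
        magnitude = np.hypot(gx, gy)
        return magnitude > threshold              # contour mask of the prior map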
(2.3) After extracting the prior-map contour with formulas (1), (2), and (3), perform explicit feature extraction on the contour image of the prior map with the Scale-Invariant Feature Transform (SIFT) algorithm to extract the key feature points of the contour. The extracted explicit features improve matching accuracy. The feature extraction steps are as follows:
1) taking the Image contour obtained in the step (2.2) as two-dimensional observation data Image (x, y), and generating a two-dimensional observation data matrix;
2) adopting a Gaussian function to perform downsampling processing on the observation data to generate an image pyramid, wherein the number of layers of the image pyramid is determined as follows:
O=[log2(min(H,W))]-2 (4)
wherein H, W represents the number of rows and columns of the two-dimensional observation data matrix;
generating a differential image pyramid according to the obtained image pyramid:
D(x,y)=L(N+1)-L(N) (5)
where L(N+1) denotes the (N+1)-th layer of the down-sampled image pyramid and L(N) the N-th layer.
3) Determine local extreme points from the differential image pyramid: traverse the second through the penultimate layers, comparing each pixel's value with those of its surrounding pixels; when a pixel's value is greater than or smaller than all of its neighbors in both the image domain and the scale domain, that point is a local extreme point.
4) Map the local extreme points found in the differential image pyramid back to the image pyramid and determine their principal directions. Each local extreme point corresponds to a local region in the image pyramid, i.e., a key feature point to be extracted; the orientation histogram of that local region is analyzed, its peaks represent the directions of the neighborhood gradient of the key feature point, and the maximum of the histogram is taken as the key feature point's principal direction.
The feature points of the contour of the prior map extracted in the above steps are as shown in fig. 3.
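In practice, steps 1)-4) are what off-the-shelf SIFT implementations perform internally. A minimal OpenCV sketch follows (the file name is hypothetical; opencv-python 4.4 or later ships SIFT):

    import cv2

    # Load the prior-map contour image produced in step (2.2).
    contour_img = cv2.imread("prior_map_contour.png", cv2.IMREAD_GRAYSCALE)

    # SIFT builds the Gaussian and difference-of-Gaussian pyramids, finds the
    # local extrema, and assigns each keypoint a principal orientation,
    # matching steps 1)-4) above.
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(contour_img, None)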
Step three, processing the first frame of laser data
For the first acquired laser frame, the farther a laser point is from the scanner, the sparser the points become, as shown in fig. 4. Before matching the image contours, the sparse laser points are treated as noise: traverse the laser point cloud, and for each laser point Pg, if the distance from Pg to its nearest laser point exceeds the average nearest-neighbor distance over all laser points, mark Pg as an isolated point; remove all isolated points (see fig. 5). Store the first frame of laser data as an image, and extract the contour and key feature points of this laser image following the procedure of step two.
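A brute-force sketch of this isolated-point filter, assuming a modest 2D point cloud so the O(N²) distance matrix is acceptable:

    import numpy as np

    def remove_isolated_points(points):
        # points: (N, 2) array of 2D laser points.
        diff = points[:, None, :] - points[None, :, :]
        dist = np.linalg.norm(diff, axis=2)
        np.fill_diagonal(dist, np.inf)    # ignore self-distances
        nearest = dist.min(axis=1)        # distance to the nearest neighbor
        # Keep a point only if its nearest neighbor is no farther away than
        # the scan-wide average nearest-neighbor distance.
        return points[nearest <= nearest.mean()]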
step four, matching the laser contour with the prior map contour by adopting a linear nearest neighbor knn searching algorithm
(4.1) Use the laser contour image as the training set T = {l1, l2, l3, l4, ...} and the prior-map contour image as the query set P = {p1, p2, p3, p4, ...}.
(4.2) Cross-match the feature points of the training set against those of the query set, forming the feature point set {(l1, p1), (l1, p2), (l1, p3), (l1, p4), (l2, p1), (l2, p2), ...}; the number of matched pairs is huge, and by itself the matching cannot locate the laser contour's position in the prior map.
(4.3) Filter the feature point set: for each feature point l(x1, y1) in the training set, select the K feature points p(x1, y1), p(x2, y2), ..., p(xk, yk) most similar to it in the query set; if the ratio of the Euclidean distance of each of the first k-1 candidates to that of the k-th candidate is smaller than a given threshold, the first k-1 candidates are accepted as correct matches. The feature points used by the matching algorithm are the SIFT feature points extracted in step (2.3); matching on these features suppresses the noise caused by scale differences between the laser contour and the prior-map contour, brightness changes, and rotation. As shown in fig. 6, the linear k-nearest-neighbor search yields the set of feature points in the prior map that match the laser contour's feature points.
(4.4) Compute the ratio of the distances between matched feature points in the training set and in the query set to obtain the scale ratio K of the laser contour map relative to the prior map, and determine the region for distributing the prior particles from K and the size of the laser contour map. Taking Wj and Hj as the width and height of the laser contour, select the region of width Wj·K and height Hj·K around the matched feature points as the region where the prior particles are distributed (a matching sketch follows this step).
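The OpenCV sketch below combines the ratio-test filtering of (4.3) with the scale estimate of (4.4). Note that OpenCV's query/train naming is the reverse of the training/query sets above, the 0.6 ratio follows the embodiment described later, and the direction of K (map distance over laser distance) is an assumption here; robust code would also average K over many point pairs rather than one.

    import cv2
    import numpy as np

    def match_and_scale(laser_img, map_img, ratio=0.6):
        sift = cv2.SIFT_create()
        kp_l, des_l = sift.detectAndCompute(laser_img, None)
        kp_m, des_m = sift.detectAndCompute(map_img, None)

        # k-nearest-neighbor search with k = 2, then the ratio test: keep a
        # match only if the best neighbor is clearly closer than the second.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(des_l, des_m, k=2)
                if m.distance < ratio * n.distance]

        # Scale ratio K from one pair of matched points: distance between two
        # points in the prior map over the distance between the corresponding
        # points in the laser image.
        p_l = np.float32([kp_l[m.queryIdx].pt for m in good[:2]])
        p_m = np.float32([kp_m[m.trainIdx].pt for m in good[:2]])
        K = np.linalg.norm(p_m[0] - p_m[1]) / np.linalg.norm(p_l[0] - p_l[1])
        return good, K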
Step five, optimizing the orientation of the initialized particles
When the particles are initialized, they are no longer distributed globally but only within the candidate regions of the prior map obtained by laser contour matching. Moreover, because the laser sensor's scanning range spans a wide angle, the robot's true pose necessarily lies inside the laser contour; particles are therefore distributed only within the region enclosed by the laser contour.
According to the laser contour from step three, the distribution of particle orientations is simplified so that most particles face directions close to the orientation of the true pose. As shown in fig. 7, the particle distribution steps are as follows:
1) Connect the opening end points of the laser contour to generate n line segments.
2) Construct the perpendicular bisector of each of the n segments.
3) When distributing the particles, the orientations of the n perpendicular bisectors pointing into the laser contour dominate: orientations are drawn from n Gaussian distributions whose means are the orientations of the n perpendicular bisectors and whose variance is a set value (see the sketch below).
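A sketch of the orientation sampling in 3): particle headings are drawn from an equal-weight mixture of Gaussians, one component per perpendicular-bisector orientation. The variance value is an assumption, since the text only calls it a set value.

    import numpy as np

    def sample_orientations(bisector_angles, n_particles, sigma=0.3):
        rng = np.random.default_rng()
        # Pick one mixture component (one bisector orientation) per particle...
        means = rng.choice(np.asarray(bisector_angles), size=n_particles)
        # ...then draw the heading from a Gaussian around that orientation.
        theta = rng.normal(means, sigma)
        return (theta + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)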
Step six, convergence of particles
(6.1) Importance weight assignment and pose update: each particle is assigned an importance weight; the initially distributed particles receive equal weights, which are normalized. Particle poses are updated according to the robot's motion and the accumulated odometry data.
(6.2) Particle weight update and resampling: each particle's weight is updated from the observation data, and the particles are resampled according to their weights so that they converge to a Gaussian posterior with smaller variance. As the iteration proceeds, the robot pose can be estimated from an increasingly accurate posterior (a resampling sketch follows).
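The patent does not fix a particular resampler; a common choice for (6.2) is low-variance (systematic) resampling, sketched here:

    import numpy as np

    def low_variance_resample(particles, weights):
        # One comb of n evenly spaced pointers with a single random offset.
        n = len(weights)
        positions = (np.arange(n) + np.random.uniform()) / n
        cumulative = np.cumsum(weights)
        cumulative[-1] = 1.0              # guard against floating-point round-off
        idx = np.searchsorted(cumulative, positions)
        # Resampled set: equal weights, duplicates of high-weight particles.
        return particles[idx], np.full(n, 1.0 / n)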
The invention has the following beneficial effects: when the robot starts, particles are no longer distributed globally. After the laser contour is matched against the prior-map contour, the particles representing the robot pose are distributed within the candidate regions, and the orientations of most particles are constrained toward the robot's true heading, so during global positioning the particles converge rapidly near the robot's true position.
Drawings
FIG. 1 is a schematic diagram of a static layer of an a priori map.
Fig. 2 is a prior map static layer profile.
Fig. 3 is a feature point extracted from a static layer profile of a prior map.
Fig. 4 is a diagram of raw laser data.
FIG. 5 is a schematic of laser data for filtering outliers.
Fig. 6 is a graph of the effect of the laser profile map and the prior map profile matching.
Fig. 7 is a schematic diagram of a particle orientation distribution strategy.
Detailed Description
The following detailed description of the invention refers to the accompanying drawings.
The scheme selects a two-dimensional laser scanner (Hokuyo UTM-30LX) with a ranging range of 0.06 m to 30 m as the laser ranging sensor. Its planar scanning range is 270° with an angular resolution of 0.25°, i.e., one sample every 0.25° across the 270° range, giving 1080 laser points per frame at a frequency of 40 Hz.
A method for fast convergence of particles when an indoor robot is started at any point comprises the following steps:
step one, constructing a prior map
An indoor two-dimensional prior map is constructed with a mapping algorithm; the Cartographer algorithm is selected. The scene is a double-closed-loop corridor with a high degree of self-similarity (as shown in figs. 1 and 2). A two-wheel-drive Pioneer 3 mobile robot serves as the motion-control carrier, the Cartographer algorithm builds the two-dimensional prior map of the known environment, and only the static layer of the prior map is used.
During map construction, Cartographer eliminates the odometry errors introduced in mapping through closed-loop detection. The subgraph is the basic unit of loop detection: a certain amount of laser observation data forms a subgraph, each new observation is inserted at its optimal position in the subgraph based on the existing laser data, local loop closure is performed after a subgraph is completed, and global loop closure is performed once all subgraphs have finished local loop closure. A prior map built by Cartographer remains accurate even in self-similar environments; the mapping result is shown in fig. 1.
Step two, extracting prior map contour and key feature points
(2.1) Perform grayscale processing on the prior map obtained in step one, i.e., convert the prior map image from an RGB image into a grayscale image. Each pixel of the RGB image consists of three channel components R, G, B, while each pixel value of the grayscale image is a single-channel component Y. The formula for converting an RGB image into a grayscale image is as follows:
Y=α*R+β*G+γ*B (1)
where R, G, B denotes the values of the three channels of the image, α, β, γ denote the weights of the three channel values in the gray scale image pixel values, and Y denotes the pixel value of the gray scale image.
(2.2) Converting the prior map into a grayscale image yields a two-dimensional discrete function f(x, y), where (x, y) are the coordinates of a pixel in the grayscale image and f(x, y) is the pixel value at those coordinates. The gradient [Gx, Gy]^T of the two-dimensional discrete function in the x and y directions is obtained:
[Gx, Gy]^T = [∂f(x, y)/∂x, ∂f(x, y)/∂y]^T (2)
Approximating formula (2) by finite differences gives the image gradient expressions:
Gx = f(x+1, y) - f(x, y), Gy = f(x, y+1) - f(x, y) (3)
the contour of the prior map image is extracted by using formula (3) and an image gradient method (as shown in fig. 2).
(2.3) After extracting the prior-map contour with formulas (1), (2), and (3), perform explicit feature extraction on the contour image of the prior map with the Scale-Invariant Feature Transform (SIFT) algorithm, extracting the key feature points of the contour, including interest points such as corners and inflection points in the image. The extracted explicit features improve matching accuracy. The feature extraction steps are as follows:
1) taking the Image contour obtained in the step (2.2) as two-dimensional observation data Image (x, y), and generating a two-dimensional observation data matrix;
2) adopting a Gaussian function to perform downsampling processing on the observation data to generate an image pyramid, wherein the number of layers of the image pyramid is determined as follows:
O=[log2(min(H,W))]-2 (4)
wherein H, W represents the number of rows and columns of the two-dimensional observation data matrix;
generating a differential image pyramid according to the obtained image pyramid:
D(x,y)=L(N+1)-L(N) (5)
where L(N+1) denotes the (N+1)-th layer of the down-sampled image pyramid and L(N) the N-th layer.
3) Determine local extreme points from the differential image pyramid: traverse the second through the penultimate layers, comparing each pixel's value with those of its surrounding pixels; when a pixel's value is greater than or smaller than all of its neighbors in both the image domain and the scale domain, that point is a local extreme point.
4) Map the local extreme points found in the differential image pyramid back to the image pyramid and determine their principal directions. Each local extreme point corresponds to a local region in the image pyramid, i.e., a key feature point to be extracted; the orientation histogram of that local region is analyzed, its peaks represent the directions of the neighborhood gradient of the key feature point, and the maximum of the histogram is taken as the key feature point's principal direction.
The feature points of the contour of the prior map extracted in the above steps are as shown in fig. 3.
Step three, processing the first frame of laser data
For the first acquired laser frame, the farther a laser point is from the scanner, the sparser the points become, as shown in fig. 4. Before matching the image contours, the sparse laser points are treated as noise: traverse the laser point cloud, and for each laser point Pg, if the distance from Pg to its nearest laser point exceeds the average nearest-neighbor distance over all laser points, mark Pg as an isolated point; remove all isolated points (see fig. 5). Store the first frame of laser data as an image, and extract the contour and key feature points of this laser image following the procedure of step two.
step four, matching the laser contour with the prior map contour by adopting a linear nearest neighbor knn searching algorithm
(4.1) Use the laser contour image as the training set T = {l1, l2, l3, l4, ...} and the prior-map contour image as the query set P = {p1, p2, p3, p4, ...}.
(4.2) Cross-match the feature points of the training set against those of the query set, forming the feature point set {(l1, p1), (l1, p2), (l1, p3), (l1, p4), (l2, p1), (l2, p2), ...}; the number of matched pairs is huge, and by itself the matching cannot locate the laser contour's position in the prior map.
(4.3) Filter the feature point set: for each feature point l(x1, y1) in the training set, select the K feature points p(x1, y1), p(x2, y2), ..., p(xk, yk) most similar to it in the query set; if the ratio of the Euclidean distance of each of the first k-1 candidates to that of the k-th candidate is smaller than a given threshold, the first k-1 candidates are accepted as correct matches. The feature points used by the matching algorithm are the SIFT feature points extracted in step (2.3); matching on these features suppresses the noise caused by scale differences between the laser contour and the prior-map contour, brightness changes, and rotation. As shown in fig. 6, the linear k-nearest-neighbor search yields the set of feature points in the prior map that match the laser contour's feature points.
(4.4) Compute the ratio of the distances between matched feature points in the training set and in the query set to obtain the scale ratio K of the laser contour map relative to the prior map, and determine the region for distributing the prior particles from K and the size of the laser contour map. As illustrated in fig. 6, feature points m and n correspond to e and f respectively, and the ratio of the distance between m and n to the distance between e and f equals the scale ratio K of the laser contour relative to the prior map. Taking Wj and Hj as the width and height of the laser contour, select the region of width Wj·K and height Hj·K around the matched feature points as the region where the prior particles are distributed.
Step five, optimizing the orientation of the initialized particles
When the particles are initialized, they are no longer distributed globally but only within the candidate regions of the prior map obtained by laser contour matching. Moreover, because the laser sensor's scanning range spans a wide angle, the robot's true pose necessarily lies inside the laser contour; particles are therefore distributed only within the region enclosed by the laser contour.
According to the laser contour from step three, the distribution of particle orientations is simplified so that most particles face directions close to the orientation of the true pose. As shown in fig. 7, the particle distribution steps are as follows:
1) Connect the opening end points of the laser contour to form line segments mn, de, and kh.
2) Construct the perpendicular bisectors of the three segments: O1A, O3C, and O2B.
3) Distribute the particles with the orientations of O1A, O3C, and O2B dominating: the dominant distribution consists of three Gaussian distributions whose means are the orientations of O1A, O3C, and O2B and whose variance is the initialization value.
Step six, convergence of particles
(6.1) Importance weight assignment and pose update: each particle is assigned an importance weight; the initially distributed particles receive equal weights, which are normalized. Particle poses are updated according to the robot's motion and the accumulated odometry data.
(6.2) Particle weight update and resampling: each particle's weight is updated from the observation data, and the particles are resampled according to their weights so that they converge to a Gaussian posterior with smaller variance. As the iteration proceeds, the robot pose can be estimated from an increasingly accurate posterior.
The laser contour is matched against the prior-map contour with the linear k-nearest-neighbor search algorithm, with k = 2 and a nearest-to-second-nearest distance ratio of 0.6. Matching yields the particle distribution region, and the particles are then updated. In the traditional method, 3000-5000 particles are distributed globally; with the robot rotating in place at an angular speed of 0.4 rad/s, convergence requires turning through 4π. With the local particle initialization scheme of this method, only 200-400 particles are distributed; rotating in place at 0.4 rad/s, convergence requires turning through only π.

Claims (2)

1. A method for fast convergence of particles when an indoor robot is started at any point is characterized by comprising the following steps:
step one, constructing a prior map
Constructing an indoor two-dimensional prior map by using a mapping algorithm;
step two, extracting prior map contour and key feature points
(2.1) carrying out gray level processing on the prior map obtained in the step one, namely converting the prior map image from an RGB image into a gray level image; each pixel point of the RGB image consists of R, G, B three channel components, and each pixel value of the gray image is a single channel component Y; the formula for converting an RGB image into a grayscale image is as follows:
Y=α*R+β*G+γ*B (1)
wherein R, G, B represents the values of the three channels of the image, α, β, γ represent the weights of the three channel values in the pixel values of the grayscale image, and Y represents the pixel values of the grayscale image;
(2.2) converting the prior map into a grayscale image to obtain a two-dimensional discrete function f(x, y), wherein (x, y) represents the coordinates of a pixel in the grayscale image and f(x, y) represents the pixel value at those coordinates; the gradient [Gx, Gy]^T of the two-dimensional discrete function in the x and y directions is obtained:
[Gx, Gy]^T = [∂f(x, y)/∂x, ∂f(x, y)/∂y]^T (2)
solving formula (2) by finite differences gives the image gradient expressions:
Gx = f(x+1, y) - f(x, y), Gy = f(x, y+1) - f(x, y) (3)
extracting the outline of the prior map image by using a formula (3) and an image gradient method;
(2.3) adopting an SIFT algorithm to perform explicit feature extraction on the profile map image of the prior map, and extracting key feature points of the profile map; the method comprises the following specific steps:
1) taking the Image contour obtained in the step (2.2) as two-dimensional observation data Image (x, y), and generating a two-dimensional observation data matrix;
2) adopting a Gaussian function to perform downsampling processing on the observation data matrix to generate an image pyramid, wherein the number of layers of the image pyramid is determined as follows:
O=[log2(min(H,W))]-2 (4)
wherein H, W represents the number of rows and columns of the two-dimensional observation data matrix;
generating a differential image pyramid according to the obtained image pyramid:
D(x,y)=L(N+1)-L(N) (5)
wherein L(N+1) represents the (N+1)-th layer down-sampled image, and L(N) represents the N-th layer down-sampled image;
3) determining local extreme points by the differential image pyramid, specifically: traversing the second layer to the penultimate layer of the differential image pyramid, respectively comparing the pixel value of each pixel point with the pixel value of surrounding pixel points, and when the pixel value of a certain pixel point is larger than or smaller than all adjacent points of the image domain and the scale domain, obtaining a local extreme point;
4) returning the local extreme points obtained from the differential image pyramid to the image pyramid, and determining the main directions of the local extreme points; the local extreme point corresponds to a local area in the image pyramid, namely, a key feature point to be extracted, a direction histogram of the local area in the image pyramid is analyzed, the peak value of the direction histogram represents the direction of the neighborhood gradient of the key feature point, and the maximum value in the direction histogram is taken as the main direction of the key feature point;
step three, processing the first frame of laser data
For the laser data acquired for the first time, before matching the image contours, performing noise processing on sparse laser points: traversing the laser point cloud, and for each laser point Pg in the point cloud, when the distance between Pg and its nearest laser point is larger than the average nearest-neighbor distance of all laser points, setting Pg as an isolated point, and removing all isolated points; storing the first frame of laser data in an image format, and extracting the contour and key feature points of the first-frame laser image in the manner of step two;
step four, matching the laser contour with the prior map contour by adopting a linear nearest neighbor knn searching algorithm
(4.1) using the laser contour image as the training set T = {l1, l2, l3, l4, ...} and the prior-map contour image as the query set P = {p1, p2, p3, p4, ...};
(4.2) cross-matching the feature points in the training set with the feature points in the query set to form the feature point set {(l1, p1), (l1, p2), (l1, p3), (l1, p4), (l2, p1), (l2, p2), ...};
(4.3) filtering the feature point set: for each feature point l(x1, y1) in the training set, selecting the K feature points p(x1, y1), p(x2, y2), ..., p(xk, yk) most similar to it in the query set; if the ratio of the Euclidean distance of each of the first k-1 candidates to that of the k-th candidate is smaller than a given threshold, the first k-1 candidates are correct matches; finally obtaining the set of feature points in the prior map that match the laser contour feature points;
(4.4) calculating the ratio of the distances between matched feature points in the training set and the query set to obtain the scale ratio K of the laser contour map relative to the prior map, and determining the region for distributing the prior particles according to K and the size of the laser contour map; taking Wj and Hj as the width and height of the laser contour, selecting the region of width Wj·K and height Hj·K around the matched feature points as the region for distributing the prior particles;
step five, optimizing the orientation of the initialized particles
When the particles are initialized, the particles are not distributed globally any more, and the particles are distributed only in a candidate area of a prior map obtained by laser contour line matching; according to the laser profile in the third step, the distribution of the orientation of the particles is simplified, and most of the particles are oriented to the direction close to the orientation of the real pose; the particle distribution steps are as follows:
1) respectively connecting the opening end points of the laser contour lines to generate n line segments;
2) respectively making vertical bisectors of the n line segments;
3) when the particles are distributed, the orientations of the n perpendicular bisectors pointing into the laser contour dominate; the dominant distribution consists of n Gaussian distributions whose means are the orientations of the n perpendicular bisectors and whose variance is a set value;
step six, convergence of particles
(6.1) importance weight assignment and pose updating: each particle is assigned an importance weight; the robot's initially distributed particles have equal weights, which are normalized; the particle poses are updated according to the robot's motion and the accumulated odometry data;
(6.2) updating and resampling the particle weight: updating the weight of each particle according to the observation data, and resampling according to the weight of the particles to make the particles converge to Gaussian posterior distribution with smaller variance; along with the progress of the iterative process, the pose of the robot can be estimated according to more accurate posterior distribution.
2. The method for fast convergence of particles at any-point start-up of an indoor robot as claimed in claim 1, wherein the mapping algorithm comprises the Hector algorithm, the Gmapping algorithm, or the Cartographer algorithm.
CN201811349279.9A 2018-11-13 2018-11-13 Method for rapidly converging particles during starting of indoor robot at any point Active CN109323697B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811349279.9A CN109323697B (en) 2018-11-13 2018-11-13 Method for rapidly converging particles during starting of indoor robot at any point

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811349279.9A CN109323697B (en) 2018-11-13 2018-11-13 Method for rapidly converging particles during starting of indoor robot at any point

Publications (2)

Publication Number Publication Date
CN109323697A CN109323697A (en) 2019-02-12
CN109323697B true CN109323697B (en) 2022-02-15

Family

ID=65260943

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811349279.9A Active CN109323697B (en) 2018-11-13 2018-11-13 Method for rapidly converging particles during starting of indoor robot at any point

Country Status (1)

Country Link
CN (1) CN109323697B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110044358B (en) * 2019-04-29 2020-10-02 清华大学 Mobile robot positioning method based on field line characteristics
CN112215887B (en) * 2019-07-09 2023-09-08 深圳市优必选科技股份有限公司 Pose determining method and device, storage medium and mobile robot
CN111044036B (en) * 2019-12-12 2021-10-15 浙江大学 Remote positioning method based on particle filtering
CN111474535B (en) * 2020-03-18 2022-03-15 广东省智能机器人研究院 Mobile robot global positioning method based on characteristic thermodynamic diagram
CN112965076B (en) * 2021-01-28 2024-05-24 上海思岚科技有限公司 Multi-radar positioning system and method for robot

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184551A (en) * 2011-05-10 2011-09-14 东北大学 Automatic target tracking method and system by combining multi-characteristic matching and particle filtering
CN103395435A (en) * 2013-08-21 2013-11-20 重庆大学 Real-time positioning system method of high-precision and high-speed train
WO2015083875A1 (en) * 2013-12-03 2015-06-11 전자부품연구원 Method and mobile system for estimating camera location through generation and selection of particle
CN107094319A (en) * 2016-02-17 2017-08-25 王庆文 A kind of high-precision indoor and outdoor fusion alignment system and method
CN108470354A (en) * 2018-03-23 2018-08-31 云南大学 Video target tracking method, device and realization device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Robust Monte Carlo localization for mobile robots; Sebastian Thrun et al.; Artificial Intelligence; May 2001; Vol. 128, No. 1; pp. 99-141 *
Robust Place Recognition and Loop Closing in Laser-Based SLAM for UGVs in Urban Environments; Fengkui Cao et al.; IEEE Sensors Journal; 2018-03-15; Vol. 18, No. 10; pp. 4242-4252 *

Also Published As

Publication number Publication date
CN109323697A (en) 2019-02-12

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant