CN107741745A - A method for realizing autonomous positioning and map construction of a mobile robot - Google Patents
A method for realizing autonomous positioning and map construction of a mobile robot
- Publication number
- CN107741745A (application CN201710847755.9A)
- Authority
- CN
- China
- Prior art keywords
- robot
- pose
- moment
- environmental
- environment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/08—Control of attitude, i.e. control of roll, pitch, or yaw
- G05D1/0891—Control of attitude, i.e. control of roll, pitch, or yaw specially adapted for land vehicles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
Abstract
The present invention relates to the technical field of robot navigation and provides a method for realizing autonomous positioning and map construction of a mobile robot. The invention proposes an algorithm for simultaneous localization and mapping in which, in a complex environment, the robot motion trajectory is divided into straight-line and arc segments, each with its own pose estimation mode; on the basis of the obtained robot pose, a processing method for environmental features updates the map features, so that a more accurate robot pose and environment map are obtained. The invention has good adaptability and reliability in complex environments. It ensures that the robot pose error remains within a very small, stable range during long-term motion; on the basis of the established accurate pose, the features of the surrounding environment are acquired to obtain an environment map, and autonomous positioning and navigation are then realized on the basis of that map. The invention avoids the possibility that robot inaccuracy causes the whole map to deviate, and has good adaptability in various complex environments.
Description
Technical Field
The invention belongs to the technical field of robot navigation, and relates to a method for realizing autonomous positioning and map construction of a mobile robot.
Background
The autonomous positioning and the map construction of the mobile robot are key technologies for realizing autonomous navigation of the robot in an unknown environment, have wide application prospects, and are one of hot problems in current robot research. In an unknown environment, the robot describes the environment according to information provided by a sensor carried by the robot, and then a map feature model is constructed to obtain the accurate position of the map feature, but the establishment of the environment map depends on the observation position of the robot at each moment. Therefore, a dilemma is faced during the movement of the robot: 1) In order to construct an environment map, the robot needs to know the actual position of the robot at each moment; 2) In order for a robot to know the actual position at each time, it is necessary to know the map characteristics of the environment around the robot.
At present, in order to solve the above problems, many researchers propose to combine the positioning of the robot and the map construction together and perform the positioning and the map construction alternately, correct errors of the odometer according to the created map features to obtain an accurate pose estimation of the robot, and process environmental features according to the obtained pose estimation of the robot to obtain an environmental map. The method for combining robot positioning and map construction is basically feasible, but in many methods, the situation that the pose estimation of the robot has large deviation in different scenes is ignored, and when the pose estimation of the robot has large errors, the positions of map features are not accurate enough, so that the accuracy of the whole map is greatly influenced.
Disclosure of Invention
The invention aims to solve the technical problem of overcoming the defects in the prior art and provides a method for realizing autonomous positioning and map construction of a mobile robot.
In order to solve the technical problem, the solution of the invention is as follows:
the method for realizing autonomous positioning and map construction of the mobile robot is characterized in that an odometer, a compass, and a lidar are installed on the mobile robot; the pose of the robot is obtained using the odometer information; the initial target course is obtained using the compass, and the deflection angle of the robot at a corner is obtained from the difference of compass readings; and the distance and angle information of surrounding environmental features is obtained using the lidar;
the method specifically comprises the following steps:
Step one: given the robot pose s_{t-1}, the control quantity u_{t-1}, and the environmental features m_{t-1} at time t-1, estimate the robot pose s_t at time t;
The method specifically comprises the following steps:
(1) At time t = 1, the initial pose of the robot is s_0 and the obtained initial environmental feature set is m_0 = {m_0^1, m_0^2, …, m_0^n}, where m_0^i denotes the position of the i-th feature point at t = 1;
to obtain the optimal pose estimate of the robot at the next moment, first predict the robot state at the next moment:

p(s_t | z_{1:t-1}, u_{1:t-1}) = ∫ p(s_t | s_{t-1}, u_{t-1}) p(s_{t-1} | z_{1:t-1}, u_{1:t-2}) ds_{t-1}   (1.1)

When the above formula is taken at t = 1, the pose of the robot at the next moment is estimated from the initial moment. In the above formula, ŝ_t is the predicted position of the robot, p(·) denotes the probability density function of a random variable, and p(·|·) denotes the conditional probability density function.
The recursion in the above formula obtains the posterior probability of the robot pose at time t by Bayesian filtering; the posterior is approximated with a particle filter to obtain the proposal distribution (1.2): at time t, the posterior probability distribution of each particle's pose satisfies

s_t^(j) ~ p(s_t | s_{t-1}^(j), u_{t-1})   (1.2)

where s_t^(j) is the pose of the j-th particle (j = 1, 2, …, m) at time t and s_{t-1} is the position of the robot at time t-1;
(2) Add the observation z_t of the surrounding environment at time t to the state prediction equation and correct the predicted value using the environmental observation to obtain a more accurate robot pose:

p(s_t | z_{1:t}, u_{1:t-1}) ∝ p(z_t | s_t) p(s_t | z_{1:t-1}, u_{1:t-1})   (1.3)

In the above formula, ŝ_t is the optimal robot pose after the predicted position is corrected with the observation. After the external measurement z_t is added to the posterior probability distribution, the predicted pose of the robot can be updated; while updating the particles' predicted poses, the weight of the j-th particle at time t is computed as w_t^(j) = p(z_t | s_t^(j));
(3) After obtaining the weight w_t^(j) of each particle with the observation added, the particles are judged by weight: particles with weight less than 0.5 are discarded, giving a new particle set. This set of particles is normalized:

w̃_t^(j) = w_t^(j) / Σ_k w_t^(k)

After the particle weights are normalized, the particles are weighted and summed to obtain the optimal estimated pose of the robot:

ŝ_t = Σ_j w̃_t^(j) s_t^(j)
(4) When t = 1, the above process yields the optimal pose estimate of the robot at the next moment (t = 2) together with the remaining particle set; iterating successively gives the optimal pose estimate ŝ_t of the robot at time t. In each iteration, however, the decision variable N_eff must be computed:

N_eff = 1 / Σ_j (w̃_t^(j))^2

If N_eff > m/2, the particles are sufficient to predict the pose of the robot at the next moment, and the method proceeds to the next step; if N_eff ≤ m/2, the particles are too few to provide enough information for the predicted state of the robot at the next moment, so the robot pose must be resampled;
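The effective-particle check above can be sketched in a few lines (a minimal illustration; the function names and the uniform-weight example are not from the patent):

```python
import numpy as np

def effective_particle_count(weights):
    """N_eff = 1 / sum(w_j^2) over normalized weights, a standard
    degeneracy measure for particle filters."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize, as in step (3)
    return 1.0 / np.sum(w ** 2)

def needs_resampling(weights, m):
    """Resample when N_eff <= m/2, per the decision rule above."""
    return effective_particle_count(weights) <= m / 2

# Uniform weights give N_eff equal to the particle count m.
print(effective_particle_count([0.25, 0.25, 0.25, 0.25]))  # 4.0
```

With one dominant particle, N_eff collapses toward 1 and the check triggers resampling.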
after the optimal estimated pose of the robot is obtained, acquiring surrounding environment feature points by using a laser radar, processing the acquired environment feature points by using extended Kalman filtering, and performing data association with the environment feature points at the previous moment so as to update an environment map;
the method specifically comprises the following steps:
(1) When t = 1, the robot acquires an initial environmental feature set m_0; at the next moment, the robot moves to a new position, and to obtain the optimal position estimate of the environmental features, the features at time t are predicted from the state at time t-1:

θ̂_t = f(θ_{t-1}) + W_{t-1}   (2.1)

When t = 1 the above equation represents prediction of the next position from the initial position; θ̂_t is the predicted quantity of the environmental features at time t, f(·) is the process prediction function, and W_{t-1} is process Gaussian noise satisfying

E[W_t] = 0
E[W_k W_j^T] = Q_k δ_kj

where Q_k is the noise covariance;
(2) Having obtained the prediction θ̂_t of the environmental features, the observation of the features is computed as follows:

z_t = h(θ_t) + V_t   (2.2)

where h(·) is the observation function of the motion and V_t is the measurement Gaussian noise at time t, satisfying

E[V_t] = 0
E[V_k V_j^T] = R_k δ_kj

where R_k is the noise covariance;
(3) The obtained environmental-feature prediction θ̂_t and the observation z_t are fused to obtain the optimal position estimate of the environmental features:

θ_t = θ̂_t + K_t (z_t − h(θ̂_t))   (2.3)
K_t = P_{t,t-1} H_t^T (H_t P_{t,t-1} H_t^T + R_t)^{-1}

In the above formulas, θ_t is the optimal estimate of the environmental feature points at time t, z_t is the observation of the environmental features at time t, K_t is the Kalman gain, P_{t,t-1} is the error matrix at time t predicted from time t-1, R_t is the covariance matrix of the observation noise, H_t is the measurement (Jacobian) matrix, and A_{t-1} is the Jacobian matrix of the process function f(·); when t = 1, the initial error matrix P_0, observation noise R_0, measurement matrix H_1, and Jacobian A_0 are known.
After the optimal position estimate of the environmental features is obtained, the error matrix is updated for the next iteration as follows:

P_t = [I − K_t H_t] P_{t,t-1}
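The fusion step (2.3) and the error-matrix update can be sketched as follows (a hedged illustration assuming a linear observation model h(θ) = Hθ; all names are illustrative, not from the patent):

```python
import numpy as np

def ekf_feature_update(theta_pred, P_pred, z, H, R):
    """One EKF correction step for an environmental-feature estimate:
    K = P H^T (H P H^T + R)^-1, theta = theta_pred + K (z - H theta_pred),
    P = (I - K H) P, matching formula (2.3) and the error-matrix update."""
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain K_t
    theta = theta_pred + K @ (z - H @ theta_pred)
    P = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred
    return theta, P

# 1-D check: prior mean 0 with variance 1, measurement 2 with variance 1.
theta, P = ekf_feature_update(np.array([0.0]), np.eye(1),
                              np.array([2.0]), np.eye(1), np.eye(1))
print(theta, P)   # posterior mean 1.0, posterior variance 0.5
```

With equal prior and measurement variances the posterior mean lands halfway between them, as expected of a Kalman update.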
(4) Having obtained the optimal positions of the environmental feature points, the environment at this moment must be associated with the environment at the previous moment to obtain an updated map; the Mahalanobis distance is therefore used to judge whether the features acquired at the two moments are the same environmental feature. When computing the Mahalanobis distance, to obtain the data association more quickly and conveniently, the distance between the environmental-feature prediction θ̂_t (from time t-1 to time t) and the environmental-feature observation z_t is computed directly:

D^2 = (z_t − θ̂_t)^T S_t^{-1} (z_t − θ̂_t)

where S_t is a covariance matrix and ε is a threshold constant with value 0.95.
When D^2 is smaller than ε, the two environmental features are judged to be associated, i.e. they are the same environmental feature, and the old map is updated; when D^2 is greater than the limit ε, the two environmental features are judged unrelated, i.e. they are different environmental features; the observed feature is then added to the old map, expanding the original map;
the set of environmental feature sets obtained after association isThen the optimal robot position and posture are obtained according to the set of environmental characteristics and the momentNamely, the t =3,4. Moment is updated and iterated in the same way, and the pose of the robot at the t moment is obtainedAnd its acquired environmental characteristics are
Description of the inventive principles:
the invention provides a novel algorithm for synchronous positioning and map creation of a robot, namely, under a complex environment, a robot motion track is divided into a pose estimation mode in which a straight line and an arc line are fused, and a map feature is updated by the method for processing the environment feature on the basis of obtaining the pose of the robot, so that a more accurate robot pose and environment map are obtained. The algorithm has good adaptability and reliability under a complex environment. The algorithm can ensure that the pose error of the robot is maintained in a small stable range in long-time movement, and obtains the characteristics of the surrounding environment on the basis of the established accurate pose of the robot to obtain an environment map, and then realizes autonomous positioning and navigation based on the environment map.
Compared with the prior art, the invention has the advantages that:
(1) For robot pose estimation, the invention provides estimators for different situations, achieving a more accurate result and avoiding the possibility that a large robot error biases the whole map.
(2) The invention considers the motion model of the robot under complex environmental conditions and therefore has good adaptability in various complex environments.
(3) When the optimal robot pose is obtained, the environmental features acquired at the previous moment are used to correct the robot position; after the robot pose is determined, the map features of the previous moment are updated while optimizing the map features. The two corrections act on each other, so the motion of the robot converges to a relatively stable state with small error, giving the algorithm great reliability.
Drawings
FIG. 1 is a schematic diagram of the invention for updating the pose and map features of a robot from time t-1 to time t;
FIG. 2 is a schematic diagram illustrating an observation position estimation of a robot obtained by linear estimation of a motion path of the robot;
in FIG. 2, A: the pose of the robot at the previous moment; landmark C is scanned from this position to obtain its distance and angle; B: the pose of the robot at the current moment; landmark C is scanned from this position to obtain its distance and angle; C: the same landmark, measured at both the previous and the current moment;
FIG. 3 is a schematic diagram of an observation position estimation of the robot obtained by curve estimation of a motion path of the robot;
in FIG. 3, A: the pose of the robot at the previous moment; scanning landmark C from this position gives its distance and angle (d_1, θ_1); B: the pose of the robot at the current moment; scanning landmark C from this position gives its distance and angle (d_2, θ_2); C: the same landmark, measured at both the previous and the current moment;
FIG. 4 is a schematic diagram of the present invention processing environmental characteristics at time t.
Detailed Description
The method of the invention is described below in terms of the movement of a robot with three sensors, odometer, compass and lidar, in an unknown environment.
The robot predicts its pose using the odometer information; acquires the initial target course with the compass and obtains its deflection angle at a corner from the difference of compass readings; and obtains distance and angle information of surrounding environmental features with the lidar. The invention uses these three sensors to locate the robot and acquire a map of the surroundings. Formulas (1.1)–(1.4) and (2.1)–(2.3) above give the general computational form; the specific form depends on the design and on the probability distribution of the robot motion. The following example illustrates the specific meanings of the variables, functions, and probability density functions in formulas (1.1)–(1.4) and (2.1)–(2.3):
let the motion equation that the robot satisfies be:
s_t = f(s_{t-1}, u_{t-1}) + w_t   (1)

where s_t = (x, y, θ) is the pose of the robot at time t; u_{t-1} = (v, ω) is the control quantity of the robot at time t-1, with v the linear velocity and ω the angular velocity of the robot; and w_t is Gaussian noise satisfying the conditions

E[W_t] = 0
E[W_k W_j^T] = Q_k δ_kj
when the pose of the robot is estimated by way of a probability density function, the possible positions of the robot are represented in the form of particles and each possibility is estimated simultaneously, as follows: the robot pose represented by each particle at time t is decomposed into motion along the x–y coordinates:

x_t = x_{t-1} + v_{t-1} Δt cos(θ_t) + w_t   (2)
y_t = y_{t-1} + v_{t-1} Δt sin(θ_t) + w_t   (3)
θ_t = θ_{t-1} + ω_{t-1} Δt + w_t   (4)
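Equations (2)–(4) can be sketched as a single particle-propagation step (a minimal illustration; the noise scale sigma and the function name are not from the patent):

```python
import math, random

def propagate_particle(x, y, theta, v, omega, dt, sigma=0.0):
    """Propagate one particle with the motion model of equations (2)-(4);
    sigma scales the Gaussian noise term w_t (sigma = 0 gives the
    noise-free prediction)."""
    theta_t = theta + omega * dt + random.gauss(0.0, sigma)   # eq. (4)
    x_t = x + v * dt * math.cos(theta_t) + random.gauss(0.0, sigma)  # eq. (2)
    y_t = y + v * dt * math.sin(theta_t) + random.gauss(0.0, sigma)  # eq. (3)
    return x_t, y_t, theta_t

# Noise-free check: driving straight along x for 1 s at 1 m/s.
print(propagate_particle(0.0, 0.0, 0.0, v=1.0, omega=0.0, dt=1.0))  # (1.0, 0.0, 0.0)
```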
That is, starting from the initial time with the initial pose known, the predicted pose of the robot at the next moment can be obtained from the motion model. However, when the odometer information alone is used for prediction, the pose of the robot at the next moment contains errors and interference noise w_t. Therefore, in this patent the lidar is used to obtain the positions of surrounding environmental features and thereby estimate the observation pose of the robot at that moment: the feature point with the highest probability of being the same feature is selected, and its position is used to compute the observation pose of the robot. Points with the properties of such feature points are hereinafter referred to as landmarks:
when the robot walks on the straight road, the schematic diagram is used2 form calculation of the pose of the robotThe movement distance of the robot at the moment, namely the length of AB, can be obtained by measuring the positions of the road signs C at two positions A and B:
at this time, the ordinate of the robot at time t is unchangedThe abscissa isWhen the robot walks at the corner, the observation position of the robot is calculated in the form of a schematic diagram 3As shown in the figure, when the robot deflects the original movement direction, the included angle of the road sign C measured by the laser radar has a deviation angle theta 0 Therefore, the first estimation method of the linear motion distance may have a large error, so the present patent proposes the motion distance estimation method in the second case:
whereinCalculating the sum of the distances AB for the angular difference of the compasses at adjacent moments
Further obtain theI.e. according to theta 0 And AB, obtaining the coordinates of the robot at the t moment as follows:
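Since the patent's exact distance formula is not reproduced in the source text, the following is one plausible reconstruction of AB from the two range–bearing fixes (d_1, θ_1), (d_2, θ_2) and the compass heading change; treat it as an assumption, not the patented formula:

```python
import math

def travel_distance(d1, b1, d2, b2, dtheta):
    """Estimate the distance AB travelled between poses A and B that both
    observe the same landmark C. (d1, b1): range/bearing of C from A in
    A's frame; (d2, b2): range/bearing of C from B in B's frame; dtheta:
    heading change A -> B from the compass. ASSUMED reconstruction:
    in A's frame, C = d1*(cos b1, sin b1); C seen from B lies at bearing
    b2 + dtheta in A's frame, so B = C - d2*(cos(b2+dtheta), sin(b2+dtheta))."""
    cx, cy = d1 * math.cos(b1), d1 * math.sin(b1)
    bx = cx - d2 * math.cos(b2 + dtheta)
    by = cy - d2 * math.sin(b2 + dtheta)
    return math.hypot(bx, by)

# Straight-line case (dtheta = 0): landmark 5 m ahead, then 3 m ahead,
# so the robot moved 2 m.
print(travel_distance(5.0, 0.0, 3.0, 0.0, 0.0))  # 2.0
```

The same function covers the corner case, since the heading change dtheta rotates the second sight line into the first pose's frame.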
After the observation pose of the robot is obtained, it is used to correct the predicted pose represented by the particles: the probability distribution of each particle's predicted pose takes the observation pose as its mean and the interference noise Q_k as its variance, so the weight of each particle after the observation-pose correction can be computed as w_t^(j) ∝ N(s_t^(j); s_t^obs, Q_k). Because the particle weights differ, the weights of some particles are too small, i.e. the particle lies too far from the observed position; including such a particle in the optimal-pose estimate would bias the position, so it is discarded. The remaining particles are then normalized and weighted and summed to obtain the optimal pose estimate of the robot:
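The Gaussian weighting and weighted summation just described can be sketched as follows (an illustrative assumption: weights taken proportional to a Gaussian density centred on the observation pose, as the text implies; names are not from the patent):

```python
import numpy as np

def weight_and_estimate(particles, s_obs, Q):
    """Weight each predicted particle pose by a Gaussian centred on the
    observation pose s_obs with covariance Q, normalize the weights, and
    return the weighted-sum pose estimate plus the weights."""
    Qinv = np.linalg.inv(Q)
    d = particles - s_obs                              # residual to observed pose
    w = np.exp(-0.5 * np.einsum('ij,jk,ik->i', d, Qinv, d))
    w /= w.sum()                                       # normalized weights
    return w @ particles, w                            # optimal pose estimate

particles = np.array([[0.0, 0.0], [2.0, 0.0]])
est, w = weight_and_estimate(particles, np.array([1.0, 0.0]), np.eye(2))
print(est)   # symmetric particles around the observation -> [1.0, 0.0]
```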
in confirming the pose of the robotThen, scanning surrounding environment characteristics by using a laser radar, and processing the environment characteristics by using extended Kalman filtering to obtain accurate map characteristic information:
at time t-1, the robot obtains a set of environmental feature sets m t-1 (ii) a And at the moment t, the robot moves to a new position, and a group of particle sets representing possible poses of the robot, which are reserved through the last movement processPredicting map features:
representing the predicted position of the jth particle to the set of environmental features, z t The observation quantity of a group of environmental characteristics is added to each particle, and the same environmental characteristic can be obtained to have different positions in different particle poses.
For each environmental feature, the positions under the different particle poses are averaged to obtain a mean μ_i; assuming the error (variance) of the measured data is known, the estimate of each environmental feature can be approximated as satisfying a corresponding normal distribution. Since an environmental feature has a different position under different particles, each position also carries a different degree of importance: from the corresponding normal distribution, each environmental feature receives a weight w_i^(j), and the positions of each feature under the different particles are weighted and summed to obtain the optimal predicted position of the feature:

θ̂_t^i = Σ_j w_i^(j) θ̂_t^(i,j)

where w_i^(j) is the weight of the i-th environmental feature under the j-th particle and θ̂_t^i is the optimal predicted position of the i-th environmental feature.
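The per-particle averaging and weighting of one feature's positions can be sketched as follows (illustrative names; the variance sigma2 is an assumed input, not a value from the patent):

```python
import numpy as np

def fuse_feature_positions(positions, sigma2):
    """Fuse the positions of one environmental feature as seen under
    different particles: fit a normal N(mu, sigma2) to the positions,
    weight each position by its density under that normal, normalize,
    and return the weighted-sum ('optimal predicted') position."""
    p = np.asarray(positions, dtype=float)
    mu = p.mean(axis=0)                        # mean over particle poses
    d2 = np.sum((p - mu) ** 2, axis=1)         # squared distance to mean
    w = np.exp(-0.5 * d2 / sigma2)             # Gaussian importance
    w /= w.sum()                               # normalized weights
    return w @ p

# Three particle-conditioned views of the same feature, symmetric about (1, 1).
pos = [[1.0, 1.0], [1.2, 0.8], [0.8, 1.2]]
print(fuse_feature_positions(pos, sigma2=0.1))  # [1.0, 1.0]
```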
Having obtained the optimal predicted position θ̂_t of the environmental features, it is fused with the measured environmental feature z_t to obtain the optimal state estimate of the features:

θ_t = θ̂_t + K_t (z_t − h(θ̂_t))
K_t = P_{t,t-1} H_t^T (H_t P_{t,t-1} H_t^T + R_t)^{-1}

The parameter calculation is updated as follows:

P_t = [I − K_t H_t] P_{t,t-1}   (22)
P_{t,t-1} = A_{t-1} P_{t-1} A_{t-1}^T + Q_{t-1}

where R_t and Q_{t-1} are covariance matrices;
After error processing of the obtained environmental features, data association with the environmental features of the previous moment is carried out so as to update the map features:

D^2 = (z_t − θ̂_t)^T S_t^{-1} (z_t − θ̂_t) < ε   (23)

In the above formula the threshold ε takes the value 0.95. If the Mahalanobis distance D^2 satisfies formula (23), the observed environmental feature and an existing feature in the map are the same environmental feature, and that feature is updated; if the Mahalanobis distance D^2 does not satisfy formula (23), the feature is a new environmental feature: it is added to the map, and the environment map is updated.
Through iteration of the above calculation process, the robot stops moving when it reaches the target position; its motion path s = {s_0, s_1, …, s_t} is obtained, and the map formed by the traversal is m = {m_0, m_1, m_2, …, m_t}.
Claims (1)
1. A method for realizing autonomous positioning and map construction of a mobile robot, characterized in that an odometer, a compass, and a lidar are installed on the mobile robot; the pose of the robot is predicted using the odometer information; the initial target course is obtained using the compass, and the deflection angle of the robot at a corner is obtained from the difference of compass readings; and the distance and angle information of surrounding environmental features is obtained using the lidar;
the method specifically comprises the following steps:
step one, the pose s of the robot at the known t-1 moment t-1 Control amount u t-1 And environmental characteristics m t-1 And estimating to obtain the pose s of the robot at the time t k ;
The method specifically comprises the following steps:
(1) At time t = 1, the initial pose of the robot is s_0 and the obtained initial environmental feature set is m_0 = {m_0^1, m_0^2, …, m_0^n}, where m_0^i denotes the position of the i-th feature point at t = 1;
to obtain the optimal pose estimate of the robot at the next moment, first predict the robot state at the next moment:

p(s_t | z_{1:t-1}, u_{1:t-1}) = ∫ p(s_t | s_{t-1}, u_{t-1}) p(s_{t-1} | z_{1:t-1}, u_{1:t-2}) ds_{t-1}   (1.1)

When the above formula is taken at t = 1, the pose of the robot at the next moment is estimated from the initial moment. In the above formula, ŝ_t is the predicted position of the robot, p(·) denotes the probability density function of a random variable, and p(·|·) denotes the conditional probability density function.
The recursion in the above formula obtains the posterior probability of the robot pose at time t by Bayesian filtering; the posterior is approximated with a particle filter to obtain the proposal distribution (1.2): at time t, the posterior probability distribution of each particle's pose satisfies

s_t^(j) ~ p(s_t | s_{t-1}^(j), u_{t-1})   (1.2)

where s_t^(j) is the pose of the j-th particle (j = 1, 2, …, m) at time t and s_{t-1} is the position of the robot at time t-1;
(2) Add the observation z_t of the surrounding environment at time t to the state prediction equation and correct the predicted value using the environmental observation to obtain a more accurate robot pose:

p(s_t | z_{1:t}, u_{1:t-1}) ∝ p(z_t | s_t) p(s_t | z_{1:t-1}, u_{1:t-1})   (1.3)

In the above formula, ŝ_t is the optimal robot pose after the predicted position is corrected with the observation. After the external measurement z_t is added to the posterior probability distribution, the predicted pose of the robot can be updated; while updating the particles' predicted poses, the weight of the j-th particle at time t is computed as w_t^(j) = p(z_t | s_t^(j));
(3) After obtaining the weight w_t^(j) of each particle with the observation added, the particles are judged by weight: particles with weight less than 0.5 are discarded, giving a new particle set. This set of particles is normalized:

w̃_t^(j) = w_t^(j) / Σ_k w_t^(k)

After the particle weights are normalized, the particles are weighted and summed to obtain the optimal estimated pose of the robot:

ŝ_t = Σ_j w̃_t^(j) s_t^(j)
(4) When t = 1, the above process yields the optimal pose estimate of the robot at the next moment (t = 2) together with the remaining particle set; iterating successively gives the optimal pose estimate ŝ_t of the robot at time t. In each iteration, however, the decision variable N_eff must be computed:

N_eff = 1 / Σ_j (w̃_t^(j))^2

If N_eff > m/2, the particles are sufficient to predict the pose of the robot at the next moment, and the method proceeds to the next step; if N_eff ≤ m/2, the particles are too few to provide enough information for the predicted state of the robot at the next moment, so the robot pose must be resampled;
after the optimal estimated pose of the robot is obtained, acquiring surrounding environment feature points by using a laser radar, processing the acquired environment feature points by using extended Kalman filtering, and performing data association with the environment feature points at the previous moment so as to update an environment map;
the method specifically comprises the following steps:
(1) When t = 1, the robot acquires an initial environmental feature set m_0; at the next moment, the robot moves to a new position, and to obtain the optimal position estimate of the environmental features, the features at time t are predicted from the state at time t-1:

θ̂_t = f(θ_{t-1}) + W_{t-1}   (2.1)

When t = 1 the above equation represents prediction of the next position from the initial position; θ̂_t is the predicted quantity of the environmental features at time t, f(·) is the process prediction function, and W_{t-1} is process Gaussian noise satisfying

E[W_t] = 0
E[W_k W_j^T] = Q_k δ_kj

where Q_k is the noise covariance;
(2) In obtaining a predicted quantity theta of the environmental characteristic t Then, the observed quantity of the environmental features is calculated as follows:
z t =h(θ t )+V t (2.2)
h(θ t ) As a function of observation of movement, V t For the measured gaussian noise at time t, the following condition is satisfied:
E[V_t] = 0
E[V_t V_t^T] = R_k
where R_k is the measurement noise covariance;
(3) The obtained prediction θ_t of the environmental features and the observation z_t are fused to obtain the optimal position estimate of the environmental features:

θ̂_t = θ_t + K_t (z_t − h(θ_t)) (2.3)
K_t = P_{t,t−1} H_t^T (H_t P_{t,t−1} H_t^T + R_t)^{−1}
In the above formulas, θ̂_t is the optimal estimate of the environmental feature points at time t, z_t is the observation of the environmental features at time t, K_t is the Kalman gain, P_{t,t−1} is the error matrix at time t as predicted at time t−1, R_t is the covariance matrix of the observation noise, H_t is the measurement covariance matrix, and A_{t−1} is the Jacobian matrix of the process function f; when t = 1, the initial error matrix P_0, observation noise R_0, measurement covariance matrix H_1, and measurement Jacobian matrix A_0 are known;
After the optimal position estimate of the environmental features is obtained, the error matrix is updated for the next iteration as follows:
P_t = [I − K_t H_t] P_{t,t−1}
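Steps (1)–(3) amount to one extended-Kalman-filter cycle per environmental feature: predict, fuse with the lidar observation via the Kalman gain, then update the error matrix. A minimal sketch follows; the patent does not give the concrete process function f, observation function h, or their Jacobians, so identity models on a hypothetical 2-D landmark are assumed here purely for illustration.

```python
import numpy as np

def ekf_feature_update(m_prev, P_prev, z, f, h, F, H, Q, R):
    """One EKF cycle for an environmental feature, following (2.1)-(2.3):
    predict theta_t from the previous state, then fuse with observation z_t."""
    theta = f(m_prev)                          # prediction, eq. (2.1) w/o noise
    P_pred = F @ P_prev @ F.T + Q              # predicted error matrix P_{t,t-1}
    S = H @ P_pred @ H.T + R                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain K_t
    theta_hat = theta + K @ (z - h(theta))     # optimal feature estimate
    P = (np.eye(len(theta)) - K @ H) @ P_pred  # updated error matrix P_t
    return theta_hat, P

# hypothetical 2-D landmark with identity process/observation models
f = h = lambda x: x
F = H = np.eye(2)
Q = 0.01 * np.eye(2)
R = 0.1 * np.eye(2)
m0 = np.array([1.0, 2.0])                      # initial feature position m_0
P0 = np.eye(2)
z1 = np.array([1.05, 1.95])                    # simulated lidar observation
m1, P1 = ekf_feature_update(m0, P0, z1, f, h, F, H, Q, R)
```

The fused estimate m1 lands between the prediction and the observation, weighted by the gain, and P1 shrinks relative to the predicted error matrix, matching the update equation above.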
(4) After the optimal position of the environmental features is obtained, the environment at the current moment must be associated with the environment at the previous moment to obtain the updated map; the Mahalanobis distance is therefore used to judge whether the features acquired at the two moments are the same environmental feature. To obtain the data association more quickly and conveniently, the Mahalanobis distance is computed directly between the prediction θ_t of the environmental features from time t−1 and the observation of the environmental features at time t:

D^2 = (z_t − h(θ_t))^T S^{−1} (z_t − h(θ_t))
whereinIs a covariance matrix, and epsilon is a threshold constant and takes a value of 0.95;
When D^2 < ε, the two environmental features are judged to be associated, i.e., they are the same environment, and the old map is updated; when D^2 > ε, the two environmental features are judged to be unrelated, i.e., they are different environmental features; the observed environmental feature is then added to the old map, expanding the original map;
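The association rule of step (4) can be sketched as follows. The append-when-unmatched behavior mirrors the text; the identity covariance matrix S, the 2-D feature layout, and the nearest-feature matching loop are assumptions for the example, and the in-place refinement of a matched feature is omitted for brevity.

```python
import numpy as np

def mahalanobis_sq(z, theta, S):
    """Squared Mahalanobis distance D^2 between observation z and prediction theta."""
    d = z - theta
    return float(d @ np.linalg.inv(S) @ d)

def associate(observed, map_features, S, eps=0.95):
    """Match each observation against the map: D^2 < eps means the same
    environmental feature (keep the old map entry); otherwise the observation
    is a new feature and the map is expanded."""
    features = list(map_features)
    for z in observed:
        d2 = [mahalanobis_sq(z, m, S) for m in features]
        if d2 and min(d2) < eps:
            continue                      # associated: same environment
        features.append(z)                # unrelated: expand the map
    return features

S = np.eye(2)                             # assumed innovation covariance
old_map = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
obs = [np.array([0.1, 0.1]), np.array([9.0, 1.0])]
new_map = associate(obs, old_map, S)      # first obs matches, second is new
```

Here the first observation falls within the 0.95 gate of an existing feature and is treated as the same environment, while the second exceeds the gate against every map entry and is appended, expanding the original map.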
The environmental feature set obtained after association, together with the optimal robot pose at the current moment, then yields the optimal robot pose at the next moment; updating and iterating in the same way for t = 3, 4, …, the pose of the robot at time t and its corresponding environmental features are obtained.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710847755.9A CN107741745B (en) | 2017-09-19 | 2017-09-19 | A method of realizing mobile robot autonomous positioning and map structuring |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107741745A true CN107741745A (en) | 2018-02-27 |
CN107741745B CN107741745B (en) | 2019-10-22 |
Family
ID=61235190
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710847755.9A Active CN107741745B (en) | 2017-09-19 | 2017-09-19 | A method of realizing mobile robot autonomous positioning and map structuring |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107741745B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4829304A (en) * | 1986-05-20 | 1989-05-09 | Harris Corp. | Map-aided navigation system employing TERCOM-SITAN signal processing |
CN102402225A (en) * | 2011-11-23 | 2012-04-04 | 中国科学院自动化研究所 | Method for realizing localization and map building of mobile robot at the same time |
WO2013002067A1 (en) * | 2011-06-29 | 2013-01-03 | 株式会社日立産機システム | Mobile robot and self location and attitude estimation system installed upon mobile body |
CN103487047A (en) * | 2013-08-06 | 2014-01-01 | 重庆邮电大学 | Improved particle filter-based mobile robot positioning method |
CN103644903A (en) * | 2013-09-17 | 2014-03-19 | 北京工业大学 | Simultaneous localization and mapping method based on distributed edge unscented particle filter |
CN105606104A (en) * | 2016-03-17 | 2016-05-25 | 北京工业大学 | Robot autonomous navigation method based on heading-assisting distributed type SLAM (Simultaneous Localization and Mapping) |
CN105865449A (en) * | 2016-04-01 | 2016-08-17 | 深圳杉川科技有限公司 | Laser and vision-based hybrid location method for mobile robot |
CN106248077A (en) * | 2016-07-06 | 2016-12-21 | 北京理工大学 | A kind of visible ray integrated positioning system based on particle filter and method |
CN106598052A (en) * | 2016-12-14 | 2017-04-26 | 南京阿凡达机器人科技有限公司 | Robot security inspection method based on environment map and robot thereof |
CN106885574A (en) * | 2017-02-15 | 2017-06-23 | 北京大学深圳研究生院 | A kind of monocular vision robot synchronous superposition method based on weight tracking strategy |
Non-Patent Citations (1)
Title |
---|
Hou Rongbo et al.: "Indoor robot localization and 3D dense map construction based on ORB-SLAM", Journal of Computer Applications (《计算机应用》) *
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108550318A (en) * | 2018-03-12 | 2018-09-18 | 浙江大华技术股份有限公司 | A kind of method and device of structure map |
CN108765437A (en) * | 2018-05-07 | 2018-11-06 | 深圳市三宝创新智能有限公司 | A kind of autonomous mobile robot SLAM builds the data correlation matching process of figure |
CN108692701A (en) * | 2018-05-28 | 2018-10-23 | 佛山市南海区广工大数控装备协同创新研究院 | Mobile robot Multi-sensor Fusion localization method based on particle filter |
CN108955688A (en) * | 2018-07-12 | 2018-12-07 | 苏州大学 | Two-wheel differential method for positioning mobile robot and system |
CN108955688B (en) * | 2018-07-12 | 2021-12-28 | 苏州大学 | Method and system for positioning double-wheel differential mobile robot |
CN110763243A (en) * | 2018-07-27 | 2020-02-07 | 杭州海康威视数字技术股份有限公司 | Sliding map updating method and device |
CN110763243B (en) * | 2018-07-27 | 2021-08-24 | 杭州海康威视数字技术股份有限公司 | Sliding map updating method and device |
CN108645415A (en) * | 2018-08-03 | 2018-10-12 | 上海海事大学 | A kind of ship track prediction technique |
CN111103875B (en) * | 2018-10-26 | 2021-12-03 | 科沃斯机器人股份有限公司 | Method, apparatus and storage medium for avoiding |
CN111103875A (en) * | 2018-10-26 | 2020-05-05 | 科沃斯机器人股份有限公司 | Method, apparatus and storage medium for avoiding |
CN109460029A (en) * | 2018-11-29 | 2019-03-12 | 华南农业大学 | Livestock and poultry cultivation place inspection mobile platform and its control method |
CN109674404A (en) * | 2019-01-26 | 2019-04-26 | 深圳市云鼠科技开发有限公司 | A kind of sweeping robot avoidance processing mode based on free move technology |
CN110082776A (en) * | 2019-03-08 | 2019-08-02 | 贵州电网有限责任公司 | A kind of robot real-time location method based on 2D laser data |
CN110082776B (en) * | 2019-03-08 | 2023-04-07 | 贵州电网有限责任公司 | Robot real-time positioning method based on 2D laser data |
CN110057371A (en) * | 2019-04-08 | 2019-07-26 | 南京航空航天大学 | A kind of compressed sensing based planet rover active path planing method |
CN110057371B (en) * | 2019-04-08 | 2022-06-24 | 南京航空航天大学 | Planet patrol device active path planning method based on compressed sensing |
CN110207707A (en) * | 2019-05-30 | 2019-09-06 | 四川长虹电器股份有限公司 | Quick initial alignment method and robot device based on particle filter |
CN110320520A (en) * | 2019-06-26 | 2019-10-11 | 哈尔滨工程大学 | A kind of robust rear end figure optimization method depth measurement synchronizing information positioning and build figure |
CN110515381A (en) * | 2019-08-22 | 2019-11-29 | 浙江迈睿机器人有限公司 | Multi-sensor Fusion algorithm for positioning robot |
CN110888104A (en) * | 2019-11-04 | 2020-03-17 | 浙江大学 | Underwater robot positioning method under beacon track approaching condition |
CN110888104B (en) * | 2019-11-04 | 2022-03-22 | 浙江大学 | Underwater robot positioning method under beacon track approaching condition |
CN112161634A (en) * | 2020-08-13 | 2021-01-01 | 盐城工学院 | RTB-based (real time bus-based) line control chassis map construction and simultaneous positioning method |
CN112161634B (en) * | 2020-08-13 | 2024-06-07 | 盐城工学院 | RTB-based drive-by-wire chassis map construction and simultaneous positioning method |
CN113375658A (en) * | 2021-06-15 | 2021-09-10 | 电子科技大学中山学院 | Method and system for simultaneously FDD and SLAM under mobile robot fault |
CN113375658B (en) * | 2021-06-15 | 2023-05-09 | 电子科技大学中山学院 | Method and system for simultaneously FDD and SLAM under fault of mobile robot |
CN114415655B (en) * | 2021-12-02 | 2024-05-07 | 盐城中科高通量计算研究院有限公司 | Improved SLAM-based navigation control method for inspection robot |
CN114415655A (en) * | 2021-12-02 | 2022-04-29 | 盐城中科高通量计算研究院有限公司 | Inspection robot navigation control method based on improved SLAM |
Also Published As
Publication number | Publication date |
---|---|
CN107741745B (en) | 2019-10-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107741745A (en) | A method for realizing autonomous positioning and map construction of a mobile robot | |
CN109597864B (en) | Method and system for real-time positioning and map construction of ellipsoid boundary Kalman filtering | |
Nieto et al. | Recursive scan-matching SLAM | |
JP2022113746A (en) | Determination device | |
CN110702091B (en) | High-precision positioning method for moving robot along subway rail | |
CN112882053B (en) | Method for actively calibrating external parameters of laser radar and encoder | |
KR20220024791A (en) | Method and apparatus for determining the trajectory of a vehicle | |
CN111397599A (en) | Improved ICCP (Integrated Circuit chip) underwater geomagnetic matching method based on triangular matching algorithm | |
CN113483755B (en) | Multi-sensor combination positioning method and system based on non-global consistent map | |
CN112034445B (en) | Vehicle motion trail tracking method and system based on millimeter wave radar | |
CN116337045A (en) | High-speed map building navigation method based on karto and teb | |
CN111857142A (en) | Path planning obstacle avoidance auxiliary method based on reinforcement learning | |
CN112067007B (en) | Map generation method, computer storage medium, and electronic device | |
CN109765569B (en) | Method for realizing virtual dead reckoning sensor based on laser radar | |
CN113074757A (en) | Calibration method for vehicle-mounted inertial navigation installation error angle | |
Verentsov et al. | Bayesian localization for autonomous vehicle using sensor fusion and traffic signs | |
CN116338719A (en) | Laser radar-inertia-vehicle fusion positioning method based on B spline function | |
Lundquist et al. | Road geometry estimation and vehicle tracking using a single track model | |
Lee et al. | Development of advanced grid map building model based on sonar geometric reliability for indoor mobile robot localization | |
CN114935345A (en) | Vehicle-mounted inertial navigation mounting angle error compensation method based on pattern recognition | |
CN115930971B (en) | Data fusion processing method for robot positioning and map building | |
Jiang et al. | High precise localization of mobile robot by three times pose correction | |
CN118189970A (en) | Course track optimization method based on graph optimization under multiple sensors | |
CN117808882B (en) | SLAM drift detection and compensation method based on multi-sensor fusion in degradation scene | |
CN115855047A (en) | Indoor multi-floor sensor fusion factor graph positioning method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||