CN112947481B - Autonomous positioning control method for home service robot - Google Patents


Publication number
CN112947481B
CN112947481B (application CN202110330445.6A)
Authority
CN
China
Prior art keywords
robot
coordinate system
pose
mileage
model
Prior art date
Legal status
Active
Application number
CN202110330445.6A
Other languages
Chinese (zh)
Other versions
CN112947481A (en
Inventor
王刚
曹阳
徐峰远
周军
牛绿原
孔令荣
Current Assignee
Taizhou Institute Of Sci&tech Nust
Original Assignee
Taizhou Institute Of Sci&tech Nust
Priority date
Filing date
Publication date
Application filed by Taizhou Institute Of Sci&tech Nust filed Critical Taizhou Institute Of Sci&tech Nust
Priority to CN202110330445.6A priority Critical patent/CN112947481B/en
Publication of CN112947481A publication Critical patent/CN112947481A/en
Application granted granted Critical
Publication of CN112947481B publication Critical patent/CN112947481B/en


Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02: Control of position or course in two dimensions
    • G05D 1/021: specially adapted to land vehicles
    • G05D 1/0231: using optical position detecting means
    • G05D 1/0238: using obstacle or wall sensors
    • G05D 1/024: using obstacle or wall sensors in combination with a laser
    • G05D 1/0212: with means for defining a desired trajectory
    • G05D 1/0221: involving a learning process
    • G05D 1/0223: involving speed control of the vehicle
    • G05D 1/0257: using a radar

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Optics & Photonics (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses an autonomous positioning control method for a home service robot, which comprises dead reckoning and SLAM radar positioning. The dead reckoning step comprises: (1.1) calculating a discrete motion model of the robot; (1.2) analyzing a tangent model and a circular-arc model; (1.3) establishing a relation model between the robot coordinate system and the global coordinate system. The SLAM radar positioning step comprises: (2.1) acquiring the initial state of the robot through a lidar; (2.2) filtering the acquired lidar data through an adaptive unscented Kalman algorithm; (2.3) establishing a rasterized map using the Hector SLAM algorithm; and (2.4) combining the mileage data variables and the lidar positioning variables to obtain the state quantity. By adopting the adaptive unscented Kalman algorithm, the invention can effectively improve the position accuracy and the angle accuracy of the robot, further improving the positioning result.

Description

Autonomous positioning control method for home service robot
Technical Field
The invention relates to an autonomous positioning control method for a home service robot, and belongs to the field of automation control.
Background
Robotics is a multidisciplinary field integrating mechanics, information science, computer science and automatic control theory. It not only carries high technological added value in itself but also spans a wide range of products. As an important platform from which technology radiates into other fields, robotics is of great significance for strengthening national defense, improving controllability, promoting overall economic development, and, to a large extent, raising people's standard of living.
Robots are one of the key areas of future development, with service robots being one of the most promising applications of this century, and the "intelligent robot technology" behind service robots is worth further research. Currently, the basic intelligence of a mobile robot is mainly reflected in three questions: (1) "Where am I now?"; (2) "Where am I going?"; (3) "How do I get there?", corresponding respectively to the positioning, path planning and motion control problems of the mobile robot. The autonomous positioning problem is the basis and premise for realizing the specific functions of a mobile robot; it reflects to a certain extent the intelligence of the robot, and it is the problem that must be solved first among the many problems, because if it is not solved well, the functions the robot needs to execute cannot be realized.
Early indoor robot positioning relied mainly on sensors, such as inertial navigation and dead-reckoning algorithms. However, these methods have disadvantages: it is difficult to apply them individually and directly for autonomous positioning in a slightly dynamic home environment, and some sensors accumulate errors over time. In the dynamic home environment in particular, because GPS is unavailable indoors and the required positioning accuracy is high, the difficulty of positioning increases and a single positioning method is hard to apply.
Disclosure of Invention
The invention aims to provide an autonomous positioning control method for a home service robot, which aims to overcome the problems in the prior art.
In order to realize this purpose, the invention adopts the following technical scheme: a home service robot autonomous positioning control method comprising dead reckoning and SLAM radar positioning; wherein the dead reckoning step comprises: (1.1) calculating a discrete motion model of the robot; (1.2) analyzing a tangent model and a circular-arc model; (1.3) establishing a relation model between the robot coordinate system and the global coordinate system; and the SLAM radar positioning step comprises: (2.1) acquiring the initial state of the robot through a lidar; (2.2) filtering the acquired lidar data through an adaptive unscented Kalman algorithm; (2.3) establishing a rasterized map using the Hector SLAM algorithm; and (2.4) combining the mileage data variables and the lidar positioning variables to obtain the state quantity.
Further, in step (1.1), a two-wheeled differentially driven robot is selected, and by establishing a global coordinate system and a robot coordinate system, the speed-based kinematic model of the robot is expressed as:

v = (v_l + v_r)/2, w = (v_r - v_l)/l

(dx/dt, dy/dt, dθ/dt) = (v·cos θ, v·sin θ, w)

wherein v is the linear velocity of the robot, w is its rotational speed, l is the wheel spacing of the robot, v_l, v_r are the linear speeds of the left and right driving wheels, and x, y and θ are respectively the abscissa, ordinate and heading angle of the robot center in the global coordinate system;

in a discrete system, discretization is carried out under the assumption that the left and right wheel speeds and the attitude angle are approximately unchanged over one sampling period, giving the discrete kinematic formula:

x_k = x_{k-1} + T_s·(v_l + v_r)/2·cos θ_{k-1}
y_k = y_{k-1} + T_s·(v_l + v_r)/2·sin θ_{k-1}
θ_k = θ_{k-1} + T_s·(v_r - v_l)/l

wherein (x_{k-1}, y_{k-1}, θ_{k-1}) is the pose of the robot at time k-1, (x_k, y_k, θ_k) is the pose of the robot at time k, T_s is the sampling time, and v_l, v_r are respectively the left and right wheel speeds.
Further, in step (1.2), over a short time the left and right wheel speeds of the robot can be regarded as constant, so the tracks of the left and right wheels are concentric circular arcs or parallel line segments; two models can therefore be analyzed, namely a tangent model and a circular-arc model; wherein:
(1) Tangent model
When the left-wheel mileage increment equals the right-wheel mileage increment, i.e. the tracks of the left and right wheels are parallel line segments, the kinematic formula follows directly:

x_k = x_{k-1} + d·cos θ_{k-1}
y_k = y_{k-1} + d·sin θ_{k-1}
θ_k = θ_{k-1}

wherein (x_{k-1}, y_{k-1}, θ_{k-1}) is the pose of the robot at time k-1, (x_k, y_k, θ_k) is the pose of the robot at time k, and d is the common mileage increment of the left and right wheels;
(2) Circular-arc model
The mileage model of the robot is simplified as follows: the pose of the robot at time k-1 in the global coordinate system is R_{k-1}(x_{k-1}, y_{k-1}, θ_{k-1}), the pose at time k is R_k(x_k, y_k, θ_k), the wheel spacing of the robot is l, the mileage increments of the left and right wheels are d_l, d_r, the radius and central angle of the arc traced by the left wheel are R and a, the straight-line distance between R_{k-1} and R_k is d, and the included angle ∠R_k R_{k-1} O' is b;
the odometer formula of the two-wheel differential-drive robot is:

θ_k = θ_{k-1} + (d_r - d_l)/l
x_k = x_{k-1} + d·cos(θ_{k-1} + (d_r - d_l)/(2l))
y_k = y_{k-1} + d·sin(θ_{k-1} + (d_r - d_l)/(2l))

wherein (x_{k-1}, y_{k-1}, θ_{k-1}) is the pose of the robot at time k-1, (x_k, y_k, θ_k) is the pose of the robot at time k, d_l, d_r are respectively the mileage increments of the left and right wheels between two samplings, and l is the wheel spacing of the robot;
because the difference between the mileage increments of two successive samplings is small, d can be approximated as

d ≈ (d_l + d_r)/2

so the odometer formula of the two-wheeled differential-drive robot can be expressed as:

x_k = x_{k-1} + (d_l + d_r)/2·cos(θ_{k-1} + (d_r - d_l)/(2l))
y_k = y_{k-1} + (d_l + d_r)/2·sin(θ_{k-1} + (d_r - d_l)/(2l))
θ_k = θ_{k-1} + (d_r - d_l)/l

Finally, through the differential-drive odometer formula, the robot's photoelectric encoders and gyroscope record the state quantities d_l, d_r, i.e. the left and right wheel mileage data, for data fusion.
Further, in step (1.3), all coordinate systems are Cartesian; when data from one coordinate system are applied in another coordinate system, coordinate conversion is required; this conversion can be described and implemented by a three-dimensional translation plus a three-dimensional rotation; specifically, assume the coordinates of the laser sensor in the robot coordinate system are (t_x, t_y, t_z), with rotations of γ, α and β radians about the z-axis, x-axis and y-axis in turn; the coordinates (x_s, y_s, z_s) of any point in the laser coordinate system are then related to its coordinates (x_r, y_r, z_r) in the robot coordinate system by:

[x_r, y_r, z_r]^T = R_y(β)·R_x(α)·R_z(γ)·[x_s, y_s, z_s]^T + [t_x, t_y, t_z]^T

where R_z, R_x and R_y are the elementary rotation matrices about the respective axes.
further, in the step (2.1), the lidar rotates by itself and performs distance measurement of the surrounding environment through calculation of the flying speed of laser emission and detection, wherein the sampling frequency is 20Hz, and the rotating speed is 1200rpm.
Further, in step (2.2), in a nonlinear environment, the state equations of the nonlinear system are:

X_k = f(X_{k-1}) + W_{k-1}
Z_k = h(X_k) + V_k

where h is the observation function, f is the state-transition function, X_k, Z_k are respectively the state variable and the observation variable at time k, and W, V denote the process and measurement noise;
the unscented Kalman algorithm comprises the following steps:
(2.21) sampling of the Sigma spots
Obtaining the predicted weight W corresponding to the state prediction of each Sigma point at the moment k i m And W i c Set of points { χ k/k (i) I =1, \ 8230 +, 2n +1, where 2n +1 is the Sigma point sampling number;
Figure BDA0002990489640000042
sigma point sampling formula is
Figure BDA0002990489640000043
Defining: alpha is a scaling factor with a positive value, beta is a parameter for introducing f (#) high order, 2n +1 is the sampling number of Sigma points, and W i m ,W i c Respectively weighted by the ith mean value and the covariance, and 2n +1 symmetric points to approximate the mean value as
Figure BDA00029904896400000410
And covariance of P x λ is the distance used to control each point from the mean;
(2.22) Estimate the sample points through the state equation:

χ_{k+1/k}^(i) = f(χ_{k/k}^(i))

(2.23) From the sample points χ_{k+1/k}^(i) and the weights W_i^m, W_i^c, pre-estimate the mean and the covariance matrix P_{k+1/k}:

X̄_{k+1/k} = Σ_i W_i^m·χ_{k+1/k}^(i)
P_{k+1/k} = Σ_i W_i^c·(χ_{k+1/k}^(i) - X̄_{k+1/k})(χ_{k+1/k}^(i) - X̄_{k+1/k})^T + Q_k
(2.24) Measure the sampling points estimated in (2.22) through the observation function:

Z_{k+1/k}^(i) = h(χ_{k+1/k}^(i))

(2.25) Pre-estimate the measurement mean and the covariances, respectively:

Z̄_{k+1/k} = Σ_i W_i^m·Z_{k+1/k}^(i)
P_{z,z} = Σ_i W_i^c·(Z_{k+1/k}^(i) - Z̄_{k+1/k})(Z_{k+1/k}^(i) - Z̄_{k+1/k})^T + R_k
P_{x,z} = Σ_i W_i^c·(χ_{k+1/k}^(i) - X̄_{k+1/k})(Z_{k+1/k}^(i) - Z̄_{k+1/k})^T

where P_{x,z} is the cross-covariance matrix of the pose state vector and the measured signal vector, and P_{z,z} is the covariance matrix of the measured signal vector;
(2.26) Update through the measurement equation:

K_{k+1} = P_{x,z}·P_{z,z}^{-1}
X̂_{k+1} = X̄_{k+1/k} + K_{k+1}·(Z_{k+1} - Z̄_{k+1/k})
P_{k+1} = P_{k+1/k} - K_{k+1}·P_{z,z}·K_{k+1}^T

Then, according to the robot motion model and the mean and covariance of the previous moment, the mean and covariance of the current moment are estimated through the Sigma-point sampling and propagation of (2.21)-(2.24); the estimated values are updated with the observed sensor data through steps (2.25) and (2.26) to obtain the optimal state estimate; steps (2.21)-(2.26) are repeated at every sampling to obtain the optimal state estimate online;
finally, the positioning data [x, y, θ]^T acquired by the current lidar SLAM are recorded, where (x, y) are the coordinates of the robot in the map coordinate system and θ is the attitude angle of the robot in the map coordinate system.
Further, in step (2.3), a grid map is created using the Hector SLAM algorithm; the probability that a grid cell P is occupied is represented by P(s = 1), so the probability that the cell is blank is P(s = 0) = 1 - P(s = 1); from the Bayesian formula, the odds of a cell can be written as

Odd(s) = P(s = 1)/P(s = 0)

Letting S = logOdd(s), the measurement state update formula is obtained:

S+ = S- + lomeas

wherein S- and S+ respectively denote S before and after the measurement, and lomeas ∈ {lofree, loccu};
setting the grid resolution to r, i.e. a map whose cells have side length r, the relation between the grid coordinates [i, j] and the real coordinates [x, y] is:

[i, j] = [ceil(x/r), ceil(y/r)]

where ceil() returns the smallest integer greater than or equal to the given value;
according to the grid coordinates of the obstacle points and the grid coordinates of the robot, a Bresenham algorithm is used for calculating a set of non-obstacle grid points;
for any grid P, let M (P) = P (s = 1), then for the continuous map coordinates P m (x, y) approximating the occupation probability, its occupation probability M (P), using bilinear interpolation m ) And gradient thereof
Figure BDA0002990489640000061
Is composed of
Figure BDA0002990489640000062
Figure BDA0002990489640000063
Wherein, P 00 ,P 01 ,P 10 ,P 11 Is P m Adjacent grids of (a);
the Hector SLAM realizes positioning by utilizing the comparison optimization of laser beams and map real data, and the Gaussian Newton method is adopted to match the radar scanning data, so that a local extreme value is avoided; inquiring a pose xi = [ x, y, theta ] in a global coordinate system through a HectrSLAM algorithm] T The following equation is minimized:
Figure BDA0002990489640000064
wherein S i (xi) denotes the position of the ith laser end in the global coordinate system with the laser center at the pose xi. Setting the position s of the ith laser tail end in the coordinate system of the laser sensor i =[s ix ,s iy ] T ,M(S i (ξ)) represents the probability of occupation of the point; solving by a Gauss Newton method to obtain:
Figure BDA0002990489640000065
wherein
Figure BDA0002990489640000066
Thus M (S) i (xi)) can be approximated
Figure BDA0002990489640000067
Wherein
Figure BDA0002990489640000068
Further, in step (2.4), the mileage data variables and the lidar positioning variables are combined to obtain the state quantity X = [x, y, θ, d_l, d_r]^T; the state-transition equation of the robot system then follows the odometer model given above, and the lidar observation supplies the measurement [x, y, θ]^T; the pose change between the robot coordinate system and the global coordinate system is then carried out to acquire the state quantity in the global coordinate system.
The beneficial effects of the invention are: starting from the Kalman-filter-based EKF-SLAM method, the Hector SLAM algorithm is progressively optimized into an adaptive unscented Kalman filter (AUKF); with this method for autonomous robot positioning, the position accuracy is improved by about 56% and the angle accuracy by about 40% compared with laser positioning alone; compared with the traditional UKF fusion algorithm, the position accuracy is improved by about 14% and the angle accuracy by about 68%. In conclusion, the method can effectively improve the positioning result.
Drawings
FIG. 1 is a diagram of a two-wheeled differential-drive robot model;
FIG. 2 is a model diagram of a left-turn odometer of a two-wheeled differential-drive robot;
FIG. 3 is a robot positioning trajectory diagram;
FIG. 4 is a map of position error probability densities;
fig. 5 is an angle error probability density map.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments.
The embodiment provided by the invention comprises the following steps: a home service robot autonomous positioning control method is divided into two parts of dead reckoning and SLAM radar positioning, wherein:
and (3) dead reckoning:
the first step is as follows: the robot selects a robot with two wheels moving at different speeds and calculates a motion discrete model of the robot;
As shown in FIG. 1, X-O-Y is the global coordinate system and Xr-C-Yr is the robot coordinate system.
The velocity-based robot kinematics model is:

v = (v_l + v_r)/2, w = (v_r - v_l)/l

(dx/dt, dy/dt, dθ/dt) = (v·cos θ, v·sin θ, w)

wherein v is the linear velocity of the robot, w is its rotational speed, l is the wheel spacing of the robot, and v_l, v_r are the linear velocities of the left and right driving wheels; x, y and θ are respectively the abscissa, ordinate and heading angle of the robot center in the global coordinate system.
In an actual discrete system, discretization is carried out under the assumption that the left and right wheel speeds and the attitude angle are approximately unchanged over one sampling period, giving the discrete kinematics formula:

x_k = x_{k-1} + T_s·(v_l + v_r)/2·cos θ_{k-1}
y_k = y_{k-1} + T_s·(v_l + v_r)/2·sin θ_{k-1}
θ_k = θ_{k-1} + T_s·(v_r - v_l)/l

wherein (x_{k-1}, y_{k-1}, θ_{k-1}) is the pose of the robot at time k-1, (x_k, y_k, θ_k) is the pose of the robot at time k, T_s is the sampling time, and v_l, v_r are respectively the left and right wheel speeds.
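As an illustrative, non-limiting sketch, the discrete dead-reckoning update above can be written as follows (function and variable names are chosen for illustration only):

```python
import math

def dead_reckon_step(pose, v_l, v_r, l, T_s):
    """One discrete dead-reckoning update for a two-wheel differential-drive
    robot, assuming wheel speeds are constant over the sampling period T_s."""
    x, y, theta = pose
    v = (v_l + v_r) / 2.0      # linear velocity of the robot center
    w = (v_r - v_l) / l        # rotational speed
    x += v * T_s * math.cos(theta)
    y += v * T_s * math.sin(theta)
    theta += w * T_s
    return (x, y, theta)
```

For example, equal wheel speeds yield pure translation along the current heading, while opposite wheel speeds yield pure rotation in place.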
The second step: over a short time the left and right wheel speeds of the robot can be regarded as constant, so the tracks of the left and right wheels are concentric circular arcs or parallel line segments. Two models can be analyzed, namely the tangent model and the circular-arc model.
1. Tangent model
When the left-wheel mileage increment equals the right-wheel mileage increment, i.e. the tracks of the left and right wheels are parallel line segments, the kinematic formula follows directly:

x_k = x_{k-1} + d·cos θ_{k-1}
y_k = y_{k-1} + d·sin θ_{k-1}
θ_k = θ_{k-1}

wherein (x_{k-1}, y_{k-1}, θ_{k-1}) is the pose of the robot at time k-1, (x_k, y_k, θ_k) is the pose of the robot at time k, and d is the common mileage increment of the left and right wheels.
2. Circular-arc model
The two cases in which the left and right wheel mileages differ are symmetric, so the case where the left-wheel mileage is smaller than the right-wheel mileage is taken as an example for analysis, and the mileage model of the robot simplifies to FIG. 2. Therein, the pose of the robot at time k-1 in the global coordinate system is R_{k-1}(x_{k-1}, y_{k-1}, θ_{k-1}), the pose at time k is R_k(x_k, y_k, θ_k), the wheel spacing of the robot is l, the mileage increments of the left and right wheels are d_l, d_r, the radius and central angle of the arc traced by the left wheel are R and a, the straight-line distance between R_{k-1} and R_k is d, and the included angle ∠R_k R_{k-1} O' is b.
The odometer formula of the two-wheel differential-drive robot is:

θ_k = θ_{k-1} + (d_r - d_l)/l
x_k = x_{k-1} + d·cos(θ_{k-1} + (d_r - d_l)/(2l))
y_k = y_{k-1} + d·sin(θ_{k-1} + (d_r - d_l)/(2l))

wherein (x_{k-1}, y_{k-1}, θ_{k-1}) is the pose of the robot at time k-1, (x_k, y_k, θ_k) is the pose of the robot at time k, d_l, d_r are respectively the mileage increments of the left and right wheels between two samplings, and l is the wheel spacing of the robot.
Because the difference between the mileage increments of two successive samplings is small, d can be approximated as

d ≈ (d_l + d_r)/2

so the odometer formula of the two-wheeled differential-drive robot can be expressed as:

x_k = x_{k-1} + (d_l + d_r)/2·cos(θ_{k-1} + (d_r - d_l)/(2l))
y_k = y_{k-1} + (d_l + d_r)/2·sin(θ_{k-1} + (d_r - d_l)/(2l))
θ_k = θ_{k-1} + (d_r - d_l)/l

Finally, through the differential-drive odometer formula, the robot's photoelectric encoders and gyroscope record the state quantities d_l, d_r, i.e. the left and right wheel mileage data, for data fusion.
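A minimal sketch of the odometer update from the wheel mileage increments d_l, d_r; the small-increment approximation d ≈ (d_l + d_r)/2 and the midpoint heading θ + Δθ/2 are assumptions consistent with the approximated formula above, and the names are illustrative:

```python
import math

def odometry_update(pose, d_l, d_r, l):
    """Odometer update of a two-wheel differential-drive robot from the
    left/right mileage increments d_l, d_r (wheel spacing l)."""
    x, y, theta = pose
    d = (d_l + d_r) / 2.0          # chord length approximation
    dtheta = (d_r - d_l) / l       # heading change
    if abs(d_r - d_l) < 1e-12:     # tangent model: parallel tracks
        x += d * math.cos(theta)
        y += d * math.sin(theta)
    else:                          # arc model, small-increment chord
        x += d * math.cos(theta + dtheta / 2.0)
        y += d * math.sin(theta + dtheta / 2.0)
    theta += dtheta
    return (x, y, theta)
```

Equal increments reduce to the tangent model; unequal increments follow the approximated circular-arc formula.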
The third step: establish a relation model between the robot coordinate system and the global coordinate system. All coordinate systems are Cartesian, i.e. the coordinate axes obey the right-hand rule. When data from one coordinate system are applied in another coordinate system, coordinate conversion is required; this conversion can be described and implemented by a three-dimensional translation plus a three-dimensional rotation. Taking the laser sensor as an example, let its coordinates in the robot coordinate system be (t_x, t_y, t_z), with rotations of γ, α and β radians about the z, x and y axes in turn. The coordinates (x_s, y_s, z_s) of any point in the laser coordinate system are then related to its coordinates (x_r, y_r, z_r) in the robot coordinate system by

[x_r, y_r, z_r]^T = R_y(β)·R_x(α)·R_z(γ)·[x_s, y_s, z_s]^T + [t_x, t_y, t_z]^T

where R_z, R_x and R_y are the elementary rotation matrices about the respective axes.
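The transform can be sketched as follows. Note the composition order R_y·R_x·R_z (z-rotation applied first) is an assumption, since the text only states the rotations are applied "in turn"; all names are illustrative:

```python
import numpy as np

def rot_z(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def laser_to_robot(p_s, t, gamma, alpha, beta):
    """Transform a point from the laser frame to the robot frame:
    rotate about z, then x, then y, then translate by t."""
    R = rot_y(beta) @ rot_x(alpha) @ rot_z(gamma)
    return R @ np.asarray(p_s, dtype=float) + np.asarray(t, dtype=float)
```

With all angles zero the transform reduces to the pure translation by (t_x, t_y, t_z).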
Laser radar positioning:
the first step is as follows: the initial state of the robot is obtained through the laser radar, the laser radar rotates by itself, the flying speed of laser emission and detection is calculated, the distance measurement of the surrounding environment can be carried out, the sampling frequency is 20Hz, and the rotating speed is 1200rpm.
The second step: the acquired lidar data are filtered through the adaptive unscented Kalman filter (AUKF); the state equations of the nonlinear system are:

X_k = f(X_{k-1}) + W_{k-1}
Z_k = h(X_k) + V_k

where h is the observation function, f is the state-transition function, X_k, Z_k are respectively the state variable and the observation variable at time k, and W, V denote the process and measurement noise.
the algorithm steps of the Kalman algorithm UKF are as follows:
(1) Sampling of each Sigma Point
Get each Sigma point at time kPredicted weight W corresponding to state prediction i m And W i c Set of points { χ k/k (i) And the symbol of 2n +1 is Sigma point sampling number.
Figure BDA0002990489640000102
Sigma point sampling formula is
Figure BDA0002990489640000103
Defining: alpha is a scaling factor with a positive value, beta is a parameter for introducing f (#) high order, 2n +1 is the sampling number of Sigma points, and W i m ,W i c Respectively weighted by the ith mean value and the covariance, and 2n +1 symmetric points to approximate the mean value as
Figure BDA0002990489640000104
And covariance of P x And λ is used to control the distance of each point from the mean.
(2) Estimate the sampling points through the state equation:

χ_{k+1/k}^(i) = f(χ_{k/k}^(i))

(3) From the sampling points χ_{k+1/k}^(i) and the weights W_i^m, W_i^c, pre-estimate the mean and the covariance matrix P_{k+1/k}:

X̄_{k+1/k} = Σ_i W_i^m·χ_{k+1/k}^(i)
P_{k+1/k} = Σ_i W_i^c·(χ_{k+1/k}^(i) - X̄_{k+1/k})(χ_{k+1/k}^(i) - X̄_{k+1/k})^T + Q_k
(4) Measure the sampling points estimated in (2) through the observation function:

Z_{k+1/k}^(i) = h(χ_{k+1/k}^(i))

(5) Pre-estimate the measurement mean and the covariances, respectively:

Z̄_{k+1/k} = Σ_i W_i^m·Z_{k+1/k}^(i)
P_{z,z} = Σ_i W_i^c·(Z_{k+1/k}^(i) - Z̄_{k+1/k})(Z_{k+1/k}^(i) - Z̄_{k+1/k})^T + R_k
P_{x,z} = Σ_i W_i^c·(χ_{k+1/k}^(i) - X̄_{k+1/k})(Z_{k+1/k}^(i) - Z̄_{k+1/k})^T

P_{x,z} is the cross-covariance matrix of the pose state vector and the measured signal vector, and P_{z,z} is the covariance matrix of the measured signal vector.
(6) Update through the measurement equation:

K_{k+1} = P_{x,z}·P_{z,z}^{-1}
X̂_{k+1} = X̄_{k+1/k} + K_{k+1}·(Z_{k+1} - Z̄_{k+1/k})
P_{k+1} = P_{k+1/k} - K_{k+1}·P_{z,z}·K_{k+1}^T

Then, according to the robot motion model and the mean and covariance of the previous moment, the mean and covariance of the current moment are estimated through the Sigma-point sampling and propagation of (1)-(4). The estimated values are updated with the observed sensor data through steps (5) and (6) to obtain the optimal state estimate. Steps (1)-(6) are repeated at every sampling to obtain the optimal state estimate online.
Finally, the positioning data [x, y, θ]^T acquired by the current lidar SLAM are recorded, where (x, y) are the coordinates of the robot in the map coordinate system and θ is the attitude angle of the robot in the map coordinate system.
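The unscented-transform core of steps (1)-(3) can be sketched in Python as below. This is a simplified illustration with assumed default parameters (α, β, κ); the adaptive re-weighting that distinguishes the AUKF from the plain UKF is not reproduced here:

```python
import numpy as np

def sigma_points(x, P, alpha=0.1, beta=2.0, kappa=0.0):
    """Generate 2n+1 sigma points and their mean/covariance weights
    via the standard unscented transform."""
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)   # matrix square root (lower)
    pts = np.vstack([x, x + S.T, x - S.T])  # shape (2n+1, n)
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    return pts, Wm, Wc

def unscented_predict(x, P, f, Q, **kw):
    """Propagate sigma points through the state function f and recombine
    them into the predicted mean and covariance (process noise Q)."""
    pts, Wm, Wc = sigma_points(x, P, **kw)
    fx = np.array([f(p) for p in pts])
    x_pred = Wm @ fx
    d = fx - x_pred
    P_pred = (Wc[:, None] * d).T @ d + Q
    return x_pred, P_pred
```

For a linear identity model the prediction leaves the mean unchanged and adds Q to the covariance, which is a quick sanity check of the weight normalization.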
The third step: a grid map is built using the Hector SLAM algorithm, where P(s = 1) represents the probability that a grid cell P is occupied, and the probability that the cell is empty is P(s = 0) = 1 - P(s = 1). From the Bayesian formula, the odds of a cell can be written as

Odd(s) = P(s = 1)/P(s = 0)

Letting S = logOdd(s), the measurement state update formula is obtained:

S+ = S- + lomeas

wherein S- and S+ respectively denote S before and after the measurement, and lomeas ∈ {lofree, loccu}.
Setting the grid resolution to r, i.e. a map whose cells have side length r, the relation between the grid coordinates [i, j] and the real coordinates [x, y] is

[i, j] = [ceil(x/r), ceil(y/r)]

where ceil() returns the smallest integer greater than or equal to the given value.
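A minimal sketch of the log-odds cell update and the coordinate mapping above; the numeric lofree/loccu increments are assumed example values, not specified in the text:

```python
import math

LO_OCCU = 0.9    # assumed example log-odds increment for a hit
LO_FREE = -0.7   # assumed example log-odds increment for free space

def world_to_grid(x, y, r):
    """Map real coordinates to grid indices for cell side length r."""
    return (math.ceil(x / r), math.ceil(y / r))

def logodds_update(S, hit):
    """S+ = S- + lomeas, with lomeas in {lofree, loccu}."""
    return S + (LO_OCCU if hit else LO_FREE)

def occupancy_prob(S):
    """Recover P(s = 1) from the log-odds S = log(P(s=1)/P(s=0))."""
    return 1.0 / (1.0 + math.exp(-S))
```

Repeated hits drive the cell's probability toward 1, repeated free observations toward 0, which is the usual occupancy-grid behavior.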
And calculating a set of non-obstacle grid points by a Bresenham algorithm according to the grid coordinates of the obstacle points and the grid coordinates of the robot.
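The non-obstacle cells along each beam can be enumerated with a standard integer Bresenham line, sketched below (an illustrative implementation of the well-known algorithm, not code from the patent):

```python
def bresenham(x0, y0, x1, y1):
    """Grid cells on the line from (x0, y0) to (x1, y1); used to mark
    the cells between the robot and an obstacle endpoint as free."""
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    cells = []
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy
    return cells
```

In the mapping step every returned cell except the last would typically receive a lofree update, and the endpoint a loccu update.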
For any grid cell P, let M(P) = P(s = 1). For a continuous map coordinate P_m(x, y), its occupation probability M(P_m) and its gradient ∇M(P_m) = (∂M/∂x, ∂M/∂y) are approximated by bilinear interpolation as

M(P_m) ≈ (y - y_0)/(y_1 - y_0)·[(x - x_0)/(x_1 - x_0)·M(P_11) + (x_1 - x)/(x_1 - x_0)·M(P_01)] + (y_1 - y)/(y_1 - y_0)·[(x - x_0)/(x_1 - x_0)·M(P_10) + (x_1 - x)/(x_1 - x_0)·M(P_00)]

∂M/∂x ≈ (y - y_0)/(y_1 - y_0)·(M(P_11) - M(P_01)) + (y_1 - y)/(y_1 - y_0)·(M(P_10) - M(P_00))
∂M/∂y ≈ (x - x_0)/(x_1 - x_0)·(M(P_11) - M(P_10)) + (x_1 - x)/(x_1 - x_0)·(M(P_01) - M(P_00))

wherein P_00, P_01, P_10, P_11 are the grid cells adjacent to P_m.
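Assuming unit cell spacing (x_1 - x_0 = y_1 - y_0 = 1) and a map indexed m[i][j] with i along x, the interpolation and its gradient can be sketched as:

```python
import math

def bilinear(m, x, y):
    """Occupancy at continuous grid coordinates (x, y), interpolated
    from the four neighbouring cells of map m (indexed m[i][j])."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    tx, ty = x - x0, y - y0
    m00, m10 = m[x0][y0], m[x0 + 1][y0]
    m01, m11 = m[x0][y0 + 1], m[x0 + 1][y0 + 1]
    return (ty * (tx * m11 + (1 - tx) * m01)
            + (1 - ty) * (tx * m10 + (1 - tx) * m00))

def bilinear_gradient(m, x, y):
    """Spatial gradient (dM/dx, dM/dy) of the interpolated occupancy,
    as used by the Gauss-Newton scan matcher."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    tx, ty = x - x0, y - y0
    m00, m10 = m[x0][y0], m[x0 + 1][y0]
    m01, m11 = m[x0][y0 + 1], m[x0 + 1][y0 + 1]
    dmdx = ty * (m11 - m01) + (1 - ty) * (m10 - m00)
    dmdy = tx * (m11 - m10) + (1 - tx) * (m01 - m00)
    return dmdx, dmdy
```

The gradient is what makes the occupancy field differentiable enough for the Gauss-Newton matching step that follows.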
Hector SLAM realizes positioning by optimizing the match between the laser beams and the real map data, and the Gauss-Newton method is adopted to match the radar scan data, avoiding local extrema. Through the Hector SLAM algorithm, the pose ξ = [x, y, θ]^T in the global coordinate system is sought that minimizes

E(ξ) = Σ_{i=1}^{n} [1 - M(S_i(ξ))]²

wherein S_i(ξ) denotes the position of the i-th laser endpoint in the global coordinate system when the laser center is at pose ξ. Writing the position of the i-th laser endpoint in the laser sensor coordinate system as s_i = [s_ix, s_iy]^T,

S_i(ξ) = [[cos θ, -sin θ], [sin θ, cos θ]]·s_i + [x, y]^T

and M(S_i(ξ)) represents the occupation probability of that point. Solving by the Gauss-Newton method gives the pose update

Δξ = H^{-1}·Σ_{i=1}^{n} [∇M(S_i(ξ))·∂S_i(ξ)/∂ξ]^T·[1 - M(S_i(ξ))]

wherein

H = Σ_{i=1}^{n} [∇M(S_i(ξ))·∂S_i(ξ)/∂ξ]^T·[∇M(S_i(ξ))·∂S_i(ξ)/∂ξ]

Thus M(S_i(ξ + Δξ)) can be approximated by the first-order expansion

M(S_i(ξ + Δξ)) ≈ M(S_i(ξ)) + ∇M(S_i(ξ))·(∂S_i(ξ)/∂ξ)·Δξ

wherein

∂S_i(ξ)/∂ξ = [[1, 0, -sin θ·s_ix - cos θ·s_iy], [0, 1, cos θ·s_ix - sin θ·s_iy]]
The fourth step: the mileage data variables and the lidar positioning variables are combined to obtain the state quantity X = [x, y, θ, d_l, d_r]^T. The state-transition equation of the robot system then follows the odometer model given above, and the lidar observation supplies the measurement [x, y, θ]^T. The pose change between the robot coordinate system and the global coordinate system is then carried out to acquire the state quantity in the global coordinate system.
In summary, the positioning of the laser SLAM is considered relatively stable, and in particular its angle estimate is relatively accurate in an indoor environment, whereas the actual measurement characteristics of the odometer are subject to variation. Taking the adaptive vector D_k = [0.98, 0.98, 0.99, 0.96, 0.96]^T, the positioning trajectories of the different algorithms are shown in FIG. 3.
As can be seen from FIG. 3, the AUKF improves on the UKF in both positioning accuracy and trajectory smoothness, and both the UKF and the AUKF approximately follow the ground-truth trajectory by suppressing the accumulated error.
In addition, statistical analysis was performed on the position and attitude errors when using dead reckoning and Hector SLAM alone and when introducing the UKF and AUKF; the probability densities of the position and angle errors are shown in FIGS. 4 and 5, and the mean values are given in Table 1.
TABLE 1 error mean value comparison table
Through comparative analysis, using the AUKF (adaptive unscented Kalman filter) improves the position accuracy by 56% and the angle accuracy by 40% compared with laser positioning alone; compared with the traditional UKF fusion algorithm, the position accuracy is improved by about 14% and the angle accuracy by 68%. From the simulation results, fusing the Hector SLAM result with the odometer data through the AUKF can effectively improve the positioning result.
The foregoing illustrates and describes the principles, general features, and advantages of the present invention. It should be understood by those skilled in the art that the above embodiments do not limit the scope of the present invention in any way, and all technical solutions obtained by using equivalent substitution methods fall within the scope of the present invention.
Parts not described in the present invention are the same as, or can be implemented using, the prior art.

Claims (8)

1. An autonomous positioning control method for a home service robot, characterized by comprising dead reckoning and SLAM radar positioning; wherein the dead reckoning step comprises: (1.1) calculating a discrete motion model of the robot; (1.2) analyzing a tangent model and a circular arc model; (1.3) establishing a relation model between the robot coordinate system and the global coordinate system; and the SLAM radar positioning step comprises: (2.1) acquiring the initial state of the robot through a laser radar; (2.2) filtering the acquired laser radar data through an adaptive unscented Kalman algorithm; (2.3) establishing a rasterized map by using the Hector SLAM algorithm; and (2.4) combining the mileage data variable and the laser radar positioning variable to obtain the state quantity.
2. The home service robot autonomous positioning control method according to claim 1, characterized in that in step (1.1), the robot is a two-wheeled differential-drive robot, and by establishing a global coordinate system and a robot coordinate system, the speed-based kinematic model of the robot is expressed as:
v = (v_l + v_r)/2
w = (v_r - v_l)/l
wherein v is the linear velocity of the robot, w is the rotational speed of the robot, l is the wheel spacing of the robot, v_l, v_r are the linear velocities of the left and right driving wheels, and x, y and θ are respectively the abscissa, the ordinate and the angle of the robot center in the global coordinate system;
in a discrete system, assuming that the left and right wheel speeds and the attitude angle are approximately unchanged over a sampling interval, discretization yields the discrete kinematic formula:
x_k = x_{k-1} + T_s · ((v_l + v_r)/2) · cos(θ_{k-1})
y_k = y_{k-1} + T_s · ((v_l + v_r)/2) · sin(θ_{k-1})
θ_k = θ_{k-1} + T_s · (v_r - v_l)/l
wherein (x_{k-1}, y_{k-1}, θ_{k-1}) is the pose of the robot at time k-1, (x_k, y_k, θ_k) is the pose of the robot at time k, T_s is the sampling time, and v_l, v_r are respectively the left and right wheel speeds.
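As a concrete illustration (a sketch in Python, not part of the claims; the function and variable names are ours), one discrete kinematic step under the constant-speed, constant-heading assumption above can be written as:

```python
import math

def step_pose(x, y, theta, v_l, v_r, l, T_s):
    """One discrete kinematic step of a two-wheel differential-drive robot.

    Assumes, as in the claim, that the wheel speeds and the attitude angle
    are approximately constant over the sampling interval T_s.
    """
    v = (v_l + v_r) / 2.0      # linear velocity of the robot centre
    w = (v_r - v_l) / l        # rotational speed, l = wheel spacing
    return (x + T_s * v * math.cos(theta),
            y + T_s * v * math.sin(theta),
            theta + T_s * w)
```

For example, driving both wheels at 1 m/s for 0.1 s from the origin advances the pose 0.1 m along the x-axis with no rotation.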
3. The home service robot autonomous positioning control method according to claim 1, characterized in that in step (1.2), if the left and right wheel speeds are considered unchanged over a short time, the tracks travelled by the left and right wheels are concentric circular arcs or parallel line segments; two models can therefore be analyzed, namely a tangent model and a circular arc model; wherein:
(1) Tangent model
When the left wheel mileage increment equals the right wheel mileage increment, i.e. the driving tracks of the left and right wheels are parallel line segments, the kinematic formula follows directly:
x_k = x_{k-1} + d · cos(θ_{k-1})
y_k = y_{k-1} + d · sin(θ_{k-1})
θ_k = θ_{k-1}
wherein (x_{k-1}, y_{k-1}, θ_{k-1}) is the pose of the robot at time k-1, (x_k, y_k, θ_k) is the pose of the robot at time k, and d is the mileage value increment of the left and right wheels;
(2) Circular arc model
The mileage model of the robot is simplified as follows: the pose of the robot at time k-1 in the global coordinate system is R_{k-1}(x_{k-1}, y_{k-1}, θ_{k-1}), the pose at time k is R_k(x_k, y_k, θ_k), the wheel spacing of the robot is l, the mileage increments of the left and right wheels are d_l, d_r, the radius and angle of the arc formed by the left wheel mileage are R and a, the straight-line distance between R_{k-1} and R_k is d, and the included angle ∠R_k R_{k-1} O' is b;
the two-wheel differential-drive robot odometer formula is:
x_k = x_{k-1} + d · cos(θ_{k-1} + (d_r - d_l)/(2l))
y_k = y_{k-1} + d · sin(θ_{k-1} + (d_r - d_l)/(2l))
θ_k = θ_{k-1} + (d_r - d_l)/l
wherein (x_{k-1}, y_{k-1}, θ_{k-1}) is the pose of the robot at time k-1, (x_k, y_k, θ_k) is the pose of the robot at time k, d_l, d_r are respectively the mileage increments of the left and right wheels between two samplings, and l is the wheel spacing of the robot;
because the difference of the mileage increments between two samplings is small, d can be approximated as
d ≈ (d_l + d_r)/2
so that the odometer formula of the two-wheeled differential-drive robot can be expressed as:
x_k = x_{k-1} + ((d_l + d_r)/2) · cos(θ_{k-1} + (d_r - d_l)/(2l))
y_k = y_{k-1} + ((d_l + d_r)/2) · sin(θ_{k-1} + (d_r - d_l)/(2l))
θ_k = θ_{k-1} + (d_r - d_l)/l
finally, the state quantities d_l, d_r, i.e. the mileage data of the left and right wheels, are recorded through the differential-drive robot odometer formula, the robot's photoelectric encoder and the gyroscope, for use in data fusion.
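A minimal Python sketch of the simplified odometer update (our naming, not part of the claims; it uses the chord approximation d ≈ (d_l + d_r)/2 and a mid-point heading, the common form of this model):

```python
import math

def odom_step(x, y, theta, d_l, d_r, l):
    """Pose update from left/right wheel mileage increments d_l, d_r,
    using the chord approximation d ≈ (d_l + d_r) / 2."""
    d = (d_l + d_r) / 2.0
    dtheta = (d_r - d_l) / l          # heading change over the interval
    heading = theta + dtheta / 2.0    # mid-point heading of the chord
    return (x + d * math.cos(heading),
            y + d * math.sin(heading),
            theta + dtheta)
```

With equal left and right increments the update degenerates to the tangent model: the robot moves straight ahead with no heading change.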
4. The home service robot autonomous positioning control method according to claim 1, characterized in that in step (1.3), all coordinate systems adopt Cartesian coordinate systems, and applying data from one coordinate system in another involves a coordinate transformation; the conversion relation can be described and realized through a three-dimensional translation and a three-dimensional rotation; specifically, assuming that the coordinate of the laser sensor in the robot coordinate system is (t_x, t_y, t_z), and that it rotates by γ, α and β radians around the z-axis, the x-axis and the y-axis in turn, the relationship between the coordinates (x_s, y_s, z_s) of any point in the laser coordinate system and its coordinates (x_r, y_r, z_r) in the robot coordinate system is expressed as:
[x_r, y_r, z_r]^T = R_y(β) · R_x(α) · R_z(γ) · [x_s, y_s, z_s]^T + [t_x, t_y, t_z]^T
wherein R_z(γ), R_x(α) and R_y(β) are the elementary rotation matrices about the z-, x- and y-axes.
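The laser-to-robot transformation can be sketched as follows (an illustrative sketch, assuming the three rotations are composed as R_y(β)·R_x(α)·R_z(γ), i.e. applied in turn about z, x, y; the original figure may fix a different composition order):

```python
import numpy as np

def laser_to_robot(p_s, t, gamma, alpha, beta):
    """Map a point p_s from the laser frame to the robot frame, given the
    mount translation t = (t_x, t_y, t_z) and rotations about z, x, y."""
    cg, sg = np.cos(gamma), np.sin(gamma)
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    Rz = np.array([[cg, -sg, 0.0], [sg, cg, 0.0], [0.0, 0.0, 1.0]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, ca, -sa], [0.0, sa, ca]])
    Ry = np.array([[cb, 0.0, sb], [0.0, 1.0, 0.0], [-sb, 0.0, cb]])
    # rotations applied "in turn": z first, then x, then y (assumed order)
    R = Ry @ Rx @ Rz
    return R @ np.asarray(p_s, dtype=float) + np.asarray(t, dtype=float)
```

With all three angles zero the transform reduces to the pure translation by the sensor mount offset.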
5. The home service robot autonomous positioning control method according to claim 1, characterized in that in step (2.1), the laser radar rotates while measuring the time of flight of emitted and detected laser pulses, thereby ranging the surrounding environment, wherein the sampling frequency is 20 Hz and the rotating speed is 1200 rpm.
6. The home service robot autonomous positioning control method according to claim 1, characterized in that in step (2.2), in a nonlinear environment, the state equation of the nonlinear system is:
X_k = f(X_{k-1}) + W_{k-1}
Z_k = h(X_k) + V_k
where h is the observation function, f is the state transfer function, X_k, Z_k are respectively the state variable and the observation variable at time k, and W, V are the process and measurement noises;
the unscented Kalman algorithm comprises the following steps:
(2.21) sampling of the Sigma points
Obtain the prediction weights W_i^m and W_i^c corresponding to the state prediction of each Sigma point at time k, and the point set {χ_{k/k}(i), i = 1, …, 2n+1}, where 2n+1 is the number of Sigma point samples;
W_0^m = λ/(n+λ)
W_0^c = λ/(n+λ) + (1 - α² + β)
W_i^m = W_i^c = 1/(2(n+λ)),  i = 1, …, 2n
the Sigma point sampling formula is
χ_{k/k}(0) = x̄
χ_{k/k}(i) = x̄ + (√((n+λ)P_x))_i,  i = 1, …, n
χ_{k/k}(i) = x̄ - (√((n+λ)P_x))_{i-n},  i = n+1, …, 2n
defining: α is a positive scaling factor, β is a parameter introduced for the higher-order terms of f(·), 2n+1 is the number of Sigma point samples, W_i^m, W_i^c are respectively the i-th mean and covariance weights, the 2n+1 symmetric points approximate the mean x̄ and the covariance P_x, and λ is a scaling parameter used to control the distance of each point from the mean;
(2.22) estimate the sample points through the state equation
χ_{k+1/k}(i) = f(χ_{k/k}(i))
(2.23) through the sampling points χ_{k+1/k}(i) and the weights W_i^m, W_i^c, pre-estimate the mean x̂_{k+1/k} and the covariance matrix P_{k+1/k} respectively:
x̂_{k+1/k} = Σ_i W_i^m · χ_{k+1/k}(i)
P_{k+1/k} = Σ_i W_i^c · (χ_{k+1/k}(i) - x̂_{k+1/k})(χ_{k+1/k}(i) - x̂_{k+1/k})^T + Q
(2.24) measure the sampling points corresponding to (2.22)
Z_{k+1/k}(i) = h(χ_{k+1/k}(i))
(2.25) estimate the measurement mean and the covariances respectively
ẑ_{k+1/k} = Σ_i W_i^m · Z_{k+1/k}(i)
P_z = Σ_i W_i^c · (Z_{k+1/k}(i) - ẑ_{k+1/k})(Z_{k+1/k}(i) - ẑ_{k+1/k})^T + R
P_{x,z} = Σ_i W_i^c · (χ_{k+1/k}(i) - x̂_{k+1/k})(Z_{k+1/k}(i) - ẑ_{k+1/k})^T
wherein P_{x,z} is the cross-covariance matrix of the pose state vector and the measurement vector, and P_z is the covariance matrix of the measurement vector;
(2.26) update through the measurement equation
K_{k+1} = P_{x,z} · P_z^{-1}
X̂_{k+1} = x̂_{k+1/k} + K_{k+1} · (Z_{k+1} - ẑ_{k+1/k})
P_{k+1} = P_{k+1/k} - K_{k+1} · P_z · K_{k+1}^T
then, according to the robot motion model and the mean and covariance of the previous moment, the mean and covariance of the current moment are estimated through the Sigma point sampling and propagation of steps (2.21)-(2.24); the estimate is updated with the observed sensor data through steps (2.25) and (2.26) to obtain the optimal state estimate; steps (2.21)-(2.26) are repeated at each sampling to obtain the optimal state estimate on line;
finally, the positioning data [x, y, θ]^T acquired by the current laser radar SLAM is recorded, where (x, y) are the coordinates of the robot in the map coordinate system and θ is the attitude angle of the robot in the map coordinate system.
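Steps (2.21)-(2.26) can be condensed into a short Python sketch (illustrative only: a minimal unscented transform with standard scaled sigma points, without the adaptive extension of the AUKF; all function names are ours):

```python
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Step (2.21): generate the 2n+1 scaled sigma points with their
    mean weights Wm and covariance weights Wc."""
    n = x.size
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)     # matrix square root
    pts = np.vstack([x, x + S.T, x - S.T])    # shape (2n+1, n)
    Wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)
    return pts, Wm, Wc

def ukf_step(x, P, z, f, h, Q, R):
    """One predict/update cycle following steps (2.22)-(2.26)."""
    pts, Wm, Wc = sigma_points(x, P)
    # (2.22) propagate the sigma points through the state equation
    X = np.array([f(p) for p in pts])
    x_pred = Wm @ X
    # (2.23) predicted mean/covariance, plus process noise Q
    P_pred = Q + sum(Wc[i] * np.outer(X[i] - x_pred, X[i] - x_pred)
                     for i in range(len(X)))
    # (2.24) map the propagated points through the observation function
    Z = np.array([h(p) for p in X])
    z_pred = Wm @ Z
    # (2.25) measurement covariance Pz and cross-covariance Pxz
    Pz = R + sum(Wc[i] * np.outer(Z[i] - z_pred, Z[i] - z_pred)
                 for i in range(len(Z)))
    Pxz = sum(Wc[i] * np.outer(X[i] - x_pred, Z[i] - z_pred)
              for i in range(len(X)))
    # (2.26) Kalman gain, state update and covariance update
    K = Pxz @ np.linalg.inv(Pz)
    x_new = x_pred + K @ (z - z_pred)
    P_new = P_pred - K @ Pz @ K.T
    return x_new, P_new
```

On a linear identity system this reduces to an ordinary Kalman update, which gives a quick sanity check of the weights and covariance bookkeeping.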
7. The home service robot autonomous positioning control method according to claim 1, characterized in that in step (2.3), a grid map is created by using the Hector SLAM algorithm; P(s=1) represents the probability that a grid cell P is occupied, so the probability that the cell is blank is P(s=0) = 1 - P(s=1); from the Bayesian formula it can be derived that
logOdd(s|z) = log( p(z|s=1) / p(z|s=0) ) + logOdd(s)
where logOdd(s) = log( P(s=1) / P(s=0) ); letting S = logOdd(s), the measurement state update formula is obtained:
S⁺ = S⁻ + lo_meas
wherein S⁻, S⁺ respectively represent S before and after the measurement, and lo_meas ∈ {lo_free, lo_occu};
setting the grid resolution to r, i.e. a map whose grid cells have side length r, the relation between the grid coordinates [i, j] and the real coordinates [x, y] is:
[i, j] = [ceil(x/r), ceil(y/r)]
where ceil() returns the smallest integer greater than or equal to the given value;
according to the grid coordinates of the obstacle points and the grid coordinates of the robot, a Bresenham algorithm is used for calculating a set of non-obstacle grid points;
for any grid cell P, let M(P) = P(s=1); then for a continuous map coordinate P_m(x, y), the occupation probability is approximated using bilinear interpolation, its occupation probability M(P_m) and its gradient ∇M(P_m) = (∂M/∂x, ∂M/∂y) being
M(P_m) ≈ ((y - y_0)/(y_1 - y_0)) · ( ((x - x_0)/(x_1 - x_0)) · M(P_11) + ((x_1 - x)/(x_1 - x_0)) · M(P_01) ) + ((y_1 - y)/(y_1 - y_0)) · ( ((x - x_0)/(x_1 - x_0)) · M(P_10) + ((x_1 - x)/(x_1 - x_0)) · M(P_00) )
∂M/∂x ≈ ((y - y_0)/(y_1 - y_0)) · (M(P_11) - M(P_01)) + ((y_1 - y)/(y_1 - y_0)) · (M(P_10) - M(P_00))
∂M/∂y ≈ ((x - x_0)/(x_1 - x_0)) · (M(P_11) - M(P_10)) + ((x_1 - x)/(x_1 - x_0)) · (M(P_01) - M(P_00))
wherein P_00, P_01, P_10, P_11 are the grid cells adjacent to P_m, with coordinates (x_0, y_0), (x_0, y_1), (x_1, y_0), (x_1, y_1) respectively;
the Hector SLAM realizes positioning by utilizing the comparison optimization of laser beams and map real data, and the Gaussian Newton method is adopted to match the radar scanning data, so that a local extreme value is avoided; inquiring a pose xi = [ x, y, theta ] in a global coordinate system through a HectrSLAM algorithm] T The following equation is minimized:
Figure FDA0003881740370000061
wherein S i (xi) represents the position of the ith laser end in the global coordinate system when the laser center is in the pose xi, and the position s of the ith laser end in the laser sensor coordinate system is set i =[s ix ,s iy ] T ,M(S i (ξ)) represents the probability of occupation of the point; solving by a Gauss Newton method to obtain:
Figure FDA0003881740370000062
wherein
Figure FDA0003881740370000063
Thus, the device is provided with
Figure FDA0003881740370000064
Can be approximately obtained
Figure FDA0003881740370000065
Wherein
Figure FDA0003881740370000066
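The log-odds map update and grid conversion above can be sketched in Python (illustrative only; the lo_occu/lo_free increments are assumed example values, not taken from the patent):

```python
import math

LO_OCCU = 0.9    # assumed log-odds increment for an occupied hit (example)
LO_FREE = -0.4   # assumed log-odds increment for a free cell (example)

def to_grid(x, y, r):
    """Real coordinates -> grid coordinates for resolution r,
    using the smallest integer >= value, as in the claim."""
    return math.ceil(x / r), math.ceil(y / r)

def update_cell(S, occupied):
    """Measurement state update S+ = S- + lo_meas in log-odds form."""
    return S + (LO_OCCU if occupied else LO_FREE)

def occupancy_prob(S):
    """Invert S = logOdd(s) back to the occupation probability P(s=1)."""
    return 1.0 / (1.0 + math.exp(-S))
```

Storing log-odds rather than probabilities makes each measurement update a single addition, and the probability can be recovered whenever the map is queried.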
8. The home service robot autonomous positioning control method according to claim 1, characterized in that in step (2.4), the mileage data variable and the laser radar positioning variable are combined to obtain the state quantity X = [x, y, θ, d_l, d_r]^T; at this time the observation is the pose given by the laser SLAM,
Z_k = [x, y, θ]^T
according to the robot system formula
X_k = f(X_{k-1}) + W_{k-1},  Z_k = h(X_k) + V_k
the pose change between the robot coordinate system and the global coordinate system is then carried out to obtain the state quantity in the global coordinate system; wherein: X represents the state variable, (x, y) represents the coordinates of the robot center in the global coordinate system, θ represents the angle of the robot center in the global coordinate system, (d_l, d_r) represents the mileage data of the left and right wheels, l represents the wheel spacing of the robot, X_k represents the state variable at time k, and Z_k represents the observation variable at time k.
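In the plane, the pose change between the robot and global coordinate systems in step (2.4) amounts to a rotation by θ followed by a translation by (x, y); a minimal sketch (our naming, not part of the claims):

```python
import math

def robot_to_global(pose, p_r):
    """Transform a point p_r = (x_r, y_r) given in the robot coordinate
    system into the global coordinate system, where pose = (x, y, theta)
    is the robot pose in the global frame."""
    x, y, theta = pose
    xr, yr = p_r
    c, s = math.cos(theta), math.sin(theta)
    return (x + c * xr - s * yr, y + s * xr + c * yr)
```

For example, a point one metre ahead of a robot at (1, 2) facing +90° lies at (1, 3) in the global frame.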
CN202110330445.6A 2021-03-24 2021-03-24 Autonomous positioning control method for home service robot Active CN112947481B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110330445.6A CN112947481B (en) 2021-03-24 2021-03-24 Autonomous positioning control method for home service robot


Publications (2)

Publication Number Publication Date
CN112947481A CN112947481A (en) 2021-06-11
CN112947481B true CN112947481B (en) 2022-11-15

Family

ID=76227067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110330445.6A Active CN112947481B (en) 2021-03-24 2021-03-24 Autonomous positioning control method for home service robot

Country Status (1)

Country Link
CN (1) CN112947481B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114290348A (en) * 2022-01-13 2022-04-08 山东大学 End effector for tunnel detection robot, detection robot and control method thereof
CN115235450A (en) * 2022-07-05 2022-10-25 中国科学院深圳先进技术研究院 Self-adaptive smoothing method and device for constructing grid map by laser radar

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008083777A (en) * 2006-09-26 2008-04-10 Tamagawa Seiki Co Ltd Method and device for guiding unmanned carrier
CN108195376A (en) * 2017-12-13 2018-06-22 天津津航计算技术研究所 Small drone Camera calibration method
CN108955688A (en) * 2018-07-12 2018-12-07 苏州大学 Two-wheel differential method for positioning mobile robot and system
CN109186601A (en) * 2018-07-05 2019-01-11 南京理工大学 A kind of laser SLAM algorithm based on adaptive Unscented kalman filtering
CN110702091A (en) * 2019-07-24 2020-01-17 武汉大学 High-precision positioning method for moving robot along subway rail
CN110895146A (en) * 2019-10-19 2020-03-20 山东理工大学 Synchronous positioning and map construction method for mobile robot


Also Published As

Publication number Publication date
CN112947481A (en) 2021-06-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant