CN116026335B - Mobile robot positioning method and system suitable for unknown indoor environment


Info

Publication number
CN116026335B
CN116026335B (application CN202211675418.3A)
Authority
CN
China
Prior art keywords
mobile robot
acquiring
state information
point
matching
Prior art date
Legal status
Active
Application number
CN202211675418.3A
Other languages
Chinese (zh)
Other versions
CN116026335A (en)
Inventor
袁旺
竺清芸
张戈猛
黄纪杰
刘勇华
苏春翌
Current Assignee
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date
Filing date
Publication date
Application filed by Guangdong University of Technology
Priority to CN202211675418.3A
Publication of CN116026335A
Application granted
Publication of CN116026335B

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application relates to the technical field of mobile robot positioning, and in particular to a mobile robot positioning method and system suitable for an unknown indoor environment, wherein the method comprises the following steps. Step A: acquiring motor displacement motion information through the mobile robot, and calculating and acquiring first state information of the mobile robot by combining a kinematic model. Step B: acquiring images through the mobile robot, matching two continuous frames of image information, and calculating and acquiring second state information of the mobile robot. Step C: acquiring laser data through the mobile robot, matching the laser data at the current moment and the next moment, and calculating and acquiring third state information of the mobile robot. Step D: carrying out data fusion on the acquired first state information, second state information and third state information by adopting an extended Kalman filtering algorithm to obtain the real-time state information of the mobile robot. The application can realize high-precision positioning of a wheeled mobile robot in an unknown indoor environment.

Description

Mobile robot positioning method and system suitable for unknown indoor environment
Technical Field
The application relates to the technical field of mobile robot positioning, in particular to a mobile robot positioning method and system suitable for an unknown indoor environment.
Background
Positioning is a basic requirement for autonomous navigation of an indoor mobile robot. Most current reliable positioning methods rely on a known environment: two-dimensional code labels, electronic tags, communication base stations and the like must be set up in the working environment in advance, or a known map must be provided for real-time map matching. Pursuing accurate positioning in an unknown environment helps widen the range of applications in which mobile robots can operate.
Accurately acquiring the motion displacement of a mobile robot is particularly important when it works in an unknown indoor environment lacking prior information and communication signals. The wheel odometer is a widely applied relative positioning method, but this way of estimating the pose from a kinematic model inevitably produces errors due to wheel slip on the ground, load deformation of mechanical components and the like. Positioning methods based on visual information use the rich texture of the environment for feature matching to obtain the displacement of the mobile robot and thus do not depend on the robot's kinematic model, but they are often affected by the illumination conditions in the environment and suffer from large random errors in the observation process; positioning methods based on laser information do not depend on illumination and offer high detection accuracy, but the sensed information is sparse. In summary, since a single positioning mode can hardly achieve accurate positioning of a mobile robot in an unknown indoor environment, this patent proposes a multi-source information fusion positioning method and system for mobile robots in unknown indoor environments.
Disclosure of Invention
The application aims to provide a mobile robot positioning method and system suitable for an unknown indoor environment, which achieve accurate positioning in an unknown indoor environment by adopting a multi-source information fusion positioning method that combines the advantages of each sensor.
To achieve the purpose, the application adopts the following technical scheme:
a mobile robot positioning method suitable for unknown indoor environment comprises the following steps:
step A: acquiring motor displacement motion information through a mobile robot, and calculating and acquiring first state information of the mobile robot by combining a kinematic model;
Step B: acquiring images through a mobile robot, reading image information of two continuous frames of images for matching analysis, and calculating and acquiring second state information of the mobile robot based on an observation model;
step C: acquiring laser data through a mobile robot, matching the laser data at the current moment and the next moment, and acquiring third state information of the mobile robot based on calculation of a laser observation model;
step D: and carrying out data fusion by adopting an extended Kalman filtering algorithm through the first state information acquired based on the kinematic model, the second state information and the third state information acquired based on the observation model to acquire the real-time state information of the mobile robot.
Preferably, the step a specifically includes the following steps:
Step A1: in the moving process of the mobile robot, the number of pulses output by the photoelectric coding disc mounted on the left driving wheel is N_L, the number of pulses output by the photoelectric coding disc mounted on the right driving wheel is N_R, the resolution of the photoelectric coding disc is P lines/rad, the radius of a driving wheel is r, and the moving distance of a wheel is ΔS; the displacement increment ΔS_L of the left driving wheel and the displacement increment ΔS_R of the right driving wheel are obtained respectively as:
Step A2: based on the moving distances ΔS_l and ΔS_r of the left and right driving wheels of the mobile robot and the distance d between the left and right driving wheels, the rotation angle of the mobile robot is obtained as Δθ = (ΔS_r - ΔS_l)/d, and the moving distance of the mobile robot is ΔS = (ΔS_l + ΔS_r)/2;
Step A3: based on the kinematic model of the mobile robot, the state X(k+1) = [x(k+1), y(k+1), θ(k+1)]^T of the mobile robot at time k+1 is obtained from the rotation angle Δθ and the moving distance ΔS per unit time together with the pose [x(k), y(k), θ(k)]^T of the mobile robot at the current time k.
Therefore, the pose of the mobile robot at the time T can be obtained by carrying out displacement accumulation on the initial pose of the mobile robot.
Preferably, the step B specifically includes the following steps:
step B1: and respectively extracting ORB characteristic points of the two frames of images through judging functions, wherein the judging functions are as follows:
wherein p is a candidate feature point, circle(p) is the circle established with the candidate feature point p as its center, x is a point on circle(p), G(x) is the gray value of a point on the circumference, G(p) is the gray value of the candidate feature point, ξ is the gray-value threshold, and E represents the difference between the gray values of the pixels on the circumference and the gray value of the candidate point; if E is greater than or equal to the threshold ξ, p is a feature point;
step B2: acquiring BRIEF descriptors of each ORB feature point, describing feature attributes of the feature points by using the descriptors, and performing coarse matching on the BRIEF descriptors of the two frames of images by using a Hamming distance to obtain a coarse matching result;
step B3: screening the coarse matching result, selecting a corresponding ORB characteristic point pair according to the screened coarse matching result, and calculating the relative pose of two frames of images by using the EPnP algorithm by using the selected characteristic point pair;
step B4: according to the relative pose of the two frames of images, obtaining the projection error of the ORB characteristic points, and screening the ORB characteristic point pairs again according to the projection error;
step B5: and B4, repeatedly executing the step until the projection errors of all the remaining ORB characteristic point pairs are within the allowable range, and obtaining the final relative pose, thus completing one-time image matching.
Preferably, the step B2: the BRIEF descriptor for each ORB feature point is obtained, and the method specifically comprises the following steps:
Step B21: determine a circular neighborhood M with the candidate feature point as its center, select n feature point pairs in the neighborhood M, denote the selected feature point pairs as α_i and β_i (i = 1, 2, …, n), and obtain the following relationship:
wherein the BRIEF descriptor is a binary number whose bits take the values B_i;
Step B22: determining the direction of the characteristic point by adopting a gray centroid method comprises the following steps:
Step B221: first define the intermediate variables m_00, m_10 and m_01 as:
wherein M is a circular neighborhood with the feature point p as a circle center and R as a radius;
step B222: acquiring gray centroid C of neighborhood M:
step B223: acquiring an included angle between the ray pC and a coordinate axis y:
θ = arctan(m_01 / m_10)   (8)
the direction of the ray pC is the direction of the feature point p.
Preferably, the step B3: screening the coarse matching result, selecting a corresponding ORB characteristic point pair according to the screened coarse matching result, and calculating the relative pose of two frames of images by using the EPnP algorithm by using the selected characteristic point pair; the EPnP algorithm specifically comprises the following steps:
Step B31: based on the matched feature point sets {P_i, Q_i} (i = 1, 2, 3, …, n), obtain the centroids of the two sets of matched feature points separately:
step B32: calculating the barycenter removing coordinates of all the matching points:
step B33: according to the barycenter removing coordinates, a matrix H is obtained:
step B34: singular value decomposition is performed on equation (11):
H = UΣV^T   (12)
wherein U is a unitary matrix, Σ is a positive semi-definite diagonal matrix, and V^T is the conjugate transpose of the unitary matrix V;
step B35: based on the formula (9) and the formula (12), acquiring a relative rotation matrix and a translation vector of a camera coordinate system corresponding to the two frames of images, wherein the relative rotation matrix and the translation vector are as follows:
wherein R denotes the rotation matrix and t the translation vector; the two are the observed quantities of the visual observation model.
Preferably, the step C: the method comprises the steps of obtaining laser data through a mobile robot, matching the laser data of the current moment and the laser data of the next moment, and obtaining third state information of the mobile robot based on calculation of a laser observation model, wherein the method specifically comprises the following steps:
Step C1: scan matching is performed with the iterative closest point algorithm; first, the set of data points collected by the laser radar at time t is defined as {q_t} and the set of data points collected at time t+1 as {q_{t+1}}; then each point in the set {q_{t+1}} is coordinate-transformed and re-projected into the coordinate system XY_t, and the resulting set of projected points is recorded; the transformation relationship is as follows:
Step C2: then calculate the re-projection error e_{i,j}, i.e., the error between the i-th point of the set {q_t} and the j-th projected point, as given by the following formula:
In the projected point set, the point that minimizes the re-projection error e_{i,j} is the point closest to the i-th point of {q_t}, i.e., its matching point, as shown in the following formula:
Step C3: once the matching point is found, the re-projection error is then expressed in the following form:
Step C4: perform steps C1 through C3 for each point in the set {q_t}, find its matching point in the set {q_{t+1}}, and calculate the re-projection error between each pair of matching points; then accumulate the squares of the re-projection errors to construct a least-squares objective function, as shown in the following formula:
solving a rotation matrix R and a translation vector t which enable the objective function in the formula (18) to obtain the minimum value, wherein the rotation matrix R and the translation vector t are observables of the laser observation model.
In the step D, the kinematic model may be expressed as:
x_{k+1} = f(x_k, u_{k+1}) + w_{k+1}   (19)
wherein x_{k+1} denotes the state variable at time k+1, including position and velocity, x_k denotes the state variable at time k, u_{k+1} is the control input at time k+1, f(·) denotes the functional relation between the state variable x_{k+1} at time k+1 and the state variable x_k at time k, and w_{k+1} denotes the motion noise at time k+1;
the visual observation model and the laser observation model are uniformly expressed as:
z_{k+1} = h(x_k, x_{k+1}) + v_{k+1}   (20)
wherein z_{k+1} denotes the system observation at time k+1, h(·) denotes the functional relation between the state variables x_k and x_{k+1} and the observation z_{k+1}, and v_{k+1} denotes the observation noise at time k+1.
Preferably, the step D: and carrying out data fusion by adopting an extended Kalman filtering algorithm through the first state information acquired based on the kinematic model, the second state information and the third state information acquired based on the observation model to acquire the real-time state information of the mobile robot. The specific steps of a single filtering cycle in the extended Kalman filtering algorithm are as follows:
Step D1: predict the prior estimate of the state vector:
Step D2: expand the motion model into a linear equation using a first-order Taylor expansion:
wherein the first quantity denotes the posterior estimate of the state vector at time k and the second denotes the Jacobian matrix of the motion model, i.e., the derivative of the function f(·) with respect to x_k evaluated at the posterior state estimate and u_{k+1}:
Step D3: predict the covariance matrix of the prior estimate of the state vector:
wherein Q_{k+1} is the corresponding mathematical expectation, i.e., the covariance matrix of the motion noise;
Step D4: expand the observation model into a linear equation using a first-order Taylor expansion:
wherein the first quantity denotes the prior estimate of the state vector at time k+1 and the second denotes the Jacobian matrix of the observation model, i.e., the derivative of the function h(·) with respect to x_{k+1} evaluated at the prior estimate:
Step D5: based on the predicted prior covariance matrix, the computed observation Jacobian and the matrix R_{k+1}, update and calculate the Kalman gain matrix:
step D6: updating the state variable with the measured value to obtain a posterior estimated value of the state variable:
step D7: updating the covariance matrix to be the covariance matrix of the state variable posterior estimation value:
wherein I is an identity matrix with the same dimension as the covariance matrix;
The second state information and the third state information, acquired by the visual observation model and the laser observation model respectively, each take part in this filtering cycle to be fused with the first state information. Steps D1 to D7 are repeated in this way, so that the latest state of the mobile robot is continuously updated and obtained.
The mobile robot positioning system suitable for the unknown indoor environment adopts the mobile robot positioning method suitable for the unknown indoor environment, the system comprises mobile robot hardware and positioning software, the mobile robot hardware comprises a power supply component, a main control component, a motion component, a perception component and a display component, and the positioning software comprises a coding disc positioning module, a visual positioning module, a laser positioning module and an extended Kalman filtering fusion module;
the coding disc positioning module is used for acquiring motor displacement motion information through a photoelectric coding disc and combining a kinematic model to acquire first state information of the mobile robot through calculation;
the visual positioning module is used for acquiring images through the depth camera, reading image information of two continuous frames of images, and acquiring second state information of the mobile robot based on observation model calculation;
the laser positioning module is used for acquiring laser data through a laser radar, matching the laser data of the current moment and the next moment, and acquiring third state information of the mobile robot based on the observation model;
the extended Kalman filtering fusion module is used for acquiring the first state information, the second state information and the third state information, and performing data fusion by adopting an extended Kalman filtering algorithm to acquire real-time state information of the mobile robot.
One of the above technical solutions has the following beneficial effects: the application realizes high-precision positioning of the wheel type mobile robot in an unknown indoor environment by fusing various sensor information.
Drawings
FIG. 1 is a schematic diagram of the software algorithm principle of the method of the present application;
FIG. 2 is a schematic flow chart of image matching of the method of the present application;
FIG. 3 is a diagram of the hardware components of the system of the present application;
Detailed Description
The technical scheme of the application is further described below by the specific embodiments with reference to the accompanying drawings.
A mobile robot positioning method suitable for unknown indoor environment comprises the following steps:
Step A: acquiring motor displacement motion information through a mobile robot, and calculating and acquiring first state information of the mobile robot by combining a kinematic model. Step B: acquiring images through a mobile robot, reading image information of two continuous frames of images, and acquiring second state information of the mobile robot based on observation model calculation, wherein the image information includes a rotation matrix and a translation vector. Step C: acquiring laser data through the mobile robot, matching the laser data at the current moment and the next moment, and acquiring third state information of the mobile robot based on the observation model. Step D: carrying out data fusion by adopting an extended Kalman filtering algorithm on the first state information acquired based on the kinematic model and the second and third state information acquired based on the observation models, to acquire the real-time state information of the mobile robot.
At present, mobile robots mostly adopt a single positioning mode, including wheel odometry, positioning based on visual information, and positioning based on a laser sensor, and each of these modes has its shortcomings. Wheel odometry is the most widely used relative positioning method at present: the displacement of the mobile robot is detected by photoelectric encoders mounted on the wheels and combined with the initial pose to estimate the current coordinates of the mobile robot, but this way of estimating the pose from a kinematic model inevitably produces errors due to wheel slip on the ground, load deformation of mechanical components and the like. Positioning based on visual information does not depend on the mobile robot's kinematic model, but it is often affected by the illumination conditions in the environment and its detection carries large random errors; the laser sensor, although independent of illumination and highly accurate, suffers from sparse sensed information.
Therefore, the patent seeks multi-source information fusion, and the accurate positioning of the unknown indoor environment is realized by combining the respective advantages of the multiple sensors. As shown in fig. 1, in this embodiment, the present application detects state information in real time through a coding disc, a vision sensor and a laser sensor configured by a mobile robot, calculates first state information based on a kinematic model, calculates second state information and third state information based on an observation model, and finally fuses the multi-source information based on a kalman filter algorithm, thereby realizing high-precision positioning of a wheeled mobile robot in an unknown indoor environment.
Further, the step A specifically includes the following steps:
Step A1: in the moving process of the mobile robot, the number of pulses output by the photoelectric coding disc mounted on the left driving wheel is N_L, the number of pulses output by the photoelectric coding disc mounted on the right driving wheel is N_R, the resolution of the photoelectric coding disc is P lines/rad, the radius of a driving wheel is r, and the moving distance of a wheel is ΔS; the displacement increment ΔS_L of the left driving wheel and the displacement increment ΔS_R of the right driving wheel are obtained respectively as:
Step A2: based on the moving distances ΔS_l and ΔS_r of the left and right driving wheels of the mobile robot and the distance d between the left and right driving wheels, the rotation angle of the mobile robot is obtained as Δθ = (ΔS_r - ΔS_l)/d, and the moving distance of the mobile robot is ΔS = (ΔS_l + ΔS_r)/2;
Step A3: based on the kinematic model of the mobile robot, the state X(k+1) = [x(k+1), y(k+1), θ(k+1)]^T of the mobile robot at time k+1 is obtained from the rotation angle Δθ and the moving distance ΔS per unit time together with the pose [x(k), y(k), θ(k)]^T of the mobile robot at the current time k.
Therefore, the pose of the mobile robot at the time T can be obtained by carrying out displacement accumulation on the initial pose of the mobile robot.
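As a concrete illustration of steps A1 to A3, the following is a minimal sketch of the wheel-odometry pose update for a differential-drive robot. It assumes the encoder pulse counts have already been converted into the per-wheel displacement increments ΔS_L and ΔS_R; the function and variable names are illustrative, and the midpoint integration used for x and y is one common discretization of the kinematic model rather than the exact formula behind the patent's equations.

```python
import math

def odometry_update(pose, delta_s_left, delta_s_right, wheel_base):
    """One wheel-odometry update for a differential-drive robot.

    pose: (x, y, theta) of the mobile robot at time k
    delta_s_left, delta_s_right: wheel displacement increments in meters
    wheel_base: distance d between the left and right driving wheels in meters
    Returns the pose at time k+1.
    """
    x, y, theta = pose
    delta_s = (delta_s_left + delta_s_right) / 2.0              # ΔS = (ΔS_l + ΔS_r) / 2
    delta_theta = (delta_s_right - delta_s_left) / wheel_base   # Δθ = (ΔS_r - ΔS_l) / d
    # Integrate along the heading at the midpoint of the rotation increment
    x += delta_s * math.cos(theta + delta_theta / 2.0)
    y += delta_s * math.sin(theta + delta_theta / 2.0)
    theta += delta_theta
    return (x, y, theta)

if __name__ == "__main__":
    pose = (0.0, 0.0, 0.0)
    # Accumulate displacement increments step by step, as in step A3
    for dl, dr in [(0.10, 0.12), (0.11, 0.11), (0.09, 0.12)]:
        pose = odometry_update(pose, dl, dr, wheel_base=0.35)
    print(pose)
```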
As shown in fig. 2, the application extracts the ORB feature points of two frames of pictures respectively, calculates the descriptors of each feature point, and performs coarse matching on the ORB feature point descriptors of the two frames of pictures, and because a large number of error matches exist in the coarse matching, the coarse matching results need to be screened, and the relative pose of the two frames of pictures is calculated by using the selected feature point pairs after screening. And then, calculating projection errors of the feature point pairs by using the calculated relative pose, screening the feature point pairs again according to the error, repeating the steps until the projection errors of all the rest feature point pairs are within the allowable range, and obtaining the final relative pose, thus completing one-time image matching.
Further, the ORB algorithm was first proposed by Rublee et al.; it improves the extraction of the classical FAST (Features from Accelerated Segment Test) corner points, the resulting feature points being referred to as Oriented FAST, and describes them with the BRIEF (Binary Robust Independent Elementary Features) descriptor. The step B specifically comprises the following steps:
Step B1: extracting ORB (Oriented FAST and Rotated BRIEF) feature points from each of the two frames of images through a judging function, which comprises: selecting candidate feature points, determining a circle with each candidate feature point as its center, and detecting the pixel gray values on the circle; if the difference between the gray values of enough pixels on the circle and the gray value of the candidate feature point is greater than or equal to a threshold, the candidate point is determined to be a feature point. The judging function is as follows:
wherein p is a candidate feature point, circle(p) is the circle established with the candidate feature point p as its center, x is a point on circle(p), G(x) is the gray value of a point on the circumference, G(p) is the gray value of the candidate feature point, ξ is the gray-value threshold, and E represents the difference between the gray values of the pixels on the circumference and the gray value of the candidate point; if E is greater than or equal to the threshold ξ, p is a feature point;
Step B2: acquire the BRIEF descriptor of each ORB feature point, use the descriptor to describe the feature attributes of the feature point, and perform coarse matching of the BRIEF descriptors of the two frames of images using the Hamming distance to obtain a coarse matching result. The Hamming distance is a measure for comparing two binary data strings: when two binary strings of equal length are compared, the Hamming distance is the number of bit positions at which the two strings differ.
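For reference, steps B1 and B2 correspond closely to what an off-the-shelf ORB detector and a Hamming-distance brute-force matcher already provide. The sketch below uses OpenCV for this purpose; it is not the patent's own implementation, and the image file names and the number of retained matches are placeholder choices.

```python
import cv2

# Load two consecutive grayscale frames (file names are placeholders)
img1 = cv2.imread("frame_k.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_k1.png", cv2.IMREAD_GRAYSCALE)

# Step B1: Oriented FAST corner detection; step B2: BRIEF descriptors
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Coarse matching by Hamming distance (cross-checking removes some false matches)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

# Keep the best matches for the screening and pose-estimation stages (steps B3-B5)
matches = sorted(matches, key=lambda m: m.distance)[:200]
print(f"{len(matches)} coarse matches retained")
```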
Step B3: screening the coarse matching result, and selecting a corresponding ORB characteristic point pair according to the screened coarse matching result to acquire the relative pose of the two frames of images; since there are a large number of false matches to the coarse match, the result of the coarse match needs to be filtered.
Step B4: according to the relative pose of the two frames of images, obtaining the projection error of the ORB characteristic points, and screening the ORB characteristic point pairs again according to the projection error;
step B5: and B4, repeatedly executing the step until the projection errors of all the remaining ORB characteristic point pairs are within the allowable range, and obtaining the final relative pose, thus completing one-time image matching.
Further describing, the step B2: the BRIEF descriptor for each ORB feature point is obtained, and the method specifically comprises the following steps:
Step B21: determine a circular neighborhood M with the candidate feature point as its center, select n feature point pairs in the neighborhood M, denote the selected feature point pairs as α_i and β_i (i = 1, 2, …, n), and obtain the following relationship:
wherein the BRIEF descriptor is a binary number whose bits take the values B_i;
Step B22: in order to make the characteristic point have rotation invariance, the gray centroid method is adopted to determine the direction of the characteristic point, which comprises the following steps:
Step B221: first define the intermediate variables m_00, m_10 and m_01 as:
wherein M is a circular neighborhood with the feature point p as a circle center and R as a radius;
step B222: acquiring gray centroid C of neighborhood M:
step B223: acquiring an included angle between the ray pC and a coordinate axis y:
θ = arctan(m_01 / m_10)   (8)
the direction of the ray pC is the direction of the feature point p.
If the image is rotated, the direction of the ray pC is rotated accordingly. Therefore, if a rectangular coordinate system is established by taking the ray pC as a coordinate axis when the BRIEF descriptor is acquired, the calculated BRIEF descriptor will be consistent regardless of the rotation of the image.
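The gray centroid computation of steps B221 to B223 can be written directly from the moments m_00, m_10 and m_01. The following NumPy sketch works on a square patch centered on the feature point and masks it to the circular neighborhood M; the patch size and the use of arctan2 (which resolves the quadrant that the plain arctan of equation (8) leaves ambiguous) are implementation choices, not details taken from the patent.

```python
import numpy as np

def gray_centroid_angle(patch):
    """Feature direction by the gray centroid method (steps B221-B223).

    patch: 2D array of gray values centered on the feature point p,
           with odd side length 2*R + 1 covering the neighborhood M.
    Returns the angle of the ray pC and the gray centroid C.
    """
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xs -= (w - 1) / 2.0                    # coordinates relative to p at the center
    ys -= (h - 1) / 2.0
    radius = (w - 1) / 2.0
    mask = xs**2 + ys**2 <= radius**2      # keep only the circular neighborhood M
    g = patch * mask
    m00 = g.sum()                          # m_00: sum of gray values
    m10 = (xs * g).sum()                   # m_10: x-weighted sum
    m01 = (ys * g).sum()                   # m_01: y-weighted sum
    centroid = (m10 / m00, m01 / m00)      # gray centroid C, equation (7)
    theta = np.arctan2(m01, m10)           # direction of ray pC, cf. equation (8)
    return theta, centroid

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patch = rng.integers(0, 256, size=(31, 31)).astype(float)
    print(gray_centroid_angle(patch))
```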
Further describing, the step B3: screening the coarse matching result, selecting a corresponding ORB characteristic Point pair according to the screened coarse matching result, and calculating the relative pose of two frames of images by adopting an EPnP (Efficient Perspective-n-Point) algorithm by utilizing the selected characteristic Point pair; the EPnP algorithm specifically comprises the following steps:
Step B31: based on the matched feature point sets {P_i, Q_i} (i = 1, 2, 3, …, n), obtain the centroids of the two sets of matched feature points separately:
step B32: calculating the barycenter removing coordinates of all the matching points:
step B33: according to the barycenter removing coordinates, a matrix H is obtained:
step B34: singular value decomposition is performed based on equation (11):
H = UΣV^T   (12)
wherein U is a unitary matrix, Σ is a positive semi-definite diagonal matrix, and V^T is the conjugate transpose of the unitary matrix V;
step B35: based on the formula (9) and the formula (12), acquiring a relative rotation matrix and a translation vector of a camera coordinate system corresponding to the two frames of images, wherein the relative rotation matrix and the translation vector are as follows:
wherein R denotes the rotation matrix and t the translation vector; the two are the observed quantities of the visual observation model.
After the feature point matching result between two continuous images of the mobile robot is obtained, the relative pose between the two images can be further solved. The application selects the EPnP (Efficient Perspective-n-Point) algorithm with higher speed to estimate the pose of the mobile robot.
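Steps B31 to B35 (equations (9) to (13)) are, in effect, the classical centroid-plus-SVD alignment of two matched point sets. The sketch below follows those formulas directly; it assumes the matched feature points P_i and Q_i are already available as 3D coordinates (for example, recovered with the depth camera), which is an assumption of this illustration rather than something stated in the passage above, and the reflection guard on det(R) is likewise a standard safeguard added here.

```python
import numpy as np

def align_svd(P, Q):
    """Relative rotation R and translation t such that Q ≈ R @ P + t.

    P, Q: (N, 3) arrays of matched feature points {P_i}, {Q_i}.
    Follows equations (9)-(13): centroids, de-centered coordinates,
    the matrix H, its SVD, then R and t.
    """
    p_c = P.mean(axis=0)                 # centroid of {P_i}, equation (9)
    q_c = Q.mean(axis=0)                 # centroid of {Q_i}
    P0 = P - p_c                         # de-centered coordinates, equation (10)
    Q0 = Q - q_c
    H = P0.T @ Q0                        # accumulation matrix H, equation (11)
    U, S, Vt = np.linalg.svd(H)          # H = U Σ V^T, equation (12)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = q_c - R @ p_c                    # translation recovered from the centroids
    return R, t

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    P = rng.normal(size=(50, 3))
    a = 0.3
    R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0, 0.0, 1.0]])
    Q = P @ R_true.T + np.array([0.5, -0.2, 0.1])
    R, t = align_svd(P, Q)
    print(np.allclose(R, R_true), t)
```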
Further described, the step C: the method comprises the steps of obtaining laser data through a mobile robot, matching the laser data of the current moment and the laser data of the next moment, and obtaining third state information of the mobile robot based on calculation of a laser observation model, wherein the method specifically comprises the following steps:
Step C1: scan matching is performed with the iterative closest point algorithm (Iterative Closest Point, ICP); first, the set of data points collected by the laser radar at time t is defined as {q_t} and the set of data points collected at time t+1 as {q_{t+1}}; then each point in the set {q_{t+1}} is coordinate-transformed and re-projected into the coordinate system XY_t, and the resulting set of projected points is recorded; the transformation relationship is as follows:
Step C2: then calculate the re-projection error e_{i,j}, i.e., the error between the i-th point of the set {q_t} and the j-th projected point, as given by the following formula:
In the projected point set, the point that minimizes the re-projection error e_{i,j} is the point closest to the i-th point of {q_t}, i.e., its matching point, as shown in the following formula:
Step C3: once the matching point is found, the re-projection error is then expressed in the following form:
Step C4: perform steps C1 through C3 for each point in the set {q_t}, find its matching point in the set {q_{t+1}}, and calculate the re-projection error between each pair of matching points; then accumulate the squares of the re-projection errors to construct a least-squares objective function, as shown in the following formula:
solving a rotation matrix R and a translation vector t which enable the objective function in the formula (18) to obtain the minimum value, wherein the rotation matrix R and the translation vector t are observables of the laser observation model.
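To make steps C1 to C4 concrete, the sketch below runs a few point-to-point ICP iterations on 2D laser points: re-project {q_{t+1}} with the current (R, t), take the nearest scan point as the matching point, and re-estimate (R, t) by minimizing the summed squared re-projection error with the same centroid-plus-SVD alignment shown above. The brute-force nearest-neighbor search and the fixed iteration count are simplifications for illustration, not details from the patent.

```python
import numpy as np

def svd_align_2d(A, B):
    """Least-squares R, t with B ≈ R @ A + t for matched 2D point sets."""
    a_c, b_c = A.mean(axis=0), B.mean(axis=0)
    H = (A - a_c).T @ (B - b_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, b_c - R @ a_c

def icp(q_t, q_t1, iterations=20):
    """Match scan {q_{t+1}} against scan {q_t}; returns R, t (steps C1-C4)."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iterations):
        proj = q_t1 @ R.T + t                    # step C1: re-project {q_{t+1}} into XY_t
        d = np.linalg.norm(q_t[:, None, :] - proj[None, :, :], axis=2)
        j = d.argmin(axis=1)                     # steps C2-C3: nearest point = matching point
        R, t = svd_align_2d(q_t1[j], q_t)        # step C4: minimize the summed squared error
    return R, t

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    scan_t = rng.uniform(-5.0, 5.0, size=(200, 2))
    a = 0.05
    R_true = np.array([[np.cos(a), -np.sin(a)],
                       [np.sin(a),  np.cos(a)]])
    scan_t1 = (scan_t - np.array([0.1, 0.05])) @ R_true   # the same scene, slightly moved
    print(icp(scan_t, scan_t1))
```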
Further, the Kalman Filter (KF) algorithm is a common algorithm for information fusion based on optimal estimation. It assumes that, for a control system, both the control equation and the observation equation are affected by white Gaussian noise, so that the actual state of the system deviates from the ideal state; the filtering process fuses and estimates the input signals according to the covariance matrices of the noise and outputs an optimal estimate. Since the Kalman filtering algorithm is only suitable for linear systems, while the kinematic model and the observation models of the mobile robot in this design are nonlinear, an extended Kalman filtering algorithm (Extended Kalman Filter, EKF) is needed. The EKF is the extension of Kalman filtering to nonlinear systems: the nonlinear functions are expanded in a Taylor series and only the first-order term, i.e., the linear part of the Taylor expansion, is retained, thereby linearizing the nonlinear system.
Further to the description, in step D, the kinematic model may be expressed as:
wherein x_{k+1} denotes the state variable at time k+1, including position and velocity, x_k denotes the state variable at time k, u_{k+1} is the control input at time k+1, f(·) denotes the functional relation between the state variable x_{k+1} at time k+1 and the state variable x_k at time k, and w_{k+1} denotes the motion noise at time k+1;
the visual observation model and the laser observation model are uniformly expressed as:
z_{k+1} = h(x_k, x_{k+1}) + v_{k+1}   (20)
wherein z_{k+1} denotes the system observation at time k+1, h(·) denotes the functional relation between the state variables x_k and x_{k+1} and the observation z_{k+1}, and v_{k+1} denotes the observation noise at time k+1.
Further described, the step D: and carrying out data fusion by adopting an extended Kalman filtering (Extended Kalman Filter, EKF) algorithm through the first state information acquired based on the kinematic model, the second state information and the third state information acquired based on the observation model to obtain the real-time state information of the mobile robot, wherein the specific steps of a single filtering cycle in the extended Kalman filtering algorithm are as follows:
Step D1: predict the prior estimate of the state vector:
Step D2: expand the motion model into a linear equation using a first-order Taylor expansion:
wherein the first quantity denotes the posterior estimate of the state vector at time k and the second denotes the Jacobian matrix of the motion model, i.e., the derivative of the function f(·) with respect to x_k evaluated at the posterior state estimate and u_{k+1}:
Step D3: predict the covariance matrix of the prior estimate of the state vector:
wherein Q_{k+1} is the corresponding mathematical expectation, i.e., the covariance matrix of the motion noise;
Step D4: expand the observation model into a linear equation using a first-order Taylor expansion:
wherein the first quantity denotes the prior estimate of the state vector at time k+1 and the second denotes the Jacobian matrix of the observation model, i.e., the derivative of the function h(·) with respect to x_{k+1} evaluated at the prior estimate:
Step D5: based on the predicted prior covariance matrix, the computed observation Jacobian and the matrix R_{k+1}, update and calculate the Kalman gain matrix:
step D6: updating the state variable with the measured value to obtain a posterior estimated value of the state variable:
step D7: updating the covariance matrix to be the covariance matrix of the state variable posterior estimation value:
wherein I is an identity matrix with the same dimension as the covariance matrix;
The second state information and the third state information, acquired by the visual observation model and the laser observation model respectively, each take part in this filtering cycle to be fused with the first state information. Steps D1 to D7 are repeated in this way, so that the latest state of the mobile robot is continuously updated and obtained.
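To tie steps D1 to D7 together, the following sketch implements one generic EKF predict/update pair and then applies the update twice, once with a "visual" measurement and once with a "laser" measurement, mirroring how the second and third state information each join the fusion with the first. The planar state, the models and the noise covariances are illustrative choices made for this example, not values taken from the patent.

```python
import numpy as np

def ekf_predict(x, P, f, F_jac, u, Q):
    """Steps D1-D3: propagate the prior state estimate and its covariance."""
    x_prior = f(x, u)                        # D1: prior estimate of the state vector
    F = F_jac(x, u)                          # D2: Jacobian of the motion model
    P_prior = F @ P @ F.T + Q                # D3: prior covariance
    return x_prior, P_prior

def ekf_update(x_prior, P_prior, z, h, H_jac, R):
    """Steps D4-D7: fuse one observation into the state estimate."""
    H = H_jac(x_prior)                       # D4: Jacobian of the observation model
    S = H @ P_prior @ H.T + R
    K = P_prior @ H.T @ np.linalg.inv(S)     # D5: Kalman gain matrix
    x_post = x_prior + K @ (z - h(x_prior))  # D6: posterior estimate of the state
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior   # D7: posterior covariance
    return x_post, P_post

# Illustrative planar model: state [x, y, theta], control [ΔS, Δθ] from the wheel odometry
def f(x, u):
    ds, dth = u
    return np.array([x[0] + ds * np.cos(x[2] + dth / 2),
                     x[1] + ds * np.sin(x[2] + dth / 2),
                     x[2] + dth])

def F_jac(x, u):
    ds, dth = u
    c, s = np.cos(x[2] + dth / 2), np.sin(x[2] + dth / 2)
    return np.array([[1.0, 0.0, -ds * s],
                     [0.0, 1.0,  ds * c],
                     [0.0, 0.0,  1.0]])

h = lambda x: x                 # both "sensors" are assumed to observe the full pose here
H_jac = lambda x: np.eye(3)

x, P = np.zeros(3), np.eye(3) * 1e-3
Q = np.diag([1e-4, 1e-4, 1e-5])             # motion noise covariance (illustrative)
R_vis = np.diag([4e-3, 4e-3, 1e-3])         # visual observation noise (illustrative)
R_laser = np.diag([1e-3, 1e-3, 5e-4])       # laser observation noise (illustrative)

x, P = ekf_predict(x, P, f, F_jac, u=np.array([0.1, 0.02]), Q=Q)              # first state information
x, P = ekf_update(x, P, np.array([0.101, 0.001, 0.021]), h, H_jac, R_vis)     # second (vision)
x, P = ekf_update(x, P, np.array([0.099, 0.000, 0.019]), h, H_jac, R_laser)   # third (laser)
print(x)
```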
The mobile robot positioning system suitable for the unknown indoor environment adopts the mobile robot positioning method suitable for the unknown indoor environment, the system comprises mobile robot hardware and positioning software, and the mobile robot hardware comprises a power supply component, a main control component, a motion component, a perception component and a display component.
As shown in fig. 3, in this embodiment, the power supply component provides 24v or 12v dc power for other hardware of the system, the sensing component obtains environmental information, the motion component drives the mobile robot to move while monitoring the motion state, and the main control component receives the information of the sensing component and the motion component to perform analysis processing and send a control command to the motion component.
Further, the main control component consists of an industrial personal computer and is used to centrally analyze and process the information from the motion component and the sensing component, to adjust and send control instructions, and to control the motion of the mobile robot. The main control component is an industrial control computer of the North China Industrial Control brand. The CPU is an i7-7700 processor, the operating system is Microsoft Windows 7 Professional, and the memory is dual-channel 8 GB DDR3. The industrial personal computer offers excellent performance in a compact enclosure, is dust-proof, resistant to radio interference, rugged and durable, and can well undertake the various control tasks of the mobile robot.
In the perception component, the depth camera is connected to the industrial personal computer through a USB interface. The depth camera is a Kinect camera with depth-sensing capability: the Kinect consists of an RGB camera, a depth sensor and a microphone. The depth sensor uses an infrared grating ranging technique: it emits laser light through an infrared emitter and uses a grating in front of the emitter lens to scatter the light uniformly into the real environment ahead; the infrared camera then captures the light spots projected into the environment and calculates depth information from them. The visual range of the Kinect and its data streams are: color image 640x480, 32 bit, 30 fps; depth image 320x240, 16 bit, 30 fps. The laser radar is an LM11x-10100 series sensor from the German company SICK. The detection angle range of this sensor is -45 to 225 degrees, the maximum measurable distance is 20 m, and the minimum angular resolution can be set to 0.25 degrees. The laser sensor features a wide measurement field, high spatial resolution, long measurement range, high speed resolution and strong anti-interference capability; it is connected to the industrial personal computer through an Ethernet port and transmits information over a full-duplex TCP/IP protocol.
In the motion assembly, the driver, the DC motor and the photoelectric encoder form a closed-loop unit. The DC motor is a brushless DC motor from the Shenzhen Xinbang company, model XB3000. The motor's input voltage is 24 V, its power is up to 200 W, and a grating encoder is mounted on top of the motor to detect parameters such as its real-time speed. Correspondingly, the application adopts an Elmo digital servo driver to drive the motor. The driver's input voltage is 24 V DC and it provides 5 different motor control modes; it can be connected to a computer through a CAN bus and be controlled and tuned with software, and the speed control mode is used according to the requirements of kinematic control.
The display assembly comprises a small-size battery power display screen and a 17-inch touch display, and the battery power display screen can display the battery power in real time. The 17-inch touch display can display the conditions of various parameters and variables of the system on one hand and can operate the system software by receiving touch commands on the other hand.
Further described, the positioning software includes a code wheel positioning module, a visual positioning module, a laser positioning module, and an extended Kalman filtering fusion module.
The coding disc positioning module is used for acquiring motor displacement motion information through a photoelectric coding disc and calculating and acquiring first state information of the mobile robot by combining a kinematic model;
the visual positioning module is used for acquiring images through the depth camera, reading image information of two continuous frames of images, and acquiring second state information of the mobile robot based on observation model calculation; wherein the image information includes a rotation matrix and a translation vector;
the laser positioning module is used for acquiring laser data through a laser radar, matching the laser data of the current moment and the next moment, and acquiring third state information of the mobile robot based on the observation model;
the extended Kalman filtering fusion module is used for acquiring the first state information, the second state information and the third state information, and performing data fusion by adopting an extended Kalman filtering algorithm to acquire real-time state information of the mobile robot.
The application realizes high-precision positioning of the wheel type mobile robot in an unknown indoor environment by fusing various sensor information.
The technical principle of the present application is described above in connection with the specific embodiments. The description is made for the purpose of illustrating the general principles of the application and should not be taken in any way as limiting the scope of the application. Other embodiments of the application will occur to those skilled in the art from consideration of this specification without the exercise of inventive faculty, and such equivalent modifications and alternatives are intended to be included within the scope of the application as defined in the claims.

Claims (4)

1. The mobile robot positioning method suitable for the unknown indoor environment is characterized by comprising the following steps of:
step A: acquiring motor displacement motion information through a mobile robot, and calculating and acquiring first state information of the mobile robot by combining a kinematic model; the method specifically comprises the following steps:
Step A1: in the moving process of the mobile robot, the number of pulses output by the photoelectric coding disc mounted on the left driving wheel is N_L, the number of pulses output by the photoelectric coding disc mounted on the right driving wheel is N_R, the resolution of the photoelectric coding disc is P lines/rad, the radius of a driving wheel is r, and the moving distance of a wheel is ΔS; the displacement increment ΔS_L of the left driving wheel and the displacement increment ΔS_R of the right driving wheel are obtained respectively as:
Step A2: based on the moving distances ΔS_l and ΔS_r of the left and right driving wheels of the mobile robot and the distance d between the left and right driving wheels, the rotation angle of the mobile robot is obtained as Δθ = (ΔS_r - ΔS_l)/d, and the moving distance of the mobile robot is ΔS = (ΔS_l + ΔS_r)/2;
Step A3: based on the kinematic model of the mobile robot, the state X(k+1) = [x(k+1), y(k+1), θ(k+1)]^T of the mobile robot at time k+1 is obtained from the rotation angle Δθ and the moving distance ΔS per unit time together with the pose [x(k), y(k), θ(k)]^T of the mobile robot at the current time k;
Therefore, the position and the posture of the mobile robot at any moment can be obtained by carrying out displacement accumulation on the initial position and the posture of the mobile robot;
and (B) step (B): acquiring images through a mobile robot, reading image information of two continuous frames of images for matching analysis, and calculating and acquiring second state information of the mobile robot based on a visual observation model; the method specifically comprises the following steps:
step B1: and respectively extracting ORB characteristic points of the two frames of images through judging functions, wherein the judging functions are as follows:
wherein p is a candidate feature point, circle(p) is the circle established with the candidate feature point p as its center, x is a point on circle(p), G(x) is the gray value of a point on the circumference, G(p) is the gray value of the candidate feature point, ξ is the gray-value threshold, and E represents the difference between the gray values of the pixels on the circumference and the gray value of the candidate point; if E is greater than or equal to the threshold ξ, p is a feature point;
step B2: acquiring BRIEF descriptors of each ORB characteristic point, describing characteristic attributes of the characteristic points by using the BRIEF descriptors, and performing coarse matching on the BRIEF descriptors of the ORB characteristic points in the two frames of pictures by using a minimum Hamming distance matching method to obtain a coarse matching result;
step B3: screening the coarse matching result, selecting a corresponding ORB characteristic point pair according to the screened coarse matching result, and calculating the relative pose of two frames of images by using the EPnP algorithm by using the selected characteristic point pair;
step B4: according to the relative pose of the two frames of images, obtaining the projection error of the ORB characteristic points, and screening the ORB characteristic point pairs again according to the projection error;
step B5: repeatedly executing the step B4 until the projection errors of all the remaining ORB feature point pairs are within the allowable range, and obtaining the final relative pose, so that one-time image matching is completed;
step C: acquiring laser data through a mobile robot, matching the laser data at the current moment and the next moment, and acquiring third state information of the mobile robot based on calculation of a laser observation model; the method specifically comprises the following steps:
Step C1: scan matching is performed with the iterative closest point algorithm; first, the set of data points collected by the laser radar at time t is defined as {q_t} and the set of data points collected at time t+1 as {q_{t+1}}; then each point in the set {q_{t+1}} is coordinate-transformed and re-projected into the coordinate system XY_t, and the resulting set of projected points is recorded; the transformation relationship is as follows:
Step C2: then calculate the re-projection error e_{i,j}, i.e., the error between the i-th point of the set {q_t} and the j-th projected point, as given by the following formula:
In the projected point set, the point that minimizes the re-projection error e_{i,j} is the point closest to the i-th point of {q_t}, i.e., its matching point, as shown in the following formula:
Step C3: once the matching point is found, the re-projection error is then expressed in the following form:
Step C4: perform steps C1 through C3 for each point in the set {q_t}, find its matching point in the set {q_{t+1}}, and calculate the re-projection error between each pair of matching points; then accumulate the squares of the re-projection errors to construct a least-squares objective function, as shown in the following formula:
solving a rotation matrix R and a translation vector t which enable an objective function in a formula (18) to obtain a minimum value, wherein the rotation matrix R and the translation vector t are observables of a laser observation model;
step D: performing data fusion by adopting an extended Kalman filtering algorithm through the first state information acquired based on the kinematic model, the second state information acquired based on the visual observation model and the third state information acquired based on the laser observation model to acquire real-time state information of the mobile robot;
wherein the kinematic model may be expressed as:
wherein x_{k+1} denotes the state variable at time k+1, including position and velocity, x_k denotes the state variable at time k, u_{k+1} is the control input at time k+1, f(·) denotes the functional relation between the state variable x_{k+1} at time k+1 and the state variable x_k at time k, and w_{k+1} denotes the motion noise at time k+1;
the visual observation model and the laser observation model are uniformly expressed as:
z_{k+1} = h(x_k, x_{k+1}) + v_{k+1}   (20)
wherein z_{k+1} denotes the system observation at time k+1, h(·) denotes the functional relation between the state variables x_k and x_{k+1} and the observation z_{k+1}, and v_{k+1} denotes the observation noise at time k+1;
the specific steps of a single filtering cycle in the extended Kalman filtering algorithm are as follows:
Step D1: predict the prior estimate of the state vector:
Step D2: expand the motion model into a linear equation using a first-order Taylor expansion:
wherein the first quantity denotes the posterior estimate of the state vector at time k and the second denotes the Jacobian matrix of the motion model, i.e., the derivative of the function f(·) with respect to x_k evaluated at the posterior state estimate and u_{k+1}:
Step D3: predict the covariance matrix of the prior estimate of the state vector:
wherein Q_{k+1} is the corresponding mathematical expectation, i.e., the covariance matrix of the motion noise;
Step D4: expand the observation model into a linear equation using a first-order Taylor expansion:
wherein the first quantity denotes the prior estimate of the state vector at time k+1 and the second denotes the Jacobian matrix of the observation model, i.e., the derivative of the function h(·) with respect to x_{k+1} evaluated at the prior estimate:
Step D5: based on the predicted prior covariance matrix, the computed observation Jacobian and the matrix R_{k+1}, update and calculate the Kalman gain matrix:
step D6: updating the state variable with the measured value to obtain a posterior estimated value of the state variable:
step D7: updating the covariance matrix to be the covariance matrix of the state variable posterior estimation value:
wherein I is an identity matrix with the same dimension as the covariance matrix;
the second state information and the third state information, acquired by the visual observation model and the laser observation model respectively, each take part in this filtering cycle to be fused with the first state information;
steps D1 to D7 are repeated in this way, so that the latest state of the mobile robot is updated and obtained.
2. The mobile robot positioning method for unknown indoor environments according to claim 1, wherein said step B2: the BRIEF descriptor for each ORB feature point is obtained, and the method specifically comprises the following steps:
Step B21: determine a circular neighborhood M with the candidate feature point p as its center, select n feature point pairs in the neighborhood M, denote the selected feature point pairs as α_i and β_i (i = 1, 2, …, n), and obtain the following relationship:
wherein the BRIEF descriptor is a binary number whose bits take the values B_i;
Step B22: determining the direction of the characteristic point by adopting a gray centroid method comprises the following steps:
Step B221: first define the intermediate variables m_00, m_10 and m_01 as:
wherein M is a circular neighborhood with the feature point p as a circle center and R as a radius;
step B222: acquiring gray centroid C of neighborhood M:
step B223: acquiring an included angle between the ray pC and a coordinate axis y:
θ = arctan(m_01 / m_10)   (8)
the direction of the ray pC is the direction of the feature point p.
3. The mobile robot positioning method for unknown indoor environments according to claim 2, wherein said step B3: screening the coarse matching result, selecting a corresponding ORB characteristic point pair according to the screened coarse matching result, and calculating the relative pose of two frames of images by using the EPnP algorithm by using the selected characteristic point pair; the EPnP algorithm specifically comprises the following steps:
Step B31: based on the matched feature point sets {P_i, Q_i} (i = 1, 2, 3, …, n), obtain the centroids of the two sets of matched feature points separately:
step B32: calculating the barycenter removing coordinates of all the matching points:
step B33: according to the barycenter removing coordinates, a matrix H is obtained:
step B34: singular value decomposition is performed on equation (11):
H = UΣV^T   (12)
wherein U is a unitary matrix, Σ is a positive semi-definite diagonal matrix, and V^T is the conjugate transpose of the unitary matrix V;
step B35: based on the formula (9) and the formula (12), acquiring a relative rotation matrix and a translation vector of a camera coordinate system corresponding to the two frames of images, wherein the relative rotation matrix and the translation vector are as follows:
wherein R denotes the rotation matrix and t the translation vector; the two are the observed quantities of the visual observation model.
4. A mobile robot positioning system suitable for an unknown indoor environment, characterized in that a mobile robot positioning method suitable for an unknown indoor environment is adopted, the system comprises mobile robot hardware and positioning software, the mobile robot hardware comprises a power supply component, a main control component, a motion component, a perception component and a display component, and the positioning software comprises a coding disc positioning module, a visual positioning module, a laser positioning module and an extended Kalman filtering fusion module;
the coding disc positioning module is used for acquiring motor displacement motion information through a photoelectric coding disc and calculating and acquiring first state information of the mobile robot by combining a kinematic model;
the visual positioning module is used for acquiring images through the depth camera, reading image information of two continuous frames of images, and acquiring second state information of the mobile robot based on observation model calculation;
the laser positioning module is used for acquiring laser data through a laser radar, matching the laser data of the current moment and the next moment, and acquiring third state information of the mobile robot based on the observation model;
the extended Kalman filtering fusion module is used for acquiring the first state information, the second state information and the third state information, and performing data fusion by adopting an extended Kalman filtering algorithm to acquire real-time state information of the mobile robot.
CN202211675418.3A 2022-12-26 2022-12-26 Mobile robot positioning method and system suitable for unknown indoor environment Active CN116026335B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211675418.3A CN116026335B (en) 2022-12-26 2022-12-26 Mobile robot positioning method and system suitable for unknown indoor environment

Publications (2)

Publication Number Publication Date
CN116026335A CN116026335A (en) 2023-04-28
CN116026335B true CN116026335B (en) 2023-10-03

Family

ID=86080502

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211675418.3A Active CN116026335B (en) 2022-12-26 2022-12-26 Mobile robot positioning method and system suitable for unknown indoor environment

Country Status (1)

Country Link
CN (1) CN116026335B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113175925A (en) * 2021-04-14 2021-07-27 武汉理工大学 Positioning and navigation system and method
CN113280808A (en) * 2021-05-25 2021-08-20 上海大学 Method and system for improving positioning accuracy of mobile robot
CN113706626A (en) * 2021-07-30 2021-11-26 西安交通大学 Positioning and mapping method based on multi-sensor fusion and two-dimensional code correction
CN113985465A (en) * 2021-10-28 2022-01-28 江西省智能产业技术创新研究院 Sensor fusion positioning method and system, readable storage medium and computer equipment
CN115015956A (en) * 2022-04-12 2022-09-06 南京邮电大学 Laser and vision SLAM system of indoor unmanned vehicle

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111258313B (en) * 2020-01-20 2022-06-07 深圳市普渡科技有限公司 Multi-sensor fusion SLAM system and robot


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on optimization algorithms for robot pose estimation in the SLAM process; 禹鑫燚, 朱熠琛, 詹益安, 欧林林; High Technology Letters (高技术通讯); Vol. 28, No. 8; 712-718 *

Also Published As

Publication number Publication date
CN116026335A (en) 2023-04-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant