CN118031951A - Multisource fusion positioning method, multisource fusion positioning device, electronic equipment and storage medium

Info

Publication number: CN118031951A
Application number: CN202311677109.4A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Prior art keywords: pose, target, information, vector, point cloud
Inventors: 彭小彬, 陈智超, 闵伟, 陈羽雨, 戴新宇, 潘翔宇
Current assignee: Shangfei Intelligent Technology Co ltd
Original assignee: Shangfei Intelligent Technology Co ltd
Application filed by Shangfei Intelligent Technology Co ltd

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides a multisource fusion positioning method, a multisource fusion positioning device, electronic equipment and a storage medium, and relates to the technical field of positioning. The multisource fusion positioning method comprises the following steps: acquiring target environment information corresponding to a robot at a target moment and a previous state vector of a previous moment, wherein the target environment information at least comprises synchronously acquired first pose information corresponding to a wheel type odometer, first inertial navigation information corresponding to an inertial measurement unit (IMU), and second pose information corresponding to a laser radar, the second pose information being determined by a laser radar odometer performing point cloud inter-frame alignment on point cloud data acquired by the laser radar based on the first inertial navigation information; determining a pose observation vector at the target moment based on the first pose information, the first inertial navigation information and the second pose information; and inputting the pose observation vector and the previous state vector into a nonlinear filtering model, and outputting a target positioning vector corresponding to the robot at the target moment. The invention can improve positioning precision and positioning reliability.

Description

Multisource fusion positioning method, multisource fusion positioning device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of positioning technologies, and in particular, to a multi-source fusion positioning method, a multi-source fusion positioning device, an electronic device, and a storage medium.
Background
In mobile robot positioning, a single sensor is easily affected by the actual scene, so the accuracy and reliability of positioning based on data acquired by a single sensor are low. For example, in autonomous driving, obstructions in the environment such as high-rise buildings and forests prevent a single sensor from effectively acquiring positioning information. As another example, in extreme weather such as haze, or in harsh industrial scenes, a single sensor such as a vision sensor or a laser radar works abnormally or fails, so that the robot fails to be positioned. Therefore, how to improve positioning accuracy during robot movement is a problem that needs to be solved at present.
Disclosure of Invention
The invention provides a multisource fusion positioning method, a multisource fusion positioning device, electronic equipment and a storage medium, which are used for overcoming the defect in the prior art of low positioning accuracy when only a single sensor is used.
The invention provides a multisource fusion positioning method, which comprises the following steps:
Acquiring target environment information corresponding to a robot at a target moment and a previous state vector of a previous moment, wherein the target environment information at least comprises: synchronously acquired first pose information corresponding to the wheel type odometer, first inertial navigation information corresponding to the inertial measurement unit IMU, and second pose information corresponding to the laser radar; the second pose information is determined by the laser radar odometer performing point cloud inter-frame alignment on the point cloud data acquired by the laser radar based on the first inertial navigation information;
Determining a pose observation vector at a target moment based on the first pose information, the first inertial navigation information and the second pose information;
And inputting the pose observation vector and the previous state vector into a nonlinear filtering model, and outputting a target positioning vector corresponding to the robot at the target moment, wherein the nonlinear filtering model is constructed based on an extended Kalman filter.
According to the multisource fusion positioning method provided by the invention, the second pose information is determined based on the following steps:
acquiring at least two frames of point cloud data acquired by a target moment laser radar;
Determining relative motion information between the point cloud data of each frame based on the first inertial navigation information;
Based on the relative motion information, carrying out inter-frame alignment on the point cloud data of each frame to obtain aligned point cloud data;
superposing the alignment point cloud data of each frame to determine correction point cloud data;
and inputting the correction point cloud data into the laser radar odometer, and outputting second pose information corresponding to the laser radar.
According to the multi-source fusion positioning method provided by the invention, the steps of inputting the correction point cloud data into the laser radar odometer and outputting the second pose information corresponding to the laser radar include:
Respectively extracting the surface characteristics corresponding to the correction point cloud data and the previous point cloud data at the previous moment;
Determining pose change information based on the difference value of the surface characteristics corresponding to the correction point cloud data and the previous point cloud data at the previous moment;
And outputting second pose information corresponding to the laser radar based on the pose change information and the previous pose information.
According to the multi-source fusion positioning method provided by the invention, the nonlinear filtering model comprises a nonlinear state equation and a nonlinear observation equation;
inputting the pose observation vector and the previous state vector into a nonlinear filtering model, and outputting a target positioning vector corresponding to the robot at the target moment, wherein the method comprises the following steps:
performing a first-order Taylor expansion on the nonlinear state equation to obtain a linear state equation;
performing a first-order Taylor expansion on the nonlinear observation equation to obtain a linear observation equation;
And determining a target positioning vector corresponding to the robot at a target moment based on the linear state equation, the linear observation equation, the pose observation vector and the previous state vector.
According to the multi-source fusion positioning method provided by the invention, the determining the target positioning vector corresponding to the robot at the target moment based on the linear state equation, the linear observation equation, the pose observation vector and the previous state vector comprises the following steps:
determining a predicted state vector at a target time based on the system model and the previous state vector;
Determining a prediction covariance matrix based on a previous covariance matrix at a previous moment, a process noise covariance matrix and the linear state equation;
determining an observation matrix based on the linear observation equation;
determining a Kalman gain based on an observation noise covariance matrix, the observation matrix, and the prediction covariance matrix;
And determining a target positioning vector corresponding to the robot at a target moment based on the Kalman gain, the pose observation vector and the prediction state vector.
According to the multi-source fusion positioning method provided by the invention, the acquisition of the target environment information corresponding to the target moment robot comprises the following steps:
Acquiring initial environment information corresponding to the robot, wherein the initial environment information comprises: the third pose information corresponding to the wheel type odometer, the second inertial navigation information corresponding to the IMU and the fourth pose information corresponding to the laser radar;
Respectively determining a first time stamp corresponding to the third pose information, a second time stamp corresponding to the second inertial navigation information and a third time stamp corresponding to the fourth pose information in the initial environment information;
Determining three timestamp differences based on differences of any two of the first timestamp, the second timestamp and the third timestamp;
And determining the target environment information corresponding to the robot at the target moment based on the comparison result of the preset threshold value and the time stamp difference value.
According to the multi-source fusion positioning method provided by the invention, the determining of the target environment information corresponding to the robot at the target moment based on the comparison result of the preset threshold value and each timestamp difference value comprises the following steps:
Determining the latest time stamp in the first time stamp, the second time stamp and the third time stamp as the target time under the condition that the time stamp difference value is smaller than the preset threshold value; and determining the third pose information as first pose information of the target moment, the second inertial navigation information as first inertial navigation information of the target moment, and the fourth pose information as second pose information of the target moment;
And deleting the third pose information, the second inertial navigation information and the fourth pose information and collecting the initial environment information of the robot again under the condition that at least one timestamp difference value in the timestamp difference values is larger than or equal to the preset threshold value.
The invention also provides a multisource fusion positioning device, which comprises:
The acquisition module is used for acquiring target environment information corresponding to the robot at a target moment and a previous state vector of a previous moment, and the target environment information at least comprises: synchronously acquired first pose information corresponding to the wheel type odometer, first inertial navigation information corresponding to the inertial measurement unit IMU, and second pose information corresponding to the laser radar; the second pose information is determined by the laser radar odometer performing point cloud inter-frame alignment on the point cloud data acquired by the laser radar based on the first inertial navigation information;
The determining module is used for determining a pose observation vector at a target moment based on the first pose information, the first inertial navigation information and the second pose information;
And the positioning module is used for inputting the pose observation vector and the previous state vector into a nonlinear filtering model, outputting a target positioning vector corresponding to the robot at the target moment, and constructing the nonlinear filtering model based on an extended Kalman filter.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the multi-source fusion positioning method according to any one of the above when executing the program.
The invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a multi-source fusion positioning method as described in any of the above.
According to the multisource fusion positioning method, the multisource fusion positioning device, the electronic equipment and the storage medium provided by the invention, at least the synchronously acquired first pose information corresponding to the wheel type odometer, the first inertial navigation information corresponding to the IMU, the second pose information corresponding to the laser radar and the previous state vector of the previous moment are acquired. Point cloud inter-frame alignment is performed on the point cloud data acquired by the laser radar through the first inertial navigation information to obtain the second pose information, which increases the density of the point cloud data and the accuracy of the second pose information. A pose observation vector is then determined according to the first pose information, the first inertial navigation information and the second pose information, and a nonlinear filtering model constructed from an extended Kalman filter continuously updates the target positioning vector according to the difference between the pose observation vector and the previous state vector, so that the nonlinear filtering model is closer to the practical application scene when realizing the data fusion of the multiple sensors, thereby further improving the positioning accuracy and the positioning reliability.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a multi-source fusion positioning method according to an embodiment of the present invention;
FIG. 2 is a second flow chart of a multi-source fusion positioning method according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a multi-source fusion positioning device according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Aiming at the problem of low positioning accuracy when using a single sensor in the prior art, an embodiment of the present invention provides a multi-source fusion positioning method. Fig. 1 is one of the flow diagrams of the multi-source fusion positioning method provided by the embodiment of the present invention; as shown in fig. 1, the method includes:
step 110, obtaining target environment information corresponding to the robot at a target moment and a previous state vector of a previous moment, wherein the target environment information at least comprises: synchronously acquired first pose information corresponding to the wheel type odometer, first inertial navigation information corresponding to the inertial measurement unit IMU, and second pose information corresponding to the laser radar; the second pose information is determined by the laser radar odometer performing point cloud inter-frame alignment on the point cloud data acquired by the laser radar based on the first inertial navigation information.
Specifically, the wheel type odometer is arranged on a moving wheel of the robot, and the change of the distance and the azimuth angle of the moving wheel relative to the ground can be calculated according to the change of the pulse in the sampling period, so that the first pose information of the robot is determined. The IMU (Inertial Measurement Unit ) mainly comprises a triaxial accelerometer and a triaxial gyroscope, wherein the triaxial accelerometer is used for detecting acceleration of the robot in triaxial directions respectively, the triaxial gyroscope is used for detecting angular velocity of the robot in triaxial directions respectively, and according to the acceleration and the angular velocity, first inertial navigation information corresponding to the IMU can be determined. The laser radar is used for collecting point cloud data in the environment where the robot is located, the point cloud data represent three-dimensional coordinate information of environmental elements such as buildings, trees or other obstacles in the environment, and the spatial position, shape, relative position relation and the like of the environmental elements can be determined through the three-dimensional coordinate information. After the point cloud data is determined, point cloud frame alignment can be performed on the point cloud data through the first inertial navigation information, namely, the point cloud data is overlapped and spliced by adopting the first inertial navigation information as pose compensation, so that the density and the precision of the point cloud data are increased, and the problems of sparseness and large data errors of the point cloud data are solved. The lidar may be a solid state lidar, as embodiments of the invention are not limited in this regard.
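As a rough illustration of the wheel type odometer calculation described above, the following sketch converts encoder pulse counts over one sampling period into a planar pose increment for a differential-drive robot. The wheel radius, encoder resolution, track width and function name are illustrative assumptions and are not values taken from this patent.

```python
import math

# Hypothetical encoder/geometry parameters (not specified in the patent).
TICKS_PER_REV = 4096      # encoder pulses per wheel revolution
WHEEL_RADIUS = 0.08       # metres
TRACK_WIDTH = 0.40        # distance between left and right wheels, metres

def wheel_odometry_step(pose, left_ticks, right_ticks):
    """Integrate one sampling period of encoder pulses into (x, y, yaw)."""
    x, y, yaw = pose
    # Convert pulse counts to the travelled arc length of each wheel.
    dl = 2.0 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    dr = 2.0 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
    ds = 0.5 * (dl + dr)                 # distance change of the robot body
    dyaw = (dr - dl) / TRACK_WIDTH       # azimuth (heading) change
    # Dead-reckoning update giving the first pose information.
    x += ds * math.cos(yaw + 0.5 * dyaw)
    y += ds * math.sin(yaw + 0.5 * dyaw)
    yaw += dyaw
    return (x, y, yaw)

pose = (0.0, 0.0, 0.0)
pose = wheel_odometry_step(pose, left_ticks=120, right_ticks=118)
```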
Further, the second pose information is determined based on the steps of:
acquiring at least two frames of point cloud data acquired by a target moment laser radar;
Determining relative motion information between the point cloud data of each frame based on the first inertial navigation information;
Based on the relative motion information, carrying out inter-frame alignment on the point cloud data of each frame to obtain aligned point cloud data;
superposing the alignment point cloud data of each frame to determine correction point cloud data;
and inputting the correction point cloud data into the laser radar odometer, and outputting second pose information corresponding to the laser radar.
Specifically, after the lidar collects at least two frames of point cloud data and the IMU collects first inertial navigation information, preprocessing operations such as denoising and filtering can be performed on each frame of point cloud data, preprocessing operations such as removing gravitational acceleration and calibration are performed on the first inertial navigation information, and in addition, acceleration in the three-axis direction and angular velocity in the three-axis direction in the first inertial navigation information respectively belong to different coordinate systems, namely, acceleration in the three-axis direction is determined based on a carrier coordinate system fixedly connected with a robot, and angular velocity in the three-axis direction is determined based on a navigation coordinate system taking the center of the earth as an origin, so that coordinate system conversion is performed on the first inertial navigation information, and the first inertial navigation information is converted into the same coordinate system. After preprocessing the point cloud data of each frame and the first inertial navigation information, estimating the relative motion information between the point cloud data of each frame according to the first inertial navigation information, and aligning the point cloud data between frames according to the relative motion information, namely ensuring that the frame intervals between two adjacent frames of point cloud data are the same, further obtaining aligned point cloud data, and eliminating offset between the point cloud frames caused by motion. After the alignment point cloud data of each frame are determined, the alignment point cloud data of each frame are overlapped and spliced, so that motion compensation of the alignment point cloud data of each frame is realized, for example, the position, normal vector and other attributes of each point in the alignment point cloud data of each frame are adjusted, the alignment point cloud data of each frame is kept stable in the motion process of the robot, the problems that the point cloud data of each frame is sparse, and the accuracy of determining second pose information according to the correction point cloud data is low are solved.
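The inter-frame alignment and superposition described above can be sketched as follows, assuming the relative motion between consecutive frames has already been estimated from the first inertial navigation information and, for brevity, reducing the motion to a planar translation plus yaw. The function names and the 2-D simplification are assumptions of this sketch, not the patent's implementation.

```python
import numpy as np

def se2_matrix(dx, dy, dyaw):
    """Homogeneous transform for a planar motion increment (illustrative
    reduction of the 6-DoF case to 2-D translation plus yaw)."""
    c, s = np.cos(dyaw), np.sin(dyaw)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0., 0., 1.]])

def align_frame(points_xy, T_ref_from_frame):
    """Re-express one point cloud frame (N x 2) in the reference frame,
    compensating the motion that occurred between the acquisitions."""
    homo = np.hstack([points_xy, np.ones((points_xy.shape[0], 1))])
    return (homo @ T_ref_from_frame.T)[:, :2]

def build_correction_cloud(frames, imu_increments):
    """frames: list of (N_i x 2) clouds; imu_increments[i] = (dx, dy, dyaw) is
    the IMU-estimated relative motion from frame i to frame i+1.  Every frame
    is aligned to the first one and the frames are superposed into a denser,
    motion-compensated correction point cloud."""
    T_ref_from_frame = np.eye(3)          # pose of frame 0 in itself
    aligned = [frames[0]]
    for frame, (dx, dy, dyaw) in zip(frames[1:], imu_increments):
        # Pose of the next frame expressed in the reference (first) frame.
        T_ref_from_frame = T_ref_from_frame @ se2_matrix(dx, dy, dyaw)
        aligned.append(align_frame(frame, T_ref_from_frame))
    return np.vstack(aligned)
```

Stacking the aligned frames in this way yields the denser correction point cloud referred to above.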
Further, the inputting the correction point cloud data into the lidar odometer and outputting the second pose information corresponding to the lidar includes:
Respectively extracting the surface characteristics corresponding to the correction point cloud data and the previous point cloud data at the previous moment;
Determining pose change information based on the difference value of the surface characteristics corresponding to the correction point cloud data and the previous point cloud data at the previous moment;
And outputting second pose information corresponding to the laser radar based on the pose change information and the previous pose information.
Specifically, the laser radar odometer is a Direct LiDAR Odometry (DLO). After the correction point cloud data are obtained, the previous point cloud data in the previous point cloud map at the previous moment are obtained, the surface features corresponding to the previous point cloud data and the correction point cloud data are respectively extracted, point-to-plane residuals between the previous frame and the current frame are constructed according to the difference of the surface features to obtain the inter-frame pose change information, and the second pose information corresponding to the laser radar is obtained as the sum of the pose change information and the previous pose information.
Alternatively, the surface feature may be a normal vector corresponding to each point in the correction point cloud data, and the normal vector may be determined by calculating a local curvature of each point in the correction point cloud data, for example, fitting a plane using a least square method, and calculating a normal vector corresponding to each point in the plane. The normal vector can be used for determining sub-pose change information corresponding to each point in the correction point cloud data, and the pose change information can be determined through all the sub-pose change information.
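One way to realize the least-squares plane fit mentioned above is an eigen-decomposition of the local neighbourhood covariance, whose smallest-eigenvalue direction is the surface normal. The neighbourhood size and the one-to-one pairing assumed in the residual computation are illustrative choices, not parameters taken from the patent.

```python
import numpy as np

def estimate_normals(points, k_neighbours=20):
    """Estimate a surface normal for every point of the correction point cloud
    by fitting a local plane to its k nearest neighbours (the eigenvector of
    the smallest eigenvalue of the neighbourhood covariance)."""
    normals = np.zeros(points.shape)
    for i, p in enumerate(points):
        # Brute-force k-nearest-neighbour search (fine for a small sketch).
        dists = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(dists)[:k_neighbours]]
        centred = nbrs - nbrs.mean(axis=0)
        cov = centred.T @ centred
        eigvals, eigvecs = np.linalg.eigh(cov)
        normals[i] = eigvecs[:, 0]        # smallest-eigenvalue direction
    return normals

def point_to_plane_residuals(curr_points, prev_points, prev_normals):
    """Point-to-plane residuals between the correction cloud and the previous
    cloud, assuming a one-to-one pairing for simplicity."""
    return np.einsum('ij,ij->i', curr_points - prev_points, prev_normals)
```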
In addition, the correction point cloud data may be registered with the previous point cloud map, and the registration method may include the ICP (Iterative Closest Point) algorithm or the NDT (Normal Distributions Transform) algorithm, or the like. The registration result is then analyzed, for example by counting the average deviation in each direction, and the conversion relation between the coordinate system corresponding to the laser radar odometer and the world coordinate system is corrected, so as to obtain a laser radar odometer with higher precision.
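A minimal point-to-point ICP refinement of the kind mentioned above could look as follows; the use of scipy's cKDTree for nearest-neighbour search and the fixed iteration count are assumptions of this sketch rather than the patent's registration procedure.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(source, target, iterations=20):
    """Point-to-point ICP between the correction point cloud (source) and the
    previous point cloud map (target); returns rotation R and translation t."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iterations):
        # 1. Pair every source point with its nearest neighbour in the target.
        _, idx = tree.query(src)
        matched = target[idx]
        # 2. Closed-form rigid alignment of the pairs (Kabsch / SVD).
        src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:      # avoid reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = tgt_c - R_step @ src_c
        # 3. Apply the incremental transform and accumulate it.
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```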
Optionally, besides the wheel type odometer, the IMU and the laser radar, the robot may be further provided with a laser radar odometer, an ultrasonic sensor, a vision sensor, a GPS positioning system and the like, which is not limited in the embodiment of the present invention. The laser radar odometer can determine second pose information according to the point cloud data. The ultrasonic sensor can determine the propagation displacement amount of the reflected wave through the propagation time length of the reflected wave, so as to further determine the pose information of the robot, wherein the pose information comprises the position, the direction and the like of the robot. The vision sensor can acquire an environment image through the camera, the environment image can provide the shape, the size, the color and other characteristics of a target object, the target detection is carried out on the environment image, and the pose information of the robot can be determined. The GPS positioning system can acquire pose information of the robot by receiving satellite signals, wherein the pose information can comprise longitude, latitude, altitude and the like.
Further, fig. 2 is a second flowchart of a multi-source fusion positioning method according to an embodiment of the present invention; as shown in fig. 2, the acquiring of the target environment information corresponding to the robot at the target moment includes:
Acquiring initial environment information corresponding to the robot, wherein the initial environment information comprises: the third pose information corresponding to the wheel type odometer, the second inertial navigation information corresponding to the IMU and the fourth pose information corresponding to the laser radar;
Respectively determining a first time stamp corresponding to the third pose information, a second time stamp corresponding to the second inertial navigation information and a third time stamp corresponding to the fourth pose information in the initial environment information;
Determining three timestamp differences based on differences of any two of the first timestamp, the second timestamp and the third timestamp;
And determining the target environment information corresponding to the robot at the target moment based on the comparison result of the preset threshold value and the time stamp difference value.
Specifically, the target time is the time of synchronously acquiring the first pose information, the first inertial navigation information and the second pose information. The wheel type odometer, the IMU and the laser radar each acquire their corresponding data at their own frequencies, that is, the third pose information corresponding to the wheel type odometer, the second inertial navigation information corresponding to the IMU and the fourth pose information corresponding to the laser radar in the initial environment information may be acquired simultaneously, or in a fully or partially sequential order. After the initial environment information is acquired, the first timestamp corresponding to the third pose information, the second timestamp corresponding to the second inertial navigation information and the third timestamp corresponding to the fourth pose information can be respectively determined, and three timestamp differences are determined according to the absolute value of the difference of any two timestamps, namely the difference between the first timestamp and the second timestamp, the difference between the second timestamp and the third timestamp, and the difference between the first timestamp and the third timestamp; the three timestamp differences are all non-negative. After the three timestamp differences are determined, each timestamp difference is compared with a preset threshold value, and whether the initial environment information is determined as the target environment information is judged according to the comparison result.
Further, the determining, based on the comparison result of the preset threshold value and each timestamp difference value, the target environmental information corresponding to the robot at the target moment includes:
Determining the latest time stamp in the first time stamp, the second time stamp and the third time stamp as the target time under the condition that the time stamp difference value is smaller than the preset threshold value; and determining the third pose information as first pose information of the target moment, the second inertial navigation information as first inertial navigation information of the target moment, and the fourth pose information as second pose information of the target moment;
And deleting the third pose information, the second inertial navigation information and the fourth pose information and collecting the initial environment information of the robot again under the condition that at least one timestamp difference value in the timestamp difference values is larger than or equal to the preset threshold value.
Specifically, if each timestamp difference is smaller than the preset threshold value, it indicates that the displacement change of the robot within the timestamp differences is small enough to be ignored, and the third pose information, the second inertial navigation information and the fourth pose information can be regarded as obtained at the same moment. At this time, the first timestamp, the second timestamp and the third timestamp may be compared, the latest timestamp may be determined therefrom and taken as the target time, the third pose information may be determined as the first pose information, the second inertial navigation information may be determined as the first inertial navigation information, and the fourth pose information may be determined as the second pose information, so that the first pose information, the first inertial navigation information and the second pose information can be fused at the target time, improving the positioning accuracy and reliability of the robot. If at least one of the three timestamp differences is larger than or equal to the preset threshold value, the displacement variation between the two pieces of pose information corresponding to that timestamp difference is large, so the acquired initial environment information needs to be deleted and acquired again. This avoids the problem that fusion and positioning performed according to the third pose information, the second inertial navigation information and the fourth pose information in such initial environment information would produce a large positioning error of the robot and affect the navigation precision and movement reliability of the robot.
It should be noted that, if this is not the first fusion of the first pose information, the first inertial navigation information and the second pose information, the target time needs to be later than the previous time of the previous fusion, that is, the first timestamp, the second timestamp and the third timestamp all need to be later than the previous time.
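The timestamp check described above might be sketched as follows; the (timestamp, data) message layout and the function name are illustrative assumptions.

```python
def synchronize(odom_msg, imu_msg, lidar_msg, threshold, last_fused_time):
    """Decide whether three sensor messages can be fused, following the
    timestamp-difference check described above.  Each *_msg is assumed to be
    a (timestamp, data) pair; threshold and the layout are illustrative."""
    t1, t2, t3 = odom_msg[0], imu_msg[0], lidar_msg[0]
    diffs = (abs(t1 - t2), abs(t2 - t3), abs(t1 - t3))
    if all(d < threshold for d in diffs):
        target_time = max(t1, t2, t3)          # latest timestamp
        # Fusion must move forward in time with respect to the previous fusion.
        if target_time > last_fused_time:
            return target_time, (odom_msg[1], imu_msg[1], lidar_msg[1])
    # Otherwise discard this set and wait for freshly collected messages.
    return None, None
```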
And step 120, determining a pose observation vector at a target moment based on the first pose information, the first inertial navigation information and the second pose information.
Specifically, after the first pose information, the first inertial navigation information and the second pose information are determined, they can be converted into the pose observation vector at the target moment to be used as the observation value of the subsequent extended Kalman filter.
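Since the patent does not spell out the exact composition of the pose observation vector, the following sketch simply stacks the pose estimates of the three synchronized sources; the stacking order and the six-element pose layout are assumptions.

```python
import numpy as np

def build_pose_observation(wheel_pose, imu_pose, lidar_pose):
    """Stack the three synchronized measurements into one observation vector
    Z_k.  Each input is assumed to be (x, y, z, roll, pitch, yaw); the
    stacking order is an illustrative choice, the patent does not fix it."""
    return np.concatenate([np.asarray(wheel_pose),
                           np.asarray(imu_pose),
                           np.asarray(lidar_pose)])
```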
And 130, inputting the pose observation vector and the previous state vector into a nonlinear filtering model, and outputting a target positioning vector corresponding to the robot at the target moment, wherein the nonlinear filtering model is constructed based on an extended Kalman filter.
Specifically, after the pose observation vector is determined, extended Kalman filtering can be performed by using the pose observation vector and the previous state vector, that is, the pose observation vector and the previous state vector are input into the nonlinear filtering model, the state of the previous state vector at the target moment is predicted through the nonlinear filtering model, and the target positioning vector of the robot is determined according to the error between the state prediction and the pose observation vector.
Before the extended Kalman filtering, a system model is constructed, wherein the system state variable in the system model is (x, y, z, α, β, γ)^T and the system control vector is (v, ω)^T, where (x, y, z) represents the spatial coordinates of the robot, (α, β, γ) represents the attitude of the robot, α represents the rotation angle around the x axis in Euler angles, β represents the rotation angle around the y axis in Euler angles, γ represents the rotation angle around the z axis in Euler angles, v represents the linear velocity of the robot, and ω represents the angular velocity of the robot. Then, since the robot moves only on the ground, the system state equation shown in formula (1) is determined according to the kinematic relationship of the robot, where k represents the time index.
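Formula (1) itself is not reproduced here, so the following sketch assumes a standard discrete planar kinematic model for a ground robot with state (x, y, z, α, β, γ) and control (v, ω); the fixed time step dt and the exact discretization are assumptions of the sketch, not the patent's equation.

```python
import numpy as np

def state_transition(X, u, dt=0.1):
    """One possible discrete kinematic model f(X, u) for a ground robot:
    the planar position is advanced along the current yaw, the yaw is advanced
    by the angular velocity, and z, roll and pitch stay constant."""
    x, y, z, alpha, beta, gamma = X
    v, omega = u
    return np.array([
        x + v * dt * np.cos(gamma),
        y + v * dt * np.sin(gamma),
        z,                      # the robot moves only on the ground
        alpha,
        beta,
        gamma + omega * dt,
    ])
```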
Further, the nonlinear filtering model comprises a nonlinear state equation and a nonlinear observation equation;
inputting the pose observation vector and the previous state vector into a nonlinear filtering model, and outputting a target positioning vector corresponding to the robot at the target moment, wherein the method comprises the following steps:
performing a first-order Taylor expansion on the nonlinear state equation to obtain a linear state equation;
performing a first-order Taylor expansion on the nonlinear observation equation to obtain a linear observation equation;
And determining a target positioning vector corresponding to the robot at a target moment based on the linear state equation, the linear observation equation, the pose observation vector and the previous state vector.
Specifically, the nonlinear state equation is shown in formula (2), and formula (2) is:
X_k = f(X_{k-1}, u_k, w_k)
where f(·) denotes the state transition function, X_k denotes the system state vector at time k, X_{k-1} denotes the system state vector at time k-1, u_k denotes the system control vector at time k, and w_k denotes the process noise at time k; w_k is subject to Gaussian white noise, i.e. p(w_k) ~ N(0, Q), where Q denotes the process noise covariance matrix.
The nonlinear observation equation is shown as a formula (3), and the formula (3) is as follows:
Z_k = h(X_k, v_k)
where Z_k denotes the observation vector at time k, h(·) denotes the observation function of the sensors, and v_k denotes the observation noise at time k; v_k is subject to Gaussian white noise, i.e. p(v_k) ~ N(0, R), where R denotes the observation noise covariance matrix.
Then, as can be seen from formula (1), the system model is a nonlinear system; if Kalman filtering were performed directly with the nonlinear state equation and the nonlinear observation equation of the nonlinear filtering model, the error of the predicted target positioning vector would diverge and the nonlinear filtering model would fail. Therefore, in the embodiment of the present invention, a first-order Taylor expansion of the nonlinear state equation is performed at the posterior estimate of time k-1, that is, the nonlinear state equation is linearized, so as to obtain the approximately linear state equation shown in formula (4):
X_k ≈ X̂_k^- + A·(X_{k-1} - X̂_{k-1}) + w_k
where X̂_{k-1} represents the posterior estimate at time k-1, i.e. the previous state vector at time k-1; X̂_k^- represents the prior estimate, i.e. the predicted state vector, which is the value of the state transition function when w_{k-1} = 0, namely X̂_k^- = f(X̂_{k-1}, u_k, 0); and A represents the state transition matrix, i.e. the Jacobian of the state transition function, A = ∂f/∂X evaluated at X̂_{k-1}.
Thereafter, a first-order Taylor expansion of Z_k is performed at X̂_k^-, obtaining the approximately linear observation equation shown in formula (5):
Z_k ≈ h(X̂_k^-, 0) + H·(X_k - X̂_k^-) + v_k
where H represents the observation matrix, i.e. the Jacobian of the observation function, H = ∂h/∂X evaluated at X̂_k^-.
After the linear state equation and the linear observation equation are obtained, they are combined with the pose observation vector and the previous state vector to further determine the posterior estimate at the target moment, namely the target positioning vector corresponding to the robot at the target moment.
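The Jacobians A and H used in the linearized equations can be obtained analytically or, as in the sketch below, by finite differences; the step size and the helper name are illustrative assumptions.

```python
import numpy as np

def numerical_jacobian(func, x, eps=1e-6):
    """Finite-difference Jacobian of func at x; one convenient way to obtain
    the state transition matrix A = df/dX and the observation matrix H = dh/dX
    used in the linearized equations above."""
    fx = func(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros(x.shape)
        dx[i] = eps
        J[:, i] = (func(x + dx) - fx) / eps
    return J

# Example (hypothetical values): A evaluated at the previous posterior estimate.
# X_prev = np.zeros(6)
# u_k = np.array([0.5, 0.1])
# A = numerical_jacobian(lambda X: state_transition(X, u_k), X_prev)
```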
Further, the determining, based on the linear state equation, the linear observation equation, the pose observation vector, and the previous state vector, a target positioning vector corresponding to the robot at a target time includes:
determining a predicted state vector at a target time based on the system model and the previous state vector;
Determining a prediction covariance matrix based on a previous covariance matrix at a previous moment, a process noise covariance matrix and the linear state equation;
determining an observation matrix based on the linear observation equation;
determining a Kalman gain based on an observation noise covariance matrix, the observation matrix, and the prediction covariance matrix;
And determining a target positioning vector corresponding to the robot at a target moment based on the Kalman gain, the pose observation vector and the prediction state vector.
Specifically, after determining the linear state equation and the linear observation equation, firstly, combining the previous state vector and a system model constructed according to the kinematic equation, determining prior estimation of the target moment, namely, a predicted state vector of the target moment, wherein the obtained predicted state vector is shown as a formula (6), and the formula (6) is as follows:
X̂_k^- = f(X̂_{k-1}, u_k, 0)
where X̂_k^- represents the predicted state vector and X̂_{k-1} represents the previous state vector.
Meanwhile, using equation (7), a predicted covariance matrix is calculated from a previous covariance matrix at a previous time, a state transition matrix determined from a linear state equation, and a process noise covariance matrix, where equation (7) is:
P_k^- = A·P_{k-1}·A^T + Q
where P_k^- represents the predicted covariance matrix and P_{k-1} represents the previous covariance matrix.
Then, after determining the observation matrix according to the linear observation equation, the Kalman gain K_k is calculated from the observation noise covariance matrix, the observation matrix and the prediction covariance matrix by using formula (8):
K_k = P_k^-·H^T·(H·P_k^-·H^T + R)^(-1)
After determining the Kalman gain, the posterior estimate at the target moment, that is, the target positioning vector corresponding to the robot at the target moment, is calculated according to the Kalman gain, the pose observation vector and the predicted state vector by using formula (9):
X̂_k = X̂_k^- + K_k·(Z_k - h(X̂_k^-, 0))
where X̂_k represents the target positioning vector, Z_k represents the pose observation vector, and h(X̂_k^-, 0) represents the transformation of the predicted state vector by the observation function.
In addition, after the target positioning vector is calculated, the posterior covariance matrix at the target moment is calculated according to the identity matrix, the Kalman gain, the observation matrix and the prediction covariance matrix by using formula (10), so as to update the previous covariance matrix:
P_k = (I - K_k·H)·P_k^-
where P_k represents the posterior covariance matrix and I represents the identity matrix.
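Putting formulas (6) to (10) together, one possible predict/update cycle of the extended Kalman filter looks as follows; the function signatures for f, h and their Jacobians are placeholders assumed for this sketch.

```python
import numpy as np

def ekf_step(x_prev, P_prev, u_k, z_k, f, h, jac_f, jac_h, Q, R):
    """One extended-Kalman-filter cycle implementing formulas (6)-(10):
    f and h are the nonlinear state and observation functions, jac_f and jac_h
    return their Jacobians A and H, and Q and R are the process and observation
    noise covariance matrices."""
    # Formula (6): predicted state vector (prior estimate at the target time).
    x_pred = f(x_prev, u_k)
    # Formula (7): predicted covariance matrix.
    A = jac_f(x_prev, u_k)
    P_pred = A @ P_prev @ A.T + Q
    # Formula (8): Kalman gain.
    H = jac_h(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Formula (9): posterior estimate, i.e. the target positioning vector.
    x_post = x_pred + K @ (z_k - h(x_pred))
    # Formula (10): posterior covariance matrix, used as P_{k-1} next time.
    P_post = (np.eye(len(x_prev)) - K @ H) @ P_pred
    return x_post, P_post
```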
According to the multisource fusion positioning method provided by the embodiment of the invention, at least the synchronously acquired first pose information corresponding to the wheel type odometer, the first inertial navigation information corresponding to the IMU, the second pose information corresponding to the laser radar and the previous state vector of the previous moment are acquired. Point cloud inter-frame alignment is performed on the point cloud data acquired by the laser radar through the first inertial navigation information to obtain the second pose information, which increases the density of the point cloud data and the accuracy of the second pose information. The pose observation vector is then determined according to the first pose information, the first inertial navigation information and the second pose information, and the nonlinear filtering model constructed from the extended Kalman filter continuously updates the target positioning vector according to the difference between the pose observation vector and the previous state vector, so that the nonlinear filtering model is closer to the practical application scene when realizing the data fusion of the multiple sensors, thereby further improving the positioning accuracy and the positioning reliability.
The multi-source fusion positioning device provided by the invention is described below, and the multi-source fusion positioning device described below and the multi-source fusion positioning method described above can be referred to correspondingly.
The embodiment of the present invention further provides a multi-source fusion positioning device, and fig. 3 is a schematic structural diagram of the multi-source fusion positioning device provided by the embodiment of the present invention, as shown in fig. 3, the multi-source fusion positioning device 300 includes: an acquisition module 310, a determination module 320, and a positioning module 330, wherein:
An obtaining module 310, configured to obtain target environment information corresponding to the robot at a target moment and a previous state vector of a previous moment, where the target environment information at least includes: synchronously acquired first pose information corresponding to the wheel type odometer, first inertial navigation information corresponding to the inertial measurement unit IMU, and second pose information corresponding to the laser radar; the second pose information is determined by the laser radar odometer performing point cloud inter-frame alignment on the point cloud data acquired by the laser radar based on the first inertial navigation information;
A determining module 320, configured to determine a pose observation vector at a target moment based on the first pose information, the first inertial navigation information, and the second pose information;
And the positioning module 330 is configured to input the pose observation vector and the previous state vector into a nonlinear filtering model, and output a target positioning vector corresponding to the robot at a target time, where the nonlinear filtering model is constructed based on an extended kalman filter.
According to the multisource fusion positioning device provided by the embodiment of the invention, at least the synchronously acquired first pose information corresponding to the wheel type odometer, the first inertial navigation information corresponding to the IMU, the second pose information corresponding to the laser radar and the previous state vector of the previous moment are acquired. Point cloud inter-frame alignment is performed on the point cloud data acquired by the laser radar through the first inertial navigation information to obtain the second pose information, which increases the density of the point cloud data and the accuracy of the second pose information. The pose observation vector is then determined according to the first pose information, the first inertial navigation information and the second pose information, and the nonlinear filtering model constructed from the extended Kalman filter continuously updates the target positioning vector according to the difference between the pose observation vector and the previous state vector, so that the nonlinear filtering model is closer to the practical application scene when realizing the data fusion of the multiple sensors, thereby further improving the positioning accuracy and the positioning reliability.
Optionally, the obtaining module 310 is specifically configured to:
Acquiring initial environment information corresponding to the robot, wherein the initial environment information comprises: the third pose information corresponding to the wheel type odometer, the second inertial navigation information corresponding to the IMU and the fourth pose information corresponding to the laser radar;
Respectively determining a first time stamp corresponding to the third pose information, a second time stamp corresponding to the second inertial navigation information and a third time stamp corresponding to the fourth pose information in the initial environment information;
Determining three timestamp differences based on differences of any two of the first timestamp, the second timestamp and the third timestamp;
And determining the target environment information corresponding to the robot at the target moment based on the comparison result of the preset threshold value and the time stamp difference value.
Optionally, the obtaining module 310 is specifically configured to:
Determining the latest time stamp in the first time stamp, the second time stamp and the third time stamp as the target time under the condition that the time stamp difference value is smaller than the preset threshold value; and determining the third pose information as first pose information of the target moment, the second inertial navigation information as first inertial navigation information of the target moment, and the fourth pose information as second pose information of the target moment;
And deleting the third pose information, the second inertial navigation information and the fourth pose information and collecting the initial environment information of the robot again under the condition that at least one timestamp difference value in the timestamp difference values is larger than or equal to the preset threshold value.
Optionally, the multi-source fusion positioning device 300 further includes an output module, where the output module is specifically configured to:
acquiring at least two frames of point cloud data acquired by a target moment laser radar;
Determining relative motion information between the point cloud data of each frame based on the first inertial navigation information;
Based on the relative motion information, carrying out inter-frame alignment on the point cloud data of each frame to obtain aligned point cloud data;
superposing the alignment point cloud data of each frame to determine correction point cloud data;
and inputting the correction point cloud data into the laser radar odometer, and outputting second pose information corresponding to the laser radar.
Optionally, the output module is specifically configured to:
Respectively extracting the surface characteristics corresponding to the correction point cloud data and the previous point cloud data at the previous moment;
Determining pose change information based on the difference value of the surface characteristics corresponding to the correction point cloud data and the previous point cloud data at the previous moment;
And outputting second pose information corresponding to the laser radar based on the pose change information and the previous pose information.
Optionally, the nonlinear filtering model includes a nonlinear state equation and a nonlinear observation equation.
Optionally, the positioning module 330 is specifically configured to:
performing a first-order Taylor expansion on the nonlinear state equation to obtain a linear state equation;
performing a first-order Taylor expansion on the nonlinear observation equation to obtain a linear observation equation;
And determining a target positioning vector corresponding to the robot at a target moment based on the linear state equation, the linear observation equation, the pose observation vector and the previous state vector.
Optionally, the positioning module 330 is specifically configured to:
determining a predicted state vector at a target time based on the system model and the previous state vector;
Determining a prediction covariance matrix based on a previous covariance matrix at a previous moment, a process noise covariance matrix and the linear state equation;
determining an observation matrix based on the linear observation equation;
determining a Kalman gain based on an observation noise covariance matrix, the observation matrix, and the prediction covariance matrix;
And determining a target positioning vector corresponding to the robot at a target moment based on the Kalman gain, the pose observation vector and the prediction state vector.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, as shown in fig. 4, the electronic device may include: processor 410, communication interface (Communications Interface) 420, memory 430, and communication bus 440, wherein processor 410, communication interface 420, and memory 430 communicate with each other via communication bus 440. The processor 410 may invoke logic instructions in the memory 430 to perform a multi-source fusion localization method comprising:
Acquiring target environment information corresponding to a robot at a target moment and a previous state vector of a previous moment, wherein the target environment information at least comprises: synchronously acquired first pose information corresponding to the wheel type odometer, first inertial navigation information corresponding to the IMU, and second pose information corresponding to the laser radar; the second pose information is determined by the laser radar odometer performing point cloud inter-frame alignment on the point cloud data acquired by the laser radar based on the first inertial navigation information;
Determining a pose observation vector at a target moment based on the first pose information, the first inertial navigation information and the second pose information;
And inputting the pose observation vector and the previous state vector into a nonlinear filtering model, and outputting a target positioning vector corresponding to the robot at the target moment, wherein the nonlinear filtering model is constructed based on an extended Kalman filter.
Further, the logic instructions in the memory 430 described above may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program codes, such as a USB disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In another aspect, the present invention also provides a computer program product, the computer program product comprising a computer program, the computer program being storable on a non-transitory computer readable storage medium, the computer program, when executed by a processor, being capable of performing the multi-source fusion positioning method provided by the methods described above, the method comprising:
Acquiring target environment information corresponding to a robot at a target moment and a previous state vector of a previous moment, wherein the target environment information at least comprises: synchronously acquired first pose information corresponding to the wheel type odometer, first inertial navigation information corresponding to the IMU, and second pose information corresponding to the laser radar; the second pose information is determined by the laser radar odometer performing point cloud inter-frame alignment on the point cloud data acquired by the laser radar based on the first inertial navigation information;
Determining a pose observation vector at a target moment based on the first pose information, the first inertial navigation information and the second pose information;
And inputting the pose observation vector and the previous state vector into a nonlinear filtering model, and outputting a target positioning vector corresponding to the robot at the target moment, wherein the nonlinear filtering model is constructed based on an extended Kalman filter.
In yet another aspect, the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, is implemented to perform the multi-source fusion positioning method provided by the above methods, the method comprising:
Acquiring target environment information corresponding to a robot at a target moment and a previous state vector of a previous moment, wherein the target environment information at least comprises: synchronously acquired first pose information corresponding to the wheel type odometer, first inertial navigation information corresponding to the IMU, and second pose information corresponding to the laser radar; the second pose information is determined by the laser radar odometer performing point cloud inter-frame alignment on the point cloud data acquired by the laser radar based on the first inertial navigation information;
Determining a pose observation vector at a target moment based on the first pose information, the first inertial navigation information and the second pose information;
And inputting the pose observation vector and the previous state vector into a nonlinear filtering model, and outputting a target positioning vector corresponding to the robot at the target moment, wherein the nonlinear filtering model is constructed based on an extended Kalman filter.
The apparatus embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A multi-source fusion positioning method, comprising:
Acquiring target environment information corresponding to a robot at a target moment and a previous state vector of a previous moment, wherein the target environment information at least comprises: synchronously acquired first pose information corresponding to the wheel type odometer, first inertial navigation information corresponding to the inertial measurement unit IMU, and second pose information corresponding to the laser radar; the second pose information is determined by the laser radar odometer performing point cloud inter-frame alignment on the point cloud data acquired by the laser radar based on the first inertial navigation information;
Determining a pose observation vector at a target moment based on the first pose information, the first inertial navigation information and the second pose information;
And inputting the pose observation vector and the previous state vector into a nonlinear filtering model, and outputting a target positioning vector corresponding to the robot at the target moment, wherein the nonlinear filtering model is constructed based on an extended Kalman filter.
2. The multi-source fusion positioning method according to claim 1, wherein the second pose information is determined based on the steps of:
acquiring at least two frames of point cloud data collected by the laser radar at the target moment;
determining relative motion information between the frames of point cloud data based on the first inertial navigation information;
performing inter-frame alignment on each frame of point cloud data based on the relative motion information to obtain aligned point cloud data;
superposing the aligned point cloud data of each frame to determine correction point cloud data;
and inputting the correction point cloud data into the laser radar odometer, and outputting the second pose information corresponding to the laser radar.
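The following sketch illustrates the idea of claim 2: IMU-derived relative motion between frames is used to re-project each frame into a common coordinate frame before superposition. The 2D (SE(2)) representation, the motion convention and the function names are assumptions made for brevity; a real system would work in 3D and account for per-point timestamps.

```python
import numpy as np

def se2(dx, dy, dyaw):
    """Homogeneous 2D transform for a relative motion (dx, dy, dyaw)."""
    c, s = np.cos(dyaw), np.sin(dyaw)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0., 0., 1.]])

def align_and_superpose(frames, relative_motions):
    """frames[i]: (N_i, 2) points in frame i; relative_motions[i]: motion of frame i+1
    relative to frame i, assumed here to come from short-time IMU integration.
    Returns all points expressed in the first frame, i.e. the correction point cloud."""
    aligned = [np.asarray(frames[0], dtype=float)]
    T = np.eye(3)
    for frame, motion in zip(frames[1:], relative_motions):
        T = T @ se2(*motion)                       # pose of this frame in the first frame
        pts = np.asarray(frame, dtype=float)
        pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])
        aligned.append((T @ pts_h.T).T[:, :2])     # aligned point cloud data
    return np.vstack(aligned)                      # superposed correction point cloud data

# Two toy frames of the same landmark, related by a pure 0.1 m forward motion.
cloud = align_and_superpose([np.array([[1.0, 0.0]]), np.array([[0.9, 0.0]])],
                            [(0.1, 0.0, 0.0)])
```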
3. The multi-source fusion positioning method according to claim 2, wherein the inputting the correction point cloud data into the laser radar odometer and outputting the second pose information corresponding to the laser radar comprises:
respectively extracting surface features corresponding to the correction point cloud data and to the previous point cloud data at the previous moment;
determining pose change information based on the difference between the surface features of the correction point cloud data and those of the previous point cloud data;
and outputting the second pose information corresponding to the laser radar based on the pose change information and the previous pose information.
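The sketch below caricatures claim 3 with a deliberately crude "surface feature" (cloud centroid plus dominant direction) and a 2D pose; it only shows how a pose change derived from feature differences can be composed with the previous pose. A real laser radar odometer would use planar/edge residuals and a proper SE(3) composition; all names here are illustrative assumptions.

```python
import numpy as np

def surface_feature(points):
    """Crude stand-in for a surface feature: centroid and dominant direction of the cloud."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vh = np.linalg.svd(pts - centroid, full_matrices=False)
    angle = np.arctan2(vh[0, 1], vh[0, 0])        # dominant surface direction
    return centroid, angle

def second_pose(correction_cloud, previous_cloud, previous_pose):
    """Pose change = difference of the two features; second pose = previous pose + change."""
    c1, a1 = surface_feature(correction_cloud)
    c0, a0 = surface_feature(previous_cloud)
    delta = np.array([c1[0] - c0[0], c1[1] - c0[1], a1 - a0])     # pose change information
    return np.asarray(previous_pose, dtype=float) + delta          # additive composition (simplification)

pose = second_pose([[1.0, 0.0], [1.1, 0.2]], [[0.9, 0.0], [1.0, 0.2]], [0.0, 0.0, 0.0])
```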
4. The multi-source fusion positioning method according to any one of claims 1 to 3, wherein the nonlinear filtering model comprises a nonlinear state equation and a nonlinear observation equation;
the inputting the pose observation vector and the previous state vector into a nonlinear filtering model, and outputting a target positioning vector corresponding to the robot at the target moment, comprises:
performing first-order Taylor expansion on the nonlinear state equation to obtain a linear state equation;
performing first-order Taylor expansion on the nonlinear observation equation to obtain a linear observation equation;
and determining the target positioning vector corresponding to the robot at the target moment based on the linear state equation, the linear observation equation, the pose observation vector and the previous state vector.
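To make the linearisation step concrete, the sketch below approximates the first-order Taylor expansion numerically: the Jacobian of a nonlinear state (or observation) function about the current estimate gives the linear state (or observation) equation used by the filter. The finite-difference Jacobian and the unicycle-style state function are illustrative assumptions, not the equations of the patent.

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """First-order Taylor expansion of f about x: f(x + dx) ≈ f(x) + J @ dx."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x), dtype=float)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.asarray(f(x + dx), dtype=float) - fx) / eps
    return J

def state_fn(x, dt=0.1):
    """Hypothetical nonlinear state equation for a state [px, py, yaw, v]."""
    px, py, yaw, v = x
    return np.array([px + v * dt * np.cos(yaw),
                     py + v * dt * np.sin(yaw),
                     yaw,
                     v])

F = numerical_jacobian(state_fn, [0.0, 0.0, 0.1, 1.0])   # linearised state equation
# The observation matrix H is obtained the same way from the nonlinear observation equation.
```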
5. The multi-source fusion positioning method according to claim 4, wherein the determining the target positioning vector corresponding to the robot at the target moment based on the linear state equation, the linear observation equation, the pose observation vector and the previous state vector comprises:
determining a predicted state vector at the target moment based on the system model and the previous state vector;
determining a prediction covariance matrix based on a previous covariance matrix at the previous moment, a process noise covariance matrix and the linear state equation;
determining an observation matrix based on the linear observation equation;
determining a Kalman gain based on an observation noise covariance matrix, the observation matrix and the prediction covariance matrix;
and determining the target positioning vector corresponding to the robot at the target moment based on the Kalman gain, the pose observation vector and the predicted state vector.
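The five sub-steps of claim 5 map onto the standard extended-Kalman-filter update, sketched below with conventional notation (F: linearised state equation, H: observation matrix, Q/R: process and observation noise covariances). The dimensions and the purely linear prediction are simplifications; a full EKF would propagate the state through the nonlinear state equation and use F only for the covariance.

```python
import numpy as np

def ekf_positioning_step(prev_state, prev_cov, pose_obs, F, H, Q, R):
    # 1) predicted state vector at the target moment from the system model
    x_pred = F @ prev_state
    # 2) prediction covariance from the previous covariance, process noise and F
    P_pred = F @ prev_cov @ F.T + Q
    # 3) the observation matrix H comes from the linear observation equation
    # 4) Kalman gain from the observation noise covariance, H and the prediction covariance
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # 5) target positioning vector from the gain, the pose observation and the predicted state
    x_new = x_pred + K @ (pose_obs - H @ x_pred)
    P_new = (np.eye(prev_state.size) - K @ H) @ P_pred
    return x_new, P_new

# Toy 3-state example (x, y, yaw) observed directly.
x, P = ekf_positioning_step(np.zeros(3), np.eye(3), np.array([0.1, 0.0, 0.05]),
                            np.eye(3), np.eye(3), 0.01 * np.eye(3), 0.1 * np.eye(3))
```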
6. The multi-source fusion positioning method according to any one of claims 1 to 3, wherein the acquiring the target environment information corresponding to the robot at the target moment comprises:
acquiring initial environment information corresponding to the robot, wherein the initial environment information comprises: third pose information corresponding to the wheel type odometer, second inertial navigation information corresponding to the IMU and fourth pose information corresponding to the laser radar;
respectively determining a first timestamp corresponding to the third pose information, a second timestamp corresponding to the second inertial navigation information, and a third timestamp corresponding to the fourth pose information in the initial environment information;
determining three timestamp differences based on the differences between any two of the first timestamp, the second timestamp and the third timestamp;
and determining the target environment information corresponding to the robot at the target moment based on a comparison result of a preset threshold and each timestamp difference.
7. The multi-source fusion positioning method according to claim 6, wherein the determining the target environment information corresponding to the robot at the target moment based on the comparison result of the preset threshold and each timestamp difference comprises:
determining the latest of the first timestamp, the second timestamp and the third timestamp as the target moment under the condition that each timestamp difference is smaller than the preset threshold, determining the third pose information as the first pose information at the target moment, determining the second inertial navigation information as the first inertial navigation information at the target moment, and determining the fourth pose information as the second pose information at the target moment;
and deleting the third pose information, the second inertial navigation information and the fourth pose information and re-collecting the initial environment information of the robot under the condition that at least one of the timestamp differences is greater than or equal to the preset threshold.
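A minimal sketch of the synchronisation logic of claims 6 and 7: the three pairwise timestamp differences are compared with a preset threshold; if all are below it, the newest timestamp becomes the target moment, otherwise the batch is discarded and re-collected. The 20 ms threshold and the function name are assumptions for illustration.

```python
from itertools import combinations

def pick_target_moment(t_wheel, t_imu, t_lidar, threshold=0.02):
    """Return the target moment if the three sources are synchronised, else None."""
    stamps = (t_wheel, t_imu, t_lidar)
    diffs = [abs(a - b) for a, b in combinations(stamps, 2)]   # three pairwise differences
    if all(d < threshold for d in diffs):
        return max(stamps)    # the latest timestamp is taken as the target moment
    return None               # caller discards the readings and re-acquires them

assert pick_target_moment(100.001, 100.004, 99.998) == 100.004
assert pick_target_moment(100.001, 100.100, 99.998) is None
```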
8. A multi-source fusion positioning device, comprising:
an acquisition module, configured to acquire target environment information corresponding to the robot at a target moment and a previous state vector of a previous moment, wherein the target environment information at least comprises: synchronously acquired first pose information corresponding to the wheel type odometer, first inertial navigation information corresponding to the inertial measurement unit IMU, and second pose information corresponding to the laser radar; the second pose information is determined by the laser radar odometer performing point cloud inter-frame alignment, based on the first inertial navigation information, on the point cloud data acquired by the laser radar;
a determining module, configured to determine a pose observation vector at the target moment based on the first pose information, the first inertial navigation information and the second pose information;
and a positioning module, configured to input the pose observation vector and the previous state vector into a nonlinear filtering model and output a target positioning vector corresponding to the robot at the target moment, wherein the nonlinear filtering model is constructed based on an extended Kalman filter.
9. An electronic device, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the multi-source fusion positioning method according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the multi-source fusion positioning method according to any one of claims 1 to 7.

