CN114322996B - Pose optimization method and device of multi-sensor fusion positioning system - Google Patents


Info

Publication number
CN114322996B
CN114322996B (application CN202011060481.7A)
Authority
CN
China
Prior art keywords
pose
coordinate system
world coordinate
absolute
relative
Prior art date
Legal status
Active
Application number
CN202011060481.7A
Other languages
Chinese (zh)
Other versions
CN114322996A (en)
Inventor
韩冰
张涛
边威
黄帅
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN202011060481.7A
Publication of CN114322996A
Application granted
Publication of CN114322996B
Legal status: Active


Abstract

The invention discloses a pose optimization method and device for a multi-sensor fusion positioning system. The method comprises: fusing image data captured by a vision sensor within a preset time period with inertial navigation measurement data to obtain a first pose in a relative world coordinate system; and, based on the first pose and the observed position and observed heading of the global positioning system in an absolute world coordinate system, iteratively optimizing the second pose into which the first pose is converted in the absolute world coordinate system, together with the rotation external parameter between the relative and absolute world coordinate systems, to obtain an optimized value of the second pose and an optimized value of the rotation external parameter. The optimized rotation external parameter is then used when fusing the image data captured by the vision sensor in the next time period with inertial navigation measurement data. The pose in the multi-sensor fusion positioning system can thus be optimized quickly and accurately.

Description

Pose optimization method and device of multi-sensor fusion positioning system
Technical Field
The invention relates to the technical field of high-precision positioning, in particular to a pose optimization method and device of a multi-sensor fusion positioning system.
Background
A multi-sensor fusion positioning system generally refers to a positioning system comprising a visual sensor, an inertial measurement unit (Inertial Measurement Unit, IMU, "inertial navigation" for short), a global navigation satellite system (Global Navigation Satellite System, GNSS) and the like. Such a system offers high positioning precision at low cost, so multi-sensor fusion is an important choice for high-precision positioning.
The prior art for pose optimization of multi-sensor fusion positioning systems generally includes:
(1) VINS_MONO algorithm
Only loop-closure detection is added to correct accumulated error, so the problem of cumulative positioning drift remains.
(2) VI-ORB visual inertial navigation initialization algorithm
Local Bundle Adjustment is performed on the basis of Pose Graph optimization, without first adding the absolute position information of the global positioning system. If absolute information is added, a sufficiently large window is needed, and the Bundle Adjustment performed at that point consumes excessive computing resources.
(3) Traditional Pose Graph scheme
Only the relative pose information and the absolute position information of the global positioning system are optimized, and the rotation matrix of the world coordinate system is recovered from the latest frame. When the fused pose or the raw observation of the latest frame is wrong, the recovered rotation matrix is easily inaccurate, making the optimization unstable; moreover, the initial graph poses computed from an inaccurate world rotation matrix are inaccurate, so the algorithm easily falls into a local optimum.
In summary, pose optimization of multi-sensor fusion positioning systems in the prior art suffers from low precision and long optimization time, owing to problems such as error accumulation and local optima.
Disclosure of Invention
The present invention has been made in view of the above-mentioned problems, and it is an object of the present invention to provide a pose optimization method and apparatus for a multi-sensor fusion positioning system that overcomes or at least partially solves the above-mentioned problems.
In a first aspect, an embodiment of the present invention provides a pose optimization method of a multi-sensor fusion positioning system, including:
fusing the acquired image data shot by the vision sensor in the preset time period with inertial navigation measurement data to obtain a first pose under a relative world coordinate system;
based on the observation position and the observation course of the first pose and the global positioning system under the absolute world coordinate system, performing iterative optimization on the first pose converted to a second pose under the absolute world coordinate system and a rotation external parameter between the relative world coordinate system and the absolute world coordinate system to obtain an optimized value of the second pose and an optimized value of the rotation external parameter, wherein the optimized value of the rotation external parameter is used for fusion of image data shot by a vision sensor in the next time period and inertial navigation measurement data.
In some optional embodiments, the performing iterative optimization on the second pose converted from the first pose to the absolute world coordinate system and the rotation external parameter between the relative world coordinate system and the absolute world coordinate system to obtain an optimized value of the second pose and an optimized value of the rotation external parameter specifically includes:
and iteratively optimizing a second position and a second course angle in the second pose and the rotation external parameter to obtain an optimized value of the second position, an optimized value of the second course angle and an optimized value of the rotation external parameter.
In some optional embodiments, the iterative optimization of the first pose to the second pose under the absolute world coordinate system and the rotation external parameter between the relative world coordinate system and the absolute world coordinate system based on the observed position and the observed heading of the first pose and the global positioning system under the absolute world coordinate system specifically includes:
according to the first pose and the observed position and the observed heading of the global positioning system in an absolute world coordinate system, at least one residual error is established, and the established residual errors are combined into a target residual error:
a relative position change residual between a first position in the first pose and a second position in the second pose into which the first pose is converted in the absolute world coordinate system;
a relative attitude change residual between a first attitude in the first pose and a second attitude in the second pose into which the first pose is converted in the absolute world coordinate system;
an absolute attitude conversion residual of the first attitude;
an absolute position residual of the observed position of the global positioning system in the absolute world coordinate system;
an absolute attitude residual of the observed heading of the global positioning system in the absolute world coordinate system.
In some alternative embodiments, the relative position change residual is established by:
converting the difference between the first positions at the shooting times of two adjacent frames of images into a first difference in the inertial navigation coordinate system at the shooting time of the earlier frame, according to the rotation external parameter between that inertial navigation coordinate system and the relative world coordinate system;
determining a second difference in the inertial navigation coordinate system at the shooting time of the earlier frame, according to the difference between the second positions in the second poses to be optimized at the two shooting times and the second attitude in the second pose at the earlier shooting time;
and determining the difference between the first difference and the second difference as the relative position change residual.
In some alternative embodiments, the relative attitude change residual is established by:
determining the relative attitude change residual according to the second attitudes in the second poses to be optimized at the shooting times of two adjacent frames of images and the rotation external parameters between the inertial navigation coordinate system and the relative world coordinate system at those two shooting times.
In some alternative embodiments, the absolute attitude conversion residual of the first attitude is established by:
determining the absolute attitude conversion residual of the first attitude according to the rotation external parameter between the inertial navigation coordinate system and the relative world coordinate system, the second attitude in the second pose to be optimized, and the rotation external parameter between the relative world coordinate system and the absolute world coordinate system.
In some alternative embodiments, the absolute position residual of the observed position of the global positioning system in the absolute world coordinate system is established by:
determining the absolute position residual according to the observed position of the global positioning system in the absolute world coordinate system, the second position and the second attitude in the second pose to be optimized, and the rotation external parameter between the inertial navigation coordinate system and the absolute world coordinate system.
In some alternative embodiments, the absolute attitude residual of the observed heading of the global positioning system in the absolute world coordinate system is established by:
determining the absolute attitude residual according to the observed heading of the global positioning system in the absolute world coordinate system and the second attitude in the second pose to be optimized.
In some optional embodiments, before the converting the first pose to the second pose in the absolute world coordinate system and the iteratively optimizing the rotation external parameters between the relative world coordinate system and the absolute world coordinate system based on the observed position and the observed heading of the first pose and the global positioning system in the absolute world coordinate system, the method further includes:
judging whether the number of times the image data captured by the vision sensor has been fused with the inertial navigation measurement data reaches a preset number of times; or,
and judging whether the number of frames corresponding to the currently acquired first pose reaches a preset number of frames or not.
In a second aspect, an embodiment of the present invention provides a pose optimization device of a multi-sensor fusion positioning system, including:
the fusion module is used for fusing the acquired image data shot by the vision sensor in the preset time period with the inertial navigation measurement data to obtain a first pose under a relative world coordinate system;
and the optimization module is used for carrying out iterative optimization on the first pose converted to the second pose under the absolute world coordinate system and the rotation external parameter between the relative world coordinate system and the absolute world coordinate system according to the observation position and the observation course of the first pose and the global positioning system under the absolute world coordinate system obtained by the fusion module, so as to obtain an optimized value of the second pose and an optimized value of the rotation external parameter, wherein the optimized value of the rotation external parameter is used for fusing the image data shot by the vision sensor in the next time period with the inertial navigation measurement data.
In a third aspect, embodiments of the present invention provide a computer readable storage medium having stored thereon computer instructions that when executed by a processor implement the above-described method of pose optimization for a multisensor fusion positioning system.
In a fourth aspect, an embodiment of the present invention provides a server, including: the system comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes the pose optimization method of the multi-sensor fusion positioning system when executing the program.
According to the pose optimization method of the multi-sensor fusion positioning system, which is provided by the embodiment of the invention, the acquired image data shot by the vision sensor in the preset time period is fused with the inertial navigation measurement data to obtain the first pose under the relative world coordinate system; based on the observation position and the observation course of the first pose and the global positioning system under the absolute world coordinate system, performing iterative optimization on the second pose converted from the first pose to the absolute world coordinate system and the rotation external parameter between the relative world coordinate system and the absolute world coordinate system to obtain an optimized value of the second pose and an optimized value of the rotation external parameter, wherein the optimized value of the rotation external parameter is used for fusing image data shot by a vision sensor in the next time period with inertial navigation measurement data. The beneficial effects of this technical scheme include at least:
(1) The inner layer Visual inertial odometer (Visual-Inertial Odometry, VIO) fuses the image data shot by the Visual sensor with inertial navigation measurement data to obtain a first pose under a relative world coordinate system, the outer layer introduces the observing position and the observing course of the global positioning system under the world coordinate system as constraints, and optimizes the optimized values of the second pose and the rotation external parameters on the basis of the first pose, so that the problem of inaccurate precision caused by local optimization is solved; the optimized value of the rotation external parameter is input into the inner layer VIO fusion of the next round, so that the accuracy of the first pose output by the inner layer VIO is improved, and the problem of unstable optimization caused by inaccurate rotation external parameter estimation between a relative world coordinate system and an absolute world coordinate system is solved; and the observation position and the observation course of the global positioning system under the world coordinate system are introduced as constraints by the outer layer, so that the problem of error accumulation caused by drift of the inner layer can be solved, and the second pose precision is improved.
(2) The inner layer VIO is fused into tight coupling, so that optimized parameters and residual errors are more, and the calculated amount is large; the pose optimization of the outer layer is loose coupling, the optimized parameters of each frame are less, residual errors are relatively less, and the calculated amount is small. Therefore, the double-layer optimization scheme with separated inner and outer layers reduces the calculated amount compared with the double-layer optimization scheme with all optimization works completed in the inner layer.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a flow chart of a pose optimization method of a multi-sensor fusion positioning system according to a first embodiment of the invention;
FIG. 2 is an exemplary diagram of a method for optimizing the pose of a multi-sensor fusion positioning system according to an embodiment of the present invention;
FIG. 3 is a flowchart of a method for establishing a target residual error in a second embodiment of the present invention;
FIG. 4 is a flowchart showing a specific implementation of step S31 in FIG. 3;
FIG. 5 is a schematic diagram of the factor graph optimization principle in an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a pose optimization device of a multi-sensor fusion positioning system in an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In order to solve the problems of low pose optimization precision and long time consumption of a multi-sensor fusion positioning system in the prior art, the embodiment of the invention provides a pose optimization method and a device for the multi-sensor fusion positioning system, which can quickly and accurately perform pose optimization on the multi-sensor fusion positioning system and have small calculated amount.
Example 1
The first embodiment of the invention provides a pose optimization method of a multi-sensor fusion positioning system, the flow of which is shown in figure 1, comprising the following steps:
step S11: and fusing the acquired image data shot by the vision sensor in the preset time period with the inertial navigation measurement data to obtain a first pose under the relative world coordinate system.
According to the shooting time of each frame of image captured by the vision sensor, the inertial navigation measurement data and the image data are first aligned in time and then fused to obtain a first pose in the relative world coordinate system. Specifically, the first pose may be the inertial navigation pose or the vision sensor pose; the two can be converted into each other flexibly through the rotation external parameter between the inertial navigation unit and the vision sensor.
Specifically, the first position comprises the x, y and z coordinate values in the relative world coordinate system; the first attitude comprises the heading angle (Yaw), roll angle (Roll) and pitch angle (Pitch).
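As a concrete illustration of this representation, the sketch below builds such a first pose from a position and heading/pitch/roll angles. It is a hedged example: the z-y-x Euler order and every name in it are assumptions chosen for illustration, not something fixed by the text.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def first_pose(x, y, z, yaw_deg, pitch_deg, roll_deg):
    # Position (x, y, z) in the relative world coordinate system plus an
    # attitude built from Yaw/Pitch/Roll. The z-y-x (yaw, pitch, roll)
    # order is an assumed convention; the text does not specify one.
    position = np.array([x, y, z])
    attitude = Rotation.from_euler(
        "zyx", [yaw_deg, pitch_deg, roll_deg], degrees=True)
    return position, attitude

p, R = first_pose(1.0, 2.0, 0.5, 90.0, 0.0, 0.0)
# With a 90-degree heading, the body x-axis points along the world y-axis.
forward_world = R.apply([1.0, 0.0, 0.0])
```

The rotation external parameter between the inertial navigation unit and the vision sensor would compose with `attitude` in the same way when converting between the two poses.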
The fusion of the image data and the inertial navigation measurement data may be performed by using the prior art, and the specific fusion method is not limited in this embodiment.
Step S12: based on the observation position and the observation course of the first pose and the global positioning system under the absolute world coordinate system, performing iterative optimization on the second pose converted from the first pose to the absolute world coordinate system and the rotation external parameter between the relative world coordinate system and the absolute world coordinate system to obtain an optimized value of the second pose and an optimized value of the rotation external parameter.
The relative world coordinate system may be understood as the initial world coordinate system, i.e. the world coordinate system before attitude optimization (more specifically, before the heading angle is optimized); the absolute world coordinate system is the coordinate system after attitude optimization.
In one embodiment, the second position and the second heading angle in the second pose, together with the rotation external parameter, may be iteratively optimized to obtain an optimized value of the second position, an optimized value of the second heading angle, and an optimized value of the rotation external parameter.
According to the first pose and the observed position and the observed heading of the global positioning system under the absolute world coordinate system, at least one residual error is established, and the established residual errors are combined into a target residual error.
(1) a relative position change residual between the first position in the first pose and the second position in the second pose into which the first pose is converted in the absolute world coordinate system;
(2) a relative attitude change residual between the first attitude in the first pose and the second attitude in the second pose;
(3) an absolute attitude conversion residual of the first attitude;
(4) an absolute position residual of the observed position of the global positioning system in the absolute world coordinate system;
(5) an absolute attitude residual of the observed heading of the global positioning system in the absolute world coordinate system.
The global positioning system, specifically, the global navigation satellite system, may be any satellite navigation system, such as a GPS or a beidou satellite navigation system.
The established target residual comprises a second pose converted from the first pose to the absolute world coordinate system and a rotation external parameter between the relative world coordinate system and the absolute world coordinate system.
The specific determination method of each residual is described in detail in the following second embodiment.
The optimized value of the rotation external parameter obtained by optimization is used for fusing the image data shot by the vision sensor in the next time period with the inertial navigation measurement data, so that the optimization of the first pose is more accurate.
Specifically, the termination condition of the optimization may be that the target residual reaches a set condition, for example that the target residual is smaller than a set residual threshold; or that the target residual is smaller than the set threshold and the number of iterations reaches a preset count; or another preset termination condition for the iterative optimization.
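The termination rules above can be sketched as a generic iteration loop. The toy one-dimensional problem, the names, the threshold and the iteration budget are all illustrative assumptions, not the patent's optimizer.

```python
import numpy as np

def iterate_until_converged(step_fn, x0, residual_fn,
                            residual_threshold=1e-6, max_iters=50):
    # Stop when the target residual drops below the set threshold, or
    # when the preset iteration count is exhausted -- the two stopping
    # rules named in the text.
    x = x0
    for k in range(max_iters):
        x = step_fn(x)
        if np.linalg.norm(residual_fn(x)) < residual_threshold:
            return x, k + 1, True   # converged
    return x, max_iters, False      # iteration budget exhausted

# Toy stand-in for the pose optimization: damped steps toward x = 3.
x_opt, iters, converged = iterate_until_converged(
    step_fn=lambda x: x + 0.5 * (3.0 - x),
    x0=0.0,
    residual_fn=lambda x: np.array([x - 3.0]))
```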
According to the pose optimization method of the multi-sensor fusion positioning system, which is provided by the embodiment of the invention, the acquired image data shot by the vision sensor in the preset time period is fused with the inertial navigation measurement data to obtain the first pose under the relative world coordinate system; based on the observation position and the observation course of the first pose and the global positioning system under the absolute world coordinate system, performing iterative optimization on the second pose converted from the first pose to the absolute world coordinate system and the rotation external parameter between the relative world coordinate system and the absolute world coordinate system to obtain an optimized value of the second pose and an optimized value of the rotation external parameter, wherein the optimized value of the rotation external parameter is used for fusing image data shot by a vision sensor in the next time period with inertial navigation measurement data. The beneficial effects of this technical scheme include at least:
(1) The inner layer Visual inertial odometer (Visual-Inertial Odometry, VIO) fuses the image data shot by the Visual sensor with inertial navigation measurement data to obtain a first pose under a relative world coordinate system, the outer layer introduces the observing position and the observing course of the global positioning system under the world coordinate system as constraints, and optimizes the optimized values of the second pose and the rotation external parameters on the basis of the first pose, so that the problem of inaccurate precision caused by local optimization is solved; the optimized value of the rotation external parameter is input into the inner layer VIO fusion of the next round, so that the accuracy of the first pose output by the inner layer VIO is improved, and the problem of unstable optimization caused by inaccurate rotation external parameter estimation between a relative world coordinate system and an absolute world coordinate system is solved; and the observation position and the observation course of the global positioning system under the world coordinate system are introduced as constraints by the outer layer, so that the problem of error accumulation caused by drift of the inner layer can be solved, and the second pose precision is improved.
(2) The inner layer VIO is fused into tight coupling, so that optimized parameters and residual errors are more, and the calculated amount is large; the pose optimization of the outer layer is loose coupling, the optimized parameters of each frame are less, residual errors are relatively less, and the calculated amount is small. Therefore, the double-layer optimization scheme with separated inner and outer layers reduces the calculated amount compared with the double-layer optimization scheme with all optimization works completed in the inner layer.
Referring to fig. 2, the above steps can be summarized as follows. The inner layer performs VIO fusion: the measurement data of the vision sensor (e.g. Camera) and of the inertial measurement unit (Inertial Measurement Unit, IMU, "inertial navigation" for short), combined with the speed and longitude/latitude information measured by the global positioning system (e.g. GPS), yield the first pose estimate (Local estimate) in the relative world coordinate system. The first pose (Local Position) is then taken as input and, combined with the longitude/latitude information measured by the global positioning system, the outer layer performs pose optimization (Global Pose Graph optimization) in the absolute world coordinate system and outputs the pose (Global Position) in the absolute world coordinate system; at the same time, the rotation external parameter between the relative and absolute world coordinate systems (Word Frame R) is returned to the Local Estimator for VIO fusion in the next time period. The constraint from the global positioning system measurement data solves the inner-layer VIO drift problem, realizes global optimization, and improves pose accuracy.
In one embodiment, before executing step S12, determining whether the number of times of fusion between the image data captured by the vision sensor and the inertial measurement data reaches a preset number of times; or judging whether the number of frames corresponding to the currently acquired first pose reaches a preset number of frames or not.
The image data shot by the vision sensor and the inertial navigation measurement data are fused, namely VIO fusion is tightly coupled, optimized parameters and residual errors are more, and the calculated amount is large, so that the VIO fusion at the inner layer is a small window, and the number of frames optimized at each time is less; the pose optimization of the outer layer, namely the second pose optimization is loose coupling, the optimized parameters and residual errors of each frame are relatively less, the calculated amount is small, the outer layer is a large window, and the number of frames optimized each time is more. Therefore, after the inner layer fusion is carried out for a plurality of times, the outer layer can be optimized again according to the result of the inner layer fusion for a plurality of times.
If the judgment is yes, step S12 is executed; if not, the image data captured by the vision sensor and the inertial navigation measurement data in the next time period continue to be acquired, and step S11 is executed.
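The judgment step could be organized as below. The class, counters, and limit values are hypothetical, chosen only to show the two alternative trigger conditions (a preset fusion count, or a preset frame count).

```python
class OuterLayerTrigger:
    # Decide when the outer pose-graph optimization should run: after a
    # preset number of inner VIO fusions, or once the accumulated first
    # poses cover a preset number of frames. Defaults are illustrative.
    def __init__(self, fusion_limit=10, frame_limit=100):
        self.fusion_limit = fusion_limit
        self.frame_limit = frame_limit
        self.fusions = 0
        self.frames = 0

    def record_inner_fusion(self, new_frames):
        self.fusions += 1
        self.frames += new_frames

    def should_run_outer(self):
        return (self.fusions >= self.fusion_limit
                or self.frames >= self.frame_limit)

trigger = OuterLayerTrigger(fusion_limit=3, frame_limit=100)
trigger.record_inner_fusion(new_frames=10)
trigger.record_inner_fusion(new_frames=10)
ready_early = trigger.should_run_outer()   # 2 fusions, 20 frames: not yet
trigger.record_inner_fusion(new_frames=10)
ready_now = trigger.should_run_outer()     # 3rd fusion reaches the limit
```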
Example two
In a pose optimization process of a multi-sensor fusion positioning system, a specific implementation of a target residual error establishment method to be optimized is provided in a second embodiment of the present invention, and a flow chart of the method is shown in fig. 3, and the method includes the following steps:
step S31: and establishing a relative position change residual error of the first position in the first pose and the second position in the second pose.
Referring to fig. 4, the method comprises the following steps:
step S311: and converting the difference value of the first position of the shooting moment of the adjacent two frames of images into a first difference value under the inertial navigation coordinate system of the shooting moment of the previous frame of images according to the rotation external parameters of the inertial navigation coordinate system of the shooting moment of the previous frame of images and the relative world coordinate system in the two frames.
Step S312: and determining a second difference value under the inertial navigation coordinate system of the shooting moment of the previous frame image according to the difference value of the second position in the second pose to be optimized of the shooting moment of the two frames of images and the second pose in the second pose of the shooting moment of the previous frame image.
Step S313: and determining the difference between the first difference and the second difference as a relative position change residual error.
Specifically, the relative position change residual between the first position in the first pose and the second position in the second pose may be written as:

$$r_{\Delta p,k} = \left(R^{w'}_{b_k}\right)^{T}\left(p^{w'}_{b_{k+1}} - p^{w'}_{b_k}\right) - \left(R^{w}_{b_k}\right)^{T}\left(p^{w}_{b_{k+1}} - p^{w}_{b_k}\right)$$

where $r_{\Delta p,k}$ is the relative position change residual of the first position between the k-th and (k+1)-th frame times; $R^{w'}_{b_k}$ is the rotation external parameter between the inertial navigation coordinate system at the k-th frame time and the relative world coordinate system $w'$; $p^{w'}_{b_k}$ is the first position at the k-th frame time, i.e. the inertial navigation position in the relative world coordinate system; $p^{w}_{b_k}$ is the second position at the k-th frame time, i.e. the inertial navigation position to be optimized in the absolute world coordinate system $w$; $R^{w}_{b_k}$ is the second attitude at the k-th frame time, i.e. the inertial navigation attitude to be optimized in the absolute world coordinate system; and $k = 0, 1, \ldots, n$, where $n+1$ is the total number of image frames.
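Steps S311 to S313 can be sketched numerically as follows. The variable names mirror the symbols in the text (relative world frame, absolute world frame, inertial navigation frame at frame k), but the block is an illustrative assumption, not the patent's implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def relative_position_residual(R_wp_bk, p_wp_k, p_wp_k1,
                               R_w_bk, p_w_k, p_w_k1):
    # S311: rotate the first-position difference into the inertial
    # navigation frame at the earlier shooting time.
    first_diff = R_wp_bk.T @ (p_wp_k1 - p_wp_k)
    # S312: same with the second positions to be optimized, using the
    # second attitude at the earlier shooting time.
    second_diff = R_w_bk.T @ (p_w_k1 - p_w_k)
    # S313: the residual is the difference of the two differences.
    return first_diff - second_diff

# Consistency check: if the absolute-frame pose is just the relative-frame
# pose transported by one fixed world-to-world rotation, the residual is 0.
R_ww = Rotation.from_euler("z", 30, degrees=True).as_matrix()
R_wp_bk = Rotation.from_euler("zyx", [10, 5, -3], degrees=True).as_matrix()
p_k = np.array([0.0, 0.0, 0.0])
p_k1 = np.array([1.0, 2.0, 0.1])
res = relative_position_residual(
    R_wp_bk, p_k, p_k1, R_ww @ R_wp_bk, R_ww @ p_k, R_ww @ p_k1)
```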
The k-th frame time described above and later is the shooting time at which the vision sensor shoots the k-th frame image.
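As an illustration of steps S311 to S313, the relative position change residual can be sketched numerically as follows (a minimal NumPy sketch; the function and variable names are ours, not the patent's):

```python
import numpy as np

def yaw_rot(yaw):
    """Rotation about the z axis for a heading angle in radians."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def relative_position_residual(R_v_k, p_v_k, p_v_k1, R_w_k, p_w_k, p_w_k1):
    """Relative position change residual between frames k and k+1.

    R_v_k, p_v_k, p_v_k1: VIO attitude and first positions (relative world frame).
    R_w_k, p_w_k, p_w_k1: second attitude and second positions to be optimized
    (absolute world frame). Both displacements are rotated into the inertial
    navigation frame of the earlier image before differencing.
    """
    first_diff = R_v_k.T @ (p_v_k1 - p_v_k)    # step S311
    second_diff = R_w_k.T @ (p_w_k1 - p_w_k)   # step S312
    return first_diff - second_diff            # step S313
```

When the second poses agree with the VIO trajectory frame to frame, the residual vanishes regardless of the common attitude.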
Step S32: and establishing a relative gesture change residual error of the first gesture in the first gestures and the second gesture in the second gestures.
In one embodiment, the relative attitude change residual may be determined according to the second attitudes in the second poses to be optimized at the shooting moments of two adjacent frames of images, and the rotation extrinsics between the inertial navigation coordinate system and the relative world coordinate system at the two shooting moments respectively. Specifically, the relative attitude change residual $r_\phi^k$ between the first attitude in the first pose and the second attitude in the second pose may be expressed as:

$$r_\phi^k = \left[\left(R_k^{w}\right)^{T} R_{k+1}^{w}\right]^{-1}\left[\left(\hat{R}_k^{w_0}\right)^{T}\hat{R}_{k+1}^{w_0}\right]$$

i.e. the rotation error between the frame-to-frame attitude change measured in the relative world coordinate system and the frame-to-frame attitude change implied by the second attitudes to be optimized.
step S33: and establishing an absolute gesture conversion residual error of the first gesture.
In one embodiment, the absolute attitude conversion residual of the first attitude may be determined according to the rotation extrinsic between the inertial navigation coordinate system and the relative world coordinate system, the second attitude in the second pose to be optimized, and the rotation extrinsic between the relative world coordinate system and the absolute world coordinate system:

$$r_{\phi v}^k = \left(R_{w_0}^{w}\,\hat{R}_k^{w_0}\right)^{-1} R_k^{w}$$

wherein $r_{\phi v}^k$ is the absolute attitude conversion residual of the first attitude at the kth frame moment, and $R_{w_0}^{w}$ is the rotation extrinsic between the relative world coordinate system and the absolute world coordinate system to be optimized.
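This residual compares the VIO attitude, mapped through the extrinsic being optimized, against the second attitude; a sketch reporting the scalar rotation-angle error (the names and the scalar simplification are ours):

```python
import numpy as np

def rotation_angle(R):
    """Magnitude in radians of the rotation encoded by matrix R."""
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

def absolute_attitude_conversion_residual(R_w_w0, R_v_k, R_w_k):
    """Angle between the VIO attitude rotated into the absolute world frame
    by the extrinsic R_w_w0 and the second attitude R_w_k being optimized."""
    return rotation_angle((R_w_w0 @ R_v_k).T @ R_w_k)
```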
Step S34: an absolute position residual error of an observed position of the global positioning system under an absolute world coordinate system is established.
In one embodiment, the absolute position residual of the observed position of the global positioning system in the absolute world coordinate system may be determined according to that observed position, the second position and the second attitude in the second pose to be optimized, and the extrinsic between the inertial navigation coordinate system and the global positioning system antenna:

$$r_{pw}^k = \hat{p}_k^{g} - \left(p_k^{w} + R_k^{w}\, p_{g}^{b}\right)$$

wherein $r_{pw}^k$ is the absolute position residual of the observed position of the global positioning system at the kth frame moment in the absolute world coordinate system; $p_{g}^{b}$ is the extrinsic between the inertial navigation coordinate system and the global positioning system antenna, i.e. the extrinsic describing the relative position between the antenna of the global positioning system and inertial navigation; and $\hat{p}_k^{g}$ is the observed position of the global positioning system at the kth frame moment in the absolute world coordinate system.
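The antenna lever arm enters the absolute position residual as follows (a minimal sketch; the sign convention is one plausible choice, not taken from the patent):

```python
import numpy as np

def absolute_position_residual(p_gps, p_w_k, R_w_k, lever_arm_b):
    """Observed GPS position minus the antenna position predicted from the
    optimized pose; lever_arm_b is the antenna offset expressed in the
    inertial navigation (body) frame."""
    predicted_antenna = p_w_k + R_w_k @ lever_arm_b
    return p_gps - predicted_antenna
```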
Step S35: and establishing an absolute attitude residual error of an observed course of the global positioning system under an absolute world coordinate system.
In one embodiment, the absolute attitude residual of the observed heading of the global positioning system in the absolute world coordinate system may be determined according to that observed heading and the second attitude in the second pose to be optimized.

Because the Z value optimized by VIO is inaccurate and the Z value observed by the global positioning system is not smooth, the Z value easily couples with the Roll angle and the Pitch angle, which makes the estimates of Roll and Pitch inaccurate. Therefore, the overall rotation optimization can be decomposed online: only the heading angle Yaw in the attitude is optimized, and the attitude is assembled by combining the Pitch angle and the Roll angle from the VIO result. This avoids the inaccurate estimation of Roll and Pitch caused by error coupling and improves the accuracy of attitude optimization. The absolute attitude residual of the observed heading of the global positioning system in the absolute world coordinate system can therefore be established as:

$$r_{\phi w}^k = \hat{\psi}_k^{g} - \psi_k^{w}$$

wherein $r_{\phi w}^k$ is the absolute attitude residual of the observed heading of the global positioning system at the kth frame moment in the absolute world coordinate system; $\psi_k^{w}$ is the heading angle of the second attitude at the kth frame moment in the absolute world coordinate system; and $\hat{\psi}_k^{g}$ is the observed heading (angle) of the global positioning system at the kth frame moment in the absolute world coordinate system.
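Comparing only the heading requires care with angle wrap-around; a sketch (the wrapping helper is ours):

```python
import numpy as np

def wrap_angle(a):
    """Wrap an angle difference into the interval [-pi, pi)."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def absolute_heading_residual(yaw_gps, yaw_w_k):
    """Observed GPS heading minus the optimized heading; Roll and Pitch are
    deliberately left out, as described above."""
    return wrap_angle(yaw_gps - yaw_w_k)
```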
In summary, the parameters to be optimized in the above residuals are: the second pose of each frame, and the rotation extrinsic $R_{w_0}^{w}$ between the relative world coordinate system and the absolute world coordinate system. For the second pose of each frame, e.g. the second pose of the kth frame, only the second position $p_k^{w}$ and the heading angle $\psi_k^{w}$ need to be optimized.
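Since only the heading angle is optimized, the final attitude of each frame can be assembled from the optimized Yaw and the Pitch/Roll taken from the VIO result; a sketch assuming a Z-Y-X Euler convention (the convention is our assumption, not stated in the patent):

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def combined_attitude(yaw_optimized, pitch_vio, roll_vio):
    """Second attitude assembled from the optimized heading and the
    Pitch/Roll angles taken unchanged from the VIO result."""
    return rot_z(yaw_optimized) @ rot_y(pitch_vio) @ rot_x(roll_vio)
```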
Steps S31 to S35 need not be performed in the order listed; any one or more of them may be performed first, or they may be performed simultaneously.
Step S36: and combining the established residuals into a target residual.
The combined target residual $\chi$ may be:

$$\chi = \sum_{k}\left[\left\|r_p^k\right\|_{\Omega_p^k}^{2} + \left\|r_\phi^k\right\|_{\Omega_\phi^k}^{2} + \left\|r_{\phi v}^k\right\|_{\Omega_{\phi v}^k}^{2} + \rho\!\left(\left\|r_{pw}^k\right\|_{\Omega_{pw}^k}^{2}\right) + \rho\!\left(\left\|r_{\phi w}^k\right\|_{\Omega_{\phi w}^k}^{2}\right)\right]$$

wherein $\Omega_p^k$ is the first position covariance at the kth frame moment; $\Omega_\phi^k$ is the first attitude covariance at the kth frame moment; $\Omega_{\phi v}^k$ is the absolute attitude covariance of the first attitude at the kth frame moment; $\Omega_{\phi w}^k$ is the absolute attitude covariance of the global positioning system at the kth frame moment, i.e. the absolute attitude covariance of the observed heading of the global positioning system in the absolute world coordinate system; $\Omega_{pw}^k$ is the absolute position covariance of the global positioning system at the kth frame moment, i.e. the absolute position covariance of the observed position of the global positioning system in the absolute world coordinate system; and $\rho(x)$ denotes a robust kernel by which outliers in $x$ are culled.
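How the weighted terms and the outlier-culling kernel ρ might combine into a scalar cost (a sketch; the Huber kernel and its delta value stand in for whatever robust kernel an implementation actually uses):

```python
import numpy as np

def huber(sq_norm, delta=1.0):
    """Huber-style robust kernel: quadratic for small residuals,
    linear growth for outliers."""
    if sq_norm <= delta ** 2:
        return sq_norm
    return 2.0 * delta * np.sqrt(sq_norm) - delta ** 2

def total_cost(residual_terms):
    """residual_terms: iterable of (residual_vector, information_matrix,
    robust_flag) triples; the information matrices play the role of the
    inverse covariances Omega above."""
    cost = 0.0
    for r, omega, robust in residual_terms:
        sq = float(r @ omega @ r)   # squared Mahalanobis norm
        cost += huber(sq) if robust else sq
    return cost
```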
The embodiment of the invention provides a factor graph optimization method for determining pose based on 4DOF and absolute pose constraints. Specifically, 4DOF refers to x, y and z in the position plus the heading angle in the attitude, and the absolute pose constraint refers to using the observation data of the global positioning system in the absolute world coordinate system as a constraint. The principle of the factor graph optimization is shown in fig. 5, wherein X1-X6 represent the first pose of each frame in the relative world coordinate system (the subscript is the frame number), obtained by fusing the image data shot by the vision sensor with the inertial navigation measurement data, and $R_{w_0}^{w}$ is the rotation extrinsic between the relative world coordinate system and the absolute world coordinate system to be optimized. Factor (1) represents the relative position change residual of the first position in the first pose between two adjacent frames and the relative attitude change residual of the first attitude in the first pose; factor (2) represents the absolute attitude conversion residual of the first attitude; factor (3) represents the absolute position residual of the observed position and the absolute attitude residual of the observed heading of the global positioning system in the absolute world coordinate system. By iteratively optimizing the total residual composed of these factors, the conversion from the pose in the relative world coordinate system (Local Coordination) to the pose in the absolute world coordinate system (Global Coordination) can be realized.
Based on the inventive concept of the present invention, an embodiment of the present invention further provides a pose optimization device of a multi-sensor fusion positioning system, where the structure of the pose optimization device is shown in fig. 6, and the pose optimization device includes:
the fusion module 61 is configured to fuse the acquired image data captured by the vision sensor in the preset time period with the inertial navigation measurement data to obtain a first pose in a relative world coordinate system;
the optimization module 62 is configured to perform iterative optimization on the second pose converted from the first pose to the absolute world coordinate system and the rotation external parameter between the relative world coordinate system and the absolute world coordinate system, according to the first pose obtained by the fusion module 61 and the observed position and observed heading of the global positioning system in the absolute world coordinate system, so as to obtain an optimized value of the second pose and an optimized value of the rotation external parameter, where the optimized value of the rotation external parameter is used for fusion of the image data captured by the vision sensor in the next time period with the inertial navigation measurement data.
In one embodiment, the optimizing module 62 performs iterative optimization on the second pose converted from the first pose to the absolute world coordinate system and the rotation external parameter between the relative world coordinate system and the absolute world coordinate system, so as to obtain an optimized value of the second pose and an optimized value of the rotation external parameter, which are specifically used for:
and iteratively optimizing a second position and a second course angle in the second pose and the rotation external parameter to obtain an optimized value of the second position, an optimized value of the second course angle and an optimized value of the rotation external parameter.
In one embodiment, the optimizing module 62 performs iterative optimization on the first pose converted to the second pose under the absolute world coordinate system and the rotation external parameter between the relative world coordinate system and the absolute world coordinate system based on the observed position and the observed heading of the first pose and the global positioning system under the absolute world coordinate system, which is specifically used for:
according to the first pose and the observed position and observed heading of the global positioning system in the absolute world coordinate system, establish at least one of the following residuals, and combine the established residuals into a target residual:

a relative position change residual between a first position in the first pose and a second position in the second pose converted from the first pose to the absolute world coordinate system; a relative attitude change residual between a first attitude in the first pose and a second attitude in the second pose converted from the first pose to the absolute world coordinate system; an absolute attitude conversion residual of the first attitude; an absolute position residual of the observed position of the global positioning system in the absolute world coordinate system; and an absolute attitude residual of the observed heading of the global positioning system in the absolute world coordinate system.
In one embodiment, the apparatus further includes a judging module 63. Before the optimizing module 62 performs iterative optimization on the second pose and the rotation external parameters between the relative world coordinate system and the absolute world coordinate system based on the first pose and the observed position and observed heading of the global positioning system in the absolute world coordinate system, the judging module 63 is configured to:
judging whether the fusion times of the image data shot by the vision sensor and the inertial navigation measurement data reach preset times or not; or judging whether the number of frames corresponding to the currently acquired first pose reaches a preset number of frames or not.
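The judging module's two trigger conditions reduce to a simple check; a sketch (the threshold values are placeholders, since the patent does not fix them):

```python
def should_trigger_optimization(fusion_count, frame_count,
                                fusion_threshold=20, frame_threshold=20):
    """Start the iterative optimization once the number of VIO fusions or
    the number of accumulated first-pose frames reaches its preset value."""
    return fusion_count >= fusion_threshold or frame_count >= frame_threshold
```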
The specific manner in which the various modules perform operations in the apparatus of the above embodiment has been described in detail in the embodiments of the method, and is not repeated here.
Based on the inventive concept of the present invention, the embodiments of the present invention further provide a computer readable storage medium having stored thereon computer instructions that when executed by a processor implement the above-described pose optimization method of the multi-sensor fusion positioning system.
Based on the inventive concept of the present invention, an embodiment of the present invention further provides a server, including: the system comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes the pose optimization method of the multi-sensor fusion positioning system when executing the program.
Unless specifically stated otherwise, terms such as processing, computing, calculating, determining, displaying, or the like, may refer to an action and/or process of one or more processing or computing systems, or similar devices, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the processing system's registers or memories into other data similarly represented as physical quantities within the processing system's memories, registers or other such information storage, transmission or display devices. Information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
It should be understood that the specific order or hierarchy of steps in the processes disclosed are examples of exemplary approaches. Based on design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, invention lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate preferred embodiment of this invention.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. The processor and the storage medium may reside as discrete components in a user terminal.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. These software codes may be stored in memory units and executed by processors. The memory unit may be implemented within the processor or external to the processor, in which case it can be communicatively coupled to the processor via various means as is known in the art.
The foregoing description includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, as used in the specification or claims, the term "comprises" is intended to be inclusive in a manner similar to the term "comprising" as interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification or claims is intended to mean a "non-exclusive or". The terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.

Claims (11)

1. A pose optimization method of a multi-sensor fusion positioning system comprises the following steps:
fusing the acquired image data shot by the vision sensor in the preset time period with inertial navigation measurement data to obtain a first pose under a relative world coordinate system;
based on the observation position and the observation course of the first pose and the global positioning system under the absolute world coordinate system, performing iterative optimization on the first pose converted to a second pose under the absolute world coordinate system and a rotation external parameter between the relative world coordinate system and the absolute world coordinate system to obtain an optimized value of the second pose and an optimized value of the rotation external parameter, wherein the optimized value of the rotation external parameter is used for fusion of image data shot by a vision sensor in the next time period and inertial navigation measurement data.
2. The method of claim 1, wherein the performing iterative optimization on the second pose converted from the first pose to the absolute world coordinate system and the rotation external parameter between the relative world coordinate system and the absolute world coordinate system to obtain the optimized value of the second pose and the optimized value of the rotation external parameter specifically comprises:
and iteratively optimizing a second position and a second course angle in the second pose and the rotation external parameter to obtain an optimized value of the second position, an optimized value of the second course angle and an optimized value of the rotation external parameter.
3. The method according to claim 1, wherein the iterative optimization of the first pose to the second pose in the absolute world coordinate system and the rotation external parameters between the relative world coordinate system and the absolute world coordinate system based on the observed position and the observed heading of the first pose and the global positioning system in the absolute world coordinate system specifically comprises:
according to the first pose and the observed position and the observed heading of the global positioning system in an absolute world coordinate system, at least one residual error is established, and the established residual errors are combined into a target residual error:
a relative position change residual of a first position in the first pose and a second position in a second pose in which the first pose is converted to an absolute world coordinate system;
a relative attitude change residual between a first attitude in the first pose and a second attitude in the second pose converted from the first pose to the absolute world coordinate system;
an absolute pose conversion residual of the first pose;
absolute position residuals of an observed position of the global positioning system in an absolute world coordinate system;
absolute pose residuals of the observed heading of the global positioning system in an absolute world coordinate system.
4. A method according to claim 3, said relative position change residual being established by:
converting the difference value of the first position of the shooting moment of the adjacent two frames of images into a first difference value under the inertial navigation coordinate system of the shooting moment of the previous frame of images according to the rotation external parameters of the inertial navigation coordinate system of the shooting moment of the previous frame of images and the relative world coordinate system in the two frames;
determining a second difference value under an inertial navigation coordinate system of the shooting moment of the previous frame of image according to a difference value of a second position in a second pose to be optimized of the shooting moment of the two frames of image and a second pose in the second pose of the shooting moment of the previous frame of image;
and determining a difference value between the first difference value and the second difference value as a relative position change residual error.
5. A method according to claim 3, said relative attitude change residual being established by:
and determining a relative attitude change residual error according to a second attitude in a second attitude to be optimized at the shooting time of two adjacent frames of images and rotation external parameters of an inertial navigation coordinate system and a relative world coordinate system at the shooting time of the two frames of images respectively.
6. A method according to claim 3, the absolute pose change residual of the first pose being established by:
and determining an absolute attitude change residual error of the first attitude according to the rotation external parameters of the inertial navigation coordinate system and the relative world coordinate system, the second attitude in the second attitude to be optimized and the rotation external parameters between the relative world coordinate system and the absolute world coordinate system.
7. A method according to claim 3, the absolute position residual of the global positioning system's observed position in the absolute world coordinate system being established by:
and determining an absolute position residual error of the observed position of the global positioning system under the absolute world coordinate system according to the observed position of the global positioning system under the absolute world coordinate system, the second position and the second posture in the second posture to be optimized and the rotation external parameters between the inertial navigation coordinate system and the absolute world coordinate system.
8. A method according to claim 3, wherein the absolute pose residual of the global positioning system's observed heading in the absolute world coordinate system is established by:
and determining an absolute posture residual error of the observed heading of the global positioning system under the absolute world coordinate system according to the observed heading of the global positioning system under the absolute world coordinate system and the second posture in the second posture to be optimized.
9. The method of any one of claims 1 to 8, wherein before the converting the first pose to the second pose in the absolute world coordinate system and the iteratively optimizing the rotation external parameter between the relative world coordinate system and the absolute world coordinate system based on the observed position and the observed heading of the first pose and the global positioning system in the absolute world coordinate system, further comprising:
judging whether the fusion times of the image data shot by the vision sensor and the inertial navigation measurement data reach preset times or not; or,
and judging whether the number of frames corresponding to the currently acquired first pose reaches a preset number of frames or not.
10. A pose optimization device of a multi-sensor fusion positioning system, comprising:
the fusion module is used for fusing the acquired image data shot by the vision sensor in the preset time period with the inertial navigation measurement data to obtain a first pose under a relative world coordinate system;
and the optimization module is used for carrying out iterative optimization on the first pose converted to the second pose under the absolute world coordinate system and the rotation external parameter between the relative world coordinate system and the absolute world coordinate system according to the observation position and the observation course of the first pose and the global positioning system under the absolute world coordinate system obtained by the fusion module, so as to obtain an optimized value of the second pose and an optimized value of the rotation external parameter, wherein the optimized value of the rotation external parameter is used for fusing the image data shot by the vision sensor in the next time period with the inertial navigation measurement data.
11. A computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the pose optimization method of a multisensor fusion positioning system of any one of claims 1 to 9.
CN202011060481.7A 2020-09-30 2020-09-30 Pose optimization method and device of multi-sensor fusion positioning system Active CN114322996B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011060481.7A CN114322996B (en) 2020-09-30 2020-09-30 Pose optimization method and device of multi-sensor fusion positioning system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011060481.7A CN114322996B (en) 2020-09-30 2020-09-30 Pose optimization method and device of multi-sensor fusion positioning system

Publications (2)

Publication Number Publication Date
CN114322996A CN114322996A (en) 2022-04-12
CN114322996B true CN114322996B (en) 2024-03-19

Family

ID=81011228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011060481.7A Active CN114322996B (en) 2020-09-30 2020-09-30 Pose optimization method and device of multi-sensor fusion positioning system

Country Status (1)

Country Link
CN (1) CN114322996B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113375665B (en) * 2021-06-18 2022-12-02 西安电子科技大学 Unmanned aerial vehicle pose estimation method based on multi-sensor elastic coupling

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016187757A1 (en) * 2015-05-23 2016-12-01 SZ DJI Technology Co., Ltd. Sensor fusion using inertial and image sensors
CN107869989A (en) * 2017-11-06 2018-04-03 东北大学 A kind of localization method and system of the fusion of view-based access control model inertial navigation information
CN109029433A (en) * 2018-06-28 2018-12-18 东南大学 Join outside the calibration of view-based access control model and inertial navigation fusion SLAM on a kind of mobile platform and the method for timing
CN109993113A (en) * 2019-03-29 2019-07-09 东北大学 A kind of position and orientation estimation method based on the fusion of RGB-D and IMU information
CN110514225A (en) * 2019-08-29 2019-11-29 中国矿业大学 The calibrating external parameters and precise positioning method of Multi-sensor Fusion under a kind of mine
CN110706279A (en) * 2019-09-27 2020-01-17 清华大学 Global position and pose estimation method based on information fusion of global map and multiple sensors
WO2020087846A1 (en) * 2018-10-31 2020-05-07 东南大学 Navigation method based on iteratively extended kalman filter fusion inertia and monocular vision
CN111207774A (en) * 2020-01-17 2020-05-29 山东大学 Method and system for laser-IMU external reference calibration

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110012827A1 (en) * 2009-07-14 2011-01-20 Zhou Ye Motion Mapping System


Also Published As

Publication number Publication date
CN114322996A (en) 2022-04-12

Similar Documents

Publication Publication Date Title
CN105698765B (en) Object pose method under double IMU monocular visions measurement in a closed series noninertial systems
CN108592950B (en) Calibration method for relative installation angle of monocular camera and inertial measurement unit
CN102289804B (en) System and method for three dimensional video stabilisation by fusing orientation sensor readings with image alignment estimates
CN112781586B (en) Pose data determination method and device, electronic equipment and vehicle
CN110187375A (en) A kind of method and device improving positioning accuracy based on SLAM positioning result
CN109059907A (en) Track data processing method, device, computer equipment and storage medium
CN113701745B (en) External parameter change detection method, device, electronic equipment and detection system
CN109507706B (en) GPS signal loss prediction positioning method
CN110032201A (en) A method of the airborne visual gesture fusion of IMU based on Kalman filtering
CN113933818A (en) Method, device, storage medium and program product for calibrating laser radar external parameter
CN114322996B (en) Pose optimization method and device of multi-sensor fusion positioning system
US20220057517A1 (en) Method for constructing point cloud map, computer device, and storage medium
CN109470269B (en) Calibration method, calibration equipment and calibration system for space target measuring mechanism
CN116481543A (en) Multi-sensor fusion double-layer filtering positioning method for mobile robot
CN114519671B (en) Unmanned aerial vehicle remote sensing image dynamic rapid splicing method
CN115560744A (en) Robot, multi-sensor-based three-dimensional mapping method and storage medium
CN114001730B (en) Fusion positioning method, fusion positioning device, computer equipment and storage medium
CN114019954B (en) Course installation angle calibration method, device, computer equipment and storage medium
CN115906641A (en) IMU gyroscope random error compensation method and device based on deep learning
CN111678515A (en) Device state estimation method and device, electronic device and storage medium
CN110954933B (en) Mobile platform positioning device and method based on scene DNA
CN116499455B (en) Positioning method and device
CN115451958B (en) Camera absolute attitude optimization method based on relative rotation angle
CN113628279B (en) Panoramic vision SLAM mapping method
CN116753947A (en) Multi-sensor-based coordinate system relative matrix determination method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant