CN116184430B - Pose estimation algorithm fused by laser radar, visible light camera and inertial measurement unit - Google Patents

Pose estimation algorithm fused by laser radar, visible light camera and inertial measurement unit

Info

Publication number
CN116184430B
CN116184430B (application CN202310144825.XA)
Authority
CN
China
Prior art keywords
pose
camera
laser radar
odometer
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310144825.XA
Other languages
Chinese (zh)
Other versions
CN116184430A (en)
Inventor
刘俊伟
王娟
唐莉萍
王金兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Tairui Shuchuang Technology Co ltd
Original Assignee
Hefei Tairui Shuchuang Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Tairui Shuchuang Technology Co ltd filed Critical Hefei Tairui Shuchuang Technology Co ltd
Priority to CN202310144825.XA priority Critical patent/CN116184430B/en
Publication of CN116184430A publication Critical patent/CN116184430A/en
Application granted granted Critical
Publication of CN116184430B publication Critical patent/CN116184430B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01S — RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 — Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/86 — Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 — Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/10 — Navigation by using measurements of speed or acceleration
    • G01C 21/12 — Dead reckoning: measurements executed aboard the object being navigated
    • G01C 21/16 — Inertial navigation: by integrating acceleration or speed
    • G01C 21/165 — Inertial navigation combined with non-inertial navigation instruments
    • G01C 21/1652 — Inertial navigation combined with ranging devices, e.g. LIDAR or RADAR
    • G01C 21/1656 — Inertial navigation combined with passive imaging devices, e.g. cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a pose estimation algorithm that fuses a laser radar, a camera and an inertial measurement unit. Three independently running, tightly coupled odometers (LCO, LIO, CIO) are constructed, and their outputs are then fused in a loosely coupled architecture. The algorithm therefore combines the high accuracy of tight coupling with the strong robustness of loose coupling, and can improve the pose estimation capability of multiple sensors.

Description

Pose estimation algorithm fused by laser radar, visible light camera and inertial measurement unit
Technical Field
The invention relates to the field of multi-sensor fusion and pose estimation, in particular to a pose estimation method for fusion of a laser radar, a visible light camera and an inertial measurement unit.
Background
Traditional multi-sensor fusion and pose estimation techniques find it difficult to combine accuracy and robustness. They fall into two categories. (1) Loosely coupled fusion: the sensors are mutually independent, each independently constructs a residual function and computes a pose result by optimizing it, and the poses are then averaged with weights. The advantage is high robustness: when the raw data of one sensor is missing or its pose solution has a large error, the fused pose remains an acceptable result. The drawback is poor accuracy, since a sensor with low measurement accuracy degrades the contribution of sensors with high measurement accuracy to the pose solution. (2) Tightly coupled fusion: the raw data of multiple sensors are combined into a single residual function, which is optimized to compute the pose result. The advantage is that the joint optimization is nonlinear and processes the observations directly, so the pose obtained after optimizing the residual is highly accurate. The drawbacks are low efficiency, because there are many residual terms, and poor robustness, because when one sensor's data is missing or erroneous the residual function becomes wrong or even invalid and no effective pose can be solved. To overcome the defects of both methods while absorbing their advantages, the invention provides an algorithm architecture that tightly couples and fuses a laser radar, a visible light camera and an inertial measurement unit into a laser radar camera odometer (LCO), a laser radar inertial odometer (LIO) and a camera inertial odometer (CIO), and then loosely couples and fuses the three odometers. The algorithm combines the high accuracy of tight coupling with the strong robustness of loose coupling, and can improve multi-sensor pose estimation.
US20100017128A1 discloses a system and method for determining vehicle dynamics, in which radar, lidar and/or camera signals are used to correct the dynamic estimation of the vehicle in real time.
Chinese patent document CN101241011A describes a device and a method for high-precision positioning and pose determination on a laser radar platform. Referring to fig. 1, the device comprises a metric camera unit, an inertial measurement unit, a high-precision position and pose measurement and resolving unit, an aerial position measurement unit, a laser scanning unit, a three-dimensional coordinate acquisition unit, and the like. Aerial digital images are acquired along the flight track of the laser radar platform by the metric camera, relative orientation is computed on the acquired images to obtain low-frequency, high-precision pose parameters of the laser radar platform during the flight, high-frequency, low-precision pose parameters are obtained from the inertial measurement unit, and the latter are corrected with the low-frequency, high-precision values obtained by relative orientation to yield high-precision pose parameter information of the platform. This reduces the cost of a high-precision position and attitude measurement system, improves its precision, and meets the high-precision measurement requirements of a Lidar system.
Chinese patent document CN110501712B discloses a method for determining position and orientation data. Reference point cloud data associated with a predetermined space are determined from point cloud data obtained by scanning that space with a first laser radar; a transmitting device for emitting an analog positioning signal and an error determining device for determining a positioning error from that signal are arranged in the space; position and orientation data of a vehicle are determined from the positioning error, the analog positioning signal received by the vehicle in the space, and the vehicle's inertial measurement unit data; and position and orientation data of a second laser radar are determined from the reference point cloud data, the vehicle's position and orientation data, and test point cloud data obtained by the second laser radar on the vehicle. This reduces dependence on a GNSS environment and saves laser radar calibration time.
Chinese patent document CN105318876A describes a combined odometer/inertial high-precision attitude measurement method, in which a combined Kalman filter calculation is performed based on the dead reckoning result, so that the horizontal misalignment angles and horizontal gyro drifts are estimated and corrected, improving attitude accuracy.
Chinese patent document CN114646993A describes a data fusion algorithm for accurate positioning based on GNSS, vision and an IMU. It exploits the complementarity between the different sensor data sources: the local data sources improve the global positioning accuracy of GNSS, and the global GNSS data source eliminates local accumulated error, improving the overall positioning performance and robustness.
However, the prior art contains no position and orientation (hereinafter abbreviated as pose) estimation method that couples and fuses LCO, LIO and CIO in this way.
Disclosure of Invention
In view of the above, the invention provides a pose estimation algorithm that fuses a laser radar, a camera and an inertial measurement unit. By constructing three independently running, tightly coupled odometers, LCO, LIO and CIO, and then fusing them in a loosely coupled architecture, the algorithm improves the pose estimation capability of multiple sensors, combining the high accuracy of tight coupling with the strong robustness of loose coupling.
Specifically, the aim of the invention is achieved by the following technical scheme:
according to one aspect of the present invention, there is provided a pose estimation algorithm using computer-implemented laser radar, camera, inertial measurement unit fusion, the method comprising the steps of:
(1) Constructing a tightly coupled, fused laser radar inertial odometer.
(2) Constructing a tightly coupled, fused camera inertial odometer.
(3) Constructing a tightly coupled, fused laser radar camera odometer.
(4) Carrying out a weighted average of the output results of the three tightly coupled odometers to construct a loosely coupled fusion odometer.
Further, according to the pose estimation algorithm described above, 3D point cloud data of the environment are obtained by a laser radar (LiDAR), a 2D projection image of the environment on its imaging plane is obtained by a visible light camera (Camera), and the three-axis acceleration and three-axis angular velocity of the carrier are obtained by an inertial measurement unit (IMU).
Further, according to the pose estimation algorithm described above, a laser radar Camera odometer (LCO) is constructed using a fusion of LiDAR and Camera, a Camera Inertial Odometer (CIO) is constructed using a fusion of Camera and IMU, and a laser radar inertial odometer (LIO) is constructed using a fusion of LiDAR and IMU.
Further, according to the pose estimation algorithm, the LCO, CIO and LIO odometers each fuse their sensors in a tightly coupled form, while the pose results output by the LCO, CIO and LIO are fused in a loosely coupled form, forming a multi-source joint pose estimation algorithm as a whole.
Further, the step (1) is specifically as follows: the LIO estimates the pose, where the position is expressed by three-dimensional point coordinates $(x, y, z)$ and the attitude by Euler angles (roll angle $\phi$, pitch angle $\theta$, yaw angle $\psi$), i.e. the pose state variable is $\chi = (x, y, z, \phi, \theta, \psi)$. Globally accurate estimates of the roll angle $\phi$ and the pitch angle $\theta$ are computed from the three-axis acceleration data of the IMU combined with the gravity constraint, and the remaining pose variables $(x, y, z, \psi)$ are optimized using the laser radar point cloud data, thereby decoupling the pose optimization.
Further, the gravity vector $[0\ 0\ g]^{T}$ is expressed in x-y-z Euler angles and projected onto the carrier coordinate system through the carrier's rotation matrix, and an equality is established with the three-axis acceleration data $[a_x\ a_y\ a_z]^{T}$ of the IMU:

$R(\phi,\theta,\psi)^{T}\,[0\ 0\ g]^{T} = [a_x\ a_y\ a_z]^{T}$,

where $R(\phi,\theta,\psi)$ is the carrier's rotation matrix. Expanding the left side gives $[-g\sin\theta,\ g\cos\theta\sin\phi,\ g\cos\theta\cos\phi]^{T}$, and solving yields

$\phi = \arctan\left(a_y / a_z\right), \qquad \theta = \arctan\left(-a_x \big/ \sqrt{a_y^{2}+a_z^{2}}\right)$.

In this way the three-axis acceleration data of the IMU provide more accurate estimates of the roll angle $\phi$ and the pitch angle $\theta$.
Further, the maximum a posteriori estimate of the pose state variable $\chi = (x, y, z, \phi, \theta, \psi)$ is decoupled into two parts: the first part ($\phi$, $\theta$) is computed directly from the three-axis acceleration data of the IMU, and the second part ($x$, $y$, $z$, $\psi$) is obtained by least-squares nonlinear optimization after laser radar point cloud matching, the objective being to minimize the matching residuals $r_i$, i.e. $\min_{x,y,z,\psi} \sum_i \lVert r_i \rVert^{2}$.
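As an illustration of this decoupling, the following minimal Python sketch shows only the first part, recovering the roll and pitch angles from a single gravity-dominated accelerometer reading; the function name, the assumed x-y-z (roll-pitch-yaw) convention and the sample values are illustrative and not taken from the patent, and the second part (point cloud matching and least-squares optimization of x, y, z, ψ) is not shown.

```python
import numpy as np

def roll_pitch_from_accel(a_x: float, a_y: float, a_z: float):
    """Return (roll phi, pitch theta) in radians from a gravity-dominated accelerometer reading."""
    phi = np.arctan2(a_y, a_z)                    # roll from the y/z components
    theta = np.arctan2(-a_x, np.hypot(a_y, a_z))  # pitch from x vs. the horizontal magnitude
    return phi, theta

# Example: carrier pitched 10 degrees, accelerometer in m/s^2, no linear acceleration
g = 9.81
a_x, a_y, a_z = -g * np.sin(np.radians(10.0)), 0.0, g * np.cos(np.radians(10.0))
phi, theta = roll_pitch_from_accel(a_x, a_y, a_z)
print(np.degrees(phi), np.degrees(theta))  # approximately 0.0 and 10.0
```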
Further, the step (2) is specifically as follows: the CIO computes the more accurate roll angle $\phi$ and pitch angle $\theta$ from the three-axis acceleration data of the IMU, thereby determining the direction of gravity. The directions of the feature points in each frame of the camera image are then rotated to the gravity direction of that frame, i.e. the gravity direction information of the corresponding frame is added to the direction information of the descriptor, which strengthens the rotation-invariant property during matching.
Further, after the direction of a feature point is rotated to the gravity direction of its frame, the descriptor of the feature point is computed as follows: a circle of a certain radius is drawn around the feature point as its center, and the pixels on the circle are compared one by one with the feature point; if a pixel is larger than the feature point the corresponding bit is set to 1, otherwise to 0, and the pixel directly above the feature point (in the direction opposite to gravity) is defined as the first comparison point, yielding a binary descriptor vector. The Hamming distance is used to measure the difference between two descriptors, i.e. the number of positions at which the two descriptor vectors differ (one is 1 and the other is 0). Such descriptors not only are rotation invariant, but the comparison scheme also describes more precisely the similarity between feature points of different frames.
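A minimal sketch of such a gravity-aligned binary descriptor and its Hamming distance is given below, assuming a grayscale image stored as a NumPy array; the function names, the circle radius and the number of sample points are illustrative choices and not values specified by the patent.

```python
import numpy as np

def circle_descriptor(img, u, v, gravity_dir_px, radius=8, n_samples=32):
    """Binary descriptor: compare n_samples pixels on a circle around (u, v) with the
    center pixel, starting from the point opposite to the image-plane gravity direction."""
    start = np.arctan2(-gravity_dir_px[1], -gravity_dir_px[0])  # angle of "straight up"
    angles = start + 2.0 * np.pi * np.arange(n_samples) / n_samples
    us = np.clip(np.round(u + radius * np.cos(angles)).astype(int), 0, img.shape[1] - 1)
    vs = np.clip(np.round(v + radius * np.sin(angles)).astype(int), 0, img.shape[0] - 1)
    return (img[vs, us] > img[v, u]).astype(np.uint8)  # bit is 1 where the circle pixel is larger

def hamming(d1, d2):
    """Number of positions at which the two binary descriptors differ."""
    return int(np.count_nonzero(d1 != d2))

# Example on a synthetic horizontal-gradient image, gravity pointing "down" in the image
img = np.tile(np.arange(64, dtype=np.uint8), (64, 1))
d_a = circle_descriptor(img, 20, 20, np.array([0.0, 1.0]))
d_b = circle_descriptor(img, 40, 40, np.array([0.0, 1.0]))
print(hamming(d_a, d_b))
```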
Further, the step (3) is specifically as follows: the LCO extracts 2D edge points of the environment from the camera image and 3D edge points of the environment from the laser radar point cloud, matches the two across modalities, and then solves the pose with PnP (Perspective-n-Point).
Further, the camera image is used to assist the extraction of edge feature points in the laser radar point cloud, as follows: within the potential pixel edge-point region of the camera image, the pixel with the largest first derivative is selected as a 2D edge point. The depth of that pixel is computed by triangulation, the point is projected into the laser radar point cloud coordinate system using the camera-laser radar extrinsic parameters, and the neighborhood of the resulting 3D point is taken as the potential point cloud edge-point region, which reduces the amount of computation and improves extraction efficiency and accuracy (see the sketch below). The camera may be any camera that can capture images, preferably a visible light camera, and may also be a metric camera.
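The sketch below illustrates this cross-modal narrowing step under assumed intrinsic and extrinsic parameters: a 2D edge pixel with a triangulated depth is lifted to 3D, transformed into the laser radar frame, and used to restrict the search for 3D edge points to a small neighborhood. All names, matrices and the search radius are illustrative and not the patent's values.

```python
import numpy as np

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])          # assumed pinhole intrinsics
T_lidar_cam = np.eye(4)
T_lidar_cam[:3, 3] = [0.1, 0.0, -0.05]   # assumed camera-to-LiDAR extrinsics (translation only)

def pixel_to_lidar(u, v, depth):
    """Back-project pixel (u, v) at its triangulated depth and express it in the LiDAR frame."""
    p_cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    return (T_lidar_cam @ np.append(p_cam, 1.0))[:3]

def search_region(cloud, center, radius=0.3):
    """Keep only LiDAR points within `radius` of the projected edge point."""
    return cloud[np.linalg.norm(cloud - center, axis=1) < radius]

# Example: restrict a random point cloud to the neighborhood of one projected edge pixel
cloud = np.random.default_rng(0).uniform(-2.0, 8.0, size=(1000, 3))
center = pixel_to_lidar(350.0, 230.0, 4.2)
print(search_region(cloud, center).shape)
```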
Further, the curvature is computed over the space around the potential point cloud edge points, and points with large curvature are extracted as edge points; all laser radar point cloud edge points are then back-projected into the camera pixel coordinate system using the extrinsic parameters, and the surroundings of those pixels are taken as the potential pixel edge-point region, again reducing the amount of computation and improving extraction efficiency and accuracy.
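Once 3D point cloud edge points and 2D image edge points have been matched across modalities, the pose can be solved with PnP. The following sketch uses OpenCV's solvePnP on synthetic, noise-free correspondences; the intrinsics and point values are assumptions made only for illustration and do not come from the patent.

```python
import numpy as np
import cv2

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])                         # assumed pinhole intrinsics

# Matched 3D edge points (LiDAR/world frame) and their noise-free 2D projections
pts_3d = np.array([[0.5, 0.2, 4.0], [-0.7, 0.1, 5.0], [0.3, -0.4, 6.0],
                   [1.0, 0.8, 4.5], [-0.2, 0.6, 5.5], [0.9, -0.7, 6.5]])
proj = (K @ pts_3d.T).T
pts_2d = proj[:, :2] / proj[:, 2:3]

ok, rvec, tvec = cv2.solvePnP(pts_3d, pts_2d, K, None)  # distortion assumed zero
R, _ = cv2.Rodrigues(rvec)                              # rotation matrix from axis-angle
print(ok, np.round(R, 3), np.round(tvec.ravel(), 3))    # near-identity rotation, near-zero translation
```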
Further, the step (4) is specifically as follows: the LIO, CIO and LCO all run independently in parallel, each fusing its internal data in a tightly coupled way, so each can independently compute a pose matrix and a pose confidence matrix. A Kalman filtering architecture is then used to average the three output pose results weighted by their confidence, achieving the loosely coupled fusion, and the single fused pose matrix and pose confidence matrix are output for a robot or an external program. Tight coupling and loose coupling here have their usual technical meanings in the art: tight coupling generally means jointly constructing the motion and observation equations for state estimation and optimizing the state variables together, whereas loose coupling generally means performing the motion estimations separately, optimizing the respective state vectors, and finally fusing the pose estimates obtained from the different optimizations.
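As a simplified stand-in for the Kalman-filter weighted averaging described above, the sketch below fuses the position estimates of the three odometers by inverse-covariance (confidence) weighting; the names, covariances and sample values are illustrative, and the full method would also fuse orientation and run inside a filter.

```python
import numpy as np

def fuse_positions(positions, covariances):
    """Fuse position estimates by inverse-covariance (confidence) weighting."""
    infos = [np.linalg.inv(P) for P in covariances]      # confidence = information matrix
    fused_cov = np.linalg.inv(sum(infos))
    fused_pos = fused_cov @ sum(I @ p for I, p in zip(infos, positions))
    return fused_pos, fused_cov

# Example: LIO is confident, CIO less so, LCO has effectively dropped out
p_lio = np.array([1.00, 2.00, 0.50])
p_cio = np.array([1.10, 1.95, 0.55])
p_lco = np.array([5.00, 5.00, 5.00])                     # erroneous output
P_lio, P_cio, P_lco = 0.01 * np.eye(3), 0.05 * np.eye(3), 1e6 * np.eye(3)
p, P = fuse_positions([p_lio, p_cio, p_lco], [P_lio, P_cio, P_lco])
print(np.round(p, 3))  # close to the LIO estimate; the failed LCO barely contributes
```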
The beneficial effects of the invention are as follows. Addressing the accuracy and robustness problems of multi-sensor fused pose estimation, three independently running odometers, LIO, CIO and LCO, are constructed by tightly coupling and fusing the laser radar, the camera and the inertial measurement unit. The tightly coupled joint optimization uses nonlinear optimization to process the observations directly, so the pose obtained after optimizing the residuals is highly accurate. Because the tightly coupled odometers run in parallel and each fuses the data of only two sensors, the number of residual terms does not grow significantly as sensors are added, so computation remains efficient. Loosely coupled fusion is then applied across the three odometers: when a sensor fails or its data are erroneous, the odometers that do not use that sensor are unaffected, and during the loosely coupled fusion the odometer with missing or erroneous data receives a low weight and has little influence on the fused pose, so robustness is strong. This architecture therefore improves the pose estimation capability of multi-sensor fusion.
Drawings
In order to more clearly illustrate the technical solutions of the present invention, the drawings that are required for the embodiments will be briefly described, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope of the present invention. Like elements are numbered alike in the various figures.
Fig. 1 is a schematic structural diagram of a high-precision attitude determination device on a laser radar platform in the prior art.
Fig. 2 is a schematic flow chart of a pose estimation algorithm of laser radar, visible light camera and inertial measurement unit fusion provided in an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments.
The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
Hereinafter, various embodiments of the present disclosure will be more fully described. The present disclosure is capable of various embodiments and of modifications and variations therein. However, it should be understood that: there is no intention to limit the various embodiments of the disclosure to the specific embodiments disclosed herein, but rather the disclosure is to be interpreted to cover all modifications, equivalents, and/or alternatives falling within the spirit and scope of the various embodiments of the disclosure.
The terms "comprises," "comprising," "including," or any other variation thereof are intended to cover a specified feature, number, step, operation, element, component, or combination of the foregoing used in various embodiments of the present invention, and are not intended to exclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.
Furthermore, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which various embodiments of the invention belong. The terms (such as those defined in commonly used dictionaries) will be interpreted as having a meaning that is the same as the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein in connection with the various embodiments of the invention.
The computer-implemented pose estimation algorithm that fuses a laser radar, a camera and an inertial measurement unit improves multi-sensor pose estimation by constructing three independently running, tightly coupled odometers, LCO, LIO and CIO, and then fusing the three odometers in a loosely coupled architecture. In a preferred embodiment, the camera is a visible light camera.
FIG. 2 is a flow chart of the computer-implemented pose estimation algorithm fusing a laser radar, a visible light camera and an inertial measurement unit in one embodiment of the invention. The method comprises the following steps.
step S110, distributing point cloud data of the laser radar, picture data of the visible light camera and data of the inertial measurement element to corresponding odometers in a pairwise combination mode, namely, a laser radar inertial odometer (LIO), a Camera Inertial Odometer (CIO) and a laser radar camera odometer (LCO) constructed in the follow-up steps.
Step S120: the laser radar inertial odometer (LIO) is constructed in a tightly coupled fusion manner. Globally accurate estimates of the roll angle $\phi$ and the pitch angle $\theta$ are computed from the three-axis acceleration data of the IMU combined with the gravity constraint, and the remaining pose variables $(x, y, z, \psi)$ are optimized with the laser radar point cloud data, achieving a decoupled pose optimization.
Step S130: the camera inertial odometer (CIO) is constructed in a tightly coupled fusion manner. The more accurate roll angle $\phi$ and pitch angle $\theta$ are computed from the three-axis acceleration data of the IMU, thereby determining the direction of gravity; the directions of the feature points in each frame of the camera image are rotated to the gravity direction of that frame, i.e. the gravity direction information of the corresponding frame is added to the direction information of the descriptor, strengthening the rotation-invariant property during matching.
Step S140: the laser radar camera odometer (LCO) is constructed. 2D edge points of the environment are extracted from the camera image, 3D edge points of the environment are extracted from the laser radar point cloud, the two are matched across modalities, and the pose is then solved with PnP (Perspective-n-Point).
Step S150: the output results of the three tightly coupled odometers are averaged with weights to construct the loosely coupled fusion odometer. In a preferred embodiment of the invention, the loosely coupled fusion framework is constructed first, Kalman filtering is then used to average the three output pose results weighted by their confidence, achieving the loosely coupled fusion, and the single fused pose matrix and pose confidence matrix are output for a robot or an external program.
In a preferred embodiment of the present invention, the step of constructing the laser radar inertial odometer (S120) specifically includes: globally accurate estimates of the roll angle $\phi$ and the pitch angle $\theta$ are obtained by combining the three-axis acceleration data of the IMU with the gravity constraint, and the remaining pose variables $(x, y, z, \psi)$ are optimized using the laser radar point cloud data, achieving a decoupled pose optimization.
Further, the gravity vector $[0\ 0\ g]^{T}$ is expressed in x-y-z Euler angles and projected onto the carrier coordinate system through the carrier's rotation matrix, and an equality is established with the three-axis acceleration data $[a_x\ a_y\ a_z]^{T}$ of the IMU:

$R(\phi,\theta,\psi)^{T}\,[0\ 0\ g]^{T} = [a_x\ a_y\ a_z]^{T}$,

where $R(\phi,\theta,\psi)$ is the carrier's rotation matrix. Expanding the left side gives $[-g\sin\theta,\ g\cos\theta\sin\phi,\ g\cos\theta\cos\phi]^{T}$, and solving yields

$\phi = \arctan\left(a_y / a_z\right), \qquad \theta = \arctan\left(-a_x \big/ \sqrt{a_y^{2}+a_z^{2}}\right)$,

i.e. more accurate estimates of the roll angle $\phi$ and the pitch angle $\theta$ obtained from the three-axis acceleration data of the IMU.
Further, least-squares nonlinear optimization is performed after laser radar point cloud matching, the objective being to minimize the matching residuals $r_i$, i.e. $\min_{x,y,z,\psi} \sum_i \lVert r_i \rVert^{2}$.
In a preferred embodiment of the present invention, the step of constructing the camera inertial odometer (S130) is specifically as follows: the CIO computes the more accurate roll angle $\phi$ and pitch angle $\theta$ from the three-axis acceleration data of the IMU, thereby determining the direction of gravity. The directions of the feature points in each frame of the camera image are then rotated to the gravity direction of that frame, i.e. the gravity direction information of the corresponding frame is added to the direction information of the descriptor, strengthening the rotation-invariant property during matching.
Further, after the direction of a feature point is rotated to the gravity direction of its frame, the descriptor of the feature point is computed as follows: a circle of a certain radius is drawn around the feature point as its center, and the pixels on the circle are compared one by one with the feature point; if a pixel is larger than the feature point the corresponding bit is set to 1, otherwise to 0, and the pixel directly above the feature point (in the direction opposite to gravity) is defined as the first comparison point, yielding a binary descriptor vector. The Hamming distance is used to measure the difference between two descriptors, i.e. the number of positions at which the two descriptor vectors differ (one is 1 and the other is 0). Such descriptors not only are rotation invariant, but the comparison scheme also describes more precisely the similarity between feature points of different frames.
In a preferred embodiment of the present invention, the step of constructing the laser radar camera odometer (S140) specifically includes: the LCO extracts 2D edge points of the environment from the camera image and 3D edge points of the environment from the laser radar point cloud, matches the two across modalities, and then solves the pose with PnP (Perspective-n-Point).
Further, the camera image is used to assist the extraction of edge feature points in the laser radar point cloud, as follows: within the potential pixel edge-point region of the camera image, the pixel with the largest first derivative is selected as a 2D edge point. The depth of that pixel is computed by triangulation, the point is projected into the laser radar point cloud coordinate system using the camera-laser radar extrinsic parameters, and the neighborhood of the resulting 3D point is taken as the potential point cloud edge-point region, reducing the amount of computation and improving extraction efficiency and accuracy.
Further, the curvature is computed over the space around the potential point cloud edge points, and points with large curvature are extracted as edge points; all laser radar point cloud edge points are then back-projected into the camera pixel coordinate system using the extrinsic parameters, and the surroundings of those pixels are taken as the potential pixel edge-point region, reducing the amount of computation and improving extraction efficiency and accuracy.
In a preferred embodiment of the present invention, the step of constructing the loosely coupled fusion odometer (S150) specifically includes: the LIO, CIO and LCO all run independently in parallel, each fusing its internal data in a tightly coupled way, so each can independently compute a pose matrix and a pose confidence matrix. A Kalman filtering architecture is then used to average the three output pose results weighted by their confidence, achieving the loosely coupled fusion, and the single fused pose matrix and pose confidence matrix are output for a robot or an external program.
According to the invention, addressing the accuracy and robustness problems of multi-sensor fused pose estimation, three independently running odometers, LIO, CIO and LCO, are constructed through tightly coupled fusion of the laser radar, the camera and the inertial measurement unit. The three odometers are fused by loose coupling: when a sensor fails or its data are erroneous, the odometers that do not use that sensor are unaffected, and during the loosely coupled fusion the odometer with missing or erroneous data receives a low weight and has little influence on the fused pose, so robustness is strong. The pose estimation capability of multi-sensor fusion can thereby be improved.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention.

Claims (13)

1. A computer-implemented pose estimation algorithm fusing a lidar, a camera and an inertial measurement unit, the method comprising the following steps:
(1) Constructing a tightly coupled and fused laser radar inertial odometer;
(2) Constructing a tightly coupled and fused camera inertial odometer;
(3) Constructing a tightly coupled and fused laser radar camera odometer;
(4) Carrying out a weighted average of the output results of the three tightly coupled odometers to construct a loosely coupled fusion odometer.
2. The pose estimation algorithm according to claim 1, wherein, in constructing the above odometers, 3D point cloud data of the environment are obtained by a laser radar LiDAR, 2D projection images of the environment on its imaging plane are obtained by a camera, and the three-axis acceleration and three-axis angular velocity of the carrier are obtained by an inertial measurement unit IMU.
3. The pose estimation algorithm of claim 1, wherein the LiDAR Camera odometer LCO is constructed with a fusion of the LiDAR and the Camera, the Camera inertial odometer CIO is constructed with a fusion of the Camera and the inertial measurement unit IMU, and the LiDAR inertial odometer LIO is constructed with a fusion of the LiDAR and the inertial measurement unit IMU.
4. A pose estimation algorithm according to claim 3, wherein the laser radar camera odometer LCO, the camera inertial odometer CIO and the laser radar inertial odometer LIO each fuse their sensors in a tightly coupled form, while the pose results output by the laser radar camera odometer LCO, the camera inertial odometer CIO and the laser radar inertial odometer LIO are fused in a loosely coupled form, forming a multi-source fused joint pose estimation algorithm as a whole.
5. The pose estimation algorithm according to claim 4, wherein the laser radar inertial odometer LIO estimates the pose with the position expressed by three-dimensional coordinates $(x, y, z)$ and the attitude expressed by Euler angles (roll angle $\phi$, pitch angle $\theta$, yaw angle $\psi$), i.e. the pose state variable is $\chi = (x, y, z, \phi, \theta, \psi)$, wherein globally accurate estimates of the roll angle $\phi$ and the pitch angle $\theta$ are computed from the three-axis acceleration data of the inertial measurement unit IMU combined with the gravity constraint, and the remaining pose variables $(x, y, z, \psi)$ are optimized using the laser radar point cloud data, thereby decoupling the pose optimization.
6. The pose estimation algorithm according to claim 5, wherein the gravity vector $[0\ 0\ g]^{T}$, expressed in x-y-z Euler angles, is projected onto the carrier coordinate system through the carrier's rotation matrix and set equal to the three-axis acceleration data $[a_x\ a_y\ a_z]^{T}$ of the inertial measurement unit IMU, giving $R(\phi,\theta,\psi)^{T}\,[0\ 0\ g]^{T} = [a_x\ a_y\ a_z]^{T}$, where $R(\phi,\theta,\psi)$ is the carrier's rotation matrix; solving the equation yields $\phi = \arctan\left(a_y / a_z\right)$ and $\theta = \arctan\left(-a_x \big/ \sqrt{a_y^{2}+a_z^{2}}\right)$, i.e. more accurate estimates of the roll angle $\phi$ and the pitch angle $\theta$ obtained from the three-axis acceleration data of the inertial measurement unit IMU.
7. The pose estimation algorithm according to claim 5, wherein the maximum a posteriori estimate of the pose state variable $\chi$ is decoupled into two parts: the first part ($\phi$, $\theta$) is computed directly from the three-axis acceleration data of the inertial measurement unit IMU, and the second part ($x$, $y$, $z$, $\psi$) is obtained by least-squares nonlinear optimization after laser radar point cloud matching, the objective function being to minimize the matching residuals $r_i$, i.e. $\min_{x,y,z,\psi} \sum_i \lVert r_i \rVert^{2}$.
8. The pose estimation algorithm according to claim 4, wherein the camera inertial odometer CIO computes the more accurate roll angle $\phi$ and pitch angle $\theta$ from the three-axis acceleration data of the inertial measurement unit IMU, thereby determining the direction of gravity; and the directions of the feature points in each frame of the camera image are rotated to the gravity direction of that frame.
9. The pose estimation algorithm according to claim 8, wherein, after the direction of a feature point is rotated to the gravity direction of its frame, the descriptor of the feature point is computed by drawing a circle of a certain radius around the feature point as its center and comparing the pixels on the circle one by one with the feature point, the corresponding bit being set to 1 if the pixel is larger than the feature point and to 0 otherwise, the pixel located directly above the feature point, i.e. in the direction opposite to gravity, being defined as the first comparison point, so as to obtain a binary descriptor vector; and the Hamming distance is used to measure the difference between two descriptors, i.e. the number of positions at which the elements of the two descriptor vectors differ, one being 1 and the other 0.
10. The pose estimation algorithm according to claim 4, wherein the laser radar camera odometer LCO extracts 2D edge points of the environment from the camera image, extracts 3D edge points of the environment from the laser radar point cloud, matches the two across modalities, and then solves the pose with PnP (Perspective-n-Point).
11. The pose estimation algorithm according to claim 10, wherein the extraction of edge feature points in the laser radar point cloud is assisted by the camera image, comprising: selecting, within the potential pixel edge-point region of the camera image, the pixel with the largest first derivative as a 2D edge point; computing the depth of that pixel by triangulation, projecting it into the laser radar point cloud coordinate system using the camera-laser radar extrinsic parameters, and taking the neighborhood of the resulting 3D point as the region of interest serving as the potential point cloud edge-point region.
12. The pose estimation algorithm according to claim 10, wherein the curvature is calculated over the space around the potential point cloud edge points, points with large curvature are extracted as edge points, all laser radar point cloud edge points are back-projected into the camera pixel coordinate system using the extrinsic parameters, and the surroundings of those pixels are taken as the potential pixel edge-point region.
13. The pose estimation algorithm according to claim 4, wherein the laser radar inertial odometer LIO, the camera inertial odometer CIO and the laser radar camera odometer LCO all run independently in parallel and each fuses its internal data in a tightly coupled manner, so that each can independently compute a pose matrix and a pose confidence matrix; a Kalman filtering architecture is then used to average the three output pose results weighted by their confidence, achieving the loosely coupled fusion, and the single fused pose matrix and pose confidence matrix are output for a robot or an external program.
CN202310144825.XA 2023-02-21 2023-02-21 Pose estimation algorithm fused by laser radar, visible light camera and inertial measurement unit Active CN116184430B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310144825.XA CN116184430B (en) 2023-02-21 2023-02-21 Pose estimation algorithm fused by laser radar, visible light camera and inertial measurement unit

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310144825.XA CN116184430B (en) 2023-02-21 2023-02-21 Pose estimation algorithm fused by laser radar, visible light camera and inertial measurement unit

Publications (2)

Publication Number Publication Date
CN116184430A CN116184430A (en) 2023-05-30
CN116184430B true CN116184430B (en) 2023-09-29

Family

ID=86436212

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310144825.XA Active CN116184430B (en) 2023-02-21 2023-02-21 Pose estimation algorithm fused by laser radar, visible light camera and inertial measurement unit

Country Status (1)

Country Link
CN (1) CN116184430B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117058474B (en) * 2023-10-12 2024-01-12 南昌航空大学 Depth estimation method and system based on multi-sensor fusion

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112781582A (en) * 2020-12-26 2021-05-11 复三人工智能科技(上海)有限公司 Multi-sensor fusion high-precision pose estimation algorithm under satellite weak observation condition
CN113436260A (en) * 2021-06-24 2021-09-24 华中科技大学 Mobile robot pose estimation method and system based on multi-sensor tight coupling
CN114608554A (en) * 2022-02-22 2022-06-10 北京理工大学 Handheld SLAM equipment and robot instant positioning and mapping method
CN114966734A (en) * 2022-04-28 2022-08-30 华中科技大学 Bidirectional depth vision inertial pose estimation method combined with multi-line laser radar

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8855848B2 (en) * 2007-06-05 2014-10-07 GM Global Technology Operations LLC Radar, lidar and camera enhanced methods for vehicle dynamics estimation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112781582A (en) * 2020-12-26 2021-05-11 复三人工智能科技(上海)有限公司 Multi-sensor fusion high-precision pose estimation algorithm under satellite weak observation condition
CN113436260A (en) * 2021-06-24 2021-09-24 华中科技大学 Mobile robot pose estimation method and system based on multi-sensor tight coupling
CN114608554A (en) * 2022-02-22 2022-06-10 北京理工大学 Handheld SLAM equipment and robot instant positioning and mapping method
CN114966734A (en) * 2022-04-28 2022-08-30 华中科技大学 Bidirectional depth vision inertial pose estimation method combined with multi-line laser radar

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jiarong Lin et al., "R2LIVE: A Robust, Real-Time, LiDAR-Inertial-Visual Tightly-Coupled State Estimator and Mapping," IEEE Robotics and Automation Letters, 2021, vol. 6, no. 4, pp. 7469-7476. *

Also Published As

Publication number Publication date
CN116184430A (en) 2023-05-30

Similar Documents

Publication Publication Date Title
US10788830B2 (en) Systems and methods for determining a vehicle position
WO2018196391A1 (en) Method and device for calibrating external parameters of vehicle-mounted camera
JP6760114B2 (en) Information processing equipment, data management equipment, data management systems, methods, and programs
CN110033489B (en) Method, device and equipment for evaluating vehicle positioning accuracy
CN102435188B (en) Monocular vision/inertia autonomous navigation method for indoor environment
JP5832341B2 (en) Movie processing apparatus, movie processing method, and movie processing program
EP2434256B1 (en) Camera and inertial measurement unit integration with navigation data feedback for feature tracking
CN111912416B (en) Method, device and equipment for positioning equipment
US20190072392A1 (en) System and method for self-geoposition unmanned aerial vehicle
CN109443348B (en) Underground garage position tracking method based on fusion of look-around vision and inertial navigation
US9020204B2 (en) Method and an apparatus for image-based navigation
CN107909614B (en) Positioning method of inspection robot in GPS failure environment
CN110411457B (en) Positioning method, system, terminal and storage medium based on stroke perception and vision fusion
US9959625B2 (en) Method for fast camera pose refinement for wide area motion imagery
US20120243775A1 (en) Wide baseline feature matching using collobrative navigation and digital terrain elevation data constraints
JP2001331787A (en) Road shape estimating device
CN112230242A (en) Pose estimation system and method
CN110081881A (en) It is a kind of based on unmanned plane multi-sensor information fusion technology warship bootstrap technique
CN110032201A (en) A method of the airborne visual gesture fusion of IMU based on Kalman filtering
CN113516692A (en) Multi-sensor fusion SLAM method and device
CN116184430B (en) Pose estimation algorithm fused by laser radar, visible light camera and inertial measurement unit
CN103411587A (en) Positioning and attitude-determining method and system
CN113052897A (en) Positioning initialization method and related device, equipment and storage medium
CN110458885B (en) Positioning system and mobile terminal based on stroke perception and vision fusion
CN114690229A (en) GPS-fused mobile robot visual inertial navigation method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant