CN110849363A - Pose calibration method, system and medium for laser radar and combined inertial navigation

Info

Publication number: CN110849363A (application CN201911221495.XA; granted as CN110849363B)
Country: China; original language: Chinese (zh)
Prior art keywords: point cloud data, inertial navigation, laser radar, linear
Inventors: 胡小波, 杨业
Assignee: LeiShen Intelligent System Co Ltd
Legal status: Active (granted)

Classifications

    • G01C21/165 Navigation; navigational instruments not provided for in groups G01C1/00-G01C19/00, by using measurements of speed or acceleration executed aboard the object being navigated; dead reckoning by integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments
    • G01C25/00 Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G01S7/497 Details of lidar systems according to group G01S17/00; means for monitoring or calibrating
    • G01S7/4972 Alignment of sensor

Abstract

The embodiment of the invention discloses a pose calibration method, system and medium for a laser radar and a combined inertial navigation. The method comprises the following steps: determining a pair of linear point cloud data of a target edge line of each stationary object according to a point cloud data set acquired by the laser radar at two acquisition positions for each of at least three stationary objects, wherein the target edge lines on the at least three stationary objects are not coplanar; converting the at least three pairs of linear point cloud data into a preset coordinate system according to an inertial navigation data set acquired by the combined inertial navigation at the two acquisition positions for each of the at least three stationary objects, wherein the laser radar is rigidly connected to the combined inertial navigation; and determining a pose transformation matrix from the laser radar to the combined inertial navigation according to the at least three pairs of linear point cloud data converted into the preset coordinate system. According to the scheme of the invention, the relative pose between the laser radar and the combined inertial navigation can be calibrated rapidly and accurately without manual measurement.

Description

Pose calibration method, system and medium for laser radar and combined inertial navigation
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to a method, a system and a medium for calibrating the pose of a laser radar and a combined inertial navigation system.
Background
With the development of sensor technology, the combined use of a multi-line laser radar and a combined inertial navigation has become an indispensable scheme in multi-sensor fusion. Calibration of the relative pose between the two is crucial to the use of the sensors and directly affects the accuracy of their measurement results.
At present, pose calibration of the laser radar and the combined inertial navigation is performed by manually measuring the angle and displacement between the rigidly connected laser radar and combined inertial navigation, and then calibrating the pose according to the measurement results. However, calibrating the relative pose by manual measurement is costly and error-prone, which seriously affects the accuracy of the pose calibration.
Disclosure of Invention
The embodiment of the invention provides a pose calibration method, system and medium for a laser radar and a combined inertial navigation, which eliminate the need to manually measure the relative pose of the laser radar and the combined inertial navigation and can calibrate the relative pose between them rapidly and accurately.
In a first aspect, an embodiment of the present invention provides a pose calibration method for a laser radar and a combined inertial navigation, including:
determining a pair of linear point cloud data of a target edge line of each static object according to a point cloud data set acquired by a laser radar at two acquisition positions for each static object in at least three static objects; wherein the target edge lines on the at least three stationary objects are not coplanar;
converting at least three pairs of straight line point cloud data into a preset coordinate system according to an inertial navigation data set acquired by the combined inertial navigation at the two acquisition positions for each of the at least three stationary objects; wherein the laser radar is rigidly connected with the combined inertial navigation system;
and determining a pose transformation matrix from the laser radar to the combined inertial navigation according to the at least three pairs of linear point cloud data converted into the preset coordinate system.
In a second aspect, an embodiment of the present invention further provides a pose calibration apparatus for a laser radar and a combined inertial navigation, where the apparatus includes:
a linear point cloud determination module, configured to determine a pair of linear point cloud data of a target edge line of each stationary object according to a point cloud data set acquired by a laser radar for each of at least three stationary objects at two acquisition positions; wherein the target edge lines on the at least three stationary objects are not coplanar;
a coordinate system conversion module, configured to convert at least three pairs of straight-line point cloud data into a preset coordinate system according to an inertial navigation data set acquired by the combined inertial navigation at the two acquisition positions for each of the at least three stationary objects; wherein the laser radar is rigidly connected with the combined inertial navigation system;
and a pose matrix determination module, configured to determine a pose conversion matrix from the laser radar to the combined inertial navigation according to the at least three pairs of linear point cloud data converted into the preset coordinate system.
In a third aspect, an embodiment of the present invention further provides a mapping system, where the mapping system includes a laser radar, a combined inertial navigation and a control device; the control device is connected to the laser radar and the combined inertial navigation respectively, and the control device includes:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the method for calibrating the pose of lidar and combined inertial navigation according to the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the method for calibrating the pose of the lidar and the combined inertial navigation according to the first aspect is implemented.
According to the pose calibration method, system and medium for the laser radar and the combined inertial navigation, a pair of linear point cloud data of the target edge line of each stationary object is determined from the point cloud data sets and inertial navigation data sets acquired by the rigidly connected laser radar and combined inertial navigation for each of at least three stationary objects at two different acquisition positions; each pair of linear point cloud data is converted into a preset coordinate system; and a pose conversion matrix from the laser radar to the combined inertial navigation is then determined from the pairs of point cloud data converted into the preset coordinate system. The technical scheme of the embodiment of the invention requires neither manual measurement nor additional measuring instruments during the whole calibration of the relative pose of the laser radar and the combined inertial navigation, thereby avoiding inaccurate calibration results caused by measurement errors, greatly improving the accuracy of the calibration result and reducing the calibration cost. It provides a new idea for calibrating the relative pose between the laser radar and the combined inertial navigation.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered as limiting its scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flowchart of a pose calibration method for a laser radar and a combined inertial navigation in the first embodiment of the present invention;
Fig. 2A is a flowchart of a pose calibration method for a laser radar and a combined inertial navigation in the second embodiment of the present invention;
Fig. 2B is a schematic view of the installation orientation of the laser radar in the second embodiment of the present invention;
Fig. 3 is a flowchart of a pose calibration method for a laser radar and a combined inertial navigation in the third embodiment of the present invention;
Fig. 4 is a flowchart of a pose calibration method for a laser radar and a combined inertial navigation in the fourth embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a pose calibration apparatus for a laser radar and a combined inertial navigation in the fifth embodiment of the present invention;
Fig. 6A is a schematic structural diagram of a mapping system in the sixth embodiment of the present invention;
Fig. 6B is a schematic structural diagram of a control device of the mapping system in the sixth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Example one
Fig. 1 is a flowchart of a pose calibration method for a laser radar and a combined inertial navigation according to the first embodiment of the present invention. The embodiment is applicable to situations in which the relative pose between a laser radar and a combined inertial navigation needs to be calibrated accurately. The method may be performed by the control device in the mapping system of the embodiment of the present invention, which may be implemented in software and/or hardware. As shown in fig. 1, the method specifically includes the following steps:
s101, determining a pair of straight line point cloud data of a target edge line of each static object according to a point cloud data set acquired by the laser radar for each static object in at least three static objects at two acquisition positions.
The laser radar is a radar system that detects the position of an object by emitting a laser beam; optionally, the laser radar in this embodiment may be a multi-line laser radar. The point cloud data set is a set of points with three-dimensional coordinates acquired by the laser radar, and can be used to represent the shape of the outer surface of an object. The three-dimensional position of each point can be represented by (x, y, z), and a point may also carry the reflected light intensity at that point. A stationary object in the embodiment of the present invention may be any object in a stationary state in the current capture scene, for example a building, a square, or a stationary vehicle. Optionally, to ensure the accuracy of the pose calibration, large stationary objects with straight edge lines are preferred in this embodiment. An edge line of a stationary object may be the intersection of any two adjacent faces of the object. The target edge line of a stationary object can be one of its edge lines that is straight, or tends to be straight, and that is visible in the acquired visual image of the point cloud data set of the object. For example, if the stationary object is building 1 and the point cloud data set collected by the laser radar is a top view of building 1, a straight edge line of the top of building 1 may be used as the target edge line of building 1.
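As a concrete illustration, a point cloud data set of this kind can be held as a plain numeric array; the layout, field names and values below are illustrative assumptions, not taken from the patent.

```python
import numpy as np

# A minimal sketch of one point cloud data set: each row is one lidar return,
# stored as (x, y, z, intensity). Values here are made-up placeholders.
point_cloud = np.array([
    [12.41, -3.07, 8.92, 0.63],   # x, y, z in metres; reflected intensity
    [12.45, -3.01, 8.95, 0.58],
    [12.50, -2.96, 8.97, 0.61],
])
xyz = point_cloud[:, :3]        # geometric positions used for calibration
intensity = point_cloud[:, 3]   # per-point reflectance, not used below
```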
It should be noted that, for each of at least three preselected stationary objects, the rigidly connected lidar and the combined inertial navigation system to be calibrated respectively acquire a point cloud data set and an inertial navigation data set for the stationary object at two different acquisition positions. The specific acquisition process will be described in detail in the following examples.
Optionally, since the laser radar acquires a point cloud data set of each stationary object at each of two different acquisition positions, the point cloud data of each of the at least three stationary objects comprise a first point cloud data set acquired at a first acquisition position and a second point cloud data set acquired at a second acquisition position. In this step, when determining the pair of linear point cloud data of the target edge line of a stationary object, point cloud data representing the target edge line may be selected from the first point cloud data set as first linear point cloud data, and point cloud data representing the same target edge line may be selected from the second point cloud data set as second linear point cloud data; the first linear point cloud data and the second linear point cloud data then form the pair of linear point cloud data of the stationary object. When selecting the corresponding linear point cloud data from the two point cloud data sets, an operator may manually mark the point cloud data representing the target edge line according to actual requirements, or the control device may select them automatically according to a target edge line identification algorithm; this embodiment does not limit the selection method.
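A minimal sketch of such a selection is given below, assuming the target edge line has already been marked (manually or by a recognition algorithm) as a point on the line and a direction; the function name and tolerance are illustrative assumptions, not the patent's algorithm.

```python
import numpy as np

def select_line_points(cloud_xyz, line_point, line_dir, tol=0.05):
    """Keep the points of `cloud_xyz` (Nx3) lying within `tol` metres of the
    line through `line_point` along `line_dir`; these points serve as the
    linear point cloud data of the target edge line."""
    line_dir = line_dir / np.linalg.norm(line_dir)
    diff = cloud_xyz - line_point                            # vectors to each point
    dist = np.linalg.norm(np.cross(diff, line_dir), axis=1)  # point-to-line distances
    return cloud_xyz[dist < tol]

# Applying this to the first and second point cloud data sets of one
# stationary object yields its pair of linear point cloud data.
```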
It should be noted that this step determines a pair of linear point cloud data of the target edge line of each of at least three stationary objects, and that the target edge lines on the at least three stationary objects are not coplanar. How to determine a pair of linear point cloud data for each stationary object, and how to ensure that the target edge lines corresponding to the at least three determined pairs are not coplanar, will be described in detail in the following embodiments.
S102, converting at least three pairs of straight line point cloud data into a preset coordinate system according to an inertial navigation data set acquired by the combined inertial navigation on each static object in at least three static objects at two acquisition positions.
The combined inertial navigation may refer to a combination of at least two units or systems having a positioning function, for example a combination of at least one of an Inertial Measurement Unit (IMU) and an Inertial Navigation System (INS) with at least one of the Global Positioning System (GPS) and the BeiDou Navigation Satellite System (BDS). It should be noted that, in the embodiment of the present invention, the positioning accuracy of a combined inertial navigation formed from several units or systems with positioning functions is higher than that of any single positioning unit or system. Optionally, the laser radar and the combined inertial navigation in the embodiment of the invention are rigidly connected. The inertial navigation data set is a data set acquired by the combined inertial navigation that represents the position and attitude of the rigid structure formed by the rigidly connected laser radar and combined inertial navigation, and comprises: longitude, latitude, altitude, roll angle, pitch angle, and heading angle. The preset coordinate system in the embodiment of the present invention may be any one of a geocentric coordinate system, a geodetic coordinate system, a geographic coordinate system, and the like, which is not limited in this embodiment.
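One record of such an inertial navigation data set could be modelled as below; the field names and units are assumptions for illustration, covering exactly the six quantities the text lists.

```python
from dataclasses import dataclass

@dataclass
class InsRecord:
    """One combined-inertial-navigation sample: position plus attitude."""
    longitude: float   # degrees
    latitude: float    # degrees
    altitude: float    # metres
    roll: float        # radians
    pitch: float       # radians
    heading: float     # radians (yaw)
```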
Optionally, for each stationary object, while the lidar acquires the point cloud data set of the stationary object at each of two different acquisition positions, the combined inertial navigation rigidly connected to the lidar also acquires the inertial navigation data set at each of the two acquisition positions. Thus, for each of the at least three stationary objects, its inertial navigation data set comprises a first inertial navigation data set acquired at the first acquisition position and a second inertial navigation data set acquired at the second acquisition position.
When converting the at least three pairs of linear point cloud data of the at least three stationary objects into the preset coordinate system, for each stationary object the first linear point cloud data corresponding to the first acquisition position may be converted from its radar coordinate system into the preset coordinate system according to the first inertial navigation data set acquired at the first acquisition position, and the second linear point cloud data corresponding to the second acquisition position may be converted from its radar coordinate system into the preset coordinate system according to the second inertial navigation data set acquired at the second acquisition position. A specific conversion process may include the following two sub-steps:
and S1021, determining a pair of coordinate system conversion matrixes of each static object from the inertial navigation coordinate system to a preset coordinate system at the first acquisition position and the second acquisition position according to the inertial navigation data set acquired by the combined inertial navigation on each static object in the at least three static objects at the two acquisition positions.
Optionally, in this sub-step, for each of at least three stationary objects, according to a first inertial navigation data set at a first acquisition position in the inertial navigation data sets of the stationary object, a first coordinate system conversion matrix of the stationary object from the inertial navigation coordinate system to the preset coordinate system at the first acquisition position is determined, and according to a second inertial navigation data set at a second acquisition position in the inertial navigation data sets of the stationary object, a second coordinate system conversion matrix of the stationary object from the inertial navigation coordinate system to the preset coordinate system at the second acquisition position is determined.
Specifically, there are many ways to determine the coordinate system conversion matrix from the inertial navigation coordinate system to the preset coordinate system from an inertial navigation data set, and this embodiment is not limited in this respect. For example, a coordinate system conversion tool may be adopted: the inertial navigation data set is input into the tool, the preset coordinate system to be converted to is selected, and the tool is run, whereupon it outputs the coordinate system conversion matrix from the inertial navigation coordinate system to the preset coordinate system corresponding to that inertial navigation data set. Alternatively, the coordinate system conversion matrix corresponding to the inertial navigation data set may be calculated directly from the data set using the conversion formulas from the inertial navigation coordinate system to the preset coordinate system.
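As one concrete (and deliberately simplified) sketch of such a calculation, the matrix below combines a heading-pitch-roll rotation with a flat-earth translation into a local east-north-up frame; the rotation convention, the metres-per-degree approximation and the function name are all assumptions, not the patent's formulas.

```python
import numpy as np

def ins_to_preset(lon, lat, alt, roll, pitch, heading, origin):
    """Sketch of a 4x4 conversion matrix from the INS frame to a local
    ENU 'preset' frame, assuming a ZYX (heading-pitch-roll) convention
    and a small survey area around `origin` = (lon0, lat0, alt0)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    ch, sh = np.cos(heading), np.sin(heading)
    Rz = np.array([[ch, -sh, 0], [sh, ch, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    lon0, lat0, alt0 = origin
    m_per_deg = 111_320.0                      # rough metres per degree
    t = np.array([
        (lon - lon0) * m_per_deg * np.cos(np.radians(lat0)),  # east
        (lat - lat0) * m_per_deg,                             # north
        alt - alt0,                                           # up
    ])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = t
    return T
```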
And S1022, converting at least three pairs of linear point cloud data from a radar coordinate system to a preset coordinate system according to at least three pairs of coordinate system conversion matrixes and a pose conversion matrix from the laser radar to be determined to the combined inertial navigation.
Specifically, in step S101 a pair of linear point cloud data is determined for each of the at least three stationary objects, and these linear point cloud data are expressed in the radar coordinate system. In this step, a pair of linear point cloud data in the radar coordinate system is first converted into the combined inertial navigation coordinate system according to the (to-be-determined) pose transformation matrix from the laser radar to the combined inertial navigation, and then converted into the preset coordinate system according to the pair of coordinate system conversion matrices of the stationary object determined in step S1021. Specifically, for each stationary object, the first coordinate system conversion matrix at the first acquisition position is used to convert the first linear point cloud data corresponding to the first acquisition position, and the second coordinate system conversion matrix at the second acquisition position is used to convert the corresponding second linear point cloud data. For example, if the first linear point cloud data of a stationary object in the radar coordinate system at the first position is P, the first coordinate system conversion matrix of the stationary object at the first acquisition position is R, and the to-be-determined pose transformation matrix from the laser radar to the combined inertial navigation is C, then the first linear point cloud data P′ converted into the preset coordinate system may be determined according to the formula P′ = RCP.
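In homogeneous coordinates the chained conversion P′ = RCP can be applied to a whole linear point cloud at once; the sketch below assumes 4x4 homogeneous matrices such as those produced by the ins_to_preset sketch above.

```python
import numpy as np

def to_preset_frame(points_lidar_xyz, C, R):
    """Apply P' = R @ C @ P to an Nx3 cloud in the radar frame, where
    C is the (unknown, to-be-calibrated) lidar-to-INS pose matrix and
    R is the INS-to-preset conversion matrix at that acquisition position."""
    n = len(points_lidar_xyz)
    pts_h = np.hstack([points_lidar_xyz, np.ones((n, 1))])  # Nx4 homogeneous
    return (R @ C @ pts_h.T).T[:, :3]                       # back to Nx3
```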
S103, determining a pose transformation matrix from the laser radar to the combined inertial navigation according to at least three pairs of linear point cloud data transformed to a preset coordinate system.
Optionally, once the at least three pairs of linear point cloud data have been converted into the preset coordinate system, each pair corresponds to the same target edge line of the same stationary object in space; that is, the first linear point cloud data and the second linear point cloud data of each pair, once converted into the preset coordinate system, should lie on the same straight line. Specifically, two positional relationships are possible: in the first, the converted first and second linear point cloud data each cover part of the target edge line of the stationary object but do not overlap; in the second, the converted first and second linear point cloud data partially overlap. Since the linear point cloud data converted into the preset coordinate system in S1022 contain the to-be-determined pose conversion matrix from the laser radar to the combined inertial navigation, equations whose unknown is that pose conversion matrix can be constructed from the positional relationship between the converted first and second linear point cloud data of each stationary object (for example, the distance between the first linear point cloud data and the second linear point cloud data is 0).
Optionally, for the pair of point cloud data of each stationary object, at most four equations whose unknown is the pose conversion matrix from the laser radar to the combined inertial navigation may be constructed (a specific construction method is described in a subsequent embodiment). The to-be-determined pose conversion matrix contains 9 pose calibration parameters for the rotation transformation and 3 for the translation transformation, i.e., 12 unknown parameters. Therefore, the embodiment of the present invention requires the 12 equations constructed from at least three stationary objects to form a target equation set; solving this target equation set yields the 12 unknown parameters of the pose conversion matrix from the laser radar to the combined inertial navigation. The specific implementation process will be described in detail in a subsequent embodiment.
Optionally, when the number of stationary objects is greater than three, a target equation set of 12 equations may still be constructed from the plurality of stationary objects and solved for the 12 unknown parameters of the pose transformation matrix from the laser radar to the combined inertial navigation. Alternatively, since more than 12 equations can then be constructed, the optimal solution of the target equation set formed from all constructed equations may be found by the least square method. This embodiment is not limited in this respect.
According to the pose calibration method for the laser radar and the combined inertial navigation provided by the embodiment of the invention, a pair of linear point cloud data of the target edge line of each stationary object is determined from the point cloud data sets and inertial navigation data sets acquired by the rigidly connected laser radar and combined inertial navigation for each of at least three stationary objects at two different acquisition positions; each pair of linear point cloud data is converted into a preset coordinate system; and a pose conversion matrix from the laser radar to the combined inertial navigation is determined from the pairs of point cloud data converted into the preset coordinate system. The technical scheme of the embodiment of the invention requires neither manual measurement nor additional measuring instruments during the whole calibration of the relative pose, thereby avoiding inaccurate calibration results caused by measurement errors, greatly improving the accuracy of the calibration result and reducing the calibration cost. It provides a new idea for calibrating the relative pose between the laser radar and the combined inertial navigation.
Example two
Fig. 2A is a flowchart of a pose calibration method for a laser radar and a combined inertial navigation in the second embodiment of the present invention, and fig. 2B is a schematic view of the installation orientation of the laser radar in this embodiment. This embodiment is further optimized on the basis of the above embodiment and specifically describes how the point cloud data set and inertial navigation data set of each stationary object are obtained. As shown in fig. 2A, the method of this embodiment specifically includes the following steps:
s201, point cloud data sets and inertial navigation data sets acquired by rigidly connected laser radars and combined inertial navigation on each static object in at least three static objects at two acquisition positions are acquired.
Optionally, in the embodiment of the present invention, when the laser radar and the combined inertial navigation to be calibrated that are rigidly connected are used for acquiring the point cloud data set and the inertial navigation data set for at least three stationary objects, it is only required to ensure that the point cloud data set and the inertial navigation data set of each stationary object are acquired at two different acquisition positions, and there are many specific acquisition modes, which are not limited herein. For example, three possible implementations may be included as follows:
in the first embodiment, for each of at least three stationary objects, a point cloud data set and an inertial navigation data set acquired by the rigidly connected lidar and the combined inertial navigation system for the stationary object at two acquisition positions are sequentially acquired.
Specifically, the rigidly connected laser radar and combined inertial navigation may acquire the point cloud data set and inertial navigation data set of one stationary object at one acquisition position at a time. That is, for each of the at least three stationary objects, the rigidly connected laser radar and combined inertial navigation perform one acquisition of a point cloud data set and an inertial navigation data set at each of two different acquisition positions. The data finally acquired in this mode comprise, for each stationary object, two point cloud data sets (the first point cloud data set and the second point cloud data set) and two inertial navigation data sets (the first inertial navigation data set and the second inertial navigation data set).
Exemplarily, assume there are three stationary objects, namely stationary object 1, stationary object 2 and stationary object 3. For stationary object 1, the rigidly connected laser radar and combined inertial navigation collect a first point cloud data set and a first inertial navigation data set at position A, and a second point cloud data set and a second inertial navigation data set at position B; for stationary object 2, they collect a first point cloud data set and a first inertial navigation data set at position C, and a second point cloud data set and a second inertial navigation data set at position D; for stationary object 3, they collect a first point cloud data set and a first inertial navigation data set at position E, and a second point cloud data set and a second inertial navigation data set at position F. That is, six point cloud data sets and six inertial navigation data sets are collected in total for the three stationary objects.
In a second embodiment, point cloud data sets and inertial navigation data sets acquired by the rigidly connected lidar and the combined inertial navigation system for at least three stationary objects at two acquisition positions are acquired.
Specifically, the rigidly connected laser radar and combined inertial navigation may acquire a point cloud data set and an inertial navigation data set covering at least three stationary objects simultaneously at one acquisition position. That is, the rigidly connected laser radar and combined inertial navigation perform only one acquisition at each of two different acquisition positions for the at least three stationary objects. In this mode, only two point cloud data sets (the first and the second point cloud data set) and two inertial navigation data sets (the first and the second inertial navigation data set) are acquired in total for the at least three stationary objects.
Illustratively, assume there are three stationary objects, namely stationary object 1, stationary object 2 and stationary object 3. The rigidly connected laser radar and combined inertial navigation simultaneously acquire a first point cloud data set and a first inertial navigation data set covering the three stationary objects at position A, and a second point cloud data set and a second inertial navigation data set covering them at position B. That is, two point cloud data sets and two inertial navigation data sets are acquired in total for the three stationary objects.
In a third implementation manner, the point cloud data set and the inertial navigation data set are acquired by using the second implementation manner for a part of the at least three static objects (where the number of the part of the static objects is greater than or equal to two), and the point cloud data set and the inertial navigation data set are acquired by using the first implementation manner for the rest of the static objects.
Specifically, some of the at least three stationary objects may be close to each other and simultaneously within the coverage of the laser radar; for these, the point cloud data sets and inertial navigation data sets may be acquired simultaneously. For each remaining object that is too far away to share the laser radar's coverage with the others, the rigidly connected laser radar and combined inertial navigation are moved until the object lies within the laser radar's coverage, and its point cloud data set and inertial navigation data set are then collected at two different acquisition positions using the first implementation.
For example, assume there are three stationary objects, namely stationary object 1, stationary object 2 and stationary object 3, where stationary object 1 and stationary object 2 can both be within the coverage of the laser radar. The point cloud data sets of stationary object 1 and stationary object 2 may then be acquired using the second implementation above, and the point cloud data set and inertial navigation data set of stationary object 3 may be acquired using the first implementation above after adjusting the position of the rigidly connected laser radar and combined inertial navigation.
S202, determining a pair of straight line point cloud data of the target edge line of each static object according to the point cloud data set acquired by the laser radar for each static object in the at least three static objects at the two acquisition positions.
Optionally, to ensure the accuracy of the finally determined pose transformation matrix from the laser radar to the combined inertial navigation, the embodiment of the invention requires that the target edge lines on the at least three stationary objects do not lie in one plane (i.e., are not coplanar). Since a stationary object on the ground is generally perpendicular to the ground, and its vertical edge lines generally lie in a common vertical plane, the preferred installation in this embodiment is to mount the laser radar vertically relative to the horizontal plane, which helps ensure that the target edge lines captured in the collected point cloud data are not coplanar. Vertical mounting means that the drive mechanism of the laser radar, such as the rotation axis of its motor, is arranged parallel to the horizontal plane. It will be appreciated that the laser radar may also be inclined at an angle to the horizontal, as long as it can still scan the edge profile at the top of the object normally. The installation orientation is shown schematically in fig. 2B: the Z-axis of the laser radar is parallel to the horizontal plane, and either the laser radar rotates as a whole about the Z-axis or its internal light-emitting element rotates about the Z-axis, so that the laser beam sweeps around the Z-axis to scan and detect the surrounding environment. The rigidly connected laser radar and combined inertial navigation can acquire data at a position above or to the side of a stationary object.
S203, converting at least three pairs of straight line point cloud data into a preset coordinate system according to an inertial navigation data set acquired by the combined inertial navigation on each of at least three static objects at two acquisition positions.
And S204, determining a pose transformation matrix from the laser radar to the combined inertial navigation according to at least three pairs of linear point cloud data transformed into a preset coordinate system.
According to the pose calibration method for the laser radar and the combined inertial navigation provided by the embodiment of the invention, the point cloud data sets and inertial navigation data sets collected by the rigidly connected laser radar and combined inertial navigation at two acquisition positions for each of at least three stationary objects are acquired; a pair of linear point cloud data of the target edge line of each stationary object is determined from these data sets; each pair of linear point cloud data is converted into a preset coordinate system; and the pose conversion matrix from the laser radar to the combined inertial navigation is then determined from the pairs of point cloud data converted into the preset coordinate system. In the technical scheme of the embodiment of the invention, the calibration of the relative pose is completed using only the data acquired by the mapping system's rigidly connected laser radar and combined inertial navigation to be calibrated, which removes the errors introduced by manual measurement, greatly improves the accuracy of the calibration result, and reduces the calibration cost.
Example three
Fig. 3 is a flowchart of a pose calibration method for a laser radar and a combined inertial navigation in the third embodiment of the present invention. This embodiment is further optimized on the basis of the above embodiments and specifically describes how to determine a pair of linear point cloud data of the target edge line of each stationary object from the point cloud data sets acquired by the laser radar for each of at least three stationary objects at two acquisition positions. As shown in fig. 3, the method of this embodiment specifically includes the following steps:
s301, generating a visual image of a first point cloud data set at a first acquisition position and a visual image of a second point cloud data set at a second acquisition position of each stationary object according to the point cloud data sets acquired by the lidar for each of at least three stationary objects at two acquisition positions.
Optionally, in the embodiment of the present invention, for each of the at least three stationary objects, the first point cloud data set of the stationary object at the first acquisition position is input into a visualization program, which visually displays the coordinates of each point in the set, forming a visual image of the first point cloud data set. Likewise, the second point cloud data set of the stationary object at the second acquisition position is input into the visualization program, which displays the coordinates of each of its points, forming a visual image of the second point cloud data set.
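The patent does not name a specific visualization program; as one possible minimal sketch, the point cloud set can simply be scattered in 3D so an operator can locate and mark the target edge line.

```python
import matplotlib.pyplot as plt

def show_cloud(cloud_xyz, title="first point cloud data set"):
    """Display an Nx3 point cloud so an operator can mark the target edge line."""
    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    ax.scatter(cloud_xyz[:, 0], cloud_xyz[:, 1], cloud_xyz[:, 2], s=1)
    ax.set(title=title, xlabel="x (m)", ylabel="y (m)")
    ax.set_zlabel("z (m)")
    plt.show()
```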
It should be noted that if, in the embodiment of the present invention, the rigidly connected laser radar and combined inertial navigation acquire the point cloud data of each stationary object sequentially, i.e., the laser radar acquires the point cloud data sets of each of the at least three stationary objects at two acquisition positions in turn, then the operation of this step is performed once for each of the two point cloud data sets of each stationary object, resulting in at least six visual images of point cloud data sets. In this case, the visual images of the two point cloud data sets of each stationary object contain only that stationary object.
If instead the rigidly connected laser radar and combined inertial navigation acquire the point cloud data sets of the at least three stationary objects simultaneously at each of the two acquisition positions, then the operation of this step is performed only once for the two point cloud data sets, generating two visual images. It should be noted that in this case each visual image contains at least three stationary objects.
And S302, taking the point cloud data representing the target edge line of the static object in the visual image of the first point cloud data set of each static object as the first linear point cloud data of the target edge line of the static object.
And S303, taking the point cloud data representing the target edge line of the static object in the visual image of the second point cloud data set of each static object as second straight line point cloud data of the target edge line of the static object.
Optionally, if in S301 two visual images (i.e., a visual image of the first point cloud data set and a visual image of the second point cloud data set) are generated for each stationary object, then the operation of S302 may be performed on the visual image of the first point cloud data set of each stationary object, and the operation of S303 on the visual image of the second point cloud data set of each stationary object. To ensure that the target edge lines of the stationary objects corresponding to the pairs of linear point cloud data (i.e., the first linear point cloud data and the second linear point cloud data) are not coplanar, the target edge line of each stationary object may be selected according to the position of each stationary object in the actual three-dimensional space and the angle of the laser emitting surface of the laser radar.
If in S301 only two visual images are generated for the at least three stationary objects (i.e., a visual image of the first point cloud data set and a visual image of the second point cloud data set), then the operation of S302 may be performed at least three times on the visual image of the first point cloud data set containing the at least three stationary objects, determining from it the first linear point cloud data of the target edge line of each of the at least three stationary objects; likewise, the operation of S303 is performed at least three times on the visual image of the second point cloud data set, determining from it the second linear point cloud data of the target edge line of each stationary object. To ensure that the target edge lines of the stationary objects corresponding to the pairs of linear point cloud data (i.e., the first and second linear point cloud data) are not coplanar, the at least three first linear point cloud data selected from the visual image of the first point cloud data set must not be parallel, and the at least three second linear point cloud data selected from the visual image of the second point cloud data set must not be parallel.
Optionally, in the embodiment of the present invention there are many specific methods for determining the point cloud data representing the target edge line of each stationary object in the visual image of its first or second point cloud data set, and this embodiment is not limited in this respect. For example, the visual images of the first and second point cloud data sets of each stationary object may be displayed to an operator, who marks the position of the same target edge line of the stationary object in both visual images (for example, by drawing a straight line, or by marking at least two points on the line). The control device may then take the point cloud data at the marked position in the visual image of the first point cloud data set as the first linear point cloud data, and the point cloud data at the marked position in the visual image of the second point cloud data set as the second linear point cloud data. The first and second linear point cloud data so determined form the pair of linear point cloud data of the stationary object.
S304, converting the first straight line point cloud data and the second straight line point cloud data of each static object into a preset coordinate system according to an inertial navigation data set acquired by the combined inertial navigation at two acquisition positions for each static object in at least three static objects.
S305, determining a pose conversion matrix from the laser radar to the combined inertial navigation according to the first linear point cloud data and the second linear point cloud data of each static object converted to the preset coordinate system.
According to the pose calibration method for the laser radar and the combined inertial navigation provided by the embodiment of the invention, visual images of the two point cloud data sets acquired by the laser radar for each of at least three stationary objects at two acquisition positions are generated; the point cloud data representing the target edge line of each stationary object in its two visual images are taken as a pair of linear point cloud data; each pair of linear point cloud data is converted into a preset coordinate system according to the inertial navigation data sets acquired by the combined inertial navigation for each stationary object at the two acquisition positions; and the pose conversion matrix from the laser radar to the combined inertial navigation is then determined from the pairs of point cloud data converted into the preset coordinate system. The technical scheme of the embodiment of the invention avoids inaccurate calibration results caused by measurement errors, greatly improves the accuracy of the calibration result, reduces the calibration cost, and provides a new idea for calibrating the relative pose between the laser radar and the combined inertial navigation.
Example four
Fig. 4 is a flowchart of a pose calibration method for a laser radar and a combined inertial navigation in the fourth embodiment of the present invention. This embodiment is further optimized on the basis of the above embodiments and specifically describes how to determine the pose conversion matrix from the laser radar to the combined inertial navigation from the at least three pairs of linear point cloud data converted into the preset coordinate system. As shown in fig. 4, the method of this embodiment specifically includes the following steps:
s401, determining a pair of straight line point cloud data of a target edge line of each static object according to a point cloud data set acquired by the laser radar for each static object in at least three static objects at two acquisition positions.
Wherein the target edge lines on the at least three stationary objects are not coplanar.
S402, converting at least three pairs of straight line point cloud data into a preset coordinate system according to an inertial navigation data set acquired by the combined inertial navigation at two acquisition positions for each static object in at least three static objects.
Optionally, in this step each linear point cloud datum of each pair may be converted from the radar coordinate system into the preset coordinate system according to the formula P′ = RCP, where P is the linear point cloud data in the radar coordinate system to be converted, R is the coordinate system conversion matrix from the combined inertial navigation coordinate system to the preset coordinate system at the acquisition position corresponding to that linear point cloud data, C is the to-be-determined pose conversion matrix from the laser radar to the combined inertial navigation, and P′ is the linear point cloud data converted into the preset coordinate system.
And S403, determining a linear distance expression of the first linear point cloud data and the second linear point cloud data according to the first linear point cloud data and the second linear point cloud data in each pair of linear point cloud data converted into a preset coordinate system.
Specifically, the first linear point cloud data and the second linear point cloud data each represent a straight line, and there are many methods for determining a linear distance expression between the two straight lines from the first and second linear point cloud data of each pair; this embodiment is not limited in this respect. It can be realized by the following three sub-steps:
s4031, for each pair of linear point cloud data converted into the preset coordinate system, determine first target point cloud data of the first linear point cloud data and second target point cloud data of the second linear point cloud data in the pair of linear point cloud data.
Optionally, since two points determine a straight line, this sub-step may, for each pair of linear point cloud data converted into the preset coordinate system (i.e., the first and second linear point cloud data of one stationary object), select at least two points from the first linear point cloud data as first target points and take their point cloud data (e.g., coordinate values (x, y, z)) as the first target point cloud data, and select at least two points from the second linear point cloud data as second target points and take their point cloud data as the second target point cloud data. Optionally, to ensure the accuracy of the linear distance expression determined in this application, this sub-step preferably selects, as target points, at least two points that are relatively far apart on the straight line corresponding to the linear point cloud data.
Alternatively, two first target point cloud data P'11 and P'12 may be selected from the first straight-line point cloud data in each pair of straight-line point cloud data converted into the preset coordinate system, and two second target point cloud data P'21 and P'22 may be selected from the second straight-line point cloud data in the pair of straight-line point cloud data. Optionally, in this sub-step, according to the coordinate system conversion process introduced in S402, the expressions of P'11, P'12, P'21 and P'22 are as follows:
P'11 = RCP11, P'12 = RCP12, P'21 = RCP21, P'22 = RCP22
where P11, P12, P21 and P22 are respectively the point cloud data of P'11, P'12, P'21 and P'22 in the radar coordinate system before conversion into the preset coordinate system; R is the coordinate system conversion matrix from the combined inertial navigation coordinate system to the preset coordinate system at the acquisition position corresponding to the straight-line point cloud data; and C is the to-be-determined pose conversion matrix from the laser radar to the combined inertial navigation.
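As a minimal sketch of the preferred far-apart target-point selection described above (a brute-force search, adequate for small edge-line clouds; numpy and the function name are assumptions of this example):

    import numpy as np

    def pick_target_points(line_pts):
        # line_pts: N x 3 straight-line point cloud of one edge line.
        # Returns the two points with the largest mutual distance, so that
        # the later straight-line distance expressions are well conditioned.
        diff = line_pts[:, None, :] - line_pts[None, :, :]   # N x N x 3
        i, j = np.unravel_index(np.argmax(np.linalg.norm(diff, axis=-1)),
                                (len(line_pts), len(line_pts)))
        return line_pts[i], line_pts[j]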
S4032, determining a direction vector of the second linear point cloud data according to the second target point cloud data.
Optionally, if the second target point cloud data are P'21 and P'22, the direction vector of the second straight-line point cloud data may be determined according to the formula

n = (x'22 - x'21, y'22 - y'21, z'22 - z'21)

where n is the direction vector of the second straight-line point cloud data; x'21 and x'22 are respectively the x-direction coordinate values of P'21 and P'22 in the second target point cloud data; y'21 and y'22 are respectively their y-direction coordinate values; and z'21 and z'22 are respectively their z-direction coordinate values.
It should be noted that, in this sub-step, the direction vector of the second straight-line point cloud data may also be calculated in other manners; this embodiment is not limited in this respect.
S4033, determining a linear distance expression of the first linear point cloud data and the second linear point cloud data according to the first target point cloud data, the second target point cloud data and the direction vector.
Optionally, if the first target point cloud data are P'11 and P'12, the second target point cloud data are P'21 and P'22, and the direction vector of the second straight-line point cloud data is n, then 4 straight-line distance expressions of the first straight-line point cloud data and the second straight-line point cloud data may be obtained according to the following point-to-line distance formulas (1)-(4):

d1 = |(P'11 - P'21) × n| / |n|  (1)
d2 = |(P'12 - P'21) × n| / |n|  (2)
d3 = |(P'11 - P'22) × n| / |n|  (3)
d4 = |(P'12 - P'22) × n| / |n|  (4)
Note that P'11, P'12, P'21 and P'22 in the above formulas (1)-(4) contain the to-be-determined pose conversion matrix from the laser radar to the combined inertial navigation, so the straight-line distance expressions determined according to P'11, P'12, P'21 and P'22 also contain the to-be-determined pose conversion matrix from the laser radar to the combined inertial navigation.
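To make S4032 and S4033 concrete, here is a hedged numeric sketch of the formulas (1)-(4) as reconstructed above: the direction vector is taken as n = P'22 - P'21, and each expression is the point-to-line distance |(P - Q) × n| / |n|; all names are assumptions of this example, not the patented implementation:

    import numpy as np

    def line_distances(p11, p12, p21, p22):
        # All arguments: 3-vectors already converted into the preset frame.
        n = p22 - p21                                  # S4032: direction vector
        def dist(p, q):                                # point-to-line distance
            return np.linalg.norm(np.cross(p - q, n)) / np.linalg.norm(n)
        # Formulas (1)-(4): both first-line target points measured against
        # the second line, anchored at each of its two target points.
        return [dist(p11, p21), dist(p12, p21), dist(p11, p22), dist(p12, p22)]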
S404, constructing a target equation set according to the linear distance expression of at least three pairs of linear point cloud data.
Optionally, the to-be-determined pose conversion matrix from the laser radar to the combined inertial navigation includes 9 pose calibration parameters corresponding to the rotation transformation and 3 pose calibration parameters corresponding to the translation transformation, i.e., 12 unknown parameters, so the constructed target equation set includes at least 12 equations. Since 4 linear distance expressions can be determined for each pair of linear point cloud data in S403, this step constructs the target equation set from the at least 12 linear distance expressions determined from at least three pairs of linear point cloud data.
Optionally, after at least three pairs of straight-line point cloud data are converted into the preset coordinate system, each pair of straight-line point cloud data corresponds to the same target edge line of the same stationary object in space; that is, the first straight-line point cloud data and the second straight-line point cloud data in each pair converted into the preset coordinate system should lie on the same straight line, so the straight-line distance between them should be 0 or a value infinitely close to 0. Based on this position relationship of each pair of straight-line point cloud data, the at least 12 straight-line distance expressions of the first and second straight-line point cloud data determined in S403 are set equal to 0 or to an infinitesimal, so as to obtain at least 12 straight-line distance equations, and these at least 12 straight-line distance equations are combined to construct the target equation set.
S405, solving the target equation set to obtain the pose conversion matrix from the laser radar to the combined inertial navigation.
Optionally, because the unknown parameters in the linear distance expressions determined in S403 are the entries of the pose conversion matrix from the laser radar to the combined inertial navigation, the unknown parameters of the target equation set constructed from these expressions are likewise the entries of that matrix; therefore, the pose conversion matrix from the laser radar to the combined inertial navigation can be obtained by solving the target equation set in this step.
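As one possible way to solve such a target equation set numerically (a sketch under stated assumptions, not the patented solver): parameterize C by its 9 rotation entries and 3 translation entries, stack the at-least-12 distance residuals, and drive them toward 0 with nonlinear least squares. The sketch assumes scipy and the hypothetical line_distances helper above; a practical implementation would additionally keep the 9 rotation entries orthonormal, e.g. by parameterizing the rotation with Euler angles or a quaternion:

    import numpy as np
    from scipy.optimize import least_squares

    def solve_pose(line_pairs, inav_mats):
        # line_pairs: per stationary object, (P11, P12, P21, P22) target
        #   points in the radar frame; inav_mats: matching (R1, R2) 4 x 4
        #   inertial-navigation -> preset-frame transforms at the two
        #   acquisition positions. Returns the estimated 4 x 4 matrix C.
        def unpack(x):
            C = np.eye(4)
            C[:3, :3] = x[:9].reshape(3, 3)   # 9 rotation parameters
            C[:3, 3] = x[9:]                  # 3 translation parameters
            return C

        def residuals(x):
            C = unpack(x)
            h = lambda R, p: (R @ C @ np.append(p, 1.0))[:3]   # P' = RCP
            res = []
            for (p11, p12, p21, p22), (R1, R2) in zip(line_pairs, inav_mats):
                res += line_distances(h(R1, p11), h(R1, p12),
                                      h(R2, p21), h(R2, p22))
            return res                        # every distance should be ~0

        x0 = np.hstack([np.eye(3).ravel(), np.zeros(3)])   # start at identity
        return unpack(least_squares(residuals, x0).x)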
According to the pose calibration method for the laser radar and the combined inertial navigation provided by this embodiment, a pair of straight-line point cloud data of the target edge line of each stationary object is determined according to the point cloud data set and the inertial navigation data set acquired, at two different acquisition positions, by the rigidly connected laser radar and combined inertial navigation for each of at least three stationary objects. Each pair of straight-line point cloud data is converted into the preset coordinate system, a straight-line distance expression of the first point cloud data and the second point cloud data in each pair is determined, and a target equation set is constructed and solved by combining the distance relationship between the first point cloud data and the second point cloud data of each pair in the preset coordinate system, so as to obtain the pose conversion matrix from the laser radar to the combined inertial navigation. In the technical solution of the embodiment of the present invention, an equation set is constructed based on the distance relationship between each pair of straight-line point cloud data in the preset coordinate system and solved for the pose conversion matrix from the laser radar to the combined inertial navigation, which provides a new idea for solving the pose conversion matrix from the laser radar to the combined inertial navigation and at the same time improves the accuracy of the calibration result.
EXAMPLE five
Fig. 5 is a schematic structural diagram of a pose calibration apparatus for a laser radar and a combined inertial navigation according to a fifth embodiment of the present invention. The apparatus can execute the pose calibration method for the laser radar and the combined inertial navigation provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method. As shown in fig. 5, the apparatus specifically includes:
a straight-line point cloud determination module 501, configured to determine a pair of straight-line point cloud data of the target edge line of each stationary object according to a point cloud data set acquired by the laser radar for each stationary object of the at least three stationary objects at two acquisition positions; wherein the target edge lines on the at least three stationary objects are not coplanar;
a coordinate system conversion module 502, configured to convert at least three pairs of straight-line point cloud data into a preset coordinate system according to an inertial navigation data set acquired by the combined inertial navigation for each of the at least three stationary objects at the two acquisition positions; wherein the laser radar is rigidly connected with the combined inertial navigation system;
and a pose matrix determination module 503, configured to determine the pose conversion matrix from the laser radar to the combined inertial navigation according to at least three pairs of straight-line point cloud data converted into the preset coordinate system.
According to the pose calibration apparatus for the laser radar and the combined inertial navigation provided by the embodiment of the present invention, a pair of straight-line point cloud data of the target edge line of each stationary object is determined according to the point cloud data set and the inertial navigation data set acquired, at two different acquisition positions, by the rigidly connected laser radar and combined inertial navigation for each of at least three stationary objects; each pair of straight-line point cloud data is converted into the preset coordinate system, and the pose conversion matrix from the laser radar to the combined inertial navigation is determined according to each pair of point cloud data converted into the preset coordinate system. The technical solution of this embodiment requires neither manual measurement nor other measuring instruments during the whole process of calibrating the relative pose of the laser radar and the combined inertial navigation, thereby avoiding inaccurate calibration results caused by measurement errors, greatly improving the accuracy of the calibration result and reducing the calibration cost. It also provides a new idea for calibrating the relative pose between the laser radar and the combined inertial navigation.
Further, the above apparatus further comprises:
and the data acquisition module is used for acquiring a point cloud data set and an inertial navigation data set acquired by the rigidly connected laser radar and the combined inertial navigation on each of at least three static objects at two acquisition positions.
Further, the data acquisition module is specifically configured to:
for each static object in the at least three static objects, sequentially acquiring a point cloud data set and an inertial navigation data set acquired by the rigidly connected laser radar and combined inertial navigation for the static object at two acquisition positions; or,
acquiring a point cloud data set and an inertial navigation data set acquired by the rigidly connected laser radar and combined inertial navigation for the at least three static objects at two acquisition positions.
Further, the laser radar is vertically installed relative to a horizontal plane.
Further, the straight-line point cloud determination module 501 is specifically configured to:
generating a visual image of a first point cloud data set of each static object at a first acquisition position and a visual image of a second point cloud data set of each static object at a second acquisition position according to a point cloud data set acquired by a laser radar for each static object at two acquisition positions;
taking point cloud data representing a target edge line of each static object in a visual image of the first point cloud data set of each static object as first linear point cloud data of the target edge line of the static object;
and taking the point cloud data representing the target edge line of the static object in the visual image of the second point cloud data set of each static object as second straight line point cloud data of the target edge line of the static object.
Further, the coordinate system converting module 502 is specifically configured to:
determining a pair of coordinate system conversion matrixes of each static object from an inertial navigation coordinate system to a preset coordinate system at a first acquisition position and a second acquisition position according to an inertial navigation data set acquired by the combined inertial navigation on each static object in the at least three static objects at the two acquisition positions;
and converting at least three pairs of linear point cloud data from the radar coordinate system to the preset coordinate system according to the at least three pairs of coordinate system conversion matrixes and the to-be-determined pose conversion matrix from the laser radar to the combined inertial navigation.
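For illustration only, the coordinate system conversion matrix at each acquisition position can be assembled from a combined inertial navigation reading as in the hedged sketch below, which assumes the reading supplies a position in the preset frame and an attitude as roll/pitch/yaw in a Z-Y-X Euler convention; the helper name and these conventions are assumptions of this example, not the patented method:

    import numpy as np

    def inav_to_preset_matrix(pos, roll, pitch, yaw):
        # Build the 4 x 4 inertial-navigation -> preset-frame transform from
        # one combined inertial navigation reading: position (x, y, z) in the
        # preset frame and roll/pitch/yaw in radians (Z-Y-X convention).
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        T = np.eye(4)
        T[:3, :3] = Rz @ Ry @ Rx      # apply roll, then pitch, then yaw
        T[:3, 3] = pos
        return T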
Further, the pose matrix determination module 503 includes:
a distance expression determining unit, configured to determine a linear distance expression of the first linear point cloud data and the second linear point cloud data according to the first linear point cloud data and the second linear point cloud data in each pair of linear point cloud data converted into the preset coordinate system; wherein the linear distance expression contains the to-be-determined pose conversion matrix from the laser radar to the combined inertial navigation;
the equation set building unit is used for building a target equation set according to linear distance expressions of at least three pairs of linear point cloud data;
and the pose matrix solving unit is used for solving the target equation set to obtain a pose conversion matrix from the laser radar to the combined inertial navigation.
Further, the distance expression determining unit is specifically configured to:
determining first target point cloud data of first linear point cloud data and second target point cloud data of second linear point cloud data in the pair of linear point cloud data aiming at each pair of linear point cloud data converted into a preset coordinate system;
determining a direction vector of the second linear point cloud data according to the second target point cloud data;
and determining a linear distance expression of the first linear point cloud data and the second linear point cloud data according to the first target point cloud data, the second target point cloud data and the direction vector.
EXAMPLE six
Fig. 6A is a schematic structural diagram of a mapping system according to a sixth embodiment of the present invention, and fig. 6B is a schematic structural diagram of a control device of the mapping system according to the sixth embodiment of the present invention. The mapping system 6 shown in fig. 6A comprises a lidar 61, a combined inertial navigation 62 and a control device 60. FIG. 6B illustrates a block diagram of an exemplary control device 60 suitable for use in implementing embodiments of the present invention. The control device 60 shown in fig. 6B is only an example, and should not bring any limitation to the functions and the scope of use of the embodiment of the present invention. As shown in fig. 6B, the control device 60 is in the form of a general purpose computing device. The components of the control device 60 may include, but are not limited to: one or more processors 601, a storage device 602, and a bus 603 that couples various system components (including the storage device 602 and the processors 601).
Bus 603 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Control device 60 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by control device 60 and includes both volatile and nonvolatile media, removable and non-removable media.
The storage device 602 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 604 and/or cache memory 605. The control device 60 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 606 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 6B and commonly referred to as a "hard drive"). Although not shown in FIG. 6B, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to the bus 603 by one or more data media interfaces. The storage device 602 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 608 having a set (at least one) of program modules 607 may be stored, for example, in the storage device 602. Such program modules 607 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may comprise an implementation of a network environment. The program modules 607 generally perform the functions and/or methods of the described embodiments of the invention.
The control device 60 may also communicate with one or more external devices 609 (e.g., keyboard, pointing device, display 610, etc.), with one or more devices that enable a user to interact with the device, and/or with any devices (e.g., network card, modem, etc.) that enable the control device 60 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 611. Also, the control device 60 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 612. As shown in fig. 6B, the network adapter 612 communicates with the other modules of the control device 60 via the bus 603. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the control device 60, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 601 executes programs stored in the storage device 602, thereby performing various functional applications and data processing, for example implementing the pose calibration method for the laser radar and the combined inertial navigation provided by the embodiments of the present invention.
EXAMPLE seven
The seventh embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, can implement the method for calibrating the pose of the laser radar and the combined inertial navigation described in the foregoing embodiments.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer-readable storage medium may be, for example but not limited to: an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The above embodiment numbers are for description only and do not represent the relative merits of the embodiments.
It will be appreciated by those of ordinary skill in the art that the modules or operations of the embodiments of the invention described above may be implemented using a general-purpose computing device, and may be centralized on a single computing device or distributed across a network of computing devices. Alternatively, they may be implemented with program code executable by a computing device, so that the program code is stored in a storage device and executed by a computing device; or the modules or operations may be separately fabricated as individual integrated circuit modules, or several of them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts in the embodiments are referred to each other.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A pose calibration method of a laser radar and a combined inertial navigation is characterized by comprising the following steps:
determining a pair of linear point cloud data of a target edge line of each static object according to a point cloud data set acquired by a laser radar at two acquisition positions for each static object in at least three static objects; wherein the target edge lines on the at least three stationary objects are not coplanar;
converting at least three pairs of straight line point cloud data into a preset coordinate system according to an inertial navigation data set acquired by the combined inertial navigation at the two acquisition positions for each of the at least three stationary objects; wherein the laser radar is rigidly connected with the combined inertial navigation system;
and determining a pose conversion matrix from the laser radar to the combined inertial navigation according to at least three pairs of linear point cloud data converted into the preset coordinate system.
2. The method of claim 1, wherein prior to determining a pair of straight-line point cloud data for a target edge line of each stationary object from the point cloud data sets acquired by the lidar for each of the at least three stationary objects at two acquisition locations, further comprising:
point cloud data sets and inertial navigation data sets acquired by rigidly connected laser radars and combined inertial navigation for each of at least three stationary objects at two acquisition positions are acquired.
3. The method of claim 2, wherein acquiring the point cloud dataset and the inertial navigation dataset acquired by the rigidly connected lidar and the combined inertial navigation for each of at least three stationary objects at two acquisition locations comprises:
for each static object in the at least three static objects, sequentially acquiring a point cloud data set and an inertial navigation data set acquired by the rigidly connected laser radar and combined inertial navigation for the static object at two acquisition positions; or,
acquiring a point cloud data set and an inertial navigation data set acquired by the rigidly connected laser radar and combined inertial navigation for the at least three static objects at two acquisition positions.
4. The method according to any one of claims 1-2, wherein the laser radar is mounted vertically with respect to a horizontal plane.
5. The method of claim 1, wherein determining a pair of straight-line point cloud data for a target edge line of each stationary object from the point cloud data sets acquired by the lidar for each of the at least three stationary objects at two acquisition locations comprises:
generating a visual image of a first point cloud data set of each static object at a first acquisition position and a visual image of a second point cloud data set of each static object at a second acquisition position according to a point cloud data set acquired by a laser radar for each static object at two acquisition positions;
taking point cloud data representing a target edge line of each static object in a visual image of the first point cloud data set of each static object as first linear point cloud data of the target edge line of the static object;
and taking the point cloud data representing the target edge line of the static object in the visual image of the second point cloud data set of each static object as second straight line point cloud data of the target edge line of the static object.
6. The method of claim 1, wherein converting at least three pairs of straight-line point cloud data to a preset coordinate system according to an inertial navigation data set acquired by the combined inertial navigation for each of the at least three stationary objects at the two acquisition positions comprises:
determining a pair of coordinate system conversion matrixes of each static object from an inertial navigation coordinate system to a preset coordinate system at a first acquisition position and a second acquisition position according to an inertial navigation data set acquired by the combined inertial navigation on each static object in the at least three static objects at the two acquisition positions;
and converting at least three pairs of linear point cloud data from the radar coordinate system to the preset coordinate system according to the at least three pairs of coordinate system conversion matrixes and the to-be-determined pose conversion matrix from the laser radar to the combined inertial navigation.
7. The method of claim 1, wherein determining a pose transformation matrix of the laser radar to the combined inertial navigation from at least three pairs of straight-line point cloud data transformed to a preset coordinate system comprises:
determining a linear distance expression of the first linear point cloud data and the second linear point cloud data according to the first linear point cloud data and the second linear point cloud data in each pair of linear point cloud data converted into the preset coordinate system; wherein the linear distance expression contains the to-be-determined pose conversion matrix from the laser radar to the combined inertial navigation;
constructing a target equation set according to linear distance expressions of at least three pairs of linear point cloud data;
and solving the target equation set to obtain a pose transformation matrix from the laser radar to the combined inertial navigation.
8. The method of claim 7, wherein determining the linear distance expression of the first and second linear point cloud data according to the first and second linear point cloud data in each pair of linear point cloud data converted to the preset coordinate system comprises:
determining first target point cloud data of first linear point cloud data and second target point cloud data of second linear point cloud data in the pair of linear point cloud data aiming at each pair of linear point cloud data converted into a preset coordinate system;
determining a direction vector of the second linear point cloud data according to the second target point cloud data;
and determining a linear distance expression of the first linear point cloud data and the second linear point cloud data according to the first target point cloud data, the second target point cloud data and the direction vector.
9. A surveying and mapping system, characterized by comprising a laser radar, a combined inertial navigation and a control device, wherein the control device is connected to the laser radar and the combined inertial navigation respectively, and the control device comprises:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the pose calibration method of the laser radar and the combined inertial navigation according to any one of claims 1-8.
10. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method for pose calibration of lidar and combined inertial navigation according to any one of claims 1 to 8.
CN201911221495.XA 2019-12-03 2019-12-03 Pose calibration method, system and medium for laser radar and combined inertial navigation Active CN110849363B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911221495.XA CN110849363B (en) 2019-12-03 2019-12-03 Pose calibration method, system and medium for laser radar and combined inertial navigation


Publications (2)

Publication Number Publication Date
CN110849363A true CN110849363A (en) 2020-02-28
CN110849363B CN110849363B (en) 2021-09-21

Family

ID=69607388

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911221495.XA Active CN110849363B (en) 2019-12-03 2019-12-03 Pose calibration method, system and medium for laser radar and combined inertial navigation

Country Status (1)

Country Link
CN (1) CN110849363B (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180190046A1 (en) * 2015-11-04 2018-07-05 Zoox, Inc. Calibration for autonomous vehicle operation
CN109297510A (en) * 2018-09-27 2019-02-01 百度在线网络技术(北京)有限公司 Relative pose scaling method, device, equipment and medium
CN109901138A (en) * 2018-12-28 2019-06-18 文远知行有限公司 Laser radar scaling method, device, equipment and storage medium
CN109945856A (en) * 2019-02-18 2019-06-28 天津大学 Based on inertia/radar unmanned plane autonomous positioning and build drawing method
CN110031824A (en) * 2019-04-12 2019-07-19 杭州飞步科技有限公司 Laser radar combined calibrating method and device
CN110361010A (en) * 2019-08-13 2019-10-22 中山大学 It is a kind of based on occupy grating map and combine imu method for positioning mobile robot
CN110517284A (en) * 2019-08-13 2019-11-29 中山大学 A kind of target tracking method based on laser radar and Pan/Tilt/Zoom camera

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
W.I. LIU et al.: "Error modeling and extrinsic-intrinsic calibration for LiDAR-IMU system based on cone-cylinder features", Robotics and Autonomous Systems *
ZHANG Yanguo et al.: "Lidar point cloud fusion method based on inertial measurement unit", Journal of System Simulation *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021174507A1 (en) * 2020-03-05 2021-09-10 深圳市大疆创新科技有限公司 Parameter calibration method, device, and system, and storage medium
CN113767264A (en) * 2020-03-05 2021-12-07 深圳市大疆创新科技有限公司 Parameter calibration method, device, system and storage medium
CN112578394A (en) * 2020-11-25 2021-03-30 中国矿业大学 LiDAR/INS fusion positioning and drawing method with geometric constraint
CN112578394B (en) * 2020-11-25 2022-09-27 中国矿业大学 LiDAR/INS fusion positioning and drawing method with geometric constraint
CN112180348A (en) * 2020-11-27 2021-01-05 深兰人工智能(深圳)有限公司 Attitude calibration method and device for vehicle-mounted multi-line laser radar
CN112180348B (en) * 2020-11-27 2021-03-02 深兰人工智能(深圳)有限公司 Attitude calibration method and device for vehicle-mounted multi-line laser radar
WO2023028823A1 (en) * 2021-08-31 2023-03-09 深圳市速腾聚创科技有限公司 Radar calibration method and apparatus, and terminal device and storage medium

Also Published As

Publication number Publication date
CN110849363B (en) 2021-09-21

Similar Documents

Publication Publication Date Title
CN110849363B (en) Pose calibration method, system and medium for laser radar and combined inertial navigation
CN112654886B (en) External parameter calibration method, device, equipment and storage medium
US11480443B2 (en) Method for calibrating relative pose, device and medium
CN109521403B (en) Parameter calibration method, device and equipment of multi-line laser radar and readable medium
CN109270545B (en) Positioning true value verification method, device, equipment and storage medium
CN110686704A (en) Pose calibration method, system and medium for laser radar and combined inertial navigation
CN110780285A (en) Pose calibration method, system and medium for laser radar and combined inertial navigation
CN110764111B (en) Conversion method, device, system and medium of radar coordinates and geodetic coordinates
US8817093B2 (en) Photogrammetric networks for positional accuracy
US11250622B2 (en) Map creation system and map creation method
JP2019152576A (en) Columnar object state detector, columnar object state detection method, and columnar object state detection processing program
CN112824828B (en) Laser tracker station position determination method and system, electronic device and medium
CN108090212B (en) Method, device and equipment for showing interest points and storage medium
CN113592951A (en) Method and device for calibrating external parameters of vehicle-road cooperative middle-road side camera and electronic equipment
CN113759348A (en) Radar calibration method, device, equipment and storage medium
CN110647600A (en) Three-dimensional map construction method and device, server and storage medium
El-Hakim et al. Sensor based creation of indoor virtual environment models
CN115063489A (en) External parameter calibration method, device, equipment and storage medium
WO2022059051A1 (en) Device, method, and program which convert coordinates of 3d point cloud
CN111880182A (en) Meteorological environment data analysis method and system, storage medium and radar
CN112485774B (en) Vehicle-mounted laser radar calibration method, device, equipment and storage medium
CN113126058A (en) Memory, control method and device for airborne laser radar system
CN112822632B (en) Dynamic attitude position compensation method, system, electronic device, and medium
Jamali et al. 3D Indoor building environment reconstruction using calibration of range finder data
Jamali et al. 3D indoor building environment reconstruction using least square adjustment, polynomial kernel, interval analysis and homotopy continuation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant