CN112082545B - Map generation method, device and system based on IMU and laser radar

Map generation method, device and system based on IMU and laser radar

Info

Publication number
CN112082545B
CN112082545B (application CN202010741680.8A)
Authority
CN
China
Prior art keywords
point cloud
cloud data
imu
dimensional point
map generation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010741680.8A
Other languages
Chinese (zh)
Other versions
CN112082545A (en)
Inventor
胡钊政
朱云涛
陶倩文
李飞
彭超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Weitu Chuanshi Technology Co ltd
Original Assignee
Wuhan Weitu Chuanshi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Weitu Chuanshi Technology Co ltd filed Critical Wuhan Weitu Chuanshi Technology Co ltd
Priority to CN202010741680.8A priority Critical patent/CN112082545B/en
Publication of CN112082545A publication Critical patent/CN112082545A/en
Application granted granted Critical
Publication of CN112082545B publication Critical patent/CN112082545B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position
    • G01S19/45 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position
    • G01S19/45 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/47 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)

Abstract

The invention relates to the technical field of map construction based on computer vision, and discloses a map generation method based on an IMU (inertial measurement unit) and a laser radar, comprising the following steps: acquiring three-dimensional point cloud data collected by a laser radar and inertial data collected by an IMU; generating a projection map onto the ground from the three-dimensional point cloud data, and calculating the translation amount of the vehicle from the projection maps of adjacent frames of three-dimensional point cloud data; calculating the rotation amount of the vehicle between adjacent frames of three-dimensional point cloud data from the inertial data; drawing track nodes by combining the translation amount and the rotation amount to obtain an odometer track; acquiring a GPS track and correcting the odometer track with the GPS track; and extracting the feature information of each frame of three-dimensional point cloud data and associating each piece of feature information with the GPS coordinates in the GPS track according to the corrected odometer track to obtain a three-dimensional map. The invention has the technical effects of high map precision, fast map building and a small memory footprint.

Description

Map generation method, device and system based on IMU and laser radar
Technical Field
The invention relates to the technical field of map construction based on computer vision, and in particular to a map generation method, device, system and computer storage medium based on an IMU (inertial measurement unit) and a laser radar.
Background
A high-precision map is the basis for positioning in automatic driving. A traditional map combined with GPS positioning is simple and convenient to operate, but its positioning precision only reaches about 10 m. Point cloud maps built up by accumulating matched laser point clouds are also common, but their inter-frame matching is slow, they occupy a large amount of memory, and the mapping process is complex.
Disclosure of Invention
The invention aims to overcome these technical defects by providing a map generation method, device, system and computer storage medium based on an IMU (inertial measurement unit) and a laser radar, solving the prior-art technical problems of a complex point cloud mapping process and a slow inter-frame matching algorithm.
In order to achieve the technical purpose, the technical scheme of the invention provides a map generation method based on an IMU and a laser radar, which comprises the following steps:
acquiring three-dimensional point cloud data acquired by a laser radar and acquiring inertial data acquired by an IMU (inertial measurement Unit);
generating a projection drawing on the ground according to the three-dimensional point cloud data, and calculating the translation amount of the vehicle according to the projection drawing of the adjacent frame of three-dimensional point cloud data;
calculating the rotation amount of the vehicle between the adjacent frames of three-dimensional point cloud data according to the inertial data;
drawing a track node by combining the translation amount and the rotation amount to obtain an odometer track;
acquiring a GPS track, and correcting the odometer track by using the GPS track;
extracting the characteristic information of each frame of the three-dimensional point cloud data, and associating each characteristic information with the GPS coordinate in the GPS track according to the corrected odometer track to obtain a three-dimensional map.
The invention also provides a map generation device based on the IMU and the lidar, comprising a processor and a memory, the memory storing a computer program which, when executed by the processor, implements the map generation method based on the IMU and the lidar.
The invention also provides a map generation system based on the IMU and the lidar, which comprises a map generation device based on the IMU and the lidar, a vehicle body, the lidar, the IMU and a GPS receiver;
the map generation device based on the IMU and the lidar, the lidar, the IMU and the GPS receiver are all arranged on the vehicle body, and the lidar, the IMU and the GPS receiver are respectively electrically connected with the map generation device based on the IMU and the lidar;
the laser radar is used for acquiring three-dimensional point cloud data and sending the three-dimensional point cloud data to the map generation device based on the IMU and the laser radar;
the IMU is used for collecting inertial data of a vehicle and sending the inertial data to the map generation device based on the IMU and the laser radar;
the GPS receiver is connected with a remote differential GPS reference station and used for collecting a GPS track of a vehicle and sending the GPS track to the map generating device based on the IMU and the laser radar.
The present invention also provides a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the IMU and lidar based map generation method.
Compared with the prior art, the invention has the following beneficial effects. The invention provides a high-precision map generation method based on the fusion of a vehicle-mounted IMU and a laser radar. The method converts the three-dimensional point cloud data into a two-dimensional projection map and calculates the translation amount from the projection map, avoiding the high complexity of matching and mapping directly with the three-dimensional point cloud data and thereby speeding up the matching algorithm. Meanwhile, the rotation amount is calculated from the inertial data of the IMU; combining the high-precision angular rotation acquired by the IMU with the translation calculated from the projection map yields high-precision pose information and overcomes the low precision of the traditional phase correlation method, which can only calculate the translation amount. The method generates maps quickly, with high precision and a small memory footprint, and can meet the requirements of high-precision map construction for intelligent vehicles.
Drawings
FIG. 1 is a flow chart of an embodiment of a map generation method based on IMU and lidar provided by the present invention;
FIG. 2 is a projection result diagram of an embodiment of a projection diagram obtained by projecting three-dimensional point cloud data on the ground according to the present invention;
FIG. 3 is a graph of a spectrum result of an embodiment of a projected graph converted to a frequency domain to obtain a spectrogram according to the present invention;
fig. 4 is a schematic structural diagram of an embodiment of the map generation system based on the IMU and the lidar provided in the present invention.
Reference numerals:
1. a vehicle body; 2. a laser radar; 3. an IMU; 4. a GPS receiver; 5. a display; 6. a differential GPS reference station; 7. an industrial personal computer; 8. a power supply.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1
As shown in fig. 1, embodiment 1 of the present invention provides a map generation method based on an IMU and a lidar, including the following steps:
s1, acquiring three-dimensional point cloud data acquired by a laser radar, and acquiring inertial data acquired by an IMU;
s2, generating a projection drawing on the ground according to the three-dimensional point cloud data, and calculating the translation amount of the vehicle according to the projection drawing of the adjacent frame three-dimensional point cloud data;
s3, calculating the rotation amount of the vehicle between the adjacent frames of three-dimensional point cloud data according to the inertia data;
s4, drawing a track node by combining the translation amount and the rotation amount to obtain an odometer track;
s5, acquiring a GPS track, and correcting the odometer track by using the GPS track;
and S6, extracting the feature information of each frame of the three-dimensional point cloud data, and associating each feature information with the GPS coordinate in the GPS track according to the corrected odometer track to obtain a three-dimensional map.
The method first acquires the original three-dimensional point cloud data collected by the vehicle-mounted laser radar, together with the GPS data and the inertial data. The ground normal is extracted and estimated from the point cloud data, a projection map is obtained from the point cloud data, and a translation vector is then obtained with the phase-only correlation method. A rotation matrix, i.e. the rotation amount, is obtained from the inertial data collected by the IMU (inertial measurement unit), and the odometer track is drawn from the rotation amount and translation amount obtained from two adjacent frames of lidar data. The high-precision position of the vehicle, i.e. the GPS track, is obtained by combining the GPS data with RTK (real-time kinematic) technology and is used to correct the odometer track; the feature information of each track node of the vehicle is associated with the corresponding high-precision GPS coordinate through data association, and the three-dimensional map is obtained by combining this with the high-precision GPS track of the vehicle.
By converting the three-dimensional point cloud data into a two-dimensional projection map for the translation calculation, the method avoids the high complexity of matching and mapping directly with the three-dimensional point cloud data, which speeds up the matching algorithm. The rotation amount is calculated from the accurate angles of the IMU while the translation amount is obtained from the point cloud data; combining the high-precision angular rotation acquired by the IMU with the translation calculated from the projection map yields high-precision pose information and overcomes the low precision of the traditional phase correlation method, which can only calculate the translation amount. The method generates maps quickly, with high precision and a small memory footprint, and can meet the requirements of high-precision map construction for intelligent vehicles.
Preferably, a projection map on the ground is generated according to the three-dimensional point cloud data, specifically:
filtering and downsampling the three-dimensional point cloud data by using a voxel grid filter to obtain filtered point cloud data;
screening a ground point set from the three-dimensional point cloud data by using a condition screening method;
fitting the ground point set to obtain a ground equation, and solving a normal of the ground equation to obtain a ground normal;
and projecting the filtered point cloud data along the ground normal direction to obtain the projection diagram.
The voxel grid filter is used to filter and downsample the original three-dimensional point cloud data. The laser point cloud collected directly from the environment contains a large number of outliers, isolated points and noise points that strongly affect subsequent processing; filtering these points out greatly reduces the error of the subsequent plane fitting, and because the high density of the original point cloud would otherwise increase the computation of subsequent processing, the downsampling also reduces that load. Once the three-dimensional point cloud data with noise points and outliers removed and with greatly reduced density has been obtained, the ground is fitted with the condition screening method and a fitting algorithm to estimate the normal, giving the ground normal at the position of the vehicle; the point cloud data with the ground removed is then projected along the normal direction to obtain the projection map.
Filtering and downsampling the three-dimensional point cloud data by using a voxel grid filter to obtain filtered point cloud data, which specifically comprises the following steps:
dividing the three-dimensional point cloud data into a plurality of grids, calculating the gravity center of each grid, and replacing all points in the corresponding grid with the gravity center to obtain filtered point cloud data;
screening out a ground point set from the three-dimensional point cloud data by using a condition screening method, which specifically comprises the following steps:
screening out points with the height smaller than the set height in the filtered point cloud data as a ground candidate point set;
sorting the points in the ground candidate point set according to the heights, calculating the average height of all the points in the ground candidate point set, and calculating the height of a threshold value according to the average height;
calculating the normal of a point with the height smaller than the threshold height in the ground candidate point set, and eliminating the point with the included angle between the normal and the Z axis in the ground candidate point set larger than a set angle to obtain a final ground point set;
fitting the ground point set to obtain a ground equation, which specifically comprises the following steps:
and fitting the ground point set by using a RANSAC algorithm to obtain a ground equation.
Specifically, the voxel grid filtering proceeds as follows. The original three-dimensional point cloud data is divided into small cubic grids with a side length of 5 cm; the point set inside a grid is {p1, p2, p3, ..., pn}, where n is the number of points contained in the grid, and the coordinates of the points in the grid are p1:(x1, y1), p2:(x2, y2), p3:(x3, y3), ..., pn:(xn, yn). The barycenter p0(x0, y0) of all points in each cubic grid is then calculated as

x0 = (x1 + x2 + ... + xn) / n
y0 = (y1 + y2 + ... + yn) / n

All points in the grid are then represented by the barycenter point, giving the filtered point cloud data.
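As a rough illustration, the following NumPy sketch implements the barycenter-based voxel downsampling described above; the 5 cm grid size follows the text, while the function name and the use of full 3D coordinates are assumptions of the example.

    import numpy as np

    def voxel_downsample(points, voxel=0.05):
        """Replace all points in each 5 cm cubic grid cell by their barycenter.

        points: (N, 3) array of point coordinates.
        Returns the filtered (downsampled) point cloud.
        """
        # Integer cell index of every point along each axis.
        cells = np.floor(points / voxel).astype(np.int64)
        # Group points that fall into the same cell.
        _, inverse, counts = np.unique(cells, axis=0, return_inverse=True,
                                       return_counts=True)
        inverse = inverse.reshape(-1)
        # Sum the coordinates per cell, then divide by the cell population
        # to obtain the barycenter of every occupied voxel.
        sums = np.zeros((counts.size, 3))
        np.add.at(sums, inverse, points)
        return sums / counts[:, None]

    # Example: 10,000 random points in a 2 m cube collapse to far fewer barycenters.
    cloud = np.random.rand(10000, 3) * 2.0
    print(cloud.shape, "->", voxel_downsample(cloud).shape)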
Then, a ground point set P_ground is detected by the condition screening method. The point cloud is first sorted by height from small to large and the points with a height of less than 10 cm are taken out; the average height z_ave of these points is then calculated, and the points whose height is less than the average height z_ave are retained. The normals of these points are then calculated, the points whose normal makes an angle of more than 10 degrees with the z axis are rejected, and the remaining points form the final ground point set P_ground. A good model is then fitted with the RANSAC algorithm: in each iteration some points are randomly drawn from the ground point set P_ground to build a model equation, and the degree of fit of the other points to the model is judged by the fitting degree formula

di = |n · (pi - pj)|

where di is the degree of fit, n is the normal of the plane of the fitted model, pi is the point to be judged, and pj is a point in the model.

The points whose degree of fit is less than the threshold τ are taken as inliers, and the model and the number of corresponding inliers are recorded in each iteration; the threshold τ is set from σ, the standard deviation of the Gaussian distribution of the distances di. Points continue to be drawn until the number of samplings meets the iteration condition, in which e represents the probability that a certain point is not an inlier, N is the number of samplings and n is the total number of points; the fitted model with the largest number of corresponding inliers is taken as the optimal model.

The optimal model equation is the fitted ground equation, which can be expressed as ax + by + cz = d.

Finally, the ground normal at the vehicle, (a, b, c), is computed from the ground equation ax + by + cz = d, the ground point set is removed, and the remaining point cloud data is projected along the ground normal direction to obtain the projection map, as shown in fig. 2.
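As a rough illustration of this ground extraction, the NumPy sketch below keeps the low points and fits a plane by repeated random sampling in the spirit of the RANSAC step above; the 10 cm height cut and the below-mean-height screening follow the text, while the inlier distance, the iteration count, the three-point sampling and the omission of the per-point normal check against the z axis are simplifying assumptions of the example.

    import numpy as np

    def fit_ground(points, height_cut=0.10, inlier_dist=0.05, iters=200, seed=0):
        """Screen ground candidates by height and fit a plane with a simple RANSAC.

        points: (N, 3) filtered point cloud with z pointing up.
        Returns (normal, d) for the plane n . p = d and the inlier mask
        over the candidate points.
        """
        rng = np.random.default_rng(seed)
        cand = points[points[:, 2] < height_cut]        # points lower than 10 cm
        cand = cand[cand[:, 2] < cand[:, 2].mean()]     # points below the mean height
        best = (None, None, np.zeros(len(cand), dtype=bool))
        for _ in range(iters):
            p = cand[rng.choice(len(cand), 3, replace=False)]  # random minimal sample
            n = np.cross(p[1] - p[0], p[2] - p[0])
            if np.linalg.norm(n) < 1e-9:
                continue                                # degenerate (collinear) sample
            n /= np.linalg.norm(n)
            d = n @ p[0]
            dist = np.abs(cand @ n - d)                 # point-to-plane degree of fit
            mask = dist < inlier_dist
            if mask.sum() > best[2].sum():
                best = (n, d, mask)
        n, d, mask = best
        if n is not None and n[2] < 0:                  # orient the normal upwards
            n, d = -n, -d
        return n, d, mask

    # Example: a noisy flat patch plus some clutter above it.
    ground = np.c_[np.random.rand(500, 2), 0.01 * np.random.randn(500)]
    clutter = np.c_[np.random.rand(100, 2), 1.0 + np.random.rand(100)]
    normal, d, mask = fit_ground(np.vstack([ground, clutter]))
    print("estimated ground normal:", np.round(normal, 2))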
Preferably, the translation amount of the vehicle is calculated according to the projection diagram of the three-dimensional point cloud data of the adjacent frames, specifically:
extracting adjacent quaternions corresponding to the adjacent frames of three-dimensional point cloud data from the inertial data;
converting the quaternion into an Euler angle, and converting the yaw angle of the Euler angle into an angle system to obtain the rotation angle of the vehicle on the ground;
aligning the adjacent frame projection drawings by using the rotation angle, so that the adjacent frame projection drawings have no rotation relation;
converting the projection graph from a spatial domain to a frequency domain by using a phase correlation method to obtain a spectrogram;
performing fast Fourier transform and centralization processing on the spectrogram, and extracting a phase from the spectrogram after the centralization processing;
synthesizing the phases of the adjacent frame depth images into a complex form to obtain a cross-power spectrum of the adjacent frame depth images;
performing fast Fourier transform on the cross-power spectrum to obtain a Dirac function;
and extracting the peak value coordinate of the Dirac function to obtain the translation amount.
Specifically, the two quaternions Q1 and Q2 corresponding to two adjacent projection maps are extracted from the collected inertial data, and the quaternion Q corresponding to the rotation of the vehicle over this interval (the interval between the two adjacent projection maps, i.e. the two adjacent frames of three-dimensional point cloud data) is calculated from them as

Q = Q1 · Q2*

where Q2* denotes the conjugate of the quaternion Q2.
The quaternion Q = (q0, q1, q2, q3) is then converted into the Euler angles (roll, pitch, yaw) with the conversion formulas

roll = arcsin(2·q2·q3 + 2·q0·q1)
pitch = arctan2(2·q0·q2 - 2·q1·q3, q0^2 - q1^2 - q2^2 + q3^2)
yaw = arctan2(2·q0·q3 - 2·q1·q2, q0^2 - q1^2 + q2^2 - q3^2)

where q0, q1, q2 and q3 are the real part and the three imaginary parts of the quaternion, and roll, pitch and yaw are the roll, pitch and yaw angles of the Euler angles about the x, y and z axes respectively.
The yaw angle yaw, which lies in the horizontal plane, is converted into degrees to obtain the rotation angle θ; the later of the two adjacent projection maps is then rotated by θ to align with the earlier one, ensuring that the two projection maps have no rotational relation.
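The relative quaternion Q = Q1·Q2* and its conversion to a yaw angle in degrees can be sketched as follows; the quaternion storage order (q0, q1, q2, q3) = (w, x, y, z) and the function names are assumptions of the example, and the yaw formula is the one given above.

    import numpy as np

    def quat_conj(q):
        """Conjugate of a quaternion stored as (q0, q1, q2, q3) = (w, x, y, z)."""
        return np.array([q[0], -q[1], -q[2], -q[3]])

    def quat_mul(a, b):
        """Hamilton product a * b."""
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def relative_yaw_deg(qa, qb):
        """Yaw of the relative rotation Q = qa * conj(qb), in degrees,
        using the yaw conversion formula given in the description."""
        q0, q1, q2, q3 = quat_mul(qa, quat_conj(qb))
        yaw = np.arctan2(2*q0*q3 - 2*q1*q2, q0**2 - q1**2 + q2**2 - q3**2)
        return np.degrees(yaw)

    # Example: two headings about the z axis, 15 degrees apart.
    a1, a2 = np.radians(30.0), np.radians(45.0)
    Q1 = np.array([np.cos(a1 / 2), 0.0, 0.0, np.sin(a1 / 2)])
    Q2 = np.array([np.cos(a2 / 2), 0.0, 0.0, np.sin(a2 / 2)])
    print(round(relative_yaw_deg(Q2, Q1), 1))  # -> 15.0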
The projection maps are then transformed with the phase correlation method. Let the two adjacent projection maps after angle correction be f1(x, y) and f2(x, y); in the spatial domain the two projection maps are related only by a translation, f2(x, y) = f1(x - x0, y - y0). A fast Fourier transform is applied to each of the two M × N projection maps:

F(u, v) = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x, y) · exp(-j2π(ux/M + vy/N))

where the variables u and v denote the spatial frequencies along the x and y axes, F(u, v) is the result of the fast Fourier transform of the image, and f(x, y) is the pixel value at pixel (x, y) of the image being transformed.
The spectra of the two images after the transform satisfy

F2(ξ, η) = exp(-j2π(ξΔx + ηΔy)) · F1(ξ, η)

where F1(ξ, η) is the spectrum obtained from the fast Fourier transform of the projection map f1(x, y), and F2(ξ, η) is the spectrum obtained from the fast Fourier transform of the projection map f2(x, y).
After the transform, the image information has been taken from the spatial domain into the frequency domain. The image spectrum clearly has bright and dark regions: the four corners are brighter, representing the low-frequency components that carry most of the image energy, while the center is darker, representing the high-frequency components with little energy; this gives the spectrogram. The spectrogram is then centered, i.e. the brighter regions at the four corners are shifted to the center of the image, to obtain the final spectrogram, as shown in fig. 3. The center of the final spectrogram is the zero-frequency component, and the frequency represented by the points increases from the inside outwards.
The phase is then extracted from the spectrogram after the fast Fourier transform and centering; the phase in the frequency domain carries the change information between the two spectrograms. With Re and Im denoting the real and imaginary parts, the phase spectrum can be expressed as

φ(u, v) = arctan( Im(F(u, v)) / Re(F(u, v)) )

where φ(u, v) denotes the phase spectrum of the projection map f(x, y).
The phases of the two spectrograms are then combined in complex form to obtain the image cross-power spectrum:

( F1(ξ, η) · F2*(ξ, η) ) / | F1(ξ, η) · F2*(ξ, η) | = exp( j2π(ξΔx + ηΔy) )

where F2*(ξ, η) denotes the complex conjugate of F2(ξ, η); the cross-power spectrum obtained in this way contains the transformation information of the two spectrograms.
Finally, an inverse fast Fourier transform of the cross-power spectrum yields a Dirac function δ(x - Δx, y - Δy). This is an impulse function: when the two images are identical, its only peak lies at the origin (0, 0) with value 1; when the two images are related only by a translation and overlap substantially, the peak coordinate corresponds to the translation between them. The translation amount T of the projection maps is finally obtained by extracting the peak coordinate of the Dirac function.
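A minimal NumPy sketch of this translation estimate is given below; it only recovers integer pixel shifts and omits the subpixel refinement and windowing used in practice, so it illustrates the cross-power-spectrum idea rather than a full implementation.

    import numpy as np

    def phase_correlation_shift(img1, img2, eps=1e-12):
        """Estimate the integer translation taking img2 to img1.

        Both inputs are 2D arrays of the same shape, already aligned in
        rotation (as done with the IMU yaw above).
        """
        F1 = np.fft.fft2(img1)
        F2 = np.fft.fft2(img2)
        # Cross-power spectrum: keep only the phase difference of the spectra.
        R = F1 * np.conj(F2)
        R /= np.abs(R) + eps
        # The inverse FFT is an impulse whose peak encodes the shift.
        corr = np.abs(np.fft.ifft2(R))
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Peaks beyond half the image size correspond to negative shifts.
        return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

    # Example: shift a random image by (7, -3) pixels and recover the shift.
    rng = np.random.default_rng(0)
    a = rng.random((128, 128))
    b = np.roll(a, shift=(7, -3), axis=(0, 1))
    print(phase_correlation_shift(b, a))  # -> [7, -3]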
Preferably, the method for calculating the rotation amount of the vehicle between the adjacent frames of three-dimensional point cloud data according to the inertial data specifically comprises the following steps:
extracting adjacent quaternions corresponding to the adjacent frames of three-dimensional point cloud data from the inertial data;
converting the quaternion into an Euler angle, and zeroing a roll angle and a pitch angle in the Euler angle;
and converting the euler angle subjected to the zeroing treatment into a rotation matrix to obtain the rotation amount.
When the rotation amount is calculated, the two quaternions Q1 and Q2 corresponding to two adjacent projection maps are likewise extracted from the collected inertial data, and the quaternion Q corresponding to the rotation of the vehicle over this interval (the interval between the two adjacent projection maps, i.e. the two adjacent frames of three-dimensional point cloud data) is calculated from them as

Q = Q1 · Q2*

where Q2* denotes the conjugate of the quaternion Q2.
The quaternion Q = (q0, q1, q2, q3) is then converted into the Euler angles (roll, pitch, yaw) with the conversion formulas

roll = arcsin(2·q2·q3 + 2·q0·q1)
pitch = arctan2(2·q0·q2 - 2·q1·q3, q0^2 - q1^2 - q2^2 + q3^2)
yaw = arctan2(2·q0·q3 - 2·q1·q2, q0^2 - q1^2 + q2^2 - q3^2)

where q0, q1, q2 and q3 are the real part and the three imaginary parts of the quaternion, and roll, pitch and yaw are the roll, pitch and yaw angles of the Euler angles about the x, y and z axes respectively.
The roll and pitch angles of the Euler angles are then set to 0 (since the intelligent vehicle moves on horizontal ground, both should be 0), forming the new Euler angles (0, 0, yaw).
The zeroed Euler angles are converted into a rotation matrix, giving the rotation matrix R of the vehicle over the interval between the two adjacent frames of three-dimensional point cloud data:

R = [ cos(yaw)  -sin(yaw)  0 ]
    [ sin(yaw)   cos(yaw)  0 ]
    [ 0          0         1 ]
and after the translation amount and the rotation amount are obtained, the odometer track can be drawn.
After the track has been drawn, feature extraction is carried out so that the features can be associated with the track points to form the three-dimensional map. In this embodiment global and local features are extracted at the same time: ORB (Oriented FAST and Rotated BRIEF) global features of the image are extracted from the depth map information, and PFH (Point Feature Histogram) local features are then extracted from the filtered point cloud data, as detailed below.
Preferably, the feature information of the three-dimensional point cloud data comprises a global feature descriptor;
extracting a global feature descriptor of the three-dimensional point cloud data, specifically:
converting the three-dimensional point cloud data into a two-dimensional depth map;
carrying out histogram equalization on the depth map, and carrying out normalization processing on the image size;
taking the image center point of the normalized image as the position of the feature point;
and calculating a local feature descriptor of the image center point of the adjacent frame depth map as a global feature descriptor.
Preferably, the three-dimensional point cloud data is converted into a two-dimensional depth map, specifically:
and taking the Euclidean distance from each point in the three-dimensional point cloud data to the laser radar as the pixel value of the corresponding point in the depth map to obtain the depth map.
Specifically, the original three-dimensional point cloud data is converted into a two-dimensional depth map: the Euclidean distance from each point in a frame of three-dimensional point cloud data to the laser radar is used as the pixel value of that point in the depth map, and the pixel coordinate of the point in the depth map is determined from the relative position of the point with respect to the other points in the point cloud, finally yielding the depth map.
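The conversion can be sketched as below. The pixel value is the Euclidean distance of each point to the lidar, as stated above; the text does not spell out the pixel-coordinate rule beyond the relative positions of the points, so the azimuth/elevation binning used here is an assumed concrete reading, as are the image dimensions.

    import numpy as np

    def point_cloud_to_depth_map(points, width=360, height=64):
        """Project a 3D point cloud into a 2D range (depth) image.

        points: (N, 3) coordinates in the lidar frame.
        Pixel value = Euclidean distance to the lidar; pixel coordinates are
        derived from the azimuth and elevation of each point.
        """
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        depth = np.linalg.norm(points, axis=1)
        azimuth = np.arctan2(y, x)                               # [-pi, pi]
        elevation = np.arcsin(np.clip(z / np.maximum(depth, 1e-9), -1.0, 1.0))
        u = ((azimuth + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
        span = max(elevation.max() - elevation.min(), 1e-9)
        v = ((elevation - elevation.min()) / span * (height - 1)).astype(int)
        img = np.zeros((height, width))
        order = np.argsort(-depth)          # write far points first, near points last
        img[v[order], u[order]] = depth[order]
        return img

    # Example: a ring of points at 10 m range maps to pixel values of ~10.
    ang = np.linspace(-np.pi, np.pi, 1000, endpoint=False)
    pts = np.c_[10 * np.cos(ang), 10 * np.sin(ang), np.zeros_like(ang)]
    print(point_cloud_to_depth_map(pts).max())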
When the global feature is extracted, the depth map is first histogram-equalized, and the whole image is then normalized to 63 x 63 pixels so that the ORB global feature descriptor, which is not scale invariant, can be used. The normalized whole image is taken directly as the feature neighborhood and the image center is taken directly as the feature point position; the local feature descriptor computed at that point serves as the ORB global feature descriptor, a 256-dimensional binary vector. To compare the global feature descriptors at the image center points of adjacent depth maps, specifically:
the Hamming distance between the descriptors of the image center points of adjacent depth maps is calculated as

Hamm(X1, X2) = Σ_i XOR(x1^(i), x2^(i))

where Hamm(X1, X2) denotes the Hamming distance between depth map X1 and depth map X2, XOR() denotes the exclusive-or operation, x1^(i) and x2^(i) are the i-th binary digits of the descriptors of X1 and X2, and the sum performs a bitwise exclusive-or over all binary digits i.
A shorter Hamming distance between the vectors indicates that the global features of the two images are more similar.
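A small sketch of the comparison by Hamming distance follows; the representation of the 256-bit descriptor as a 0/1 array is an implementation choice for the example.

    import numpy as np

    def hamming_distance(d1, d2):
        """Hamm(X1, X2) = sum_i XOR(x1_i, x2_i) for binary descriptors given as 0/1 arrays."""
        return int(np.count_nonzero(np.bitwise_xor(d1, d2)))

    # Example: two 256-bit ORB-style descriptors differing in 16 bits.
    rng = np.random.default_rng(1)
    desc_a = rng.integers(0, 2, 256, dtype=np.uint8)
    desc_b = desc_a.copy()
    desc_b[:16] ^= 1                       # flip the first 16 bits
    print(hamming_distance(desc_a, desc_b))  # -> 16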
Preferably, the feature information of the three-dimensional point cloud data comprises local feature descriptors;
extracting a local feature descriptor of the three-dimensional point cloud data, specifically:
extracting feature points of the three-dimensional point cloud data;
searching a plurality of points closest to the target point by adopting a Kd-tree, and connecting the searched points pairwise to construct a ball network;
defining a local coordinate system at one of the points of said net;
constructing a triple representing characteristics by using the relative position relationship between the normal of the two points and the local coordinate system;
and calculating all triples in the ball network and voting respectively to obtain the local feature descriptors.
The embodiment of extracting the PFH local features of the point cloud is as follows. Feature points are first extracted from the filtered point cloud data, the K points closest to the target point are searched with a Kd-tree, and the found points are connected pairwise to form a ball network. Consider two connected points p1 and p2 with normal vectors n1 and n2. To compute the relative difference between the two points, a local coordinate system uvw is defined at one of them, the three axes satisfying

u = n1
v = u × (p2 - p1) / ||p2 - p1||
w = u × v

From the local coordinate system and the normals of the two selected points, a triplet ⟨α, φ, θ⟩ describing the point features is defined, satisfying

α = v · n2
φ = u · (p2 - p1) / ||p2 - p1||
θ = arctan2(w · n2, u · n2)

For the K points in the spherical neighborhood of a target point, a triplet ⟨α, φ, θ⟩ is obtained for every pair of connected points. According to the maximum and minimum values of the three dimensions of the triplets, the range of each dimension is divided into b equal parts, with b taken as 5; a histogram is built from the 5 range segments of each dimension, and the frequency with which the triplets fall into each of the 5 × 5 × 5 = 125 bins is counted, giving a 125-dimensional local feature descriptor. The smaller the Euclidean distance between descriptors, the more similar the corresponding points.
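The triplet computation and the 125-bin histogram can be sketched as follows; a brute-force nearest-neighbor search replaces the Kd-tree, the normals are assumed to be given, and the fixed bin ranges (instead of the per-dimension min/max described above) are simplifications of the example.

    import numpy as np

    def pfh_descriptor(points, normals, idx, k=8, b=5):
        """125-dimensional PFH-style histogram for the point with index idx.

        For every pair of points in the K-neighborhood, the triplet
        (alpha, phi, theta) is computed and binned into a b x b x b histogram
        (b = 5 gives 125 bins), as in the description.
        """
        d2 = np.sum((points - points[idx]) ** 2, axis=1)
        nbrs = np.argsort(d2)[:k]                      # brute-force K nearest neighbors
        hist = np.zeros((b, b, b))
        for a_i in range(len(nbrs)):
            for b_i in range(a_i + 1, len(nbrs)):
                p1, p2 = points[nbrs[a_i]], points[nbrs[b_i]]
                n1, n2 = normals[nbrs[a_i]], normals[nbrs[b_i]]
                d = p2 - p1
                dist = np.linalg.norm(d)
                if dist < 1e-9:
                    continue
                u = n1                                 # local frame from the description
                v = np.cross(u, d / dist)
                w = np.cross(u, v)
                alpha = v @ n2                         # in [-1, 1]
                phi = (u @ d) / dist                   # in [-1, 1]
                theta = np.arctan2(w @ n2, u @ n2)     # in [-pi, pi]
                ia = min(int((alpha + 1) / 2 * b), b - 1)
                ip = min(int((phi + 1) / 2 * b), b - 1)
                it = min(int((theta + np.pi) / (2 * np.pi) * b), b - 1)
                hist[ia, ip, it] += 1
        return hist.ravel() / max(hist.sum(), 1)       # normalized 125-dim descriptor

    # Example: points on a plane with upward normals give a concentrated histogram.
    pts = np.c_[np.random.rand(50, 2), np.zeros(50)]
    nrm = np.tile([0.0, 0.0, 1.0], (50, 1))
    print(pfh_descriptor(pts, nrm, idx=0).shape)  # -> (125,)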
After the translation amount and the rotation amount have been obtained, a vehicle motion model is established, and the odometer track is drawn from the rotation matrices and translation vectors obtained from successive pairs of adjacent lidar frames. The vehicle motion model accumulates the per-frame translations, each rotated into the global frame by the accumulated heading:

pk_x = p(k-1)_x + tk_x·cos(Θk) - tk_y·sin(Θk)
pk_y = p(k-1)_y + tk_x·sin(Θk) + tk_y·cos(Θk)
Θk = θ1 + θ2 + ... + θk
p0_x = t0_x, p0_y = t0_y

where pk_x and pk_y are the vehicle x-axis and y-axis coordinates corresponding to the k-th frame projection map; t0_x and t0_y are the starting point of the vehicle, i.e. the vehicle coordinates corresponding to the 0-th frame projection map, taken as p0(0, 0); θi is the relative rotation angle between the i-th frame projection map and the (i-1)-th frame projection map; and ti_x and ti_y are the x-axis and y-axis components of the translation corresponding to the i-th frame projection map. With this model the rotation amount R and the translation amount T are chained together, converting the positions of the projection maps into poses referenced to the starting point.
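The accumulation of the per-frame rotation and translation into the odometer track can be sketched as follows; representing each frame's motion as a yaw increment plus a 2D translation follows the description, while the exact accumulation order is an assumption of the example.

    import numpy as np

    def accumulate_odometry(yaws, translations):
        """Build the odometer track from per-frame yaw increments and translations.

        yaws: relative yaw (rad) between frame i-1 and frame i.
        translations: (N, 2) translation of frame i expressed in the previous frame.
        Returns the (N+1, 2) track starting at the origin p0 = (0, 0).
        """
        track = [np.zeros(2)]
        heading = 0.0
        for yaw, t in zip(yaws, np.asarray(translations, dtype=float)):
            heading += yaw                              # accumulated rotation
            c, s = np.cos(heading), np.sin(heading)
            R = np.array([[c, -s], [s, c]])             # rotation into the global frame
            track.append(track[-1] + R @ t)             # rotate, then accumulate
        return np.array(track)

    # Example: 1 m forward per frame while turning 90 degrees each frame
    # traces out a unit square and returns to the start.
    yaws = [0.0, np.pi / 2, np.pi / 2, np.pi / 2]
    trans = [[1.0, 0.0]] * 4
    print(np.round(accumulate_odometry(yaws, trans), 2))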
After the odometer track has been acquired, the rough longitude and latitude coordinates from the global positioning system are received by the vehicle-mounted GPS receiver and refined with the real-time kinematic (RTK) carrier-phase differential technique to obtain precise longitude and latitude coordinates, generating a high-precision GPS track. The odometer track is corrected with the high-precision GPS track, and the global features and local features of each track node of the vehicle are associated with the corresponding high-precision GPS coordinates through data association, giving the final map.
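A minimal sketch of the final data association is given below, assuming that both the odometer track nodes and the GPS track carry timestamps and that nearest-timestamp matching is an acceptable association rule; the text itself only states that the features of each track node are associated with the corresponding high-precision GPS coordinates.

    import numpy as np

    def associate_features_with_gps(node_times, node_features, gps_times, gps_coords):
        """Attach each track node's feature set to the nearest-in-time GPS fix.

        node_times: (N,) timestamps of the odometer track nodes.
        node_features: list of N per-frame feature sets (global + local descriptors).
        gps_times: (M,) timestamps of the high-precision GPS track.
        gps_coords: (M, 2) GPS coordinates.
        Returns a list of (gps_coordinate, features) map entries.
        """
        entries = []
        for t, feat in zip(node_times, node_features):
            j = int(np.argmin(np.abs(gps_times - t)))   # nearest GPS fix in time
            entries.append((gps_coords[j], feat))
        return entries

    # Example: five 10 Hz track nodes associated with a slightly offset 10 Hz GPS track.
    node_times = np.arange(5) * 0.1
    gps_times = np.arange(5) * 0.1 + 0.02
    gps_coords = np.c_[np.linspace(0.0, 4.0, 5), np.zeros(5)]
    features = ["descriptors of frame %d" % i for i in range(5)]
    print(associate_features_with_gps(node_times, features, gps_times, gps_coords)[0])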
Example 2
Embodiment 2 of the present invention provides an IMU and lidar based map generation apparatus, including a processor and a memory, where the memory stores a computer program, and the computer program, when executed by the processor, implements the IMU and lidar based map generation method provided in embodiment 1.
The map generation device based on the IMU and the lidar provided by this embodiment of the invention is used to implement the map generation method based on the IMU and the lidar, and therefore has the technical effects of that method; they are not described again here.
Example 3
As shown in fig. 4, embodiment 3 of the present invention provides an IMU and lidar based map generation system, including the IMU and lidar based map generation apparatus provided in embodiment 2, and further including a vehicle body 1, a lidar 2, an IMU 3 and a GPS receiver 4;
the map generation device based on the IMU and the lidar, the lidar 2, the IMU 3 and the GPS receiver 4 are all installed on the vehicle body 1, and the lidar 2, the IMU 3 and the GPS receiver 4 are respectively electrically connected with the map generation device based on the IMU and the lidar;
the laser radar 2 is used for acquiring three-dimensional point cloud data and sending the three-dimensional point cloud data to the map generation device based on the IMU and the laser radar;
the IMU3 is used for collecting inertial data of a vehicle and sending the inertial data to the map generating device based on the IMU and the laser radar;
the GPS receiver 4 is connected with a remote differential GPS reference station 6 and is used for collecting a GPS track of a vehicle and sending the GPS track to the map generating device based on the IMU and the laser radar.
Specifically, in this embodiment the map generation device based on the IMU and the lidar is implemented with the industrial personal computer 7. The vehicle-mounted GPS receiver 4 receives the rough longitude and latitude coordinates from the global positioning system, and precise longitude and latitude coordinates are then obtained with the real-time kinematic (RTK) carrier-phase differential technique against the GPS reference station, generating a high-precision GPS track. The acquisition frequencies of the lidar 2 and the GPS receiver 4 are both 10 Hz, and the acquisition frequency of the IMU 3 is 100 Hz. A display 5 is also provided on the vehicle body to show the three-dimensional map, and a power supply 8 is provided to power the lidar 2, the IMU 3, the GPS receiver 4 and the industrial personal computer 7. The display 5 and the power supply 8 may both be those built into the vehicle body.
Example 4
Embodiment 4 of the present invention provides a computer storage medium having stored thereon a computer program that, when executed by a processor, implements the IMU and lidar based map generation method provided in embodiment 1.
The computer storage medium provided by the embodiment of the invention is used for realizing the map generation method based on the IMU and the lidar, so that the computer storage medium has the technical effects of the map generation method based on the IMU and the lidar, and the details are not repeated here.
The above-described embodiments of the present invention should not be construed as limiting the scope of the present invention. Any other corresponding changes and modifications made according to the technical idea of the present invention should be included in the protection scope of the claims of the present invention.

Claims (10)

1. A map generation method based on an IMU and a lidar is characterized by comprising the following steps:
acquiring three-dimensional point cloud data acquired by a laser radar and acquiring inertial data acquired by an IMU (inertial measurement Unit);
generating a projection drawing on the ground according to the three-dimensional point cloud data, and calculating the translation amount of the vehicle according to the projection drawing of the adjacent frame of three-dimensional point cloud data;
calculating the rotation amount of the vehicle between the adjacent frames of three-dimensional point cloud data according to the inertial data;
drawing a track node by combining the translation amount and the rotation amount to obtain an odometer track;
acquiring a GPS track, and correcting the odometer track by using the GPS track;
extracting the characteristic information of each frame of the three-dimensional point cloud data, and associating each characteristic information with the GPS coordinate in the GPS track according to the corrected odometer track to obtain a three-dimensional map.
2. The IMU and lidar based map generation method of claim 1, wherein a projection map on the ground is generated from the three-dimensional point cloud data, specifically:
filtering and downsampling the three-dimensional point cloud data by using a voxel grid filter to obtain filtered point cloud data;
screening a ground point set from the three-dimensional point cloud data by using a condition screening method;
fitting the ground point set to obtain a ground equation, and solving a normal of the ground equation to obtain a ground normal;
and projecting the filtered point cloud data along the ground normal direction to obtain the projection diagram.
3. The IMU and lidar based map generation method of claim 1, wherein the translation of the vehicle is calculated from the projection map of the adjacent frame three-dimensional point cloud data, specifically:
extracting adjacent quaternions corresponding to the adjacent frames of three-dimensional point cloud data from the inertial data;
converting the quaternion into an Euler angle, and converting the yaw angle of the Euler angle into an angle system to obtain the rotation angle of the vehicle on the ground;
aligning the adjacent frame projection drawings by using the rotation angle, so that the adjacent frame projection drawings have no rotation relation;
converting the projection graph from a space domain to a frequency domain by using a phase correlation method to obtain a spectrogram;
performing fast Fourier transform and centralization processing on the spectrogram, and extracting a phase from the spectrogram after the centralization processing;
synthesizing the phases of the adjacent frame depth images into a complex form to obtain a cross-power spectrum of the adjacent frame depth images;
performing fast Fourier transform on the cross-power spectrum to obtain a Dirac function;
and extracting the peak value coordinate of the Dirac function to obtain the translation amount.
4. The IMU and lidar based map generation method of claim 1, wherein the rotation amount of the vehicle between adjacent frames of three-dimensional point cloud data is calculated from the inertial data, specifically:
extracting adjacent quaternions corresponding to the adjacent frames of three-dimensional point cloud data from the inertial data;
converting the quaternion into Euler angles, and zeroizing the roll angles and the pitch angles in the Euler angles;
and converting the euler angle subjected to the zeroing treatment into a rotation matrix to obtain the rotation amount.
5. The IMU and lidar based map generation method of claim 1, wherein the feature information of the three-dimensional point cloud data comprises a global feature descriptor;
extracting a global feature descriptor of the three-dimensional point cloud data, specifically:
converting the three-dimensional point cloud data into a two-dimensional depth map;
carrying out histogram equalization on the depth map, and carrying out normalization processing on the image size;
taking the image center point of the normalized image as the position of the feature point;
and calculating a local feature descriptor of the image center point of the adjacent frame depth map as a global feature descriptor.
6. The IMU and lidar based map generation method of claim 5, wherein converting the three-dimensional point cloud data into a two-dimensional depth map specifically:
and taking the Euclidean distance from each point in the three-dimensional point cloud data to the laser radar as the pixel value of the corresponding point in the depth map to obtain the depth map.
7. The IMU and lidar based map generation method of claim 1, wherein the feature information of the three-dimensional point cloud data comprises local feature descriptors;
extracting a local feature descriptor of the three-dimensional point cloud data, specifically:
extracting feature points of the three-dimensional point cloud data;
searching a plurality of points closest to the target point by adopting a Kd-tree, and connecting the searched points pairwise to construct a ball network;
defining a local coordinate system on one of the two connected points;
constructing a triple representing characteristics by using the relative position relationship between the normal of the two points and the local coordinate system;
and calculating all triples in the ball network and voting respectively to obtain the local feature descriptors.
8. An IMU and lidar based map generation apparatus comprising a processor and a memory, the memory having stored thereon a computer program that, when executed by the processor, implements an IMU and lidar based map generation method as claimed in any of claims 1-7.
9. An IMU and lidar based map generation system, comprising the IMU and lidar based map generation apparatus of claim 8, further comprising a vehicle body, a lidar, an IMU, and a GPS receiver;
the map generation device based on the IMU and the lidar, the IMU and the GPS receiver are all arranged on the vehicle body, and the lidar, the IMU and the GPS receiver are respectively and electrically connected with the map generation device based on the IMU and the lidar;
the laser radar is used for acquiring three-dimensional point cloud data and sending the three-dimensional point cloud data to the map generation device based on the IMU and the laser radar;
the IMU is used for collecting inertial data of a vehicle and sending the inertial data to the map generation device based on the IMU and the laser radar;
the GPS receiver is connected with a remote differential GPS reference station and used for collecting a GPS track of a vehicle and sending the GPS track to the map generating device based on the IMU and the laser radar.
10. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the IMU and lidar based map generation method of any of claims 1-7.
CN202010741680.8A 2020-07-29 2020-07-29 Map generation method, device and system based on IMU and laser radar Active CN112082545B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010741680.8A CN112082545B (en) 2020-07-29 2020-07-29 Map generation method, device and system based on IMU and laser radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010741680.8A CN112082545B (en) 2020-07-29 2020-07-29 Map generation method, device and system based on IMU and laser radar

Publications (2)

Publication Number Publication Date
CN112082545A CN112082545A (en) 2020-12-15
CN112082545B true CN112082545B (en) 2022-06-21

Family

ID=73735192

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010741680.8A Active CN112082545B (en) 2020-07-29 2020-07-29 Map generation method, device and system based on IMU and laser radar

Country Status (1)

Country Link
CN (1) CN112082545B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112731357A (en) * 2020-12-30 2021-04-30 清华大学 Real-time correction method and system for positioning error of laser point cloud odometer
CN112835063B (en) * 2021-01-08 2024-04-12 北京京东尚科信息技术有限公司 Method, device, equipment and storage medium for determining dynamic and static properties of object
CN112859051B (en) * 2021-01-11 2024-04-09 桂林电子科技大学 Laser radar point cloud motion distortion correction method
CN113066105B (en) * 2021-04-02 2022-10-21 北京理工大学 Positioning and mapping method and system based on fusion of laser radar and inertial measurement unit
CN113406659A (en) * 2021-05-28 2021-09-17 浙江大学 Mobile robot position re-identification method based on laser radar information
CN113405560B (en) * 2021-05-28 2023-03-24 武汉理工大学 Unified modeling method for vehicle positioning and path planning
CN113379841B (en) * 2021-06-21 2024-04-30 上海仙工智能科技有限公司 Laser SLAM method based on phase correlation method and factor graph and readable storage medium thereof
CN114018269B (en) * 2021-11-22 2024-03-26 阿波罗智能技术(北京)有限公司 Positioning method, positioning device, electronic equipment, storage medium and automatic driving vehicle
CN114608568B (en) * 2022-02-22 2024-05-03 北京理工大学 Multi-sensor information based instant fusion positioning method
CN116804765B (en) * 2023-08-25 2023-11-14 兰笺(苏州)科技有限公司 Automatic measurement method and device for real-quantity index of indoor space actual measurement

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018048353A1 (en) * 2016-09-09 2018-03-15 Nanyang Technological University Simultaneous localization and mapping methods and apparatus

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109407073A (en) * 2017-08-15 2019-03-01 百度在线网络技术(北京)有限公司 Reflected value map constructing method and device
CN108665540A (en) * 2018-03-16 2018-10-16 浙江工业大学 Robot localization based on binocular vision feature and IMU information and map structuring system
CN109991636A (en) * 2019-03-25 2019-07-09 启明信息技术股份有限公司 Map constructing method and system based on GPS, IMU and binocular vision
CN110261870A (en) * 2019-04-15 2019-09-20 浙江工业大学 It is a kind of to synchronize positioning for vision-inertia-laser fusion and build drawing method
CN110850439A (en) * 2020-01-15 2020-02-28 奥特酷智能科技(南京)有限公司 High-precision three-dimensional point cloud map construction method
CN111272181A (en) * 2020-02-06 2020-06-12 北京京东乾石科技有限公司 Method, device, equipment and computer readable medium for constructing map
CN111427061A (en) * 2020-06-15 2020-07-17 北京云迹科技有限公司 Robot mapping method and device, robot and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Ye Haoyang et al.; "Tightly Coupled 3D Lidar Inertial Odometry and Mapping"; IEEE International Conference on Robotics and Automation (ICRA); 2019; pp. 3144-3150 *
Li Yicheng et al.; "High-precision localization algorithm for intelligent vehicles based on GPS and image fusion"; Journal of Transportation Systems Engineering and Information Technology; Jun. 2017; Vol. 17, No. 3; pp. 112-119 *
Wu Jing; "Accuracy evaluation of 3D simultaneous localization and mapping for driverless vehicles"; Equipment Machinery; 2019, No. 4; pp. 17-20 *

Also Published As

Publication number Publication date
CN112082545A (en) 2020-12-15

Similar Documents

Publication Publication Date Title
CN112082545B (en) Map generation method, device and system based on IMU and laser radar
Naseer et al. Deep regression for monocular camera-based 6-dof global localization in outdoor environments
CN110070615B (en) Multi-camera cooperation-based panoramic vision SLAM method
CN109029433B (en) Method for calibrating external parameters and time sequence based on vision and inertial navigation fusion SLAM on mobile platform
CN109360240B (en) Small unmanned aerial vehicle positioning method based on binocular vision
CN109165680B (en) Single-target object dictionary model improvement method in indoor scene based on visual SLAM
CN105976353A (en) Spatial non-cooperative target pose estimation method based on model and point cloud global matching
CN110060329B (en) Mobile terminal human body model reconstruction method based on color depth video stream data
CN112179357B (en) Monocular camera-based visual navigation method and system for plane moving target
CN113706591B (en) Point cloud-based three-dimensional reconstruction method for surface weak texture satellite
CN112712589A (en) Plant 3D modeling method and system based on laser radar and deep learning
CN115187676A (en) High-precision line laser three-dimensional reconstruction calibration method
CN114022560A (en) Calibration method and related device and equipment
CN108320310B (en) Image sequence-based space target three-dimensional attitude estimation method
CN116097307A (en) Image processing method and related equipment
CN116433843A (en) Three-dimensional model reconstruction method and device based on binocular vision reconstruction route
CN110348310A (en) A kind of Hough ballot 3D colour point clouds recognition methods
Firintepe et al. The more, the merrier? A study on in-car IR-based head pose estimation
CN111664845B (en) Traffic sign positioning and visual map making method and device and positioning system
CN117132737B (en) Three-dimensional building model construction method, system and equipment
CN117197333A (en) Space target reconstruction and pose estimation method and system based on multi-view vision
CN110706288A (en) Target detection method, device, equipment and readable storage medium
CN113284249B (en) Multi-view three-dimensional human body reconstruction method and system based on graph neural network
CN115861352A (en) Monocular vision, IMU and laser radar data fusion and edge extraction method
US20220129660A1 (en) System and method of acquiring coordinates of pupil center point

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant