CN112082545A - Map generation method, device and system based on IMU and laser radar
- Publication number: CN112082545A
- Application number: CN202010741680.8A
- Authority
- CN
- China
- Prior art keywords
- point cloud
- cloud data
- imu
- dimensional point
- map generation
- Prior art date: 2020-07-29
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/42—Determining position
- G01S19/45—Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
- G01S19/47—Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Electromagnetism (AREA)
- Automation & Control Theory (AREA)
- Navigation (AREA)
Abstract
The invention relates to the technical field of map construction based on computer vision, and discloses a map generation method based on an IMU (inertial measurement unit) and a lidar, comprising the following steps: acquiring three-dimensional point cloud data collected by a lidar, and acquiring inertial data collected by an IMU; generating a projection map on the ground from the three-dimensional point cloud data, and calculating the translation amount of the vehicle from the projection maps of adjacent frames of three-dimensional point cloud data; calculating the rotation amount of the vehicle between adjacent frames of three-dimensional point cloud data from the inertial data; plotting trajectory nodes by combining the translation amount and the rotation amount to obtain an odometer track; acquiring a GPS track, and correcting the odometer track with the GPS track; and extracting the feature information of each frame of three-dimensional point cloud data, and associating each piece of feature information with a GPS coordinate in the GPS track according to the corrected odometer track to obtain a three-dimensional map. The invention achieves high map precision, fast map construction and a small memory footprint.
Description
Technical Field
The invention relates to the technical field of map construction based on computer vision, and in particular to a map generation method, device and system and a computer storage medium based on an IMU (inertial measurement unit) and a lidar.
Background
High-precision maps are the basis of localization for automatic driving. Conventional maps combined with GPS positioning are simple and convenient to use, but their positioning precision only reaches about 10 m. Point cloud maps built by accumulating matched laser point clouds are also common, but their inter-frame matching is slow, they occupy a large amount of memory, and the mapping process is complex.
Disclosure of Invention
The invention aims to overcome these technical defects by providing a map generation method, device and system and a computer storage medium based on an IMU and a lidar, thereby solving the prior-art problems of a complex point cloud mapping process and a slow inter-frame matching algorithm.
To achieve this, the technical solution of the invention provides a map generation method based on an IMU and a lidar, comprising the following steps:
acquiring three-dimensional point cloud data collected by a lidar, and acquiring inertial data collected by an IMU;
generating a projection map on the ground from the three-dimensional point cloud data, and calculating the translation amount of the vehicle from the projection maps of adjacent frames of three-dimensional point cloud data;
calculating the rotation amount of the vehicle between adjacent frames of three-dimensional point cloud data from the inertial data;
plotting trajectory nodes by combining the translation amount and the rotation amount to obtain an odometer track;
acquiring a GPS track, and correcting the odometer track with the GPS track;
extracting the feature information of each frame of three-dimensional point cloud data, and associating each piece of feature information with a GPS coordinate in the GPS track according to the corrected odometer track to obtain a three-dimensional map.
The invention also provides a map generation device based on the IMU and the lidar, comprising a processor and a memory; the memory stores a computer program which, when executed by the processor, implements the map generation method based on the IMU and the lidar.
The invention also provides a map generation system based on the IMU and the lidar, comprising the map generation device based on the IMU and the lidar, a vehicle body, a lidar, an IMU and a GPS receiver;
the map generation device based on the IMU and the lidar, the lidar, the IMU and the GPS receiver are all mounted on the vehicle body, and the lidar, the IMU and the GPS receiver are each electrically connected to the map generation device based on the IMU and the lidar;
the lidar is used for acquiring three-dimensional point cloud data and sending the three-dimensional point cloud data to the map generation device based on the IMU and the lidar;
the IMU is used for collecting inertial data of the vehicle and sending the inertial data to the map generation device based on the IMU and the lidar;
the GPS receiver is connected to a remote differential GPS reference station and is used for collecting the GPS track of the vehicle and sending the GPS track to the map generation device based on the IMU and the lidar.
The present invention also provides a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the IMU and lidar based map generation method.
Compared with the prior art, the invention has the following beneficial effects: it provides a high-precision map generation method based on the fusion of a vehicle-mounted IMU and a lidar. The method converts the three-dimensional point cloud data into a two-dimensional projection map and calculates the translation amount from that projection map, avoiding the high complexity of matching and mapping directly on the three-dimensional point cloud data and thereby speeding up the matching algorithm. Meanwhile, the rotation amount is calculated from the inertial data of the IMU, and the high-precision angular rotation acquired by the IMU is combined with the translation amount calculated from the projection map to obtain high-precision pose information, overcoming the low precision of the traditional phase correlation method, which can only calculate the translation amount. The method generates maps quickly and precisely with a small memory footprint, and can meet the high-precision map construction requirements of intelligent vehicles.
Drawings
FIG. 1 is a flow chart of an embodiment of a map generation method based on IMU and lidar provided by the present invention;
FIG. 2 is a projection result diagram of an embodiment of the projection map obtained by projecting three-dimensional point cloud data onto the ground according to the present invention;
FIG. 3 is a spectrum result diagram of an embodiment of the spectrogram obtained by converting a projection map to the frequency domain according to the present invention;
fig. 4 is a schematic structural diagram of an embodiment of the map generation system based on the IMU and the lidar provided in the present invention.
Reference numerals:
1. vehicle body; 2. lidar; 3. IMU; 4. GPS receiver; 5. display; 6. differential GPS reference station; 7. industrial personal computer; 8. power supply.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1
As shown in fig. 1, embodiment 1 of the present invention provides a map generation method based on an IMU and a lidar, comprising the following steps:
S1, acquiring three-dimensional point cloud data collected by a lidar, and acquiring inertial data collected by an IMU;
S2, generating a projection map on the ground from the three-dimensional point cloud data, and calculating the translation amount of the vehicle from the projection maps of adjacent frames of three-dimensional point cloud data;
S3, calculating the rotation amount of the vehicle between adjacent frames of three-dimensional point cloud data from the inertial data;
S4, plotting trajectory nodes by combining the translation amount and the rotation amount to obtain an odometer track;
S5, acquiring a GPS track, and correcting the odometer track with the GPS track;
S6, extracting the feature information of each frame of three-dimensional point cloud data, and associating each piece of feature information with a GPS coordinate in the GPS track according to the corrected odometer track to obtain a three-dimensional map.
The method first acquires the original three-dimensional point cloud data collected by the vehicle-mounted lidar, the GPS data and the inertial data. The ground normal is extracted and estimated from the point cloud data, a projection map is then obtained from the point cloud data, and the translation vector is obtained with the phase-only correlation method. The rotation matrix, i.e. the rotation amount, is obtained from the inertial data collected by the IMU, and the odometer track is then drawn from the rotation amount and translation amount obtained from two adjacent frames of lidar data. A high-precision vehicle position, i.e. the GPS track, is obtained by combining the GPS data with the RTK (real-time kinematic) technique and is used to correct the odometer track; through data association, the feature information of each track node of the vehicle is matched to the corresponding high-precision GPS coordinate, and the three-dimensional map is obtained by combining the high-precision GPS track of the vehicle.
The method converts the three-dimensional point cloud data into a two-dimensional projection map to calculate the translation, avoiding the high complexity of matching and mapping directly on the three-dimensional point cloud data and thereby speeding up the matching algorithm. The rotation amount is calculated from the precise angles of the IMU while the translation amount is obtained from the point cloud data; combining the high-precision angular rotation acquired by the IMU with the translation amount calculated from the projection map yields high-precision pose information and overcomes the low precision of the traditional phase correlation method, which can only calculate the translation amount. The method generates maps quickly and precisely with a small memory footprint, and can meet the high-precision map construction requirements of intelligent vehicles.
Preferably, the projection map on the ground is generated from the three-dimensional point cloud data as follows:
filtering and downsampling the three-dimensional point cloud data with a voxel grid filter to obtain filtered point cloud data;
screening a ground point set out of the three-dimensional point cloud data by condition screening;
fitting the ground point set to obtain a ground equation, and solving the normal of the ground equation to obtain the ground normal;
projecting the filtered point cloud data along the ground normal direction to obtain the projection map.
The voxel grid filter is used to filter and downsample the original three-dimensional point cloud data because the laser point cloud data collected from the environment by the lidar contains a large number of outliers, isolated points and noise points that strongly affect subsequent processing; filtering these points out greatly reduces the error of the subsequent plane fitting, and the high density of the original point cloud would otherwise increase the computation of subsequent processing. After the three-dimensional point cloud data with noise points and outliers removed and with greatly reduced density is obtained, the ground is fitted by condition screening and a fitting algorithm to estimate a normal, giving the ground normal at the position of the vehicle; the point cloud data with the ground removed is then projected along the normal direction to obtain the projection map.
Filtering and downsampling the three-dimensional point cloud data with a voxel grid filter to obtain the filtered point cloud data specifically comprises:
dividing the three-dimensional point cloud data into a number of grids, calculating the barycenter of each grid, and replacing all points in the corresponding grid with the barycenter to obtain the filtered point cloud data.
Screening the ground point set out of the three-dimensional point cloud data by condition screening specifically comprises:
screening out the points of the filtered point cloud data whose height is less than a set height as a ground candidate point set;
sorting the points in the ground candidate point set by height, calculating the average height of all points in the ground candidate point set, and deriving a threshold height from the average height;
calculating the normals of the points in the ground candidate point set whose height is less than the threshold height, and eliminating the points whose normal makes an angle larger than a set angle with the Z axis to obtain the final ground point set.
Fitting the ground point set to obtain the ground equation specifically comprises:
fitting the ground point set with the RANSAC algorithm to obtain the ground equation.
Specifically, the voxel grid filtering proceeds as follows: the original three-dimensional point cloud data is divided into small cubic grids with a side length of 5 cm. Let the set of points in a grid be {p1, p2, p3, ..., pn}, where n is the number of points contained in the grid and the coordinates of each point are p1:(x1, y1), p2:(x2, y2), p3:(x3, y3), ..., pn:(xn, yn). The barycenter p0:(x0, y0) of all points in each cubic grid is then calculated as:

x0 = (x1 + x2 + ... + xn) / n,  y0 = (y1 + y2 + ... + yn) / n

All points in the grid are represented by this barycenter point, yielding the filtered point cloud data.
Next, the ground point set Pground is detected by condition screening. The point cloud is first sorted by height from small to large and the points with height less than 10 cm are taken out; the average height zave of all these points is then calculated, and the points whose height is less than the average height zave are retained. The normals of these points are then calculated, the points whose normal makes an angle of more than 10° with the z axis are eliminated, and all remaining points are taken as the final ground point set Pground. A model is then fitted with the RANSAC algorithm: in each iteration some points are randomly taken from the ground point set Pground to fit a model equation, and the degree of fit of the other points to this model is judged. The degree of fit is:

di = |n · (pi - pj)|

where di is the degree of fit, n is the normal of the plane of the fitted model, pi is the point to be judged, and pj is a point in the model.
The points whose degree of fit is less than the threshold τ are taken as interior points, where τ = 3σ and σ is the standard deviation of the distances di, which are assumed to follow a Gaussian distribution; the model and the number of corresponding interior points are recorded each time. Points continue to be drawn until the number of draws satisfies the iteration condition

N ≥ log(1 - P) / log(1 - (1 - e)^n)

and the fitted model with the largest number of interior points is taken as the optimal model.
Here e is the probability that a given point is not an interior point, N is the number of draws, n is the number of points drawn each time, and P is the desired confidence that at least one draw contains only interior points.
The optimal model equation is the fitted ground equation, which can be expressed as ax + by + cz = d.
Finally, the ground normal at the vehicle's position, (a, b, c), is calculated from the ground equation ax + by + cz = d; the ground point set is removed, and the original point cloud data is projected along the ground normal direction to obtain the projection map, as shown in fig. 2.
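A minimal sketch of the condition screening and RANSAC plane fit described above is given below; the 10 cm height gate and the 10° normal test follow the text, while the fixed iteration count and the inlier threshold `tau` are illustrative choices standing in for the adaptive criteria described above.

```python
import numpy as np

def fit_ground_plane(points, height_max=0.10, max_angle_deg=10.0,
                     iters=200, tau=0.05):
    """RANSAC fit of the ground plane ax + by + cz = d on low-lying points.

    points: (N, 3) array; returns (normal, d) with normal = (a, b, c).
    """
    rng = np.random.default_rng(0)
    cand = points[points[:, 2] < height_max]      # ground candidate set
    cand = cand[cand[:, 2] < cand[:, 2].mean()]   # keep below-average heights
    best_n, best_d, best_count = None, 0.0, -1
    for _ in range(iters):
        sample = cand[rng.choice(len(cand), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                              # degenerate (collinear) draw
        n /= norm
        if abs(n[2]) < np.cos(np.radians(max_angle_deg)):
            continue                              # normal too far from z axis
        d = float(sample[0] @ n)
        dist = np.abs(cand @ n - d)               # degree of fit d_i per point
        count = int((dist < tau).sum())           # interior points, d_i < tau
        if count > best_count:
            best_n, best_d, best_count = n, d, count
    return best_n, best_d
```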
Preferably, the translation amount of the vehicle is calculated from the projection maps of adjacent frames of three-dimensional point cloud data as follows:
extracting from the inertial data the adjacent quaternions corresponding to the adjacent frames of three-dimensional point cloud data;
converting the quaternions into Euler angles, and converting the yaw angle of the Euler angles from radians to degrees to obtain the rotation angle of the vehicle on the ground;
aligning the projection maps of the adjacent frames using the rotation angle, so that no rotation relation remains between them;
converting the projection maps from the spatial domain to the frequency domain with the phase correlation method to obtain spectrograms;
performing fast Fourier transform and centering on the spectrograms, and extracting the phase from the centered spectrograms;
combining the phases of the adjacent frame projection maps into complex form to obtain the cross-power spectrum of the adjacent frame projection maps;
performing inverse fast Fourier transform on the cross-power spectrum to obtain a Dirac function;
extracting the peak coordinate of the Dirac function to obtain the translation amount.
Specifically, the two quaternions Q1 and Q2 corresponding to two adjacent projection maps are extracted from the collected inertial data, and the quaternion Q corresponding to the rotation of the vehicle over this period (the period between the two adjacent frames of projection maps / three-dimensional point cloud data) is calculated from them as:

Q = Q1 * Q2*

where Q2* denotes the conjugate of the quaternion Q2.
The quaternion Q = (q0, q1, q2, q3) is then converted into Euler angles (roll, pitch, yaw):

roll = arcsin(2q2q3 + 2q0q1)
pitch = arctan2(2q0q2 - 2q1q3, q0^2 - q1^2 - q2^2 + q3^2)
yaw = arctan2(2q0q3 - 2q1q2, q0^2 - q1^2 + q2^2 - q3^2)

where q0 is the real part and q1, q2, q3 are the three imaginary parts of the quaternion, and roll, pitch and yaw are the Euler angles of rotation about the x, y and z axes respectively.
The yaw angle, which describes rotation in the horizontal plane, is converted from radians to degrees to obtain the rotation angle θ; the later of the two adjacent projection maps is then rotated by θ to align it with the earlier one, ensuring that no rotation relation remains between the two.
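The quaternion algebra of this step can be sketched as follows; the Hamilton product and conjugate are standard, and the yaw formula follows the Euler-angle convention given above. The function names and the scalar-first component order are assumptions for illustration.

```python
import numpy as np

def quat_conjugate(q):
    """Conjugate of a quaternion q = (q0, q1, q2, q3), scalar part first."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def quat_multiply(a, b):
    """Hamilton product a * b, scalar part first."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def relative_yaw_degrees(q1, q2):
    """Rotation angle theta between two IMU samples, following Q = Q1 * Q2*."""
    q0, qx, qy, qz = quat_multiply(q1, quat_conjugate(q2))
    yaw = np.arctan2(2*q0*qz - 2*qx*qy,
                     q0**2 - qx**2 + qy**2 - qz**2)  # yaw formula from the text
    return np.degrees(yaw)                           # radians -> degrees
```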
The projection maps are then transformed with the phase correlation method. Let the two angle-corrected adjacent projection maps be f1(x, y) and f2(x, y); they then differ only by a translation in the spatial domain, f2(x, y) = f1(x - x0, y - y0). Each projection map of size M × N is fast-Fourier-transformed as:

F(u, v) = Σ_{x=0..M-1} Σ_{y=0..N-1} f(x, y) · e^(-j2π(ux/M + vy/N))

where u and v denote the spatial frequencies along the x and y axes, F(u, v) is the result of the fast Fourier transform of the image, and f(x, y) is the pixel value at pixel (x, y) of the image being transformed.
The spectra of the two transformed images satisfy:

F2(ξ, η) = e^(-j2π(ξΔx + ηΔy)) · F1(ξ, η)

where F1(ξ, η) and F2(ξ, η) are the spectrograms obtained by fast Fourier transform of the projection maps f1(x, y) and f2(x, y) respectively.
The transform takes the image information from the spatial domain to the frequency domain. The image spectrum has clearly bright and dark parts: the four corners are brighter, representing the low-frequency components that carry most of the image energy, while the center is darker, representing the high-frequency components with less energy; this yields the spectrogram. The spectrogram is then centered, i.e. the brighter corner regions are shifted to the center of the image, giving the final spectrogram shown in fig. 3; its center is the zero-frequency component, and the frequency increases outward from the center.
The phase is then extracted from the fast-Fourier-transformed, centered spectrogram; in the frequency domain the phase carries the change information between the two spectrograms. With the real and imaginary parts denoted Re and Im, the phase spectrum can be expressed as:

φ(u, v) = arctan(Im F(u, v) / Re F(u, v))

where φ(u, v) is the phase spectrum of the projection map f(x, y).
The phases of the two spectrograms are then combined in complex form to obtain the image cross-power spectrum:

(F1(ξ, η) · F2*(ξ, η)) / |F1(ξ, η) · F2*(ξ, η)| = e^(j2π(ξΔx + ηΔy))

where F2*(ξ, η) denotes the complex conjugate of F2(ξ, η); the computed cross-power spectrum contains the transformation information of the two spectrograms.
Finally, an inverse fast Fourier transform of the cross-power spectrum yields the Dirac function δ(x - Δx, y - Δy). The Dirac function is an impulse function; when the images are completely identical its peak is 1 at the origin (0, 0), and when the two images differ only by a translation and have a large overlapping region, the peak coordinate corresponds to the translation between them. Extracting the peak coordinate of the Dirac function finally gives the translation amount T of the projection maps.
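The complete translation estimate of this step can be condensed into a short routine; the sketch below assumes two equal-size, already rotation-aligned projection maps as real-valued 2-D arrays, and the sign convention of the recovered shift is one possible choice.

```python
import numpy as np

def phase_correlation_shift(img1, img2):
    """Translation (dx, dy) such that img2 is approximately img1 shifted.

    Uses the normalized cross-power spectrum; its inverse FFT is an
    approximate Dirac impulse whose peak coordinate is the translation.
    """
    F1 = np.fft.fft2(img1)
    F2 = np.fft.fft2(img2)
    cross = np.conj(F1) * F2
    cross /= np.maximum(np.abs(cross), 1e-12)     # keep phase only
    dirac = np.real(np.fft.ifft2(cross))          # impulse at the shift
    dy, dx = np.unravel_index(np.argmax(dirac), dirac.shape)
    # Peaks past the midpoint wrap around and encode negative shifts.
    if dy > img1.shape[0] // 2:
        dy -= img1.shape[0]
    if dx > img1.shape[1] // 2:
        dx -= img1.shape[1]
    return dx, dy
```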
Preferably, the rotation amount of the vehicle between adjacent frames of three-dimensional point cloud data is calculated from the inertial data as follows:
extracting from the inertial data the adjacent quaternions corresponding to the adjacent frames of three-dimensional point cloud data;
converting the quaternions into Euler angles, and zeroing the roll angle and pitch angle of the Euler angles;
converting the zeroed Euler angles into a rotation matrix to obtain the rotation amount.
When calculating the rotation amount, the two quaternions Q1 and Q2 corresponding to two adjacent projection maps are likewise extracted from the collected inertial data, and the quaternion Q corresponding to the rotation of the vehicle over this period (the period between the two adjacent frames of projection maps / three-dimensional point cloud data) is calculated as:

Q = Q1 * Q2*

where Q2* denotes the conjugate of the quaternion Q2.
The quaternion Q = (q0, q1, q2, q3) is then converted into Euler angles (roll, pitch, yaw):

roll = arcsin(2q2q3 + 2q0q1)
pitch = arctan2(2q0q2 - 2q1q3, q0^2 - q1^2 - q2^2 + q3^2)
yaw = arctan2(2q0q3 - 2q1q2, q0^2 - q1^2 + q2^2 - q3^2)

where q0 is the real part and q1, q2, q3 are the three imaginary parts of the quaternion, and roll, pitch and yaw are the Euler angles of rotation about the x, y and z axes respectively.
The roll angle and pitch angle of the Euler angles are set to 0, since the intelligent vehicle moves on level ground and its roll and pitch should therefore be 0, forming the new Euler angles (0, 0, yaw).
The zeroed Euler angles are converted into a rotation matrix, giving the rotation matrix R of the vehicle over the period between the two adjacent frames of three-dimensional point cloud data:

R = [ cos(yaw)  -sin(yaw)  0
      sin(yaw)   cos(yaw)  0
      0          0         1 ]
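A one-function sketch of this conversion is shown below; with the zeroed Euler angles (0, 0, yaw), the rotation matrix reduces to a rotation about the z axis.

```python
import numpy as np

def yaw_to_rotation_matrix(yaw):
    """Rotation matrix R for the zeroed Euler angles (0, 0, yaw), yaw in radians."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])
```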
and after the translation amount and the rotation amount are obtained, the odometer track can be drawn.
After the track is drawn, feature extraction is performed so that the features can be associated with the track points to form the three-dimensional map. In this embodiment global and local features are extracted at the same time: ORB (Oriented FAST and Rotated BRIEF) global features are extracted from the depth map information, and PFH (Point Feature Histograms) local features are extracted from the filtered point cloud data, as described in detail below.
Preferably, the feature information of the three-dimensional point cloud data comprises a global feature descriptor;
extracting a global feature descriptor of the three-dimensional point cloud data, specifically:
converting the three-dimensional point cloud data into a two-dimensional depth map;
carrying out histogram equalization on the depth map, and carrying out normalization processing on the image size;
taking the image center point of the normalized image as the position of the feature point;
and calculating a local feature descriptor of the image center point of the adjacent frame depth map as a global feature descriptor.
Preferably, the three-dimensional point cloud data is converted into a two-dimensional depth map, specifically:
and taking the Euclidean distance from each point in the three-dimensional point cloud data to the laser radar as the pixel value of the corresponding point in the depth map to obtain the depth map.
Specifically, the original three-dimensional point cloud data is converted into a two-dimensional depth map: the Euclidean distance from each point in a frame of three-dimensional point cloud data to the lidar is used as that point's pixel value in the depth map, and the pixel coordinate of the point in the depth map is determined from the point's position relative to the other points in the cloud, finally yielding the depth map.
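A sketch of this conversion is given below. The pixel value is the Euclidean distance to the lidar, as the text specifies; the spherical row/column mapping and the 64 × 1024 image size are assumptions typical of a spinning lidar, since the text does not spell out how the relative positions map to pixel coordinates.

```python
import numpy as np

def cloud_to_depth_map(points, h=64, w=1024, fov_up=15.0, fov_down=-15.0):
    """Project one lidar frame (N, 3) to a 2-D depth (range) image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)              # depth = distance to the lidar
    azimuth = np.arctan2(y, x)                   # horizontal angle of each point
    elevation = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    col = ((azimuth + np.pi) / (2 * np.pi) * w).astype(int) % w
    span = np.radians(fov_up - fov_down)
    row = ((np.radians(fov_up) - elevation) / span * (h - 1)).astype(int)
    row = np.clip(row, 0, h - 1)
    depth = np.zeros((h, w))
    depth[row, col] = r   # if several points share a pixel, the last write wins
    return depth
```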
When extracting the global feature, the depth map is first histogram-equalized, and the whole image is normalized to 63 × 63 pixels to compensate for the ORB global feature descriptor's lack of scale invariance. The normalized whole image is then used directly as the feature region and the image center directly as the feature point position, and the local feature descriptor of the image is computed as the ORB global feature descriptor, a 256-dimensional binary vector. The local feature descriptor of the image center point of adjacent frame depth maps is calculated as the global feature descriptor, specifically:
calculating the Hamming distance of the image center points of the adjacent frame depth maps as the local feature descriptor of the image center points:
wherein, Hamm (X)1,X2) Representing a depth map X1And depth map X2The hamming distance of (a), XOR () represents an exclusive or operation,representation versus depth map X1And depth map X2A bitwise xor operation is performed with i representing a binary digit number.
The shorter the Hamming distance between the vectors, the more similar the global features of the two images.
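For two 256-bit descriptors packed as byte arrays, the Hamming distance of the formula above can be computed as follows; the bytewise packing is an illustrative assumption.

```python
import numpy as np

def hamming_distance(desc1, desc2):
    """Hamming distance between two 256-bit descriptors (32 bytes each)."""
    a = np.asarray(desc1, dtype=np.uint8)
    b = np.asarray(desc2, dtype=np.uint8)
    # XOR leaves a 1 exactly where the descriptors disagree; count those bits.
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())
```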
Preferably, the feature information of the three-dimensional point cloud data comprises local feature descriptors;
extracting a local feature descriptor of the three-dimensional point cloud data, specifically:
extracting feature points of the three-dimensional point cloud data;
searching a plurality of points closest to the target point by adopting a Kd-tree, and connecting the searched points pairwise to construct a ball network;
defining a local coordinate system at one of the points of said net;
constructing a triple representing characteristics by utilizing the normal lines of the two points and the relative position relation with the local coordinate system;
and calculating all triples in the ball network and voting respectively to obtain the local feature descriptors.
The embodiment extracts the PFH local features of the point cloud as follows. First, feature points are extracted from the filtered point cloud data; the K points closest to a target point are found with a Kd-tree, and every two of them are connected to form a ball network. For two points p1 and p2 with normal vectors n1 and n2, a local coordinate system uvw is defined at one of the two points in order to compute the relative difference between them, where the three axes satisfy:

u = n1
v = u × (p2 - p1) / |p2 - p1|
w = u × v

From the local coordinate system and the normals of the two selected points, a triple (α, φ, θ) describing the point-pair feature is defined, satisfying:

α = v · n2
φ = u · (p2 - p1) / |p2 - p1|
θ = arctan(w · n2, u · n2)

For the K points in the spherical neighborhood of a target point, the triples of all point pairs are computed. The range of each of the three dimensions of the triple is divided into b equal segments between its minimum and maximum, with b taken as 5, and a histogram is constructed over the 5 segments of each dimension; counting the frequency of the triples over the segments gives a 5 × 5 × 5 = 125-dimensional local feature descriptor. The smaller the Euclidean distance between descriptors, the more similar the points.
After the translation amount and the rotation amount are obtained, a vehicle motion model is established, and the odometer track is drawn from the rotation matrices and translation vectors obtained from consecutive adjacent frames of lidar data. The vehicle motion model is:

px(k) = tx(0) + Σ_{i=1..k} (tx(i)·cos Θi - ty(i)·sin Θi)
py(k) = ty(0) + Σ_{i=1..k} (tx(i)·sin Θi + ty(i)·cos Θi),  Θi = θ1 + θ2 + ... + θi

where px(k) and py(k) are the vehicle x-axis and y-axis coordinates corresponding to the k-th frame projection map; tx(0) and ty(0) are the vehicle starting point, i.e. the vehicle x-axis and y-axis coordinates corresponding to the 0th frame projection map; θi is the relative rotation angle between the i-th frame projection map and the (i-1)-th frame projection map; tx(i) and ty(i) are the x-axis and y-axis components of the translation between the (i-1)-th and i-th frame projection maps; and the coordinate of the 0th frame projection map is p0(0, 0). By multiplying the rotation amount R and the translation amount T through this model, the positions of the projection maps are converted into poses relative to the starting point.
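The accumulation expressed by the motion model can be sketched as a dead-reckoning loop; the per-pair inputs match the rotation angles θi and translations (tx(i), ty(i)) above, and the function name is illustrative.

```python
import numpy as np

def accumulate_odometer_track(thetas, translations, start=(0.0, 0.0)):
    """Dead-reckon track nodes from per-frame-pair increments.

    thetas[i]: relative rotation angle (radians) between frames i and i+1.
    translations[i]: (tx, ty) translation between the two aligned projections.
    """
    pose = np.array(start, dtype=float)
    heading = 0.0
    track = [pose.copy()]
    for theta, (tx, ty) in zip(thetas, translations):
        heading += theta                          # cumulative rotation Theta_i
        c, s = np.cos(heading), np.sin(heading)
        pose = pose + np.array([c * tx - s * ty,  # rotate the increment into
                                s * tx + c * ty]) # the world frame and add it
        track.append(pose.copy())
    return np.array(track)
```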
After the odometer track is acquired, the rough longitude and latitude coordinates obtained from the global positioning system are received by the vehicle-mounted GPS receiver, and precise longitude and latitude coordinates are obtained by combining them with the real-time kinematic (RTK) carrier-phase differential technique, generating a high-precision GPS track. The odometer track is corrected with this high-precision GPS track, and through data association the global and local features of each track node of the vehicle are matched to the corresponding high-precision GPS coordinate, yielding the final map.
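The data association between track nodes and GPS coordinates is not spelled out in the text; the sketch below uses nearest-timestamp matching as one plausible reading, so the approach itself, like the function and parameter names, is an assumption.

```python
import numpy as np

def snap_track_to_gps(track_times, gps_times, gps_coords):
    """Associate each odometer track node with the GPS fix nearest in time.

    track_times: (K,) node timestamps; gps_times: (G,) sorted fix timestamps;
    gps_coords: (G, 2) high-precision coordinates. Returns (K, 2) coordinates.
    """
    idx = np.searchsorted(gps_times, track_times)
    idx = np.clip(idx, 1, len(gps_times) - 1)
    # Choose whichever neighboring fix is closer in time.
    left_closer = (track_times - gps_times[idx - 1]) < (gps_times[idx] - track_times)
    idx = np.where(left_closer, idx - 1, idx)
    return gps_coords[idx]
```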
Example 2
The map generation device based on the IMU and the lidar provided by this embodiment of the invention is used to implement the map generation method based on the IMU and the lidar; it therefore has the technical effects of that method, which are not repeated here.
Example 3
As shown in fig. 4, embodiment 3 of the present invention provides a map generation system based on the IMU and the lidar, comprising the map generation device based on the IMU and the lidar provided in embodiment 2, and further comprising a vehicle body 1, a lidar 2, an IMU 3, and a GPS receiver 4;
the map generation device based on the IMU and the lidar, the lidar 2, the IMU3 and the GPS receiver 4 are all installed on the vehicle body 1, and the lidar 2, the IMU3 and the GPS receiver 4 are respectively and electrically connected with the map generation device based on the IMU and the lidar;
the laser radar 2 is used for acquiring three-dimensional point cloud data and sending the three-dimensional point cloud data to the map generation device based on the IMU and the laser radar;
the IMU3 is used for collecting inertial data of a vehicle and sending the inertial data to the map generating device based on the IMU and the laser radar;
the GPS receiver 4 is connected with a remote differential GPS reference station 6 and is used for collecting a GPS track of a vehicle and sending the GPS track to the map generating device based on the IMU and the laser radar.
Specifically, in this embodiment the map generation device based on the IMU and the lidar is implemented with the industrial personal computer 7. The vehicle-mounted GPS receiver 4 receives the rough longitude and latitude coordinates from the global positioning system, and precise longitude and latitude coordinates are obtained by combining them with the real-time kinematic (RTK) carrier-phase differential technique against the GPS reference station, generating a high-precision GPS track. The acquisition frequency of both the lidar 2 and the GPS receiver 4 is 10 Hz, and that of the IMU 3 is 100 Hz. A display 5 is also provided on the vehicle body to show the three-dimensional map, and a power supply 8 powers the lidar 2, the IMU 3, the GPS receiver 4 and the industrial personal computer 7. The display 5 and the power supply 8 may both be those built into the vehicle body.
Example 4
Embodiment 4 of the present invention provides a computer storage medium having stored thereon a computer program that, when executed by a processor, implements the IMU and lidar based map generation method provided in embodiment 1.
The computer storage medium provided by this embodiment of the invention is used to implement the map generation method based on the IMU and the lidar, and therefore has the technical effects of that method, which are not repeated here.
The above-described embodiments of the present invention should not be construed as limiting the scope of the present invention. Any other corresponding changes and modifications made according to the technical idea of the present invention should be included in the protection scope of the claims of the present invention.
Claims (10)
1. A map generation method based on an IMU and a lidar, characterized by comprising the following steps:
acquiring three-dimensional point cloud data collected by a lidar, and acquiring inertial data collected by an IMU;
generating a projection map on the ground from the three-dimensional point cloud data, and calculating the translation amount of the vehicle from the projection maps of adjacent frames of three-dimensional point cloud data;
calculating the rotation amount of the vehicle between adjacent frames of three-dimensional point cloud data from the inertial data;
plotting trajectory nodes by combining the translation amount and the rotation amount to obtain an odometer track;
acquiring a GPS track, and correcting the odometer track with the GPS track;
extracting the feature information of each frame of three-dimensional point cloud data, and associating each piece of feature information with a GPS coordinate in the GPS track according to the corrected odometer track to obtain a three-dimensional map.
2. The IMU and lidar based map generation method of claim 1, wherein the projection map on the ground is generated from the three-dimensional point cloud data by:
filtering and downsampling the three-dimensional point cloud data with a voxel grid filter to obtain filtered point cloud data;
screening a ground point set out of the three-dimensional point cloud data by condition screening;
fitting the ground point set to obtain a ground equation, and solving the normal of the ground equation to obtain the ground normal;
projecting the filtered point cloud data along the ground normal direction to obtain the projection map.
3. The IMU and lidar based map generation method of claim 1, wherein the translation amount of the vehicle is calculated from the projection maps of adjacent frames of three-dimensional point cloud data by:
extracting from the inertial data the adjacent quaternions corresponding to the adjacent frames of three-dimensional point cloud data;
converting the quaternions into Euler angles, and converting the yaw angle of the Euler angles from radians to degrees to obtain the rotation angle of the vehicle on the ground;
aligning the projection maps of the adjacent frames using the rotation angle, so that no rotation relation remains between them;
converting the projection maps from the spatial domain to the frequency domain with the phase correlation method to obtain spectrograms;
performing fast Fourier transform and centering on the spectrograms, and extracting the phase from the centered spectrograms;
combining the phases of the adjacent frame projection maps into complex form to obtain the cross-power spectrum of the adjacent frame projection maps;
performing inverse fast Fourier transform on the cross-power spectrum to obtain a Dirac function;
extracting the peak coordinate of the Dirac function to obtain the translation amount.
4. The IMU and lidar based map generation method of claim 1, wherein the rotation amount of the vehicle between adjacent frames of three-dimensional point cloud data is calculated from the inertial data by:
extracting from the inertial data the adjacent quaternions corresponding to the adjacent frames of three-dimensional point cloud data;
converting the quaternions into Euler angles, and zeroing the roll angle and pitch angle of the Euler angles;
converting the zeroed Euler angles into a rotation matrix to obtain the rotation amount.
5. The IMU and lidar based map generation method of claim 1, wherein the feature information of the three-dimensional point cloud data comprises a global feature descriptor;
extracting a global feature descriptor of the three-dimensional point cloud data, specifically:
converting the three-dimensional point cloud data into a two-dimensional depth map;
carrying out histogram equalization on the depth map, and carrying out normalization processing on the image size;
taking the image center point of the normalized image as the position of the feature point;
and calculating a local feature descriptor of the image center point of the adjacent frame depth map as a global feature descriptor.
6. The IMU and lidar based map generation method of claim 5, wherein the three-dimensional point cloud data is converted to a two-dimensional depth map, specifically:
and taking the Euclidean distance from each point in the three-dimensional point cloud data to the laser radar as the pixel value of the corresponding point in the depth map to obtain the depth map.
7. The IMU and lidar based map generation method of claim 1, wherein the feature information of the three-dimensional point cloud data comprises local feature descriptors;
extracting a local feature descriptor of the three-dimensional point cloud data, specifically:
extracting feature points of the three-dimensional point cloud data;
searching a plurality of points closest to the target point by adopting a Kd-tree, and connecting the searched points pairwise to construct a ball network;
defining a local coordinate system on one of the two connected points;
constructing a triple representing characteristics by utilizing the normal lines of the two points and the relative position relation with the local coordinate system;
and calculating all triples in the ball network and voting respectively to obtain the local feature descriptors.
8. An IMU and lidar based map generation apparatus comprising a processor and a memory, the memory having stored thereon a computer program that, when executed by the processor, implements an IMU and lidar based map generation method as claimed in any of claims 1-7.
9. An IMU and lidar based map generation system, comprising the IMU and lidar based map generation apparatus of claim 8, further comprising a vehicle body, a lidar, an IMU, and a GPS receiver;
the map generation device based on the IMU and the lidar, the lidar, the IMU and the GPS receiver are all mounted on the vehicle body, and the lidar, the IMU and the GPS receiver are each electrically connected to the map generation device based on the IMU and the lidar;
the laser radar is used for acquiring three-dimensional point cloud data and sending the three-dimensional point cloud data to the map generation device based on the IMU and the laser radar;
the IMU is used for collecting inertial data of a vehicle and sending the inertial data to the map generation device based on the IMU and the laser radar;
the GPS receiver is connected with a remote differential GPS reference station and used for collecting a GPS track of a vehicle and sending the GPS track to the map generating device based on the IMU and the laser radar.
10. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the IMU and lidar based map generation method of any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202010741680.8A | 2020-07-29 | 2020-07-29 | Map generation method, device and system based on IMU and laser radar
Publications (2)
Publication Number | Publication Date |
---|---|
CN112082545A (en) | 2020-12-15
CN112082545B CN112082545B (en) | 2022-06-21 |
Family
ID=73735192
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010741680.8A Active CN112082545B (en) | 2020-07-29 | 2020-07-29 | Map generation method, device and system based on IMU and laser radar |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112082545B (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190226852A1 (en) * | 2016-09-09 | 2019-07-25 | Nanyang Technological University | Simultaneous localization and mapping methods and apparatus |
CN109407073A (en) * | 2017-08-15 | 2019-03-01 | 百度在线网络技术(北京)有限公司 | Reflected value map constructing method and device |
CN108665540A (en) * | 2018-03-16 | 2018-10-16 | 浙江工业大学 | Robot localization based on binocular vision feature and IMU information and map structuring system |
CN109991636A (en) * | 2019-03-25 | 2019-07-09 | 启明信息技术股份有限公司 | Map constructing method and system based on GPS, IMU and binocular vision |
CN110261870A (en) * | 2019-04-15 | 2019-09-20 | 浙江工业大学 | It is a kind of to synchronize positioning for vision-inertia-laser fusion and build drawing method |
CN110850439A (en) * | 2020-01-15 | 2020-02-28 | 奥特酷智能科技(南京)有限公司 | High-precision three-dimensional point cloud map construction method |
CN111272181A (en) * | 2020-02-06 | 2020-06-12 | 北京京东乾石科技有限公司 | Method, device, equipment and computer readable medium for constructing map |
CN111427061A (en) * | 2020-06-15 | 2020-07-17 | 北京云迹科技有限公司 | Robot mapping method and device, robot and storage medium |
Non-Patent Citations (3)
Title |
---|
YE HAOYANG, ET AL.: "Tightly Coupled 3D Lidar Inertial Odometry and Mapping", IEEE International Conference on Robotics and Automation (ICRA) *
WU JING: "Accuracy evaluation of three-dimensional simultaneous localization and mapping for driverless vehicles" (无人驾驶汽车三维同步定位与建图精度评估), Equipment Machinery (装备机械) *
LI YICHENG ET AL.: "High-precision positioning algorithm for intelligent vehicles based on GPS and image fusion" (基于GPS与图像融合的智能车辆高精度定位算法), Journal of Transportation Systems Engineering and Information Technology (交通运输系统工程与信息) *
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112731357A (en) * | 2020-12-30 | 2021-04-30 | 清华大学 | Real-time correction method and system for positioning error of laser point cloud odometer |
CN112835063A (en) * | 2021-01-08 | 2021-05-25 | 北京京东乾石科技有限公司 | Method, device and equipment for determining dynamic and static properties of object and storage medium |
CN112835063B (en) * | 2021-01-08 | 2024-04-12 | 北京京东尚科信息技术有限公司 | Method, device, equipment and storage medium for determining dynamic and static properties of object |
CN112859051A (en) * | 2021-01-11 | 2021-05-28 | 桂林电子科技大学 | Method for correcting laser radar point cloud motion distortion |
CN112859051B (en) * | 2021-01-11 | 2024-04-09 | 桂林电子科技大学 | Laser radar point cloud motion distortion correction method |
CN113066105A (en) * | 2021-04-02 | 2021-07-02 | 北京理工大学 | Positioning and mapping method and system based on fusion of laser radar and inertial measurement unit |
CN113066105B (en) * | 2021-04-02 | 2022-10-21 | 北京理工大学 | Positioning and mapping method and system based on fusion of laser radar and inertial measurement unit |
WO2022247045A1 (en) * | 2021-05-28 | 2022-12-01 | 浙江大学 | Laser radar information-based mobile robot location re-identification method |
CN113405560A (en) * | 2021-05-28 | 2021-09-17 | 武汉理工大学 | Unified modeling method for vehicle positioning and path planning |
CN113379841A (en) * | 2021-06-21 | 2021-09-10 | 上海仙工智能科技有限公司 | Laser SLAM method based on phase correlation method and factor graph and readable storage medium thereof |
CN113379841B (en) * | 2021-06-21 | 2024-04-30 | 上海仙工智能科技有限公司 | Laser SLAM method based on phase correlation method and factor graph and readable storage medium thereof |
CN114018269B (en) * | 2021-11-22 | 2024-03-26 | 阿波罗智能技术(北京)有限公司 | Positioning method, positioning device, electronic equipment, storage medium and automatic driving vehicle |
CN114018269A (en) * | 2021-11-22 | 2022-02-08 | 阿波罗智能技术(北京)有限公司 | Positioning method, positioning device, electronic equipment, storage medium and automatic driving vehicle |
CN114608568A (en) * | 2022-02-22 | 2022-06-10 | 北京理工大学 | Multi-sensor-based information instant fusion positioning method |
CN114608568B (en) * | 2022-02-22 | 2024-05-03 | 北京理工大学 | Multi-sensor information based instant fusion positioning method |
CN116804765A (en) * | 2023-08-25 | 2023-09-26 | 兰笺(苏州)科技有限公司 | Automatic measurement method and device for real-quantity index of indoor space actual measurement |
CN116804765B (en) * | 2023-08-25 | 2023-11-14 | 兰笺(苏州)科技有限公司 | Automatic measurement method and device for real-quantity index of indoor space actual measurement |
Also Published As
Publication number | Publication date |
---|---|
CN112082545B (en) | 2022-06-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112082545B (en) | Map generation method, device and system based on IMU and laser radar | |
CN110070615B (en) | Multi-camera cooperation-based panoramic vision SLAM method | |
Naseer et al. | Deep regression for monocular camera-based 6-dof global localization in outdoor environments | |
CN109360240B (en) | Small unmanned aerial vehicle positioning method based on binocular vision | |
CN109029433B (en) | Method for calibrating external parameters and time sequence based on vision and inertial navigation fusion SLAM on mobile platform | |
CN109165680B (en) | Single-target object dictionary model improvement method in indoor scene based on visual SLAM | |
CN105976353A (en) | Spatial non-cooperative target pose estimation method based on model and point cloud global matching | |
CN106780729A (en) | A kind of unmanned plane sequential images batch processing three-dimensional rebuilding method | |
CN110060329B (en) | Mobile terminal human body model reconstruction method based on color depth video stream data | |
CN111998862B (en) | BNN-based dense binocular SLAM method | |
CN112179357B (en) | Monocular camera-based visual navigation method and system for plane moving target | |
CN107862733B (en) | Large-scale scene real-time three-dimensional reconstruction method and system based on sight updating algorithm | |
CN111307146A (en) | Virtual reality wears display device positioning system based on binocular camera and IMU | |
CN115187676A (en) | High-precision line laser three-dimensional reconstruction calibration method | |
CN114022560A (en) | Calibration method and related device and equipment | |
CN116097307A (en) | Image processing method and related equipment | |
Firintepe et al. | The more, the merrier? A study on in-car IR-based head pose estimation | |
CN111664845B (en) | Traffic sign positioning and visual map making method and device and positioning system | |
CN110706288A (en) | Target detection method, device, equipment and readable storage medium | |
CN113284249B (en) | Multi-view three-dimensional human body reconstruction method and system based on graph neural network | |
CN117132737B (en) | Three-dimensional building model construction method, system and equipment | |
US20220129660A1 (en) | System and method of acquiring coordinates of pupil center point | |
CN118262127A (en) | Unmanned aerial vehicle image matching positioning method based on data preprocessing | |
CN117036666A (en) | Unmanned aerial vehicle low-altitude positioning method based on inter-frame image stitching | |
CN113723468B (en) | Object detection method of three-dimensional point cloud |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||