CN107505644B - Three-dimensional high-precision map generation system and method based on vehicle-mounted multi-sensor fusion - Google Patents

Three-dimensional high-precision map generation system and method based on vehicle-mounted multi-sensor fusion

Info

Publication number
CN107505644B
CN107505644B (application CN201710629023.2A)
Authority
CN
China
Prior art keywords
camera
laser
vehicle
image
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710629023.2A
Other languages
Chinese (zh)
Other versions
CN107505644A (en)
Inventor
胡钊政
穆孟超
李祎承
游继安
陶倩文
黄刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Technology WUT
Priority to CN201710629023.2A
Publication of CN107505644A
Application granted
Publication of CN107505644B
Legal status: Active

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • G01S19/45Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/47Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00Measuring distances in line of sight; Optical rangefinders
    • G01C3/02Details
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/01Satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/13Receivers

Abstract

The invention provides a three-dimensional high-precision map generation system and method based on vehicle-mounted multi-sensor fusion. The system comprises a data acquisition module and a data processing module. The data acquisition module comprises a camera, a laser range finder, a GPS/INS receiving processor and a differential GPS reference platform; the camera is fixed above a vehicle, the laser range finder is fixed above the camera, the GPS/INS receiving processor is fixed in the vehicle, and the differential GPS reference platform is arranged near the acquisition vehicle at a position where signals can be received. The data processing module is arranged in the vehicle and receives the data acquired by the data acquisition module; it fuses the image information with the three-dimensional laser information, so that the map has both image features and three-dimensional features, and combines this with track information consisting of high-precision longitude and latitude information and the current pose information of the vehicle to construct a multi-sensor-fused three-dimensional high-precision visual map, which is a basis for realizing visual positioning and an important component of intelligent vehicle environment perception.

Description

Three-dimensional high-precision map generation system and method based on vehicle-mounted multi-sensor fusion
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a three-dimensional high-precision map generation system and method based on vehicle-mounted multi-sensor fusion.
Background
With the development of intelligent driving, high-precision maps have received wide attention. High-precision maps are the basis for automated driving. At present, the field of intelligent transportation, and particularly intelligent vehicle unmanned driving, depends heavily on high-precision maps, and traditional maps fall far short of meeting the requirements of unmanned driving. As research on intelligent vehicle technology progresses, the problem of creating a high-precision map suitable for intelligent vehicles has gradually come into view.
With the advent of geographic information systems (GIS), conventional electronic maps store geographic information worldwide in digital form. The navigation electronic map industry has been developing for about twenty years; the production of high-precision navigation data began in the late 1980s in developed regions including North America and Europe. Its development can be divided into three stages. In the first stage, the 1990s, electronic maps were mainly used in vehicle-mounted navigators. In the second stage, from 2002 to 2007, the application of navigation electronic maps expanded from vehicle-mounted navigators to portable navigation devices (PNDs); compared with vehicle-mounted navigators, PNDs are low-cost and easy to install and operate, and so spread rapidly. In the third stage, from 2007 to the present, navigation electronic maps have expanded further from vehicle-mounted and handheld navigators to mobile phones; in particular, the emergence of GPS-equipped smartphones marked the expansion of the navigation market into the mass consumer market. Meanwhile, the construction of Chinese cities and urban roads has changed greatly in recent years, driving a rapid increase in consumer demand for navigation electronic map services. However, the positioning accuracy of a traditional map plus GPS is about 10 m, which cannot meet the high-accuracy positioning requirements of unmanned vehicles, so unmanned vehicles running at high speed face potential safety hazards. Systems that can currently achieve high-accuracy positioning are mostly expensive: for example, an inertial navigation system (INS) combined with GNSS can achieve positioning accuracy of about 20 cm, but such an inertial navigation system costs about 200,000 yuan or more, far higher than the price of today's mass-market automobiles.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to provide a three-dimensional high-precision map generation system and method based on vehicle-mounted multi-sensor fusion that can improve positioning precision.
The technical scheme adopted by the invention to solve the above technical problem is as follows: a three-dimensional high-precision map generation system based on vehicle-mounted multi-sensor fusion, characterized in that it comprises:
the data acquisition module, which comprises a camera, a laser range finder, a GPS/INS receiving processor and a differential GPS reference platform, wherein the camera is fixed above the vehicle, the laser range finder is fixed above the camera, the GPS/INS receiving processor is fixed in the vehicle, and the differential GPS reference platform is arranged near the acquisition vehicle at a position where signals can be received;
the data processing module is arranged in the vehicle, receives the data acquired by the data acquisition module and processes the data according to the following mode:
combining the collected three-dimensional laser characteristic points with the image to obtain three-dimensional characteristics according to the mapping relation between the laser information and the image information; the mapping relation of the laser information and the image information is obtained by calibrating a camera in advance to obtain internal and external parameters of the camera, calibrating the camera and a laser range finder and fusing a world coordinate system corresponding to the camera and a world coordinate system corresponding to the laser range finder;
extracting global features of image information acquired by a camera to obtain image features;
calculating a rotation matrix and a translation matrix of a local characteristic point containing three-dimensional characteristics in a previous image and a local characteristic point in a next image by using a PnP model, and drawing a track through the rotation matrix and the translation matrix of adjacent continuous images;
the GPS/INS receiving processor combines with an RTK technology to obtain longitude and latitude coordinates of the vehicle in the current state, and combines with vehicle position and attitude information obtained by calculation to generate running track information of the vehicle;
and correcting the drawn track according to the running track information to obtain a three-dimensional map.
According to the scheme, the data processing module is a vehicle-mounted industrial personal computer.
The map generation method realized by the above three-dimensional high-precision map generation system based on vehicle-mounted multi-sensor fusion is characterized by comprising the following steps:
S1, calibrating the camera in advance to obtain internal and external parameters of the camera, and calibrating the camera and the laser range finder so that the world coordinate system corresponding to the camera and the world coordinate system corresponding to the laser range finder are fused, obtaining the mapping relation between laser information and image information;
S2, combining the collected three-dimensional laser feature points with the images according to the mapping relation between the laser information and the image information to obtain three-dimensional features;
S3, extracting the global features of the image information collected by the camera to obtain image features;
S4, calculating the rotation matrix and translation matrix between local feature points containing three-dimensional features in the previous image and local feature points in the next image by using the PnP model, and drawing a track through the rotation and translation matrices of adjacent continuous images;
S5, acquiring the longitude and latitude coordinates of the vehicle in the current state by the GPS/INS receiving processor in combination with RTK technology, and generating the driving track information of the vehicle in combination with the calculated vehicle position and attitude information;
S6, correcting the drawn track according to the driving track information to obtain a three-dimensional map.
According to the method, the S1 specifically comprises the following steps:
S11, the conversion relation between image feature point pixel-plane coordinates and world coordinates:
The camera is calibrated to obtain its internal and external parameters. The pixel coordinates (u, v, 1) of a point on the image are related to the coordinates (X_w1, Y_w1, Z_w1, 1)^T in the camera's world coordinate system by:

s·[u, v, 1]^T = K·[R_c | T_c]·[X_w1, Y_w1, Z_w1, 1]^T

where s is a scale factor, the internal parameter matrix K composed of the camera's internal parameters is a 3×3 matrix, R_c in the external parameter matrix is a rotation matrix with dimensions 3×3, and T_c is a translation vector with dimensions 3×1; the internal and external parameters are obtained by Zhang Zhengyou checkerboard calibration;
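As a concrete illustration of S11, the following sketch runs Zhang Zhengyou checkerboard calibration with OpenCV. It is a minimal sketch, assuming a 9×6 inner-corner board, 25 mm squares, and images under calib/ — none of these values come from the patent.

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)   # inner corners per row/column (assumed)
SQUARE = 0.025     # checkerboard square size in metres (assumed)

# 3D corner positions in the board's own frame (the Z = 0 plane)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_pts, img_pts = [], []
for path in glob.glob("calib/*.png"):   # assumed image location
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# K is the 3x3 internal parameter matrix; rvecs/tvecs give the external
# parameters (R_c, T_c) of the board for each view.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
R_c, _ = cv2.Rodrigues(rvecs[0])   # 3x3 rotation matrix, first view
T_c = tvecs[0]                     # 3x1 translation vector
```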
S12, the position relation of the laser three-dimensional feature points relative to the laser range finder:
The coordinates (X_L, Y_L, Z_L, 1)^T of a feature point in the laser coordinate system yield the coordinates (X_W2, Y_W2, Z_W2, 1)^T of the laser point in the world coordinate system by:

[X_W2, Y_W2, Z_W2, 1]^T = [R_L, T_L; 0^T, 1]·[X_L, Y_L, Z_L, 1]^T

where R_L is a rotation matrix with dimensions 3×3 and T_L is a translation vector with dimensions 3×1, both obtained when the laser range finder is calibrated;
S13, the mapping relation of laser three-dimensional feature points to the image, i.e., the fusion of three-dimensional feature information and image information:
The camera and the laser range finder are calibrated to obtain R_u and t, and the correspondence between image points and three-dimensional laser points under a world coordinate system with the same origin is obtained according to:

s·[u, v, 1]^T = K·[R_u | t]·[X_L, Y_L, Z_L, 1]^T

where R_u is a rotation matrix with dimensions 3×3 and t is a translation vector with dimensions 3×1.
According to the method, the S13 specifically comprises the following steps:
S131, an external calibration method is adopted; the measured data are an image and 3D lidar data generated simultaneously by the camera and the laser range finder, and the calibration task is to determine the external parameters between the two sensors so that the 3D lidar data can be converted into image data;
S132, determining the chessboard plane in the camera coordinate system by using the camera calibration method;
S133, adjusting the chessboard plane according to the 3D lidar data, and calculating the laser range finder's coordinate-system data on the chessboard plane;
S134, calculating the rotation matrix R_u and the translation vector t:
In the laser coordinate system, the scanning plane of the laser range finder is denoted Σ_i; the parameters of Σ_i include the plane normal and the distance. In the camera coordinate system, the same plane scanned by the laser range finder is simultaneously detected by the camera and is denoted Π_i; the parameters of Π_i include the plane normal and distance information, with image-plane normal n_i'.
The transformation formula from the image coordinate system to the laser coordinate system is:

T_c = [R_u, t; 0^T, 1]

where T_c is a 4×4 matrix, the rotation matrix R_u is a 3×3 orthogonal matrix, and the translation vector t is a 3×1 vector;
The planes Σ_i and Π_i are expressed as follows:

Σ_i: n_i^T·x = d_i,  Π_i: m_i^T·x = b_i

where n_i and m_i are the normal vectors of the planes Σ_i and Π_i, and d_i and b_i are the distances of the device from the planes;
The plane Π_i is converted to the plane Π_i' by the rotation R_u:

Π_i': (R_u·m_i)^T·x = b_i
The translation vector t translates the plane Π_i' along a straight line onto the plane Σ_i, expressed as:

x ↦ I_{3×3}·x + t, which gives (R_u·m_i)^T·t = b_i − d_i

where I_{3×3} is the identity matrix;
The rotation matrix R_u, established from the correspondence between the two three-dimensional spaces, satisfies:

Q_1 = R_u·Q_2

where Q_1 = [n_1, n_2, n_3] and Q_2 = [m_1, m_2, m_3]; the rotation matrix is therefore R_u = Q_1·Q_2^{-1};
Substituting the rotation result n_i = R_u·m_i into the translation relation (R_u·m_i)^T·t = b_i − d_i for the three calibration planes yields the linear system A·t = B, so the calculated translation vector t is expressed as:

t = A^{-1}·B

where A = [R_u·m_1, R_u·m_2, R_u·m_3]^T and B = [b_1 − d_1, b_2 − d_2, b_3 − d_3]^T.
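The closed-form solution above reduces to a few matrix operations. The numpy sketch below computes R_u and t from three plane correspondences; the normals and distances are made-up example values, and a practical calibration would re-orthogonalize R_u and use more than three planes.

```python
import numpy as np

# Assumed example plane parameters: columns are n_i (laser frame) and
# m_i (camera frame); d and b are the corresponding plane distances.
n = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]).T
m = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]).T
d = np.array([2.0, 1.5, 3.0])
b = np.array([2.2, 1.4, 3.1])

Q1, Q2 = n, m                    # Q1 = [n1 n2 n3], Q2 = [m1 m2 m3]
R_u = Q1 @ np.linalg.inv(Q2)     # from Q1 = R_u * Q2

A = (R_u @ m).T                  # rows of A are (R_u * m_i)^T
B = b - d                        # B = [b_i - d_i]
t = np.linalg.solve(A, B)        # t = A^-1 * B
```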
According to the method, the S2 specifically comprises the following steps:
S21, shooting the scene around the vehicle with the camera and scanning the scene around the vehicle with the laser range finder, with the shooting frequency of the camera kept synchronized with the scanning frequency of the laser;
S22, extracting local feature points from the pictures shot by the camera, and describing the extracted local feature points;
S23, obtaining the three-dimensional feature points mapped to the local feature points in each picture by using the mapping relation of laser three-dimensional feature points to the image obtained in S1, i.e., combining the three-dimensional laser feature points with the image local feature points to obtain three-dimensional features:

s·[u, v, 1]^T = K·[R | T]·[X_L, Y_L, Z_L, 1]^T

The above formula is the mapping relation of laser three-dimensional feature points to the image, where (u, v, 1) are the pixel coordinates of a point on the image, (X_L, Y_L, Z_L, 1)^T are the coordinates of the feature point in the laser coordinate system, s is a scale factor, K is the camera's internal parameter matrix, R is the rotation matrix with dimensions 3×3, and T is the translation vector with dimensions 3×1.
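A minimal Python sketch of this projection follows; K, R, and T are placeholder values standing in for the S1 calibration results.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],     # assumed 3x3 internal matrix
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                           # 3x3 rotation from calibration
T = np.array([[0.0], [0.1], [0.0]])    # 3x1 translation from calibration

def project_laser_points(pts_laser):
    """pts_laser: Nx3 array of (X_L, Y_L, Z_L) in the laser frame."""
    cam = R @ pts_laser.T + T           # 3xN points in the camera frame
    uv1 = K @ cam                       # homogeneous pixel coordinates
    uv = uv1[:2] / uv1[2]               # divide out the scale factor s
    return uv.T                         # Nx2 pixel coordinates (u, v)

pixels = project_laser_points(np.array([[1.0, 0.2, 5.0],
                                        [-0.5, 0.1, 4.0]]))
```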
According to the method, the S3 specifically comprises the following steps:
S31, scaling the image acquired in S2 so that the acquired image has a size of N×M pixels;
S32, performing graying and histogram equalization on each frame of image, and then computing the global feature f_q of the image.
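One possible reading of S31-S32 is sketched below with OpenCV: the frame is scaled to N×M, grayed, histogram-equalized, and flattened into a normalized vector. The patent does not fix a specific descriptor for f_q, so the flattened equalized image is an assumed stand-in.

```python
import cv2
import numpy as np

N, M = 320, 240          # assumed target size in pixels

def global_feature(frame_bgr):
    small = cv2.resize(frame_bgr, (N, M))             # scale to N x M
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)    # graying
    eq = cv2.equalizeHist(gray)                       # histogram equalization
    f_q = eq.astype(np.float32).ravel()               # global feature vector
    return f_q / (np.linalg.norm(f_q) + 1e-9)
```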
According to the method, the S4 specifically comprises the following steps:
S41, establishing the pose relationship between the local feature points mapped onto the image from the laser points and the laser three-dimensional feature points by using the PnP model, so as to acquire the pose information of the vehicle in its current state;
The pose relation between the local features and the three-dimensional feature points established with the PnP model is:

[u_i, v_i, 1]^T ≅ K·[R_p | T_p]·[X_i, Y_i, Z_i, 1]^T

where [u_i, v_i, 1]^T are the coordinates of the i-th local feature point in the acquired image, and [X_i, Y_i, Z_i, 1]^T is the corresponding i-th three-dimensional feature point in the acquisition result; the internal parameter matrix K composed of the internal parameters of the vehicle-mounted camera is a 3×3 matrix; the external parameter matrix is P_2 = [R_p, T_p], where R_p and T_p represent the rotation and translation between the i-th and (i−1)-th vehicle positioning results of the acquired images; R_p is a rotation matrix with size 3×3 and T_p is a translation vector with size 3×1, and this rotation matrix and translation vector represent the pose relation between the current vehicle positioning result and the previous one;
the symbol ≅ denotes a linear relationship, i.e., the left and right sides of the equation differ by a scale factor, which can be solved from the corresponding feature points;
S42, solving with the PnP algorithm using at least 4 groups of corresponding points to obtain R_p and T_p;
S43, finding the R_p and T_p matrices between all images, and drawing the track according to the series of R_p and T_p matrices.
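S41-S43 can be sketched with OpenCV's PnP solver as below. Feature matching is assumed to be done elsewhere; the trajectory accumulation inverts each world-to-camera transform in the standard visual-odometry fashion, which the patent does not spell out.

```python
import cv2
import numpy as np

def relative_pose(pts3d, pts2d, K):
    """pts3d: Nx3 three-dimensional feature points (previous image),
    pts2d: Nx2 matched local feature points (next image), N >= 4."""
    ok, rvec, tvec = cv2.solvePnP(
        pts3d.astype(np.float32), pts2d.astype(np.float32), K, None)
    R_p, _ = cv2.Rodrigues(rvec)    # 3x3 rotation matrix R_p
    return R_p, tvec                # 3x1 translation vector T_p

def draw_trajectory(pose_list):
    """pose_list: [(R_p, T_p), ...] between consecutive images."""
    R_acc, t_acc = np.eye(3), np.zeros((3, 1))
    track = [t_acc.ravel().copy()]
    for R_p, T_p in pose_list:
        # camera position: invert the world-to-camera transform [R_p | T_p]
        t_acc = t_acc - R_acc @ R_p.T @ T_p
        R_acc = R_acc @ R_p.T
        track.append(t_acc.ravel().copy())
    return np.array(track)          # Nx3 trajectory positions
```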
The invention has the beneficial effects that: the image information and the three-dimensional laser information are fused, so that the map has image characteristics and three-dimensional characteristics, and meanwhile, the map is combined with track information consisting of high-precision longitude and latitude information and current pose information of the vehicle, so that a multi-sensor fused three-dimensional high-precision visual map is constructed, and the visual map is a basis for realizing visual positioning and is an important component for realizing intelligent vehicle environment perception.
Drawings
Fig. 1 is a system layout diagram of an embodiment of the present invention.
FIG. 2 is a block diagram of a method according to an embodiment of the present invention.
Fig. 3 is a view of the camera and lidar external calibration.
Fig. 4 is a conversion diagram of three-plane calibration in two coordinate systems.
In the figure: 1. vehicle-mounted industrial personal computer display; 2. vehicle-mounted industrial personal computer host; 3. laser range finder; 4. camera; 5. vehicle; 6. fixing bracket; 7. GPS/INS receiving processor; 8. power supply module; 9. differential GPS reference station.
Detailed Description
The invention is further illustrated by the following specific examples and figures.
The invention provides a three-dimensional high-precision map generation system based on vehicle-mounted multi-sensor fusion, as shown in figure 1, comprising: a data acquisition module, which comprises a camera 4, a laser range finder 3, a GPS/INS receiving processor 7 and a differential GPS reference platform 9, wherein the camera 4 is fixed above the vehicle 5, the laser range finder 3 is fixed above the camera 4, the GPS/INS receiving processor 7 is fixed in the vehicle, and the differential GPS reference platform 9 is arranged near the acquisition vehicle 5 at a position where signals can be received; and a data processing module, in this embodiment a vehicle-mounted industrial personal computer comprising a vehicle-mounted industrial personal computer display 1 and a vehicle-mounted industrial personal computer host 2, which is installed in the vehicle, receives the data acquired by the data acquisition module, and processes it as follows:
combining the collected three-dimensional laser characteristic points with the image to obtain three-dimensional characteristics according to the mapping relation between the laser information and the image information; the mapping relation of the laser information and the image information is obtained by calibrating a camera in advance to obtain internal and external parameters of the camera, calibrating the camera and a laser range finder and fusing a world coordinate system corresponding to the camera and a world coordinate system corresponding to the laser range finder;
extracting global features of image information acquired by a camera to obtain image features;
calculating a rotation matrix and a translation matrix of a local characteristic point containing three-dimensional characteristics in a previous image and a local characteristic point in a next image by using a PnP model, and drawing a track through the rotation matrix and the translation matrix of adjacent continuous images;
the GPS/INS receiving processor combines with an RTK technology to obtain longitude and latitude coordinates of the vehicle in the current state, and combines with vehicle position and attitude information obtained by calculation to generate running track information of the vehicle;
and correcting the drawn track according to the running track information to obtain a three-dimensional map.
The map generation method implemented by the three-dimensional high-precision map generation system based on the vehicle-mounted multi-sensor fusion, as shown in fig. 2, comprises the following steps:
S1, calibrating the camera in advance to obtain internal and external parameters of the camera, and calibrating the camera and the laser range finder so that the world coordinate system corresponding to the camera and the world coordinate system corresponding to the laser range finder are fused, obtaining the mapping relation between the laser information and the image information.
S1 specifically includes:
S11, the conversion relation between image feature point pixel-plane coordinates and world coordinates:
The camera is calibrated to obtain its internal and external parameters. The pixel coordinates (u, v, 1) of a point on the image are related to the coordinates (X_w1, Y_w1, Z_w1, 1)^T in the camera's world coordinate system by:

s·[u, v, 1]^T = K·[R_c | T_c]·[X_w1, Y_w1, Z_w1, 1]^T

where s is a scale factor, the internal parameter matrix K composed of the camera's internal parameters is a 3×3 matrix, R_c in the external parameter matrix is a rotation matrix with dimensions 3×3, and T_c is a translation vector with dimensions 3×1; the internal and external parameters are obtained by Zhang Zhengyou checkerboard calibration;
S12, the position relation of the laser three-dimensional feature points relative to the laser range finder:
The coordinates (X_L, Y_L, Z_L, 1)^T of a feature point in the laser coordinate system yield the coordinates (X_W2, Y_W2, Z_W2, 1)^T of the laser point in the world coordinate system by:

[X_W2, Y_W2, Z_W2, 1]^T = [R_L, T_L; 0^T, 1]·[X_L, Y_L, Z_L, 1]^T

where R_L is a rotation matrix with dimensions 3×3 and T_L is a translation vector with dimensions 3×1, both obtained when the laser range finder is calibrated;
S13, the mapping relation of laser three-dimensional feature points to the image, i.e., the fusion of three-dimensional feature information and image information:
The camera and the laser range finder are calibrated to obtain R_u and t, and the correspondence between image points and three-dimensional laser points under a world coordinate system with the same origin is obtained according to:

s·[u, v, 1]^T = K·[R_u | t]·[X_L, Y_L, Z_L, 1]^T

where R_u is a rotation matrix with dimensions 3×3 and t is a translation vector with dimensions 3×1.
S13 specifically includes:
S131, an external calibration method is adopted; as shown in FIG. 3, a chessboard plane is the calibration target; the measured data are an image and 3D lidar data generated simultaneously by the camera and the laser range finder, and the calibration task is to determine the external parameters between the two sensors so that the 3D lidar data can be converted into image data;
S132, determining the chessboard plane in the camera coordinate system by using the camera calibration method;
S133, adjusting the chessboard plane according to the 3D lidar data, and calculating the laser range finder's coordinate-system data on the chessboard plane;
S134, calculating the rotation matrix R_u and the translation vector t, as shown in FIG. 4:
In the laser coordinate system, the scanning plane of the laser range finder is denoted Σ_i; the parameters of Σ_i include the plane normal and the distance. In the camera coordinate system, the same plane scanned by the laser range finder is simultaneously detected by the camera and is denoted Π_i; the parameters of Π_i include the plane normal and distance information, with image-plane normal n_i'.
The transformation formula from the image coordinate system to the laser coordinate system is:

T_c = [R_u, t; 0^T, 1]

where T_c is a 4×4 matrix, the rotation matrix R_u is a 3×3 orthogonal matrix, and the translation vector t is a 3×1 vector;
The planes Σ_i and Π_i are expressed as follows:

Σ_i: n_i^T·x = d_i,  Π_i: m_i^T·x = b_i

where n_i and m_i are the normal vectors of the planes Σ_i and Π_i, and d_i and b_i are the distances of the device from the planes;
The plane Π_i is converted to the plane Π_i' by the rotation R_u:

Π_i': (R_u·m_i)^T·x = b_i
The translation vector t translates the plane Π_i' along a straight line onto the plane Σ_i, expressed as:

x ↦ I_{3×3}·x + t, which gives (R_u·m_i)^T·t = b_i − d_i

where I_{3×3} is the identity matrix;
The rotation matrix R_u, established from the correspondence between the two three-dimensional spaces, satisfies:

Q_1 = R_u·Q_2

where Q_1 = [n_1, n_2, n_3] and Q_2 = [m_1, m_2, m_3]; the rotation matrix is therefore R_u = Q_1·Q_2^{-1};
Substituting the rotation result n_i = R_u·m_i into the translation relation (R_u·m_i)^T·t = b_i − d_i for the three calibration planes yields the linear system A·t = B, so the calculated translation vector t is expressed as:

t = A^{-1}·B

where A = [R_u·m_1, R_u·m_2, R_u·m_3]^T and B = [b_1 − d_1, b_2 − d_2, b_3 − d_3]^T.
S2, combining the collected three-dimensional laser feature points with the images according to the mapping relation between the laser information and the image information to obtain three-dimensional features.
S2 specifically includes:
S21, shooting the scene around the vehicle with the camera and scanning the scene around the vehicle with the laser range finder, with the shooting frequency of the camera kept synchronized with the scanning frequency of the laser;
S22, extracting local feature points from the pictures shot by the camera, and describing the extracted local feature points;
S23, obtaining the three-dimensional feature points mapped to the local feature points in each picture by using the mapping relation of laser three-dimensional feature points to the image obtained in S1, i.e., combining the three-dimensional laser feature points with the image local feature points to obtain three-dimensional features:

s·[u, v, 1]^T = K·[R | T]·[X_L, Y_L, Z_L, 1]^T

The above formula is the mapping relation of laser three-dimensional feature points to the image, where (u, v, 1) are the pixel coordinates of a point on the image, (X_L, Y_L, Z_L, 1)^T are the coordinates of the feature point in the laser coordinate system, s is a scale factor, K is the camera's internal parameter matrix, R is the rotation matrix with dimensions 3×3, and T is the translation vector with dimensions 3×1.
S3, extracting the global features of the image information collected by the camera to obtain the image features.
S3 specifically includes:
S31, scaling the image acquired in S2 so that the acquired image has a size of N×M pixels;
S32, performing graying and histogram equalization on each frame of image, and then computing the global feature f_q of the image.
S4, calculating a rotation matrix and a translation matrix of the local characteristic point containing the three-dimensional characteristic in the previous image and the local characteristic point in the next image by using the PnP model, and drawing a track through the rotation matrix and the translation matrix of the adjacent continuous images.
S4 specifically includes:
S41, establishing the pose relationship between the local feature points mapped onto the image from the laser points and the laser three-dimensional feature points by using the PnP model, so as to acquire the pose information of the vehicle in its current state;
The pose relation between the local features and the three-dimensional feature points established with the PnP model is:

[u_i, v_i, 1]^T ≅ K·[R_p | T_p]·[X_i, Y_i, Z_i, 1]^T

where [u_i, v_i, 1]^T are the coordinates of the i-th local feature point in the acquired image, and [X_i, Y_i, Z_i, 1]^T is the corresponding i-th three-dimensional feature point in the acquisition result; the internal parameter matrix K composed of the internal parameters of the vehicle-mounted camera is a 3×3 matrix; the external parameter matrix is P_2 = [R_p, T_p], where R_p and T_p represent the rotation and translation between the i-th and (i−1)-th vehicle positioning results of the acquired images; R_p is a rotation matrix with size 3×3 and T_p is a translation vector with size 3×1, and this rotation matrix and translation vector represent the pose relation between the current vehicle positioning result and the previous one;
the symbol ≅ denotes a linear relationship, i.e., the left and right sides of the equation differ by a scale factor, which can be solved from the corresponding feature points;
S42, solving with the PnP algorithm using at least 4 groups of corresponding points to obtain R_p and T_p;
S43, finding the R_p and T_p matrices between all images, and drawing the track according to the series of R_p and T_p matrices.
S5, the GPS/INS receiving processor, in combination with RTK technology, acquires the longitude and latitude coordinates of the vehicle in the current state and generates the driving track information of the vehicle in combination with the calculated vehicle position and attitude information. Specifically: the GPS receiver receives the rough longitude and latitude information obtained from the global positioning system, and this is processed with the longitude and latitude correction values given by the inertial navigation system (INS) combined with real-time kinematic (RTK) carrier-phase differential technology, thereby obtaining high-precision longitude and latitude information; the driving track information of the vehicle is then generated from this longitude and latitude information.
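As a sketch of how the RTK-corrected fixes might be assembled into a track for the correction in S6, the snippet below converts latitude/longitude to a local east-north plane with an equirectangular approximation, which is adequate over short mapping runs; the sample coordinates are illustrative.

```python
import numpy as np

EARTH_R = 6378137.0   # WGS-84 equatorial radius in metres

def latlon_to_local(lat_deg, lon_deg):
    """Arrays of RTK fixes -> Nx2 east/north offsets in metres."""
    lat = np.radians(np.asarray(lat_deg))
    lon = np.radians(np.asarray(lon_deg))
    lat0, lon0 = lat[0], lon[0]          # first fix is the local origin
    east = (lon - lon0) * np.cos(lat0) * EARTH_R
    north = (lat - lat0) * EARTH_R
    return np.stack([east, north], axis=1)

track_gps = latlon_to_local([30.5230, 30.5231, 30.5233],
                            [114.3570, 114.3572, 114.3575])
```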
S6, correcting the drawn track according to the driving track information to obtain a three-dimensional map.
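The patent does not specify how the drawn track is corrected against the driving track. One conventional choice, shown below as an assumed sketch, is a least-squares similarity alignment (Umeyama's method) of the time-aligned visual track onto the GPS/INS track.

```python
import numpy as np

def align_similarity(src, dst):
    """Least-squares similarity transform (Umeyama) mapping src onto dst.
    src: Nx3 visual track, dst: Nx3 GPS/INS track, time-aligned."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d
    U, sig, Vt = np.linalg.svd(D.T @ S / len(src))   # cross-covariance SVD
    C = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        C[2, 2] = -1.0                               # avoid reflections
    R = U @ C @ Vt
    s = np.trace(np.diag(sig) @ C) / S.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# corrected_track = s * (R @ visual_track.T).T + t
```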
The invention can fuse image information and three-dimensional laser information, so that the map has image characteristics and three-dimensional characteristics, and is combined with track information consisting of high-precision longitude and latitude information and current pose information of a vehicle to construct a multi-sensor fused three-dimensional high-precision visual map, wherein the visual map is a basis for realizing visual positioning and is an important component for realizing intelligent vehicle environment perception.
The above embodiments are only used for illustrating the design idea and features of the present invention, and the purpose of the present invention is to enable those skilled in the art to understand the content of the present invention and implement the present invention accordingly, and the protection scope of the present invention is not limited to the above embodiments. Therefore, all equivalent changes and modifications made in accordance with the principles and concepts disclosed herein are intended to be included within the scope of the present invention.

Claims (1)

1. A map generation method realized by using the three-dimensional high-precision map generation system based on vehicle-mounted multi-sensor fusion, characterized in that: the three-dimensional high-precision map generation system based on vehicle-mounted multi-sensor fusion comprises a data acquisition module and a data processing module, wherein the data acquisition module comprises a camera, a laser range finder, a GPS/INS receiving processor and a differential GPS reference platform;
the data processing module is arranged in the vehicle, receives the data acquired by the data acquisition module and processes the data according to the following mode:
combining the collected three-dimensional laser characteristic points with the image to obtain three-dimensional characteristics according to the mapping relation between the laser information and the image information; the mapping relation of the laser information and the image information is obtained by calibrating a camera in advance to obtain internal and external parameters of the camera, calibrating the camera and a laser range finder and fusing a world coordinate system corresponding to the camera and a world coordinate system corresponding to the laser range finder;
extracting global features of image information acquired by a camera to obtain image features;
calculating a rotation matrix and a translation matrix of a local characteristic point containing three-dimensional characteristics in a previous image and a local characteristic point in a next image by using a PnP model, and drawing a track through the rotation matrix and the translation matrix of adjacent continuous images;
the GPS/INS receiving processor combines with an RTK technology to obtain longitude and latitude coordinates of the vehicle in the current state, and combines with vehicle position and attitude information obtained by calculation to generate running track information of the vehicle;
correcting the drawn track according to the running track information to obtain a three-dimensional map;
the data processing module is a vehicle-mounted industrial personal computer;
the method comprises the following steps:
S1, calibrating the camera in advance to obtain internal and external parameters of the camera, and calibrating the camera and the laser range finder so that the world coordinate system corresponding to the camera and the world coordinate system corresponding to the laser range finder are fused, obtaining the mapping relation between laser information and image information;
S2, combining the collected three-dimensional laser feature points with the images according to the mapping relation between the laser information and the image information to obtain three-dimensional features;
S3, extracting the global features of the image information collected by the camera to obtain image features;
S4, calculating the rotation matrix and translation matrix between local feature points containing three-dimensional features in the previous image and local feature points in the next image by using the PnP model, and drawing a track through the rotation and translation matrices of adjacent continuous images;
S5, acquiring the longitude and latitude coordinates of the vehicle in the current state by the GPS/INS receiving processor in combination with RTK technology, and generating the driving track information of the vehicle in combination with the calculated vehicle position and attitude information;
S6, correcting the drawn track according to the driving track information to obtain a three-dimensional map;
the S1 specifically includes:
S11, the conversion relation between image feature point pixel-plane coordinates and world coordinates:
The camera is calibrated to obtain its internal and external parameters. The pixel coordinates (u, v, 1) of a point on the image are related to the coordinates (X_w1, Y_w1, Z_w1, 1)^T in the camera's world coordinate system by:

s·[u, v, 1]^T = K·[R_c | T_c]·[X_w1, Y_w1, Z_w1, 1]^T

where s is a scale factor, the internal parameter matrix K composed of the camera's internal parameters is a 3×3 matrix, R_c in the external parameter matrix is a rotation matrix with dimensions 3×3, and T_c is a translation vector with dimensions 3×1; the internal and external parameters are obtained by Zhang Zhengyou checkerboard calibration;
S12, the position relation of the laser three-dimensional feature points relative to the laser range finder:
The coordinates (X_L, Y_L, Z_L, 1)^T of a feature point in the laser coordinate system yield the coordinates (X_W2, Y_W2, Z_W2, 1)^T of the laser point in the world coordinate system by:

[X_W2, Y_W2, Z_W2, 1]^T = [R_L, T_L; 0^T, 1]·[X_L, Y_L, Z_L, 1]^T

where R_L is a rotation matrix with dimensions 3×3 and T_L is a translation vector with dimensions 3×1, both obtained when the laser range finder is calibrated;
S13, the mapping relation of laser three-dimensional feature points to the image, i.e., the fusion of three-dimensional feature information and image information:
The camera and the laser range finder are calibrated to obtain R_u and t, and the correspondence between image points and three-dimensional laser points under a world coordinate system with the same origin is obtained according to:

s·[u, v, 1]^T = K·[R_u | t]·[X_L, Y_L, Z_L, 1]^T

where R_u is a rotation matrix with dimensions 3×3 and t is a translation vector with dimensions 3×1;
the S13 specifically includes:
S131, an external calibration method is adopted; the measured data are an image and 3D lidar data generated simultaneously by the camera and the laser range finder, and the calibration task is to determine the external parameters between the two sensors so that the 3D lidar data can be converted into image data;
S132, determining the chessboard plane in the camera coordinate system by using the camera calibration method;
S133, adjusting the chessboard plane according to the 3D lidar data, and calculating the laser range finder's coordinate-system data on the chessboard plane;
S134, calculating the rotation matrix R_u and the translation vector t:
In the laser coordinate system, the scanning plane of the laser range finder is denoted Σ_i; the parameters of Σ_i include the plane normal and the distance. In the camera coordinate system, the same plane scanned by the laser range finder is simultaneously detected by the camera and is denoted Π_i; the parameters of Π_i include the plane normal and distance information, with image-plane normal n_i'.
The transformation formula from the image coordinate system to the laser coordinate system is:

T_c = [R_u, t; 0^T, 1]

where T_c is a 4×4 matrix, the rotation matrix R_u is a 3×3 orthogonal matrix, and the translation vector t is a 3×1 vector;
The planes Σ_i and Π_i are expressed as follows:

Σ_i: n_i^T·x = d_i,  Π_i: m_i^T·x = b_i

where n_i and m_i are the normal vectors of the planes Σ_i and Π_i, and d_i and b_i are the distances of the device from the planes;
The plane Π_i is converted to the plane Π_i' by the rotation R_u:

Π_i': (R_u·m_i)^T·x = b_i
The translation vector t translates the plane Π_i' along a straight line onto the plane Σ_i, expressed as:

x ↦ I_{3×3}·x + t, which gives (R_u·m_i)^T·t = b_i − d_i

where I_{3×3} is the identity matrix;
The rotation matrix R_u, established from the correspondence between the two three-dimensional spaces, satisfies:

Q_1 = R_u·Q_2

where Q_1 = [n_1, n_2, n_3] and Q_2 = [m_1, m_2, m_3]; the rotation matrix is therefore R_u = Q_1·Q_2^{-1};
Substituting the rotation result n_i = R_u·m_i into the translation relation (R_u·m_i)^T·t = b_i − d_i for the three calibration planes yields the linear system A·t = B, so the calculated translation vector t is expressed as:

t = A^{-1}·B

where A = [R_u·m_1, R_u·m_2, R_u·m_3]^T and B = [b_1 − d_1, b_2 − d_2, b_3 − d_3]^T;
The S3 specifically includes:
S31, scaling the image acquired in S2 so that the acquired image has a size of N×M pixels;
S32, performing graying and histogram equalization on each frame of image, and then computing the global feature f_q of the image;
The S4 specifically includes:
S41, establishing the pose relationship between the local feature points mapped onto the image from the laser points and the laser three-dimensional feature points by using the PnP model, so as to acquire the pose information of the vehicle in its current state;
The pose relation between the local features and the three-dimensional feature points established with the PnP model is:

[u_i, v_i, 1]^T ≅ K·[R_p | T_p]·[X_i, Y_i, Z_i, 1]^T

where [u_i, v_i, 1]^T are the coordinates of the i-th local feature point in the acquired image, and [X_i, Y_i, Z_i, 1]^T is the corresponding i-th three-dimensional feature point in the acquisition result; the internal parameter matrix K composed of the internal parameters of the vehicle-mounted camera is a 3×3 matrix; the external parameter matrix is P_2 = [R_p, T_p], where R_p and T_p represent the rotation and translation between the i-th and (i−1)-th vehicle positioning results of the acquired images; R_p is a rotation matrix with size 3×3 and T_p is a translation vector with size 3×1, and this rotation matrix and translation vector represent the pose relation between the current vehicle positioning result and the previous one;
the symbol ≅ denotes a linear relationship, i.e., the left and right sides of the equation differ by a scale factor, which can be solved from the corresponding feature points;
S42, solving with the PnP algorithm using at least 4 groups of corresponding points to obtain R_p and T_p;
S43, finding the R_p and T_p matrices between all images, and drawing the track according to the series of R_p and T_p matrices;
the S2 specifically includes:
s21, shooting the scene around the vehicle by using a camera, and scanning the scene around the vehicle by using a laser range finder; the shooting frequency of the camera and the scanning frequency of the laser are kept synchronous;
s22, extracting local feature points from the picture shot by the camera, and describing the extracted local feature points;
S23, obtaining the three-dimensional feature points mapped to the local feature points in each picture by using the mapping relation of laser three-dimensional feature points to the image obtained in S1, i.e., combining the three-dimensional laser feature points with the image local feature points to obtain three-dimensional features:

s·[u, v, 1]^T = K·[R | T]·[X_L, Y_L, Z_L, 1]^T

The above formula is the mapping relation of laser three-dimensional feature points to the image, where (u, v, 1) are the pixel coordinates of a point on the image, (X_L, Y_L, Z_L, 1)^T are the coordinates of the feature point in the laser coordinate system, s is a scale factor, K is the camera's internal parameter matrix, R is the rotation matrix with dimensions 3×3, and T is the translation vector with dimensions 3×1.
CN201710629023.2A 2017-07-28 2017-07-28 Three-dimensional high-precision map generation system and method based on vehicle-mounted multi-sensor fusion Active CN107505644B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710629023.2A CN107505644B (en) 2017-07-28 2017-07-28 Three-dimensional high-precision map generation system and method based on vehicle-mounted multi-sensor fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710629023.2A CN107505644B (en) 2017-07-28 2017-07-28 Three-dimensional high-precision map generation system and method based on vehicle-mounted multi-sensor fusion

Publications (2)

Publication Number Publication Date
CN107505644A CN107505644A (en) 2017-12-22
CN107505644B (en) 2020-05-05

Family

ID=60690303

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710629023.2A Active CN107505644B (en) 2017-07-28 2017-07-28 Three-dimensional high-precision map generation system and method based on vehicle-mounted multi-sensor fusion

Country Status (1)

Country Link
CN (1) CN107505644B (en)

Families Citing this family (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108416257A (en) * 2018-01-19 2018-08-17 北京交通大学 Merge the underground railway track obstacle detection method of vision and laser radar data feature
CN108319976B (en) * 2018-01-25 2019-06-07 北京三快在线科技有限公司 Build drawing method and device
CN108680156B (en) * 2018-02-26 2022-01-07 青岛克路德机器人有限公司 Robot positioning method for multi-sensor data fusion
CN108428254A (en) * 2018-03-15 2018-08-21 斑马网络技术有限公司 The construction method and device of three-dimensional map
CN110554420B (en) * 2018-06-04 2022-06-28 百度在线网络技术(北京)有限公司 Equipment track obtaining method and device, computer equipment and storage medium
CN109029442A (en) * 2018-06-07 2018-12-18 武汉理工大学 Based on the matched positioning device of multi-angle of view and method
CN108594245A (en) * 2018-07-04 2018-09-28 北京国泰星云科技有限公司 A kind of object movement monitoring system and method
CN109146929B (en) * 2018-07-05 2021-12-31 中山大学 Object identification and registration method based on event-triggered camera and three-dimensional laser radar fusion system
CN109099923A (en) * 2018-08-20 2018-12-28 江苏大学 Road scene based on laser, video camera, GPS and inertial navigation fusion characterizes system and method
CN109345596A (en) 2018-09-19 2019-02-15 百度在线网络技术(北京)有限公司 Multisensor scaling method, device, computer equipment, medium and vehicle
KR102233260B1 (en) * 2018-10-02 2021-03-29 에스케이텔레콤 주식회사 Apparatus and method for updating high definition map
CN109583415B (en) * 2018-12-11 2022-09-30 兰州大学 Traffic light detection and identification method based on fusion of laser radar and camera
CN109445441A (en) * 2018-12-14 2019-03-08 上海安吉四维信息技术有限公司 3D Laser navigation system, automated guided vehicle and working method
CN109712197B (en) * 2018-12-20 2023-06-30 珠海瑞天安科技发展有限公司 Airport runway gridding calibration method and system
CN109816588B (en) * 2018-12-29 2023-03-21 百度在线网络技术(北京)有限公司 Method, device and equipment for recording driving trajectory
CN109724610A (en) * 2018-12-29 2019-05-07 河北德冠隆电子科技有限公司 A kind of method and device of full information real scene navigation
CN109784250B (en) * 2019-01-04 2020-12-08 广州广电研究院有限公司 Positioning method and device of automatic guide trolley
CN111771137A (en) * 2019-01-30 2020-10-13 深圳市大疆创新科技有限公司 Radar external parameter calibration method and device and storage medium
CN110018470A (en) * 2019-03-01 2019-07-16 北京纵目安驰智能科技有限公司 Based on example mask method, model, terminal and the storage medium merged before multisensor
US11402220B2 (en) * 2019-03-13 2022-08-02 Here Global B.V. Maplets for maintaining and updating a self-healing high definition map
CN110174115B (en) * 2019-06-05 2021-03-16 武汉中海庭数据技术有限公司 Method and device for automatically generating high-precision positioning map based on perception data
CN110864696A (en) * 2019-09-19 2020-03-06 福建农林大学 Three-dimensional high-precision map drawing method based on vehicle-mounted laser inertial navigation data
CN110673115B (en) * 2019-09-25 2021-11-23 杭州飞步科技有限公司 Combined calibration method, device, equipment and medium for radar and integrated navigation system
CN110736472A (en) * 2019-10-10 2020-01-31 武汉理工大学 indoor high-precision map representation method based on fusion of vehicle-mounted all-around images and millimeter wave radar
CN110796597B (en) * 2019-10-10 2024-02-02 武汉理工大学 Vehicle-mounted all-round image splicing device based on space-time compensation
CN110727009B (en) * 2019-10-10 2023-04-11 武汉理工大学 High-precision visual map construction and positioning method based on vehicle-mounted all-around image
CN111060118B (en) * 2019-12-27 2022-01-07 炬星科技(深圳)有限公司 Scene map establishing method, device and storage medium
CN111275671A (en) * 2020-01-15 2020-06-12 江苏众远智能装备有限公司 robot-3D technology-matched visual identification method
CN111257882B (en) * 2020-03-19 2021-11-19 北京三快在线科技有限公司 Data fusion method and device, unmanned equipment and readable storage medium
CN111539973B (en) * 2020-04-28 2021-10-01 北京百度网讯科技有限公司 Method and device for detecting pose of vehicle
CN111667545B (en) * 2020-05-07 2024-02-27 东软睿驰汽车技术(沈阳)有限公司 High-precision map generation method and device, electronic equipment and storage medium
CN112614171B (en) * 2020-11-26 2023-12-19 厦门大学 Air-ground integrated dynamic environment sensing system for engineering machinery cluster operation
CN112700546A (en) * 2021-01-14 2021-04-23 视辰信息科技(上海)有限公司 System and method for constructing outdoor large-scale three-dimensional map
CN113390411B (en) * 2021-06-10 2022-08-09 中国北方车辆研究所 Foot type robot navigation and positioning method based on variable configuration sensing device
CN115797900B (en) * 2021-09-09 2023-06-27 廊坊和易生活网络科技股份有限公司 Vehicle-road gesture sensing method based on monocular vision
CN114170320B (en) * 2021-10-29 2022-10-28 广西大学 Automatic positioning and working condition self-adaption method of pile driver based on multi-sensor fusion
CN114442805A (en) * 2022-01-06 2022-05-06 上海安维尔信息科技股份有限公司 Monitoring scene display method and system, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105758426A (en) * 2016-02-19 2016-07-13 深圳杉川科技有限公司 Combined calibration method for multiple sensors of mobile robot
CN106225789A (en) * 2016-07-12 2016-12-14 武汉理工大学 A kind of onboard navigation system with high security and bootstrap technique thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9753060B2 (en) * 2015-08-28 2017-09-05 Stmicroelectronics (Research & Development) Limited Apparatus with device with fault detection protection

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105758426A (en) * 2016-02-19 2016-07-13 深圳杉川科技有限公司 Combined calibration method for multiple sensors of mobile robot
CN106225789A (en) * 2016-07-12 2016-12-14 武汉理工大学 A kind of onboard navigation system with high security and bootstrap technique thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Registration of Image and 3D LIDAR Data from Extrinsic Calibration; Zhaozheng Hu et al.; The 3rd International Conference on Transportation Information and Safety; 2015-12-31; page 102, right column, paragraph 2 to page 104, right column, paragraph 6, and figures 1-2 *
High-precision localization algorithm for intelligent vehicles based on GPS and image fusion; Li Yicheng et al.; Journal of Transportation Systems Engineering and Information Technology; 2017-06-30; vol. 17, no. 3; page 113, right column, paragraph 1 to page 115, right column, paragraph 3 *

Also Published As

Publication number Publication date
CN107505644A (en) 2017-12-22

Similar Documents

Publication Publication Date Title
CN107505644B (en) Three-dimensional high-precision map generation system and method based on vehicle-mounted multi-sensor fusion
CN107084727B (en) Visual positioning system and method based on high-precision three-dimensional map
CN108534782B (en) Binocular vision system-based landmark map vehicle instant positioning method
KR100728377B1 (en) Method for real-time updating gis of changed region vis laser scanning and mobile internet
CN108801274B (en) Landmark map generation method integrating binocular vision and differential satellite positioning
CN107451593B (en) High-precision GPS positioning method based on image feature points
CN102353377B (en) High altitude long endurance unmanned aerial vehicle integrated navigation system and navigating and positioning method thereof
CN112162297B (en) Method for eliminating dynamic obstacle artifacts in laser point cloud map
JP6950832B2 (en) Position coordinate estimation device, position coordinate estimation method and program
JP5286653B2 (en) Stationary object map generator
CN111830953A (en) Vehicle self-positioning method, device and system
US10872246B2 (en) Vehicle lane detection system
US20230351625A1 (en) A method for measuring the topography of an environment
CN114485654A (en) Multi-sensor fusion positioning method and device based on high-precision map
CN111932627A (en) Marker drawing method and system
CN112446915A (en) Picture-establishing method and device based on image group
CN110927765B (en) Laser radar and satellite navigation fused target online positioning method
CN116027351A (en) Hand-held/knapsack type SLAM device and positioning method
CN114419571B (en) Target detection and positioning method and system for unmanned vehicle
CN113030960B (en) Vehicle positioning method based on monocular vision SLAM
Jiang et al. Precise vehicle ego-localization using feature matching of pavement images
CN112985417B (en) Pose correction method for particle filter positioning of mobile robot and mobile robot
CN112577499B (en) VSLAM feature map scale recovery method and system
CN115049794A (en) Method and system for generating dense global point cloud picture through deep completion
CN112213753B (en) Method for planning parachuting training path by combining Beidou navigation and positioning function and augmented reality technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant