CN113421325B - Three-dimensional reconstruction method for vehicle based on multi-sensor fusion - Google Patents

Three-dimensional reconstruction method for vehicle based on multi-sensor fusion

Info

Publication number
CN113421325B
CN113421325B (application CN202110538817.4A)
Authority
CN
China
Prior art keywords: camera, matrix, dimensional reconstruction, point, coordinate system
Prior art date
Legal status: Active
Application number
CN202110538817.4A
Other languages
Chinese (zh)
Other versions
CN113421325A (en)
Inventor
夏长晨
李祎承
Current Assignee
Jiangsu University
Original Assignee
Jiangsu University
Priority date
Filing date
Publication date
Application filed by Jiangsu University filed Critical Jiangsu University
Priority to CN202110538817.4A priority Critical patent/CN113421325B/en
Publication of CN113421325A publication Critical patent/CN113421325A/en
Application granted granted Critical
Publication of CN113421325B publication Critical patent/CN113421325B/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/10024 — Color image
    • G06T2207/10032 — Satellite or aerial image; Remote sensing
    • G06T2207/10044 — Radar image
    • G06T2207/20221 — Image fusion; Image merging
    • G06T2207/30248 — Vehicle exterior or interior

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention relates to the technical field of automatic driving vehicles, and in particular to a three-dimensional reconstruction method for an intelligent vehicle based on multi-sensor fusion. The method comprises the following steps: acquiring camera image data and laser point cloud data through a camera and a laser radar installed on a vehicle; calibrating the camera, calibrating the radar, and jointly calibrating the camera and the radar; performing three-dimensional reconstruction from the laser point cloud and from the image data respectively; fusing the data of the laser radar and the camera; and finally obtaining the final three-dimensional reconstruction through noise reduction and filtering of redundant information after data fusion. Compared with traditional three-dimensional reconstruction, the invention achieves better three-dimensional reconstruction of the environment surrounding the vehicle in special environments and a reconstruction effect with higher applicability and rationality, thereby promoting the deployment of automatic driving vehicles and improving their safety and reliability.

Description

Three-dimensional reconstruction method for vehicle based on multi-sensor fusion
Technical Field
The invention relates to the technical field of automatic driving vehicles, and in particular to a three-dimensional reconstruction method for an intelligent vehicle based on multi-sensor fusion.
Background
Three-dimensional reconstruction for intelligent vehicles acquires three-dimensional information about the large environment surrounding the vehicle through sensor technology and occupies a very important position in fields such as intelligent driving and intelligent transportation. Compared with traditional three-dimensional measurement methods based on GPS, laser range finders, IMUs and the like, which suffer from instability and inaccuracy in special environments (such as harsh environments like tunnels, mountain areas and mine shafts), three-dimensional reconstruction is more practical and reliable and can obtain more effective and accurate three-dimensional spatial information in such environments.
Traditional three-dimensional reconstruction is based on a camera sensor and is realized through steps such as calibration, feature point extraction and matching. Because traditional three-dimensional reconstruction cannot feed back spatial distance information, a laser radar sensor is integrated on this basis, which enriches the reconstruction and makes it more widely applicable and more complete. However, most current three-dimensional reconstruction methods based on multi-sensor fusion contain a large amount of redundant data, so the reconstruction effect is not efficient and reasonable enough.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a three-dimensional reconstruction method for an intelligent vehicle based on multi-sensor fusion, which achieves a reconstruction effect with higher applicability and rationality for the environment surrounding the vehicle in special environments, thereby promoting the deployment of automatic driving vehicles and improving their safety and reliability.
In order to achieve the above purpose, the specific technical scheme of the invention is as follows: a three-dimensional reconstruction method for an intelligent vehicle based on multi-sensor fusion, comprising the following steps:
1) Calibrating camera parameters, wherein the camera parameters comprise the camera intrinsic matrix M, the camera extrinsic rotation matrix R_X and the extrinsic translation matrix T_X;
2) Calibrating laser radar parameters, wherein the laser radar parameters comprise the laser radar extrinsic rotation matrix R_L and the extrinsic translation matrix T_L;
3) Jointly calibrating the camera and the radar, and calculating the transformation matrix T_DC from the laser radar coordinate system to the camera coordinate system;
4) Obtaining three-dimensional point coordinates in the three-dimensional reconstruction coordinate system V1 through the camera intrinsic matrix M, the camera extrinsic rotation matrix R_X and the extrinsic translation matrix T_X, and obtaining three-dimensional point coordinates in V2 through the laser radar extrinsic rotation matrix R_L and extrinsic translation matrix T_L, wherein V1 is the three-dimensional reconstruction coordinate system of the image data and V2 is the three-dimensional reconstruction coordinate system of the laser point cloud;
5) Using the transformation matrix T_DC, fusing the data of the laser radar and the camera to realize the three-dimensional reconstruction.
Further, in step 1), calibrating the camera parameters comprises the following steps:
1.1) Collecting checkerboard images;
1.2) Calibrating the camera parameters from the acquired checkerboard images using the Zhang Zhengyou calibration method.
Further, in step 2), calibrating the laser radar parameters comprises the following steps:
2.1) Constructing a vehicle coordinate system xyz and a radar coordinate system XYZ;
2.2) Extracting the ground plane equation in the radar coordinate system using the RANSAC method: Ax + By − z + C = 0;
2.3) Collecting laser point cloud data with the laser radar;
2.4) Calculating a rotation matrix R_m and a translation matrix T_m from the laser point cloud data collected by the laser radar;
2.5) Calculating a rotation matrix R_h and a translation matrix T_h from the laser point cloud data collected by the laser radar;
2.6) Calculating the laser radar extrinsic rotation matrix R_L and extrinsic translation matrix T_L as follows: R_L = R_m R_h, T_L = T_m + T_h.
Further, in step 3), the joint calibration of the camera and the radar comprises the following steps:
3.1) Pasting the checkerboard pattern used for calibrating the camera intrinsic and extrinsic parameters on a wall surface that the laser radar can locate, and collecting image data and laser point cloud data;
3.2) Constructing a camera coordinate system x1y1z1, in which the z1 axis is the vertical direction in space, the positive x1 axis points to the front of the vehicle, the positive y1 axis points to the right side of the vehicle, and the coordinate origin is the center of the bottom surface of the camera;
3.3) Using the acquired data, calculating the normal-vector set matrices M_C and M_D and the distance-value set matrices b_C and b_D;
3.4) Calculating the transformation matrix T_DC from the laser radar coordinate system to the camera coordinate system according to:
T_DC = [R_LH  T_LH]
wherein R_LH = U V^T, and U V^T is the result of the SVD decomposition of R'_LH.
further, the step 4) includes the following steps:
4.1 Using a camera to acquire data of a large environment to obtain an RGB image;
4.2 Extracting and matching characteristic points of the acquired RGB image through an ORB algorithm to obtain a two-dimensional point pair set, and marking the two-dimensional point pair set as
4.3 Calculating the coordinates X of each three-dimensional point in V1 3W The calculation formula is as follows:
wherein: i means the i-th picture; x is X 2W Homogeneous coordinates of two-dimensional points +.>(data paired);
4.4 Data acquisition is carried out on the large environment through a laser radar to obtain a plurality of point cloud coordinates p1 i I represents the i-th point cloud);
4.5 Calculating each three-dimensional coordinate P2 in V2.
Further, step 5) comprises the following steps:
5.1) Filtering out invalid feature points using the Filter method;
5.2) Projecting the valid feature points TT_i in the reconstruction coordinate system V1 into V2;
5.3) Using the ICP algorithm to find the nearest matching point cloud of each valid feature point TTDD_i;
5.4) Calculating the transformation matrix T_DC2, i.e. the matrix transformation relation between TT_i and DD_i, wherein TT_i denotes the coordinates of the i-th valid feature point in V1, DD_i denotes the point cloud coordinates matched to the i-th valid feature point in V2, and T'_DC2 denotes the transformation matrix from each valid feature point in V1 to its matching point cloud in V2;
5.5) Carrying out depth fusion of the image layer and the point cloud layer to obtain the final three-dimensional reconstruction: the set of pixel points contained in the valid feature point set in V1 is recorded as XS2, the set obtained by projecting these pixel points into V2 is recorded as XS1, and the pixel points are projected into V2, where i denotes the i-th pixel point and T_DC2 denotes the transformation matrix from the valid feature points in V1 to the matching point clouds in V2.
The beneficial effects of the invention are as follows: the accuracy of perceiving surrounding environment information is improved, the stability of reconstructing the surrounding large environment and the completeness of reconstructing sensitive areas are improved, the influence of redundant environmental noise on the reconstruction effect is avoided, and the importance of sensitive environment areas in the three-dimensional reconstruction is highlighted.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a schematic diagram of the three-dimensional reconstruction effect after data fusion and filtering of redundant information.
Detailed Description
The invention will be further described with reference to the drawings and the specific embodiments. It should be noted that the technical solution and the design principle of the invention are described in detail below with only one preferred technical solution, but the scope of the invention is not limited thereto.
The examples are preferred embodiments of the present invention, but the present invention is not limited to the above-described embodiments, and any obvious modification, substitution or variation that can be made by one skilled in the art without departing from the spirit of the present invention falls within the scope of the present invention.
The invention provides a three-dimensional reconstruction method for an intelligent vehicle based on multi-sensor fusion, a flow chart of which is shown in FIG. 1. The specific steps are as follows:
1) Calibrating camera parameters, wherein the camera parameters comprise the camera intrinsic matrix M, the camera extrinsic rotation matrix R_X and the extrinsic translation matrix T_X. As a preferred embodiment of the present invention, this comprises the following steps:
1.1) Collecting checkerboard images. In the specific embodiment of the invention, the camera is mounted at the front end of the vehicle, the acquisition frequency of the camera is 15 Hz, the image size is 1280 × 960 pixels, the checkerboard is fixed on a rigid and flat calibration plate, 15–20 checkerboard photos are acquired at different angles by moving the calibration plate, and the data are stored in an on-board computer;
1.2) Calibrating the camera parameters from the acquired checkerboard images using the Zhang Zhengyou calibration method. In the specific embodiment of the invention, the camera intrinsic matrix M is a 3×3 matrix, the camera extrinsic rotation matrix R_X is a 3×3 matrix, and the extrinsic translation matrix T_X is a 3×1 matrix;
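A minimal sketch of step 1.2 using OpenCV's implementation of the Zhang Zhengyou method is given below. The board geometry (9×6 inner corners, 25 mm squares) and the image folder are illustrative assumptions, not values fixed by the patent.

```python
import glob
import cv2
import numpy as np

# Assumed checkerboard geometry: 9x6 inner corners, 25 mm squares (not specified in the patent).
pattern = (9, 6)
square = 0.025
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("checkerboard/*.png"):   # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# M is the 3x3 intrinsic matrix; rvecs/tvecs give per-view extrinsics R_X, T_X.
rms, M, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
R_X, _ = cv2.Rodrigues(rvecs[0])   # 3x3 rotation for the first view
T_X = tvecs[0]                     # 3x1 translation for the first view
```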
2) Calibrating laser radar parameters, wherein the laser radar parameters comprise the laser radar extrinsic rotation matrix R_L and extrinsic translation matrix T_L. As a preferred embodiment of the present invention, this comprises the following steps:
2.1) Constructing a vehicle coordinate system xyz and a radar coordinate system XYZ. In the vehicle coordinate system xyz, the z axis is the vertical direction in space, the positive x axis points to the front of the vehicle, the positive y axis points to the right side of the vehicle, and the coordinate origin is the center of the bottom surface of the vehicle. In the radar coordinate system XYZ, the Z axis is the vertical axis of the laser radar, the positive X axis points to the front of the vehicle, the positive Y axis points to the right side of the vehicle, and the coordinate origin is the center of the bottom surface of the laser radar;
2.2) Extracting the ground plane equation in the radar coordinate system using the RANSAC method: Ax + By − z + C = 0;
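A minimal numpy sketch of the RANSAC plane fit in step 2.2 is shown below, assuming the raw lidar scan is available as an N×3 array; the iteration count and inlier threshold are illustrative assumptions.

```python
import numpy as np

def ransac_ground_plane(points, n_iter=200, thresh=0.05, seed=0):
    """Fit A*x + B*y - z + C = 0 to lidar points (N x 3) with RANSAC."""
    rng = np.random.default_rng(seed)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    design = np.column_stack([x, y, np.ones_like(x)])   # [x, y, 1] @ [A, B, C] ~ z
    best_inliers, best_coef = 0, None
    for _ in range(n_iter):
        idx = rng.choice(len(points), 3, replace=False)
        coef, *_ = np.linalg.lstsq(design[idx], z[idx], rcond=None)
        residual = np.abs(design @ coef - z)             # vertical distance to the plane
        inliers = residual < thresh
        if inliers.sum() > best_inliers:
            best_inliers, best_coef = inliers.sum(), coef
    # Refit on all inliers of the best model.
    inliers = np.abs(design @ best_coef - z) < thresh
    A, B, C = np.linalg.lstsq(design[inliers], z[inliers], rcond=None)[0]
    return A, B, C
```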
2.3) Collecting laser point cloud data with the laser radar. In the specific embodiment of the invention, the laser radar is mounted at the center of the top of the vehicle, the scanning frequency of the laser radar is 15 Hz, and the data are stored in the on-board computer;
2.4) Calculating a rotation matrix R_m and a translation matrix T_m from the laser point cloud data collected by the laser radar, wherein the rotation matrix R_m is computed from the ground plane fitted in step 2.2, and the translation matrix is T_m = [0, 0, −C]^T;
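The patent's exact expression for R_m is not legible in this text. A common construction consistent with the surrounding definitions is to rotate the fitted ground-plane normal (A, B, −1) onto the vertical axis with the Rodrigues formula; the sketch below follows that assumption and should not be read as the patent's literal formula.

```python
import numpy as np

def R_m_from_plane(A, B):
    """Assumed form: rotation taking the plane normal (A, B, -1) to (0, 0, -1)."""
    n = np.array([A, B, -1.0])
    n /= np.linalg.norm(n)
    target = np.array([0.0, 0.0, -1.0])
    v = np.cross(n, target)                 # rotation axis (unnormalized)
    s, c = np.linalg.norm(v), float(n @ target)
    if s < 1e-12:                           # already aligned
        return np.eye(3)
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx * ((1 - c) / s**2)   # Rodrigues formula

def T_m_from_plane(C):
    return np.array([0.0, 0.0, -C]).reshape(3, 1)        # T_m = [0, 0, -C]^T
```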
2.5) Calculating a rotation matrix R_h and a translation matrix T_h from the laser point cloud data collected by the laser radar, wherein: P_{t+1,V} denotes the coordinates of the radar coordinate origin in the vehicle coordinate system at time t+1; the rotation matrix R_{c,t} and the translation matrix T_{c,t} form the pose change matrix of the vehicle from time t to time t+1 acquired by the gyroscope, with R_t = R_{c,t}; and P_{a,t}, P_{a,t+1} denote the coordinates, in the radar coordinate system, of marker points matched between the two adjacent frames at times t and t+1;
2.6) Calculating the laser radar extrinsic rotation matrix R_L and extrinsic translation matrix T_L as follows:
R_L = R_m R_h
T_L = T_m + T_h
3) Jointly calibrating the camera and the radar, and calculating the transformation matrix T_DC from the laser radar coordinate system to the camera coordinate system. As a preferred embodiment of the present invention, this comprises the following steps:
3.1) The checkerboard pattern used for calibrating the camera intrinsic and extrinsic parameters is pasted on a wall surface that the laser radar can locate, and image data and laser point cloud data are collected.
3.2) Constructing a camera coordinate system x1y1z1, in which the z1 axis is the vertical direction in space, the positive x1 axis points to the front of the vehicle, the positive y1 axis points to the right side of the vehicle, and the coordinate origin is the center of the bottom surface of the camera.
3.3) Using the acquired data, calculating the normal-vector set matrices M_C and M_D and the distance-value set matrices b_C and b_D. The calculation method is as follows:
Let the plane equation of the checkerboard and the wall surface be n^T x − δ = 0, where δ is the distance from the origin of the coordinate system to the plane and n is the normal vector of the plane. Let the rotation from the plane to the camera coordinate system be R_1 = (r_11, r_12, r_13) with translation t_1; then in the camera coordinate system n_C = r_13 and δ_C = r_13^T t_1. Let the rotation from the plane to the laser radar coordinate system be R_2 = (r_21, r_22, r_23) with translation t_2; then in the laser radar coordinate system n_D = r_23 and δ_D = r_23^T t_2.
n_Ci denotes the normal vector in the camera coordinate system for the i-th group of pictures, and δ_Ci the corresponding distance value.
n_Di denotes the normal vector in the laser radar coordinate system for the i-th group of pictures, and δ_Di the corresponding distance value.
M_C = [n_C1, n_C2, ..., n_Cn]^T,  b_C = [δ_C1, δ_C2, ..., δ_Cn]^T
M_D = [n_D1, n_D2, ..., n_Dn]^T,  b_D = [δ_D1, δ_D2, ..., δ_Dn]^T
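A short sketch of assembling these stacked matrices is given below, assuming the per-view plane poses (R_1, t_1) in the camera frame and (R_2, t_2) in the lidar frame have already been estimated (for example from the checkerboard extrinsics and a plane fit to the lidar points); the function names are illustrative.

```python
import numpy as np

def plane_in_frame(R, t):
    """Normal n and distance delta of the calibration plane in a given frame.
    R, t: rotation (3x3) and translation (3,) from the plane to that frame."""
    n = R[:, 2]                  # third column r_*3 is the plane normal
    delta = float(n @ np.ravel(t))
    return n, delta

def stack_constraints(cam_poses, lidar_poses):
    """Build M_C, b_C, M_D, b_D from lists of (R, t) pairs, one per view."""
    M_C, b_C, M_D, b_D = [], [], [], []
    for (R1, t1), (R2, t2) in zip(cam_poses, lidar_poses):
        n_C, d_C = plane_in_frame(R1, t1)
        n_D, d_D = plane_in_frame(R2, t2)
        M_C.append(n_C); b_C.append(d_C)
        M_D.append(n_D); b_D.append(d_D)
    return np.array(M_C), np.array(b_C), np.array(M_D), np.array(b_D)
```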
3.4) Calculating the transformation matrix T_DC from the laser radar coordinate system to the camera coordinate system according to:
T_DC = [R_LH  T_LH]
wherein R_LH = U V^T, and U V^T is the result of the SVD decomposition of R'_LH.
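The expressions for R'_LH and T_LH are not legible in this text. A standard plane-correspondence formulation that matches the quantities defined in step 3.3 is sketched below: the rotation comes from the SVD of the normal-correlation matrix M_D^T M_C, and the translation from the stacked distance constraints M_C · T_LH = b_C − b_D. This is an assumption about the intended computation, not the patent's literal formula.

```python
import numpy as np

def lidar_to_camera(M_C, b_C, M_D, b_D):
    """Estimate [R_LH T_LH] mapping lidar coordinates to camera coordinates
    from per-view plane normals and distances (assumed formulation)."""
    # Rotation: minimize sum ||n_Ci - R n_Di||^2 via SVD of the correlation matrix.
    H = M_D.T @ M_C                      # 3x3 correlation of stacked normals
    U, _, Vt = np.linalg.svd(H)
    R_LH = Vt.T @ U.T
    if np.linalg.det(R_LH) < 0:          # enforce a proper rotation
        Vt[-1, :] *= -1
        R_LH = Vt.T @ U.T
    # Translation: n_Ci^T t = delta_Ci - delta_Di, solved in least squares.
    T_LH, *_ = np.linalg.lstsq(M_C, b_C - b_D, rcond=None)
    return np.hstack([R_LH, T_LH.reshape(3, 1)])
```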
4) Data are collected, and the three-dimensional point coordinates in the three-dimensional reconstruction coordinate systems V1 and V2 are obtained through matrix transformation, wherein V1 is the three-dimensional reconstruction coordinate system of the image data and V2 is the three-dimensional reconstruction coordinate system of the laser point cloud. This comprises the following steps:
4.1) Collecting data of the large environment with the camera to obtain RGB images;
4.2) Extracting and matching feature points of the acquired RGB images with the ORB algorithm to obtain a set of matched two-dimensional point pairs;
4.3) Calculating the coordinates X_3W of each three-dimensional point in V1 from the matched two-dimensional point pairs, wherein i denotes the i-th picture, M is the camera intrinsic matrix, R_X and T_X are the camera extrinsic parameters, and X_2W denotes the homogeneous coordinates of the matched two-dimensional points;
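The formula image for X_3W is not legible here. The quantities listed (M, R_X, T_X and the matched pairs X_2W across pictures) match the standard two-view triangulation setup, so the sketch below uses OpenCV triangulation between picture i and picture i+1 under that assumption; it is not the patent's literal expression.

```python
import cv2
import numpy as np

def triangulate_V1(M, R1, T1, R2, T2, pts1, pts2):
    """Assumed reconstruction of X_3W in V1 by triangulating ORB matches
    between picture i and picture i+1.
    pts1, pts2: 2xN pixel coordinates of matched feature points."""
    P1 = M @ np.hstack([R1, T1.reshape(3, 1)])   # 3x4 projection of picture i
    P2 = M @ np.hstack([R2, T2.reshape(3, 1)])   # 3x4 projection of picture i+1
    X_h = cv2.triangulatePoints(P1, P2,
                                pts1.astype(np.float64), pts2.astype(np.float64))
    X_3W = (X_h[:3] / X_h[3]).T                  # N x 3 Euclidean coordinates
    return X_3W
```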
4.4) Collecting data of the large environment with the laser radar to obtain point cloud coordinates p1_i, where i denotes the i-th point cloud;
4.5) Calculating each three-dimensional coordinate P2 in V2 from the point cloud coordinates p1_i, where i denotes the i-th point cloud and R_L, T_L are the laser radar extrinsic parameters.
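The formula image for P2 is likewise not legible; given that R_L and T_L are the lidar extrinsics from step 2.6, the natural reading is a rigid transform of each lidar point, which the sketch below assumes (P2_i = R_L · p1_i + T_L).

```python
import numpy as np

def lidar_points_to_V2(p1, R_L, T_L):
    """Assumed mapping of lidar points p1 (N x 3) into the reconstruction
    coordinate system V2: P2_i = R_L p1_i + T_L."""
    return (R_L @ p1.T + T_L.reshape(3, 1)).T
```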
5) Data fusion of the laser radar and the camera is carried out to realize the three-dimensional reconstruction. As a preferred embodiment of the invention, as shown in FIG. 2, this comprises the following steps:
5.1) Filtering out invalid feature points.
Feature points extracted from sensitive areas of the large environment (such as roadside buildings in a road street view) are defined as valid feature points, and feature points from other, non-sensitive areas (such as trees in a road street view and other environment information irrelevant to the three-dimensional reconstruction) are defined as invalid feature points. The Filter method assigns different weights to the different feature points and filters out the invalid ones, yielding the set of valid feature points in V1.
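The patent does not spell out how the Filter weights are produced. The sketch below assumes each feature point has already been labelled with a semantic class (for example by an external detector) and keeps only the classes treated as sensitive; the class list, weights and threshold are purely illustrative.

```python
import numpy as np

# Illustrative class weights; the actual weighting scheme is not given in the patent.
CLASS_WEIGHTS = {"building": 1.0, "road_sign": 1.0, "tree": 0.0, "sky": 0.0}

def filter_valid_features(points_V1, labels, threshold=0.5):
    """Keep feature points whose class weight exceeds the threshold.
    points_V1: N x 3 coordinates TT_i in V1; labels: length-N class names."""
    weights = np.array([CLASS_WEIGHTS.get(lbl, 0.0) for lbl in labels])
    keep = weights > threshold
    return points_V1[keep]
```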
5.2) The valid feature points TT_i in the reconstruction coordinate system V1 are projected into V2, wherein TT_i denotes the i-th valid feature point in V1 and TTDD_i denotes the i-th valid feature point after projection from V1 into V2;
5.3) Solving for the nearest matching point cloud of each valid feature point TTDD_i: taking the feature points TTDD_i projected from V1 into V2 as reference points, the ICP algorithm is used to obtain the nearest matching point cloud DD_i for each feature point TTDD_i;
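A minimal sketch of the nearest-neighbour association in step 5.3 is shown below, using a k-d tree for the correspondence step that an ICP iteration relies on; reducing the patent's ICP step to a single nearest-neighbour query is a simplifying assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_nearest_cloud(TTDD, cloud_V2):
    """For each projected feature point TTDD_i (N x 3), find its nearest
    point DD_i in the V2 point cloud (M x 3)."""
    tree = cKDTree(cloud_V2)
    _, idx = tree.query(TTDD)
    DD = cloud_V2[idx]
    return DD, idx
```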
5.4) Calculating the transformation matrix T_DC2, i.e. the matrix transformation relation between TT_i and DD_i, wherein TT_i denotes the coordinates of the i-th valid feature point in V1, DD_i denotes the point cloud coordinates matched to the i-th valid feature point in V2, and T'_DC2 denotes the transformation matrix from each valid feature point in V1 to its matching point cloud in V2;
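The expression relating TT_i, DD_i and T_DC2 is not legible in this text. A common way to obtain a rigid transform from such point correspondences is the SVD-based (Kabsch) fit sketched below, offered as an assumption consistent with the surrounding steps.

```python
import numpy as np

def fit_T_DC2(TT, DD):
    """Rigid transform [R | t] taking valid feature points TT (N x 3) in V1
    onto their matched points DD (N x 3) in V2 (assumed SVD-based fit)."""
    mu_TT, mu_DD = TT.mean(axis=0), DD.mean(axis=0)
    H = (TT - mu_TT).T @ (DD - mu_DD)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # keep det(R) = +1
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_DD - R @ mu_TT
    return np.hstack([R, t.reshape(3, 1)])   # 3x4 matrix T_DC2
```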
5.5) Depth fusion of the image layer and the point cloud layer is carried out to obtain the final three-dimensional reconstruction: the set of pixel points contained in the valid feature point set in V1 is recorded as XS2, the set obtained by projecting these pixel points into V2 is recorded as XS1, and the pixel points are projected into V2, where i denotes the i-th pixel point and T_DC2 denotes the transformation matrix from the valid feature points in V1 to the matching point clouds in V2.
According to the above method, a coordinate system V1 containing the pixel point set, the valid feature point set and the laser point cloud is finally obtained, so that depth fusion of the two-dimensional image layer and the three-dimensional point cloud layer is realized and the final three-dimensional reconstruction is obtained.
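Finally, a short sketch of the projection in step 5.5: the pixel-point set XS2 associated with the valid features in V1 is carried into V2 with the transform T_DC2 from step 5.4. The exact per-pixel formula is not reproduced legibly in this text, so a plain rigid mapping is assumed.

```python
import numpy as np

def fuse_layers(XS2_points, T_DC2):
    """Project the pixel-point set XS2 (N x 3, in V1) into V2 to obtain XS1.
    T_DC2 is the 3x4 transform [R | t] estimated in step 5.4."""
    R, t = T_DC2[:, :3], T_DC2[:, 3]
    XS1 = (R @ XS2_points.T).T + t           # XS1_i = R * XS2_i + t (assumed form)
    return XS1
```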

Claims (6)

1. A three-dimensional reconstruction method for an intelligent vehicle based on multi-sensor fusion, characterized by comprising the following steps:
1) Calibrating camera parameters, wherein the camera parameters comprise the camera intrinsic matrix M, the camera extrinsic rotation matrix R_X and the extrinsic translation matrix T_X;
2) Calibrating laser radar parameters, wherein the laser radar parameters comprise the laser radar extrinsic rotation matrix R_L and the extrinsic translation matrix T_L;
3) Jointly calibrating the camera and the radar, and calculating the transformation matrix T_DC from the laser radar coordinate system to the camera coordinate system;
4) Obtaining three-dimensional point coordinates in the three-dimensional reconstruction coordinate system V1 through the camera intrinsic matrix M, the camera extrinsic rotation matrix R_X and the extrinsic translation matrix T_X, and obtaining three-dimensional point coordinates in V2 through the laser radar extrinsic rotation matrix R_L and extrinsic translation matrix T_L, wherein V1 is the three-dimensional reconstruction coordinate system of the image data and V2 is the three-dimensional reconstruction coordinate system of the laser point cloud;
5) Using the transformation matrix T_DC, fusing the data of the laser radar and the camera to realize the three-dimensional reconstruction.
2. The three-dimensional reconstruction method for an intelligent vehicle based on multi-sensor fusion according to claim 1, characterized in that in step 1), calibrating the camera parameters comprises the following steps:
1.1) Collecting checkerboard images;
1.2) Calibrating the camera parameters from the acquired checkerboard images using the Zhang Zhengyou calibration method.
3. The three-dimensional reconstruction method for an intelligent vehicle based on multi-sensor fusion according to claim 1, characterized in that in step 2), calibrating the laser radar parameters comprises the following steps:
2.1) Constructing a vehicle coordinate system xyz and a radar coordinate system XYZ;
2.2) Extracting the ground plane equation in the radar coordinate system using the RANSAC method: Ax + By − z + C = 0;
2.3) Collecting laser point cloud data with the laser radar;
2.4) Calculating a rotation matrix R_m and a translation matrix T_m from the laser point cloud data collected by the laser radar;
2.5) Calculating a rotation matrix R_h and a translation matrix T_h from the laser point cloud data collected by the laser radar;
2.6) Calculating the laser radar extrinsic rotation matrix R_L and extrinsic translation matrix T_L as follows: R_L = R_m R_h, T_L = T_m + T_h.
4. The three-dimensional reconstruction method for an intelligent vehicle based on multi-sensor fusion according to claim 1, characterized in that in step 3), the joint calibration of the camera and the radar comprises the following steps:
3.1) Pasting the checkerboard pattern used for calibrating the camera intrinsic and extrinsic parameters on a wall surface that the laser radar can locate, and collecting image data and laser point cloud data;
3.2) Constructing a camera coordinate system x1y1z1, in which the z1 axis is the vertical direction in space, the positive x1 axis points to the front of the vehicle, the positive y1 axis points to the right side of the vehicle, and the coordinate origin is the center of the bottom surface of the camera;
3.3) Using the acquired data, calculating the normal-vector set matrices M_C and M_D and the distance-value set matrices b_C and b_D;
3.4) Calculating the transformation matrix T_DC from the laser radar coordinate system to the camera coordinate system according to:
T_DC = [R_LH  T_LH]
wherein R_LH = U V^T, and U V^T is the result of the SVD decomposition of R'_LH.
5. The three-dimensional reconstruction method for an intelligent vehicle based on multi-sensor fusion according to claim 1, characterized in that step 4) comprises the following steps:
4.1) Collecting data of the large environment with the camera to obtain RGB images;
4.2) Extracting and matching feature points of the acquired RGB images with the ORB algorithm to obtain a set of matched two-dimensional point pairs;
4.3) Calculating the coordinates X_3W of each three-dimensional point in V1 from the matched two-dimensional point pairs, wherein i denotes the i-th picture and X_2W denotes the homogeneous coordinates of the matched two-dimensional points;
4.4) Collecting data of the large environment with the laser radar to obtain point cloud coordinates p1_i, where i denotes the i-th point cloud;
4.5) Calculating each three-dimensional coordinate P2 in V2.
6. The three-dimensional reconstruction method for an intelligent vehicle based on multi-sensor fusion according to claim 1, characterized in that step 5) comprises the following steps:
5.1) Filtering out invalid feature points using the Filter method;
5.2) Projecting the valid feature points TT_i in the reconstruction coordinate system V1 into V2;
5.3) Using the ICP algorithm to find the nearest matching point cloud of each valid feature point TTDD_i;
5.4) Calculating the transformation matrix T_DC2, i.e. the matrix transformation relation between TT_i and DD_i, wherein TT_i denotes the coordinates of the i-th valid feature point in V1, DD_i denotes the point cloud coordinates matched to the i-th valid feature point in V2, and T'_DC2 denotes the transformation matrix from each valid feature point in V1 to its matching point cloud in V2;
5.5) Carrying out depth fusion of the image layer and the point cloud layer to obtain the final three-dimensional reconstruction: the set of pixel points contained in the valid feature point set in V1 is recorded as XS2, the set obtained by projecting these pixel points into V2 is recorded as XS1, and the pixel points are projected into V2, where i denotes the i-th pixel point and T_DC2 denotes the transformation matrix from the valid feature points in V1 to the matching point clouds in V2.
CN202110538817.4A 2021-05-18 2021-05-18 Three-dimensional reconstruction method for vehicle based on multi-sensor fusion Active CN113421325B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110538817.4A CN113421325B (en) 2021-05-18 2021-05-18 Three-dimensional reconstruction method for vehicle based on multi-sensor fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110538817.4A CN113421325B (en) 2021-05-18 2021-05-18 Three-dimensional reconstruction method for vehicle based on multi-sensor fusion

Publications (2)

Publication Number Publication Date
CN113421325A (en) 2021-09-21
CN113421325B (en) 2024-03-19

Family

ID=77712467

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110538817.4A Active CN113421325B (en) 2021-05-18 2021-05-18 Three-dimensional reconstruction method for vehicle based on multi-sensor fusion

Country Status (1)

Country Link
CN (1) CN113421325B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116824067B (en) * 2023-08-24 2023-11-24 成都量芯集成科技有限公司 Indoor three-dimensional reconstruction method and device thereof

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111612845A (en) * 2020-04-13 2020-09-01 江苏大学 Laser radar and camera combined calibration method based on mobile calibration plate

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106796728A (en) * 2016-11-16 2017-05-31 深圳市大疆创新科技有限公司 Generate method, device, computer system and the mobile device of three-dimensional point cloud

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111612845A (en) * 2020-04-13 2020-09-01 江苏大学 Laser radar and camera combined calibration method based on mobile calibration plate

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Calibration of a lidar and camera data fusion system for intelligent vehicles (智能汽车激光雷达和相机数据融合系统标定); 许小徐, 黄影平, 胡兴; Optical Instruments (光学仪器); Vol. 41, No. 06; pp. 79–86 *

Also Published As

Publication number Publication date
CN113421325A (en) 2021-09-21

Similar Documents

Publication Publication Date Title
CN110859044B (en) Integrated sensor calibration in natural scenes
JP7073315B2 (en) Vehicles, vehicle positioning systems, and vehicle positioning methods
CN110146910B (en) Positioning method and device based on data fusion of GPS and laser radar
CN111436216B (en) Method and system for color point cloud generation
CN109631887B (en) Inertial navigation high-precision positioning method based on binocular, acceleration and gyroscope
JP2020525809A (en) System and method for updating high resolution maps based on binocular images
Senlet et al. A framework for global vehicle localization using stereo images and satellite and road maps
US11781863B2 (en) Systems and methods for pose determination
CN105300403B (en) A kind of vehicle mileage calculating method based on binocular vision
WO2019007263A1 (en) Method and device for calibrating external parameters of vehicle-mounted sensor
JP2012185011A (en) Mobile position measuring apparatus
JP2012127896A (en) Mobile object position measurement device
JP6278791B2 (en) Vehicle position detection device, vehicle position detection method, vehicle position detection computer program, and vehicle position detection system
WO2021017211A1 (en) Vehicle positioning method and device employing visual sensing, and vehicle-mounted terminal
CN109900245A (en) Gradient estimating device, gradient presumption method and storage medium
JP6278790B2 (en) Vehicle position detection device, vehicle position detection method, vehicle position detection computer program, and vehicle position detection system
CN113421325B (en) Three-dimensional reconstruction method for vehicle based on multi-sensor fusion
WO2020113425A1 (en) Systems and methods for constructing high-definition map
CN112749584A (en) Vehicle positioning method based on image detection and vehicle-mounted terminal
CN111260733B (en) External parameter estimation method and system of vehicle-mounted all-around multi-camera system
CN114503044A (en) System and method for automatically labeling objects in 3D point clouds
KR20210102953A (en) Methods for detecting and modeling objects on the road surface
CN110660113A (en) Method and device for establishing characteristic map, acquisition equipment and storage medium
Horani et al. A framework for vision-based lane line detection in adverse weather conditions using vehicle-to-infrastructure (V2I) communication
CN111505692B (en) Beidou/vision-based combined positioning navigation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant