CN112465732A - Registration method of vehicle-mounted laser point cloud and sequence panoramic image - Google Patents


Info

Publication number: CN112465732A
Application number: CN202011367207.4A
Authority: CN (China)
Prior art keywords: registration; image; point cloud; sequence; panoramic image
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 陈驰 (Chen Chi), 杨必胜 (Yang Bisheng), 张云菲 (Zhang Yunfei)
Current assignee: Wuhan University (WHU)
Original assignee: Wuhan University (WHU)
Application filed by Wuhan University (WHU)
Priority: CN202011367207.4A
Publication: CN112465732A

Classifications

    • G — PHYSICS
        • G06 — COMPUTING; CALCULATING OR COUNTING
            • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 5/00 — Image enhancement or restoration
                    • G06T 5/40 — using histogram techniques
                • G06T 7/00 — Image analysis
                    • G06T 7/10 — Segmentation; Edge detection
                        • G06T 7/11 — Region-based segmentation
                        • G06T 7/13 — Edge detection
                        • G06T 7/136 — involving thresholding
                    • G06T 7/30 — Determination of transform parameters for the alignment of images, i.e. image registration
                        • G06T 7/33 — using feature-based methods
                            • G06T 7/344 — involving models
                        • G06T 7/38 — Registration of image sequences
                    • G06T 7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
                • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
                    • G06T 2207/10 — Image acquisition modality
                        • G06T 2207/10028 — Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of surveying and mapping, and discloses a registration method for vehicle-mounted laser point clouds and sequence panoramic images, comprising the following steps: acquiring vehicle-mounted laser point cloud data and sequence panoramic image data, and deriving a first skyline vector and a second skyline vector from them; constructing a laser point cloud registration primitive graph and an image registration primitive graph; forming conjugate registration primitive pairs; solving a pre-established 2D-3D coarse registration model with the conjugate registration primitive pairs to obtain initial spatial coordinate transformation parameters between the photogrammetric coordinate system and the LiDAR reference coordinate system, completing the 2D-3D coarse registration; generating an MVS image dense point cloud from the sequence panoramic image data; and, taking the initial spatial coordinate transformation parameters as initial values, performing optimal registration between the MVS image dense point cloud and the vehicle-mounted laser point cloud data to complete the 3D-3D precise registration. The registration results of the invention have higher accuracy and better accuracy consistency.

Description

Registration method of vehicle-mounted laser point cloud and sequence panoramic image
Technical Field
The invention relates to the technical field of surveying and mapping, and in particular to a registration method for vehicle-mounted laser point clouds and sequence panoramic images.
Background
With the rapid development of electronic, sensor and computer technologies, earth observation sensors such as cameras, laser scanners and POS (Position and Orientation System) units have gradually become smaller and lighter, enabling low-cost, light, small, integrated multi-sensor remote sensing systems. Meanwhile, the continuing diversification of sensor carrier platforms, in particular the rapid development of low-altitude Unmanned Aerial Vehicles (UAVs) and vehicle-mounted mobile measurement systems, makes up for the shortcomings of traditional satellite and manned aerial photogrammetry and remote sensing in medium- and low-altitude and ground-level high-resolution multi-source spatial data coverage. Driven jointly by light, small multi-sensor remote sensing systems and full-coverage remote sensing platforms, spatial data acquisition is developing toward multi-platform, multi-view, high-resolution and diversified data, as shown in fig. 1. The new low-altitude and vehicle-mounted mobile measurement remote sensing platforms offer advantages unmatched by traditional surveying and mapping means, such as fast data acquisition, multiple viewing angles, rich data types (geometry and spectrum) and non-contact acquisition of three-dimensional coordinates of ground object surfaces, and they play an increasingly important role in industries such as basic surveying and mapping, smart cities, and navigation and location-based services.
The vehicle-mounted mobile measurement system achieves side-view coverage of the ground with laser point cloud and image data, and can efficiently collect data such as high-resolution LiDAR (laser radar) point clouds and panoramic images of the measured environment; it is a new means of photogrammetry and remote sensing. The multiple kinds of earth observation data provided by a vehicle-mounted Mobile Measurement System (MMS), such as high-resolution LiDAR point clouds and panoramic images, can be applied in fields such as corridor topographic mapping, highway asset management and digital city construction.
Multi-source data fusion is the trend in vehicle-mounted MMS applications, and registration of vehicle-mounted MMS multi-source data is the first problem such fusion must solve. Panoramic images and vehicle-mounted laser point clouds are typical vehicle-mounted MMS mapping data, and their registration methods fall into 4 classes: (1) Semi-automatic methods based on multi-sensor calibration, i.e. using the orientation element values output directly by the Position and Orientation System (POS), together with overall compensation of the calibration parameters of the sensor platform, to register the vehicle-mounted laser point cloud and the images; such methods usually require a calibration field. Research shows that, owing to the diversity of registration error sources and the limitations of hardware synchronization and geometric calibration, registration discrepancies remain after registering data with hardware synchronization and geometric calibration alone. Moreover, since GPS signal quality and IMU drift vary over time, such methods cannot eliminate this kind of registration error. (2) 2D (panoramic image)-3D (laser point cloud) automatic methods based on matching geometric registration primitives. These methods extract conjugate registration primitive pairs from the LiDAR point cloud and the image, generally using an existing building wireframe model as prior knowledge, and perform registration through the similarity between the wireframe profile and the laser point cloud profile. How to extract and match conjugate registration primitives automatically and robustly remains an open research problem for the academic community. (3) 2D-3D automatic methods based on mutual information maximization.
These methods generally render the point cloud data under the panoramic image imaging model, compute the mutual information measure between the resulting point cloud intensity/depth image and the panoramic image, and solve for the maximum mutual information with the Nelder-Mead method to achieve registration; however, the laser point cloud intensity must be calibrated in advance. (4) 3D (image point cloud)-3D (laser point cloud) registration methods based on multi-view stereo matching point clouds and laser point clouds. These methods use the POS output as the initial registration and a non-rigid iterative closest point method to register the two; however, the Iterative Closest Point (ICP) method demands a good approximation of the initial transformation parameters, and when the initial values provided by the POS deviate substantially, convergence is hard to guarantee and the adaptability of the algorithm is limited.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a registration method of vehicle-mounted laser point cloud and sequence panoramic image.
The invention provides a registration method of vehicle-mounted laser point cloud and sequence panoramic image, which comprises the following steps:
step 1, vehicle-mounted laser point cloud data and sequence panoramic image data are obtained; a first skyline vector is derived from the vehicle-mounted laser point cloud data, and a second skyline vector from the sequence panoramic image data;
step 2, a laser point cloud registration primitive graph and an image registration primitive graph are constructed from the first skyline vector and the second skyline vector respectively; conjugate registration primitive pairs are formed from the laser point cloud registration primitive graph and the image registration primitive graph; a pre-established 2D-3D coarse registration model is solved with the conjugate registration primitive pairs to obtain the initial spatial coordinate transformation parameters between the photogrammetric coordinate system and the LiDAR reference coordinate system, completing the 2D-3D coarse registration;
step 3, an MVS image dense point cloud is generated from the sequence panoramic image data; taking the initial spatial coordinate transformation parameters as initial values, optimal registration between the MVS image dense point cloud and the vehicle-mounted laser point cloud data is performed to complete the 3D-3D precise registration.
Preferably, in step 1, a hierarchical urban scene target extraction method is applied to the vehicle-mounted laser point cloud data to extract building facades, yielding building facade point cloud patches; the RMB algorithm then extracts the outline of each facade patch, and the outline is regularized to generate the first skyline vector.
Preferably, in step 1, based on the sequence panoramic image data, each panoramic image is converted into a frame-type virtual image by a virtual imaging method, initial skyline position information is obtained by histogram threshold segmentation, optimized skyline position information is obtained by the GrabCut segmentation method, and the second skyline vector is generated by the chord-to-point distance accumulation (CPDA) corner detection algorithm.
Preferably, in step 2, registration primitive matching is implemented with a minimum graph edit distance (GED) graph matching algorithm to obtain the conjugate registration primitive pairs.
Preferably, matching the registration primitives with the minimum graph edit distance (GED) graph matching algorithm includes: building a KD-tree index over the three interior-angle values of every kernel triangle, treated as three-dimensional point coordinates; traversing the kernel triangle matching set, computing the GED of the graph transform corresponding to each kernel triangle match, sorting the results, and taking the transform T with the minimum GED as the best match;
the GED of the graph transform corresponding to a kernel triangle match is computed as follows: the GED measures the global graph-edge similarity between the image registration primitive graph G_image and the laser point cloud registration primitive graph G_las under the graph transform defined by the local kernel triangle matching pair. The GED between G_image and G_las is defined as follows:
GED(G_image, G_las) = min_T ( Σ_{k=1}^{K} cost(op_k, T) ) / length(G_las)
in the formula, G_las = (V_las, E_las) denotes the laser point cloud registration primitive graph; G_image = (V_image, E_image) denotes the image registration primitive graph; T is the graph transform defined by the local kernel triangle matching pair; cost(op, T) is the graph-edit cost function of the K operations needed to edit G_image into a complete match of G_las, set to the rotated graph-edge difference; length is the graph-edge length accumulation function.
Preferably, in step 2, in the pre-established 2D-3D coarse registration model, let the image point of a skyline corner registration primitive on the virtual image be [x_v, y_v, −f]^T, the corresponding point on the panoramic sphere be [x_p, y_p, z_p]^T, and the coordinates of the corresponding building corner in the laser point cloud data be (x_w, y_w, z_w); the collinearity between the two corner points is then expressed as:
λ·[x_p, y_p, z_p]^T = R_f·([x_w, y_w, z_w]^T − T_f),  [x_p, y_p, z_p]^T ∝ R_v·[x_v, y_v, −f]^T,  r = h·θ/π,  c = w·φ/(2π),  with θ = arccos(z_p) and φ = atan2(y_p, x_p)
in the formula, R_v is the camera rotation matrix of the virtual imaging process; R_f is the rotation matrix from the rotation-translation decoupled computation; T_f is the camera translation parameter; λ is a scale parameter; h and w are the height and width of the panoramic image; r and c are the row and column numbers of the current pixel on the panoramic image.
Preferably, step 3 is implemented as follows: sequence virtual images are generated from the sequence panoramic image data by virtual imaging, and the exterior orientation elements of the sequence virtual images in the photogrammetric coordinate system are recovered by structure from motion; the multi-view stereo matching MVS image dense point cloud is generated from the sequence panoramic image data by a multi-view stereo matching method; taking the initial spatial coordinate transformation parameters as the initial value of the Iterative Closest Point (ICP) algorithm, the distance between the MVS image dense point cloud and the vehicle-mounted laser point cloud is minimized, achieving the precise 3D-3D registration.
Preferably, in step 3, the virtual camera exterior parameter calibration is achieved with an incremental bundle adjustment SfM algorithm, and the MVS image dense point cloud is generated with the Daisy algorithm.
Preferably, in step 3, the registration model adopted to achieve the 3D-3D precise registration is expressed as:
M_las = λ·R·M_MVS + T
where M_las denotes the corresponding point, in the laser point cloud reference frame C_w, of an image point on the virtual image; M_MVS denotes the corresponding point of the same image point in the MVS photogrammetric coordinate frame C_MVS; λ denotes the scale parameter, R a 3×3 rotation matrix, and T the translation parameter;
r is calculated by the following formula:
R = R_decouple^(−1)·R_bundle
where R_bundle and R_decouple are the image exterior-orientation rotation matrices computed in the SfM and coarse registration processes respectively; the 3D-3D precise registration comprises robust estimation of the initial values of the fine registration model and iterative closest point model parameter optimization.
One or more technical schemes provided by the invention at least have the following technical effects or advantages:
First, vehicle-mounted laser point cloud data and sequence panoramic image data are acquired; a first skyline vector is derived from the vehicle-mounted laser point cloud data, and a second skyline vector from the sequence panoramic image data. Then a laser point cloud registration primitive graph and an image registration primitive graph are constructed from the first and second skyline vectors respectively; conjugate registration primitive pairs are formed from the two graphs; the pre-established 2D-3D coarse registration model is solved with the conjugate registration primitive pairs to obtain the initial spatial coordinate transformation parameters between the photogrammetric coordinate system and the LiDAR reference coordinate system, completing the 2D-3D coarse registration. Finally, an MVS image dense point cloud is generated from the sequence panoramic image data; taking the initial spatial coordinate transformation parameters as initial values, optimal registration between the MVS image dense point cloud and the vehicle-mounted laser point cloud data is performed to complete the 3D-3D precise registration. The registration results of the invention have higher accuracy and better accuracy consistency.
Drawings
FIG. 1 is a schematic diagram of a remote sensing earth observation platform and the development of sensor diversity;
FIG. 2 is a schematic flow chart of a registration method of a vehicle-mounted laser point cloud and a sequence panoramic image according to the present invention;
FIG. 3 is a schematic diagram of virtual imaging of a panoramic image in a registration method of a vehicle-mounted laser point cloud and a sequence panoramic image provided by the invention;
FIG. 4 is a schematic diagram of the extraction of the geometric registration elements (skylines) of the panoramic image virtual image in the registration method of the vehicle-mounted laser point cloud and the sequence panoramic image provided by the present invention;
FIG. 5 is a schematic diagram of a geometric registration cell pair generated based on graph matching in a registration method of a vehicle-mounted laser point cloud and a sequence panoramic image provided by the invention;
FIG. 6 is a test data set used in a registration method of a vehicle-mounted laser point cloud and a sequence panoramic image provided by the present invention;
FIG. 7 is a schematic diagram illustrating a coarse-to-fine registration result in the registration method of the vehicle-mounted laser point cloud and the sequence panoramic image provided by the present invention;
FIG. 8 is a histogram of the registration accuracy of randomly sampled images registered with the registration method for vehicle-mounted laser point clouds and sequence panoramic images provided by the present invention;
fig. 9 is a detail of a color point cloud of a facade of a building obtained by using the registration method of the vehicle-mounted laser point cloud and the sequence panoramic image provided by the invention.
Detailed Description
In order to better understand the technical solution, the technical solution will be described in detail with reference to the drawings and the specific embodiments.
The embodiment provides a registration method of a vehicle-mounted laser point cloud and a sequence panoramic image, as shown in fig. 2, the registration method mainly comprises the following steps:
step 1, registration element extraction.
Skylines (the upper contour lines of buildings) are selected as registration primitives. In the vehicle-mounted laser point cloud (i.e. the LiDAR point cloud), building facades are extracted with a hierarchical urban scene target extraction method and regularized to generate the first skyline vector; in the sequence panoramic images, each panoramic image of the spherical unfolding imaging model is converted into a frame-type image by a virtual imaging method, and on the virtual image the GrabCut algorithm is combined with the chord-to-point distance accumulation (CPDA) corner detection algorithm to generate the second skyline vector.
Specifically, extracting the skyline from the vehicle-mounted laser point cloud data is equivalent to extracting the building facades. The invention adopts a hierarchical urban scene target extraction method to extract the building facades in the vehicle-mounted laser point cloud data. For each extracted building facade point cloud patch, the RMB algorithm extracts its outline, the extracted outline is regularized, and the resulting outer polygons are taken as candidate registration primitives.
The panoramic imaging model is spherical, as shown in fig. 3. A sphere is a non-developable surface, and unfolding it into a two-dimensional spectral-information storage matrix (the panoramic image) introduces large deformations; the geometry of ground objects in the panoramic image is distorted accordingly. To address this problem, the invention provides a panoramic image virtual-imaging skyline extraction method: first the panoramic image is converted into a frame-type virtual image by the virtual imaging method, then histogram threshold segmentation determines the approximate skyline position, the GrabCut segmentation method then optimizes the skyline position, and finally the CPDA corner detection algorithm generates the second skyline vector.
In the pre-established 2D-3D coarse registration model, let the image-space coordinates of an object point on the virtual imaging plane be [x_v, y_v, −f]^T, its imaging point on the panoramic sphere model be [x_p, y_p, z_p]^T, and its pixel position on the corresponding unfolded model be (r, c); the collinearity between the image point on the panoramic sphere and the virtual image point can then be expressed as formula (1), as follows:
[x_p, y_p, z_p]^T ∝ R_v·[x_v, y_v, −f]^T,  r = h·θ/π,  c = w·φ/(2π),  with θ = arccos(z_p) and φ = atan2(y_p, x_p)   (1)
in the formula, h and w are the height and width of the panoramic image respectively; and r and c are row and column numbers of the current pixel on the panoramic image respectively.
Combining the predefined orientation elements, pixel size and virtual sensor-array size of the virtual camera, a frame-type virtual image of an ordinary lens with any focal length and orientation can be obtained through the inverse solution of formula (1).
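As an illustration, this inverse solution can be sketched in Python. The sketch below is not part of the patent text: the rotation convention and the equirectangular angle mapping (polar angle to row, azimuth to column) are assumptions, and the function name is hypothetical.

```python
import math

def virtual_to_panorama(xv, yv, f, Rv, h, w):
    """Map a virtual-image point [xv, yv, -f]^T to a panorama pixel (r, c).

    Simplified sketch: rotate the camera ray into the panoramic sphere
    frame with the virtual-camera rotation Rv (3x3, row-major), then
    convert the unit direction to equirectangular row/column.
    """
    ray = (xv, yv, -f)
    d = [sum(Rv[i][j] * ray[j] for j in range(3)) for i in range(3)]
    n = math.sqrt(sum(v * v for v in d))
    x, y, z = (v / n for v in d)
    theta = math.acos(max(-1.0, min(1.0, z)))   # polar angle, 0..pi
    phi = math.atan2(y, x) % (2 * math.pi)      # azimuth, 0..2*pi
    r = h * theta / math.pi                     # row on the panorama
    c = w * phi / (2 * math.pi)                 # column on the panorama
    return r, c
```

Iterating this mapping over every pixel of the virtual sensor array and resampling the panorama at the returned (r, c) produces the frame-type virtual image.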
Because of its high brightness the sky differs strongly in grey level from the urban scene that forms the image subject, and it has clear edge features along its boundary with buildings, as shown in figs. 4(a) and 4(b). Therefore the image histogram is computed first, and the midpoint between the middle grey level (128) and the histogram peak is selected as the segmentation threshold, as shown in fig. 4(c); the image is then segmented with this threshold, regions above it becoming candidate sky pixel regions and regions below it candidate ground-object regions, as shown in fig. 4(d). The initial skyline position can thus be determined by histogram threshold segmentation, and this result is then used as the initial segmentation of the GrabCut algorithm.
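The threshold selection just described can be sketched as follows (a simplified illustration, not part of the patent; function names are hypothetical, the image is flattened to a list of 8-bit grey values, and real images would need histogram smoothing first):

```python
def sky_threshold(gray_pixels, bins=256):
    """Pick the sky/ground threshold as the midpoint between the mid
    grey level (128) and the histogram peak, assumed to be the sky mode."""
    hist = [0] * bins
    for v in gray_pixels:
        hist[v] += 1
    peak = max(range(bins), key=lambda g: hist[g])   # dominant grey level
    return (128 + peak) // 2

def split_sky(gray_pixels, threshold):
    # Pixels above the threshold become candidate sky, the rest
    # candidate ground objects.
    sky = [v for v in gray_pixels if v > threshold]
    ground = [v for v in gray_pixels if v <= threshold]
    return sky, ground
```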
The segmentation into ground-object foreground and sky background is then optimized to improve the quality of the extracted skyline, as follows:
(1) The initial GrabCut segmentation is specified from the histogram threshold segmentation result and the prior knowledge that the sky background lies in the upper part of the image and is continuously distributed; the image is divided into the 3 regions shown in fig. 4(e): a sky region (top), an undetermined region (middle) and a ground-object region (bottom).
(2) This initial segmentation is fed into the GrabCut algorithm, which iterates until convergence, as shown in fig. 4(f).
(3) The contour of the resulting binary image is extracted to obtain the skyline. The CPDA method is then used to detect the corner points on the contour lines and generate the skyline vector data, as shown in fig. 4(g). The skyline vectors extracted from the panoramic image and from the laser point cloud are used to construct the registration primitive graphs, which then enter the computation of the 2D panoramic image-3D laser point cloud coarse registration model parameters.
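A very reduced sketch of CPDA-style corner detection on a skyline contour is shown below. It is illustrative only: the published CPDA slides each chord along the curve and normalises the per-chord responses, both of which are omitted here, and the function names and thresholds are assumptions.

```python
import math

def _point_chord_dist(p, a, b):
    # Perpendicular distance from point p to the chord through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    vx, vy = bx - ax, by - ay
    length = math.hypot(vx, vy)
    if length == 0:
        return math.hypot(px - ax, py - ay)
    return abs(vx * (py - ay) - vy * (px - ax)) / length

def cpda_corners(contour, half_widths=(5, 10, 15), min_response=2.0):
    """Accumulate, for each contour point, its distances to chords of
    several spans; local maxima of the product response are corners."""
    n = len(contour)
    response = [1.0] * n
    for k in half_widths:
        for i in range(n):
            if i - k < 0 or i + k >= n:
                response[i] = 0.0     # too close to the ends for this span
                continue
            response[i] *= _point_chord_dist(
                contour[i], contour[i - k], contour[i + k])
    return [i for i in range(1, n - 1)
            if response[i] >= min_response
            and response[i] >= response[i - 1]
            and response[i] >= response[i + 1]]
```

On an L-shaped skyline (a flat roof line meeting a vertical facade edge) the single building corner is recovered as the unique response maximum.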
Step 2, 2D-3D coarse registration based on registration primitive graph matching.
A 2D-3D collinearity registration model describes the geometric relation between the panoramic image and the LiDAR point cloud; after the registration primitives have been extracted, the 2D-3D coarse registration is carried out on this geometric basis. Specifically: registration primitive graphs are constructed from the extracted skyline vectors, matched under the edit-distance minimization criterion to form conjugate registration primitive pairs, and the pre-established 2D-3D coarse registration model is solved from the generated conjugate registration primitive pairs, yielding the initial transformation between the photogrammetric coordinate system and the LiDAR reference coordinate system.
The 2D-3D coarse registration model describes the geometric relation between the panoramic image and the LiDAR point cloud. In this pre-established model, let the image point of a skyline corner registration primitive on the virtual image be [x_v, y_v, −f]^T, its corresponding point on the panoramic sphere be [x_p, y_p, z_p]^T, and the coordinates of the corresponding building corner in the LiDAR data be (x_w, y_w, z_w); the collinearity between the two corner points can then be expressed as formula (2), as follows:
λ·[x_p, y_p, z_p]^T = R_f·([x_w, y_w, z_w]^T − T_f),  [x_p, y_p, z_p]^T ∝ R_v·[x_v, y_v, −f]^T,  r = h·θ/π,  c = w·φ/(2π),  with θ = arccos(z_p) and φ = atan2(y_p, x_p)   (2)
in the formula, R_v is the camera rotation matrix of the virtual imaging process; R_f is the rotation matrix from the rotation-translation decoupled computation; T_f is the camera translation parameter; λ is a scale parameter; h and w are the height and width of the panoramic image; r and c are the row and column numbers of the current pixel on the panoramic image.
The direct georeferencing data of the panoramic image are transformed from the navigation coordinate system into the photogrammetric coordinate system to obtain initial values of the exterior orientation elements (R_f, T_f) of the panoramic image. With these values the extracted LiDAR skyline corner points are back-projected onto the panoramic image surface, and a minimum graph edit distance (GED) graph matching algorithm is then proposed to match the registration primitives.
For the vehicle-mounted MMS street-view data, the n extracted registration primitive corner points form the graph vertex set E. Any 3 vertices of E are combined into a triangle serving as the generating kernel of the graph G; the edge connection rule of the graph is: the kernel triangle vertices are connected directly, forming a complete subgraph; for every non-kernel vertex, its distances to the 3 kernel triangle vertices are computed and the shortest one is taken as its connecting edge. The GED measures the global graph-edge similarity between the image registration primitive graph G_image = (V_image, E_image) and the laser point cloud registration primitive graph G_las = (V_las, E_las) under the graph transform defined by the local kernel triangle matching pair; the optimal graph matching problem is thus transformed into a minimum graph edit distance search, as shown in fig. 5. The GED between G_image and G_las is defined by formula (3), as follows:
GED(G_image, G_las) = min_T ( Σ_{k=1}^{K} cost(op_k, T) ) / length(G_las)   (3)
in the formula, cost(op, T) is the graph-edit cost function of the K operations (addition, deletion, substitution) needed to edit G_image into a complete match of G_las, set to the rotated graph-edge difference; length is the graph-edge length accumulation function; T is the graph transform defined by the local kernel triangle matching pairs.
First, a KD-tree index is built over the three interior-angle values of every kernel triangle, treated as three-dimensional point coordinates, to improve matching efficiency; then the kernel triangle matching set is traversed, the GED of the graph transform corresponding to each kernel triangle match is computed and the results are sorted, the transform T with the minimum GED is taken as the best match, and the coarse registration parameters are finally computed from the 2D-3D geometric feature pairs with a rotation-translation decoupled computation method.
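The graph construction rule and the kernel-triangle search can be sketched as follows. This is a heavily reduced illustration, not the patented method: only the three kernel edges are scored (the K-step edit operations over the full graphs and the KD-tree interior-angle pruning are omitted), points are 2D, and all names are hypothetical.

```python
import itertools
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def build_graph(points, kernel):
    """Edge rule from the text: the 3 kernel-triangle vertices form a
    complete subgraph; every other vertex links to its nearest kernel
    vertex."""
    a, b, c = kernel
    edges = {tuple(sorted(e)) for e in [(a, b), (a, c), (b, c)]}
    for i in range(len(points)):
        if i not in kernel:
            j = min(kernel, key=lambda k: dist(points[i], points[k]))
            edges.add(tuple(sorted((i, j))))
    return edges

def best_kernel_match(pts_img, pts_las):
    """GED-flavoured search: try every kernel triangle of the image
    graph against every ordered kernel triangle of the LiDAR graph and
    keep the mapping with the smallest accumulated edge-length
    difference over the three kernel edges."""
    best_score, best_map = float("inf"), None
    for tri_i in itertools.combinations(range(len(pts_img)), 3):
        for tri_l in itertools.permutations(range(len(pts_las)), 3):
            pairs = list(zip(tri_i, tri_l))
            cost = 0.0
            for (i, li), (j, lj) in itertools.combinations(pairs, 2):
                cost += abs(dist(pts_img[i], pts_img[j])
                            - dist(pts_las[li], pts_las[lj]))
            if cost < best_score:
                best_score, best_map = cost, dict(pairs)
    return best_score, best_map
```

With a LiDAR corner set that is a translated copy of the image corner set, the search recovers the identity correspondence with zero cost.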
Because of errors in the extraction and matching of the geometric registration primitives, the accuracy and robustness of the coarse registration result are limited. To address this, the invention takes the coarse-registration spatial coordinate transformation as initial value and uses a variant of the ICP algorithm to optimally register the panoramic image dense point cloud against the LiDAR point cloud, indirectly obtaining the precise registration parameters of the sequence images and achieving robust, high-accuracy registration of the data.
Step 3, 3D-3D precise registration based on the iterative closest point method.
Sequence virtual images are generated from the sequence panoramic images by virtual imaging; the exterior orientation elements of the sequence virtual images in the photogrammetric coordinate system are recovered by structure from motion; a multi-view stereo matching (MVS) image dense point cloud is generated from the sequence images by the multi-view stereo matching method; and, taking the spatial coordinate transformation obtained by the coarse registration as initial value, a variant of the ICP algorithm achieves the optimal registration between the panoramic MVS dense point cloud and the LiDAR point cloud, yielding the precise registration parameters of the sequence images.
Specifically, the problem of building a dense point cloud from panoramic images is converted into that of building a dense point cloud from conventional short-focal-length pinhole camera imagery. The virtual camera exterior parameters are calibrated with an incremental bundle adjustment SfM algorithm, after which an MVS image point cloud is generated with the Daisy algorithm to reconstruct object space. Registration parameter refinement is then achieved by minimizing the distance between the MVS point cloud and the LiDAR point cloud; the registration geometric model can be expressed as equation (4):
M_las = λ R M_MVS + T    (4)
The fine registration comprises robust estimation of initial values for the fine registration model and iterative closest point model parameter optimization.
Let m be a pixel on the virtual image, M_MVS its corresponding point in the MVS photogrammetric coordinate system C_MVS, and M_las its corresponding point in the laser point cloud reference frame C_w. The collinearity relations among them can be expressed as equations (5) and (6):
s_decouple m = A [R_decouple | t_decouple] M_las    (5)
s_bundle m = A [R_bundle | t_bundle] M_MVS    (6)
where [R_bundle | t_bundle] and [R_decouple | t_decouple] are the image exterior parameter matrices computed by SfM and by coarse registration, respectively, and A is the known camera intrinsic matrix. Combining equations (5) and (6) gives equation (7):
R = R_decouple^-1 R_bundle    (7)
Let P_decouple = (X_i, Y_i, Z_i)^T and P_bundle = (x_i, y_i, z_i)^T be the barycentered camera position coordinates in C_w and C_MVS, respectively. The scale and translation parameters can then be computed from equations (8) and (9):
λ = [ Σ_i (X_i² + Y_i² + Z_i²) / Σ_i (x_i² + y_i² + z_i²) ]^(1/2)    (8)
T = P_decouple − λ R P_bundle    (9)
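The scale and translation recovery of equations (8) and (9) can be sketched as follows, given the rotation R already solved and corresponding camera centres in the two frames. The exact norm used for the scale is an assumption (the standard barycentric least-squares form); the function name is hypothetical:

```python
import numpy as np

def scale_and_translation(P_w, P_mvs, R):
    """Recover the scale lambda and translation T of the similarity
    transform M_las = lambda * R * M_MVS + T from corresponding camera
    centres in the LiDAR frame (P_w) and the SfM frame (P_mvs)."""
    c_w, c_mvs = P_w.mean(axis=0), P_mvs.mean(axis=0)
    q_w, q_mvs = P_w - c_w, P_mvs - c_mvs            # barycentered coordinates
    lam = np.sqrt((q_w ** 2).sum() / (q_mvs ** 2).sum())  # cf. eq. (8)
    T = c_w - lam * R @ c_mvs                        # cf. eq. (9) at the barycentre
    return lam, T

# Round-trip check with a synthetic similarity transform.
rng = np.random.default_rng(1)
R = np.eye(3)
P_mvs = rng.normal(size=(6, 3))
P_w = 2.5 * P_mvs @ R.T + np.array([10.0, -4.0, 3.0])
lam, T = scale_and_translation(P_w, P_mvs, R)
assert np.isclose(lam, 2.5) and np.allclose(T, [10.0, -4.0, 3.0])
```

The round-trip check confirms that a known scale and translation are recovered exactly in the noise-free case.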
For multiple registration images, robust parameter estimation is achieved by voting and clustering, yielding the transformation parameters. A rigid-body spatial similarity transformation model is adopted as the registration model between the MVS and LiDAR point clouds. Taking the point-to-plane distance as the point-pair error, the error equation can be expressed as equation (10):
E = Σ_i w_i [ η_i · (λ_3D-3D R_3D-3D m_i + T_3D-3D − M_i) ]²    (10)
where m_i and M_i are the MVS and LiDAR points, respectively; w_i is the point-pair weight; η_i is the normal vector at point M_i; and (λ_3D-3D, R_3D-3D, T_3D-3D) is the spatial similarity transformation to be solved that minimizes the MVS-to-LiDAR point-to-plane distance.
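The weighted point-to-plane error above can be sketched as follows. This assumes the standard point-to-plane formulation (each residual is the signed distance of the transformed MVS point to the tangent plane of its matched LiDAR point); the correspondence search and the iterative minimization of the ICP variant are not shown:

```python
import numpy as np

def point_to_plane_error(m, M, normals, w, lam, R, T):
    """Weighted sum of squared point-to-plane residuals: each MVS point
    m_i is transformed by the similarity (lam, R, T) and its offset from
    the matched LiDAR point M_i is projected onto that point's normal."""
    d = (lam * m @ R.T + T) - M            # transformed point minus target
    r = np.einsum('ij,ij->i', d, normals)  # signed distance along each normal
    return np.sum(w * r ** 2)

# At the true transform the error vanishes.
rng = np.random.default_rng(2)
M = rng.normal(size=(8, 3))
n = rng.normal(size=(8, 3)); n /= np.linalg.norm(n, axis=1, keepdims=True)
err = point_to_plane_error(M.copy(), M, n, np.ones(8), 1.0, np.eye(3), np.zeros(3))
assert err < 1e-12
```

An ICP variant would alternate between re-matching point pairs and minimizing this error over (λ, R, T), seeded with the coarse-registration parameters as described below.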
The transformation parameters from coarse registration are taken as the initial values of the ICP algorithm, and equation (10) is solved iteratively; this avoids false ICP convergence caused by POS data quality and by extraction and matching errors of the geometric registration primitives. To limit the influence of MVS point cloud outliers on the ICP algorithm, a relative motion threshold is used to constrain the point-pair search and ensure robust convergence. Denoting the optimal coordinate transformation between the photogrammetric and LiDAR reference frames as (λ_3D-3D, R_3D-3D, T_3D-3D), the corrected exterior orientation elements (R_v-cam, T_v-cam) of the virtual images can be expressed as equation (11):
R_v-cam = R_bundle R_3D-3D^-1,  T_v-cam = λ_3D-3D t_bundle − R_v-cam T_3D-3D    (11)
The panoramic image has a known spatial rotation relative to the virtual image from the virtual imaging process, namely the virtual imaging rotation matrix R_v, and the virtual and panoramic images share the same imaging center. The refined panoramic image exterior orientation elements (R_pano, T_pano) can therefore be expressed as equation (12):
R_pano = R_v^-1 R_v-cam,  T_pano = R_v^-1 T_v-cam    (12)
With the refined panoramic image exterior orientation elements (R_pano, T_pano) obtained as above, fine registration is complete.
To verify the effectiveness of the method, two data sets of LiDAR point clouds and sequence panoramic images, collected by two vehicle-mounted MMS systems, were used to test the proposed registration method. The data parameters, including LiDAR point density and panoramic image resolution, are given in Table 1 and FIG. 6.
TABLE 1 relevant data parameters for data set 1 and data set 2
The scenes in data sets 1 and 2 are a typical urban road and a residential area, respectively; both are common scenes in vehicle-mounted MMS data acquisition and are well representative. FIG. 7 shows the laser point cloud data and sample panoramic images contained in data sets 1 and 2.
Laser points within a certain range (50 m) of the imaging center of a registered panoramic image are back-projected onto the panoramic image according to its spherical imaging model, forming a point-cloud-overlaid panorama from which data registration accuracy can be judged qualitatively. FIG. 8 shows point-cloud overlays for one group of panoramic images from data sets 1 and 2; in FIGS. 8(a) and 8(b), the rows are, in order, the original panoramic image and the point cloud overlays obtained with direct georeferencing data, with the coarse-registration exterior orientation elements, and with the fine-registration exterior orientation elements. Because of incompleteness and positioning inaccuracy in registration primitive extraction, registration error remains after coarse registration (row 3 of FIGS. 8(a) and 8(b)), but accuracy is improved over registration with direct georeferencing data (row 2 of FIGS. 8(a) and 8(b)).
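The spherical back-projection used for the qualitative overlay can be sketched as follows. The equirectangular pixel mapping and the axis convention (x forward, z up) are assumptions; only points within the stated 50 m range of the imaging center are kept, and the points are assumed already transformed into the panorama camera frame:

```python
import numpy as np

def backproject_to_panorama(pts_cam, h, w, max_range=50.0):
    """Project LiDAR points (in the panorama camera frame) onto an
    equirectangular panorama of size h x w, returning (row, col) pixels."""
    r = np.linalg.norm(pts_cam, axis=1)
    keep = (r > 1e-6) & (r <= max_range)             # range cut around the centre
    p = pts_cam[keep] / r[keep, None]                # unit viewing directions
    lon = np.arctan2(p[:, 1], p[:, 0])               # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(p[:, 2], -1.0, 1.0))     # latitude in [-pi/2, pi/2]
    col = (lon + np.pi) / (2 * np.pi) * (w - 1)      # column from longitude
    row = (np.pi / 2 - lat) / np.pi * (h - 1)        # row from latitude
    return np.stack([row, col], axis=1)

# A horizontal point 10 m away lands at mid-height of the panorama.
px = backproject_to_panorama(np.array([[-10.0, 0.0, 0.0]]), 1000, 2000)
assert np.isclose(px[0, 0], (1000 - 1) / 2)
```

Rendering these pixel positions over the panorama gives the overlay used to inspect registration quality.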
For quantitative analysis, the accuracy of registering the panoramic images of data sets 1 and 2 to the LiDAR point clouds with the proposed algorithm is evaluated by manually selecting corresponding points on the registered panoramic images and the laser panoramic images and computing the pixel differences between them as the registration accuracy index. A laser panoramic image is a two-dimensional image formed by taking the current panoramic image position as the imaging center, cutting out the point cloud within a certain range (50 m) of that center, and projecting it through the panoramic spherical imaging model. Twenty registered panoramic images were randomly drawn from each of data sets 1 and 2, and a laser panoramic image was generated for each. Ten homologous points were manually selected on each panoramic/laser-panoramic image pair, the absolute pixel differences between homologous points were computed, and the mean error over the marked point pairs was taken as the registration accuracy measure of that frame.
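The per-frame accuracy measure described above reduces to a mean pixel distance over the manually marked point pairs, which can be sketched as:

```python
import numpy as np

def frame_registration_error(pano_pts, laser_pts):
    """Mean absolute pixel distance between homologous points picked on a
    panoramic image and its laser panoramic image (10 pairs per frame in
    the evaluation protocol of the text)."""
    pano_pts = np.asarray(pano_pts, float)
    laser_pts = np.asarray(laser_pts, float)
    return float(np.mean(np.linalg.norm(pano_pts - laser_pts, axis=1)))

# Two point pairs offset by 3 and 4 pixels -> mean error 3.5 px.
err = frame_registration_error([[0, 0], [10, 10]], [[3, 0], [14, 10]])
assert np.isclose(err, 3.5)
```

Averaging this measure over the 20 sampled frames of each data set gives the statistics reported in Table 2.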
The statistics of the manually marked points for the randomly sampled images of data sets 1 and 2 are shown in Table 2:
TABLE 2 Error statistics of manually marked point pairs on randomly sampled images of data sets 1 and 2
As Table 2 shows, starting from a large initial registration error, the error decreases steadily through the coarse-to-fine registration, with a mean error of about 1.5 pixels after registration. FIG. 8 also presents histograms of the registration errors of the 20 sampled accuracy evaluation images for data sets 1 and 2; after the two-step registration, the errors drop sharply and the standard deviation of the per-frame errors also reaches a low level, indicating that the registration result has high accuracy and good accuracy consistency.
After registration of the LiDAR point cloud data and the sequence panoramic image data is completed, the two are fused. FIG. 9 shows the colored laser point cloud generated after registration; the detail at the building edges indicates good data registration accuracy.
The registration method of vehicle-mounted laser point cloud and sequence panoramic images provided by the embodiments of the invention has at least the following technical effects:
(1) With the hierarchical urban scene object extraction method, building facades can be extracted from the LiDAR point cloud and their boundaries regularized, so that the skyline vector in the point cloud is extracted. With the virtual imaging method, the panoramic image in the spherical imaging model can be converted into frame-type images, and on the virtual images the GrabCut algorithm is combined with a chord-to-point distance accumulation (CPDA) corner detection algorithm to generate the skyline vector in the image.
(2) With the 2D-3D coarse registration method based on registration primitive graph matching, registration primitive graphs can be constructed from the extracted skyline vectors and matched under the minimum graph edit distance criterion to form conjugate registration primitive pairs, from which the coarse registration model is solved and the initial transformation between the photogrammetric coordinate system and the LiDAR reference coordinate system is obtained.
(3) With the 3D-3D fine registration method based on the iterative closest point method, sequence virtual images can be generated from the sequence panoramic images by virtual imaging, their exterior orientation elements in the photogrammetric coordinate system recovered by structure from motion (SfM), and a dense multi-view stereo (MVS) image point cloud generated from the sequence images by multi-view stereo matching. Taking the spatial coordinate transformation from coarse registration as the initial value, an ICP variant finally achieves optimal registration between the panoramic-image MVS dense point cloud and the LiDAR point cloud, yielding accurate registration parameters for the sequence images.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention has been described in detail with reference to examples, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention, which should be covered by the claims of the present invention.

Claims (9)

1. A registration method of vehicle-mounted laser point cloud and sequence panoramic image is characterized by comprising the following steps:
step 1, vehicle-mounted laser point cloud data and sequence panoramic image data are obtained; obtaining a first skyline vector based on the vehicle-mounted laser point cloud data, and obtaining a second skyline vector based on the sequence panoramic image data;
step 2, respectively constructing a laser point cloud registration primitive image and an image registration primitive image according to the first skyline vector and the second skyline vector; forming a conjugate registration element pair according to the laser point cloud registration element image and the image registration element image; calculating a pre-established 2D-3D coarse registration model by using the conjugate registration element pair to obtain space coordinate initial conversion parameter information between a photogrammetry coordinate system and a LiDAR reference coordinate system, and finishing 2D-3D coarse registration;
step 3, generating an MVS image dense point cloud based on the sequence panoramic image data; and taking the space coordinate initial conversion parameter information as an initial value, and performing optimal registration between the MVS image dense point cloud and the vehicle-mounted laser point cloud data to complete 3D-3D accurate registration.
2. The method for registering vehicle-mounted laser point cloud and sequence panoramic image according to claim 1, wherein in step 1, a hierarchical urban scene object extraction method is used to extract building facades from the vehicle-mounted laser point cloud data to obtain building facade point cloud blocks, and an RMB algorithm is used to extract the outlines of the building facade point cloud blocks and regularize them to generate the first skyline vector.
3. The method for registering the vehicle-mounted laser point cloud and the sequence panoramic image according to claim 1, wherein in step 1, based on the sequence panoramic image data, the panoramic images are converted into frame-type virtual images by a virtual imaging method, initial skyline position information is obtained by histogram threshold segmentation, optimized skyline position information is obtained by the GrabCut segmentation method, and the second skyline vector is generated by the CPDA corner detection algorithm.
4. The method for registering vehicle-mounted laser point cloud and sequence panoramic image according to claim 1, wherein in step 2, registration primitive matching is achieved with a minimum graph edit distance (GED) graph matching algorithm to obtain the conjugate registration primitive pairs.
5. The method for registering the vehicle-mounted laser point cloud and the sequence panoramic image according to claim 4, wherein matching the registration primitives with the minimum graph edit distance (GED) graph matching algorithm comprises: building a KD tree index over the inner-angle values and three-dimensional vertex coordinates of all kernel triangles; traversing the kernel triangle matching set to compute the GED of the graph transformation defined by each kernel triangle match and ranking the results, the transformation T with the minimum GED being taken as the best match;
the GED of the graph transformation defined by a kernel triangle match being computed as follows: the GED measures the global graph-edge similarity between the image registration primitive graph G_image and the laser point cloud registration primitive graph G_las under the graph transformation defined by the local kernel triangle matching pair, and the GED between G_image and G_las is defined as follows:
GED(G_image, G_las) = min_T [ Σ_{k=1..K} cost(op_k, T) / Length(G_las) ]
where G_las(E_las, V_las) denotes the laser point cloud registration primitive graph; G_image(E_image, V_image) denotes the image registration primitive graph; T is the graph transformation defined by the local kernel triangle matching pair; cost(op, T) is the graph edit cost function for the K edit operations required to bring G_image into complete agreement with G_las after graph editing, set as the rotated graph-edge difference; and Length is the graph-edge length accumulation function.
6. The method as claimed in claim 1, wherein in step 2, in the pre-established 2D-3D coarse registration model, the image point of a skyline corner registration primitive on the virtual image is [x_v, y_v, -f]^T, the corresponding panoramic image pixel coordinate is [x_p, y_p, z_p]^T, and the coordinate of the corresponding building corner point in the laser point cloud data is (x_w, y_w, z_w); the collinearity relation between the two corner points is then expressed as:
[x_v, y_v, -f]^T ∝ R_v [x_p, y_p, z_p]^T,
[x_p, y_p, z_p]^T = [sin(πr/h)cos(2πc/w), sin(πr/h)sin(2πc/w), cos(πr/h)]^T,
λ [x_p, y_p, z_p]^T = R_f ([x_w, y_w, z_w]^T − T_f)
where R_v is the camera rotation matrix of the virtual imaging process; R_f is the rotation matrix from the rotation-translation decoupled solution; T_f is the camera translation parameter and λ is the scale parameter; h and w are the height and width of the panoramic image, respectively; and r and c are the row and column numbers of the current pixel on the panoramic image, respectively.
7. The registration method of the vehicle-mounted laser point cloud and the sequence panoramic image according to claim 1, wherein step 3 is implemented as follows: sequence virtual images are generated from the sequence panoramic image data by virtual imaging, and their exterior orientation elements in the photogrammetric coordinate system are recovered with a structure-from-motion method; a dense multi-view stereo (MVS) image point cloud is generated from the sequence panoramic image data by multi-view stereo matching; and with the spatial coordinate initial transformation parameter information as the initial value of an iterative closest point (ICP) algorithm, the distance between the MVS image dense point cloud and the vehicle-mounted laser point cloud is minimized, achieving the 3D-3D fine registration.
8. The method for registering the vehicle-mounted laser point cloud and the sequence panoramic image as claimed in claim 7, wherein in step 3, virtual camera exterior parameter calibration is achieved with an incremental bundle adjustment SfM algorithm, and the MVS image dense point cloud is generated with the Daisy algorithm.
9. The method for registering the vehicle-mounted laser point cloud and the sequence panoramic image according to claim 7, wherein in step 3, the registration geometric model for the 3D-3D fine registration is expressed as:
M_las = λ R M_MVS + T
where M_las denotes the point in the laser point cloud reference frame C_w corresponding to an image point on the virtual image, M_MVS denotes the point in the MVS photogrammetric coordinate system C_MVS corresponding to that pixel, λ denotes the scale parameter, R denotes a 3 × 3 rotation matrix, and T denotes the translation parameter;
r is calculated by the following formula:
R = R_decouple^-1 R_bundle
where R_bundle and R_decouple are the image exterior orientation rotation matrices computed by SfM and by coarse registration, respectively; the 3D-3D fine registration comprises robust estimation of initial values for the fine registration model and iterative closest point model parameter optimization.
CN202011367207.4A 2020-11-27 2020-11-27 Registration method of vehicle-mounted laser point cloud and sequence panoramic image Pending CN112465732A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011367207.4A CN112465732A (en) 2020-11-27 2020-11-27 Registration method of vehicle-mounted laser point cloud and sequence panoramic image

Publications (1)

Publication Number Publication Date
CN112465732A true CN112465732A (en) 2021-03-09

Family

ID=74809829

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011367207.4A Pending CN112465732A (en) 2020-11-27 2020-11-27 Registration method of vehicle-mounted laser point cloud and sequence panoramic image

Country Status (1)

Country Link
CN (1) CN112465732A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112802073A (en) * 2021-04-08 2021-05-14 之江实验室 Fusion registration method based on image data and point cloud data
CN113689331A (en) * 2021-07-20 2021-11-23 中国铁路设计集团有限公司 Panoramic image splicing method under complex background
CN114004951A (en) * 2021-10-29 2022-02-01 武汉大学 Road sign extraction method and system combining point cloud intensity and geometric structure
CN114792327A (en) * 2022-06-23 2022-07-26 中国科学院空天信息创新研究院 Image processing method and system
CN115586143A (en) * 2022-09-14 2023-01-10 中国科学院上海技术物理研究所 Spectrum drift calibration and correction method based on random sampling consistency
CN116222592A (en) * 2023-03-03 2023-06-06 北京数字政通科技股份有限公司 High-precision map generation method and system based on multi-source data
CN116597168A (en) * 2023-07-18 2023-08-15 齐鲁空天信息研究院 Matching method, device, equipment and medium of vehicle-mounted laser point cloud and panoramic image
CN117437289A (en) * 2023-12-20 2024-01-23 绘见科技(深圳)有限公司 Space calculation method based on multi-source sensor and related equipment
CN114004951B (en) * 2021-10-29 2024-07-12 武汉大学 Road sign extraction method and system combining point cloud intensity and geometric structure

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102169579A (en) * 2011-03-31 2011-08-31 西北工业大学 Rapid and accurate registration method of dense point cloud model
CN105931234A (en) * 2016-04-19 2016-09-07 东北林业大学 Ground three-dimensional laser scanning point cloud and image fusion and registration method
CN110458871A (en) * 2019-08-14 2019-11-15 上海霁目信息科技有限公司 The method for registering of model and panorama sketch, system, equipment and medium and map
CN111457930A (en) * 2020-04-02 2020-07-28 Wuhan Zhonghaiting Data Technology Co., Ltd. High-precision mapping positioning method combining vehicle-mounted Lidar and unmanned aerial vehicle

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Chi et al.: "Automatic registration method of vehicle-mounted MMS laser point clouds and sequence panoramic images", Acta Geodaetica et Cartographica Sinica (《测绘学报》) *


Similar Documents

Publication Publication Date Title
CN112465732A (en) Registration method of vehicle-mounted laser point cloud and sequence panoramic image
KR100912715B1 (en) Method and apparatus of digital photogrammetry by integrated modeling for different types of sensors
CN111275750B (en) Indoor space panoramic image generation method based on multi-sensor fusion
Zhao et al. Reconstructing a textured CAD model of an urban environment using vehicle-borne laser range scanners and line cameras
CN111709981A (en) Registration method of laser point cloud and analog image with characteristic line fusion
Haala et al. Extracting 3D urban models from oblique aerial images
Moussa Integration of digital photogrammetry and terrestrial laser scanning for cultural heritage data recording
CN112767461A (en) Automatic registration method for laser point cloud and sequence panoramic image
CN112465849B (en) Registration method for laser point cloud and sequence image of unmanned aerial vehicle
CN112862966B (en) Method, device, equipment and storage medium for constructing surface three-dimensional model
Cosido et al. Hybridization of convergent photogrammetry, computer vision, and artificial intelligence for digital documentation of cultural heritage-a case study: the magdalena palace
CN110986888A (en) Aerial photography integrated method
Bybee et al. Method for 3-D scene reconstruction using fused LiDAR and imagery from a texel camera
Hirschmüller et al. Stereo vision based reconstruction of huge urban areas from an airborne pushbroom camera (HRSC)
CN107941241B (en) Resolution board for aerial photogrammetry quality evaluation and use method thereof
CN112767459A (en) Unmanned aerial vehicle laser point cloud and sequence image registration method based on 2D-3D conversion
Zhao et al. Alignment of continuous video onto 3D point clouds
Gao et al. Automatic geo-referencing mobile laser scanning data to UAV images
Yu et al. Registration and Fusion of UAV LiDAR System Sequence Images and Laser Point Clouds.
CN115830083A (en) Point cloud data registration method, device and system
Zhang et al. Integrating smartphone images and airborne lidar data for complete urban building modelling
Al-Durgham The registration and segmentation of heterogeneous Laser scanning data
Shin et al. Algorithms for multi‐sensor and multi‐primitive photogrammetric triangulation
Gneeniss Integration of LiDAR and photogrammetric data for enhanced aerial triangulation and camera calibration
CN113240755A (en) City scene composition method and system based on street view image and vehicle-mounted laser fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210309