CN112465849B - Registration method for laser point cloud and sequence image of unmanned aerial vehicle - Google Patents

Registration method for laser point cloud and sequence image of unmanned aerial vehicle

Info

Publication number
CN112465849B
CN112465849B (application CN202011367372.XA)
Authority
CN
China
Prior art keywords
registration
image
point cloud
building
primitive
Prior art date
Legal status
Active
Application number
CN202011367372.XA
Other languages
Chinese (zh)
Other versions
CN112465849A (en)
Inventor
陈驰 (Chen Chi)
杨必胜 (Yang Bisheng)
张云菲 (Zhang Yunfei)
Current Assignee
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202011367372.XA priority Critical patent/CN112465849B/en
Publication of CN112465849A publication Critical patent/CN112465849A/en
Application granted granted Critical
Publication of CN112465849B publication Critical patent/CN112465849B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/13: Edge detection
    • G06T 7/155: Segmentation; edge detection involving morphological operators
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33: Image registration using feature-based methods
    • G06T 7/344: Image registration using feature-based methods involving models
    • G06T 7/38: Registration of image sequences
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/10028: Range image; depth image; 3D point clouds

Abstract

The invention belongs to the technical field of remote sensing and discloses a method for registering the laser point cloud and sequence images of an unmanned aerial vehicle, which improves the usability of UAV MMS imaging data.

Description

Registration method for laser point cloud and sequence image of unmanned aerial vehicle
Technical Field
The invention relates to the technical field of remote sensing, in particular to a registration method of laser point cloud and sequence images of an unmanned aerial vehicle.
Background
The unmanned aerial vehicle (UAV) mobile measurement system (MMS) covers medium- and low-altitude remote sensing and is an effective supplement to traditional photogrammetry and remote sensing. It provides a variety of earth observation data, including high-resolution LiDAR point clouds and aerial images, and is widely applied to high-precision map construction, forest biomass estimation, power-line inspection, and similar tasks. Owing to load and cost limitations, the UAV platform carries a light, small POS system, or none at all, so direct geo-referencing data is of limited accuracy or entirely absent. UAV mobile measurement mainly acquires data without ground control points, and ground control fields are rarely available to control the accuracy of the LiDAR point cloud and image products. Meanwhile, inherent registration errors exist among the multi-sensor data of the mobile measurement system, so the LiDAR point cloud and the sequence images acquired by the UAV MMS cannot be directly registered and fused; each can only be used as a single source, which further reduces the usability of the UAV MMS imaging data.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a registration method of laser point cloud and sequence images of an unmanned aerial vehicle.
The invention provides a registration method of laser point cloud and sequence image of an unmanned aerial vehicle, which comprises the following steps:
step 1, acquiring LiDAR laser point clouds and sequence images acquired by an unmanned aerial vehicle, and generating MVS image dense point clouds based on the sequence images;
step 2, extracting the roof of the building in the LiDAR laser point cloud to obtain first contour extraction information; extracting the roof of the building from the MVS image dense point cloud to obtain second contour extraction information;
when building roof extraction is carried out in the MVS image dense point cloud and POS data is available, the coarse registration model is selected as the 2D-3D collinearity equation registration model, and the first contour extraction information guides the extraction of building registration primitives on the MVS image dense point cloud: the position and attitude data provided by the POS are converted into camera exterior orientation element values, the extracted building outer frames are back-projected onto all images with the collinearity equation, images containing a complete back-projected building frame are marked as key frames, and the key frames participate in solving the 2D-3D coarse registration model to obtain the second contour extraction information;
when building roof extraction is carried out in the MVS image dense point cloud and no POS data is available, building point cloud segmentation, outline extraction and regularization are performed directly in the MVS image dense point cloud to obtain the second contour extraction information;
step 3, constructing a first registration primitive image based on the first contour extraction information, and obtaining a first registration primitive image set; constructing a second registration primitive image based on the second contour extraction information, and obtaining a second registration primitive image set; matching the first registration primitive image set and the second registration primitive image set to obtain a conjugate registration primitive pair;
step 4, resolving a coarse registration model according to the conjugate registration element pair to obtain a space coordinate conversion relation between an unmanned aerial vehicle photogrammetry coordinate system and a LiDAR reference coordinate system;
and 5, realizing the registration of the MVS image dense point cloud and the LiDAR laser point cloud based on the space coordinate conversion relation.
Preferably, step 1 is implemented as follows: after the sequence images are obtained, the exterior orientation elements of the sequence images collected by the calibrated camera are recovered in the unmanned aerial vehicle photogrammetric coordinate system by a structure-from-motion (SfM) method, and the MVS image dense point cloud is generated from the sequence images by multi-view stereo matching.
Preferably, in step 2, a marked point process method is adopted to extract the building roofs in the LiDAR laser point cloud, obtaining a primary extraction contour; the primary extraction contour is regularized with the iterative minimum bounding rectangle (RMBR) algorithm to obtain the first contour extraction information.
Preferably, in step 3, the first contour extraction information and the second contour extraction information both include extracted outer polygons of buildings, centers of the extracted outer polygons of buildings are used as graph nodes, building roofs are used as building registration primitives, and registration primitive graphs are constructed for the extracted building registration primitives;
when matching the registration primitive graphs, the local similarity between the first and second registration primitive graphs is first detected through kernel triangle matching, and then the global similarity between them is measured with the graph edit distance (GED); the local and global similarity results are combined to achieve optimal registration primitive graph matching and obtain the conjugate registration primitive pairs.
Preferably, in step 4, when the coarse registration model is a 2D-3D collinearity equation registration model, the calculating the coarse registration model includes:
using the regularized building outer polygon corner points to solve the 2D-3D collinearity equation registration model; recording the images on which a complete building frame back-projects as key frames. Let a building outer polygon corner point in a key frame be m = (u, v, f)^T, and let the coordinate of the corresponding building corner point in the LiDAR laser point cloud data be M_las = (X, Y, Z)^T. The collinear relationship between the two corner points is expressed as:
s_pnp · m = A [R_pnp | t_pnp] M_las
where A is the camera intrinsic parameter matrix, s_pnp is a scale parameter, and R_pnp, t_pnp constitute the camera extrinsic parameter matrix.
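The collinearity relation can be illustrated numerically with a forward projection; the intrinsic values below are invented for the example, and the patent's actual solver (EPnP) recovers R_pnp, t_pnp from such correspondences rather than applying known ones:

```python
import numpy as np

def project_corner(A, R, t, M_las):
    """Project a 3D building corner M_las into the image via the
    collinearity relation s * m = A [R|t] M_las; returns (u, v)."""
    m_h = A @ (R @ M_las + t)   # homogeneous image point; the scale s is m_h[2]
    return m_h[:2] / m_h[2]

# Toy check (invented camera): focal length 1000 px, principal point
# (500, 500), camera at the origin looking down +Z. A corner 10 m ahead
# on the optical axis projects to the principal point; one 1 m to the
# side lands 100 px off.
A = np.array([[1000.0, 0.0, 500.0],
              [0.0, 1000.0, 500.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
uv0 = project_corner(A, R, t, np.array([0.0, 0.0, 10.0]))
uv1 = project_corner(A, R, t, np.array([1.0, 0.0, 10.0]))
```

Residuals of exactly this projection against the measured key-frame corners are what a PnP solver minimises.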
Preferably, in step 4, when the coarse registration model is a 3D-3D model, the calculating the coarse registration model includes:
extracting building registration primitives from the MVS image point cloud and matching the registration primitive graphs against the registration primitives in the LiDAR laser point cloud corresponds to the 3D-3D spatial similarity transformation registration model; the three-dimensional similarity transformation is solved using the 3D conjugate pairs of regularized building outer polygon corner points as control points. Let M_mvs be a regularized building outer polygon corner point in the unmanned aerial vehicle photogrammetric coordinate system C_mvs, and let M_las be the coordinate of its corresponding building corner point in the LiDAR reference frame C_w. The conversion relationship between the two corner points is defined as:
M_las = λ R M_mvs + T
where λ, R and T are the scale, rotation and translation parameters of the coordinate transformation, respectively.
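The similarity transformation can be estimated in closed form from three or more conjugate corner pairs; the SVD-based (Umeyama/Horn) sketch below is one standard way to do it and is not claimed to be the patent's exact solver:

```python
import numpy as np

def similarity_transform(M_mvs, M_las):
    """Closed-form least-squares estimate of (lambda, R, T) with
    M_las ~= lambda * R @ M_mvs + T, from N >= 3 conjugate corner
    pairs given as rows of two (N, 3) arrays."""
    mu_a, mu_b = M_mvs.mean(axis=0), M_las.mean(axis=0)
    Ac, Bc = M_mvs - mu_a, M_las - mu_b
    H = Ac.T @ Bc                                # 3x3 cross-covariance (sum form)
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    lam = (S * np.diag(D)).sum() / (Ac ** 2).sum()
    T = mu_b - lam * (R @ mu_a)
    return lam, R, T
```

With exact (noise-free) conjugate pairs the recovery is exact; with noisy corners it is the least-squares optimum.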
Preferably, in step 5, the spatial coordinate conversion relationship obtained from the coarse registration is used as the initial value, and a variant ICP algorithm is used to achieve optimal registration between the MVS image dense point cloud and the LiDAR laser point cloud, yielding the accurate registration parameters of the sequence images.
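The patent does not fix a particular ICP variant; a minimal point-to-point sketch (brute-force nearest neighbours, toy data in the usage below) shows the refinement loop that any variant builds on:

```python
import numpy as np

def icp_point_to_point(src, dst, iters=10):
    """Minimal point-to-point ICP: alternate brute-force nearest-neighbour
    pairing with a closed-form rigid update, starting from a coarse
    alignment. Variant ICPs swap the pairing rule or error metric; the
    loop is the same. Returns accumulated (R, t) mapping src onto dst."""
    cur = src.copy()
    R_tot, t_tot = np.eye(3), np.zeros(3)
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]          # nearest-neighbour pairs
        mu_s, mu_m = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                        # rigid update (Horn/SVD)
        t = mu_m - R @ mu_s
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

A real implementation would replace the O(N²) distance matrix with a k-d tree and add outlier rejection for the noisy, unevenly dense MVS cloud.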
Preferably, the registration method of the laser point cloud of the unmanned aerial vehicle and the sequence image further includes: and 6, constructing a data fusion result based on the registered LiDAR laser point cloud and the sequence image data set.
Preferably, the data fusion results include colored laser point cloud generation and true orthophoto generation.
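A colored point cloud can be sketched by projecting each registered point into an image with the collinearity model and sampling the pixel colour. This is an illustrative NumPy sketch (toy camera and image in the test), not the patent's implementation, and it omits the occlusion (Double-Mapping) handling discussed with the drawings:

```python
import numpy as np

def colorize_points(points, image, A, R, t):
    """Give each registered LiDAR point the RGB of the pixel it projects
    to under the collinearity model. Points projecting outside the frame
    (or behind the camera) keep the sentinel colour (0, 0, 0)."""
    cam = points @ R.T + t                 # camera-frame coordinates
    uvw = cam @ A.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    px = np.round(uv).astype(int)
    h, w = image.shape[:2]
    ok = ((px[:, 0] >= 0) & (px[:, 0] < w) &
          (px[:, 1] >= 0) & (px[:, 1] < h) & (cam[:, 2] > 0))
    colors = np.zeros((len(points), 3), dtype=image.dtype)
    colors[ok] = image[px[ok, 1], px[ok, 0]]
    return colors
```

Without a visibility test, points hidden behind a roof would still receive roof colours; that is exactly the Double-Mapping artifact the HPR step removes.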
One or more technical schemes provided by the invention at least have the following technical effects or advantages:
in the invention, LiDAR laser point clouds and sequence images collected by an unmanned aerial vehicle are obtained, and MVS image dense point clouds are generated based on the sequence images; building roof extraction is carried out in LiDAR laser point cloud to obtain first contour extraction information; extracting the roof of the building from the MVS image dense point cloud to obtain second contour extraction information; constructing a first registration primitive image based on the first contour extraction information, and obtaining a first registration primitive image set; constructing a second registration primitive image based on the second contour extraction information, and obtaining a second registration primitive image set; matching the first registration primitive image set and the second registration primitive image set to obtain a conjugate registration primitive pair; calculating a coarse registration model according to the conjugate registration element pair to obtain a space coordinate conversion relation between an unmanned aerial vehicle photogrammetry coordinate system and a LiDAR reference coordinate system; based on the space coordinate conversion relationship, the registration of the MVS image dense point cloud and the LiDAR laser point cloud is realized. According to the invention, the unmanned aerial vehicle MMS imaging data is taken as a research object, a two-step registration model application strategy is formulated according to the data characteristics of the unmanned aerial vehicle MMS imaging data, the registration element extraction and matching defined in the two-step registration model of the distance imaging and visible light imaging data and the coarse-to-fine registration algorithm are specified, the registration and fusion of the unmanned aerial vehicle MMS imaging data are completed, and the usability of the unmanned aerial vehicle MMS imaging data is improved.
Drawings
Fig. 1 is a flowchart of a registration method of a laser point cloud and a sequence image of an unmanned aerial vehicle according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an MVS image dense point cloud generated by unmanned aerial vehicle MMS sequence image data according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of LiDAR laser point cloud registration primitive extraction according to an embodiment of the present invention; FIG. 3 (a) is the LiDAR laser point cloud, FIG. 3 (b) the extracted building point cloud, and FIG. 3 (c) the building outline obtained after regularization;
FIG. 4 is a schematic diagram of adaptive change of a change step according to an embodiment of the present invention;
FIG. 5 is a tensor gradient direction statistical plot of an embodiment of the present invention; FIG. 5 (a) is the tensor gradient direction histogram of a rectangular house, and FIG. 5 (b) that of an L-shaped house;
FIG. 6 is a diagram of a building outline detection process within an area of an embodiment of the present invention; fig. 6 (a) is an R3 region, fig. 6 (b) is saliency segmentation, fig. 6 (c) is contour extraction, and fig. 6 (d) is regularization processing;
FIG. 7 is a diagram illustrating registration primitive extraction in MVS image dense point cloud data according to an embodiment of the present invention; fig. 7 (a) is an MVS image dense point cloud, fig. 7 (b) is an MVS building point cloud, and fig. 7 (c) is a regularized building outline;
FIG. 8 is a schematic diagram of registration primitive map construction according to an embodiment of the present invention; fig. 8 (a) is a schematic diagram of generation of a sequential image (MVS image dense point cloud) registration primitive map, and fig. 8 (b) is a schematic diagram of generation of a LiDAR laser point cloud registration primitive map;
FIG. 9 is a schematic diagram of non-optimal matching with high local similarity and low global similarity according to an embodiment of the present invention;
FIG. 10 is a Double-Mapping diagram generated without occlusion detection according to an embodiment of the present invention; FIG. 10 (a) shows the cause of Double-Mapping, FIG. 10 (b) Double-Mapping in DSM ortho-correction, and FIG. 10 (c) Double-Mapping in unmanned aerial vehicle point cloud coloring;
FIG. 11 is a point cloud colorization map after occlusion detection in accordance with an embodiment of the present invention; fig. 11 (a) is a color-imparted image, (b) in fig. 11 is an HPR visible region, (c) in fig. 11 is a visible region after the closing operation, and (d) in fig. 11 is point cloud coloring.
Detailed Description
In order to better understand the technical solution, the technical solution will be described in detail with reference to the drawings and the specific embodiments.
The embodiment provides a registration method of laser point cloud and sequence image of an unmanned aerial vehicle, as shown in fig. 1, comprising the following steps:
step 1, acquiring LiDAR laser point clouds and sequence images acquired by an unmanned aerial vehicle, and generating MVS image dense point clouds based on the sequence images, namely MVS dense image point cloud generation.
SfM (Structure from Motion) obtains the three-dimensional structure of an object and the camera parameters from a sequence of motion images, and comprises sub-processes such as tie-point matching, relative orientation solution, and bundle adjustment. SfM is highly consistent with the procedures of tie-point measurement, relative orientation and aerial triangulation in traditional photogrammetry; it is the computer vision name for the camera parameter and object space structure recovery process of photogrammetry. The method realizes camera extrinsic calibration and sparse object space structure recovery with an incremental bundle adjustment SfM algorithm. In the algorithm, tie-point matching is realized through SIFT, a RANSAC five-point solver is adopted for relative orientation, and the incremental bundle adjustment is realized with a sparse bundle adjustment software package. On the basis of the camera parameter calibration, an MVS image point cloud is generated using the Daisy algorithm, reconstructing a dense object space point cloud. Fig. 2 shows top and side screenshots of a set of MVS image dense point clouds generated from UAV MMS sequence image data. Apart from noise and uneven point density, the MVS image dense point cloud has a data essence and expression form similar to the LiDAR laser point cloud: both are discrete samplings, i.e. discrete three-dimensional point clouds, of the three-dimensional surfaces of object space objects.
Step 2, extracting the roof of the building in the LiDAR laser point cloud to obtain first contour extraction information; and extracting the roof of the building from the MVS image dense point cloud to obtain second contour extraction information, namely extracting the registration primitive of the building.
UAV MMS imaging data is divided, according to the acquisition mode, into LiDAR laser point clouds and sequence images collected on the same platform or on different platforms. LiDAR laser point clouds and sequence images acquired on the same UAV MMS platform share a direct geospatial reference usable as initial registration values, which accelerates the registration process of the two-step registration model. UAV sequence images acquired on the same platform as the LiDAR laser point cloud are generally collected by small, light systems (such as small fixed-wing UAVs and eight-rotor UAVs); such data generally carry only navigation-grade POS, or no direct geo-referencing data at all. Under these conditions, only the man-made structures contained in the data are used as registration primitives for the two-step model registration. The extraction of the registration primitives comprises two parts:
extracting registration elements from LiDAR point clouds.
Referring to fig. 3, building roofs have good distinguishability in both airborne LiDAR laser point clouds and sequence images, and contain rich structural information (points, lines, planes). A marked point process method is adopted to extract the building point cloud from the LiDAR laser point cloud data, see (a) and (b) in fig. 3. Owing to occlusion of the laser point cloud and similar effects, the scan is incomplete and the extracted building point cloud patches are incomplete. To form meaningful building outer boundary polygons, building outer polygon extraction and regularization are performed with the RMBR (iterative Minimum Bounding Rectangle) algorithm. RMBR uses rectangles as regularization primitives and regularizes the original contour with a combination of rectangles. Fig. 3 (c) shows the regularized building outer frame: after contour extraction and regularization, the original building point cloud is transformed into a geometrically meaningful building outer polygon.
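The rectangle primitive at the heart of RMBR can be illustrated with a standalone minimum-area bounding rectangle computation. This is a generic sketch (rotating-calipers style search over convex-hull edge orientations), not the patent's RMBR implementation:

```python
import numpy as np

def convex_hull(pts):
    """Andrew monotone-chain convex hull, counter-clockwise."""
    pts = sorted(map(tuple, pts))
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def half(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
        return out
    return np.array(half(pts)[:-1] + half(pts[::-1])[:-1], dtype=float)

def min_bounding_rect(points):
    """Minimum-area bounding rectangle of a 2D footprint: the optimum is
    aligned with some convex-hull edge, so rotate the points into each
    edge direction and keep the smallest axis-aligned box."""
    pts = np.asarray(points, dtype=float)
    hull = convex_hull(pts)
    best = None
    for i in range(len(hull)):
        dx, dy = hull[(i + 1) % len(hull)] - hull[i]
        ang = np.arctan2(dy, dx)
        c, s = np.cos(-ang), np.sin(-ang)
        rot = pts @ np.array([[c, -s], [s, c]]).T
        w, h = np.ptp(rot[:, 0]), np.ptp(rot[:, 1])
        if best is None or w * h < best[0]:
            best = (w * h, w, h)
    return best   # (area, width, height)
```

RMBR applies such rectangles iteratively to approximate non-rectangular footprints with a set of rectangles; the single-rectangle step above is the building block.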
And extraction of registration elements in the sequence image.
The selection of the registration model (2D-3D or 3D-3D model) is different according to the existence of POS data of the sequence images and the rough registration process. The extraction of the registration elements in the sequence image can be carried out by adopting different methods, namely an image building element extraction method guided by laser building extraction prior knowledge under the condition of POS data and an MVS image point cloud building registration element extraction method generally used for POS data.
1) And (3) extracting image building elements under the guidance of the prior knowledge of laser building extraction.
When POS data is available, the coarse registration model is selected as the 2D-3D collinearity equation registration model, and the building extraction result from the LiDAR laser point cloud data (i.e. the first contour extraction information) guides the extraction of building registration primitives on the sequence images. The position and attitude data provided by the POS are converted into camera exterior orientation element values, and the extracted building outer frames are back-projected onto all images with the collinearity equation. Images containing a complete back-projected building frame are recorded as key frames and participate in solving the 2D-3D coarse registration model, yielding the second contour extraction information. The back-projected building frames provide good prior knowledge for local-saliency-based extraction of image building primitives. However, owing to POS data quality, system calibration and sensor synchronization errors, there are severe position and orientation offsets between the back-projected building frames and the true image building positions, which require further correction.
The present invention uses tensor gradient statistics within a buffer around the back-projection region R1 to determine the area in which a building lies in the image. A buffer is created around R1; its width is set from the initial back-projection error (empirically 50-200 pixels), and the tensor gradient statistics are computed within the buffered region, as shown in fig. 4.
For a multi-channel image, the structure tensor is defined as:
G = g * (∇f ∇f^T) = [ g*(f_x·f_x)  g*(f_x·f_y) ; g*(f_x·f_y)  g*(f_y·f_y) ]
where * denotes Gaussian kernel convolution and f_x, f_y are the gradients in the horizontal and vertical directions (the products are summed over the image channels).
The structure tensor describes the local differential structure of the image and is mostly studied for image corner and edge detection. For a color unmanned aerial vehicle image f = (R, G, B)^T, after the spatial derivation operation, the two eigenvalues of the tensor G (with entries G_xx, G_xy, G_yy) and its principal direction are calculated according to the following formulas:
λ1 = ( (G_xx + G_yy) + sqrt( (G_xx - G_yy)^2 + 4 G_xy^2 ) ) / 2
λ2 = ( (G_xx + G_yy) - sqrt( (G_xx - G_yy)^2 + 4 G_xy^2 ) ) / 2
θ = arctan( 2 G_xy / (G_xx - G_yy) ) / 2
λ1 denotes the local derivative energy in the principal direction, and λ2 the local derivative energy perpendicular to the principal direction. The principal direction of the tensor G is taken as the tensor gradient direction θ, and the corresponding λ1 as the tensor gradient magnitude after non-maximum suppression, as shown in fig. 5.
By the above method, the tensor gradient magnitude and direction of each pixel in the R1 buffer can be calculated. The tensor gradient directions in the region are accumulated into a statistical histogram over the buffer. Fig. 5 (a) and fig. 5 (b) are the tensor gradient direction statistics of a rectangular house and an L-shaped house, respectively. As fig. 5 shows, the long edge of a rectangular or L-shaped house forms a peak in a certain gradient direction, and the shorter edge a second, lower peak.
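The per-pixel tensor gradient behind these statistics can be sketched as follows; the Gaussian smoothing of the tensor entries and the multi-channel summation are simplified away here, so this is an assumption-laden illustration rather than the patent's exact procedure:

```python
import numpy as np

def tensor_gradient(img):
    """Per-pixel structure-tensor gradient magnitude (lambda_1) and
    direction (theta) for a single-channel image. For a colour image
    the per-channel products f_x*f_x, f_x*f_y, f_y*f_y would be summed
    first; Gaussian smoothing of the tensor entries is omitted."""
    fy, fx = np.gradient(img)           # rows: vertical, cols: horizontal
    gxx, gyy, gxy = fx * fx, fy * fy, fx * fy
    root = np.sqrt((gxx - gyy) ** 2 + 4.0 * gxy ** 2)
    lam1 = 0.5 * ((gxx + gyy) + root)   # derivative energy, principal dir.
    theta = 0.5 * np.arctan2(2.0 * gxy, gxx - gyy)
    return lam1, theta

# A horizontal intensity ramp: the gradient points along x everywhere,
# so theta = 0 and lam1 equals the squared slope (here 1).
ramp = np.tile(np.arange(8.0), (8, 1))
lam1, theta = tensor_gradient(ramp)
```

The direction histogram of the buffer is then simply `np.histogram(theta[lam1 > thresh], bins=...)`, whose peaks reveal the long- and short-edge directions of the building.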
All LiDAR building outer frames are back-projected on the image, tensor gradient statistical graphs in a back-projection buffer area are analyzed, areas without the statistical characteristics of rectangular and L-shaped tensor gradient directions are removed, and a back-projection area R1 with the characteristics of single-peak and double-peak tensor gradient directions in the areas is reserved. Further refinement of the area coverage of the building is achieved by:
1. rotate the back-projection region R1 to the direction perpendicular to the peak of the tensor gradient direction statistics, i.e. the long-side direction of the building, obtaining region R2;
2. and constructing a buffer area by taking the R2 area as a core, performing buffer sliding in the main direction of R2 and the vertical main direction, counting the sum of tensor gradient sizes in the area formed by R2 and the buffer area, stopping sliding at a local maximum response position, and taking the area as an optimal area for extracting the image building elements.
After the area where the image building lies is determined, the building presents high global saliency within the local area. The building image is segmented in the R3 region using a segmentation method based on global contrast saliency detection (RCC), see (a) and (b) in fig. 6. Contour extraction is applied to the segmentation result (see (c) in fig. 6), followed by regularization with the RMBR algorithm (see (d) in fig. 6), yielding the regularized image building top surface extraction result; the process is shown in fig. 6.
2) And extracting registration elements of the MVS image point cloud building.
When the sequence images have no direct POS orientation data, or the POS data accuracy is poor, the image building extraction method guided by the LiDAR laser point cloud building extraction result is not applicable. In this case, the same method used to extract building registration primitives from the LiDAR laser point cloud is applied: building point cloud segmentation, outline extraction and regularization are carried out directly on the MVS image dense point cloud data generated from the sequence images. Fig. 7 illustrates building point cloud extraction and regularization on a set of MVS image dense point cloud data, comprising (a), (b) and (c) in fig. 7. As (c) in fig. 7 shows, although the MVS image dense point cloud suffers from data degradation such as noise and uneven density, the edges of the regularized building outline still summarize the building boundary information.
Step 3, constructing a first registration primitive image based on the first contour extraction information, and obtaining a first registration primitive image set; constructing a second registration primitive image based on the second contour extraction information, and obtaining a second registration primitive image set; and matching the first registration primitive image set and the second registration primitive image set to obtain a conjugate registration primitive pair, namely construction and matching of registration primitive images.
With POS data support, the primitive matching process is completed within the image building primitive extraction guided by the laser building extraction prior. Without POS data support, the registration primitive graph matching method defined in the two-step registration model is used to match the extracted registration primitives and generate the conjugate registration primitive pairs for solving the coarse registration model.
According to the composition rule of the registration primitive graph, the centers of the extracted building outer polygons are used as graph nodes, building roofs are used as building registration primitives, and a registration primitive graph is constructed for each set of extracted building registration primitives. Fig. 8 (a) and fig. 8 (b) illustrate the registration primitive graph construction for the sequence images (MVS image dense point cloud) and the LiDAR laser point cloud, respectively, where the triangle is the kernel triangle of the current graph. In both subfigures, the left column shows the extracted registration primitives, the middle column the generated registration primitive graphs, and the right column the registration primitive graphs overlaid on the original registration primitives.
After the registration primitive graph sets are generated, the MVS image dense point cloud registration primitive graphs and the LiDAR laser point cloud registration primitive graphs are matched according to the registration primitive graph matching method defined in the two-step registration model. During matching, the local similarity of the registration primitive graphs is detected through kernel triangle matching. For registration primitive graphs whose kernel triangles are similar, the global similarity is measured with the GED. Registration pairs with similar kernel triangles (local similarity) but large GED distances (low global similarity), as shown in fig. 9, are removed, finally achieving optimal registration primitive graph matching.
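The patent does not spell out the kernel triangle similarity measure; one plausible scale- and pose-invariant sketch compares perimeter-normalised side lengths (the tolerance value is an assumption):

```python
import numpy as np

def triangle_signature(p1, p2, p3):
    """Sorted side lengths normalised by perimeter: invariant to the
    rotation, translation and scale differences between the image-side
    and LiDAR-side primitive graphs."""
    sides = np.sort([np.linalg.norm(np.subtract(a, b))
                     for a, b in ((p1, p2), (p2, p3), (p3, p1))])
    return sides / sides.sum()

def kernel_triangles_similar(t1, t2, tol=0.02):
    """Local-similarity test on two kernel triangles (node triples)."""
    sig1, sig2 = triangle_signature(*t1), triangle_signature(*t2)
    return bool(np.abs(sig1 - sig2).max() < tol)

# A 3-4-5 triangle matches a scaled and translated copy (6-8-10)
# but not a 1-1-sqrt(2) triangle.
t_img = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
t_las = [(1.0, 1.0), (7.0, 1.0), (1.0, 9.0)]
t_bad = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
ok = kernel_triangles_similar(t_img, t_las)
bad = kernel_triangles_similar(t_img, t_bad)
```

Surviving candidates would then be checked globally with the GED, pruning pairs like those of fig. 9 that are locally similar but globally inconsistent.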
Step 4: solving the coarse registration model from the conjugate registration primitive pairs to obtain the spatial coordinate transformation between the UAV photogrammetric coordinate system and the LiDAR reference coordinate system.
Different registration primitive extraction and matching methods correspond to different coarse registration geometric models. Image building primitives extracted and matched under the guidance of laser building-extraction priors, i.e., registration primitives extracted directly on the images, correspond to the 2D-3D collinearity-equation registration model. Building registration primitives extracted from the MVS image dense point cloud and matched via registration primitive graphs to those in the LiDAR laser point cloud correspond to the 3D-3D spatial similarity transformation registration model. The two model solutions are described below:
(1) Solving the 2D-3D coarse registration model.
The 2D-3D registration model is solved using the regularized building outer-polygon corner points. An image onto which a complete building outer frame back-projects is recorded as a key frame. Let a building outer-polygon corner in the key frame be m = (u, v, f)^T, and let the corresponding building corner in the LiDAR data be M_las = (X, Y, Z)^T. The collinearity relation between the two corners can be expressed as:

s_pnp · m = A [R_pnp | t_pnp] M_las

where A is the known camera intrinsic parameter matrix, s_pnp is a scale parameter, and R_pnp, t_pnp form the camera extrinsic parameter matrix. The equation describes the collinearity of an object point and its image point, which the invention solves linearly with the EPnP algorithm. The regularized building outer polygon also provides geometric features such as collinearity and coplanarity, which can serve as constraints for iterative refinement of the EPnP result, e.g., a line/coplanarity-constrained registration algorithm.
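A minimal numerical sketch of the collinearity relation s·m = A[R|t]·M_las; the intrinsic matrix and point below are illustrative values, and in practice (R, t) would be estimated from several corner correspondences by an EPnP solver (e.g. OpenCV's `solvePnP` with the `SOLVEPNP_EPNP` flag) rather than assumed:

```python
import numpy as np

def project_collinear(M_las, A, R, t):
    """Project a LiDAR corner M_las = (X, Y, Z)^T into the image via
    s * m = A [R | t] M_las, returning the pixel coordinates (u, v).

    A is the known intrinsic matrix; (R, t) are the exterior orientation
    parameters sought by the coarse registration."""
    Mc = R @ np.asarray(M_las, float) + t   # camera-frame coordinates
    m = A @ Mc                              # homogeneous image point
    return m[:2] / m[2]                     # divide out the scale s

# Toy check with identity rotation: a point on the optical axis maps
# to the principal point (320, 240) of this assumed intrinsic matrix.
A = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])
uv = project_collinear([0., 0., 10.], A, np.eye(3), np.zeros(3))
```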
(2) Solving the 3D-3D coarse registration model.
The 3D-conjugate corner pairs of the regularized building outer polygons (conjugate meaning the registration primitive pairs extracted, with or without POS support, by the two methods above) are used as control points to compute the three-dimensional similarity transformation. Let M_mvs be a regularized building outer-polygon corner in the photogrammetric coordinate system C_mvs, and let M_las be the coordinates of its corresponding corner in the LiDAR reference frame C_w. The transformation between them is defined as M_las = λ R M_mvs + T, where λ, R, and T are the scale, rotation, and translation parameters respectively. Applying the robust SVD method to every 3 node pairs of the n matched graph node pairs yields one candidate solution set (λ, R, T).
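The SVD-based solution of M_las = λ R M_mvs + T can be sketched as follows, as an Umeyama-style estimator over n ≥ 3 conjugate corner pairs (the function name is illustrative; the robust sampling over triples of node pairs mentioned above is not shown):

```python
import numpy as np

def similarity_from_pairs(M_mvs, M_las):
    """Estimate (lambda, R, T) in M_las = lambda * R * M_mvs + T from
    n >= 3 conjugate corner pairs using an SVD (Umeyama-style) method."""
    P = np.asarray(M_mvs, float)   # n x 3 corners, photogrammetric frame
    Q = np.asarray(M_las, float)   # n x 3 corners, LiDAR reference frame
    mp, mq = P.mean(0), Q.mean(0)
    Pc, Qc = P - mp, Q - mq                       # centered coordinates
    U, S, Vt = np.linalg.svd(Pc.T @ Qc)           # cross-covariance SVD
    D = np.diag([1., 1., np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T                            # optimal rotation
    lam = np.trace(np.diag(S) @ D) / (Pc ** 2).sum()  # optimal scale
    T = mq - lam * R @ mp                         # optimal translation
    return lam, R, T
```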
And 5, realizing the registration of the MVS image dense point cloud and the LiDAR laser point cloud based on the space coordinate conversion relation.
Taking the spatial coordinate transformation obtained from the coarse registration as the initial value, a variant of the ICP algorithm achieves the optimal registration between the MVS image dense point cloud and the LiDAR laser point cloud, yielding accurate registration parameters for the sequence images.
The ICP algorithm and its variants can be decomposed into five stages (Rusinkiewicz and Levoy, 2001): (1) selection of points; (2) matching points; (3) weighting of matches; (4) rejecting pairs; (5) error metric and minimization.
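A compact point-to-point ICP sketch annotated with the five stages above; the brute-force matching and the median-based rejection rule are simplifying assumptions of this illustration, and a production variant would use a k-d tree and tuned weighting:

```python
import numpy as np

def icp_point_to_point(src, dst, iters=50, reject_mult=3.0):
    """Minimal point-to-point ICP refinement, annotated with the five
    stages of Rusinkiewicz & Levoy (2001). `src` is the MVS cloud already
    coarse-registered into the frame of `dst`; returns accumulated (R, t)."""
    src = np.asarray(src, float).copy()
    dst = np.asarray(dst, float)
    R_acc, t_acc = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # (1) selection: this sketch simply uses all source points
        # (2) matching: brute-force nearest neighbor in the target cloud
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(1)
        dist = np.sqrt(d2[np.arange(len(src)), idx])
        # (3) weighting / (4) rejection: drop pairs beyond k * median distance
        keep = dist <= reject_mult * np.median(dist)
        P, Q = src[keep], dst[idx[keep]]
        # (5) error metric: closed-form least-squares rigid update via SVD
        mp, mq = P.mean(0), Q.mean(0)
        U, _, Vt = np.linalg.svd((P - mp).T @ (Q - mq))
        D = np.diag([1., 1., np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mq - R @ mp
        src = src @ R.T + t
        R_acc, t_acc = R @ R_acc, R @ t_acc + t   # compose with prior steps
    return R_acc, t_acc
```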
By the data registration method, data registration errors between the sequence images and the LiDAR laser point clouds are eliminated, and coordinate references of different source data are unified.
And 6, constructing a data fusion result based on the registered LiDAR laser point cloud and the sequence image data set.
Based on the registration of the UAV laser point cloud and the sequence images, a series of fusion processes between the sequence images and the LiDAR point cloud can be carried out, such as colored laser point cloud generation and true orthophoto production.

Colored laser point cloud generation assigns the color information of the registered images to the LiDAR point cloud data.
A simple point cloud colorization algorithm is described below:

1) Select the colorizing images so that the survey area is fully covered;

2) Clip the point cloud data corresponding to the current colorizing image, perform occlusion detection with the camera parameters of that image, and determine the laser points visible in it;

3) Colorize the currently visible, not-yet-colored three-dimensional points according to the collinearity equation;

4) Repeat steps 2-3 until all images are traversed or all points are colored.
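Step 3 of the algorithm above, colorization by the collinearity equation, might look as follows in outline (the array shapes, the use of -1 as the "uncolored" marker, and the function name are assumptions of this illustration):

```python
import numpy as np

def colorize_points(points, colors, A, R, t, image, visible_mask):
    """Assign RGB from the current image to the visible, not-yet-colored
    points via the collinearity equation. `visible_mask` comes from the
    occlusion detection step; `colors` holds -1 for uncolored points."""
    h, w, _ = image.shape
    cam = points @ R.T + t                  # transform into the camera frame
    uv = cam @ A.T                          # homogeneous pixel coordinates
    uv = uv[:, :2] / uv[:, 2:3]             # perspective division -> (u, v)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    ok = (visible_mask & (colors[:, 0] < 0)     # visible and uncolored
          & (u >= 0) & (u < w) & (v >= 0) & (v < h)
          & (cam[:, 2] > 0))                    # in front of the camera
    colors[ok] = image[v[ok], u[ok]]
    return colors
```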
Referring to Fig. 10 (comprising Figs. 10(a), (b), and (c)): because the UAV operates at altitude, the relief displacement caused by object heights in the acquired images is large, and applying the collinearity equation directly for color assignment without occlusion detection produces Double-Mapping errors. Double-Mapping refers to the error produced when a DOM is made from a DSM by ordinary orthorectification. Fig. 10(a) illustrates the cause: the raised object (segment 2-3) occludes the ground region (segment 4-5); colorizing directly by the collinearity condition equation without occlusion detection copies the color of segment 2-3 into segment 4-5, assigning segment 4-5 the wrong color.
Performing occlusion detection before colorization effectively avoids this error; commonly used occlusion detection includes the Z-buffer method. Considering the sparsity of laser point clouds, the optimal Z-buffer ray thickness is hard to determine, so the invention performs occlusion detection directly on the laser point cloud: a morphological closing operation is applied to the visibility map output by the HPR (Hidden Point Removal) algorithm to determine the occluded regions, improving adaptability to sparse UAV point cloud data. Fig. 11 illustrates occlusion detection and the colorization performed after detection. Fig. 11(a) is a colorizing image; Fig. 11(b) is the visible region detected by the HPR algorithm, where white is visible and black is invisible. Fig. 11(b) shows that, due to its algorithmic limitations, the raw HPR output adapts poorly to sparse point clouds. The invention refines the misdetected occlusion regions with the closing operation, under the morphological criterion that occluded regions must be connected, i.e., no occluded region exists in isolation. Isolated occlusion regions in the HPR output are culled by this criterion, as shown in Fig. 11(c). Fig. 11(d) shows the point cloud colorized after occlusion analysis; Double-Mapping is well suppressed.
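The HPR visibility test itself can be sketched as below, following the spherical-flip construction of Katz et al. (2007); the radius factor and the omission of the subsequent morphological closing are simplifications of this illustration:

```python
import numpy as np
from scipy.spatial import ConvexHull

def hpr_visible(points, viewpoint, radius_factor=100.0):
    """Hidden Point Removal sketch: spherically flip the cloud about the
    viewpoint and keep the points that land on the convex hull of the
    flipped cloud plus the viewpoint. Returns indices of visible points."""
    p = np.asarray(points, float) - np.asarray(viewpoint, float)
    norms = np.linalg.norm(p, axis=1, keepdims=True)
    R = radius_factor * norms.max()               # flip sphere radius
    flipped = p + 2.0 * (R - norms) * p / norms   # spherical flip
    hull = ConvexHull(np.vstack([flipped, [[0., 0., 0.]]]))
    visible = set(hull.vertices)
    visible.discard(len(p))                       # drop the viewpoint itself
    return sorted(visible)
```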
The registration method for UAV laser point clouds and sequence images provided by the embodiments of the invention achieves at least the following technical effects:

(1) LiDAR laser point clouds and sequence images collected on the same UAV MMS platform are used, and the direct geospatial reference serves as the initial registration value, accelerating the solution of the registration model.

(2) Conjugate registration primitives are extracted with or without POS data support, with a different registration solution model for each case, optimizing the registration result.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to examples, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from their spirit and scope, and such modifications shall be covered by the claims of the present invention.

Claims (9)

1. A registration method of laser point cloud and sequence images of an unmanned aerial vehicle is characterized by comprising the following steps:
step 1, acquiring LiDAR laser point clouds and sequence images acquired by an unmanned aerial vehicle, and generating MVS image dense point clouds based on the sequence images;
step 2, extracting the roof of the building in the LiDAR laser point cloud to obtain first contour extraction information; extracting the roof of the building from the MVS image dense point cloud to obtain second contour extraction information;
when building roof extraction is performed in the MVS image dense point cloud, in the case that POS data exist, the coarse registration model is chosen as a 2D-3D collinearity-equation registration model; the first contour extraction information guides the extraction of building registration primitives on the MVS image dense point cloud; the position and attitude data provided by the POS are converted into camera exterior orientation element values; the extracted building outer frames are back-projected onto all images using the collinearity equation; images containing a complete back-projected building outer frame are recorded as key frames, and the key frames participate in solving the 2D-3D coarse registration model to obtain the second contour extraction information;
when building roof extraction is performed in the MVS image dense point cloud, in the case of no POS data, building segmentation, contour extraction, and regularization are performed on the MVS image dense point cloud to obtain the second contour extraction information;
step 3, constructing first registration primitive graphs based on the first contour extraction information to obtain a first registration primitive graph set; constructing second registration primitive graphs based on the second contour extraction information to obtain a second registration primitive graph set; and matching the first and second registration primitive graph sets to obtain conjugate registration primitive pairs;
step 4, resolving a coarse registration model according to the conjugate registration element pair to obtain a space coordinate conversion relation between an unmanned aerial vehicle photogrammetry coordinate system and a LiDAR reference coordinate system;
and 5, realizing the registration of the MVS image dense point cloud and the LiDAR laser point cloud based on the space coordinate conversion relation.
2. The method for registering a UAV laser point cloud with sequence images according to claim 1, wherein step 1 is implemented as follows: after the sequence images are obtained, the exterior orientation elements of the sequence images collected by the calibrated camera are recovered in the UAV photogrammetric coordinate system by a structure-from-motion (SfM) method, and the MVS image dense point cloud is generated from the sequence images by a multi-view stereo matching method.
3. The method for registering a UAV laser point cloud with sequence images according to claim 1, wherein in step 2, a labeling process is adopted to extract building roofs from the LiDAR laser point cloud to obtain preliminary extraction contours; the preliminary contours are regularized with the iterative minimum bounding rectangle (RMBR) algorithm to obtain the first contour extraction information.
4. The unmanned aerial vehicle laser point cloud and sequence image registration method of claim 1, wherein in step 3, the first contour extraction information and the second contour extraction information both include extracted outer polygons of buildings, centers of the extracted outer polygons of buildings are used as graph nodes, building roofs are used as building registration primitives, and registration primitive graphs are constructed for the extracted building registration primitives;
when matching the registration primitive graphs, local similarity between the first and second registration primitive graphs is first detected through kernel triangle matching, and then global similarity between them is measured with the graph edit distance (GED); the results of the local and global similarity are combined to achieve the optimal registration primitive graph matching and obtain the conjugate registration primitive pairs.
5. The method for registering the laser point cloud of the unmanned aerial vehicle and the sequence image according to claim 1, wherein in the step 4, when the rough registration model is a 2D-3D collinearity equation registration model, the calculating the rough registration model comprises:
using the regularized building outer-polygon corner points to solve the 2D-3D collinearity-equation registration model; recording an image onto which a complete building outer frame back-projects as a key frame, and letting a building outer-polygon corner in the key frame be m = (u, v, f)^T and the corresponding building corner in the LiDAR laser point cloud data be M_las = (X, Y, Z)^T, the collinearity relation between the two corners is expressed as:

s_pnp · m = A [R_pnp | t_pnp] M_las

wherein A is the camera intrinsic parameter matrix, s_pnp is a scale parameter, and R_pnp, t_pnp form the camera extrinsic parameter matrix.
6. The method for registering the laser point cloud of the unmanned aerial vehicle and the sequence image according to claim 1, wherein in the step 4, when the rough registration model is a 3D-3D model, the calculating the rough registration model comprises:
extracting building registration primitives from the MVS image point cloud, matching the registration primitive graphs with the registration primitives in the LiDAR laser point cloud, corresponding to a 3D-3D spatial similarity transformation registration geometric model, and solving the three-dimensional similarity transformation using the 3D-conjugate corner pairs of the regularized building outer polygons as control points; letting M_mvs be a regularized building outer-polygon corner in the UAV photogrammetric coordinate system C_mvs and M_las the coordinates of its corresponding building corner in the LiDAR reference frame C_w, the transformation between the two corners is defined as:

M_las = λ R M_mvs + T;

wherein λ, R, and T are respectively the scale, rotation, and translation parameters of the coordinate transformation.
7. The method as claimed in claim 1, wherein in step 5, a space coordinate transformation relation obtained by coarse registration calculation is used as an initial value, and a variant ICP algorithm is used to achieve optimal registration between the MVS image dense point cloud and the LiDAR laser point cloud, so as to obtain precise registration parameters of the sequence image.
8. The method of registering a laser point cloud of a drone with a sequence image of claim 1, further comprising: and 6, constructing a data fusion result based on the registered LiDAR laser point cloud and the sequence image data set.
9. The method of claim 8, wherein the data fusion result comprises colored laser point cloud generation and true orthophotos.
CN202011367372.XA 2020-11-27 2020-11-27 Registration method for laser point cloud and sequence image of unmanned aerial vehicle Active CN112465849B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011367372.XA CN112465849B (en) 2020-11-27 2020-11-27 Registration method for laser point cloud and sequence image of unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011367372.XA CN112465849B (en) 2020-11-27 2020-11-27 Registration method for laser point cloud and sequence image of unmanned aerial vehicle

Publications (2)

Publication Number Publication Date
CN112465849A CN112465849A (en) 2021-03-09
CN112465849B true CN112465849B (en) 2022-02-15

Family

ID=74809601

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011367372.XA Active CN112465849B (en) 2020-11-27 2020-11-27 Registration method for laser point cloud and sequence image of unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN112465849B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113625288A (en) * 2021-06-15 2021-11-09 中国科学院自动化研究所 Camera and laser radar pose calibration method and device based on point cloud registration
CN115496835B (en) * 2022-09-20 2023-10-20 北京数字绿土科技股份有限公司 Point cloud data color-imparting method and system based on CPU and GPU heterogeneous parallel architecture
CN115690380B (en) * 2022-11-11 2023-07-21 重庆数字城市科技有限公司 Registration method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102411778A (en) * 2011-07-28 2012-04-11 武汉大学 Automatic registration method of airborne laser point cloud and aerial image
CN104809689A (en) * 2015-05-15 2015-07-29 北京理工大学深圳研究院 Building point cloud model and base map aligned method based on outline
CN105844629A (en) * 2016-03-21 2016-08-10 河南理工大学 Automatic segmentation method for point cloud of facade of large scene city building
CN110110641A (en) * 2019-04-29 2019-08-09 中国水利水电科学研究院 A kind of the unmanned plane monitoring method and system of Basin-wide flood scene

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Jianping Li et al. Automatic registration of panoramic image sequence and mobile laser scanning data using semantic features. ISPRS Journal of Photogrammetry and Remote Sensing, 2018, pp. 41-57. *
Chen Chi et al. Automatic registration method for vehicle-borne MMS laser point clouds and sequence panoramic images. Acta Geodaetica et Cartographica Sinica, 2018, 47(2), pp. 215-224. *

Similar Documents

Publication Publication Date Title
CN109872397B (en) Three-dimensional reconstruction method of airplane parts based on multi-view stereo vision
CN106709947B (en) Three-dimensional human body rapid modeling system based on RGBD camera
CN112465849B (en) Registration method for laser point cloud and sequence image of unmanned aerial vehicle
Cheng et al. 3D building model reconstruction from multi-view aerial imagery and lidar data
Li et al. Automatic DSM generation from linear array imagery data
US7509241B2 (en) Method and apparatus for automatically generating a site model
CN104156536B (en) The visualization quantitatively calibrating and analysis method of a kind of shield machine cutter abrasion
Cheng et al. Integration of LiDAR data and optical multi-view images for 3D reconstruction of building roofs
WO2018061010A1 (en) Point cloud transforming in large-scale urban modelling
Haala et al. Extracting 3D urban models from oblique aerial images
Gao et al. Ancient Chinese architecture 3D preservation by merging ground and aerial point clouds
Gao et al. Ground and aerial meta-data integration for localization and reconstruction: A review
CN103839286B (en) The true orthophoto of a kind of Object Semanteme constraint optimizes the method for sampling
CN112465732A (en) Registration method of vehicle-mounted laser point cloud and sequence panoramic image
CN111047698B (en) Real projection image acquisition method
CN112767461A (en) Automatic registration method for laser point cloud and sequence panoramic image
Zhang et al. Lidar-guided stereo matching with a spatial consistency constraint
CN111652241A (en) Building contour extraction method fusing image features and dense matching point cloud features
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
CN112767459A (en) Unmanned aerial vehicle laser point cloud and sequence image registration method based on 2D-3D conversion
CN115631317B (en) Tunnel lining ortho-image generation method and device, storage medium and terminal
Deng et al. Automatic true orthophoto generation based on three-dimensional building model using multiview urban aerial images
Frommholz et al. Inlining 3d reconstruction, multi-source texture mapping and semantic analysis using oblique aerial imagery
Novacheva Building roof reconstruction from LiDAR data and aerial images through plane extraction and colour edge detection
Haggag et al. Towards automated generation of true orthoimages for urban areas

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant