CN117092621A - Hyperspectral image-point cloud three-dimensional registration method based on ray tracing correction

Info

Publication number: CN117092621A
Application number: CN202310633129.5A
Authority: CN (China)
Prior art keywords: plane, point cloud, image, point, hyperspectral
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 谷延锋, 戴逸尘, 李贤, 王晨
Current assignee: Harbin Institute of Technology (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Harbin Institute of Technology
Application filed by Harbin Institute of Technology
Priority date: 2023-05-31 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Filing date: 2023-05-31
Publication date: 2023-11-21

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497Means for monitoring or calibrating
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/50Systems of measurement based on relative movement of target
    • G01S17/58Velocity or trajectory determination systems; Sense-of-movement determination systems
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/66Tracking systems using electromagnetic waves other than radio waves
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10036Multispectral image; Hyperspectral image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

A hyperspectral image-point cloud three-dimensional registration method based on ray tracing correction, belonging to the technical field of stereo registration of hyperspectral images and lidar data. The method addresses the large loss of three-dimensional information caused by low-altitude viewing-angle errors in the stereo registration of hyperspectral images from a low-altitude airborne platform with lidar point clouds. First, the spatial position and flight direction vector of the unmanned aerial vehicle (UAV) at the scan moment of each hyperspectral frame are calculated; a layering plane is determined by a plane normal vector and a fixed point on the plane, and the spatial layer corresponding to each hyperspectral scan frame is determined from the layering planes. Then, on the layering result, a corresponding offset point is obtained for every point within the point-cloud layer of a hyperspectral frame, and the offset point cloud is orthographically projected according to the coordinate-grid parameters to establish the correspondence between each point and a grid cell, yielding a digital surface model. Finally, edges are extracted from the digital surface model and the hyperspectral image according to the phase-congruency principle, and the two are then registered.

Description

Hyperspectral image-point cloud three-dimensional registration method based on ray tracing correction
Technical Field
The invention belongs to the technical field of stereo registration of hyperspectral images and lidar data, and specifically relates to a hyperspectral image-point cloud stereo registration method based on ray tracing correction.
Background
Hyperspectral imaging acquires both two-dimensional spatial information and spectral information of an observed scene/target, producing a remote sensing image that unifies image and spectrum. Lidar is a technology that rapidly acquires three-dimensional spatial information of an observed scene/target by scanning the target object with a laser beam and receiving the reflected laser signals; it offers advantages such as high precision and all-weather capability. To exploit the complementary strengths of hyperspectral imaging and lidar, the joint processing of hyperspectral images and lidar data is of important research significance: it can produce both spectral information and spatial three-dimensional information of an observed scene/target, and has broad application prospects.
Because the detection principles of hyperspectral sensors and lidar differ, registration is required before joint processing of the information. Existing registration of hyperspectral images and lidar data mainly targets two-dimensional registration for spaceborne/high-altitude airborne platforms; stereo registration for low-altitude airborne platforms has received comparatively little attention. Low-altitude airborne hyperspectral-lidar systems can acquire spatial three-dimensional information and spectral information at higher spatial resolution, which is of great significance in fields such as three-dimensional reconstruction and digital twinning. However, existing research on hyperspectral-lidar data stereo registration for low-altitude UAV platforms remains insufficient: mainstream image registration methods applied directly cannot overcome the low-altitude viewing-angle error, so three-dimensional information is lost, and the prevailing stereo registration methods still center on a manual calibration-mapping process, which precludes real-time data registration.
Disclosure of Invention
The invention aims to solve the problem of large three-dimensional information loss caused by low-altitude viewing-angle errors in the stereo registration of low-altitude airborne-platform hyperspectral images with lidar point clouds.
A hyperspectral image-point cloud three-dimensional registration method based on ray tracing correction comprises the following steps:
S1, layering the lidar point cloud, comprising the following steps:
S1.1, calculating the spatial position and flight direction vector of the UAV at the scan moment of each hyperspectral frame:
let the trajectory data be {P_m, T_m, POS_m}, where m = 1, 2, ..., M and M is the number of hyperspectral image frames; P_m = (E_m, N_m, H_m) are the E-, N- and H-axis coordinates in the UTM coordinate system; T_m is a time stamp; POS_m = (ω_m, κ_m, φ_m) are the flight attitude angles, ω_m, κ_m and φ_m being the pitch, heading and yaw angles respectively;
the direction vector at the scan moment of frame m is calculated as

d_m = sin(κ_m)·cos(ω_m)·v_E + cos(κ_m)·cos(ω_m)·v_N + sin(ω_m)·v_H

where v_E, v_N, v_H are the unit direction vectors of the E, N and H axes in UTM coordinates;
S1.2, the spatial layer Z_m corresponding to a single hyperspectral scan frame is the slab between the front and rear layering planes Π_m and Π_{m+1}; the spatial layer Z_m corresponding to a single hyperspectral scan frame is determined from the layering planes Π_i;
a layering plane Π_i is determined by its plane normal vector d_i and a fixed point P_i on the plane, where i = 1, 2, ..., M, M+1;
S2, generating a digital surface model from the layering result by ray-tracing point-cloud offset projection, and establishing its correspondence with the original point cloud; the process of generating the digital surface model by ray-tracing point-cloud offset projection comprises the following steps:
S2.1, according to the point-cloud layering result of step 1, for any point p_n = (E_n, N_n, H_n) in the point-cloud layer corresponding to a given hyperspectral frame, the corresponding offset point p'_n = (E'_n, N'_n, H_n) is obtained from the ray-tracing offset formula,
where κ′ is the track angle, i.e. the acute angle formed with the north direction v_N; H_pj is the height of the hyperspectral imaging plane; and P_m = (E_m, N_m, H_m) are the spatial position coordinates of the UAV at the scan moment of the frame;
all points in the layer are traversed and offset-transformed while the point height H_n is kept unchanged, generating the offset point cloud;
S2.2, a grid with R rows and C columns is established, R×C being larger than the resolution of the hyperspectral image; the offset point cloud is orthographically projected according to the coordinate-grid parameters to establish the correspondence between each point and a grid cell, where (r_n, c_n) denotes the coordinates of the cell in the digital surface model corresponding to the lidar point p'_n, and (r, c) denotes the row and column coordinates of a cell in the lidar digital surface model;
the value of a single cell is the maximum of the height values of all points within the cell and is recorded as the cell's gray level; after all cells have been traversed, the gray levels are linearly mapped into [0,1] to obtain the digital surface model;
S3, extracting edges from the digital surface model and the hyperspectral image according to the phase-congruency principle;
S4, registering the digital surface model with the hyperspectral image by an image registration method to generate the hyperspectral point cloud.
Further, the plane normal vector d_i of the layering plane Π_i in S1.2 is determined as follows:
for layering planes Π_2 through Π_M, the plane normal vector is the mean of the flight direction vectors d_{i-1} and d_i at the two adjacent moments;
the normal vectors of the first and last planes Π_1 and Π_{M+1} are the flight direction vectors d_1 and d_M at the start and end.
Further, the fixed point P_i on the layering plane Π_i in S1.2 is determined as follows:
for layering planes Π_2 through Π_M, the fixed point is the midpoint of the flight coordinates P_{i-1} and P_i at the two adjacent moments;
the fixed point of the first plane Π_1 is determined by extending the segment between the fixed points of Π_2 and Π_3 backwards by an equal length;
the fixed point of the last plane Π_{M+1} is determined by extending the segment between the fixed points of Π_{M-1} and Π_M by an equal length.
Further, in S1.2 the layering plane Π_i is determined from the plane normal vector d_i and the fixed point P_i on the plane as follows:
from the determined plane normal vector d_i, the coefficients A_i, B_i, C_i are obtained via d_i = A_i·v_E + B_i·v_N + C_i·v_H; based on the fixed point P_i = (E_i, N_i, H_i), whose components are the E-, N- and H-axis coordinates in the UTM coordinate system, the plane Π_i is:

Π_i: Π_i(p) = A_i·e + B_i·n + C_i·h − (A_i·E_i + B_i·N_i + C_i·H_i) = 0

where Π_i(p) denotes the function corresponding to plane Π_i, evaluated by substituting the coordinates of a point p = (e, n, h) in space; e, n and h are the E-, N- and H-axis coordinates in the UTM coordinate system, appearing as the independent variables of the function.
The spatial layer Z_m corresponding to a single hyperspectral scan frame is determined from the layering planes Π_i as follows:
the M+1 demarcation planes divide the point-cloud space into M scan layers Z_m corresponding to the hyperspectral image frames, each scan layer being bounded by the planes Π_m and Π_{m+1}; the necessary and sufficient condition for any point p_n = (E_n, N_n, H_n) of the point cloud to lie in scan layer Z_m is expressed as:

p_n ∈ Z_m ⇔ Π_m(p_n) ≥ 0 and Π_{m+1}(p_n) < 0

where ⇔ means the two sides are equivalent;
in frame-scan order, each point is substituted into the spatial-layer condition for verification; for a point p_n that satisfies the condition, the corresponding scan-layer index is recorded and the point takes no further part in the verification of subsequent layers; the correspondence between each lidar point and its spatial layer is thus computed from the layering-plane information.
Alternatively,
when performing step 1.2, if the UAV flight satisfies the straight-line flight attitude-stabilization conditions (a) and (b), a fast point-cloud layering algorithm is used to determine the layering planes Π_i, and the spatial layer Z_m corresponding to a single hyperspectral scan frame is determined from the layering planes Π_i.
The straight-line flight attitude-stabilization conditions are:
(a) the flight trajectory approximates a straight path, i.e. the instantaneous heading angles κ_m of all trajectory points on the path tend to a constant K, with deviation not exceeding ±5°;
(b) within a single straight-line flight segment, the altitude fluctuation does not exceed 5 meters, and at all scan moments the flight pitch angle ω_m approaches 0 and does not exceed 10°;
the fast point-cloud layering algorithm determines the layering planes Π_i as follows:
when the flight path is relatively stable and approximately straight, the layering planes are perpendicular to the ground surface and mutually parallel, i.e. C_i = 0 and A_i, B_i are constants A, B; the plane Π_i is then determined by the fixed point P_i alone, expressed as:

Π_i: Π_i(p) = A·e + B·n − D_i = 0

where the plane-equation constant term D_i = A·E_i + B·N_i is obtained by substituting the fixed point's E- and N-axis coordinates, and A = sin K, B = cos K.
The spatial layer Z_m corresponding to a single hyperspectral scan frame is then determined from the layering planes Π_i as follows:
in straight-line flight, the plane-group constant terms D_i form a monotonic sequence; the plane through any point p_n in space parallel to the plane group is written Π_n: A·e + B·n − D_n = 0, where the constant term D_n = A·E_n + B·N_n is obtained by substituting the point's E- and N-axis coordinates;
the necessary and sufficient condition for p_n to lie in scan layer Z_m is then expressed as:

p_n ∈ Z_m ⇔ D_m ≤ D_n < D_{m+1}

where D_m and D_{m+1} are the constant terms of the front and rear layering-plane equations Π_m and Π_{m+1} bounding scan layer Z_m;
when the flight data of the hyperspectral imager satisfy the straight-line attitude-stabilization conditions, the point cloud is first traversed to obtain a preliminary layering result p_n ∈ Z_m, which is then verified for correctness against the non-approximated plane equations; if verification fails, p_n is verified against the neighbouring layers Z_{m±t} until verification succeeds.
Further, the process of edge extraction from the digital surface model and the hyperspectral image comprises the following steps:
each single-band image I_s (s = 1, 2, ..., S) of the hyperspectral image and the digital surface model I_DSM are taken as gray-level images, S being the total number of bands of the hyperspectral image; the gray-level images are filtered with a two-dimensional Log-Gabor filter, and the edge map of each gray-level image is generated with the phase-congruency algorithm; the edge-map group of the hyperspectral image is denoted {M_s}, s = 1, 2, ..., S, and the edge map of the digital surface model M_DSM;
the noise-threshold group of the hyperspectral edge-map group is denoted {T_s}; noise suppression is performed by traversing the full-band edge maps of the hyperspectral image,
where the result is the hyperspectral image edge map after full-band noise reduction and ⌈·⌉ denotes rounding up;
the noise threshold of the digital-surface-model edge map is denoted T_DSM, and noise suppression is applied to the digital-surface-model edge map analogously, yielding the noise-reduced digital-surface-model edge map.
Further, the process of registering the digital surface model with the hyperspectral image by an image registration method comprises the following steps:
with the noise-reduced hyperspectral edge map from step 3 as the moving image and the noise-reduced digital-surface-model edge map as the fixed image, the two images are registered with a feature-extraction-based image registration method, producing a two-dimensional affine transformation T_affine;
the hyperspectral image is written [HSI_1(x,y), HSI_2(x,y), ..., HSI_S(x,y)] with image resolution X×Y; according to T_affine, the pixel correspondence (x, y) → (r, c) from the X×Y hyperspectral image to the R×C digital-surface-model image is established, i.e. registration is achieved.
Further, in S2.2, after the gray level of each grid cell has been determined, median filtering is applied over all cells.
Beneficial effects:
(1) The invention provides a point-cloud position correction method based on ray tracing. A scanning-ray model of pushbroom hyperspectral imaging is first established to determine the coverage of a single hyperspectral scan frame, yielding the correspondence between hyperspectral data frames and the lidar point cloud. On this basis, a fast point-cloud layering algorithm is provided that reduces the time complexity. A ray-tracing-based point-cloud position correction method is further provided: an offset point cloud is generated, from which a digital surface model containing three-dimensional information can be produced by orthographic projection, avoiding the occlusion-detection step of curved-surface projection.
(2) The invention further provides a noise-reduction algorithm for hyperspectral edge feature maps: according to the gray-distribution characteristics of feature points in hyperspectral single-band edge maps, a full-band cyclic noise-reduction method is provided in which the rich spectral dimension of the hyperspectral image compensates for its low spatial resolution.
With these characteristics, the invention effectively addresses the large three-dimensional information loss caused by low-altitude viewing-angle errors in the stereo registration of low-altitude airborne-platform hyperspectral images with lidar point clouds, effectively improves the accuracy of such stereo registration, and has good application prospects on low-altitude airborne platforms.
Drawings
Fig. 1 is an overall flowchart of push-broom hyperspectral image-lidar data stereo registration based on ray-tracing correction.
Fig. 2 is a flow chart of a laser radar point cloud correction fast algorithm based on ray projection.
FIG. 3 is a flowchart of a hyperspectral image edge map full-band circular noise reduction algorithm.
Fig. 4 is an example of the straight flight attitude stabilization condition (a).
Fig. 5 is an example of the straight flight attitude stabilization condition (b).
Fig. 6 is a graph of a point cloud layering effect.
Fig. 7 is a graph of a point cloud migration effect based on ray tracing.
FIG. 8 is a graph comparing a generated digital surface model with a hyperspectral full-band mean image.
Fig. 9 is a comparison of the generated digital surface model edge map with the hyperspectral full band edge map.
Fig. 10 is a pre-registration effect diagram.
Fig. 11 is a post-registration effect diagram.
Fig. 12 is a post-registration effect graph (with intersection-over-union regions marked).
Fig. 13 is a stereo registration effect diagram of the present invention.
Fig. 14 is a stereo registration effect diagram of the present invention.
Fig. 15 is a stereo registration partial effect diagram of the present invention.
Fig. 16 is a non-stereo registration effect diagram.
Fig. 17 is a partial effect diagram of non-stereo registration.
Detailed Description
Embodiment 1: this embodiment is described with reference to Fig. 1.
The embodiment is a pushbroom hyperspectral image-lidar data stereo registration method based on ray-tracing correction, comprising the following steps:
Step 1, layering the lidar point cloud and establishing the correspondence between lidar points and hyperspectral frames; as shown in Fig. 2, the lidar point-cloud layering process comprises the following steps:
S1.1, to match the lidar point cloud with single-frame hyperspectral images, the layering-plane information of each frame is computed from the airborne positioning and attitude data. First, the spatial position and flight direction vector of the UAV at the scan moment of each hyperspectral frame are calculated:
let the trajectory data be {P_m, T_m, POS_m}, where m = 1, 2, ..., M and M is the number of hyperspectral image frames; P_m = (E_m, N_m, H_m) are the E-, N- and H-axis coordinates in the UTM coordinate system; T_m is a time stamp; POS_m = (ω_m, κ_m, φ_m) are the flight attitude angles, ω_m, κ_m and φ_m being the pitch, heading and yaw angles respectively.
Let the normalized flight-direction vector at the scan moment be d_m = A_m·v_E + B_m·v_N + C_m·v_H, where v_E, v_N, v_H are the unit direction vectors of the E, N and H axes in UTM coordinates, and A_m, B_m, C_m are normalized coefficients satisfying A_m² + B_m² + C_m² = 1.
From the heading angle and pitch angle at that moment, the direction vector at the scan moment of frame m is calculated as

d_m = sin(κ_m)·cos(ω_m)·v_E + cos(κ_m)·cos(ω_m)·v_N + sin(ω_m)·v_H
S1.2, the spatial layer Z_m corresponding to a single hyperspectral scan frame is the slab between the front and rear layering planes Π_m and Π_{m+1}; a single layering plane Π_i is determined by its plane normal vector d_i and a fixed point P_i on the plane, where i = 1, 2, ..., M, M+1.
For planes Π_2 through Π_M, the plane normal vector is the mean of the flight direction vectors d_{i-1} and d_i at the two adjacent moments, and the fixed point is the midpoint of the flight coordinates P_{i-1} and P_i at the two adjacent moments. The normal vectors of the first and last planes Π_1 and Π_{M+1} are the flight direction vectors d_1 and d_M at the start and end, and their fixed points are determined by extending the segments between the fixed points of Π_2, Π_3 and of Π_{M-1}, Π_M by equal lengths.
From the determined normal vector d of the plane i By means of the plane normal vector d i =A i ·v E +B i ·v N +C i ·v H Obtaining coefficient A i 、B i 、C i The method comprises the steps of carrying out a first treatment on the surface of the Based on the fixed point P i =(E i ,N i ,H i ),(E i ,N i ,H i ) Coordinates of E, N and H axes in UTM coordinate system, plane pi i The following are provided:
Π ii =A i ·e+B i ·n+C i ·h-(E i +N i +H i )=0
wherein pi i =A i ·e+B i ·n+C i ·h-(E i +N i +H i ) Indicating plane pi i Corresponding function, pi i (p) represents substituting the coordinates of a point p= (E, N, H) in space into the function pi i Calculating the function value; e. n and H are E, N and H axes coordinates in a three UTM coordinate system which is embodied by independent variables in a function form.
According to M+1 demarcation planes, dividing the point cloud space into M scanning layers Z corresponding to hyperspectral image frames m Each scanning layer is formed by a plane pi m And pi (a Chinese character) m+1 Determining any point p in the point cloud n =(E n ,N n ,H n ) At scanning layer Z m The sufficiency requirements within can be expressed as:
wherein,meaning that the two are equivalent and are mutually sufficient and necessary conditions.
According to the sequence of frame scanning, substituting each point into a space layer corresponding formula to verify, and for the satisfied point p n And recording the corresponding scanning layer serial numbers, and not participating in the verification of the subsequent layers. And calculating the corresponding relation between each laser radar point cloud and the space layer according to the layering plane information.
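A minimal sketch of this general layering test, assuming the plane function Π_i(p) = d_i·(p − P_i) and the membership condition as reconstructed above (function and variable names are illustrative):

```python
import numpy as np

def assign_layers(points, normals, fixed_points):
    """Assign each lidar point to its scan layer.

    points       : (N, 3) lidar points (E, N, H)
    normals      : (M+1, 3) layering-plane normals d_i
    fixed_points : (M+1, 3) fixed points P_i on the planes

    Pi_i(p) = d_i . (p - P_i); p lies in layer Z_m when
    Pi_m(p) >= 0 and Pi_{m+1}(p) < 0.
    """
    diffs = points[:, None, :] - fixed_points[None, :, :]  # (N, M+1, 3)
    vals = np.einsum('nij,ij->ni', diffs, normals)         # Pi_i(p_n) for each plane
    inside = (vals[:, :-1] >= 0) & (vals[:, 1:] < 0)       # between Pi_m and Pi_{m+1}
    layer = inside.argmax(axis=1)                          # first layer satisfied
    layer[~inside.any(axis=1)] = -1                        # points outside every layer
    return layer
```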
When the unmanned plane flight meets the linear flight attitude stabilization conditions (a) and (b), a rapid point cloud layering algorithm can be applied, namely, the rapid point cloud layering algorithm is used for replacing the processing in the step 1.2, so that the process is simplified, and the calculated amount is reduced; the unmanned aerial vehicle flight satisfies the linear flight attitude stabilization condition and the rapid point cloud layering algorithm as follows:
(a) The flight trajectory can approximate a straight path, i.e. satisfy the instantaneous heading angle κ of all trajectory points in the path m Tends to be constant K and the deviation does not exceed + -5 deg..
(b) In single-section straight line flight, the height fluctuation is not more than 5 meters, and the flying pitch angle omega is all the scanning moments m Approaching 0 and no more than 10.
When the flight path is relatively stable and approximately straight, the layering planes are perpendicular to the ground surface, and all planes are parallel to each other, i.e. C i =0,A i 、B i Is constant A, B, the plane n is i Can be fixed by a fixed point P i Determination, expressed as:
Π ii =A·e+B·n-D i =0
wherein plane equation constant term D i =E i +N i +H i The sum of the coordinates of the axes of the fixed points E, N, H, a=sink, b=cosk.
In straight line flight, plane group constant term D i Is a monotonic sequence. Through any point p in space n And planes parallel to the plane set may be denoted as vtp n :A·e+B·n-D n =0. Wherein the constant term D n The sum of the coordinates of the axes of the points E, N and H.
At this time p n At scanning layer Z m The sufficiency requirements within can be expressed as:
wherein D is m 、D m+1 For scanning layer Z m Corresponding front and back layered plane equation pi m 、Π m+1 Medium constant term.
When the flight data of the hyperspectral imager meets the linear attitude stabilization condition, the method is used for traversing the point cloud in advance to obtain a preliminary point cloud layering result p n ∈Z m And verifying whether the layering result is correct according to a non-approximated plane equation in the formula (2). If the verification is wrong, p is the n With the nearest neighbor layer Z m±t Verifying according to equation (2) until verification is correct.
Because the algorithm does not need to traverse the point cloud in the calculation of each layer (the total point number of the point cloud is far greater than the total layer number in the practical application), the method can change the time complexity of the algorithm from o (N) 2 ) Down to o (N).
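A sketch of the fast layering under the stated assumptions (a monotonically increasing sequence D_i). A vectorized binary search is used here for compactness, so the cost is about O(N log M) rather than the strictly linear single pass described in the text:

```python
import numpy as np

def fast_assign_layers(points, D, A, B):
    """Fast layering for straight, attitude-stable flight.

    points : (N, 3) lidar points (E, N, H)
    D      : (M+1,) increasing plane constants D_i = A*E_i + B*N_i
    A, B   : sin(K), cos(K) for the constant heading K
    """
    # Constant term of the parallel plane through each point.
    D_n = A * points[:, 0] + B * points[:, 1]
    # Layer m satisfies D_m <= D_n < D_{m+1}; searchsorted locates it.
    layer = np.searchsorted(D, D_n, side='right') - 1
    # Indices outside [0, M-1] mark points beyond the first/last plane.
    return layer
```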
Step 2, generating a digital surface model based on ray tracing point cloud offset projection on the layering result, and establishing a corresponding relation with an original point cloud; the process of generating a digital surface model based on ray traced point cloud offset projection includes the steps of:
S2.1, point-cloud offset based on ray tracing:
according to the point-cloud layering result of step 1, for any point p_n = (E_n, N_n, H_n) in the point-cloud layer corresponding to a given hyperspectral frame, the corresponding offset point p'_n = (E'_n, N'_n, H_n) is obtained from the ray-tracing offset formula,
where κ′ is the track angle, i.e. the acute angle formed with the north direction v_N; H_pj is the height of the hyperspectral imaging plane; and P_m = (E_m, N_m, H_m) are the spatial position coordinates of the UAV at the scan moment of the frame.
All points within the layer are traversed and offset-transformed while their height H_n is kept unchanged, generating the offset point cloud.
The aim of this step is to make the lidar points that match the stereo information of the hyperspectral image appear in the digital surface model through the point-cloud offset; that is, the point cloud is offset-corrected by simulating the hyperspectral imaging angles.
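Purely to illustrate the ray-tracing idea, the sketch below assumes a similar-triangles, cross-track displacement away from the flight track; this matches the qualitative behavior reported in the examples (offset away from both sides of the track, larger at higher positions), but the exact expression used by the invention may differ, so the formula here is an assumption:

```python
import numpy as np

def offset_points(points, P_m, kappa_p, H_pj):
    """Hypothetical ray-tracing offset for one point-cloud layer.

    points  : (N, 3) lidar points (E, N, H) in the layer
    P_m     : (3,) UAV position (E_m, N_m, H_m) at the frame's scan moment
    kappa_p : track angle in radians (acute angle with north, v_N)
    H_pj    : assumed hyperspectral imaging-plane height

    Assumption: each point is shifted cross-track by similar triangles
    between the sensor ray and the imaging plane; heights are kept.
    Points are assumed to lie below the sensor (H_n < H_m).
    """
    E_m, N_m, H_m = P_m
    # Cross-track unit normal in the E-N plane (track runs along heading).
    cross = np.array([np.cos(kappa_p), -np.sin(kappa_p)])
    en = points[:, :2] - np.array([E_m, N_m])
    s = en @ cross                       # signed cross-track distance
    H_n = points[:, 2]
    scale = (H_n - H_pj) / (H_m - H_n)   # similar-triangles magnification
    shifted = points.copy()
    shifted[:, :2] += np.outer(s * scale, cross)  # push away from the track
    return shifted
```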
S2.2, establishing a digital surface model (laser radar point cloud rasterization):
a grid of rows and columns r×c is established, r×c being larger than the resolution of the hyperspectral image. Establishing the corresponding relation between each point and the grid by orthographic projection of the offset point cloud according to the coordinate grid parameters:
wherein the method comprises the steps ofIndicating lidar points->Coordinates of the corresponding grid in the digital surface model; (r, c) represents the abscissa of a grid in the lidar digital surface model.
The value of a single grid is the highest value of all the point height values in the grid. After traversing all grids, gray scale (height value) is linearly mapped into [0,1], and then a digital surface model is obtained.
If the density of the point cloud is insufficient or the grid parameters are selected too much, and therefore no corresponding laser radar points exist in part of the grids, the generated digital surface model possibly has salt and pepper noise, the noise reduction is carried out through median filtering in the preferred scheme, and in fact, the density of the point cloud is sufficient or the grids all have the corresponding laser radar points, and the noise reduction can be carried out through median filtering.
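A minimal rasterization sketch following the steps just described; the grid-extent parameter and the cell-index convention are assumptions:

```python
import numpy as np
from scipy.ndimage import median_filter

def rasterize_dsm(points, R, C, extent):
    """Rasterize an offset point cloud into a [0,1] digital surface model.

    points : (N, 3) offset point cloud (E, N, H)
    R, C   : grid rows/columns, chosen larger than the hyperspectral resolution
    extent : (E_min, E_max, N_min, N_max) coordinate-grid parameters (assumed)
    """
    E_min, E_max, N_min, N_max = extent
    # Orthographic projection: map E-N coordinates to grid indices.
    c = ((points[:, 0] - E_min) / (E_max - E_min) * (C - 1)).astype(int)
    r = ((N_max - points[:, 1]) / (N_max - N_min) * (R - 1)).astype(int)
    dsm = np.full((R, C), np.nan)
    for ri, ci, h in zip(r, c, points[:, 2]):
        # Cell value is the maximum point height falling in the cell.
        if np.isnan(dsm[ri, ci]) or h > dsm[ri, ci]:
            dsm[ri, ci] = h
    # Linearly map gray levels (heights) into [0, 1].
    h_min, h_max = np.nanmin(dsm), np.nanmax(dsm)
    dsm = (dsm - h_min) / (h_max - h_min)
    dsm = np.nan_to_num(dsm, nan=0.0)
    # Median filtering suppresses salt-and-pepper noise from empty cells.
    return median_filter(dsm, size=3)
```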
Step 3, according to the phase consistency principle, performing edge extraction on the digital surface model and the hyperspectral image, as shown in fig. 3, including the following steps:
single-band image in hyperspectral imageAnd digital surface model I DSM The method comprises the steps of (1) respectively obtaining gray level images, S is the total number of wave bands of a hyperspectral image, filtering the gray level images by using a two-dimensional Log-Gabor filter with an interval of 30 DEG, generating an edge image M of the gray level image by using a phase consistency algorithm, and dividing the edge image M into edge image groups for recording the hyperspectral imageEdge map of digital surface model M DSM
Recording the noise threshold value group of the edge image group of the highlight image asThreshold->Determined by the gray scale distribution of each edge map. Noise suppression is performed by traversing Gao Guangtu the full-band edge map of the image by the following formula:
wherein,is a hyperspectral image edge map after full-band noise reduction,>representing an upward rounding.
Recording the noise threshold value of the edge map of the digital surface model as T DSM Noise suppression is performed on the digital surface model edge map by the following formula:
a digital surface model edge map.
This step aims to improve the consistency of the spectral image and the digital surface model by means of a unified edge extraction algorithm, thereby facilitating the registration method of the image applicable in step 4.
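The noise-suppression formulas themselves are not reproduced here; the sketch below assumes a full-band majority vote in which a pixel is kept only if at least ⌈S/2⌉ band edge maps exceed their thresholds (the ⌈·⌉ matches the "rounding up" in the text, but the vote itself is an assumption), together with a simple single-threshold suppression for the DSM edge map:

```python
import numpy as np

def full_band_denoise(edge_maps, thresholds):
    """Hypothetical full-band voting denoise for hyperspectral edge maps.

    edge_maps  : (S, H, W) phase-congruency edge maps, one per band
    thresholds : (S,) per-band noise thresholds from each map's gray distribution
    """
    S = edge_maps.shape[0]
    votes = (edge_maps > thresholds[:, None, None]).sum(axis=0)
    # Keep a pixel as edge only with at least ceil(S/2) supporting bands.
    return (votes >= np.ceil(S / 2)).astype(np.float32)

def dsm_denoise(edge_map, T_dsm):
    """Single-threshold suppression for the DSM edge map (assumed form)."""
    return np.where(edge_map > T_dsm, edge_map, 0.0)
```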
And 4, registering the digital surface model and the hyperspectral image by an image registration method to generate a hyperspectral point cloud, wherein the method comprises the following steps of:
edge map of hyperspectral image after noise reduction in step 3Edge map of digital surface model as moving image +.>As a fixed image, registering the two images by adopting an image registration mode based on feature extraction to generate a two-dimensional affine transformation relation T affine
Record hyperspectral image as [ HSI ] 1 (x,y),HSI 2 (x,y),...,HSI S (x,y)]The resolution of the image is X Y, according to T affine Establishing pixel correspondence of an image of an X×Y hyperspectral image to an image of a digital surface model R×CI.e. registration is achieved.
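The patent specifies only a "feature-extraction-based" registration; the sketch below uses ORB features and RANSAC affine estimation from OpenCV as stand-ins to show how T_affine and the pixel correspondence could be obtained:

```python
import cv2
import numpy as np

def register_edge_maps(moving, fixed):
    """Estimate the 2x3 affine matrix T_affine mapping moving -> fixed.

    moving : noise-reduced hyperspectral edge map, converted to uint8 [0, 255]
    fixed  : noise-reduced DSM edge map, converted to uint8 [0, 255]
    ORB is an illustrative stand-in for the unspecified feature extractor.
    """
    orb = cv2.ORB_create(nfeatures=5000)
    kp_m, des_m = orb.detectAndCompute(moving, None)
    kp_f, des_f = orb.detectAndCompute(fixed, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_m, des_f), key=lambda m: m.distance)
    src = np.float32([kp_m[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches])
    # Robustly estimate a 2D affine transform with RANSAC.
    T_affine, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    return T_affine

# Pixel correspondence: (x, y) in the X-by-Y hyperspectral image maps to
# (r, c) = T_affine @ [x, y, 1] in the R-by-C digital surface model.
```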
The procedure of the invention can be summarized by the correspondence chain established between the lidar point cloud and the hyperspectral image pixels: each original point p_n corresponds to its offset point p'_n, the offset point corresponds to the digital-surface-model cell (r_n, c_n) from step 2, and T_affine maps that cell to the hyperspectral pixel (x, y), so each hyperspectral pixel (x, y) corresponds to an original point p_n.
examples
To verify the effect of the present invention, hyperspectral image-point cloud stereo registration based on ray-tracing correction was performed following the procedure of Embodiment 1.
Examples of the "straight-line flight attitude-stabilization conditions" set out in step 1 are shown in Figs. 4 and 5.
In this embodiment, the flight altitude fluctuates by about 1 meter, the flight approximates a straight path, the instantaneous heading-angle deviation does not exceed ±5°, and the flight pitch angle approaches 0.
The point-cloud layering result obtained in step 1 is shown in Fig. 6.
In step 2, the ray-tracing-based point-cloud offset result is shown in Fig. 7, where gray (red in the color version) is the original point cloud and black (blue in the color version) is the corrected point cloud. It can be observed that the corrected point cloud is offset away from both sides of the flight track, and that the offset at higher positions is larger than at lower ones.
The comparison between the digital surface model generated in step 2 (left) and the hyperspectral full-band mean image (right) is shown in Fig. 8. It can be observed that the digital surface model contains part of the three-dimensional information and resembles the hyperspectral image, although the features of the two images still differ considerably.
The digital-surface-model edge map (left) and the hyperspectral full-band edge map (right) generated in step 3 are shown in Fig. 9; it can be observed that the feature consistency of the two is markedly improved.
In step 4, the effect before and after registration between the hyperspectral image and the digital surface model is shown in Figs. 10-12: Fig. 10 shows the effect before registration, Fig. 11 the effect after registration, and Fig. 12 the effect after registration with the intersection-over-union regions marked. It can be observed that the registered images show a high degree of line-feature matching and a high intersection-over-union of the key regions.
The effect after stereo registration in step 4 is shown in Figs. 13-15: Figs. 13 and 14 are stereo registration effect diagrams of the invention, and Fig. 15 is a stereo registration partial effect diagram of the invention.
The stereo nature of the registration is embodied in the ray-tracing-based point-cloud correction of step 2. To illustrate the stereo effect, non-stereo registration was also performed with step 2 omitted; the non-stereo registration effect is shown in Figs. 16-17, where Fig. 16 is the non-stereo registration effect diagram and Fig. 17 the non-stereo registration partial effect diagram.
In non-stereo registration, the two kinds of data cannot be registered in the stereo dimension (e.g. on building wall surfaces); the stereo registration method of the invention achieves accurate registration of the three-dimensional information.
The above examples of the present invention only describe its calculation model and calculation flow in detail and do not limit its embodiments. Other variations and modifications of the above description will be apparent to those of ordinary skill in the art; it is not intended to be exhaustive of all embodiments, and all obvious variations and modifications derived therefrom remain within the scope of the invention.

Claims (10)

1. A hyperspectral image-point cloud three-dimensional registration method based on ray tracing correction, characterized by comprising the following steps:
S1, layering the lidar point cloud, comprising the following steps:
S1.1, calculating the spatial position and flight direction vector of the UAV at the scan moment of each hyperspectral frame:
let the trajectory data be {P_m, T_m, POS_m}, where m = 1, 2, ..., M and M is the number of hyperspectral image frames; P_m = (E_m, N_m, H_m) are the E-, N- and H-axis coordinates in the UTM coordinate system; T_m is a time stamp; POS_m = (ω_m, κ_m, φ_m) are the flight attitude angles, ω_m, κ_m and φ_m being the pitch, heading and yaw angles respectively;
the direction vector at the scan moment of frame m is calculated as

d_m = sin(κ_m)·cos(ω_m)·v_E + cos(κ_m)·cos(ω_m)·v_N + sin(ω_m)·v_H

where v_E, v_N, v_H are the unit direction vectors of the E, N and H axes in UTM coordinates;
S1.2, the spatial layer Z_m corresponding to a single hyperspectral scan frame is the slab between the front and rear layering planes Π_m and Π_{m+1}; the spatial layer Z_m corresponding to a single hyperspectral scan frame is determined from the layering planes Π_i;
a layering plane Π_i is determined by its plane normal vector d_i and a fixed point P_i on the plane, where i = 1, 2, ..., M, M+1;
S2, generating a digital surface model from the layering result by ray-tracing point-cloud offset projection, and establishing its correspondence with the original point cloud; the process of generating the digital surface model by ray-tracing point-cloud offset projection comprises the following steps:
S2.1, according to the point-cloud layering result of step 1, for any point p_n = (E_n, N_n, H_n) in the point-cloud layer corresponding to a given hyperspectral frame, the corresponding offset point p'_n = (E'_n, N'_n, H_n) is obtained from the ray-tracing offset formula,
where κ′ is the track angle, i.e. the acute angle formed with the north direction v_N; H_pj is the height of the hyperspectral imaging plane; and P_m = (E_m, N_m, H_m) are the spatial position coordinates of the UAV at the scan moment of the frame;
all points in the layer are traversed and offset-transformed while the point height H_n is kept unchanged, generating the offset point cloud;
S2.2, a grid with R rows and C columns is established, R×C being larger than the resolution of the hyperspectral image; the offset point cloud is orthographically projected according to the coordinate-grid parameters to establish the correspondence between each point and a grid cell, where (r_n, c_n) denotes the coordinates of the cell in the digital surface model corresponding to the lidar point p'_n, and (r, c) denotes the row and column coordinates of a cell in the lidar digital surface model;
the value of a single cell is the maximum of the height values of all points within the cell and is recorded as the cell's gray level; after all cells have been traversed, the gray levels are linearly mapped into [0,1] to obtain the digital surface model;
S3, extracting edges from the digital surface model and the hyperspectral image according to the phase-congruency principle;
S4, registering the digital surface model with the hyperspectral image by an image registration method to generate the hyperspectral point cloud.
2. The hyperspectral image-point cloud stereo registration method based on ray tracing correction according to claim 1, characterized in that the plane normal vector d_i of the layering plane Π_i in S1.2 is determined as follows:
for layering planes Π_2 through Π_M, the plane normal vector is the mean of the flight direction vectors d_{i-1} and d_i at the two adjacent moments;
the normal vectors of the first and last planes Π_1 and Π_{M+1} are the flight direction vectors d_1 and d_M at the start and end.
3. The hyperspectral image-point cloud stereo registration method based on ray tracing correction according to claim 2, characterized in that the fixed point P_i on the layering plane Π_i in S1.2 is determined as follows:
for layering planes Π_2 through Π_M, the fixed point is the midpoint of the flight coordinates P_{i-1} and P_i at the two adjacent moments;
the fixed point of the first plane Π_1 is determined by extending the segment between the fixed points of Π_2 and Π_3 backwards by an equal length;
the fixed point of the last plane Π_{M+1} is determined by extending the segment between the fixed points of Π_{M-1} and Π_M by an equal length.
4. The hyperspectral image-point cloud stereo registration method based on ray tracing correction according to claim 3, characterized in that in S1.2 the layering plane Π_i is determined from the plane normal vector d_i and the fixed point P_i on the plane as follows:
from the determined plane normal vector d_i, the coefficients A_i, B_i, C_i are obtained via d_i = A_i·v_E + B_i·v_N + C_i·v_H; based on the fixed point P_i = (E_i, N_i, H_i), whose components are the E-, N- and H-axis coordinates in the UTM coordinate system, the plane Π_i is:

Π_i: Π_i(p) = A_i·e + B_i·n + C_i·h − (A_i·E_i + B_i·N_i + C_i·H_i) = 0

where Π_i(p) denotes the function corresponding to plane Π_i, evaluated by substituting the coordinates of a point p = (e, n, h) in space; e, n and h are the E-, N- and H-axis coordinates in the UTM coordinate system, appearing as the independent variables of the function.
5. The hyperspectral image-point cloud stereo registration method based on ray tracing correction according to claim 4, characterized in that the spatial layer Z_m corresponding to a single hyperspectral scan frame is determined from the layering planes Π_i as follows:
the M+1 demarcation planes divide the point-cloud space into M scan layers Z_m corresponding to the hyperspectral image frames, each scan layer being bounded by the planes Π_m and Π_{m+1}; the necessary and sufficient condition for any point p_n = (E_n, N_n, H_n) of the point cloud to lie in scan layer Z_m is expressed as:

p_n ∈ Z_m ⇔ Π_m(p_n) ≥ 0 and Π_{m+1}(p_n) < 0

where ⇔ means the two sides are equivalent;
in frame-scan order, each point is substituted into the spatial-layer condition for verification; for a point p_n that satisfies the condition, the corresponding scan-layer index is recorded and the point takes no further part in the verification of subsequent layers; the correspondence between each lidar point and its spatial layer is thus computed from the layering-plane information.
6. The hyperspectral image-point cloud stereo registration method based on ray tracing correction according to claim 3, characterized in that in step 1.2, if the UAV flight satisfies the straight-line flight attitude-stabilization conditions (a) and (b), a fast point-cloud layering algorithm is used to determine the layering planes Π_i, and the spatial layer Z_m corresponding to a single hyperspectral scan frame is determined from the layering planes Π_i;
the straight-line flight attitude-stabilization conditions are:
(a) the flight trajectory approximates a straight path, i.e. the instantaneous heading angles κ_m of all trajectory points on the path tend to a constant K, with deviation not exceeding ±5°;
(b) within a single straight-line flight segment, the altitude fluctuation does not exceed 5 meters, and at all scan moments the flight pitch angle ω_m approaches 0 and does not exceed 10°;
the fast point-cloud layering algorithm determines the layering planes Π_i as follows:
when the flight path is relatively stable and approximately straight, the layering planes are perpendicular to the ground surface and mutually parallel, i.e. C_i = 0 and A_i, B_i are constants A, B; the plane Π_i is then determined by the fixed point P_i alone, expressed as:

Π_i: Π_i(p) = A·e + B·n − D_i = 0

where the plane-equation constant term D_i = A·E_i + B·N_i is obtained by substituting the fixed point's E- and N-axis coordinates, and A = sin K, B = cos K.
7. The hyperspectral image-point cloud stereo registration method based on ray tracing correction according to claim 6, characterized in that the spatial layer Z_m corresponding to a single hyperspectral scan frame is determined from the layering planes Π_i as follows:
in straight-line flight, the plane-group constant terms D_i form a monotonic sequence; the plane through any point p_n in space parallel to the plane group is written Π_n: A·e + B·n − D_n = 0, where the constant term D_n = A·E_n + B·N_n is obtained by substituting the point's E- and N-axis coordinates;
the necessary and sufficient condition for p_n to lie in scan layer Z_m is then expressed as:

p_n ∈ Z_m ⇔ D_m ≤ D_n < D_{m+1}

where D_m and D_{m+1} are the constant terms of the front and rear layering-plane equations Π_m and Π_{m+1} bounding scan layer Z_m;
when the flight data of the hyperspectral imager satisfy the straight-line attitude-stabilization conditions, the point cloud is first traversed to obtain a preliminary layering result p_n ∈ Z_m, which is then verified for correctness against the non-approximated plane equations; if verification fails, p_n is verified against the neighbouring layers Z_{m±t} until verification succeeds.
8. The hyperspectral image-point cloud stereo registration method based on ray tracing correction according to any one of claims 1 to 7, characterized in that the process of edge extraction from the digital surface model and the hyperspectral image comprises the following steps:
each single-band image I_s (s = 1, 2, ..., S) of the hyperspectral image and the digital surface model I_DSM are taken as gray-level images, S being the total number of bands of the hyperspectral image; the gray-level images are filtered with a two-dimensional Log-Gabor filter, and the edge map of each gray-level image is generated with the phase-congruency algorithm; the edge-map group of the hyperspectral image is denoted {M_s}, s = 1, 2, ..., S, and the edge map of the digital surface model M_DSM;
the noise-threshold group of the hyperspectral edge-map group is denoted {T_s}; noise suppression is performed by traversing the full-band edge maps of the hyperspectral image,
where the result is the hyperspectral image edge map after full-band noise reduction and ⌈·⌉ denotes rounding up;
the noise threshold of the digital-surface-model edge map is denoted T_DSM, and noise suppression is applied to the digital-surface-model edge map analogously, yielding the noise-reduced digital-surface-model edge map.
9. The hyperspectral image-point cloud stereo registration method based on ray tracing correction according to claim 8, characterized in that the process of registering the digital surface model with the hyperspectral image by the image registration method comprises the following steps:
with the noise-reduced hyperspectral edge map from step 3 as the moving image and the noise-reduced digital-surface-model edge map as the fixed image, the two images are registered with a feature-extraction-based image registration method, producing a two-dimensional affine transformation T_affine;
the hyperspectral image is written [HSI_1(x,y), HSI_2(x,y), ..., HSI_S(x,y)] with image resolution X×Y; according to T_affine, the pixel correspondence (x, y) → (r, c) from the X×Y hyperspectral image to the R×C digital-surface-model image is established, i.e. registration is achieved.
10. The hyperspectral image-point cloud stereo registration method based on ray tracing correction according to claim 9, characterized in that in S2.2, after the gray level of each grid cell has been determined, median filtering is applied over all cells.
CN202310633129.5A (priority 2023-05-31, filed 2023-05-31) | CN117092621A, pending | Hyperspectral image-point cloud three-dimensional registration method based on ray tracing correction

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202310633129.5A | 2023-05-31 | 2023-05-31 | Hyperspectral image-point cloud three-dimensional registration method based on ray tracing correction

Publications (1)

Publication Number | Publication Date
CN117092621A | 2023-11-21

Family

ID=88772366

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202310633129.5A (pending) | Hyperspectral image-point cloud three-dimensional registration method based on ray tracing correction | 2023-05-31 | 2023-05-31

Country Status (1)

Country | Link
CN | CN117092621A

Cited By (2)

Publication number | Priority date | Publication date | Assignee | Title
CN117872390A | 2024-03-11 | 2024-04-12 | 北京市农林科学院信息技术研究中心 | Image fusion method, hyperspectral laser radar sensor and hyperspectral laser radar system
CN117872390B | 2024-03-11 | 2024-05-31 | 北京市农林科学院信息技术研究中心 | Image fusion method, hyperspectral laser radar sensor and hyperspectral laser radar system

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination