CN117330031A - Unmanned aerial vehicle inclined image real-time positioning method integrating airborne LiDAR point cloud - Google Patents

Unmanned aerial vehicle inclined image real-time positioning method integrating airborne LiDAR point cloud

Info

Publication number
CN117330031A
CN117330031A (application CN202311303537.0A)
Authority
CN
China
Prior art keywords
point
unmanned aerial
aerial vehicle
image
point cloud
Prior art date
Legal status
Pending
Application number
CN202311303537.0A
Other languages
Chinese (zh)
Inventor
范伟
刘豪杰
刘朋俊
李超群
荆海峰
王钰菁
白正雄
裴浩阳
Current Assignee
Yellow River Engineering Consulting Co Ltd
Original Assignee
Yellow River Engineering Consulting Co Ltd
Priority date
Filing date
Publication date
Application filed by Yellow River Engineering Consulting Co Ltd filed Critical Yellow River Engineering Consulting Co Ltd
Priority to CN202311303537.0A
Publication of CN117330031A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 Interpretation of pictures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/17 Terrestrial scenes taken from planes or by drones

Abstract

The invention discloses a real-time positioning method for unmanned aerial vehicle oblique images fusing airborne LiDAR point clouds. The method rasterizes and images the airborne LiDAR point cloud data of the unmanned aerial vehicle to generate a digital surface model; after the oblique image is corrected to a horizontal image using its real-time POS data, the iterative intersection method achieves the geometric fusion of the oblique image and the airborne LiDAR point cloud data image, realizing real-time positioning of the oblique image. The method generates the digital surface model by rasterizing and imaging the airborne LiDAR point cloud data; achieves the geometric fusion of the oblique image and the airborne LiDAR point cloud using the real-time POS data of the unmanned aerial vehicle; and realizes whole-frame positioning of the oblique image through pixel-by-pixel calculation. Compared with the prior art, the method is highly real-time, requires neither ground control points nor stereo pairs, and avoids time-consuming operations such as image feature matching and optical-flow matching.

Description

Unmanned aerial vehicle inclined image real-time positioning method integrating airborne LiDAR point cloud
Technical Field
The invention relates to the technical field of image positioning, and in particular to a real-time positioning method for unmanned aerial vehicle oblique images fusing airborne LiDAR point clouds.
Background
In recent years, serious geological disasters such as floods, landslides, debris flows and ground subsidence have occurred in some areas, causing great losses of life and property. Unmanned aerial vehicles can serve as a safety guard, performing safety inspection, hazard investigation, emergency mapping, early warning and assessment in geological-disaster-prone areas, and they play an increasingly important role as the technology advances. During unmanned aerial vehicle operation, key target areas can be acquired, identified and positioned by shooting images in real time.
Currently, existing image positioning methods fall into four main categories: photogrammetry, binocular vision, optical flow, and point cloud-image feature matching.
(1) The photogrammetry method first captures a set of aerial images with a certain degree of overlap using an unmanned aerial vehicle; it then achieves relative positioning between adjacent aerial images through collinearity/coplanarity equations with the aid of image matching; finally, it completes the absolute positioning of the aerial images with the aid of a certain number of ground control points using network adjustment. Photogrammetry can currently automate large-scale topographic mapping with 5 cm accuracy, but it is a post-processing method and can hardly meet the real-time requirements of applications such as unmanned aerial vehicle safety inspection and emergency mapping.
(2) The binocular vision method calibrates a fixed binocular camera with existing calibration methods (the DLT method, the PnP method, the PST method and the like) to determine its accurate position and attitude; it then performs feature matching on the two overlapping images taken by the binocular camera using image matching; finally, it intersects the three-dimensional coordinates of the object points corresponding to the same-name image points on the two images by the principle of similar triangles, completing the absolute positioning of the images. Binocular vision can achieve real-time absolute positioning of images, but it requires the positions and attitudes of the binocular cameras to be strictly fixed, which limits the field of view and cannot meet the mobility and large-scene requirements of unmanned aerial vehicle safety inspection or emergency mapping.
(3) The optical flow method obtains a set of highly overlapping sequential video frames with a video camera; it then builds an optical flow model under certain assumptions (small displacement, constant brightness, consistent regional motion and the like); finally, it calculates the displacement of, or tracks, specific points (regions) in the video frames with the constructed model, achieving relative positioning between adjacent images. Optical flow is widely applied in motion tracking, image stitching, medical modelling and other fields, but it is a relative positioning method and cannot meet the absolute positioning requirements of unmanned aerial vehicle safety inspection or emergency mapping.
(4) The point cloud-aerial image feature matching method acquires point cloud data and aerial image data of a target area; it converts the unordered three-dimensional point cloud into an ordered two-dimensional point cloud image, using information such as elevation and reflection intensity as gray values; it then matches features between the point cloud image and the aerial image using image matching, thereby positioning the aerial image. This method is still at a preliminary research stage, and its matching accuracy and stability cannot yet meet the reliability requirements of unmanned aerial vehicle safety inspection or emergency mapping.
In summary, existing image positioning methods can hardly meet the requirements of application scenarios such as unmanned aerial vehicle safety inspection or emergency mapping for real-time, absolute, mobile and reliable image positioning.
Disclosure of Invention
Aiming at the problem of real-time positioning of unmanned aerial vehicle oblique images, the invention provides a real-time positioning method for unmanned aerial vehicle oblique images fusing airborne LiDAR point clouds.
In order to achieve the above purpose, the invention adopts the following technical scheme:
according to the unmanned aerial vehicle inclined image real-time positioning method integrating the airborne LiDAR point cloud, the unmanned aerial vehicle airborne LiDAR point cloud data is subjected to rasterization and imaging processing, and a digital earth surface model is generated; and after the unmanned aerial vehicle inclined image is corrected to be a horizontal image by utilizing real-time POS data of the unmanned aerial vehicle inclined image, the geometrical fusion of the unmanned aerial vehicle inclined image and the airborne LiDAR point cloud data image is realized by utilizing an iterative intersection method, and the real-time positioning of the unmanned aerial vehicle inclined image is realized.
Further, the airborne LiDAR point cloud data rasterization specifically comprises: determining the row width and column width of a single grid by comprehensively considering the terrain complexity, the point cloud density and the positioning accuracy, and establishing a regular two-dimensional grid network; and determining the grids corresponding to the point cloud data according to the horizontal coordinate values of the airborne LiDAR point cloud data, and storing the data into the corresponding grids.
Further, the imaging processing specifically comprises determining the gray value of each grid according to the point cloud data around each grid to be imaged, as follows:
For a grid to be imaged containing 0 point cloud data, take the grid to be imaged as the central grid and sequentially search the point cloud data in all grids along the 8 directions around the central grid; calculate the expectation and middle error of the gray value of the central grid from the search result, and thereby determine the gray value of the central grid.
For a grid to be imaged containing 2 or fewer point cloud data, take the grid to be imaged as the central grid and search the point cloud data in all grids within a set range around the central grid; if the total number of the searched point cloud data is smaller than a set threshold, determine the gray value of the grid according to the step for a grid containing 0 point cloud data; otherwise, calculate the gray-value expectation and middle error of the central grid according to a two-dimensional Gaussian function, and determine the gray value of the central grid.
For a grid to be imaged containing more than 2 point cloud data, sort all the point cloud data in the grid by elevation value and take the median of the elevation sequence as the gray value of the grid to be imaged.
Further, the search result is handled as follows: if a grid containing more than 2 point cloud data is found in a search direction, the search in that direction stops and the point cloud data of that grid are taken as the point cloud data of that direction; if no grid containing more than 2 point cloud data is found in a search direction, the point cloud data of that direction is empty; if the point cloud data of more than 4 search directions are empty, the central grid lies at the map edge and its gray value need not be determined.
Further, achieving the geometric fusion of the unmanned aerial vehicle oblique image and the airborne LiDAR point cloud data image by the iterative intersection method specifically comprises the following steps:
S1, using the real-time POS data of the unmanned aerial vehicle oblique image, acquire the coordinates of the photographing centre S of the oblique image and the coordinates of the principal point O_N of the corrected horizontal image;
S2, from the point S, drop a vertical line through the point O_N, intersecting the digital surface model at a point N, and determine the horizontal coordinates of N through the projection relation;
S3, determine the gray value Z_N of the grid corresponding to the point N from the horizontal coordinates of N, and calculate the length H_SN of the vertical line SN;
S4, select any point a_t on the oblique image and determine the coordinates of its corresponding point a_v on the corrected horizontal image;
S5, from the photographing centre S, cast a ray through the point a_t, intersecting the digital surface model at a point A;
S6, from the point N, draw a horizontal line intersecting the ray at a point A_0, and determine the coordinates of A_0 by the principle of similar triangles;
S7, from the point A_0, drop a vertical line intersecting the digital surface model at a point G_1, and determine the horizontal coordinates of G_1 through the projection relation;
S8, from the point G_1, draw a horizontal line intersecting the ray at a point A_1 and intersecting the extension of the vertical line SN at a point N_1;
S9, determine the gray value Z_G1 of the grid corresponding to G_1 from the horizontal coordinates of G_1, and calculate the length H_SN1 of the vertical line SN_1;
S10, determine the coordinates of the point A_1 by the principle of similar triangles;
S11, repeat steps S7-S10 until the height difference between the point A_i and the point A_{i-1} is smaller than a preset threshold; the coordinates of A_i are then taken as the coordinates of the point A.
The method has the following advantages: the digital surface model is generated by rasterizing and imaging the airborne LiDAR point cloud data of the unmanned aerial vehicle; the geometric fusion of the oblique image and the airborne LiDAR point cloud is achieved using the real-time POS data of the unmanned aerial vehicle; and whole-frame positioning of the oblique image is achieved through pixel-by-pixel calculation. Compared with the prior art, the method is highly real-time, requires neither ground control points nor stereo pairs, and avoids time-consuming operations such as image feature matching and optical-flow matching.
Meanwhile, the method has high stability and real-time positioning accuracy. It fully accounts for point cloud denoising, point cloud fitting and oblique image correction, and avoids the mismatches caused by illumination, motion, occlusion and similar objects in computer vision algorithms. The calculation results are stable and reliable for unmanned aerial vehicle safety inspection and emergency mapping over hills, urban areas, river channels and various large-height-difference areas, and whole-frame positioning of an unmanned aerial vehicle oblique image can be achieved with 0.5 m accuracy within 0.38 s.
Finally, the method can be written entirely autonomously on a C++ platform without depending on any pre-processing or post-processing software, has strong extensibility and expansibility, and can rapidly meet the application requirements of different related scenarios.
Drawings
FIG. 1 is a schematic diagram of rasterization of an airborne LiDAR point cloud in a method of the present invention.
FIG. 2 is a schematic diagram of the search along the eight directions of the "米" (rice-character) pattern in the method of the present invention.
Fig. 3 is a schematic diagram of imaging a grid to be imaged containing 2 or fewer point cloud data in the method of the present invention.
Fig. 4 is a schematic diagram of the principle of iterative intersection method in the method of the present invention.
FIG. 5 is a flow chart of an embodiment of the method of the present invention.
Fig. 6 is an illustration of an imaging application effect of an airborne LiDAR point cloud in the method of the present invention.
Fig. 7 shows the effect of the method of the present invention in hilly areas.
Fig. 8 shows the effect of the method according to the invention in urban areas.
Fig. 9 shows the effect of the method of the present invention in river region.
Fig. 10 shows the effect of the method of the present invention in large-level-difference terrain.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely. Obviously, the described embodiments are only some, rather than all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Example 1: the invention discloses a detailed description of an unmanned aerial vehicle inclined image real-time positioning method fusing airborne LiDAR point clouds
In the unmanned aerial vehicle oblique image real-time positioning method fusing airborne LiDAR point clouds, the airborne LiDAR point cloud data of the unmanned aerial vehicle are rasterized and imaged to generate a digital surface model; after the oblique image is corrected to a horizontal image using its real-time POS data, the iterative intersection method achieves the geometric fusion of the oblique image and the airborne LiDAR point cloud data image, realizing real-time positioning of the oblique image. The method comprises the following steps:
first, an unordered onboard LiDAR point cloud needs to be rasterized for organization and management. As shown in fig. 1, the algorithm steps are as follows:
first, a regular two-dimensional grid network is established. Wherein,is the origin of coordinates of the two-dimensional grid network; ->Indicate the number of lines->Representing the number of columns, the line width of a single grid is defined as +.>Column width is defined as +.>The values of the single grid line width and the column width can be set by comprehensively considering the complexity of the terrain, the density of the point cloud, the precision requirement of the product and the like.
Second, the airborne LiDAR point cloud is stored into the grid. According to the horizontal coordinate values (X_i, Y_i) of the airborne LiDAR points, the grid row and column numbers corresponding to each point are determined, and the points are stored into the corresponding grids in turn. The row and column numbers of each point are determined by formula (1):

$$ r_i=\left\lceil \frac{Y_{\max}-Y_i}{dY} \right\rceil,\qquad c_i=\left\lceil \frac{X_i-X_{\min}}{dX} \right\rceil \tag{1} $$

where r_i and c_i are the row and column numbers of the i-th point, i = 1, 2, ..., n, with n the total number of points in the point cloud; (X_i, Y_i) are the plane coordinate values of the i-th point; X_min is the minimum value of the point cloud in the X direction and Y_max the maximum value in the Y direction; dX is the column width of a single grid and dY its row height.
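As an illustration of this grid-storage step, the following Python sketch buckets points into a regular two-dimensional grid keyed by (row, column). It is a minimal sketch assuming an (n, 3) array of (X, Y, Z) points and an upper-left origin; the 0-based floor indexing is equivalent to the 1-based ceiling of formula (1).

```python
import numpy as np

def rasterize(points, dx, dy):
    """Bucket LiDAR points into a regular 2-D grid keyed by (row, col)."""
    x, y = points[:, 0], points[:, 1]
    x_min, y_max = x.min(), y.max()          # grid origin: upper-left corner
    cols = ((x - x_min) // dx).astype(int)   # column index grows with X
    rows = ((y_max - y) // dy).astype(int)   # row index grows as Y decreases
    grid = {}
    for r, c, p in zip(rows, cols, points):
        grid.setdefault((int(r), int(c)), []).append(p)
    return grid
```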
Secondly, imaging processing is carried out on the airborne LiDAR point cloud data, namely the gray value of each grid is determined according to the point cloud data around each grid to be imaged, generating the digital surface model. The specific algorithm is as follows:
due to the defects of an onboard LiDAR system and the interference of external environment, the acquired original point cloud data has disorder and non-uniformity and contains a small number of noise points. Therefore, the rasterized point cloud data still cannot meet the application requirements and needs to be further imaged.
According to the distribution condition of point cloud data in a grid network and considering the requirements of denoising and fitting, the invention provides an imaging method of 3 types of point clouds, namely: the number of point cloud data in a single grid to be imaged is 0, and the number of point cloud in the single grid to be imaged is 2 or less, including 1 scene and 2 scenes; the number of point clouds in a single grid to be imaged is more than 2.
Due to non-uniformity of the distribution of the point cloud, refraction of the water surface, absorption of artificial materials and the like, more grids without the point cloud can appear in the grid network.
First, for a grid to be imaged containing 0 point cloud data, the invention proposes a "米"-shape (eight-direction) search and denoising method to image it, as shown in FIG. 2.
Sequentially searching point cloud data in all grids along the 8 directions around the center grid by taking the grid to be imaged as the center grid; calculating the expected and middle errors of the gray value of the central grid according to the search result, and further determining the gray value of the central grid;
in the searching process, if grids containing more than 2 point cloud data are searched in a certain searching direction, stopping searching and taking the grids containing more than 2 point cloud data as the point cloud data in the searching direction; if a grid containing more than 2 point cloud data is not searched in a certain searching direction, the point cloud data in the searching direction is empty; if the point cloud data of more than 4 searching directions are empty, the central grid is the map edge, and the gray value of the central grid is not required to be determined.
The gray-value expectation of the central grid is calculated by formula (2):

$$ E_Z=\frac{\sum_{k=1}^{8} p_k\,q_k\,\bar z_k}{\sum_{k=1}^{8} p_k\,q_k},\qquad \bar z_k=\frac{1}{n_k}\sum_{j=1}^{n_k} z_{kj} \tag{2} $$

where E_Z is the gray-value expectation of the central grid; k indexes the 8 search directions of the "米" pattern; n_k is the number of point cloud points found in direction k; z_{kj} is the elevation of the j-th point in the point cloud set found in direction k; p_k is the distance weight of direction k, taken inversely proportional to the distance between the grid (r_k, c_k) containing the point cloud found in direction k and the central grid (r_0, c_0); and q_k = n_k / Σ_k n_k is the quantity weight of direction k.

The middle error is calculated by formula (3):

$$ \sigma_Z=\sqrt{\frac{\sum_{k=1}^{8}\sum_{j=1}^{n_k}\left(z_{kj}-E_Z\right)^2}{\sum_{k=1}^{8} n_k}} \tag{3} $$

The gray value of the central grid is calculated by formula (4):

$$ Z_0=\frac{1}{m}\sum_{\left|z_{kj}-E_Z\right|\le T} z_{kj} \tag{4} $$

where Z_0 is the gray value of the central grid, i.e. its elevation fitting value; the restriction |z_{kj} − E_Z| ≤ T, with T a threshold set from the middle error σ_Z, means that only the m points satisfying it participate in the calculation of the gray value of the central grid.
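To make this concrete, here is a minimal Python sketch of the "米"-shape fill for an empty grid, continuing the grid dictionary of the previous sketch. The inverse-distance direction weight, the 2σ rejection gate and the search cut-off max_steps are assumptions standing in for the unreadable formula images.

```python
import numpy as np

DIRS = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]

def fill_empty_cell(grid, r0, c0, n_rows, n_cols, max_steps=50):
    hits = []                                    # (distance, elevations) per direction
    for dr, dc in DIRS:
        for step in range(1, max_steps):
            r, c = r0 + dr * step, c0 + dc * step
            if not (0 <= r < n_rows and 0 <= c < n_cols):
                break
            pts = grid.get((r, c), [])
            if len(pts) > 2:                     # first grid with >2 points ends the search
                hits.append((step, np.array([p[2] for p in pts])))
                break
    if len(DIRS) - len(hits) > 4:                # >4 empty directions: map edge, skip
        return None
    p = np.array([1.0 / d for d, _ in hits])     # assumed inverse-distance weight p_k
    q = np.array([len(z) for _, z in hits], float)
    q /= q.sum()                                 # quantity weight q_k
    means = np.array([z.mean() for _, z in hits])
    expect = np.sum(p * q * means) / np.sum(p * q)       # formula (2)
    z_all = np.concatenate([z for _, z in hits])
    sigma = np.sqrt(np.mean((z_all - expect) ** 2))      # formula (3)
    keep = z_all[np.abs(z_all - expect) <= 2.0 * sigma]  # assumed 2-sigma gate, formula (4)
    return float(keep.mean()) if keep.size else float(expect)
```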
Second, for a grid to be imaged containing 2 or fewer point cloud data, i.e. a single grid to be imaged containing 1 or 2 points, the specific method is shown in FIG. 3.
Taking the grid to be imaged as the central grid, search the point cloud data in all grids within a set range around the central grid. If the total number of searched point cloud data is smaller than a set threshold, determine the gray value of the grid to be imaged according to the step for a grid containing 0 point cloud data; otherwise, calculate the gray-value expectation of the central grid according to a two-dimensional Gaussian function together with the middle error, and determine the gray value of the central grid.
the gray value expectation of the center grid is calculated by using a two-dimensional Gaussian function, and the formula is as follows:
(5)
wherein,a row and column number representing the center grid; />The search range is represented and can be customized. Can let->Line numbers representing grids within the search range; />Indicate->Line->The number of point clouds in the grid of the column; />Indicate->Line->First +.in the set of point clouds searched in the grid of columns>Elevation of individual points>Representing a two-dimensional Gaussian function>,/>Representing the variance of a two-dimensional Gaussian function, let +.>
The error in the calculation is as follows:
(6)
the gray value of the center grid is calculated as follows:
(7)
wherein,representing the gray value of the center grid, namely the elevation fitting value of the center grid;representing the restriction, i.e. satisfying only the restriction +.>And the calculation of the gray value of the central grid is participated.
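A matching sketch of the Gaussian fallback for grids holding 1-2 points follows; the search half-width w, the σ value, the minimum-neighbour count min_points and the 2σ gate are illustrative assumptions, not values fixed by the patent text.

```python
import numpy as np

def gaussian_fill(grid, r0, c0, w=3, sigma=1.5, min_points=5):
    samples, num, den = [], 0.0, 0.0
    for r in range(r0 - w, r0 + w + 1):
        for c in range(c0 - w, c0 + w + 1):
            g = np.exp(-((r - r0) ** 2 + (c - c0) ** 2) / (2.0 * sigma ** 2))
            for p in grid.get((r, c), []):
                num += g * p[2]                  # Gaussian-weighted elevation sum
                den += g
                samples.append(p[2])
    if len(samples) < min_points:                # too few neighbours: caller falls
        return None                              # back to the 0-point procedure
    expect = num / den                           # formula (5)
    z = np.array(samples)
    sigma_z = np.sqrt(np.mean((z - expect) ** 2))        # formula (6)
    keep = z[np.abs(z - expect) <= 2.0 * sigma_z]        # assumed gate, formula (7)
    return float(keep.mean()) if keep.size else float(expect)
```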
Third, for a grid to be imaged containing more than 2 point cloud data, i.e. a single grid to be imaged containing 3 or more points.
When the number of point cloud data in a single grid to be imaged is 3 or more, in order to express the surface elevation more quickly while meeting the needs of point cloud denoising and fitting, the grid is imaged with a median filter: all the point cloud data in the grid are sorted by elevation value, and the median of the elevation sequence is taken as the gray value of the grid to be imaged, as in formula (8).
$$ Z_0=\operatorname{median}\left\{\,z_j \;:\; j=1,2,\dots,n_{rc}\,\right\} \tag{8} $$

where Z_0 is the gray value of the grid, i.e. its elevation fitting value; median{·} is the elevation median-filter function; (r, c) are the row and column numbers of the grid; and {z_j} is the set of point cloud elevations in the grid.
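The median case is a one-liner; the sketch below also adds a small dispatcher over the three grid types so the imaging step can run end to end. It is a sketch built on the previous helpers; the names median_fill and build_dsm are illustrative, not from the patent.

```python
import numpy as np

def median_fill(cell_points):
    # Formula (8): the grid's gray value is the median of its point elevations.
    return float(np.median([p[2] for p in cell_points]))

def build_dsm(grid, n_rows, n_cols):
    """Image every grid cell, returning a dict of elevation fitting values."""
    z = {}
    for r in range(n_rows):
        for c in range(n_cols):
            pts = grid.get((r, c), [])
            if len(pts) > 2:
                z[(r, c)] = median_fill(pts)           # 3+ points: median filter
            elif pts:
                val = gaussian_fill(grid, r, c)        # 1-2 points: Gaussian fit
                if val is None:                        # too few neighbours found
                    val = fill_empty_cell(grid, r, c, n_rows, n_cols)
                z[(r, c)] = val
            else:                                      # empty grid ("米" search;
                z[(r, c)] = fill_empty_cell(grid, r, c, n_rows, n_cols)  # None at edges)
    return z
```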
Finally, using the real-time POS data of the unmanned aerial vehicle oblique image, the oblique image is corrected to a horizontal image, after which the iterative intersection method achieves the geometric fusion of the oblique image and the airborne LiDAR point cloud data image, realizing real-time positioning of the oblique image. Specifically:
the real-time POS data of unmanned aerial vehicle image is utilized, the inclined image shot by the unmanned aerial vehicle is corrected into a horizontal image, and the horizontal image is expressed as follows:
(9)
wherein,representing the coordinates of the image plane of the image point on the tilted image>Representing image plane coordinates of an image point on a corresponding horizontal shot; />Representing a cameraA focal length; />Respectively representing pitch angle, roll angle and deflection angle, and providing by real-time POS data of unmanned aerial vehicle images;
expanding the formula (9) to obtain
(10)
Wherein,therefore, the image plane coordinates of the horizontal photo corresponding to the image plane coordinates of the unmanned aerial vehicle inclined image can be calculated, and the unmanned aerial vehicle inclined image is corrected to be the horizontal image.
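A minimal sketch of this correction, assuming image-plane coordinates in the same units as the focal length and POS angles in radians; the rotation-axis order below is one common photogrammetric convention and may differ from the patent's original, whose formula images are unreadable.

```python
import numpy as np

def rotation_matrix(pitch, roll, yaw):
    """Rotation from tilted-camera axes to level axes (assumed axis order)."""
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    cy, sy = np.cos(yaw), np.sin(yaw)
    r_pitch = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    r_roll = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    r_yaw = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return r_pitch @ r_roll @ r_yaw

def tilt_to_horizontal(x_t, y_t, f, pitch, roll, yaw):
    """Map oblique image-plane coordinates to the level photo, formula (10)."""
    a = rotation_matrix(pitch, roll, yaw) @ np.array([x_t, y_t, -f])
    return -f * a[0] / a[2], -f * a[1] / a[2]
```

With zero angles the mapping reduces to the identity, which is a quick sanity check of the assumed convention.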
The principle of achieving the geometric fusion of the unmanned aerial vehicle oblique image and the airborne LiDAR point cloud data image by the iterative intersection method is shown in FIG. 4. Ground denotes the digital surface model, provided by the point cloud image. The point S is the photographing centre of the oblique image taken by the unmanned aerial vehicle; its three-dimensional coordinates (X_S, Y_S, Z_S) can be provided by the real-time POS data of the oblique image. P_t denotes the oblique image taken by the unmanned aerial vehicle and P_v the horizontal image corresponding to it; through the process of correcting the oblique image to a horizontal image, the horizontal-image coordinates corresponding to any point of the oblique image P_t can be obtained.
The point O in FIG. 4 is the principal point of the oblique image, located on P_t, with the straight line SO ⊥ P_t. The corresponding point of O on the horizontal image is the point O_N, located on P_v, with the straight line SO_N ⊥ P_v. From the three-dimensional coordinates of the point O, the three-dimensional coordinates of O_N can be determined. Likewise, from any image point a_t on the oblique image P_t, the three-dimensional coordinates of its corresponding image point a_v on the horizontal image P_v can be determined. The geometric fusion of the oblique image and the airborne LiDAR point cloud data image consists in determining, for any image point a_t on P_t, the coordinates of its corresponding point A on the digital surface model, the points S, a_v, a_t and A being collinear.
The invention adopts the iterative intersection method to determine the coordinates of the point A on the digital surface model corresponding to any image point a_t on the oblique image P_t. From the point S, a vertical line is dropped through the point O_N, intersecting the digital surface model at a point N; from the vertical projection relation, the X- and Y-coordinates of N are (X_N, Y_N) = (X_S, Y_S). Meanwhile, from the point S a ray Sa_t is cast through the image point a_t. A horizontal line is then drawn from the point N, intersecting the ray at a point A_0. From the X- and Y-coordinates of N, the gray value Z_N of the grid corresponding to N is determined and the length of the vertical line SN is calculated. Then, as shown in FIG. 4, the three-dimensional coordinates of A_0 are found from the similarity of the triangles SO_N a_v and SNA_0. Next, a vertical line is dropped from A_0 to the digital surface model at a point G_1, whose horizontal coordinates are determined through the projection relation; a horizontal line is then drawn from G_1, intersecting the ray Sa_t at a point A_1 and the extension of the vertical line SN at a point N_1. Since G_1 and N_1 lie on the same horizontal line, the horizontal coordinates of N_1 follow from those of G_1; through the horizontal coordinates of N_1, the gray value Z_N1 of the grid corresponding to N_1 is determined and the length of the vertical line SN_1 is calculated. Continuing, the three-dimensional coordinates of A_1 are found from the similarity of the triangles SO_N a_v and SN_1A_1. The iteration proceeds in this way until the height difference between the points A_i and A_{i-1} computed in two successive rounds is smaller than a preset threshold, whereupon the coordinates of A_i are taken as the coordinates of the point A on the digital surface model corresponding to the image point a_t on the oblique image P_t.
The specific steps are as follows:
S1, using the real-time POS data of the unmanned aerial vehicle oblique image, acquire the coordinates (X_S, Y_S, Z_S) of the photographing centre S of the oblique image and the coordinates of the principal point O_N of the corrected horizontal image;
S2, from the point S, drop a vertical line through the point O_N, intersecting the digital surface model at a point N; the horizontal coordinates of N are determined through the projection relation, namely (X_N, Y_N) = (X_S, Y_S);
S3, determine the gray value Z_N of the grid corresponding to the point N from the horizontal coordinates of N, and calculate the length H_SN of the vertical line SN, formula (11):

$$ Z_N=f_{\mathrm{DSM}}\!\left(X_N,\,Y_N\right),\qquad H_{SN}=Z_S-Z_N \tag{11} $$

where f_DSM(·) is the elevation retrieval function, i.e. it retrieves the gray (elevation) value of the corresponding grid in the point cloud image from horizontal coordinates.
S4, select any point a_t on the oblique image and determine the coordinates (x_v, y_v) of its corresponding point a_v on the corrected horizontal image;
S5, from the photographing centre S, cast a ray through the point a_t, intersecting the digital surface model at a point A;
S6, from the point N, draw a horizontal line intersecting the ray at a point A_0, and determine the coordinates of A_0 by the principle of similar triangles, formula (12):

$$ X_{A_0}=X_S+\frac{H_{SN}}{f}\,x_v,\qquad Y_{A_0}=Y_S+\frac{H_{SN}}{f}\,y_v,\qquad Z_{A_0}=Z_S-H_{SN} \tag{12} $$

where (X_{A_0}, Y_{A_0}, Z_{A_0}) are the three-dimensional coordinates of the point A_0; (x_v, y_v) are the coordinates of the point a_v on the horizontal photo P_v, obtained by formula (10); f is the camera focal length; H_SN is the initial height difference, obtained by formula (11); and (X_S, Y_S, Z_S) are the three-dimensional coordinates of the photographing centre S.
S7, from the point A_0, drop a vertical line intersecting the digital surface model at a point G_1, and determine the horizontal coordinates of G_1 through the projection relation;
S8, from the point G_1, draw a horizontal line intersecting the ray at a point A_1 and intersecting the extension of the vertical line SN at a point N_1;
S9, determine the gray value Z_G1 of the grid corresponding to G_1 from the horizontal coordinates of G_1, and calculate the length H_SN1 of the vertical line SN_1;
S10, determine the coordinates of the point A_1 by the principle of similar triangles, formula (13):

$$ X_{A_1}=X_S+\frac{H_{SN_1}}{f}\,x_v,\qquad Y_{A_1}=Y_S+\frac{H_{SN_1}}{f}\,y_v,\qquad Z_{A_1}=Z_S-H_{SN_1} \tag{13} $$

where (X_{A_1}, Y_{A_1}, Z_{A_1}) are the three-dimensional coordinates of the point A_1; (x_v, y_v) are obtained by formula (10); f is the camera focal length; H_SN1 = Z_S − f_DSM(X_{G_1}, Y_{G_1}) is the iterated height difference, the transition point G_1 with horizontal coordinates (X_{G_1}, Y_{G_1}) being obtained via formula (12); and f_DSM(·) is as in formula (11).
S11, repeat steps S7-S10 until the height difference between the point A_i and the point A_{i-1} is smaller than a preset threshold; the coordinates of A_i are then taken as the coordinates of the point A.
Solving every pixel on the unmanned aerial vehicle oblique image one by one according to the above steps yields the three-dimensional coordinates of the ground point corresponding to each pixel, thereby achieving whole-frame positioning of the oblique image.
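The S1-S11 loop collapses to a short fixed-point iteration over the height difference. The sketch below assumes a dsm(x, y) callable implementing the elevation-retrieval function f_DSM of formula (11) and the (x_v, y_v) coordinates from formula (10); the convergence tolerance and iteration cap are illustrative choices.

```python
def iterative_intersection(S, x_v, y_v, f, dsm, tol=0.01, max_iter=50):
    """Ground coordinates of the pixel whose level-photo coords are (x_v, y_v)."""
    Xs, Ys, Zs = S                        # S1: exposure centre from real-time POS
    h = Zs - dsm(Xs, Ys)                  # S2-S3: initial height difference H_SN
    z_prev = None
    for _ in range(max_iter):
        X = Xs + (h / f) * x_v            # S6/S10: similar triangles, formulas (12)/(13)
        Y = Ys + (h / f) * y_v
        Z = Zs - h
        if z_prev is not None and abs(Z - z_prev) < tol:
            return X, Y, Z                # S11: successive heights converged
        z_prev = Z
        h = Zs - dsm(X, Y)                # S7-S9: drop onto the DSM, update H_SN_i
    return X, Y, Z
```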
Example 2: the invention discloses an implementation process of an unmanned aerial vehicle inclined image real-time positioning method fusing airborne LiDAR point clouds
As shown in FIG. 5, in step 501, on-board LiDAR point cloud data is rasterized. Firstly, constructing a regular two-dimensional grid network; then, the onboard LiDAR point cloud data is stored in a rasterization mode, namely the onboard LiDAR point cloud is sequentially stored in corresponding grids according to the horizontal coordinates of the onboard LiDAR point cloud.
In step 502, on-board LiDAR point cloud data stored in a grid is imaged. And imaging the grid point cloud under three conditions: the number of point clouds in a single grid is 0, the number of point clouds in a single grid is 1 or 2, and the number of point clouds in a single grid is 3 or more.
In step 503, the unmanned aerial vehicle tilt image is corrected to a horizontal image using the real-time POS data.
In step 504, the geometrical fusion of the unmanned aerial vehicle oblique image and the onboard LiDAR point cloud image is achieved using the real-time POS data.
In step 505, three-dimensional coordinates of ground points corresponding to the unmanned aerial vehicle oblique image are calculated pixel by pixel, so that whole-frame positioning of the unmanned aerial vehicle oblique image is realized.
In step 506, it is determined whether a new unmanned aerial vehicle oblique image requires real-time positioning; if yes, return to step 503; if not, end. A hedged end-to-end sketch of this loop follows.
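A minimal end-to-end driver for steps 501-506 might look as follows, reusing the helpers from the earlier sketches; the image_stream of (image, pos) pairs, the pos attribute names and the dsm_lookup wrapper are hypothetical placeholders for real-time data interfaces that the patent does not specify.

```python
def run_pipeline(points, dx, dy, n_rows, n_cols, image_stream, dsm_lookup):
    grid = rasterize(points, dx, dy)                 # step 501: rasterization
    elevations = build_dsm(grid, n_rows, n_cols)     # step 502: imaging
    dsm = lambda x, y: dsm_lookup(elevations, x, y)  # elevation retrieval f_DSM
    for image, pos in image_stream:                  # steps 503-506, per image
        frame = []
        for x_t, y_t in image.pixel_coords():        # hypothetical pixel iterator
            x_v, y_v = tilt_to_horizontal(           # step 503: level correction
                x_t, y_t, pos.f, pos.pitch, pos.roll, pos.yaw)
            frame.append(iterative_intersection(     # steps 504-505: fusion and
                (pos.Xs, pos.Ys, pos.Zs), x_v, y_v, pos.f, dsm))  # positioning
        yield image, frame                           # whole-frame ground coordinates
```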
Example 3: the invention discloses a specific application of an unmanned aerial vehicle inclined image real-time positioning method fusing airborne LiDAR point clouds
According to steps 501 and 502, the airborne LiDAR point cloud of a region is rasterized and imaged, as shown in FIG. 6. In the figure, the black area is the edge of the region and contains no point cloud data; the gray area is the interior of the region, contains point cloud data, and its brightness represents the height of the terrain. FIG. 6 (a) shows the rasterization result of the airborne LiDAR point cloud, in which a large number of void areas containing no point cloud exist; FIG. 6 (b) shows the imaging result, in which no void areas remain and the overall topography shows a smooth variation trend, in accordance with objective rules.
According to steps 503, 504 and 505, the unmanned aerial vehicle oblique image and the airborne LiDAR point cloud image are geometrically fused, and the whole frame positioning result is shown in fig. 7, 8, 9 and 10.
FIG. 7 shows a hilly area. FIG. 7 (a) is an oblique image taken by the unmanned aerial vehicle; FIG. 7 (b) is an existing high-precision DOM image, used as the reference truth for evaluating the positioning accuracy of the oblique image; FIG. 7 (c) is the overlay of the positioned oblique image and the high-precision DOM image, i.e. the positioned oblique image (gray area) and the high-precision DOM image (colour area) superimposed by coordinates in ArcGIS. By measurement, the same-name point position error between the positioned oblique image and the high-precision DOM is about 0.5 m, and the whole-frame solution time is about 0.38 s.
FIG. 8 shows a town area. FIG. 8 (a) is an oblique image taken by the unmanned aerial vehicle; FIG. 8 (b) is an existing high-precision DOM image, used as the reference truth for evaluating the positioning accuracy of the oblique image; FIG. 8 (c) is the overlay of the positioned oblique image and the high-precision DOM image. By measurement, the same-name point position error is about 0.5 m, and the whole-frame solution time is about 0.37 s.
FIG. 9 shows a river area. FIG. 9 (a) is an oblique image taken by the unmanned aerial vehicle; FIG. 9 (b) is an existing high-precision DOM image, used as the reference truth; FIG. 9 (c) is the overlay of the positioned oblique image and the high-precision DOM image. By measurement, the same-name point position error is about 0.5 m, and the whole-frame solution time is about 0.39 s.
FIG. 10 shows an area of large height difference. FIG. 10 (a) is an oblique image taken by the unmanned aerial vehicle; FIG. 10 (b) is an existing high-precision DOM image, used as the reference truth; FIG. 10 (c) is the overlay of the positioned oblique image and the high-precision DOM image. By measurement, the same-name point position error is about 0.5 m, and the whole-frame solution time is about 0.38 s.
The experiments prove that the invention can robustly position single-frame unmanned aerial vehicle oblique images in real time (about 0.38 s) with an accuracy of about 0.5 m in all the tested areas (hills, towns, river channels and large-height-difference terrain).

Claims (5)

1. An unmanned aerial vehicle inclined image real-time positioning method integrating airborne LiDAR point clouds is characterized by comprising the following steps of: rasterizing and imaging unmanned aerial vehicle LiDAR point cloud data to generate a digital earth surface model; and after the unmanned aerial vehicle inclined image is corrected to be a horizontal image by utilizing real-time POS data of the unmanned aerial vehicle inclined image, the geometrical fusion of the unmanned aerial vehicle inclined image and the airborne LiDAR point cloud data image is realized by utilizing an iterative intersection method, and the real-time positioning of the unmanned aerial vehicle inclined image is realized.
2. The unmanned aerial vehicle inclined image real-time positioning method fusing airborne LiDAR point clouds according to claim 1, characterized in that the airborne LiDAR point cloud data rasterization specifically comprises: determining the row width and column width of a single grid by comprehensively considering the terrain complexity, the point cloud density and the positioning accuracy, and establishing a regular two-dimensional grid network; and determining the grids corresponding to the point cloud data according to the horizontal coordinate values of the airborne LiDAR point cloud data, and storing the data into the corresponding grids.
3. The unmanned aerial vehicle inclined image real-time positioning method fusing airborne LiDAR point clouds according to claim 1, characterized in that the imaging processing specifically comprises determining the gray value of each grid according to the point cloud data around each grid to be imaged, specifically:
for a grid to be imaged containing 0 point cloud data, taking the grid to be imaged as the central grid and sequentially searching the point cloud data in all grids along the 8 directions around the central grid; calculating the expectation and middle error of the gray value of the central grid from the search result, and thereby determining the gray value of the central grid;
for a grid to be imaged containing 2 or fewer point cloud data, taking the grid to be imaged as the central grid and searching the point cloud data in all grids within a set range around the central grid; if the total number of the searched point cloud data is smaller than a set threshold, determining the gray value of the grid to be imaged according to the step for a grid containing 0 point cloud data; otherwise, calculating the gray-value expectation of the central grid according to a two-dimensional Gaussian function and determining the gray value of the central grid;
and for a grid to be imaged containing more than 2 point cloud data, sorting all the point cloud data in the grid by elevation value and taking the median of the elevation sequence as the gray value of the grid to be imaged.
4. The unmanned aerial vehicle inclined image real-time positioning method fusing airborne LiDAR point clouds according to claim 3, characterized in that the search result is handled as follows: if a grid containing more than 2 point cloud data is found in a search direction, the search in that direction is stopped and the point cloud data of that grid are taken as the point cloud data of that direction; if no grid containing more than 2 point cloud data is found in a search direction, the point cloud data of that direction is empty; and if the point cloud data of more than 4 search directions are empty, the central grid is at the map edge and its gray value need not be determined.
5. The unmanned aerial vehicle inclined image real-time positioning method fusing airborne LiDAR point clouds according to claim 1, characterized in that achieving the geometric fusion of the unmanned aerial vehicle inclined image and the airborne LiDAR point cloud data image by the iterative intersection method specifically comprises the following steps:
S1, using the real-time POS data of the unmanned aerial vehicle inclined image, acquiring the coordinates of the photographing centre S of the inclined image and the coordinates of the principal point O_N of the corrected horizontal image;
S2, from the point S, dropping a vertical line through the point O_N, intersecting the digital surface model at a point N, and determining the horizontal coordinates of N through the projection relation;
S3, determining the gray value Z_N of the grid corresponding to the point N from the horizontal coordinates of N, and calculating the length H_SN of the vertical line SN;
S4, selecting any point a_t on the inclined image and determining the coordinates of its corresponding point a_v on the corrected horizontal image;
S5, from the photographing centre S, casting a ray through the point a_t, intersecting the digital surface model at a point A;
S6, from the point N, drawing a horizontal line intersecting the ray at a point A_0, and determining the coordinates of A_0 by the principle of similar triangles;
S7, from the point A_0, dropping a vertical line intersecting the digital surface model at a point G_1, and determining the horizontal coordinates of G_1 through the projection relation;
S8, from the point G_1, drawing a horizontal line intersecting the ray at a point A_1 and intersecting the extension of the vertical line SN at a point N_1;
S9, determining the gray value Z_G1 of the grid corresponding to G_1 from the horizontal coordinates of G_1, and calculating the length H_SN1 of the vertical line SN_1;
S10, determining the coordinates of the point A_1 by the principle of similar triangles;
S11, repeating steps S7-S10 until the height difference between the point A_i and the point A_{i-1} is smaller than a preset threshold, whereupon the coordinates of A_i are taken as the coordinates of the point A.
CN202311303537.0A 2023-10-10 2023-10-10 Unmanned aerial vehicle inclined image real-time positioning method integrating airborne LiDAR point cloud Pending CN117330031A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311303537.0A CN117330031A (en) 2023-10-10 2023-10-10 Unmanned aerial vehicle inclined image real-time positioning method integrating airborne LiDAR point cloud

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311303537.0A CN117330031A (en) 2023-10-10 2023-10-10 Unmanned aerial vehicle inclined image real-time positioning method integrating airborne LiDAR point cloud

Publications (1)

Publication Number Publication Date
CN117330031A 2024-01-02

Family

ID=89276811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311303537.0A Pending CN117330031A (en) 2023-10-10 2023-10-10 Unmanned aerial vehicle inclined image real-time positioning method integrating airborne LiDAR point cloud

Country Status (1)

Country Link
CN (1) CN117330031A (en)

Similar Documents

Publication Publication Date Title
CN111144388B (en) Monocular image-based road sign line updating method
Wu et al. Integration of aerial oblique imagery and terrestrial imagery for optimized 3D modeling in urban areas
CN110135455B (en) Image matching method, device and computer readable storage medium
CN112444242B (en) Pose optimization method and device
KR100912715B1 (en) Method and apparatus of digital photogrammetry by integrated modeling for different types of sensors
CN106780712B (en) Three-dimensional point cloud generation method combining laser scanning and image matching
Pepe et al. Techniques, tools, platforms and algorithms in close range photogrammetry in building 3D model and 2D representation of objects and complex architectures
WO2001048683A1 (en) Any aspect passive volumetric image processing method
CN104200523A (en) Large-scale scene three-dimensional reconstruction method for fusion of additional information
CN112465970B (en) Navigation map construction method, device, system, electronic device and storage medium
Gao et al. Ground and aerial meta-data integration for localization and reconstruction: A review
CN102519436A (en) Chang'e-1 (CE-1) stereo camera and laser altimeter data combined adjustment method
CN110986888A (en) Aerial photography integrated method
CN113538501A (en) Low-altitude image DSM generation building edge refinement method
CN116740288B (en) Three-dimensional reconstruction method integrating laser radar and oblique photography
CN113345072A (en) Multi-view remote sensing topographic image point cloud reconstruction method and system
CN112767459A (en) Unmanned aerial vehicle laser point cloud and sequence image registration method based on 2D-3D conversion
CN116994012A (en) Map spot matching system and method based on ecological restoration
CN116563377A (en) Mars rock measurement method based on hemispherical projection model
CN107784666B (en) Three-dimensional change detection and updating method for terrain and ground features based on three-dimensional images
CN117330031A (en) Unmanned aerial vehicle inclined image real-time positioning method integrating airborne LiDAR point cloud
CN115690138A (en) Road boundary extraction and vectorization method fusing vehicle-mounted image and point cloud
CN114563000A (en) Indoor and outdoor SLAM method based on improved laser radar odometer
Oliveira et al. Height gradient approach for occlusion detection in UAV imagery
CN116704138B (en) Method and device for establishing oblique photography three-dimensional model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination