CN114723907A - Water surface area reconstruction method for unmanned aerial vehicle oblique photography three-dimensional model data - Google Patents

Water surface area reconstruction method for unmanned aerial vehicle oblique photography three-dimensional model data

Info

Publication number
CN114723907A
Authority
CN
China
Prior art keywords
water surface
point cloud
unmanned aerial
surface area
aerial vehicle
Prior art date
Legal status
Pending
Application number
CN202210436345.6A
Other languages
Chinese (zh)
Inventor
邱银国
焦亚沁
段洪涛
罗菊花
Current Assignee
Nanjing Institute of Geography and Limnology of CAS
Original Assignee
Nanjing Institute of Geography and Limnology of CAS
Priority date
Filing date
Publication date
Application filed by Nanjing Institute of Geography and Limnology of CAS filed Critical Nanjing Institute of Geography and Limnology of CAS
Priority to CN202210436345.6A
Publication of CN114723907A

Classifications

    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
                    • G06T17/05: Geographic models
                    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tesselation
                • G06T15/00: 3D [Three Dimensional] image rendering
                    • G06T15/04: Texture mapping
                • G06T3/00: Geometric image transformations in the plane of the image
                    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
                        • G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
                • G06T2200/00: Indexing scheme for image data processing or generation, in general
                    • G06T2200/08: involving all processing steps from image acquisition to 3D model generation
                    • G06T2200/32: involving image mosaicing
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
        • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
            • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
                • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
                    • Y02A90/30: Assessment of water resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The invention relates to a water surface area reconstruction method for unmanned aerial vehicle oblique photography three-dimensional model data. The method converts the unmanned aerial vehicle oblique photography three-dimensional model data into point cloud data and grids it; using the density and elevation characteristics of the point cloud in the water surface area, it designs a water surface area boundary extraction method that rapidly screens the boundary point cloud; based on the boundary extraction result, it constructs an irregular triangulation network (TIN) model of the water surface area; it then selects texture images of the water surface area automatically from the EXIF information of the unmanned aerial vehicle downward-looking image data and, combining the spatial containment relation between the texture images and the TIN model, designs an automatic processing algorithm to generate the final water surface area texture image, thereby realizing reconstruction of the water surface area. The invention improves the existing unmanned aerial vehicle oblique photogrammetry three-dimensional modeling technical system, realizes rapid and automatic reconstruction of real water surface information, and has high popularization and application value in fields such as twin watersheds and twin cities.

Description

Water surface area reconstruction method for unmanned aerial vehicle oblique photography three-dimensional model data
Technical Field
The invention belongs to the field of three-dimensional modeling, and particularly relates to a rapid water surface area reconstruction method for three-dimensional model data of unmanned aerial vehicle oblique photography.
Background
Unmanned aerial vehicle oblique photogrammetry provides a new and effective means for the rapid three-dimensional reconstruction of large-area geographic elements and is widely applied in three-dimensional modeling at city and watershed scales. Compared with other geographic elements, the water surface area is easily affected by factors such as wind waves and reflection, so the prior art cannot rapidly reconstruct it. Consequently, existing three-dimensional visualization systems rarely show the real water surface of rivers and lakes. In general application scenarios, real water surface information attracts little concern, and a preset (imaginary) water surface model is usually substituted for it to obtain a good visual display effect. In some special application fields (such as ecological environment monitoring and water pollution supervision), however, real water surface information (water color, floating objects, and the like) is an important basis for decision making, and a preset water surface model cannot meet the requirements. Although some commercial software (such as 3ds Max and DP-Modeller) can post-repair the water surface area in an unmanned aerial vehicle oblique photography three-dimensional model, the whole process requires extensive manual participation and is inefficient; more importantly, such software reconstructs the water surface by texture interpolation, texture sampling, and similar means, so the real water surface information still cannot be expressed visually in the three-dimensional scene. Therefore, realizing rapid reconstruction of the water surface area in unmanned aerial vehicle oblique photography three-dimensional model data has important practical significance and application value.
Disclosure of Invention
The invention aims to provide a rapid water surface area reconstruction method for three-dimensional model data of unmanned aerial vehicle oblique photography, which supplements and perfects the existing unmanned aerial vehicle oblique photography three-dimensional modeling technology system.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
the water surface area reconstruction method facing the unmanned aerial vehicle oblique photography three-dimensional model data comprises the following steps:
converting the oblique photography three-dimensional model into point cloud data, and carrying out meshing processing on the point cloud data; taking the cloud density and the elevation information of points in the grid as constraint conditions, and designing an eight-neighborhood traversal method to extract point cloud data at the water area boundary;
constructing an irregular triangulation network model of the water surface area based on the extraction result of the point cloud data at the water area boundary;
traversing EXIF information of the unmanned aerial vehicle downward-looking image, extracting a coordinate value of the image, converting the coordinate value into a Gaussian plane rectangular coordinate, and selecting the unmanned aerial vehicle downward-looking image with the central point coordinate closest to the central point coordinate of the irregular triangulation network model as a texture image primary selection result;
performing spatial inclusion relation analysis on the selected downward-looking images of the unmanned aerial vehicles and the irregular triangulation network model, and performing image splicing on the downward-looking images of the unmanned aerial vehicles until the selected downward-looking images of the unmanned aerial vehicles completely contain the irregular triangulation network model, namely acquiring final texture images of the water surface area;
and endowing the final texture image of the water surface area to an irregular triangulation network model of the water surface area to realize water surface reconstruction.
As a preferred embodiment, after the oblique photography three-dimensional model is converted into point cloud data, the point cloud data is subjected to pass-through filtering before gridding, which reduces the number of points to be processed and improves algorithm execution efficiency.
Further, the pass-through filter threshold is dynamically determined from the point cloud elevation.
In a preferred embodiment, during the gridding process, the grid is set to be square, and the side length of the grid is dynamically determined according to the density of the point cloud.
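The patent does not specify the rule that maps point cloud density to side length. A minimal sketch, assuming one plausible rule (choose the side so each square cell holds roughly a target number of points; `target_pts_per_cell` is an assumed tuning constant, not from the patent), which reproduces the 1 m grid used later in Example 1:

```python
import math

def grid_side_from_density(point_count, area_m2, target_pts_per_cell=100):
    """Pick a square-cell side length so each cell holds ~target_pts_per_cell
    points on average. target_pts_per_cell is an assumed constant; the patent
    only says the side length is dynamically determined by point cloud density."""
    density = point_count / area_m2          # points per square meter
    return math.sqrt(target_pts_per_cell / density)

# 1,000,000 points over 10,000 m^2 -> density 100 pts/m^2 -> 1 m cells
side = grid_side_from_density(point_count=1_000_000, area_m2=10_000)
```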
As a preferred embodiment, the starting point cloud mesh of the eight-neighborhood traversal is located at the water surface region boundary.
As a preferred embodiment, designing an eight-neighborhood traversal method to extract the point cloud data at the water area boundary, with the point cloud density and elevation information within each grid cell as constraints, includes:
(1) calculating the point cloud density in each of the eight neighborhood grids of the initial grid and comparing it with the point cloud density of the initial grid; if the density difference between a neighborhood grid and the initial grid is less than a preset threshold, recording that grid's number; if no neighborhood grid has a density difference below the threshold, recording the number of the neighborhood grid with the smallest density difference;
(2) traversing the grids recorded in step (1), calculating the average point cloud elevation in each, comparing it with the average elevation of the initial grid, and selecting the grid with the smallest elevation difference as the newly determined boundary grid of the water surface area;
(3) if the newly determined boundary grid has not been traversed before, taking it as the new initial grid and repeating steps (1) and (2); otherwise, terminating the boundary extraction procedure, the set of all boundary point cloud grids found being the water surface area boundary extraction result.
As a preferred embodiment, based on the extracted three-dimensional point cloud of the water surface area boundary, all point cloud elevation values are set to a uniform value (the mean elevation of all boundary points, in keeping with the characteristics of a water surface), a number of randomly placed points at the same elevation are added as vertices, and the irregular triangulation network model of the water surface area is constructed with the Delaunay algorithm. The number of randomly added points is determined dynamically from the coordinate range of the extracted boundary point cloud.
As a preferred embodiment, performing the spatial containment relationship analysis on the selected downward-looking image of the unmanned aerial vehicle and the irregular triangulation network model includes:
judging the containment relation between the selected downward-looking image of the unmanned aerial vehicle and the irregular triangulation network model: if the spatial range of the selected image completely contains the irregular triangulation network model, taking that image as the final texture image of the water surface area; otherwise, searching the remaining downward-looking images for the one closest to the current image, mosaicking it with the current image to form a new image, judging the spatial containment relation between the new image and the irregular triangulation network model, and repeating until the resulting image completely contains the irregular triangulation network model.
As a preferred embodiment, the spatial inclusion relationship between the selected downward-looking image of the unmanned aerial vehicle and the irregular triangulation network model is analyzed based on the following formula:
    X_min ≤ x_min,  Y_min ≤ y_min,  X_max ≥ x_max,  Y_max ≥ y_max

    X_min = X′_i − ⌊W/2⌋·Res,  X_max = X′_i + ⌊W/2⌋·Res
    Y_min = Y′_i − ⌊H/2⌋·Res,  Y_max = Y′_i + ⌊H/2⌋·Res

In the formulas, x_min, y_min are the minimum horizontal and vertical coordinates of the vertices in the irregular triangulation network model, and x_max, y_max are the corresponding maxima; X_min, Y_min and X_max, Y_max are the minimum and maximum horizontal and vertical coordinates of the pixels in the unmanned aerial vehicle downward-looking image; (X′_i, Y′_i) is the rectangular coordinate of the center point of the downward-looking image; W and H are the width and height of the image in pixels; Res is the spatial resolution of the image; and ⌊·⌋ denotes the rounding-down (floor) function.
when the conditions in the above formula are all satisfied, the spatial range of the selected downward-looking image of the unmanned aerial vehicle is considered to completely contain the irregular triangulation network model of the water surface area.
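This containment test can be sketched directly from the described relations. A minimal Python sketch (the image size, resolution, and coordinates below are illustrative, not from the patent):

```python
import math

def image_extent(cx, cy, width_px, height_px, res):
    """Ground extent of a downward-looking image from its center coordinate:
    half-extent on each axis = floor(pixels / 2) * spatial resolution."""
    half_w = math.floor(width_px / 2) * res
    half_h = math.floor(height_px / 2) * res
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)

def contains_tin(img_ext, tin_ext):
    """The image extent must cover the TIN bounding box on every side."""
    Xmin, Ymin, Xmax, Ymax = img_ext
    xmin, ymin, xmax, ymax = tin_ext
    return Xmin <= xmin and Ymin <= ymin and Xmax >= xmax and Ymax >= ymax

# 4000x3000 px image at 5 cm resolution, centered at (500, 500):
ext = image_extent(cx=500.0, cy=500.0, width_px=4000, height_px=3000, res=0.05)
ok = contains_tin(ext, (450.0, 450.0, 550.0, 550.0))
```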
As a preferred embodiment, the final water surface area texture image is clipped with the vector boundary of the water surface area irregular triangulation network model to generate image data containing only water surface texture information, which is then applied to the irregular triangulation network model of the water surface area, thereby realizing water surface reconstruction.
The invention discloses a rapid water surface area reconstruction method for unmanned aerial vehicle oblique photography three-dimensional model data, designed to solve the problem that the existing unmanned aerial vehicle oblique photography three-dimensional modeling technology system cannot automatically reconstruct real water surface information. First, a water surface area boundary extraction method based on eight-neighborhood traversal is designed using the point cloud density and elevation features of water surface and non-water-surface areas; then, based on the boundary extraction result, an irregular triangulation network (TIN) model of the water surface area is constructed with the Delaunay method; finally, an automatic selection and processing method for water surface area texture images is designed, and the real water surface information is rapidly reconstructed. The invention improves the existing unmanned aerial vehicle oblique photogrammetry three-dimensional modeling technical system, realizes rapid and automatic reconstruction of real water surface information, and has high popularization and application value in fields such as twin watersheds and twin cities.
It should be understood that all combinations of the foregoing concepts and additional concepts described in greater detail below can be considered as part of the inventive subject matter of this disclosure unless such concepts are mutually inconsistent. Additionally, all combinations of claimed subject matter are considered a part of the presently disclosed subject matter.
The foregoing and other aspects, embodiments and features of the present teachings will be more fully understood from the following description taken in conjunction with the accompanying drawings. Additional aspects of the present invention, such as features and/or advantages of exemplary embodiments, will be apparent from the description which follows, or may be learned by practice of specific embodiments in accordance with the teachings of the present invention.
Drawings
The drawings are not necessarily to scale. In the drawings, each identical or nearly identical component that is illustrated in various figures may be represented by a like numeral. For purposes of clarity, not every component may be labeled in every drawing. Embodiments of various aspects of the present invention will now be described, by way of example, with reference to the accompanying drawings, in which:
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a flow chart of water surface region boundary extraction.
FIG. 3 is a flow chart of water surface area triangulation construction and texture mapping.
Fig. 4 is a point cloud conversion result of a three-dimensional model of unmanned aerial vehicle oblique photography.
Fig. 5 is a schematic diagram of a point cloud data gridding process.
Fig. 6 shows the result of extracting the boundary of the water surface area from the point cloud data.
Fig. 7 shows the result of the irregular triangulation network (TIN) model construction of the water surface area.
Fig. 8 is a water surface region texture image clipping result.
Fig. 9 is the result of water surface area reconstruction.
Fig. 10 shows the ten pieces of unmanned aerial vehicle oblique photography three-dimensional model data used in Example 2.
FIG. 11 shows the results of qualitative evaluation of the method of the present invention.
Detailed Description
In order to better understand the technical content of the present invention, specific embodiments are described below with reference to the accompanying drawings.
In this disclosure, aspects of the present invention are described with reference to the accompanying drawings, in which a number of illustrative embodiments are shown. Embodiments of the present disclosure are not necessarily defined to include all aspects of the invention. It should be appreciated that the various concepts and embodiments described above, as well as those described in greater detail below, may be implemented in any of numerous ways, as the disclosed concepts and embodiments are not limited to any one implementation. In addition, some aspects of the present disclosure may be used alone, or in any suitable combination with other aspects of the present disclosure.
Example 1
This example illustrates a specific implementation of the present invention.
This example performs rapid reconstruction of the water surface area in unmanned aerial vehicle oblique photography three-dimensional model data. The specific implementation process is as follows: first, the original three-dimensional model data is converted into point cloud data; the point cloud is then processed with a pass-through filter, reducing the number of points to be processed and improving algorithm execution efficiency, and the filtered point cloud is gridded; the water area boundary is extracted from the point cloud characteristics of the water surface area, and an irregular triangulation network (TIN) model of the water surface area is constructed; texture image data for the water surface area is selected and processed automatically based on the EXIF information of the unmanned aerial vehicle downward-looking imagery; finally, the texture image is mapped onto the water surface area TIN model to reconstruct the real water surface information.
The implementation of the foregoing method is specifically described below, as an exemplary description, with reference to the figures.
The method flow is shown in fig. 1-3, and comprises the following steps:
step 1: converting original unmanned aerial vehicle oblique photography three-dimensional model data into point cloud data
The pcl_mesh_sampling.exe tool in the open-source Point Cloud Library (PCL) is called to convert the oblique photography three-dimensional model data (.obj format) into point cloud data (.pcd format). The conversion results are shown in fig. 4, where (a) is the original oblique photography three-dimensional model and (b) is the point cloud conversion result.
Step 2: Filtering the acquired point cloud data
The PassThrough() method in the Point Cloud Library (PCL) is called to process the acquired point cloud data, eliminating the influence of irrelevant points on water surface area reconstruction and improving algorithm execution efficiency.
The pass-through filter threshold is determined from the point cloud elevation; in this example, the retained elevation range of the constructed filter is −20 m to 30 m.
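The pass-through step amounts to keeping only points whose elevation falls in a fixed band, which PCL's PassThrough filter performs on the z field. A minimal plain-Python sketch with the example's −20 m to 30 m band (the sample points are illustrative):

```python
def passthrough_z(points, z_min=-20.0, z_max=30.0):
    """Keep only (x, y, z) points whose elevation lies in [z_min, z_max] meters,
    mirroring a PCL PassThrough filter on the z field."""
    return [p for p in points if z_min <= p[2] <= z_max]

cloud = [(0.0, 0.0, 5.2), (1.0, 1.0, -35.0), (2.0, 2.0, 29.9), (3.0, 3.0, 120.0)]
filtered = passthrough_z(cloud)   # drops the z = -35 and z = 120 outliers
```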
Step 3: Carrying out gridding processing on the filtered point cloud data
A square grid is set, with the side length determined according to the density of the acquired point cloud; in this example, the grid side length is 1 meter. The points are then assigned to the corresponding grid cells according to their horizontal and vertical coordinates. A schematic diagram of the point cloud gridding process is shown in fig. 5.
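Assigning points to square cells by their coordinates reduces to integer cell indexing. A minimal sketch (sample points are illustrative):

```python
import math
from collections import defaultdict

def grid_points(points, side=1.0):
    """Bin (x, y, z) points into square cells of the given side length,
    keyed by integer (column, row) indices derived from x and y."""
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(math.floor(x / side), math.floor(y / side))].append((x, y, z))
    return cells

cells = grid_points([(0.2, 0.3, 1.0), (0.8, 0.1, 1.1), (1.5, 0.4, 0.9)], side=1.0)
```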
Step 4: Extracting the water surface area boundary from the gridded point cloud data
An eight-neighborhood traversal method is designed based on the point cloud characteristics of the water surface area to extract the point cloud data at the water area boundary. First, a starting grid is determined manually, ensuring that it lies on the water surface area boundary; then the next point cloud grid on the boundary is searched for among the eight neighborhood grids around the starting grid, as follows:
(1) calculate the point cloud density (point count / grid area) in each of the eight neighborhood grids and compare it with the density of the starting grid; if the density difference between a neighborhood grid and the starting grid is less than a given threshold (set empirically; 10 points per square meter in this example), record that grid's number; if no neighborhood grid has a density difference below the threshold, record the number of the neighborhood grid with the smallest density difference;
(2) traverse the grids recorded in step (1), calculate the average point cloud elevation in each, compare it with the average elevation of the starting grid, and select the grid with the smallest elevation difference as the newly determined boundary grid of the water surface area;
(3) if the newly determined boundary grid has not been traversed before, take it as the new starting grid and repeat steps (1) and (2); otherwise, terminate the boundary extraction procedure; the set of all boundary point cloud grids found is the water surface area boundary extraction result.
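The traversal described above can be sketched in Python. This is a simplified sketch, assuming only unvisited neighbors are considered (so the trace stops once the boundary loop closes, matching the "already traversed" termination condition), with the example's thresholds; the four-cell demo grid is illustrative:

```python
def mean_z(pts):
    return sum(p[2] for p in pts) / len(pts)

def trace_boundary(cells, start, side=1.0, density_thresh=10.0):
    """Eight-neighborhood boundary trace: prefer neighbors whose point density
    is close to the current cell's, then pick the one whose mean elevation is
    closest. `cells` maps (col, row) -> list of (x, y, z) points."""
    area = side * side
    visited = [start]
    current = start
    while True:
        cur_d = len(cells[current]) / area
        cur_z = mean_z(cells[current])
        neigh = [(current[0] + dc, current[1] + dr)
                 for dc in (-1, 0, 1) for dr in (-1, 0, 1)
                 if (dc, dr) != (0, 0) and (current[0] + dc, current[1] + dr) in cells]
        unvisited = [n for n in neigh if n not in visited]
        if not unvisited:
            break  # every candidate already traversed: boundary loop closed
        # step (1): density constraint
        close = [n for n in unvisited
                 if abs(len(cells[n]) / area - cur_d) < density_thresh]
        if not close:
            close = [min(unvisited, key=lambda n: abs(len(cells[n]) / area - cur_d))]
        # step (2): elevation constraint
        nxt = min(close, key=lambda n: abs(mean_z(cells[n]) - cur_z))
        visited.append(nxt)
        current = nxt
    return visited

# Demo: four boundary cells with similar density; (1, 1) sits slightly higher.
cells = {(0, 0): [(0.5, 0.5, 0.0)] * 5,
         (0, 1): [(0.5, 1.5, 0.0)] * 5,
         (1, 0): [(1.5, 0.5, 0.0)] * 5,
         (1, 1): [(1.5, 1.5, 0.1)] * 5}
trace = trace_boundary(cells, (0, 0))
```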
The extraction result of the water surface area boundary is shown in fig. 6, wherein (a) is an original oblique photography three-dimensional model, (b) is a point cloud conversion result, and (c) is the extraction result of the water surface area boundary.
Step 5: Constructing an irregular triangulation network (TIN) model of the water surface area based on the boundary extraction result
A certain number of vertices (200 in this example) are randomly added inside the extracted water surface area boundary (three-dimensional point cloud), and the Delaunay method is used to triangulate the boundary extraction result together with the randomly added points; the generated result is the water surface area TIN model. The TIN model construction results are shown in fig. 7, where (a) is the water surface area boundary extraction result and (b) is the water surface area TIN model construction result.
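Preparing the vertex set for this step can be sketched as follows. This sketch flattens the boundary points to their mean elevation and scatters the extra vertices inside the boundary's bounding box (the patent places them within the water area itself, so a point-in-polygon filter would be needed for non-rectangular boundaries); the Delaunay triangulation proper is left to a library such as scipy.spatial.Delaunay:

```python
import random

def tin_vertices(boundary_pts, n_random=200, seed=42):
    """Flatten every boundary point to the mean elevation, then add n_random
    extra vertices at the same elevation inside the bounding box, producing
    the vertex set for a water surface area TIN."""
    z0 = sum(p[2] for p in boundary_pts) / len(boundary_pts)
    xs = [p[0] for p in boundary_pts]
    ys = [p[1] for p in boundary_pts]
    rng = random.Random(seed)  # seeded for reproducibility
    flat = [(x, y, z0) for x, y, _ in boundary_pts]
    extra = [(rng.uniform(min(xs), max(xs)), rng.uniform(min(ys), max(ys)), z0)
             for _ in range(n_random)]
    return flat + extra

verts = tin_vertices([(0, 0, 1.0), (10, 0, 3.0), (10, 10, 2.0), (0, 10, 2.0)],
                     n_random=50)
```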
Step 6: intelligently selecting and processing to generate a water surface area texture image based on the constructed water surface area TIN model
Firstly, according to the point cloud coordinates in the water surface area TIN model, the coordinates of the TIN model center point are calculated with formula (1), where (x_c, y_c) is the calculated center point and x_max, y_max, x_min, y_min are the maximum and minimum x and y coordinates of the point cloud in the TIN model:

    x_c = (x_min + x_max) / 2,  y_c = (y_min + y_max) / 2    (1)
Secondly, traversing the EXIF information of the unmanned aerial vehicle downward-looking image data, converting the longitude and latitude of each image into Gaussian plane rectangular coordinates using formula (2), and selecting the image whose center point coordinate is closest to the TIN center point coordinate as the preliminarily determined texture image;
    x = X + (N/2)·sinB·cosB·l² + (N/24)·sinB·cos³B·(5 − tan²B + 9η² + 4η⁴)·l⁴
    y = N·cosB·l + (N/6)·cos³B·(1 − tan²B + η²)·l³    (2)

where (x, y) is the Gaussian plane rectangular coordinate of the image center point; B and L are the latitude and longitude of the image center point; N is the radius of curvature in the prime vertical; L0 is the longitude of the central meridian; a and b are the semi-major and semi-minor axis radii of the Earth ellipsoid; l = L − L0 is the longitude difference in radians (a difference in arc-seconds is divided by ρ = 206264.806247096355″); η² = (a² − b²)/b²·cos²B; and X is the meridian arc length on the reference ellipsoid from the equator to the projection of the image center point.
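The latitude/longitude conversion can be sketched with the standard truncated Gauss-Krüger series built from the symbols the text describes (N, l = L − L0, η², and the meridian arc length X, here taken as an input parameter; the WGS84 semi-axes and the sample coordinates are illustrative assumptions):

```python
import math

def gauss_kruger(B, L, L0, X, a=6378137.0, b=6356752.3142):
    """Latitude/longitude (radians) -> Gaussian plane coordinates, given the
    meridian arc length X from the equator to latitude B. Truncated series;
    WGS84 semi-axes assumed as defaults."""
    e2 = (a * a - b * b) / (a * a)                   # first eccentricity squared
    N = a / math.sqrt(1.0 - e2 * math.sin(B) ** 2)   # prime-vertical radius
    t = math.tan(B)
    eta2 = (a * a - b * b) / (b * b) * math.cos(B) ** 2
    l = L - L0                                       # longitude difference, radians
    cB, sB = math.cos(B), math.sin(B)
    x = (X + N / 2 * sB * cB * l ** 2
         + N / 24 * sB * cB ** 3 * (5 - t ** 2 + 9 * eta2 + 4 * eta2 ** 2) * l ** 4)
    y = (N * cB * l
         + N / 6 * cB ** 3 * (1 - t ** 2 + eta2) * l ** 3)
    return x, y

# Illustrative point 1.5 degrees east of an assumed central meridian:
x, y = gauss_kruger(B=math.radians(31.0), L=math.radians(118.5),
                    L0=math.radians(117.0), X=3431000.0)
```

A quick sanity check of the series: on the central meridian (l = 0) the result collapses to x = X and y = 0.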
Thirdly, judging whether the spatial range of the preliminarily determined texture image completely contains the water surface area TIN model, using the criteria of formulas (3) and (4); if the containment relation holds, taking the preliminarily determined texture image as the water surface area texture image selection result; if it does not hold, selecting the image whose center coordinate is closest to that of the preliminarily determined texture image, mosaicking the two into a new image, and taking the new image as the preliminarily determined texture image;
    X_min ≤ x_min,  Y_min ≤ y_min,  X_max ≥ x_max,  Y_max ≥ y_max    (3)

    X_min = X′_i − ⌊W/2⌋·Res,  X_max = X′_i + ⌊W/2⌋·Res
    Y_min = Y′_i − ⌊H/2⌋·Res,  Y_max = Y′_i + ⌊H/2⌋·Res    (4)

In formula (3), x_min, y_min are the minimum horizontal and vertical coordinates of the vertices in the TIN model, x_max, y_max are the corresponding maxima, and X_min, Y_min and X_max, Y_max are the minimum and maximum horizontal and vertical coordinates of the pixels in the texture image. In formula (4), (X′_i, Y′_i) is the rectangular coordinate of the texture image center point, W and H are the width and height of the image in pixels, Res is the spatial resolution of the image, and ⌊·⌋ is the rounding-down (floor) function.
Fourthly, repeating the third step until the preliminarily determined texture image space range completely contains the TIN model of the water surface area; and taking the preliminarily determined texture image as a water surface area texture image selection result.
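The nearest-center selection that drives this loop can be sketched as follows (reading image centers from EXIF, e.g. with Pillow, is omitted; the file names and coordinates are hypothetical):

```python
import math

def pick_initial_texture(tin_center, image_centers):
    """Select the downward-looking image whose Gaussian-plane center is
    closest to the TIN center. image_centers maps image id -> (x, y)."""
    cx, cy = tin_center
    return min(image_centers,
               key=lambda k: math.hypot(image_centers[k][0] - cx,
                                        image_centers[k][1] - cy))

best = pick_initial_texture((100.0, 100.0),
                            {"DJI_0001.JPG": (40.0, 60.0),
                             "DJI_0002.JPG": (95.0, 110.0),
                             "DJI_0003.JPG": (180.0, 30.0)})
```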
Step 7: Mapping the texture image onto the water surface area TIN model to realize water surface area reconstruction
The water surface area texture image is clipped with the vector boundary of the water surface area TIN model, removing interfering elements and retaining only the texture information of the water surface area (the clipping result is shown in fig. 8, where (a) is the texture image selection result and (b) is the clipping result); the clipped texture image is then applied to the water surface area TIN model to realize reconstruction of the water surface area (the reconstruction result is shown in fig. 9, where (a) is the water surface area TIN model, (b) is the water surface reconstruction result, and (c) is the integration of the reconstruction result with the original oblique photography three-dimensional model).
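At its core, clipping against the vector boundary is a per-pixel inside/outside decision. A minimal sketch of the inclusion test only (a ray-casting point-in-polygon check; real clipping would also involve the image's geo-referencing, which is omitted here):

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test. poly is a list of (x, y) vertices of the water
    boundary; pixels whose ground coordinates fall outside would be masked
    out when the texture image is clipped to the TIN's vector boundary."""
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        # Toggle on each boundary edge the horizontal ray from (x, y) crosses.
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

square = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
```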
Example 2
This example presents the evaluation of the efficiency and accuracy of water surface area reconstruction for unmanned aerial vehicle oblique photography three-dimensional model data.
Ten sets of unmanned aerial vehicle oblique photography three-dimensional model data containing water surface areas, together with the original unmanned aerial vehicle image data, were selected (see figure 10), and the method designed in the invention was used to carry out three-dimensional reconstruction of the water surface areas; the configuration of the computer used is shown in table 1.
Table 1 computer configuration information for carrying out experiments
First, the effect of water surface reconstruction was examined from a qualitative perspective. As shown in figure 11, (a) is an in-situ photograph taken by the unmanned aerial vehicle, and (b) is a screen capture of the water surface reconstruction result. As can be seen from figure 11, the reconstruction produced by the method of the invention has a good visual effect.
In addition, the accuracy of the water surface area boundary extraction result was evaluated from a quantitative perspective. The boundary of the water surface area was extracted from the original oblique photography three-dimensional model data by manual delineation and compared with the boundary extracted by the method designed in the invention. The evaluation indexes include the vertex coordinate mean error (AE), the vertex coordinate root mean square error (RMSE), the vertex coordinate standard deviation (SD), and the water surface area difference (EOA). The calculation of the four evaluation indexes is shown in formula (5), where (x_i, y_i) is the coordinate of the i-th point in the water surface area boundary extraction result, (x'_i, y'_i) is the coordinate of the point in the manually delineated water surface area boundary nearest to (x_i, y_i), n is the number of points contained in the boundary extraction result, S_a is the area enclosed by the boundary extracted by the algorithm, and S_m is the area enclosed by the manually delineated boundary.
d_i = √((x_i − x'_i)² + (y_i − y'_i)²),
AE = (1/n)·Σ d_i,  RMSE = √((1/n)·Σ d_i²),  SD = √((1/n)·Σ (d_i − AE)²),  EOA = |S_a − S_m| / S_m    (5)
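Assuming the standard definitions of AE, RMSE, SD, and EOA for the variables defined above (nearest-point pairing between the extracted and reference boundaries), their computation can be sketched in Python; the function name and argument layout are illustrative:

```python
import math

def boundary_accuracy(extracted, reference, s_a, s_m):
    """Vertex-coordinate accuracy metrics between an extracted boundary and
    a manually delineated reference boundary. Each extracted vertex is
    paired with its nearest reference vertex (standard definitions assumed)."""
    dists = []
    for (x, y) in extracted:
        xr, yr = min(reference, key=lambda p: math.hypot(p[0] - x, p[1] - y))
        dists.append(math.hypot(xr - x, yr - y))
    n = len(dists)
    ae = sum(dists) / n                                    # mean error
    rmse = math.sqrt(sum(d * d for d in dists) / n)        # root mean square error
    sd = math.sqrt(sum((d - ae) ** 2 for d in dists) / n)  # standard deviation
    eoa = abs(s_a - s_m) / s_m                             # relative area difference
    return ae, rmse, sd, eoa
```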
The accuracy evaluation results are shown in table 2. As can be seen from the results in table 2, the water surface area boundary extraction algorithm designed in the invention achieves high accuracy.
TABLE 2 precision evaluation results
Finally, the water surface reconstruction efficiency of the method designed in the invention was tested; the results are shown in table 3. As can be seen from the results in table 3, the method of the invention achieves the desired efficiency on a conventionally configured computer.
TABLE 3 efficiency test results

Claims (10)

1. A water surface area reconstruction method for three-dimensional model data of unmanned aerial vehicle oblique photography is characterized by comprising the following steps:
converting the oblique photography three-dimensional model into point cloud data, and carrying out gridding processing on the point cloud data; taking the point cloud density and elevation information within the grids as constraint conditions, and designing an eight-neighborhood traversal method to extract the point cloud data at the water area boundary;
constructing an irregular triangulation network model of the water surface area based on the extraction result of the point cloud data at the water area boundary;
traversing EXIF information of the downward-looking image of the unmanned aerial vehicle, extracting a coordinate value of the image, converting the coordinate value into a Gaussian plane rectangular coordinate, and selecting the downward-looking image of the unmanned aerial vehicle with the central point coordinate closest to the central point coordinate of the irregular triangulation network model as a texture image primary selection result;
performing spatial inclusion relation analysis on the selected downward-looking images of the unmanned aerial vehicles and the irregular triangulation network model, and performing image splicing on the downward-looking images of the unmanned aerial vehicles until the selected downward-looking images of the unmanned aerial vehicles completely contain the irregular triangulation network model, namely acquiring final texture images of the water surface area;
and endowing the final texture image of the water surface area to an irregular triangulation network model of the water surface area to realize water surface reconstruction.
2. The method of claim 1, wherein the oblique photography three-dimensional model is converted into point cloud data, and then the point cloud data is subjected to pass-through filtering and then gridding.
3. The method of claim 1, wherein the threshold for pass-through filtering is dynamically determined from the point cloud elevations.
4. The method according to claim 1, wherein the grid is set to be square during gridding, and the side length of the grid is dynamically determined according to the density of the point cloud.
5. The method of claim 1, wherein the starting point cloud grid of the eight-neighborhood traversal is located at the water surface area boundary.
6. The method of claim 5, wherein the extracting point cloud data at the water area boundary by the eight-neighborhood traversal method with the density and elevation information of the point cloud in the grid as constraints comprises:
(1) calculating the point cloud density in each of the eight neighborhood grids of an initial grid and comparing it with the point cloud density in the initial grid; if the difference between the point cloud density of a neighborhood grid and that of the initial grid is less than a preset threshold value, recording the grid number; if no neighborhood grid differs from the initial grid by less than the threshold value, recording the number of the neighborhood grid with the smallest point cloud density difference from the initial grid;
(2) traversing the point cloud grids recorded in step (1), calculating the average point cloud elevation in each grid, comparing it with the average point cloud elevation in the initial grid, and selecting the grid whose average elevation differs least from that of the initial grid as the newly determined point cloud grid at the water surface area boundary;
(3) if the newly determined point cloud grid at the water surface area boundary has not been traversed, taking it as a new initial grid and repeating steps (1) and (2); otherwise, ending the water area boundary extraction procedure, the set of all searched point cloud grids at the water surface area boundary being the water surface area boundary extraction result.
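The traversal recited in claim 6 can be sketched as follows; the grid data layout (a dictionary mapping grid indices to density and mean elevation), the function name, and the termination handling are illustrative assumptions, not part of the claim:

```python
def trace_water_boundary(grids, start, d_thresh):
    """Eight-neighborhood traversal of point cloud grids along a water
    surface boundary. `grids` maps (row, col) -> (density, mean_elevation);
    `start` must lie on the boundary; `d_thresh` is the density threshold."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    boundary, current = [start], start
    visited = {start}
    while True:
        d0, h0 = grids[current]
        neigh = [(current[0] + dr, current[1] + dc) for dr, dc in offsets
                 if (current[0] + dr, current[1] + dc) in grids
                 and (current[0] + dr, current[1] + dc) not in visited]
        if not neigh:                # step (3): all neighbors traversed -> finished
            return boundary
        # step (1): neighbors whose point density is close to the current grid's
        cand = [g for g in neigh if abs(grids[g][0] - d0) < d_thresh]
        if not cand:                 # none below threshold: take smallest difference
            cand = [min(neigh, key=lambda g: abs(grids[g][0] - d0))]
        # step (2): among candidates, pick the closest mean elevation
        nxt = min(cand, key=lambda g: abs(grids[g][1] - h0))
        boundary.append(nxt)
        visited.add(nxt)
        current = nxt
```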
7. The method according to claim 1, wherein, based on the extracted three-dimensional point cloud data of the water surface area boundary, all point cloud elevation values are set to a uniform value, a plurality of randomly distributed points at the same elevation are added as vertexes, and the irregular triangulation network model of the water surface area is constructed by using the Delaunay algorithm.
8. The method of claim 1, wherein the analyzing the selected downward-looking unmanned aerial vehicle image and the irregular triangulation network model for spatial containment relationships comprises:
judging the inclusion relation between the selected downward-looking image of the unmanned aerial vehicle and the irregular triangulation network model: if the spatial range of the selected downward-looking image completely contains the irregular triangulation network model, taking the downward-looking image as the finally determined water surface area texture image; otherwise, searching, among the other downward-looking images of the unmanned aerial vehicle, for the image closest to the current downward-looking image, splicing the two into a new downward-looking image, judging the spatial inclusion relation between the new downward-looking image and the irregular triangulation network model, and repeating until the spatial range of the obtained downward-looking image completely contains the irregular triangulation network model.
9. The method of claim 1 or 8, wherein the selected aerial vehicle downward-looking image and the irregular triangulation model are analyzed for spatial inclusion based on the following equation:
X_min = X'_i − ⌊W/2⌋·Res,  X_max = X'_i + ⌊W/2⌋·Res;
Y_min = Y'_i − ⌊H/2⌋·Res,  Y_max = Y'_i + ⌊H/2⌋·Res;
X_min ≤ x_min,  Y_min ≤ y_min,  X_max ≥ x_max,  Y_max ≥ y_max;
in the formula, x_min and y_min are respectively the minimum horizontal and vertical coordinates of the vertices in the irregular triangulation network model, and x_max and y_max the corresponding maximum values; X_min and Y_min are respectively the minimum horizontal and vertical coordinates of the pixels in the downward-looking image of the unmanned aerial vehicle, and X_max and Y_max the corresponding maximum values; (X'_i, Y'_i) represents the rectangular coordinates of the center point of the downward-looking image of the unmanned aerial vehicle, W and H are the width and height of the downward-looking image, Res represents the spatial resolution of the downward-looking image, and ⌊·⌋ is the floor (round-down) function;
when the conditions in the above formula are all satisfied, the spatial range of the selected downward-looking image of the unmanned aerial vehicle is considered to completely contain the irregular triangulation network model of the water surface area.
10. The method according to claim 1, wherein, after the final water surface area texture image is obtained, it is cut by using the vector boundary of the water surface area irregular triangulation network model, and the resulting image data containing only the water surface area texture information is given to the water surface area irregular triangulation network model, so that water surface reconstruction is realized.
CN202210436345.6A 2022-04-25 2022-04-25 Water surface area reconstruction method for unmanned aerial vehicle oblique photography three-dimensional model data Pending CN114723907A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210436345.6A CN114723907A (en) 2022-04-25 2022-04-25 Water surface area reconstruction method for unmanned aerial vehicle oblique photography three-dimensional model data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210436345.6A CN114723907A (en) 2022-04-25 2022-04-25 Water surface area reconstruction method for unmanned aerial vehicle oblique photography three-dimensional model data

Publications (1)

Publication Number Publication Date
CN114723907A true CN114723907A (en) 2022-07-08

Family

ID=82246698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210436345.6A Pending CN114723907A (en) 2022-04-25 2022-04-25 Water surface area reconstruction method for unmanned aerial vehicle oblique photography three-dimensional model data

Country Status (1)

Country Link
CN (1) CN114723907A (en)


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114861475A (en) * 2022-07-11 2022-08-05 威海海洋职业学院 Real-time ocean simulation method and system based on sensing data
CN114861475B (en) * 2022-07-11 2022-09-16 威海海洋职业学院 Real-time ocean simulation method and system based on sensing data
CN116310225A (en) * 2023-05-16 2023-06-23 山东省国土测绘院 OSGB (open sensor grid) model embedding method and system based on triangle network fusion
CN116757004A (en) * 2023-08-21 2023-09-15 长江空间信息技术工程有限公司(武汉) EFDC three-dimensional water quality data multi-mode deduction method based on digital twin technology
CN116757004B (en) * 2023-08-21 2023-10-20 长江空间信息技术工程有限公司(武汉) EFDC three-dimensional water quality data multi-mode deduction method based on digital twin technology
CN117372246A (en) * 2023-10-08 2024-01-09 北京市测绘设计研究院 Partial flattening method for oblique photography three-dimensional model based on filtering algorithm
CN117372246B (en) * 2023-10-08 2024-03-22 北京市测绘设计研究院 Partial flattening method for oblique photography three-dimensional model based on filtering algorithm

Similar Documents

Publication Publication Date Title
CN114723907A (en) Water surface area reconstruction method for unmanned aerial vehicle oblique photography three-dimensional model data
CN112595258B (en) Ground object contour extraction method based on ground laser point cloud
EP2118854B1 (en) Exemplar/pde-based technique to fill null regions and corresponding accuracy assessment
Liu et al. Algorithmic foundation and software tools for extracting shoreline features from remote sensing imagery and LiDAR data
Neubert et al. Evaluation of remote sensing image segmentation quality–further results and concepts
CN110866531A (en) Building feature extraction method and system based on three-dimensional modeling and storage medium
CN107045733B (en) Method for modeling GIS (gas insulated switchgear) of transformer substation based on point cloud data
CN116310192A (en) Urban building three-dimensional model monomer reconstruction method based on point cloud
CN114219819A (en) Oblique photography model unitization method based on orthoscopic image boundary detection
CN110992366B (en) Image semantic segmentation method, device and storage medium
CN111047698B (en) Real projection image acquisition method
Rashidi et al. Ground filtering LiDAR data based on multi-scale analysis of height difference threshold
CN109389553B (en) Meteorological facsimile picture contour interpolation method based on T spline
CN113362359A (en) Building automatic extraction method of oblique photography data fused with height and spectrum information
Ren et al. Overall filtering algorithm for multiscale noise removal from point cloud data
CA2684893A1 (en) Geospatial modeling system providing data thinning of geospatial data points and related methods
CN111383335A (en) Crowd funding photo and two-dimensional map combined building three-dimensional modeling method
CN112017227A (en) Method for hybrid visualization of terrain model and tidal data generated by point cloud fusion
CN111598803A (en) Point cloud filtering method based on variable resolution voxel grid and sparse convolution
CN114241155A (en) Urban tree three-dimensional visualization method based on vehicle-mounted laser point cloud data
CN111895907B (en) Electricity tower point cloud extraction method, system and equipment
Ni et al. Applications of 3d-edge detection for als point cloud
JP2004093632A (en) Extraction method and system for topographic geometry, and program therefor
KR101079531B1 (en) A system for generating road layer using point cloud data
CN113838199B (en) Three-dimensional terrain generation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination