CN116824079A - Three-dimensional entity model construction method and device based on full-information photogrammetry - Google Patents



Publication number
CN116824079A
Authority
CN
China
Prior art keywords
image
space
ground
view
outer layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310775920.XA
Other languages
Chinese (zh)
Inventor
杨立君
李梦博
张荣春
鲍幼锋
吴彤馨
李玮霖
曾国栋
王俊杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202310775920.XA priority Critical patent/CN116824079A/en
Publication of CN116824079A publication Critical patent/CN116824079A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05: Geographic models
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60: Protecting data
    • G06F21/602: Providing cryptographic facilities or services
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00: Indexing scheme for image generation or computer graphics
    • G06T2210/04: Architectural design, interior design

Abstract

The application discloses a three-dimensional solid model construction method and device based on full-information photogrammetry. The method comprises the following steps. Step S1: acquire aerial multi-view oblique images, near-ground outer-layer multi-angle close-range images and near-ground inner-layer multi-angle close-range images. Step S2: perform aerial triangulation (AT) separately on the aerial multi-view oblique images, the near-ground outer-layer multi-angle close-range images and the near-ground inner-layer multi-angle close-range images. Step S3: merge the AT result of the near-ground outer-layer close-range images with the AT result of the near-ground inner-layer close-range images, then run an overall AT on the merged result to obtain a first overall AT result. Step S4: merge the first overall AT result with the AT result of the aerial multi-view oblique images, then run an overall AT on the merged result to obtain a second overall AT result. Step S5: perform multi-view matching on the second overall AT result to obtain dense point cloud data. Step S6: construct the final three-dimensional solid model.

Description

Three-dimensional entity model construction method and device based on full-information photogrammetry
Technical Field
The application relates to the technical field of three-dimensional modeling, in particular to a three-dimensional entity model construction method and device based on full-information photogrammetry.
Background
Ancient architecture, as cultural and historical heritage, has important historical, cultural and artistic value. Over the long course of history, however, ancient buildings are inevitably affected by natural factors such as wind and rain erosion and by human factors such as tourism development, so that this outstanding historical culture is continuously being damaged, which poses a great challenge for the protection and restoration of ancient architecture.
Three-dimensional modeling of ancient architecture refers to the process of converting the physical form of an ancient building into a three-dimensional digital model so as to facilitate operations such as digital display, protection and restoration. Three-dimensional modeling methods for ancient architecture at home and abroad have gone through the following stages of development. The first stage can be traced back to the 1970s, when traditional surveying and mapping methods were used: modeling was done mainly by manual measurement and drawing, i.e. the dimensions and structure of the building were measured and drawn by hand and the model was then built manually. This method guarantees modeling accuracy but is time-consuming and labor-intensive. The second stage can be traced back to the 1990s, when, with the development of digital technology, digital surveying instruments such as theodolites and total stations began to be applied to the modeling of ancient architecture. In 1991, the Heritage Recording and Information System (HRIS) project in the United Kingdom proposed a method for modeling ancient architecture based on total-station measurement; the mapping results obtained by this method, however, can only reflect the overall structure of a building and cannot capture its texture information. The beginning of the 21st century marks the third stage of the development of three-dimensional modeling of ancient architecture, namely the stage of modeling by unmanned-aerial-vehicle (UAV) remote sensing. With the rapid development of computer technology and computer graphics, digital reconstruction has become a new means of protecting ancient architecture, and real-scene three-dimensional reconstruction is one of the most important methods for the digital reconstruction of ancient architecture.
Compared with traditional manual measurement and drawing, real-scene three-dimensional reconstruction has the advantages of high efficiency, high precision and automation, and can greatly improve the efficiency and accuracy of the digital reconstruction of ancient architecture. Commonly used ground-based real-scene reconstruction methods include close-range photogrammetry and three-dimensional laser scanning, while commonly used aerial methods include UAV oblique photogrammetry. However, a model obtained with a single modeling technique can hardly meet the requirements of digitizing ancient architecture. For example, Zhang Lei performed three-dimensional modeling of small cultural relics based on close-range photogrammetry, but when the technique is applied to large buildings it suffers from problems such as occlusion and texture discontinuities. Chen Xuanyu et al. reconstructed the playground and library of Wuhan University in three dimensions based on oblique photogrammetry; for ancient buildings with complex structures, occlusion causes the loss of lateral texture information, so the method can hardly meet the application requirements of three-dimensional models of ancient architecture. Sojunfeng et al. used non-contact laser scanning to reconstruct the Butterfly Hall of North China University in three dimensions, but that method suffers from complex later texture processing and very large three-dimensional point cloud data.
Traditional three-dimensional modeling uses a single modeling technique, whose inherent limitations cannot meet the requirements of fine building modeling. On the one hand, shooting angles, distances and similar factors make the facade textures captured by UAV oblique photography insufficiently fine. On the other hand, because traditional buildings have wide eaves, UAV oblique photography cannot capture the structural textures under the eaves, such as brackets (dougong) and plaques. To address these problems, modeling methods that combine aerial photography with ground photography and integrate full-information images have been proposed at home and abroad. For example, Yang Luhong et al. proposed a three-dimensional modeling technique integrating aerial and ground images; Sun Baoyan et al. reconstructed large plaque-bearing ancient buildings in three dimensions based on air-ground multi-data mutual-assistance fusion; and Jutzi Boris proposed a new method of modeling UAV images together with laser scanner data. In summary, although the various available methods can improve modeling accuracy and reduce modeling defects to a certain extent, problems remain in reconstructing 3D models of geographical scenes. In particular, photos acquired with an ordinary camera lack accurate POS (position and orientation system) data, so fusing ordinary-camera photos with aerial-survey photos for aerial triangulation becomes very complicated in later office processing. Meanwhile, the multi-source data used also differ considerably in coordinate system, resolution, CMOS pixel size and so on, so the matching of corresponding points during aerial triangulation often fails when generating point clouds in later processing.
Disclosure of Invention
The application aims to solve the problems in the prior art and provides a three-dimensional solid model construction method and device based on full-information photogrammetry, which are used for solving at least one of the technical problems.
Based on one aspect of the specification, the application provides a three-dimensional entity model construction method based on full-information photogrammetry, which comprises the following steps:
step S1: acquiring aerial multi-view oblique images, near-ground outer-layer multi-angle close-range images and near-ground inner-layer multi-angle close-range images of a target three-dimensional entity;
step S2: performing aerial triangulation (AT) separately on the aerial multi-view oblique images, the near-ground outer-layer multi-angle close-range images and the near-ground inner-layer multi-angle close-range images, to obtain an aerial oblique-image AT result, a near-ground outer-layer close-range-image AT result and a near-ground inner-layer close-range-image AT result;
step S3: merging the near-ground outer-layer close-range-image AT result with the near-ground inner-layer close-range-image AT result, and performing an overall AT on the merged result to obtain a first overall AT result;
step S4: merging the first overall AT result with the aerial oblique-image AT result, and performing an overall AT on the merged result to obtain a second overall AT result;
step S5: performing multi-view matching on the second overall AT result to obtain dense point cloud data;
step S6: constructing the final three-dimensional entity model based on the dense point cloud data.
With the above technical scheme, aerial triangulation is performed separately on the acquired aerial multi-view oblique images, near-ground outer-layer multi-angle close-range images and near-ground inner-layer multi-angle close-range images, computing the object-space coordinates corresponding to the image points and the exterior orientation elements of each photo; the AT results of different resolutions are then merged and the overall AT solution is run several times. This addresses the tendency of aerial triangulation to fail on multi-source, multi-scale, multi-form uncalibrated images and guarantees the stability of data processing.
The near-ground outer-layer multi-angle close-range images bridge the near-ground inner-layer multi-angle close-range images and the aerial multi-view oblique images, so that the different image sets are better fused together for modeling.
Further, in step S1, the method for acquiring the aerial multi-view oblique images includes:
calculating the photographic flying height H0 from the lens focal length, the ground resolution and the pixel size;
acquiring the actual heading overlap Px and the actual side overlap Py of aerial photos over the top of the target three-dimensional entity and the distance Hx from the plane of the highest point of the target entity to the flight-path plane, and calculating the preset heading overlap P0 and the preset side overlap P1;
acquiring the aerial multi-view oblique images according to the flying height H0, the preset heading overlap P0 and the preset side overlap P1.
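As a rough numerical check of the flying-height step above, the relation H0 = f · GSD / a can be evaluated directly. The sketch below assumes the DJI FC6310R parameters named later in the embodiment (f = 8.8 mm, pixel size about 2.41 um) and a target GSD of about 1.64 cm; none of these values are stated in this paragraph itself.

```python
# Flying height from lens focal length f, target ground sample distance (GSD)
# and sensor pixel size a: H0 = f * GSD / a, with all lengths in metres.

def flying_height_m(focal_length_mm: float, gsd_m: float, pixel_size_um: float) -> float:
    """Return the photographic flying height H0 in metres."""
    return (focal_length_mm * 1e-3) * gsd_m / (pixel_size_um * 1e-6)

# Assumed FC6310R parameters: f = 8.8 mm, pixel pitch ~2.41 um; GSD ~1.64 cm.
h0 = flying_height_m(8.8, 0.0164, 2.41)   # roughly 60 m, as in the embodiment
```

Doubling the target GSD doubles the permissible flying height, which is the trade-off the route-planning step balances against texture fineness.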
Further, in step S1, the method for acquiring the near-ground inner-layer multi-angle close-range images includes:
setting the horizontal distance H1 from the inner flight-path plane to the side faces of the target three-dimensional entity;
calculating the photographic baseline length B1 and the strip spacing D1 for the inner-layer close-range acquisition:
B1 = (1 − P0) · l1 · H1 / f, D1 = (1 − P1) · w1 · H1 / f
where f is the lens focal length, l1 is the frame length of the inner-layer close-range images and w1 is the frame width of the inner-layer close-range images;
calculating the shooting start position from the area of the acquisition region, the photographic baseline length B1 and the strip spacing D1, to obtain the first track-point positions for the inner-layer close-range acquisition;
setting the gimbal pitch angle for each flight strip in the inner flight-path plane, and acquiring the near-ground inner-layer multi-angle close-range images according to the first track-point positions and the gimbal pitch angles.
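The baseline and strip-spacing computation above can be sketched as follows. The sensor dimensions (13.2 mm × 8.8 mm and f = 8.8 mm, assumed here from the 1-inch FC6310R camera named in the embodiment) and the 80%/60% overlaps are illustrative assumptions.

```python
def baseline_m(frame_len_mm: float, dist_m: float, focal_mm: float,
               overlap_heading: float) -> float:
    """B = (1 - P0) * l * H / f: waypoint spacing along one strip."""
    footprint = frame_len_mm * dist_m / focal_mm   # footprint length on the facade
    return (1 - overlap_heading) * footprint

def strip_spacing_m(frame_wid_mm: float, dist_m: float, focal_mm: float,
                    overlap_side: float) -> float:
    """D = (1 - P1) * w * H / f: spacing between adjacent flight strips."""
    return (1 - overlap_side) * frame_wid_mm * dist_m / focal_mm

# Embodiment-like values: 1" sensor 13.2 x 8.8 mm, f = 8.8 mm, H1 = 10 m,
# 80% heading overlap and 60% side overlap.
b1 = baseline_m(13.2, 10, 8.8, 0.80)       # ~3 m between waypoints
d1 = strip_spacing_m(8.8, 10, 8.8, 0.60)   # ~4 m between strips
```

The 4 m strip spacing reproduced here matches the value the embodiment reports for the inner-layer acquisition.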
Further, in step S1, the method for acquiring the near-ground outer-layer multi-angle close-range images includes:
setting the outer flight-path planes, which comprise an outer vertical flight-path plane and an outer horizontal flight-path plane, with the horizontal distance from the outer vertical flight-path plane to the side faces of the target three-dimensional entity set to H2 and the vertical distance from the outer horizontal flight-path plane to the highest point of the target three-dimensional entity also set to H2;
calculating the photographic baseline length B2 and the strip spacing D2 for the outer-layer close-range acquisition:
B2 = (1 − P0) · l2 · H2 / f, D2 = (1 − P1) · w2 · H2 / f
where l2 is the frame length of the outer-layer close-range images and w2 is the frame width of the outer-layer close-range images;
calculating the shooting start position from the area of the acquisition region, the photographic baseline length B2 and the strip spacing D2, to obtain the second track-point positions for the outer-layer close-range acquisition;
setting the gimbal pitch angle for each flight strip in the outer flight-path planes, and acquiring the near-ground outer-layer multi-angle close-range images according to the second track-point positions and the gimbal pitch angles.
Further, H0/5 ≤ H2 ≤ 5H1.
This addresses the problem that images from different data sets cannot be matched when their resolutions differ too much: the Bentley ContextCapture guidance recommends keeping the resolution difference between images of different data sets within a factor of 5. Since ground resolution is proportional to shooting distance, keeping the resolution difference between the close-range images and the oblique images within a factor of 5 requires the outer-layer acquisition distance H2 to lie between 1/5 and 5 times the oblique flying height H0, and also between 1/5 and 5 times the inner-layer acquisition distance H1. Since H0 > H1, the range of the outer-layer acquisition distance H2 is therefore set to between H0/5 and 5H1.
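A minimal sketch of the distance check implied by this constraint (variable names are illustrative, not from the patent):

```python
def outer_distance_ok(h2_m: float, h0_m: float, h1_m: float) -> bool:
    """Check the resolution-ratio constraint H0/5 <= H2 <= 5*H1 derived from
    the ~5x resolution-difference guideline for merged image data sets."""
    return h0_m / 5 <= h2_m <= 5 * h1_m

ok = outer_distance_ok(26, 60, 10)   # True: 12 m <= 26 m <= 50 m
```

With the embodiment's H0 = 60 m and H1 = 10 m this yields exactly the 12 m to 50 m window quoted later, inside which 26 m was chosen.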
Further, the step S5 includes:
step S5.1: extracting feature points from all images and recording their feature information;
step S5.2: completing feature-point matching under the epipolar constraint, and generating initial patches from the matched points by an iterative loop;
step S5.3: diffusing the initial patches under the constraint conditions to generate new patches in the neighborhood of the initial patches;
step S5.4: filtering the new patches to obtain dense point cloud data.
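The seed-and-expand pattern behind steps S5.2 to S5.4 can be illustrated on a toy grid. The photo-consistency test is stubbed with a synthetic predicate here; a real PMVS-style implementation scores normalised cross-correlation between image patches under the recovered camera geometry.

```python
from collections import deque

def expand_patches(seeds, consistent, grid_w, grid_h):
    """Diffuse patches from seed cells into 4-neighbour cells (cf. step S5.3)."""
    patches, queue = set(seeds), deque(seeds)
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < grid_w and 0 <= ny < grid_h
                    and (nx, ny) not in patches and consistent(nx, ny)):
                patches.add((nx, ny))
                queue.append((nx, ny))
    return patches

def filter_patches(patches):
    """Drop patches with no 4-neighbour support, a crude stand-in for S5.4."""
    def has_neighbour(p):
        x, y = p
        return any(q in patches for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)))
    return {p for p in patches if has_neighbour(p)}

# Synthetic scene: only cells inside a 6x6 block pass the consistency test.
dense = expand_patches({(2, 2)}, lambda x, y: x < 6 and y < 6, 10, 10)
dense = filter_patches(dense)
```

Starting from a single seed, diffusion fills the whole photo-consistent region, which is why the full-information image set yields a dense rather than sparse cloud.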
Further, the step S6 includes:
step S6.1: constructing a triangulated irregular network (TIN) from the dense point cloud data;
step S6.2: mapping surface textures onto the three-dimensional white model formed by the TIN to obtain an initial three-dimensional solid model;
step S6.3: correcting the geometric distortion and displacement of the oblique images in the initial three-dimensional solid model to obtain the final three-dimensional solid model.
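Step S6.1 turns points into connected triangles. A true TIN uses Delaunay-style triangulation on irregular points; the sketch below triangulates a regular grid of points only to show the mesh bookkeeping involved, and is not the patent's method.

```python
def grid_mesh(nx: int, ny: int):
    """Triangulate an nx-by-ny grid of points (row-major vertex indices):
    each grid cell is split into two triangles."""
    tris = []
    for j in range(ny - 1):
        for i in range(nx - 1):
            a = j * nx + i          # lower-left corner of the cell
            b, c, d = a + 1, a + nx, a + nx + 1
            tris += [(a, b, c), (b, d, c)]
    return tris

mesh = grid_mesh(4, 3)   # 3 x 2 cells -> 12 triangles over 12 vertices
```

Texturing (step S6.2) then assigns each such triangle a patch of imagery chosen from the view that sees it best.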
Further, before step S2, the method further includes: preprocessing the aerial multi-view oblique images, the near-ground inner-layer multi-angle close-range images and the near-ground outer-layer multi-angle close-range images, where the preprocessing includes distortion correction, brightness adjustment, contrast adjustment, saturation adjustment and associating the image data with the POS data.
Based on another aspect of the present disclosure, there is provided a three-dimensional solid model construction apparatus based on full information photogrammetry, including:
an image acquisition module, used for acquiring aerial multi-view oblique images, near-ground outer-layer multi-angle close-range images and near-ground inner-layer multi-angle close-range images of a target three-dimensional entity;
an aerial triangulation (AT) module, used for performing AT separately on the aerial multi-view oblique images, the near-ground outer-layer multi-angle close-range images and the near-ground inner-layer multi-angle close-range images;
an overall AT module, used for merging the near-ground outer-layer close-range-image AT result with the near-ground inner-layer close-range-image AT result and performing an overall AT on the merged result; the overall AT module is also used for merging the first overall AT result with the aerial multi-view oblique-image AT result and performing an overall AT on the merged result;
a multi-view matching module, used for performing multi-view matching on the second overall AT result to obtain dense point cloud data;
a three-dimensional model construction module, used for constructing the final three-dimensional entity model based on the dense point cloud data.
In this technical scheme, multi-angle, multi-layer images of the target three-dimensional entity are acquired by the image acquisition module; the acquired multi-source images are given independent aerial triangulation by the AT module and then multi-step overall aerial triangulation by the overall AT module; the multi-view matching module performs multi-view matching on the AT result; and the three-dimensional model construction module then builds the final three-dimensional entity model.
Further, the device also comprises a preprocessing module, which is used for preprocessing the aerial multi-view oblique images, the near-ground outer-layer multi-angle close-range images and the near-ground inner-layer multi-angle close-range images.
Compared with the prior art, the application has the beneficial effects that:
(1) In the three-dimensional solid model construction method based on full-information photogrammetry, aerial triangulation is performed separately on the acquired aerial multi-view oblique images, near-ground outer-layer multi-angle close-range images and near-ground inner-layer multi-angle close-range images; the object-space coordinates corresponding to the image points and the exterior orientation elements of each photo are computed separately; AT results of different resolutions are then merged and the overall AT solution is run several times. This addresses the tendency of aerial triangulation to fail on multi-source, multi-scale, multi-form uncalibrated images and guarantees the stability of data processing.
(2) PMVS dense matching of the full-information images can match a larger number of corresponding image points, so the real-scene three-dimensional model has richer texture details and a model of better quality is obtained.
(3) In the three-dimensional solid model construction device based on full-information photogrammetry, the image acquisition module acquires multi-angle, multi-layer images of the target three-dimensional entity; the AT module and the overall AT module perform independent aerial triangulation on the acquired multi-source images followed by multi-step overall aerial triangulation; and the multi-view matching module performs multi-view matching on the AT results before the three-dimensional model construction module builds the three-dimensional solid model.
Drawings
FIG. 1 is a flow chart of a three-dimensional solid model construction method according to an embodiment of the application;
FIG. 2 is a schematic diagram of a device according to an embodiment of the present application;
FIG. 3 is a real view of a three-dimensional solid model modeling object according to an embodiment of the present application;
FIG. 4 is a schematic view of the dead angles of oblique photogrammetry according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a full information image acquisition track surface according to an embodiment of the present application;
FIG. 6 is a diagram illustrating the positions of the full-information images of the Tianwang Hall (天王殿) according to an embodiment of the present application;
fig. 7 is a schematic diagram of PMVS dense matching results according to an embodiment of the present application;
FIG. 8 is a diagram showing the white-model effect of the full-information photogrammetry model according to an embodiment of the present application;
FIG. 9 is a texture map of a full information photogrammetry model according to an embodiment of the present application;
FIG. 10 is a schematic view of the location of various features of a building according to an embodiment of the present application;
FIG. 11 is a graph comparing the position accuracy of different models according to an embodiment of the present application;
FIG. 12 is a graph showing the comparison of point cloud bias distributions for different models according to an embodiment of the present application;
FIG. 13 is a diagram illustrating the comparison of different model effect graphs according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application is made clearly and fully with reference to the accompanying drawings, in which some, but not all, embodiments of the application are shown. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without inventive effort fall within the scope of the present application.
As shown in fig. 1, the present embodiment provides a three-dimensional solid model construction method based on full-information photogrammetry. In this embodiment, the Tianwang Hall (天王殿, Hall of the Heavenly Kings) of a temple in Pingdingshan, Henan is taken as the study object (as shown in fig. 3). The temple was founded at the end of the Eastern Han dynasty and has a history of 1800 years; a large number of ancient buildings and works of calligraphy and painting by ancient masters are preserved in it, giving it extremely high historical, cultural and artistic value. The Tianwang Hall has a very complex architectural structure and embodies many of the difficulties encountered in the three-dimensional modeling of ancient architecture. In particular, because complex building components occlude one another, traditional real-scene three-dimensional modeling methods leave many shooting dead angles (areas A and B in fig. 4) and cannot meet the requirements of fine modeling of the ancient building.
The full-information image acquisition device is a DJI Phantom 4 RTK unmanned aerial vehicle image acquisition system mounting a DJI FC6310R camera with an 84° field of view (FOV) and an image size of 5472 × 3648 pixels; it is a portable, flexible, lightweight and low-cost consumer UAV.
The three-dimensional model construction process for the Tianwang Hall in this embodiment includes:
Step S1: acquiring the aerial multi-view oblique images, near-ground outer-layer multi-angle close-range images and near-ground inner-layer multi-angle close-range images of the Tianwang Hall. First, parameters such as the preset flying height and the preset overlaps are calculated for route planning; the routes are converted to KML format and imported into the DJI GS RTK app for data acquisition. The survey was completed on 3 May 2022; the images were captured around noon in clear, windless weather with high visibility, and the satellite signal in the study area was good with little electromagnetic interference, which facilitated the UAV survey.
(1) Specifically, the acquisition method of the aerial multi-view oblique images is as follows.
The photographic flying height H0 is calculated from the lens focal length f, the ground resolution GSD and the pixel size a:
H0 = f × GSD / a
In this example the calculated flying height is H0 = 60 m (as shown in fig. 5), and the relative flying height above the highest point of the Tianwang Hall is 38 m.
The actual heading overlap Px and actual side overlap Py of the aerial photos over the top of the target three-dimensional entity, and the distance (relative flying height) Hx from the plane of the highest point of the target entity to the flight-path plane, are acquired, and the preset heading overlap P0 and preset side overlap P1 are calculated.
According to the requirements of the low-altitude digital aerial photography specification, the minimum actual heading overlap at the highest point of the Tianwang Hall must be no less than 53% and the minimum actual side overlap no less than 8%; the calculated preset heading overlap is therefore no less than 70.23% and the preset side overlap no less than 41.73%.
The aerial multi-view oblique images are acquired according to the flying height H0, the preset heading overlap P0 and the preset side overlap P1.
In this embodiment the preset heading overlap is designed as 80% and the preset side overlap as 70%, and 874 aerial multi-view oblique images were collected.
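The preset-overlap figures quoted above follow from the fact that the photo footprint shrinks linearly with object distance, so the overlap required at the rooftop fixes the preset at ground level. A sketch of that back-calculation, using the 53%/8% minima and the 38 m/60 m heights from the embodiment:

```python
def preset_overlap(p_min: float, hx_m: float, h0_m: float) -> float:
    """Ground-level overlap preset needed to keep at least p_min overlap at
    relative height hx: 1 - P_preset = (1 - p_min) * Hx / H0."""
    return 1 - (1 - p_min) * hx_m / h0_m

p0 = preset_overlap(0.53, 38, 60)   # heading: ~0.7023
p1 = preset_overlap(0.08, 38, 60)   # side:    ~0.4173
```

The designed 80%/70% presets comfortably exceed these minima, which is why they satisfy the specification at the Hall's ridge.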
(2) Specifically, the acquisition method of the near-ground inner-layer multi-angle close-range images is as follows.
As shown in fig. 5, the horizontal distance from the inner flight-path plane to the side faces of the target three-dimensional entity is set to H1 = 10 m.
The photographic baseline length B1 (the shooting distance between two adjacent waypoints of the same strip) and the strip spacing D1 for the inner-layer close-range acquisition are calculated as
B1 = (1 − P0) · l1 · H1 / f, D1 = (1 − P1) · w1 · H1 / f
where f is the lens focal length, l1 is the frame length of the inner-layer close-range images and w1 is the frame width of the inner-layer close-range images.
When Hx = H0, Px = P0 and Py = P1, so for the inner close-range layer the preset overlaps equal the actual overlaps; it therefore suffices to design the heading and side overlaps according to the low-altitude digital aerial photography standard to meet the accuracy requirements. In this embodiment the preset heading overlap for the near-ground inner-layer images is 80% and the preset side overlap is 60%. The calculated strip spacing is 4 m (i.e. the flight lines are 4 m apart); given the height of the Tianwang Hall, 5 layers of image data need to be acquired.
The shooting start position is calculated from the area of the acquisition region, the photographic baseline length B1 and the strip spacing D1, giving the first track-point positions for the inner-layer close-range acquisition.
The gimbal pitch angle is set for each flight strip in the inner flight-path plane, and the near-ground inner-layer multi-angle close-range images are acquired according to the first track-point positions and the gimbal pitch angles.
As shown in fig. 5, the inner-layer close-range images are shot in 5 layers from top to bottom. Layer 1: the UAV gimbal is set to −15° for fixed-point shooting. Layer 2: the gimbal is set to −15° and 0° for fixed-point shooting. Layer 3: the gimbal is set to −15°, 0° and 5° for fixed-point shooting. Layer 4: the gimbal is set to −15°, 0° and 10° for fixed-point shooting. Layer 5: the gimbal is set to −15°, 0° and 15° for fixed-point shooting.
(3) The method for acquiring the near-ground outer-layer multi-angle close-range images is as follows:
As shown in fig. 5, an outer-layer track surface is provided, comprising an outer-layer vertical track surface and an outer-layer horizontal track surface. The horizontal distance from the outer-layer vertical track surface to the side surface of the target three-dimensional entity is H2, and the vertical distance from the outer-layer horizontal track surface to the highest point of the building is also H2, wherein (1/5)·H0 ≤ H2 ≤ 5·H1. In this embodiment 12 m ≤ H2 ≤ 50 m, 26 m is finally selected, and the distance between the outer-layer horizontal track surface and the ground is 48 m;
calculating the length of a photographing base line corresponding to the acquisition of the close-range outer layer imageAnd the space between the navigation belts->
Wherein:is the length of the close-up outer layer frame +.>The width of the image width of the close-up outer layer;
based on the area of the acquisition region and the photographic baseline lengthSpace of the navigation belt->Calculating the shooting starting point position to obtain a second track point position corresponding to the acquisition of the outer layer close-range image;
and setting a cradle head dip angle corresponding to each navigation belt in the outer-layer track plane, and collecting the multi-angle close-range image of the outer layer close to the ground according to the position of the second track point and the cradle head dip angle. In this embodiment, the multi-angle close-up image of the outer layer near the ground adopts a vertex but angle shooting method, and the angles of the Phantom 4RTK cradle head are all set to be-30 ° fixed point shooting.
A total of 188 near-ground outer-layer and near-ground inner-layer multi-angle close-range images were acquired in this embodiment.
The aerial multi-view oblique images, near-ground inner-layer multi-angle close-range images and near-ground outer-layer multi-angle close-range images acquired in step S1 are preprocessed. The preprocessing comprises distortion correction (the camera's interior orientation elements and distortion coefficients are imported into image-correction software to remove distortion and eliminate its negative effect on subsequent processing), brightness adjustment, contrast adjustment, saturation adjustment, and association of the image data with the POS data (the POS records and image names are processed so that each image name corresponds one-to-one with its POS record).
Step S2: performing space-three (aerial triangulation) densification on the aerial multi-view oblique images, the near-ground outer-layer multi-angle close-range images and the near-ground inner-layer multi-angle close-range images respectively. Specifically: with the assistance of the POS data and based on the principle of bundle-method aerial triangulation, the three image sets are imported into ContextCapture software and solved separately using the collinearity equations, obtaining an aerial multi-view oblique image space-three densification result, a near-ground inner-layer multi-angle close-range image space-three densification result and a near-ground outer-layer multi-angle close-range image space-three densification result;
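The collinearity equations underlying bundle-method aerial triangulation relate a ground point, the camera station and its image point. A generic textbook sketch (not ContextCapture's API; the symbols X, Y, Z, Xs, Ys, Zs, R and f are the usual photogrammetric notation, not taken from the patent):

```python
def project_collinear(X, Y, Z, Xs, Ys, Zs, R, f):
    """Collinearity equations: image coordinates (x, y) of ground point
    (X, Y, Z) for a camera at (Xs, Ys, Zs) with rotation matrix R
    (rows a, b, c) and principal distance f."""
    dX, dY, dZ = X - Xs, Y - Ys, Z - Zs
    denom = R[2][0] * dX + R[2][1] * dY + R[2][2] * dZ
    x = -f * (R[0][0] * dX + R[0][1] * dY + R[0][2] * dZ) / denom
    y = -f * (R[1][0] * dX + R[1][1] * dY + R[1][2] * dZ) / denom
    return x, y
```

Bundle adjustment minimises the residuals between such projected coordinates and the measured image coordinates over all images simultaneously.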
step S3: merging the near-ground outer-layer multi-angle close-range image space-three densification result and the near-ground inner-layer multi-angle close-range image space-three densification result, and performing overall space-three densification on the merged result to obtain a first overall space-three densification result;
step S4: merging the first overall space-three densification result with the aerial multi-view oblique image space-three densification result, and performing overall space-three densification on the merged result to obtain a second overall space-three densification result, i.e., the full-information image positions of the Tianwang Hall (Hall of Heavenly Kings) (shown in fig. 6).
Step S5: performing multi-view matching (PMVS dense matching) on the second overall space-three densification result to obtain dense point cloud data (shown in fig. 7); the steps are as follows:
step S5.1: extracting feature points from all images using feature-point operators such as Harris, DoG and SIFT, and recording the feature information of the feature points;
step S5.2: completing feature-point matching under the epipolar-line constraint, and generating initial patches from the matched points by loop iteration;
step S5.3: expanding the initial patches under the constraint conditions, and generating new patches in the neighbourhood of the initial patches;
step S5.4: filtering the new patches to obtain dense point data. Patches with relatively large errors may be generated during patch reconstruction, so filtering is needed to ensure the accuracy of the reconstructed patches; three filters are applied in the filtering stage.
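The filtering step can be illustrated with a simplified visibility-consistency check. This is a stand-in for PMVS's three actual filters (which also use neighbourhood support and depth-order tests); the data layout, function name and thresholds are illustrative:

```python
def filter_patches(patches, max_error=1.0, min_views=3):
    """Keep only patches whose centre reprojects consistently (error below
    max_error pixels) in at least min_views images; discard the rest as
    likely outliers."""
    kept = []
    for p in patches:
        # p is a dict carrying the per-image reprojection errors of the patch
        good_views = [e for e in p["errors"] if e <= max_error]
        if len(good_views) >= min_views:
            kept.append(p)
    return kept
```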
When this method is used for matching, redundant information is fully exploited and the coordinates of homonymous (same-name) points are located quickly and accurately, so that the three-dimensional coordinates of the ground target are obtained precisely.
Step S6: and constructing and obtaining a final three-dimensional entity model based on the dense point cloud data, wherein the method specifically comprises the following steps of:
step S6.1: constructing a triangulated irregular network (TIN) from the dense point cloud data to form a high-resolution, high-precision digital surface model (DSM), the overall model being built from the triangular meshes of the individual ground features;
step S6.2: applying surface texture to the three-dimensional white model (shown in fig. 8) formed by the irregular triangular network to obtain an initial three-dimensional solid model. After the irregular triangular network is constructed, surface texture must be applied to the three-dimensional white model. Texture mapping is, in essence, finding the correspondence between two-dimensional image points and the three-dimensional object surface, and mapping the colours of the two-dimensional points onto the white-model surface according to this correspondence, yielding a three-dimensional model that matches human vision.
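The 2-D-to-3-D correspondence at the heart of texture mapping is commonly expressed with barycentric coordinates: the weights of an image point inside a projected triangle transfer the sampled colour to the same weights on the mesh triangle. A minimal sketch of the weight computation (a common technique, not the patent's exact procedure):

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates (u, v, w) of 2-D point p in triangle (a, b, c);
    u + v + w == 1, with u the weight of vertex a, v of b, w of c."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    d = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)  # twice the signed area
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / d
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / d
    return u, v, 1.0 - u - v
```

The same (u, v, w) weights evaluated on the 3-D triangle's texture coordinates give the pixel to sample in the source image.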
Step S6.3: correcting the geometric distortion and displacement of the oblique images in the initial three-dimensional solid model to obtain the final three-dimensional solid model (shown in fig. 9). Digital differential rectification is used to correct the geometric distortion and displacement of the oblique images, reducing projection defects of various causes so that the oblique images are restored to their correct positions.
In this embodiment, three-dimensional solid model quality evaluation is also performed, which specifically includes the following steps:
according to the characteristics of the object to be studied, 5 control points and 11 check points are selected in the embodiment, and the points have obvious positioning characteristics, such as door and window corner points, plaque corner points, house corner points and the like (shown in fig. 3). Real coordinates of the control point and the detection point are measured by using an RTK+total station front intersection method. Ground points were measured using RTK-GPS and the day Wang Dian surface feature points were measured in a front-crossing manner. The control point and check point space reference coordinate system is CGCS 2000, projection is Gaussian projection, and the elevation reference is '1985 national elevation reference', which are all in meters. And the quality evaluation of the three-dimensional live-action model is completed from subjective and objective aspects. Objective evaluation was performed from both the length-width variation of the portion of the model where the problem most easily occurred (as shown in fig. 10) and the positional accuracy of the inspection point, and the results were shown in fig. 11.
Comparative analysis is another common way to evaluate a three-dimensional real-scene model. The superiority of the present result is demonstrated by comparison with a three-dimensional real-scene model constructed by the traditional method, mainly in terms of the number of dense matching points (fig. 12), the fineness of the white model, the richness of the model texture, and how closely the real-scene model approximates the actual ground features (fig. 13).
In summary, for ancient buildings, which have complex structures, diverse components and rich colour and texture information, and for which the three-dimensional modelling of inscriptions and fine patterns is especially difficult, the present application obtains a fine three-dimensional real-scene model with rich texture information, close to the real scene and of high positional accuracy. The application acquires the aerial multi-view images and the near-ground multi-layer multi-angle images with the same UAV equipment; the obtained imagery has full-information characteristics, solving the problem of complex buildings being occluded by their own structure. The full-information image acquisition method is convenient, fast and easy to operate, and in particular is not limited by space. The application adopts step-by-step overall joint space-three processing (i.e., space-three is first performed on the three image sets separately, and overall space-three is then performed twice on the three densification results): the aerial multi-view images and the near-ground multi-layer multi-angle images are densified separately, and the results are finally subjected to multiple overall adjustments. This space-three densification workflow relaxes the adjustment system's requirement for consistency in image scale, camera principal distance and the like, and offers stable solving performance and high accuracy. PMVS dense matching is introduced to improve the accuracy of the dense matching points, increase their number, construct a more accurate irregular triangular network, and guarantee the fineness of the three-dimensional real-scene model texture.
The full-information photogrammetry method and device for reconstructing three-dimensional real-scene models of ancient buildings improve on traditional oblique-photogrammetry three-dimensional modelling. They are particularly suitable for fine modelling of small-area independent ground features, can effectively solve key problems such as the digitisation and faithful reproduction of ancient-building models, and are of great significance for protecting ancient buildings and restoring damaged ones.
According to this embodiment, the acquisition method combining aerial multi-view images with near-ground multi-layer multi-angle images captures full-information imagery of the ancient building, overcoming the problem of missing image information in ancient-building three-dimensional modelling and providing a necessary condition for its fine three-dimensional reconstruction.
Meanwhile, fixed-point UAV photography obtains multi-layer multi-angle close-range images of the photographed ground feature with good accessibility and no space limitation, and the attributes of the obtained images are essentially the same as those of the aerial multi-view images. These advantages enhance the practicality of the method, solve key technical problems such as the digitisation and faithful reproduction of ancient-building models, and are of great significance for protecting and restoring ancient buildings.
As shown in fig. 2, this embodiment further provides a three-dimensional solid model building device based on full-information photogrammetry, including:
the image acquisition module is used for: the method comprises the steps of acquiring an aerial multi-view inclined image, a near-ground inner layer multi-angle near-view image and a near-ground outer layer multi-angle near-view image of a target three-dimensional entity;
and a preprocessing module: used for preprocessing the aerial multi-view oblique image, the near-ground outer-layer multi-angle close-range image and the near-ground inner-layer multi-angle close-range image; the preprocessing comprises distortion correction, brightness adjustment, contrast adjustment, saturation adjustment and association of the image data with the POS data;
and a space-three densification module: used for performing space-three (aerial triangulation) densification on the aerial multi-view oblique image, the near-ground outer-layer multi-angle close-range image and the near-ground inner-layer multi-angle close-range image;
an overall space-three densification module: used for merging the near-ground outer-layer multi-angle close-range image space-three densification result and the near-ground inner-layer multi-angle close-range image space-three densification result and performing overall space-three densification on the merged result; the overall space-three densification module is also used for merging the first overall space-three densification result with the aerial multi-view oblique image space-three densification result and performing overall space-three densification on that merged result;
a multi-view matching module: used for performing multi-view matching on the second overall space-three densification result to obtain dense point cloud data;
and a three-dimensional model building module: and the method is used for constructing and obtaining a final three-dimensional entity model based on the dense point cloud data.
Specifically, the image acquisition module is used for completing the following processes:
(1) Calculating the photographic flying height H0 based on the lens focal length, the ground resolution and the pixel size;
acquiring the actual heading overlap Px and the actual side overlap Py of the aerial photographs of the top of the target three-dimensional entity and the distance Hx from the plane of the target's highest point to the track plane, and calculating the preset heading overlap P0 and the preset side overlap P1;
and acquiring the aerial multi-view oblique images according to the photographic flying height H0, the preset heading overlap P0 and the preset side overlap P1.
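The flying-height step follows the standard ground-sample-distance relation GSD = pixel size × H / f; a sketch with illustrative numbers (the parameter values are not the patent's):

```python
def flight_height(focal_mm, pixel_size_um, gsd_cm):
    """Photographic flying height H0 (metres) for a desired ground resolution,
    from GSD = pixel_size * H / f, with units converted internally."""
    focal_m = focal_mm / 1000.0
    pixel_m = pixel_size_um / 1e6
    gsd_m = gsd_cm / 100.0
    return focal_m * gsd_m / pixel_m
```

For example, a 10 mm lens with 5 µm pixels needs to fly at 100 m for a 5 cm ground resolution.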
(2) Setting the horizontal distance H1 from the inner-layer track surface to the side surface of the target three-dimensional entity;
calculating the photographic baseline length B1 = (1 − P0)·l1·H1/f and the strip spacing D1 = (1 − P1)·w1·H1/f corresponding to the acquisition of the close-range inner-layer images;
wherein: f is the focal length of the lens, l1 is the frame length of the close-range inner-layer image, and w1 is the frame width of the close-range inner-layer image;
based on the area of the acquisition region, the photographic baseline length and the strip spacing, calculating the shooting start position to obtain the first track point positions corresponding to the acquisition of the inner-layer close-range images;
and setting a gimbal pitch angle for each flight strip in the inner-layer track plane, and acquiring the near-ground inner-layer multi-angle close-range images according to the first track point positions and the gimbal pitch angles.
(3) Setting an outer-layer track surface comprising an outer-layer vertical track surface and an outer-layer horizontal track surface, the horizontal distance from the outer-layer vertical track surface to the side surface of the target three-dimensional entity being H2 and the vertical distance from the outer-layer horizontal track surface to the highest point of the target three-dimensional entity also being H2;
calculating the photographic baseline length B2 = (1 − P0)·l2·H2/f and the strip spacing D2 = (1 − P1)·w2·H2/f corresponding to the acquisition of the close-range outer-layer images;
wherein: f is the focal length of the lens, l2 is the frame length of the close-range outer-layer image, and w2 is the frame width of the close-range outer-layer image;
based on the area of the acquisition region, the photographic baseline length and the strip spacing, calculating the shooting start position to obtain the second track point positions corresponding to the acquisition of the outer-layer close-range images;
and setting a gimbal pitch angle for each flight strip in the outer-layer track plane, and acquiring the near-ground outer-layer multi-angle close-range images according to the second track point positions and the gimbal pitch angles.
Specifically, the multi-view matching module is used for completing the following processes:
step S5.1: extracting feature points of all images, and recording feature information of the feature points;
step S5.2: completing feature-point matching under the epipolar-line constraint, and generating initial patches from the matched points by loop iteration;
step S5.3: expanding the initial patches under the constraint conditions, and generating new patches in the neighbourhood of the initial patches;
step S5.4: filtering the new patch to obtain dense point data.
Specifically, the three-dimensional model construction module is used for completing the following processes:
step S6.1: constructing an irregular triangular network according to the dense point cloud data;
step S6.2: applying surface texture to the three-dimensional white model formed by the irregular triangular network to obtain an initial three-dimensional solid model;
step S6.3: correcting the geometric distortion and displacement of the oblique images in the initial three-dimensional solid model to obtain the final three-dimensional solid model.
Although embodiments of the present application have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the application, the scope of which is defined in the appended claims and their equivalents.

Claims (10)

1. The three-dimensional entity model construction method based on full-information photogrammetry is characterized by comprising the following steps of:
step S1: acquiring an aerial multi-view inclined image, a near-ground outer layer multi-angle near-view image and a near-ground inner layer multi-angle near-view image of a target three-dimensional entity;
step S2: performing space-three (aerial triangulation) densification on the aerial multi-view oblique image, the near-ground outer-layer multi-angle close-range image and the near-ground inner-layer multi-angle close-range image respectively, to obtain an aerial multi-view oblique image space-three densification result, a near-ground outer-layer multi-angle close-range image space-three densification result and a near-ground inner-layer multi-angle close-range image space-three densification result;
step S3: merging the near-ground outer-layer multi-angle close-range image space-three densification result and the near-ground inner-layer multi-angle close-range image space-three densification result, and performing overall space-three densification on the merged result to obtain a first overall space-three densification result;
step S4: merging the first overall space-three densification result with the aerial multi-view oblique image space-three densification result, and performing overall space-three densification on the merged result to obtain a second overall space-three densification result;
step S5: performing multi-view matching on the second overall space-three densification result to obtain dense point cloud data;
step S6: and constructing and obtaining a final three-dimensional entity model based on the dense point cloud data.
2. The method for constructing a three-dimensional solid model based on full-information photogrammetry according to claim 1, wherein in the step S1, the method for acquiring the aerial multiview oblique image comprises:
based on lens focal length, ground resolution and pixel size, obtaining photographic navigation height H by calculation 0
Acquiring the actual overlapping degree P of the aerial heading of the aerial photograph of the top of the target three-dimensional entity x Degree of sideways actual overlap P y And the distance H from the plane of the highest point of the three-dimensional entity of the target to the track plane x And calculates the course preset overlap degree P 0 And a sideways preset overlap P 1
According to the shooting altitude H 0 Heading preset overlap P 0 And a sideways preset overlap P 1 And acquiring an aerial multi-view oblique image.
3. The method for constructing a three-dimensional solid model based on full-information photogrammetry according to claim 2, wherein in the step S1, the method for acquiring near-ground inner-layer multi-angle near-view images comprises:
setting the horizontal distance H from the inner track surface to the side surface of the target three-dimensional entity 1
calculating the photographic baseline length B1 = (1 − P0)·l1·H1/f and the strip spacing D1 = (1 − P1)·w1·H1/f corresponding to the acquisition of the close-range inner-layer images;
wherein: f is the focal length of the lens, l1 is the frame length of the close-range inner-layer image, and w1 is the frame width of the close-range inner-layer image;
based on the area of the acquisition region, the photographic baseline length and the strip spacing, calculating the shooting start position to obtain first track point positions corresponding to the acquisition of the inner-layer close-range images;
and setting a gimbal pitch angle for each flight strip in the inner-layer track plane, and acquiring the near-ground inner-layer multi-angle close-range images according to the first track point positions and the gimbal pitch angles.
4. The three-dimensional solid model construction method based on full-information photogrammetry according to claim 3, wherein in the step S1, the method for acquiring near-ground outer layer multi-angle near-view images comprises:
setting an outer layer track surface, wherein the outer layer track surface comprises an outer layer vertical track surface and an outer layer horizontal track surface, and setting the horizontal distance from the outer layer vertical track surface to the side surface of the target three-dimensional entity as H 2 The vertical distance between the outer layer horizontal track surface and the highest point of the target three-dimensional entity is H 2
calculating the photographic baseline length B2 = (1 − P0)·l2·H2/f and the strip spacing D2 = (1 − P1)·w2·H2/f corresponding to the acquisition of the close-range outer-layer images;
wherein: f is the focal length of the lens, l2 is the frame length of the close-range outer-layer image, and w2 is the frame width of the close-range outer-layer image;
based on the area of the acquisition region, the photographic baseline length and the strip spacing, calculating the shooting start position to obtain second track point positions corresponding to the acquisition of the outer-layer close-range images;
and setting a gimbal pitch angle for each flight strip in the outer-layer track plane, and acquiring the near-ground outer-layer multi-angle close-range images according to the second track point positions and the gimbal pitch angles.
5. The three-dimensional solid model construction method based on full-information photogrammetry according to claim 4, wherein (1/5)·H0 ≤ H2 ≤ 5·H1.
6. The three-dimensional solid model construction method based on full information photogrammetry according to claim 1, wherein the step S5 includes:
step S5.1: extracting feature points of all images, and recording feature information of the feature points;
step S5.2: completing feature-point matching under the epipolar-line constraint, and generating initial patches from the matched points by loop iteration;
step S5.3: expanding the initial patches under the constraint conditions, and generating new patches in the neighbourhood of the initial patches;
step S5.4: filtering the new patch to obtain dense point data.
7. The three-dimensional solid model construction method based on full information photogrammetry according to claim 1, wherein the step S6 includes:
step S6.1: constructing an irregular triangular network according to the dense point cloud data;
step S6.2: applying surface texture to the three-dimensional white model formed by the irregular triangular network to obtain an initial three-dimensional solid model;
step S6.3: correcting the geometric distortion and displacement of the oblique images in the initial three-dimensional solid model to obtain a final three-dimensional solid model.
8. The three-dimensional solid model construction method based on full-information photogrammetry according to claim 1, further comprising, before step S2: preprocessing the aerial multi-view oblique image, the near-ground inner-layer multi-angle close-range image and the near-ground outer-layer multi-angle close-range image, wherein the preprocessing comprises distortion correction, brightness adjustment, contrast adjustment, saturation adjustment and association of the image data with the POS data.
9. Three-dimensional solid model construction device based on full information photogrammetry, for implementing the three-dimensional solid model construction method based on full information photogrammetry according to any one of claims 1 to 8, comprising:
the image acquisition module is used for: the method comprises the steps of acquiring an aerial multi-view inclined image, a ground-near outer layer multi-angle close-range image and a ground-near inner layer multi-angle close-range image of a target three-dimensional entity;
and a space-three densification module: used for performing space-three (aerial triangulation) densification on the aerial multi-view oblique image, the near-ground outer-layer multi-angle close-range image and the near-ground inner-layer multi-angle close-range image;
an overall space-three densification module: used for merging the near-ground outer-layer multi-angle close-range image space-three densification result and the near-ground inner-layer multi-angle close-range image space-three densification result and performing overall space-three densification on the merged result; the overall space-three densification module is also used for merging the first overall space-three densification result with the aerial multi-view oblique image space-three densification result and performing overall space-three densification on that merged result;
a multi-view matching module: used for performing multi-view matching on the second overall space-three densification result to obtain dense point cloud data;
and a three-dimensional model building module: and the method is used for constructing and obtaining a final three-dimensional entity model based on the dense point cloud data.
10. The three-dimensional solid model construction device based on full-information photogrammetry according to claim 9, further comprising a preprocessing module for preprocessing the aerial multi-view oblique image, the near-ground outer multi-angle near-view image, and the near-ground inner multi-angle near-view image.
CN202310775920.XA 2023-06-28 2023-06-28 Three-dimensional entity model construction method and device based on full-information photogrammetry Pending CN116824079A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310775920.XA CN116824079A (en) 2023-06-28 2023-06-28 Three-dimensional entity model construction method and device based on full-information photogrammetry


Publications (1)

Publication Number Publication Date
CN116824079A true CN116824079A (en) 2023-09-29


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117392317A (en) * 2023-10-19 2024-01-12 北京市测绘设计研究院 Live three-dimensional modeling method, device, computer equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination