CN113496550B - DSM calculation method and device, computer equipment and storage medium - Google Patents
- Publication number
- CN113496550B (application number CN202010190845.7A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- cloud data
- dsm
- scene
- vertex
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects; G06T17/05—Geographic models
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects; G06T17/20—Finite element generation, e.g. wire-frame surface description, tessellation
Abstract
The embodiment of the invention discloses a DSM calculation method and device, computer equipment and a storage medium. The method comprises the following steps: determining the scene type of a target scene according to the depth information of each point cloud data point in the sparse point cloud data of the target scene; if the scene type is a low-fluctuation scene, generating a DSM matched with the sparse point cloud data; and if the scene type is a high-fluctuation scene, generating dense point cloud data according to the sparse point cloud data and generating a DSM matched with the dense point cloud data. The technical scheme of the embodiment of the invention can improve the calculation efficiency and calculation performance of the DSM calculation process.
Description
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to a DSM (Digital Surface Model) calculation method, a DSM calculation device, computer equipment and a storage medium.
Background
A DSM is a ground elevation model that includes the heights of surface features such as buildings, bridges and trees, and thus expresses the undulation of the true earth surface. Because it can faithfully express surface relief, the DSM has wide applications in national economy and national defense construction fields such as surveying and mapping, hydrology, meteorology, geomorphology, geology, soil science, engineering construction, communications and military affairs, as well as in the human and natural sciences.
There are many ways to compute a DSM. The most common method is to digitize an existing topographic map to obtain raw data and use it to construct an irregular triangular mesh from which the DSM is built, or to build the DSM directly by interpolation.
In the process of implementing the invention, the inventor found that the prior art has the following defect: the conventional DSM calculation method is time-consuming over the whole calculation process and applies a uniform calculation mode to all scenes, so its calculation efficiency and calculation performance are low.
Disclosure of Invention
The embodiment of the invention provides a DSM calculation method, a DSM calculation device, computer equipment and a storage medium, which are used for improving the calculation efficiency and the calculation performance of a DSM calculation flow.
In a first aspect, an embodiment of the present invention provides a DSM calculating method, including:
determining the scene type of a target scene according to the depth information of each point cloud data point in the sparse point cloud data of the target scene;
if the scene type is a low-fluctuation scene, generating a DSM matched with the sparse point cloud data;
and if the scene type is a high-fluctuation scene, generating dense point cloud data according to the sparse point cloud data, and generating a DSM matched with the dense point cloud data.
Optionally, determining the scene type of the target scene according to the depth information of each point cloud data point in the sparse point cloud data of the target scene includes:
determining calibration depth information according to the depth information of each point cloud data point in the sparse point cloud data;
calculating a depth difference mean value matched with the sparse point cloud data according to the depth information of each point cloud data point and the calibration depth information, and calculating a depth variance according to the depth difference mean value;
and determining the scene type of the target scene to be a low-fluctuation scene or a high-fluctuation scene according to the depth variance and a preset threshold condition.
Optionally, determining the scene type of the target scene as a low-fluctuation scene or a high-fluctuation scene according to the depth variance and a preset threshold condition, including:
calculating a target threshold according to the depth difference mean value;
if the depth variance is smaller than or equal to the target threshold, determining that the scene type of the target scene is a low-fluctuation scene;
if the depth variance is greater than the target threshold, determining that the scene type of the target scene is a high-fluctuation scene.
Optionally, generating a DSM matched with the sparse point cloud data includes:
constructing a first Delaunay triangulation network according to the sparse point cloud data;
and generating the DSM matched with the sparse point cloud data according to a preset DSM calculation formula and the first Delaunay triangulation network.
Optionally, generating dense point cloud data according to the sparse point cloud data, and generating a DSM matched with the dense point cloud data, includes:
generating dense point cloud data according to the sparse point cloud data and a dense point cloud computing method;
constructing a second Delaunay triangulation network according to the dense point cloud data;
and generating the DSM matched with the dense point cloud data according to a preset DSM calculation formula and the second Delaunay triangulation network.
Optionally, the preset DSM calculation formula includes:
normal = (vertex_1 − vertex_0) ^ (vertex_2 − vertex_0)
k = −(vertex_0(x)*normal(x) + vertex_1(y)*normal(y) + vertex_2(z)*normal(z))
dsm_z = −(normal(x)*dsm_x + normal(y)*dsm_y + k) / normal(z)
wherein vertex_0, vertex_1 and vertex_2 are the three vertices of a triangular mesh in the Delaunay triangulation network; normal is the cross product formed from the three vertices; vertex_0(x) is the x value of vertex_0, vertex_1(y) is the y value of vertex_1, and vertex_2(z) is the z value of vertex_2; k represents the weighted sum of the three vertices in the different directions (x, y, z); normal(x), normal(y) and normal(z) are the x, y and z values of the cross-product result point; dsm_z is the z value of the DSM, dsm_x is the x value of the DSM, and dsm_y is the y value of the DSM.
Optionally, after generating the DSM matching with the sparse point cloud data, the method further includes:
performing texture mapping and color correction processing on the first Delaunay triangulation network;
performing forward projection calculation according to the processed image grid to obtain a DOM of the target scene;
after generating the DSM that matches the dense point cloud data, further comprising:
recalculating the point cloud density according to a preset point cloud density calculation formula and the dense point cloud data;
carrying out clustering processing on the dense point cloud data according to the point cloud density to obtain simplified dense point cloud data;
constructing a third Delaunay triangulation network according to the simplified dense point cloud data;
performing texture mapping and color correction processing on the third Delaunay triangulation network;
and performing forward projection calculation according to the processed image grid to obtain the DOM of the target scene.
Optionally, the preset point cloud density calculation formula is as follows:
wherein pointcloud_density is the point cloud density, (min_x, max_x) and (min_y, max_y) are the x and y ranges of the dense point cloud data, N_d is the number of points of the dense point cloud data, and T is a constant.
In a second aspect, an embodiment of the present invention further provides a computing apparatus for DSM, including:
the scene type determining module is used for determining the scene type of the target scene according to the depth information of each point cloud data point in the sparse point cloud data of the target scene;
the first DSM generation module is used for generating DSMs matched with the sparse point cloud data if the scene type is a low-fluctuation scene;
and the second DSM generation module is used for generating dense point cloud data according to the sparse point cloud data and generating a DSM matched with the dense point cloud data if the scene type is a high-fluctuation scene.
In a third aspect, an embodiment of the present invention further provides a computer device, where the computer device includes:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the DSM calculation method provided by any of the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the DSM calculating method provided by any of the embodiments of the present invention.
According to the method and the device, the scene type of the target scene is determined according to the depth information of each point cloud data point in the sparse point cloud data of the target scene; a DSM matched with the sparse point cloud data is generated when the scene type is determined to be a low-fluctuation scene; and dense point cloud data is generated according to the sparse point cloud data, and a DSM matched with the dense point cloud data is generated, when the scene type is determined to be a high-fluctuation scene. This solves the problems of low calculation efficiency and low calculation performance of the existing DSM calculation method, and improves the calculation efficiency and calculation performance of the DSM calculation process.
Drawings
FIG. 1 is a flow chart of a method for computing DSMs according to an embodiment of the invention;
fig. 2 is a flowchart of a DSM calculation method according to the second embodiment of the present invention;
fig. 3 is a flowchart of a DSM calculating method according to a third embodiment of the present invention;
FIG. 4 is a diagram of a computing device for DSMs according to a fourth embodiment of the invention;
fig. 5 is a schematic structural diagram of a computer device according to a fifth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention.
It should be further noted that, for the convenience of description, only some but not all of the relevant aspects of the present invention are shown in the drawings. Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
The terms "first" and "second," and the like in the description and claims of embodiments of the invention and in the drawings, are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "comprising" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements, but may include steps or elements not listed.
Example one
Fig. 1 is a flowchart of a method for computing a DSM according to an embodiment of the present invention, where the embodiment is applicable to a case of computing the DSM efficiently, and the method may be executed by a computing apparatus of the DSM, and the apparatus may be implemented by software and/or hardware, and may be generally integrated in a computer device. Accordingly, as shown in fig. 1, the method comprises the following operations:
s110, determining the scene type of the target scene according to the depth information of each point cloud data point in the sparse point cloud data of the target scene.
The target scene may be acquired by the unmanned aerial vehicle and needs to generate DSM data, such as a farmland scene or a mountain scene, and the embodiment of the present invention does not limit the specific scene type of the target scene. The depth information may be a depth value of the point cloud data point, i.e., z-axis coordinate data.
To enable processing of a particular scene, a scene type of the target scene may first be identified. In the embodiment of the invention, the scene type of the target scene can be determined according to the depth information of each point cloud data point in the sparse point cloud data of the target scene. The sparse point cloud data may be acquired by an SFM (Structure From Motion) three-dimensional reconstruction method.
And S120, if the scene type is a low-fluctuation scene, generating a DSM matched with the sparse point cloud data.
The low-fluctuation scene can be a scene with little surface fluctuation, such as a farmland scene or a plain scene, and the embodiment of the invention does not limit the specific scene type of the low-fluctuation scene.
It should be noted that the low-relief scene has little relief, and therefore the height difference of the object in the corresponding captured image is not large. For such scene types, the accuracy requirements for DSM are not high, so that DSM need not be calculated from dense point cloud data.
Correspondingly, if the scene type of the target scene is determined to be a low-fluctuation scene, the DSM can be generated directly from the sparse point cloud data of the target scene, omitting the dense point cloud calculation stage, thereby reducing the time consumption of the DSM calculation and improving the calculation efficiency and calculation performance of the DSM calculation process.
And S130, if the scene type is a high-fluctuation scene, generating dense point cloud data according to the sparse point cloud data, and generating a DSM (digital surface model) matched with the dense point cloud data.
The high-fluctuation scene may be a scene with large surface fluctuation, such as a mountain scene or a fruit tree scene, and the specific scene type of the high-fluctuation scene is not limited in the embodiment of the present invention.
It should be noted that the high-relief scene has a large surface relief, and therefore the height difference of the object in the corresponding captured image is also large. For this type of scene, the accuracy requirements for DSM are high, and therefore DSM needs to be calculated from the dense point cloud data.
Correspondingly, if the scene type of the target scene is determined to be a high-fluctuation scene, dense point cloud data needs to be generated according to the sparse point cloud data, and the DSM is then generated according to the dense point cloud data. The DSM calculation process for a high-fluctuation scene includes a dense point cloud calculation stage so as to meet the high-precision requirement of the DSM.
Therefore, the calculation method of the DSM provided by the embodiment of the invention can distinguish and process different types of scenes, so that the DSM can be quickly calculated, and the calculation efficiency and the calculation performance of the DSM calculation flow can be improved.
According to the method and the device, the scene type of the target scene is determined according to the depth information of each point cloud data point in the sparse point cloud data of the target scene; a DSM matched with the sparse point cloud data is generated when the scene type is determined to be a low-fluctuation scene; and dense point cloud data is generated according to the sparse point cloud data, and a DSM matched with the dense point cloud data is generated, when the scene type is determined to be a high-fluctuation scene. This solves the problems of low calculation efficiency and low calculation performance of the existing DSM calculation method, and improves the calculation efficiency and calculation performance of the DSM calculation process.
Example two
Fig. 2 is a flowchart of a DSM calculating method according to a second embodiment of the present invention, which is embodied based on the second embodiment of the present invention, and in the present embodiment, a specific implementation manner of determining a scene type of a target scene according to depth information of each point cloud data point in sparse point cloud data of the target scene and calculating a DSM under different scene types is provided. Correspondingly, as shown in fig. 2, the method of the present embodiment may include:
s210, determining calibration depth information according to the depth information of each point cloud data point in the sparse point cloud data.
The calibrated depth information may be depth information of one point cloud data point in the sparse point cloud data. Optionally, the depth value with the smallest depth value in the cloud data points of each point may be used as the calibrated depth information.
When the calibrated depth information is determined, the depth information z_i of each point cloud data point in the sparse point cloud data can be compared, and the depth information with the minimum value is determined as the calibrated depth information z_min.
S220, calculating a depth difference mean value matched with the sparse point cloud data according to the depth information of each point cloud data point and the calibration depth information, and calculating a depth variance according to the depth difference mean value.
Specifically, the difference |z_i − z_min| between the depth information z_i of each point cloud data point and the calibrated depth information z_min can be calculated, and the depth difference mean z_avr matched with the sparse point cloud data can be calculated from these differences. Further, the depth variance is calculated based on the depth difference mean z_avr.
And S230, determining the scene type of the target scene to be a low-fluctuation scene or a high-fluctuation scene according to the depth variance and a preset threshold condition.
The preset threshold condition may be a preset condition for determining the depth variance, and the specific condition content may be set according to an actual requirement, which is not limited in the embodiment of the present invention.
Correspondingly, the scene type of the target scene can be judged according to the calculated depth variance and a preset threshold condition.
In an optional embodiment of the present invention, determining the scene type of the target scene as a low-fluctuation scene or a high-fluctuation scene according to the depth variance and a preset threshold condition may include: calculating a target threshold according to the depth difference mean value; if the depth variance is smaller than or equal to the target threshold, determining that the scene type of the target scene is a low-fluctuation scene; if the depth variance is greater than the target threshold, determining that the scene type of the target scene is a high-fluctuation scene.
Specifically, the target threshold may be set based on the depth difference mean z_avr. For example, the target threshold may be threshold = M × z_avr, where M is a constant and may be set to 10 or the like; the embodiment of the present invention does not limit the specific value of M. Further, if the depth variance is less than or equal to the target threshold, the scene type of the target scene may be determined to be a low-fluctuation scene. If the depth variance is greater than the target threshold, the scene type of the target scene may be determined to be a high-fluctuation scene.
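The scene-classification steps S210 to S230, together with the target-threshold rule above, can be sketched as follows. The exact forms of the mean and variance are not reproduced in the source text, so the sketch assumes the standard definitions over the absolute depth differences, and M = 10 follows the example value in the text:

```python
import numpy as np

def classify_scene(depths, M=10.0):
    """Classify a scene as low- or high-fluctuation from sparse point cloud depths.

    depths : iterable of z (depth) values of the sparse point cloud points.
    M      : scale constant for the target threshold (example value from the text).
    Returns "low" or "high".
    """
    z = np.asarray(depths, dtype=float)
    z_min = z.min()                            # calibrated depth information z_min
    diffs = np.abs(z - z_min)                  # per-point depth differences |z_i - z_min|
    z_avr = diffs.mean()                       # depth difference mean
    variance = np.mean((diffs - z_avr) ** 2)   # depth variance (assumed standard form)
    threshold = M * z_avr                      # target threshold = M * z_avr
    return "low" if variance <= threshold else "high"
```

A nearly flat field classifies as low-fluctuation, while depths spanning a large range (e.g. a mountain survey) exceed the threshold and classify as high-fluctuation.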
It should be noted that, in addition to the above method for determining the scene type of the target scene according to the depth variance and the preset threshold condition, the scene type of the target scene may also be determined according to a scene-type selection instruction triggered by the user when calculating the DSM of the target scene.
S240, judging whether the scene type of the target scene is a low-fluctuation scene; if so, executing S250; otherwise, executing S270.
And S250, constructing a first Delaunay triangulation according to the sparse point cloud data.
And S260, generating the DSM matched with the sparse point cloud data according to a preset DSM calculation formula and the first Delaunay triangulation network.
Correspondingly, if the scene type of the target scene is a low-fluctuation scene, a first Delaunay triangulation network, namely a 2.5D mesh, can be constructed according to the sparse point cloud data of the target scene. And then generating the DSM matched with the sparse point cloud data according to a preset DSM calculation formula and the first Delaunay triangulation network.
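Building the 2.5D mesh described above can be sketched with SciPy's Delaunay triangulation over the XY coordinates, keeping z as a per-vertex attribute. The point values below are purely illustrative:

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical sparse point cloud: N x 3 array of (x, y, z)
points = np.array([
    [0.0, 0.0, 1.0],
    [1.0, 0.0, 1.2],
    [0.0, 1.0, 0.9],
    [1.0, 1.0, 1.1],
    [0.5, 0.5, 1.5],
])

# A 2.5D mesh: triangulate in the XY plane only; z stays attached to each vertex
tri = Delaunay(points[:, :2])
print(tri.simplices)  # vertex indices of each triangle in the mesh
```

Each row of `tri.simplices` names one triangle of the mesh; the DSM is then obtained by interpolating z over these triangles, as the formula in the next section describes.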
And S270, generating the dense point cloud data according to the sparse point cloud data and a dense point cloud computing method.
And S280, constructing a second Delaunay triangulation according to the dense point cloud data.
And S290, generating the DSM matched with the dense point cloud data according to a preset DSM calculation formula and the second Delaunay triangulation network.
Optionally, the dense point cloud computing method may be an MVS (Multi View Stereo) algorithm, and the like, and the embodiment of the present invention does not limit the specific method type of the dense point cloud computing method.
Correspondingly, if the scene type of the target scene is a high-fluctuation scene, dense point cloud data can be generated according to the sparse point cloud data and a dense point cloud calculation method, and then a second Delaunay triangulation network, namely a 2.5D mesh, is constructed according to the calculated dense point cloud data. Finally, the DSM matched with the dense point cloud data is generated according to a preset DSM calculation formula and the second Delaunay triangulation network.
In an alternative embodiment of the present invention, assume that each vertex of the Delaunay triangulation network is vertex_i = (x, y, z); the preset DSM calculation formula may include:
normal = (vertex_1 − vertex_0) ^ (vertex_2 − vertex_0)
k = −(vertex_0(x)*normal(x) + vertex_1(y)*normal(y) + vertex_2(z)*normal(z))
dsm_z = −(normal(x)*dsm_x + normal(y)*dsm_y + k) / normal(z)
wherein vertex_0, vertex_1 and vertex_2 are the three vertices of a triangular mesh in the Delaunay triangulation network; normal is the cross product formed from the three vertices; vertex_0(x) is the x value of vertex_0, vertex_1(y) is the y value of vertex_1, and vertex_2(z) is the z value of vertex_2; k represents the weighted sum of the three vertices in the different directions (x, y, z); normal(x), normal(y) and normal(z) are the x, y and z values of the cross-product result point; dsm_z is the z value of the DSM, dsm_x is the x value of the DSM, and dsm_y is the y value of the DSM.
The meaning of the above formula is: after the Delaunay triangulation network (the first Delaunay triangulation network or the second Delaunay triangulation network) is obtained, triangulation interpolation is performed on the Delaunay triangulation network to obtain the DSM.
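The triangle interpolation above amounts to solving the plane equation of each triangle for z. Note that the patent's expression for k mixes coordinates from different vertices; for a plane passing through all three vertices, the offset is k = −normal·vertex_0 taken at a single vertex, which is the interpretation this sketch uses rather than the verbatim formula:

```python
import numpy as np

def dsm_z_on_triangle(v0, v1, v2, dsm_x, dsm_y):
    """Interpolate the DSM height at (dsm_x, dsm_y) from one triangle's plane.

    v0, v1, v2 : (x, y, z) vertices of a Delaunay triangle.
    The cross product of two edge vectors gives the face normal,
    k is the plane offset, and z is solved from normal . (x, y, z) + k = 0.
    """
    v0, v1, v2 = (np.asarray(v, dtype=float) for v in (v0, v1, v2))
    normal = np.cross(v1 - v0, v2 - v0)        # face normal of the triangle
    k = -np.dot(normal, v0)                    # offset so that normal . v0 + k = 0
    return -(normal[0] * dsm_x + normal[1] * dsm_y + k) / normal[2]
```

Evaluating this for every DSM raster cell whose (x, y) falls inside the triangle fills in the interpolated heights; a vertical triangle (normal(z) = 0) has no unique z and would need special handling in a full implementation.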
By adopting the technical scheme, the calibration depth information is determined according to the depth information of each point cloud data point in the sparse point cloud data, the mean value of the depth difference value is calculated according to the depth information of each point cloud data point and the calibration depth information, the depth variance is calculated according to the mean value of the depth difference value, and finally the scene type of the target scene is determined according to the depth variance and the preset threshold condition, so that the scene type of the target scene can be automatically calculated according to the data characteristics of the sparse point cloud data.
EXAMPLE III
Fig. 3 is a flowchart of a method for calculating a DSM according to a third embodiment of the present invention, which is embodied based on the above embodiments. In this embodiment, a specific implementation manner of acquiring a DOM (Digital Orthophoto Map) of the target scene after generating the DSM is provided. Accordingly, as shown in fig. 3, the method of the present embodiment may include:
s310, determining the scene type of the target scene according to the depth information of each point cloud data point in the sparse point cloud data of the target scene.
S320, judging whether the scene type of the target scene is a low-fluctuation scene; if so, executing S330; otherwise, executing S360.
S330, constructing a first Delaunay triangulation according to the sparse point cloud data.
And S340, generating a DSM matched with the sparse point cloud data according to a preset DSM calculation formula and the first Delaunay triangulation network.
And S350, performing texture mapping and color correction processing on the first Delaunay triangulation network, and performing forward projection calculation according to the processed image mesh to obtain a DOM of the target scene.
For a target scene of a low-fluctuation scene, texture mapping and color correction processing can be directly performed on the first Delaunay triangulation network. Texture mapping refers to the process of mapping texels in texture space to pixels in an image, simply by tiling the image onto a 3D or 2.5D object. And performing forward projection calculation according to the processed image grid attached with the image texture, thereby obtaining the DOM of the target scene.
And S360, generating the dense point cloud data according to the sparse point cloud data and a dense point cloud computing method.
And S370, constructing a second Delaunay triangulation network according to the dense point cloud data.
And S380, generating the DSM matched with the dense point cloud data according to a preset DSM calculation formula and the second Delaunay triangulation network.
And S390, recalculating the point cloud density according to a preset point cloud density calculation formula and the dense point cloud data.
And S3A0, clustering the dense point cloud data according to the point cloud density to obtain simplified dense point cloud data.
In an optional embodiment of the present invention, the predetermined point cloud density calculation formula may be:
wherein pointcloud_density is the point cloud density, (min_x, max_x) and (min_y, max_y) are the x and y ranges of the dense point cloud data, N_d is the number of points of the dense point cloud data, and T is a constant. Optionally, T may be set to 10; the specific value of T is not limited in the embodiment of the present invention.
For a target scene that is a high-fluctuation scene, in order to reduce the calculation amount, and hence the calculation time, of the DOM, the point cloud density can be recalculated according to a preset point cloud density calculation formula and the dense point cloud data, and the dense point cloud data can then be clustered according to the point cloud density to obtain simplified dense point cloud data. The clustering of dense point cloud data belongs to the prior art, and the embodiment of the invention does not describe the process in detail. The point cloud density corresponding to the simplified dense point cloud data is lower than that of the original dense point cloud data, but the simplified dense point cloud data can still represent a three-dimensional model of the whole target scene.
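A minimal sketch of the density-driven thinning in S390 and S3A0 follows. The patent's density formula is rendered only as an image in the source, so the sketch assumes density = N_d / area; the grid-cell clustering with one centroid per cell is one plausible clustering scheme, not necessarily the patent's exact method, and the role assumed for T is sizing cells to hold roughly T points each:

```python
import numpy as np

def simplify_dense_cloud(points, T=10.0):
    """Thin a dense point cloud by clustering points into XY grid cells.

    points : N x 3 array of (x, y, z) dense point cloud data.
    T      : constant from the density formula (assumed role: target
             number of points per cell).
    Keeps one representative point (the centroid) per occupied cell.
    """
    points = np.asarray(points, dtype=float)
    min_xy = points[:, :2].min(axis=0)
    max_xy = points[:, :2].max(axis=0)
    area = np.prod(max_xy - min_xy)
    density = len(points) / area               # assumed: points per unit area
    cell = np.sqrt(T / density)                # cell edge holding ~T points
    keys = np.floor((points[:, :2] - min_xy) / cell).astype(int)
    cells = {}
    for key, p in zip(map(tuple, keys), points):
        cells.setdefault(key, []).append(p)
    return np.array([np.mean(ps, axis=0) for ps in cells.values()])
```

The simplified cloud has far fewer points, so the third Delaunay triangulation network built from it has fewer triangles, which is what shortens the texture-mapping and color-correction stages.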
And S3B0, constructing a third Delaunay triangulation network according to the simplified dense point cloud data, performing texture mapping and color correction processing on the third Delaunay triangulation network, and performing forward projection calculation according to the processed image mesh to obtain a DOM of the target scene.
Correspondingly, after the simplified dense point cloud data is obtained, a third Delaunay triangulation network can be constructed according to the simplified dense point cloud data, texture mapping and color correction processing are performed on the third Delaunay triangulation network, and forward projection calculation is performed according to the processed, textured image grid to obtain the DOM of the target scene.
In the above technical scheme, the data volume of the dense point cloud data is reduced by simplification, i.e., the number of triangular faces of the 2.5D mesh is reduced, so that the time consumed by texture mapping and color correction is reduced, and the DOM calculation time is reduced accordingly.
By adopting the above scheme, for a target scene of the low-fluctuation type, texture mapping and color correction processing are performed directly on the first Delaunay triangulation network, and forward projection calculation is performed according to the processed image grid to obtain the DOM; for a target scene of the high-fluctuation type, the point cloud density is recalculated according to a preset point cloud density calculation formula and the dense point cloud data, the dense point cloud data is clustered according to the point cloud density to obtain simplified dense point cloud data, a third Delaunay triangulation network is constructed according to the simplified dense point cloud data, texture mapping and color correction processing are performed on the third Delaunay triangulation network, and forward projection calculation is performed according to the processed image grid to obtain the DOM. Different types of scenes are thus processed differently, so that the DOM is calculated quickly and the calculation efficiency and performance of the DOM calculation process are improved.
It should be noted that any permutation and combination between the technical features in the above embodiments also belong to the scope of the present invention.
Example four
Fig. 4 is a schematic diagram of a DSM computing apparatus according to a fourth embodiment of the present invention, where as shown in fig. 4, the DSM computing apparatus includes: a scene type determination module 410, a first DSM generation module 420, and a second DSM generation module 430, wherein:
a scene type determining module 410, configured to determine a scene type of a target scene according to depth information of each point cloud data point in sparse point cloud data of the target scene;
a first DSM generating module 420, configured to generate a DSM matching the sparse point cloud data if the scene type is a low-fluctuation scene;
and a second DSM generating module 430, configured to generate dense point cloud data from the sparse point cloud data and generate a DSM matching the dense point cloud data if the scene type is a high-fluctuation scene.
According to the above arrangement, the scene type of the target scene is determined according to the depth information of each point cloud data point in the sparse point cloud data of the target scene; a DSM matching the sparse point cloud data is generated when the scene type is determined to be a low-fluctuation scene; and dense point cloud data is generated from the sparse point cloud data and a DSM matching the dense point cloud data is generated when the scene type is determined to be a high-fluctuation scene.
Optionally, the scene type determining module 410 includes: a calibration depth information determining unit, configured to determine calibration depth information according to the depth information of each point cloud data point in the sparse point cloud data; a depth variance calculating unit, configured to calculate a depth difference mean value matching the sparse point cloud data according to the depth information of each point cloud data point and the calibration depth information, and to calculate a depth variance according to the depth difference mean value; and a scene type determining unit, configured to determine the scene type of the target scene to be a low-fluctuation scene or a high-fluctuation scene according to the depth variance and a preset threshold condition.
Optionally, the scene type determining unit is specifically configured to: calculate a target threshold according to the depth difference mean value; determine that the scene type of the target scene is a low-fluctuation scene if the depth variance is smaller than or equal to the target threshold; and determine that the scene type of the target scene is a high-fluctuation scene if the depth variance is greater than the target threshold.
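The scene type determination above can be sketched in a few lines. The patent does not fix the formulas for the calibration depth or for deriving the target threshold from the depth difference mean, so the choices below (mean depth as calibration depth, threshold equal to k times the mean absolute depth difference) are illustrative assumptions, as are the function and parameter names.

```python
import numpy as np

def classify_scene(depths: np.ndarray, k: float = 1.0) -> str:
    """Classify a scene as low- or high-fluctuation from per-point depths.
    Calibration depth = mean depth and threshold = k * depth-difference mean
    are assumptions; the patent leaves these formulas unspecified."""
    calib = depths.mean()                      # calibration depth information
    diff_mean = np.abs(depths - calib).mean()  # depth difference mean value
    variance = ((depths - calib) ** 2).mean()  # depth variance
    threshold = k * diff_mean                  # target threshold from the mean
    return "low-fluctuation" if variance <= threshold else "high-fluctuation"
```

A flat field of depths yields zero variance and a low-fluctuation classification, while strongly alternating depths exceed the threshold and are classified as high-fluctuation.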
Optionally, the first DSM generating module 420 includes: the first Delaunay triangulation network construction unit is used for constructing a first Delaunay triangulation network according to the sparse point cloud data; and the first DSM generating unit is used for generating DSMs matched with the sparse point cloud data according to a preset DSM calculation formula and the first Delaunay triangulation network.
Optionally, the second DSM generating module 430 includes: a dense point cloud data generation unit for generating the dense point cloud data according to the sparse point cloud data and a dense point cloud calculation method; the second Delaunay triangulation network construction unit is used for constructing a second Delaunay triangulation network according to the dense point cloud data; and the second DSM generating unit is used for generating DSMs matched with the dense point cloud data according to a preset DSM calculation formula and the second Delaunay triangulation network.
Optionally, the preset DSM calculation formula includes:
normal_{x,y,z} = (vertex_1 - vertex_0) ^ (vertex_2 - vertex_0)
k = -(vertex_0(x)*normal(x) + vertex_1(y)*normal(y) + vertex_2(z)*normal(z))
dsm_z = -(normal(x)*dsm_x + normal(y)*dsm_y + k)/normal(z)
wherein vertex_0, vertex_1 and vertex_2 are the three vertices of a triangular mesh in the Delaunay triangulation network; normal_{x,y,z} is the cross product of the two edge vectors (vertex_1 - vertex_0) and (vertex_2 - vertex_0); vertex_0(x) is the x value of vertex_0, vertex_1(y) is the y value of vertex_1, and vertex_2(z) is the z value of vertex_2; k represents the sum of the weights of the three vertices in the different directions (x, y, z); normal(x), normal(y) and normal(z) are the x, y and z values of the cross-product result; dsm_z is the z value of the DSM, dsm_x is the x value of the DSM, and dsm_y is the y value of the DSM.
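As a concrete illustration, the preset DSM calculation formula above can be evaluated for one triangular mesh as in the sketch below. The function and parameter names are illustrative, not from the patent; the body implements the three printed formulas verbatim, including the k term that mixes components of the three different vertices exactly as written.

```python
import numpy as np

def dsm_z(vertex0, vertex1, vertex2, dsm_x, dsm_y):
    """Evaluate the DSM height at (dsm_x, dsm_y) from one triangle, using
    the patent's formulas as printed. Note that k combines vertex0's x,
    vertex1's y and vertex2's z, exactly as the formula is written."""
    v0, v1, v2 = map(np.asarray, (vertex0, vertex1, vertex2))
    normal = np.cross(v1 - v0, v2 - v0)  # normal_{x,y,z}: edge cross product
    k = -(v0[0] * normal[0] + v1[1] * normal[1] + v2[2] * normal[2])
    return -(normal[0] * dsm_x + normal[1] * dsm_y + k) / normal[2]
```

For example, for the triangle (0, 0, 0), (1, 0, 1), (0, 1, 0), which spans the plane z = x, the sketch returns dsm_z equal to dsm_x.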
Optionally, the computing device of the DSM further comprises: the first data processing module is used for carrying out texture mapping and color correction processing on the first Delaunay triangulation network; the first DOM generation module is used for carrying out forward projection calculation according to the processed image grid to obtain a DOM of the target scene; the point cloud density calculating module is used for recalculating the point cloud density according to a preset point cloud density calculation formula and the dense point cloud data; the dense point cloud data generation module is used for clustering the dense point cloud data according to the point cloud density to obtain simplified dense point cloud data; the third Delaunay triangulation network construction unit is used for constructing a third Delaunay triangulation network according to the simplified dense point cloud data; the second data processing module is used for performing texture mapping and color correction processing on the third Delaunay triangulation network; and the second DOM generation module is used for carrying out forward projection calculation according to the processed image grid to obtain the DOM of the target scene.
Optionally, the preset point cloud density calculation formula is as follows:
wherein pointcloud_dense is the point cloud density, (min_x, max_x) and (min_y, max_y) are the x and y ranges of the dense point cloud data, N_d is the number of points of the dense point cloud data, and T is a constant.
The computing device of the DSM can execute the computing method of the DSM provided by any embodiment of the present invention, and has functional modules and beneficial effects corresponding to the executed method. For details of the DSM calculation method that are not described in the present embodiment, reference may be made to any embodiment of the present invention.
Since the computing device of the DSM described above is a device that can execute the computing method of the DSM in the embodiment of the present invention, a person skilled in the art can understand the specific implementation of the computing device of the DSM of the present embodiment and various modifications thereof based on the computing method of the DSM described in the embodiment of the present invention, and therefore, how to implement the computing method of the DSM in the embodiment of the present invention by the computing device of the DSM is not described in detail herein. The apparatus used by those skilled in the art to implement the DSM calculation method in the embodiments of the present invention is within the scope of the present application.
EXAMPLE five
Fig. 5 is a schematic structural diagram of a computer device according to a fifth embodiment of the present invention. FIG. 5 illustrates a block diagram of a computer device 512 suitable for use in implementing embodiments of the present invention. The computer device 512 shown in FIG. 5 is only an example and should not bring any limitations to the functionality or scope of use of embodiments of the present invention.
As shown in fig. 5, computer device 512 is in the form of a general purpose computing device. Components of computer device 512 may include, but are not limited to: one or more processors 516, a storage device 528, and a bus 518 that couples the various system components including the storage device 528 and the processors 516.
The processor 516 executes various functional applications and data processing, for example, a calculation method of the DSM provided by the above-described embodiment of the present invention by running a program stored in the storage device 528.
That is, the processing unit implements, when executing the program: determining the scene type of a target scene according to the depth information of each point cloud data point in the sparse point cloud data of the target scene; if the scene type is a low-fluctuation scene, generating a DSM matching the sparse point cloud data; and if the scene type is a high-fluctuation scene, generating dense point cloud data according to the sparse point cloud data, and generating a DSM matching the dense point cloud data.
EXAMPLE six
A sixth embodiment of the present invention further provides a computer storage medium storing a computer program which, when executed by a computer processor, is configured to perform the DSM calculation method described in any of the above embodiments of the present invention: determining the scene type of a target scene according to the depth information of each point cloud data point in the sparse point cloud data of the target scene; if the scene type is a low-fluctuation scene, generating a DSM matching the sparse point cloud data; and if the scene type is a high-fluctuation scene, generating dense point cloud data according to the sparse point cloud data and generating a DSM (digital surface model) matching the dense point cloud data.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, radio Frequency (RF), etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Claims (18)
1. A method for computing a digital surface model, DSM, comprising:
determining the scene type of a target scene according to the depth information of each point cloud data point in the sparse point cloud data of the target scene;
if the scene type is a low fluctuation scene, generating a DSM matched with sparse point cloud data;
and if the scene type is a high-fluctuation scene, generating dense point cloud data according to the sparse point cloud data, and generating a DSM matching the dense point cloud data.
2. The method of claim 1, wherein determining the scene type of the target scene according to the depth information of each point cloud data point in the sparse point cloud data of the target scene comprises:
determining calibration depth information according to the depth information of each point cloud data point in the sparse point cloud data;
calculating a depth difference mean value matched with the sparse point cloud data according to the depth information of each point cloud data point and the calibration depth information, and calculating a depth variance according to the depth difference mean value;
and determining the scene type of the target scene to be a low-fluctuation scene or a high-fluctuation scene according to the depth variance and a preset threshold condition.
3. The method of claim 2, wherein determining the scene type of the target scene as a low-fluctuation scene or a high-fluctuation scene according to the depth variance and a preset threshold condition comprises:
calculating a target threshold according to the depth difference value mean value;
if the depth variance is smaller than or equal to the target threshold, determining that the scene type of the target scene is a low-fluctuation scene;
determining that the scene type of the target scene is a high-fluctuation scene if the depth variance is greater than the target threshold.
4. The method of claim 1, wherein generating a DSM that matches sparse point cloud data comprises:
constructing a first Delaunay triangulation network according to the sparse point cloud data;
and generating the DSM matched with the sparse point cloud data according to a preset DSM calculation formula and the first Delaunay triangulation network.
5. The method of claim 1, wherein generating dense point cloud data from the sparse point cloud data and generating a DSM that matches the dense point cloud data comprises:
generating the dense point cloud data according to the sparse point cloud data and a dense point cloud computing method;
constructing a second Delaunay triangulation network according to the dense point cloud data;
and generating the DSM matched with the dense point cloud data according to a preset DSM calculation formula and the second Delaunay triangulation network.
6. The method of claim 4 or 5, wherein the preset DSM calculation formula comprises:
normal_{x,y,z} = (vertex_1 - vertex_0) ^ (vertex_2 - vertex_0)
k = -(vertex_0(x)*normal(x) + vertex_1(y)*normal(y) + vertex_2(z)*normal(z))
dsm_z = -(normal(x)*dsm_x + normal(y)*dsm_y + k)/normal(z)
wherein vertex_0, vertex_1 and vertex_2 are the three vertices of a triangular mesh in the Delaunay triangulation network; normal_{x,y,z} is the cross product of the two edge vectors (vertex_1 - vertex_0) and (vertex_2 - vertex_0); vertex_0(x) is the x value of vertex_0, vertex_1(y) is the y value of vertex_1, and vertex_2(z) is the z value of vertex_2; k represents the sum of the weights of the three vertices in the different directions (x, y, z); normal(x), normal(y) and normal(z) are the x, y and z values of the cross-product result; dsm_z is the z value of the DSM, dsm_x is the x value of the DSM, and dsm_y is the y value of the DSM.
7. The method of claim 6, further comprising, after generating the DSM that matches the sparse point cloud data:
performing texture mapping and color correction processing on the first Delaunay triangulation network;
performing forward projection calculation according to the processed image grid to obtain a DOM of the target scene;
after generating the DSM that matches the dense point cloud data, further comprising:
recalculating the point cloud density according to a preset point cloud density calculation formula and the dense point cloud data;
carrying out clustering processing on the dense point cloud data according to the point cloud density to obtain simplified dense point cloud data;
constructing a third Delaunay triangulation network according to the simplified dense point cloud data;
performing texture mapping and color correction processing on the third Delaunay triangulation network;
and performing forward projection calculation according to the processed image grid to obtain the DOM of the target scene.
8. The method of claim 7, wherein the preset point cloud density calculation formula is:
wherein pointcloud_dense is the point cloud density, (min_x, max_x) and (min_y, max_y) are the x and y ranges of the dense point cloud data, N_d is the number of points of the dense point cloud data, and T is a constant.
9. A computing device for DSM, comprising:
the scene type determining module is used for determining the scene type of the target scene according to the depth information of each point cloud data point in the sparse point cloud data of the target scene;
the first DSM generation module is used for generating DSMs matched with the sparse point cloud data if the scene type is a low-fluctuation scene;
and the second DSM generation module is used for generating dense point cloud data according to the sparse point cloud data and generating a DSM matching the dense point cloud data if the scene type is a high-fluctuation scene.
10. The apparatus of claim 9, wherein the scene type determination module comprises:
the calibration depth information determining unit is used for determining calibration depth information according to the depth information of each point cloud data point in the sparse point cloud data;
the depth variance calculation unit is used for calculating a depth difference mean value matched with the sparse point cloud data according to the depth information of each point cloud data point and the calibration depth information, and calculating a depth variance according to the depth difference mean value;
and the scene type determining unit is used for determining the scene type of the target scene to be a low-fluctuation scene or a high-fluctuation scene according to the depth variance and a preset threshold condition.
11. The apparatus according to claim 10, wherein the scene type determining unit is specifically configured to:
calculating a target threshold according to the depth difference value mean value;
if the depth variance is smaller than or equal to the target threshold, determining that the scene type of the target scene is a low-fluctuation scene;
determining that the scene type of the target scene is a high-fluctuation scene if the depth variance is greater than the target threshold.
12. The apparatus of claim 9, wherein the first DSM generating module comprises:
the first Delaunay triangulation network construction unit is used for constructing a first Delaunay triangulation network according to the sparse point cloud data;
and the first DSM generating unit is used for generating DSMs matched with the sparse point cloud data according to a preset DSM calculation formula and the first Delaunay triangulation network.
13. The apparatus of claim 9, wherein the second DSM generating module comprises:
a dense point cloud data generation unit for generating the dense point cloud data according to the sparse point cloud data and a dense point cloud calculation method;
the second Delaunay triangulation network construction unit is used for constructing a second Delaunay triangulation network according to the dense point cloud data;
and the second DSM generating unit is used for generating DSMs matched with the dense point cloud data according to a preset DSM calculation formula and the second Delaunay triangulation network.
14. The apparatus of claim 12 or 13, wherein the preset DSM calculation formula comprises:
normal_{x,y,z} = (vertex_1 - vertex_0) ^ (vertex_2 - vertex_0)
k = -(vertex_0(x)*normal(x) + vertex_1(y)*normal(y) + vertex_2(z)*normal(z))
dsm_z = -(normal(x)*dsm_x + normal(y)*dsm_y + k)/normal(z)
wherein vertex_0, vertex_1 and vertex_2 are the three vertices of a triangular mesh in the Delaunay triangulation network; normal_{x,y,z} is the cross product of the two edge vectors (vertex_1 - vertex_0) and (vertex_2 - vertex_0); vertex_0(x) is the x value of vertex_0, vertex_1(y) is the y value of vertex_1, and vertex_2(z) is the z value of vertex_2; k represents the sum of the weights of the three vertices in the different directions (x, y, z); normal(x), normal(y) and normal(z) are the x, y and z values of the cross-product result; dsm_z is the z value of the DSM, dsm_x is the x value of the DSM, and dsm_y is the y value of the DSM.
15. The apparatus of claim 14, further comprising:
the first data processing module is used for carrying out texture mapping and color correction processing on the first Delaunay triangulation network;
the first DOM generation module is used for carrying out forward projection calculation according to the processed image grid to obtain a DOM of the target scene;
the point cloud density calculating module is used for recalculating the point cloud density according to a preset point cloud density calculation formula and the dense point cloud data;
the dense point cloud data generation module is used for clustering the dense point cloud data according to the point cloud density to obtain simplified dense point cloud data;
the third Delaunay triangulation network construction unit is used for constructing a third Delaunay triangulation network according to the simplified dense point cloud data;
the second data processing module is used for performing texture mapping and color correction processing on the third Delaunay triangulation network;
and the second DOM generating module is used for performing forward projection calculation according to the processed image grid to obtain the DOM of the target scene.
16. The apparatus of claim 15, wherein the preset point cloud density calculation formula is:
wherein pointcloud_dense is the point cloud density, (min_x, max_x) and (min_y, max_y) are the x and y ranges of the dense point cloud data, N_d is the number of points of the dense point cloud data, and T is a constant.
17. A computer device, characterized in that the computer device comprises:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the DSM calculation method of any one of claims 1-8.
18. A computer storage medium having a computer program stored thereon, the program, when executed by a processor, implementing the DSM computing method of any of claims 1-8.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010190845.7A CN113496550B (en) | 2020-03-18 | 2020-03-18 | DSM calculation method and device, computer equipment and storage medium |
PCT/CN2021/081588 WO2021185322A1 (en) | 2020-03-18 | 2021-03-18 | Image processing method and related device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010190845.7A CN113496550B (en) | 2020-03-18 | 2020-03-18 | DSM calculation method and device, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113496550A CN113496550A (en) | 2021-10-12 |
CN113496550B true CN113496550B (en) | 2023-03-24 |
Family
ID=77993220
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010190845.7A Active CN113496550B (en) | 2020-03-18 | 2020-03-18 | DSM calculation method and device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113496550B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011080669A1 (en) * | 2009-12-31 | 2011-07-07 | Rafael Advanced Defense Systems Ltd. | System and method for reconstruction of range images from multiple two-dimensional images using a range based variational method |
CN109242862A (en) * | 2018-09-08 | 2019-01-18 | 西北工业大学 | A kind of real-time digital surface model generation method |
CN109300190A (en) * | 2018-09-06 | 2019-02-01 | 百度在线网络技术(北京)有限公司 | Processing method, device, equipment and the storage medium of three-dimensional data |
WO2019093532A1 (en) * | 2017-11-07 | 2019-05-16 | 공간정보기술 주식회사 | Method and system for acquiring three-dimensional position coordinates without ground control points by using stereo camera drone |
CN109961510A (en) * | 2019-03-07 | 2019-07-02 | 长江岩土工程总公司(武汉) | A kind of high cutting-slope geology quick logging method based on three-dimensional point cloud reconfiguration technique |
CN110298136A (en) * | 2019-07-05 | 2019-10-01 | 广东金雄城工程项目管理有限公司 | Application based on BIM technology scene method of construction and system and in garden landscape digital modeling |
Non-Patent Citations (1)
Title |
---|
Research on a High-Rise Building Statistics Method Based on LiDAR DSM and Contour Analysis; Pan Guiying et al.; Urban Geotechnical Investigation & Surveying; 2017-08-31 (No. 04); full text *
Also Published As
Publication number | Publication date |
---|---|
CN113496550A (en) | 2021-10-12 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||