CN111583423A - Method and device for extracting cross section line of point cloud data - Google Patents

Method and device for extracting cross section line of point cloud data

Info

Publication number
CN111583423A
CN111583423A
Authority
CN
China
Prior art keywords
point cloud
cloud data
section line
dimensional
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010422632.2A
Other languages
Chinese (zh)
Inventor
王兰兰
Other inventors have requested that their names not be disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Greenvalley Technology Co ltd
Original Assignee
Beijing Greenvalley Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Greenvalley Technology Co ltd filed Critical Beijing Greenvalley Technology Co ltd
Priority to CN202010422632.2A priority Critical patent/CN111583423A/en
Publication of CN111583423A publication Critical patent/CN111583423A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a method and a device for extracting a section line from point cloud data, wherein the method comprises the following steps: loading the point cloud data to be processed into a scene so that it can be visualized; acquiring a preset two-dimensional virtual section line on an interface that visually displays the loaded point cloud data from a top-down viewing angle; partitioning the loaded point cloud data to generate a plurality of pieces of sub-point cloud data; acquiring, in each piece of sub-point cloud data, a preset area covering the two-dimensional virtual section line; constructing a Delaunay triangulation network based on the preset area; determining the intersection points of each triangle in the Delaunay triangulation network with the two-dimensional virtual section line, and connecting the intersection points in the sub-point cloud data in sequence to form a three-dimensional sub-section line; and assembling all the three-dimensional sub-section lines into a complete three-dimensional section line. Partitioning reduces the amount of data the hardware has to process at any one time, so the scheme can meet the requirements of many different industries and has wide applicability.

Description

Method and device for extracting cross section line of point cloud data
Technical Field
The invention relates to the field of point cloud data processing, in particular to a method and equipment for extracting a section line of point cloud data.
Background
With the rapid adoption of new technologies in the surveying and mapping industry, the degree of automation of the surveying process keeps increasing, and digital mapping is gradually replacing traditional surveying practice. Cross-sectional maps are frequently produced to meet the requirements of project planning, engineering construction and regional terrain control. A cross-sectional view shows the relief of the ground along a plane that passes through a point on the centerline of a route and is perpendicular to that centerline. A cross-sectional map may be drawn from cross-sectional measurements or derived from existing topographic maps or terrain data.
Cross-sectional maps have wide industrial application, for example in water conservancy and hydropower engineering surveys, municipal road layout and measurement, road design, pipeline surveys, tunnel engineering, topographic mapping, disaster relief and the protection of cultural relics. Taking road design as an example, a road cross-section survey measures the relief in the direction orthogonal to the road centerline; it supports roadbed, retaining-wall and protection engineering design as well as earthwork quantity calculation, and arranging the cross-section data according to certain requirements facilitates database construction and the use of road design software. In the water conservancy and hydropower industry, engineering surveys provide basic data for projects such as river channels, reservoirs and water delivery lines, and at acceptance the cross-sectional map must be compared with the topographic map to ensure construction quality.
the traditional cross-sectional diagram field measurement methods are more, and include a theodolite inclination distance method, a level method, a total station opposite side measurement method and the like. Taking the leveling instrument method as an example, the leveling instrument method adopts a direction frame for orientation, a rigid ruler or a tape measure for distance measurement, and a leveling instrument for height measurement. Typically, the elevation of the instrument is determined by looking back at the centerline pile, and then all points of topographical change are measured, recorded in the field and plotted indoors on millimeter paper. However, in any case, the traditional field mapping is inefficient, labor-intensive, and has been gradually replaced by digital mapping.
In recent years, laser radar (LiDAR) has become a fairly common remote sensing technology and is widely used across industries. The data acquired by LiDAR is a three-dimensional point cloud, that is, a large set of spatial position samples describing the position and shape of objects on the ground. Three-dimensional point cloud data has broad industrial application and can also be used to extract section lines. In topographic mapping, a LiDAR sensor carried by an unmanned aerial vehicle acquires data efficiently and quickly without contacting the object surface, so massive point cloud data can be obtained in a short time. However, in the solution proposed in publication No. CN201510915095.4, a TIN is built from a small amount of point cloud data within the range of the central axis and a section line is then generated; for massive point cloud data the processing efficiency of this approach is low.
Existing ways of generating section lines are therefore limited by hardware performance and by the specifics of the scheme, and they cannot extract a section line while a large volume of point cloud data is loaded.
Disclosure of Invention
In view of the above shortcomings of the prior art, the invention provides a method and a device for extracting a section line from point cloud data. A three-dimensional section line is obtained visually from partitioned point cloud data based on the intersection points of a Delaunay triangulation network with a two-dimensional virtual section line. Partitioning reduces the amount of data the hardware has to process at any one time, and the extraction method is not tied to any particular industry: as long as the point cloud data meets certain accuracy requirements it can serve many different industries, giving the approach wide applicability.
The embodiment of the invention provides the following specific embodiments:
the embodiment of the invention provides a method for extracting a section line of point cloud data, which comprises the following steps:
loading point cloud data to be processed into a scene so that the point cloud data to be processed can be visualized;
acquiring a preset two-dimensional virtual section line on an interface that visually displays the loaded point cloud data from a top-down viewing angle;
partitioning the loaded point cloud data to generate a plurality of pieces of sub-point cloud data;
acquiring, in each piece of sub-point cloud data, a preset area covering the two-dimensional virtual section line;
constructing a Delaunay triangulation network based on the preset area;
determining the intersection points of each triangle in the Delaunay triangulation network with the two-dimensional virtual section line, and connecting the intersection points in the sub-point cloud data in sequence to form a three-dimensional sub-section line;
and assembling all the three-dimensional sub-section lines into a complete three-dimensional section line.
In a specific embodiment, the method further comprises:
extracting the original point cloud data to obtain intermediate point cloud data;
and thinning the intermediate point cloud data to generate the point cloud data to be processed.
In a specific embodiment, the original point cloud data is acquired by scanning a preset target area with a three-dimensional laser scanner.
In a specific embodiment, the extraction of the original point cloud data to obtain the intermediate point cloud data includes:
classifying each target in the original point cloud data;
determining useless classes and useful classes among the targets according to a preset rule;
and removing the targets corresponding to useless classes from the original point cloud data while retaining the targets in useful classes, thereby completing the extraction and obtaining the intermediate point cloud data.
In a particular embodiment of the present invention,
the thinning processing is carried out on the intermediate point cloud data to generate point cloud data to be processed, and the method comprises the following steps:
processing the intermediate point cloud data to construct an octree structure;
and performing rarefaction treatment on the processed intermediate point cloud data to generate point cloud data to be processed.
In a particular embodiment of the present invention,
the method for acquiring the preset two-dimensional virtual section line on the interface for visually displaying the point cloud data to be loaded at the overlooking visual angle comprises the following steps:
importing data stored with a preset two-dimensional virtual cross section line so as to display the preset two-dimensional virtual cross section line on an interface for visually displaying the point cloud data to be loaded at a top view angle; or
And acquiring a two-dimensional virtual cross section line drawn by a user on an interface for visually displaying the point cloud data to be loaded at a top view angle.
In a specific embodiment, the partitioning of the loaded point cloud data to generate a plurality of pieces of sub-point cloud data includes:
acquiring the point cloud density of the loaded point cloud data;
determining, based on the point cloud density, the volume of a cuboid corresponding to a preset number of points;
and partitioning according to a preset block side length to generate a plurality of pieces of sub-point cloud data.
In a specific embodiment, the method further comprises the following steps:
and generating a section diagram based on the three-dimensional section line.
In a particular embodiment of the present invention,
the generating of the profile based on the three-dimensional profile line comprises:
sampling the three-dimensional section line, and generating a section diagram with a preset resolution according to sampling points obtained by sampling; wherein the preset resolution is related to the sampling distance of the samples.
The embodiment of the invention also provides equipment for extracting the section line of the point cloud data, which comprises a module for executing the method.
Therefore, the embodiment of the invention provides a method and a device for extracting a section line from point cloud data, the method comprising: loading the point cloud data to be processed into a scene so that it can be visualized; acquiring a preset two-dimensional virtual section line on an interface that visually displays the loaded point cloud data from a top-down viewing angle; partitioning the loaded point cloud data to generate a plurality of pieces of sub-point cloud data; acquiring, in each piece of sub-point cloud data, a preset area covering the two-dimensional virtual section line; constructing a Delaunay triangulation network based on the preset area; determining the intersection points of each triangle in the Delaunay triangulation network with the two-dimensional virtual section line and connecting the intersection points in the sub-point cloud data in sequence to form a three-dimensional sub-section line; and assembling all the three-dimensional sub-section lines into a complete three-dimensional section line. In this visual manner the three-dimensional section line is obtained from the partitioned point cloud data based on the intersection points of the Delaunay triangulation network with the two-dimensional virtual section line. Partitioning reduces the amount of data the hardware has to process at any one time, and the extraction method is not tied to any particular industry: as long as the point cloud data meets certain accuracy requirements it can serve many different industries, giving the approach wide applicability.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a schematic flow chart of a method for extracting a cross-sectional line of point cloud data according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a two-dimensional virtual cross-sectional line in the method for extracting a cross-sectional line of point cloud data according to the embodiment of the present invention;
Fig. 3 is a schematic diagram of point cloud partitioning in the method for extracting a section line of point cloud data according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a section line in a method for extracting a section line of point cloud data according to an embodiment of the present invention;
fig. 5 is a detailed schematic diagram of an intersection point in the method for extracting a cross-sectional line of point cloud data according to the embodiment of the present invention;
fig. 6 is a detailed schematic diagram of an intersection point in the method for extracting a cross-sectional line of point cloud data according to the embodiment of the present invention;
fig. 7 is a detailed schematic diagram of an intersection point in the method for extracting a cross-sectional line of point cloud data according to the embodiment of the present invention.
Detailed Description
Various embodiments of the present disclosure will be described more fully hereinafter. The present disclosure is capable of various embodiments and of modifications and variations therein. However, it should be understood that the various embodiments of the disclosure are not limited to the specific embodiments disclosed herein; rather, the disclosure covers all modifications, equivalents and/or alternatives falling within the spirit and scope of the various embodiments of the disclosure.
Example 1
Embodiment 1 of the invention provides a method for extracting a section line of point cloud data which, as shown in Fig. 1, comprises the following steps:
step 101, loading point cloud data to be processed into a scene to realize visualization of the point cloud data to be processed;
specifically, in an embodiment, the method further includes, before step 101:
step S1, extracting the original point cloud data to obtain intermediate point cloud data;
specifically, the original point cloud data is acquired by scanning a preset target area with a three-dimensional laser.
The three-dimensional laser scanner uses laser point cloud technology: it describes a real object with points distributed in space, that is, the laser point cloud records the absolute spatial position of the object on the earth, and these points cover all objects within the scanning area.
Specifically, the extraction of the original point cloud data in step S1 to obtain the intermediate point cloud data includes:
classifying each target in the original point cloud data;
determining useless classes and useful classes among the targets according to a preset rule;
and removing the targets corresponding to useless classes from the original point cloud data while retaining the targets in useful classes, thereby completing the extraction and obtaining the intermediate point cloud data.
The objects within the scanning area include both usable parts and unusable parts (for example noise). After the original point cloud data is obtained it therefore needs to be classified: useless points are filtered out according to the class labels assigned to point cloud blocks or regions, and the useful parts are extracted to obtain usable point cloud data.
point cloud classification is directed at different industries, and the classification results are different but overlap:
for example, in the power industry, the classification result includes ground points, towers, electric wires, vegetation, houses, cross lines, substations and the like, and the classification result focuses on objects in the power industry;
forestry focuses more on terrain and vegetation, and the results may include ground points, low shrubs, higher vegetation, medium vegetation points, and the like.
The classification itself can be done by machine learning combined with manual editing. Its main purpose is to distinguish different objects so that subsequent fine-grained operations can work per category, which improves efficiency. In this scheme the classification adapts the data to different industry requirements: for the power industry, for example, only the point clouds of categories 1, 3, 4 and 6 might be selected as the useful part, while for forestry only categories 1, 2 and 5 might be selected; after one or more categories are selected, the section line is drawn from the point clouds of the selected categories only.
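Purely as an illustration of this class-based filtering (the patent does not prescribe any code or data structure), a C++ sketch might look as follows; the Point structure, the class codes and the function name are assumptions introduced here, not part of the disclosure.

```cpp
// Minimal sketch, assuming LAS-style class codes: keep only points whose class
// is in a user-chosen "useful" set, e.g. {1, 3, 4, 6} for power-line work.
#include <cstdint>
#include <unordered_set>
#include <vector>

struct Point {
    double x, y, z;
    std::uint8_t classCode;  // e.g. 2 = ground, 5 = high vegetation (assumed codes)
};

std::vector<Point> extractUsefulClasses(const std::vector<Point>& raw,
                                        const std::unordered_set<std::uint8_t>& useful)
{
    std::vector<Point> kept;
    kept.reserve(raw.size());
    for (const Point& p : raw) {
        if (useful.count(p.classCode)) {
            kept.push_back(p);  // retain useful classes, drop noise and unused classes
        }
    }
    return kept;
}
```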
Step S2, thinning the intermediate point cloud data to generate the point cloud data to be processed.
Specifically, in step S2, the thinning of the intermediate point cloud data to generate the point cloud data to be processed includes:
processing the intermediate point cloud data to construct an octree structure;
and thinning the processed intermediate point cloud data to generate the point cloud data to be processed.
Specifically, the point cloud is organized into a spatial structure (for example an octree) and the structured intermediate point cloud data is written into a single file, which allows massive point cloud data to be read and displayed in real time and facilitates the subsequent generation of section lines.
As for the thinning itself: point cloud data is generally massive, on the order of at least a hundred million points, which exceeds the memory limit of a computer, so the point cloud can only be displayed in the scene after thinning.
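The patent only states that an octree is built and the data thinned; as a hedged illustration of the thinning idea (not the disclosed octree pipeline), the following C++ sketch keeps one point per cubic cell of a chosen size, which caps the number of points that must be held for display. The structure and names are assumptions.

```cpp
// Grid-based thinning sketch: keep the first point encountered in each cell.
#include <cmath>
#include <cstdint>
#include <unordered_set>
#include <vector>

struct Point { double x, y, z; };

// Pack the integer cell coordinates of a point into one 64-bit key (21 bits per axis).
static std::uint64_t cellKey(const Point& p, double cell)
{
    const std::int64_t ix = static_cast<std::int64_t>(std::floor(p.x / cell));
    const std::int64_t iy = static_cast<std::int64_t>(std::floor(p.y / cell));
    const std::int64_t iz = static_cast<std::int64_t>(std::floor(p.z / cell));
    return ((static_cast<std::uint64_t>(ix) & 0x1FFFFF) << 42) |
           ((static_cast<std::uint64_t>(iy) & 0x1FFFFF) << 21) |
            (static_cast<std::uint64_t>(iz) & 0x1FFFFF);
}

std::vector<Point> thinByGrid(const std::vector<Point>& cloud, double cell)
{
    std::unordered_set<std::uint64_t> occupied;
    std::vector<Point> thinned;
    for (const Point& p : cloud) {
        if (occupied.insert(cellKey(p, cell)).second) {
            thinned.push_back(p);  // first point seen in this cell is kept
        }
    }
    return thinned;
}
```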
Step 102, acquiring a preset two-dimensional virtual section line on the interface that visually displays the loaded point cloud data from a top-down viewing angle;
specifically, the step 102 of obtaining a preset two-dimensional virtual cross-sectional line on an interface for visually displaying the point cloud data to be loaded at a top view angle includes:
importing data stored with a preset two-dimensional virtual cross section line so as to display the preset two-dimensional virtual cross section line on an interface for visually displaying the point cloud data to be loaded at a top view angle; or
And acquiring a two-dimensional virtual cross section line drawn by a user on an interface for visually displaying the point cloud data to be loaded at a top view angle.
In one embodiment, a series of points is picked with the mouse on the top-view display interface, one after another, and the drawing of the two-dimensional virtual section line is thereby completed. The principle is that the screen position clicked by the mouse and the viewpoint define a virtual straight line that may intersect several points of the cloud; the point closest to the viewpoint is taken as the picked point. The virtual section line may span several point clouds, so in actual processing the relation between the section line and each point cloud needs to be stored for subsequent stitching.
Further, the two-dimensional virtual section line may be generated with a point-picking tool, which may be implemented as a C++ program; the appearance of such a program interface is shown in Fig. 2. The grey part is the point cloud, and the picking tool draws a series of two-dimensional virtual points on it. The mouse clicks P0 to P6 on the screen in sequence, and connecting the points in order forms the two-dimensional virtual section line (the blue line). These section lines are "virtual" in that they have only (x, y) coordinates and no Z, although they correspond one-to-one to the actual point cloud; Z values are not used until the true section line is calculated.
More specifically, the display interface has a finite resolution, for example 1920 × 1080 pixels, while the number of points is huge: hundreds of millions of points may be displayed on the screen, so each screen pixel may contain many points. When the mouse clicks P0 on the screen, all the points falling in pixel P0 are extracted, and the point with the highest Z value among them is taken as the point selected by the mouse.
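As a rough illustration of this pixel-based picking (assumed code, not the disclosed implementation), the following C++ sketch collects all points that project into the clicked pixel and returns the one with the largest Z; worldToPixel stands in for the viewer's top-view projection and is an assumption.

```cpp
// In a top-down view, the point with the largest Z among those landing in the
// clicked pixel is the one nearest the viewpoint.
#include <optional>
#include <vector>

struct Point { double x, y, z; };
struct Pixel { int u, v; };

std::optional<Point> pickPoint(const std::vector<Point>& visible,
                               Pixel clicked,
                               Pixel (*worldToPixel)(const Point&))
{
    std::optional<Point> best;
    for (const Point& p : visible) {
        const Pixel q = worldToPixel(p);          // assumed top-view screen projection
        if (q.u == clicked.u && q.v == clicked.v) {
            if (!best || p.z > best->z) {
                best = p;                          // keep the highest point in this pixel
            }
        }
    }
    return best;  // empty if no point projects into the clicked pixel
}
```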
Step 103, partitioning the loaded point cloud data to generate a plurality of pieces of sub-point cloud data;
specifically, the blocking the point cloud data to be loaded in step 103 to generate a plurality of pieces of sub-point cloud data includes:
acquiring the point cloud density of the point cloud data to be loaded;
determining the volume of a cuboid corresponding to a preset number of point clouds based on the point cloud density;
and partitioning the volume according to the side length of a preset partitioned point cloud to generate a plurality of pieces of sub-point cloud data.
Partitioning speeds up the subsequent processing and prevents excessive memory use during processing.
A specific partitioning procedure may be: first compute the average point density θ of the cloud, then compute from that density the block side length needed to hold a preset number of points (for example 500W, i.e. five million), and finally partition according to the preset block side length.
In one specific example, the average density of the point cloud is determined by the formula θ = N / V, where N is the total number of points in the cloud and V is the volume of the point cloud's bounding box, i.e. the product of its three side lengths. The volume needed to hold 500W (five million) points is then V_500W = 5,000,000 / θ, and the side length of the corresponding block is L = (V_500W)^(1/3), i.e. the cube root of that volume. Assuming the preset block side length is 50W, the point cloud partitioning can be as shown in Fig. 3, where a remaining part shorter than 50W is made into a block with the remaining length.
This approach keeps the block sizes reasonable and keeps the computer's memory use reasonable.
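Under the reading that each block is roughly a cube holding the preset number of points (an interpretation of the formulas above, not a quotation of them), the side-length calculation can be sketched in C++ as follows; the struct and names are assumptions.

```cpp
// Illustrative block-size calculation: from the bounding box and total point
// count, derive the side length of a block holding about `pointsPerBlock` points.
#include <cmath>
#include <cstddef>

struct BoundingBox { double dx, dy, dz; };  // side lengths of the outer box

double blockSideLength(const BoundingBox& box, std::size_t totalPoints,
                       std::size_t pointsPerBlock /* e.g. 5,000,000 ("500W") */)
{
    const double volume  = box.dx * box.dy * box.dz;                    // V = product of three sides
    const double density = static_cast<double>(totalPoints) / volume;   // theta = N / V
    const double blockVolume = static_cast<double>(pointsPerBlock) / density;
    return std::cbrt(blockVolume);                                      // L = cube root of the block volume
}
```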
After the partitioning, the real section line of each point cloud can be extracted by using the two-dimensional virtual section line.
The actual section line may consist of several segments connected in sequence, and these segments may span several point cloud blocks. Therefore all intersection points between each block and the section line, and the order in which they connect, need to be computed to make the subsequent merging possible.
In addition, the partitioning may also be done by equal division, for example into 8 parts or some other number of parts.
Step 104, acquiring, in each piece of sub-point cloud data, a preset area covering the two-dimensional virtual section line;
step 105, constructing a Delaunay triangulation network based on the preset area;
specifically, as shown in fig. 4, for the processing of a single point cloud, the point cloud in a certain buffer area around the two-dimensional virtual cross-sectional line is extracted first, and then the Delaunay triangulation network is constructed by using a point-by-point insertion method.
Step 106, determining the intersection points of each triangle in the Delaunay triangulation network with the two-dimensional virtual section line, and connecting the intersection points in the sub-point cloud data in sequence to form a three-dimensional sub-section line;
and extracting the intersection point of the two-dimensional virtual section line and the Delaunay triangulation network as a real section point.
Specifically, as shown in Fig. 4, the two-dimensional virtual section line runs through A1 to A5, and its intersections with the blocks are S1 to S4. Within point cloud block K1, the points in the vicinity of the section segment A1-S1 form a Delaunay triangulation network, and the intersections of the segment A1S1 with the individual triangles constitute the true section line. Since every vertex of a Delaunay triangle carries a Z value, the intersection points of the triangulation network with the two-dimensional virtual section line (for example A1S1 in block K1) are themselves a three-dimensional point set.
More specifically, as shown in Figs. 5 to 7, Figs. 5 and 6 are top views of the triangulation network and the section line and show the details of the triangulation in point cloud block K1; the distribution of the actual triangle vertices is more complicated. In these top views the Z value of each (X, Y, Z) point is not visible: just as a mountain, however high, is still projected onto the plane of a map, the points are projected onto the viewing plane, which is what Fig. 6 represents. Ignoring the Z value temporarily simplifies the calculation. Fig. 6 shows some of the intersections, P0 to P7, between the segment A1-S1 and the triangulation network.
Although the intersections P0 to P7 lie on neatly drawn triangles in the plan view, in practice the three vertices of each triangle have their own Z values (elevations) in three-dimensional space, so the intersection points with the triangulation network have different Z values as well. Because the triangulation network represents the actual terrain in three-dimensional space, the intersection points represent the actual section line in three-dimensional space. Fig. 7 shows the three-dimensional positions of P0 to P7 outside the top view. The differing Z values of the triangulation vertices are what produce the relief of the terrain; by the definition of a section line, the vertical section plane intersects the actual terrain in a curve that describes the relief of the terrain on that section. The section plane appears in the two-dimensional plane as the straight line A1-S1, and the intersections of this line with the terrain-describing triangulation network are a series of points on the section line.
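A hedged C++ sketch of this intersection step is given below: each triangle edge is intersected with the two-dimensional section segment in the XY plane, and the Z value is interpolated linearly along that edge, so every intersection point becomes a 3D point on the section line. The structures and function names are assumptions; the patent describes the principle, not this code.

```cpp
#include <cmath>
#include <vector>

struct Point3 { double x, y, z; };

// Returns true and fills `out` if segment (a,b) and edge (c,d) intersect in XY;
// Z of the result is interpolated linearly along the triangle edge (c,d).
static bool intersectEdge2D(const Point3& a, const Point3& b,
                            const Point3& c, const Point3& d, Point3& out)
{
    const double r1x = b.x - a.x, r1y = b.y - a.y;   // direction of the section segment
    const double r2x = d.x - c.x, r2y = d.y - c.y;   // direction of the triangle edge
    const double denom = r1x * r2y - r1y * r2x;
    if (std::fabs(denom) < 1e-12) return false;       // parallel, no single intersection
    const double t = ((c.x - a.x) * r2y - (c.y - a.y) * r2x) / denom;  // parameter along segment
    const double u = ((c.x - a.x) * r1y - (c.y - a.y) * r1x) / denom;  // parameter along edge
    if (t < 0.0 || t > 1.0 || u < 0.0 || u > 1.0) return false;
    out.x = a.x + t * r1x;
    out.y = a.y + t * r1y;
    out.z = c.z + u * (d.z - c.z);                    // linear elevation along the edge
    return true;
}

// Intersections of one section segment with one triangle (0, 1 or 2 points).
std::vector<Point3> sectionPointsInTriangle(const Point3& a, const Point3& b,
                                            const Point3 tri[3])
{
    std::vector<Point3> pts;
    for (int i = 0; i < 3; ++i) {
        Point3 p;
        if (intersectEdge2D(a, b, tri[i], tri[(i + 1) % 3], p)) pts.push_back(p);
    }
    return pts;
}
```

A segment can cut a triangle in at most two points, so collecting these per triangle and ordering them along the segment yields the points of the three-dimensional sub-section line.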
Step 107, summarizing all the three-dimensional sub-section lines to form a complete three-dimensional section line.
After the three-dimensional sub-section line of each piece of sub-point cloud data is obtained, the sub-section lines are connected in sequence to form the complete three-dimensional section line.
Still referring to Fig. 7, the real three-dimensional section line is essentially a polyline formed by connecting the points on the triangle edges in a fixed order, i.e. P0 through P7 in sequence.
In addition, after the three-dimensional section line is generated in step 107, the method may further include:
and generating a section diagram based on the three-dimensional section line.
In a specific embodiment, the generating of the section diagram based on the three-dimensional section line includes:
sampling the three-dimensional section line, and generating a section diagram at a preset resolution from the sampled points; wherein the preset resolution is related to the sampling distance.
After the three-dimensional section line is obtained, it can be used to generate cross-sectional views at various resolutions; the section line is sampled to reach a suitable resolution. Sampling means taking a point on the polyline every sampling distance A; the resolution can also be entered manually as required.
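As an assumed illustration of sampling at a fixed distance A (the patent does not provide code), the following C++ sketch walks along the three-dimensional polyline and emits one point every `spacing` units of arc length, interpolating linearly between the original vertices.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Point3 { double x, y, z; };

std::vector<Point3> resample(const std::vector<Point3>& line, double spacing)
{
    std::vector<Point3> out;
    if (line.size() < 2 || spacing <= 0.0) return line;
    out.push_back(line.front());
    double carried = 0.0;                      // arc length covered since the last emitted sample
    for (std::size_t i = 0; i + 1 < line.size(); ++i) {
        const Point3& a = line[i];
        const Point3& b = line[i + 1];
        const double seg = std::sqrt((b.x - a.x) * (b.x - a.x) +
                                     (b.y - a.y) * (b.y - a.y) +
                                     (b.z - a.z) * (b.z - a.z));
        double d = spacing - carried;          // distance into this segment of the next sample
        while (d <= seg) {
            const double t = d / seg;
            out.push_back({a.x + t * (b.x - a.x),
                           a.y + t * (b.y - a.y),
                           a.z + t * (b.z - a.z)});
            d += spacing;
        }
        carried = seg - (d - spacing);         // leftover arc length carried into the next segment
    }
    out.push_back(line.back());                // keep the final vertex of the section line
    return out;
}
```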
In summary, the method and device can process massive data: the block length is computed from the point cloud density, the cloud is partitioned into blocks, and because the point cloud is organized in a file the blocks can be processed in parallel, which greatly improves the efficiency of the algorithm. A two-dimensional virtual section line can be drawn manually; it is then split along with the point cloud blocks, a Delaunay triangulation network is generated, the real three-dimensional sub-section lines are produced, and the pieces are finally merged. The method is not tied to any particular industry and can be used in forestry, road construction, tunnel construction and the like. Finally, the merged real three-dimensional section line can be further sampled to generate a section diagram.
Example 2
Embodiment 2 of the invention further discloses a device for extracting a section line of point cloud data, which comprises modules for executing the method of embodiment 1. Embodiment 2 includes further features; for the details please refer to the description of embodiment 1, which for brevity is not repeated here.
Therefore, the embodiment of the invention provides a method and a device for extracting a section line from point cloud data, the method comprising: loading the point cloud data to be processed into a scene so that it can be visualized; acquiring a preset two-dimensional virtual section line on an interface that visually displays the loaded point cloud data from a top-down viewing angle; partitioning the loaded point cloud data to generate a plurality of pieces of sub-point cloud data; acquiring, in each piece of sub-point cloud data, a preset area covering the two-dimensional virtual section line; constructing a Delaunay triangulation network based on the preset area; determining the intersection points of each triangle in the Delaunay triangulation network with the two-dimensional virtual section line and connecting the intersection points in the sub-point cloud data in sequence to form a three-dimensional sub-section line; and assembling all the three-dimensional sub-section lines into a complete three-dimensional section line. In this visual manner the three-dimensional section line is obtained from the partitioned point cloud data based on the intersection points of the Delaunay triangulation network with the two-dimensional virtual section line. Partitioning reduces the amount of data the hardware has to process at any one time, and the extraction method is not tied to any particular industry: as long as the point cloud data meets certain accuracy requirements it can serve many different industries, giving the approach wide applicability.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario and that the blocks or flow diagrams in the figures are not necessarily required to practice the present invention.
Those skilled in the art will appreciate that the modules in the devices in the implementation scenario may be distributed in the devices in the implementation scenario according to the description of the implementation scenario, or may be located in one or more devices different from the present implementation scenario with corresponding changes. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above serial numbers of the embodiments are merely for description and do not indicate the relative merit of the implementation scenarios.
The above disclosure is only a few specific implementation scenarios of the present invention, however, the present invention is not limited thereto, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present invention.

Claims (10)

1. A method for extracting a section line of point cloud data is characterized by comprising the following steps:
loading point cloud data to be processed into a scene so that the point cloud data to be processed can be visualized;
acquiring a preset two-dimensional virtual section line on an interface that visually displays the loaded point cloud data from a top-down viewing angle;
partitioning the loaded point cloud data to generate a plurality of pieces of sub-point cloud data;
acquiring a preset area covering the two-dimensional virtual cross-sectional line in each sub-point cloud data;
constructing a Delaunay triangulation network based on the preset area;
determining the intersection points of each triangle in the Delaunay triangulation network and the two-dimensional virtual section line, and sequentially connecting the intersection points in the sub-point cloud data to form a three-dimensional sub-section line;
and summarizing all the three-dimensional sub section lines to form a complete three-dimensional section line.
2. The method of claim 1, further comprising:
extracting the original point cloud data to obtain intermediate point cloud data;
and thinning the intermediate point cloud data to generate the point cloud data to be processed.
3. The method of claim 2, wherein the original point cloud data is acquired by scanning a preset target area with a three-dimensional laser scanner.
4. The method of claim 2, wherein the extraction of the original point cloud data to obtain the intermediate point cloud data comprises:
classifying each target in the original point cloud data;
determining useless classes and useful classes among the targets according to a preset rule;
and removing the targets corresponding to useless classes from the original point cloud data while retaining the targets in useful classes, thereby completing the extraction and obtaining the intermediate point cloud data.
5. The method for extracting a section line of point cloud data according to claim 2, wherein
the thinning of the intermediate point cloud data to generate the point cloud data to be processed comprises:
processing the intermediate point cloud data to construct an octree structure;
and thinning the processed intermediate point cloud data to generate the point cloud data to be processed.
6. The method for extracting a section line of point cloud data according to claim 1, wherein
the acquiring of a preset two-dimensional virtual section line on an interface that visually displays the loaded point cloud data from a top-down viewing angle comprises:
importing data in which a preset two-dimensional virtual section line is stored, so that the preset two-dimensional virtual section line is displayed on the interface that visually displays the loaded point cloud data from a top-down viewing angle; or
acquiring a two-dimensional virtual section line drawn by a user on the interface that visually displays the loaded point cloud data from a top-down viewing angle.
7. The method for extracting a section line of point cloud data according to claim 1, wherein the partitioning of the loaded point cloud data to generate a plurality of pieces of sub-point cloud data comprises:
acquiring the point cloud density of the loaded point cloud data;
determining, based on the point cloud density, the volume of a cuboid corresponding to a preset number of points;
and partitioning according to a preset block side length to generate a plurality of pieces of sub-point cloud data.
8. The method of claim 1, further comprising:
and generating a section diagram based on the three-dimensional section line.
9. The method for extracting a section line of point cloud data according to claim 8, wherein
the generating of the section diagram based on the three-dimensional section line comprises:
sampling the three-dimensional section line, and generating a section diagram at a preset resolution from the sampled points; wherein the preset resolution is related to the sampling distance.
10. An apparatus for extracting a cross-sectional line of point cloud data, comprising means for performing the method of any one of claims 1-9.
CN202010422632.2A 2020-05-19 2020-05-19 Method and device for extracting cross section line of point cloud data Pending CN111583423A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010422632.2A CN111583423A (en) 2020-05-19 2020-05-19 Method and device for extracting cross section line of point cloud data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010422632.2A CN111583423A (en) 2020-05-19 2020-05-19 Method and device for extracting cross section line of point cloud data

Publications (1)

Publication Number Publication Date
CN111583423A true CN111583423A (en) 2020-08-25

Family

ID=72111004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010422632.2A Pending CN111583423A (en) 2020-05-19 2020-05-19 Method and device for extracting cross section line of point cloud data

Country Status (1)

Country Link
CN (1) CN111583423A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103903061A (en) * 2014-03-04 2014-07-02 中国地质科学院矿产资源研究所 Information comprehensive processing device and method in three-dimensional mineral resource prediction evaluation
EP2990995A2 (en) * 2014-08-29 2016-03-02 Leica Geosystems AG Line parametric object estimation
CN106887020A (en) * 2015-12-12 2017-06-23 星际空间(天津)科技发展有限公司 A kind of road vertical and horizontal section acquisition methods based on LiDAR point cloud
CN108470374A (en) * 2018-04-08 2018-08-31 中煤航测遥感集团有限公司 Mass cloud data processing method and processing device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Liu Xiaoping et al., "Triangulation network construction algorithm based on LiDAR point cloud data", Journal of Software *
Zhou Jianhong et al., "Fast generation algorithm for terrain cross sections from massive low-altitude airborne LiDAR point clouds", Journal of Geomatics Science and Technology *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200825)