CN112241998A - Point cloud based rapid sectioning method, intelligent terminal and cloud platform - Google Patents

Point cloud based rapid sectioning method, intelligent terminal and cloud platform

Info

Publication number
CN112241998A
Authority
CN
China
Prior art keywords
point cloud
data
point
intelligent terminal
cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011107623.0A
Other languages
Chinese (zh)
Inventor
陈友艺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xincheng Network Technology Yangjiang Co ltd
Original Assignee
Xincheng Network Technology Yangjiang Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xincheng Network Technology Yangjiang Co ltd filed Critical Xincheng Network Technology Yangjiang Co ltd
Priority to CN202011107623.0A priority Critical patent/CN112241998A/en
Publication of CN112241998A publication Critical patent/CN112241998A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/005Tree description, e.g. octree, quadtree
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a point cloud based rapid sectioning method, an intelligent terminal and a cloud platform. The method comprises the following steps: uploading point cloud data so that the cloud platform converts the point cloud data into an octree binary format for storage and obtains a point cloud file; after a viewing instruction is obtained, asynchronously loading the point cloud file from the cloud platform according to the viewing instruction, and visually rendering the point cloud from the point cloud file; uploading a region of interest and parameters so that the cloud platform estimates the number of points in the region of interest and returns the estimated point count; uploading a generation instruction so that the cloud platform sections the region of interest to obtain target data; and uploading a download instruction so that the cloud platform converts the target data into the corresponding output data type and returns it. The method is browser-based, supports clients such as PC and mobile terminals, and stores the point cloud data and performs the data operations on the cloud platform, which reduces the hardware requirements of the client. It can be widely applied in the field of three-dimensional visualization.

Description

Point cloud based rapid sectioning method, intelligent terminal and cloud platform
Technical Field
The invention relates to the technical fields of three-dimensional visualization, three-dimensional reverse engineering, engineering drawing and digital construction, and in particular to a point cloud-based rapid sectioning method, an intelligent terminal and a cloud platform.
Background
Point cloud data refers to scanned data recorded in the form of points, where each point includes three-dimensional coordinates and some points may also include color information. Because the data volume of point cloud data is large, a large storage space is needed, and considerable network bandwidth, hard disk space and memory are occupied during transmission; point clouds with large data sets are therefore difficult to manage and not easy to share. Users also need to extract the digital asset information of interest from a large scene. In addition, multiple data format conversions and data redundancy may arise in the processing of point cloud data, and most current point cloud display and interaction is based on an installed client, which is difficult to copy and share and has high hardware requirements.
Interpretation of terms:
Spacing: point spacing, the spatial distance between points in the point cloud.
LOD (Level Of Detail): level of detail.
Level: detail level.
Mesh: a three-dimensional mesh model.
AABB (Axis-Aligned Bounding Box): axis-aligned bounding box.
OBB (Oriented Bounding Box): oriented bounding box.
Disclosure of Invention
In order to solve at least one of the technical problems in the prior art to a certain extent, the invention aims to provide a point cloud-based rapid sectioning method, an intelligent terminal and a cloud platform.
The technical scheme adopted by the invention is as follows:
a point cloud-based rapid sectioning method comprises the following steps:
uploading point cloud data to enable a cloud platform to convert the point cloud data into an octree binary format for storage, and obtaining a point cloud file;
after a viewing instruction is obtained, asynchronously loading the point cloud file from the cloud platform according to the viewing instruction, and visually rendering the point cloud according to the point cloud file;
uploading the region of interest and parameters to enable a cloud platform to estimate the number of points in the region of interest and return the estimated point count;
uploading a generation starting instruction to enable the cloud platform to section the region of interest to obtain target data;
and uploading a downloading instruction to enable the cloud platform to convert the target data into a corresponding output data type and return the output data type.
Further, the point cloud data comprises point coordinate format data and three-dimensional grid format data, and when the point cloud data is the three-dimensional grid format data, the method further comprises a format conversion step before storing the point cloud data:
calculating the areas of all triangles in the three-dimensional grid format data, and establishing an accumulative distribution function according to the calculated areas of the triangles;
carrying out point cloud balanced sampling on each triangle according to the cumulative distribution function to obtain vertex information;
converting the three-dimensional grid format data into point coordinate format data according to the vertex information;
the vertex information includes vertex coordinates P, vertex color information C, and vertex normal information N.
Further, the asynchronously loading the point cloud file from the cloud platform according to the viewing instruction comprises:
setting a task queue in a Javascript main thread according to a viewing instruction, and simultaneously loading a point cloud file by a plurality of callback functions generated by an XMLHttpRequest;
after the XMLHttpRequest loads the point cloud file from the cloud platform, the data to be loaded is analyzed in the parallel thread by using Web Workers.
Further, the visually rendering the point cloud according to the point cloud file comprises:
determining a visible region range based on a view frustum model of the camera;
dynamically loading child nodes of the point cloud file according to the visible region range based on the distance of a camera;
loading constraint conditions to adapt to device terminals with different memory and display performance;
the WEBGL rendering technology based on the browser renders point clouds, so that different operating systems are compatible.
Further, the method also comprises the step of setting the region of interest:
acquiring camera adjustment parameters input by a user, and acquiring a point cloud area of a screen range according to the camera adjustment parameters;
obtaining an interested target region by adopting a cuboid according to the point cloud region, and obtaining a cuboid transformation matrix T;
saving the cuboid transformation matrix T, and setting projection plane parameters and data quality parameters according to the cuboid transformation matrix T;
the camera adjustment parameters include adjusting at least one of a position, an angle, or a near-far viewpoint of the camera.
Further, the sectioning the region of interest to obtain target data includes:
acquiring the level-of-detail depth of the octree child nodes according to the data quality parameter;
estimating the number of points in the point cloud nodes according to the level-of-detail depth;
acquiring a cuboid bounding box OBB according to the cuboid transformation matrix T;
and performing collision detection on the octree by using the cuboid bounding box OBB, and acquiring point cloud data of a target area as target data.
Further, the converting the target data into a corresponding output data type and returning the output data type includes:
acquiring sub-point cloud data according to the target data, wherein the sub-point cloud data comprises a point coordinate P and color information C;
outputting a two-dimensional orthographic projection image according to the sub-point cloud data;
and reconstructing a three-dimensional grid surface according to the sub-point cloud data, and outputting Mesh model data.
The other technical scheme adopted by the invention is as follows:
an intelligent terminal, comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement a point cloud-based rapid sectioning method as described above.
The other technical scheme adopted by the invention is as follows:
a point cloud-based rapid sectioning method comprises the following steps:
acquiring point cloud data uploaded by an intelligent terminal, converting the point cloud data into an octree binary format for storage, and obtaining a point cloud file;
obtaining a viewing instruction uploaded by an intelligent terminal, and sending the point cloud file to the intelligent terminal according to the viewing instruction so that the intelligent terminal can visually render the point cloud according to the point cloud file;
after acquiring the region of interest and parameters uploaded by the intelligent terminal, estimating the number of points in the region of interest, and returning the estimated point count to the intelligent terminal;
acquiring a starting generation instruction uploaded by the intelligent terminal, and sectioning the region of interest to acquire target data;
and acquiring a downloading instruction uploaded by the intelligent terminal, converting the target data into a corresponding output data type, and sending the output data type to the intelligent terminal.
The other technical scheme adopted by the invention is as follows:
a cloud platform, comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method described above.
The invention has the following beneficial effects: the browser-based cloud platform facilitates data sharing, supports clients such as PC and mobile terminals, stores the point cloud data and performs the data operations, reduces large-data redundancy, allows the data to be rapidly sectioned on demand into various types of model data, and reduces the hardware requirements of the clients.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description is made on the drawings of the embodiments of the present invention or the related technical solutions in the prior art, and it should be understood that the drawings in the following description are only for convenience and clarity of describing some embodiments in the technical solutions of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a data interaction diagram of a point cloud-based rapid sectioning system in an embodiment of the invention;
FIG. 2 is a flowchart illustrating steps of a point cloud-based rapid sectioning method based on an intelligent terminal according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of point cloud file storage in an embodiment of the invention;
FIG. 4 is a schematic structural diagram of asynchronously loading point cloud data according to an embodiment of the present invention;
FIG. 5 is a diagram of the camera view frustum slope model according to an embodiment of the present invention;
FIG. 6 is a flowchart of the steps for obtaining a target point cloud in an embodiment of the present invention;
fig. 7 is a flowchart of steps of a cloud platform-based rapid sectioning method according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention. The step numbers in the following embodiments are provided only for convenience of illustration, the order between the steps is not limited at all, and the execution order of each step in the embodiments can be adapted according to the understanding of those skilled in the art.
In the description of the present invention, it should be understood that the orientation or positional relationship referred to in the description of the orientation, such as the upper, lower, front, rear, left, right, etc., is based on the orientation or positional relationship shown in the drawings, and is only for convenience of description and simplification of description, and does not indicate or imply that the device or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention.
In the description of the present invention, "several" means one or more, "a plurality of" means two or more, and terms such as greater than, less than and exceeding are understood to exclude the stated number, while terms such as above, below and within are understood to include the stated number. If "first" and "second" are used, it is only for the purpose of distinguishing technical features, and they are not to be understood as indicating or implying relative importance, implicitly indicating the number of the indicated technical features, or implicitly indicating the precedence of the indicated technical features.
In the description of the present invention, unless otherwise explicitly limited, terms such as arrangement, installation, connection and the like should be understood in a broad sense, and those skilled in the art can reasonably determine the specific meanings of the above terms in the present invention in combination with the specific contents of the technical solutions.
As shown in fig. 1, this embodiment provides a point cloud based rapid sectioning system, which includes a system front end (such as a computer, a mobile phone, a tablet and the like) and a system back end (such as a server of a cloud platform). A user may upload point cloud data through the system front end and store it at the system back end (i.e. the cloud); the system back end stores the point cloud data and returns an access link. With the access link, the point cloud data is easy to share and view (for example, the access link is sent to other intelligent terminals, and those terminals open and view the point cloud data through a browser); the user can also easily view the point cloud data on different system front ends without storing the point cloud data on each front end, which greatly saves storage space.
Referring to fig. 1, the interaction process of the rapid sectioning system includes an input part, a visualization operation part, a cloud computing part and an output part, and data interaction between the front end and the back end of the system in each part is specifically as follows.
An input section:
When a user needs to upload point cloud data, the point cloud data is first transmitted to the system front end, and the system front end uploads the point cloud data to the system back end via HTTP. After receiving the point cloud data, the system back end unifies the format of the point cloud data, converts it into octree binary storage, and returns an access link to the system front end once storage is finished. The system front end may then access the point cloud data through the access link.
A visualization operation part:
When a user needs to access and view point cloud data, the point cloud file is loaded through the system front end: the system front end downloads the point cloud data from the system back end, renders and displays the point cloud, and the user can view it on the display screen of the system front end. Meanwhile, the user can also set a region of interest and parameters; the set parameters are uploaded to the system back end through the system front end, and the system back end estimates the number of points in the region according to the set parameters and returns the estimated point count.
The cloud computing part:
and after the user sets the parameters, determining type data to be generated, and uploading a start generation instruction to the back end of the system through the front end of the system. And after the rear end of the system is connected to the instruction, sectioning the region of interest to obtain target data, and generating and feeding back output data to link the output data to the front end of the system.
An output section:
When a user needs to download the target file, a download request is sent to the system back end through the system front end; the system back end converts the target data into the corresponding output data type and returns slice, point cloud or Mesh type data to the system front end. The user can view the data on the display screen of the system front end and download it locally.
In this system, the system front end views the point cloud through a browser and only the system back end needs to store the point cloud data, which saves storage space; when the point cloud needs to be shared, only the link needs to be sent rather than the point cloud data itself, which saves network bandwidth. Because most of the data is operated on and processed by the system back end, the hardware requirements of the system front end are greatly reduced. In addition, the system can rapidly output two-dimensional digital images, point cloud sub-modules and three-dimensional mesh models, and can be widely applied to automatic engineering drawing, digital archiving, mapping, three-dimensional reconstruction and reverse engineering, cloud services, three-dimensional scene data and the like.
Referring to fig. 2, the present embodiment further provides a point cloud-based rapid sectioning method, which is executed by an intelligent terminal (such as a computer, a mobile phone, a tablet, and the like), and includes, but is not limited to, the following steps:
S101, uploading the point cloud data to enable the cloud platform to convert the point cloud data into an octree binary format for storage, and obtaining a point cloud file.
The point cloud data includes point coordinate format data (including txt, xyz, pcd, pts, ply and the like) and three-dimensional mesh format data (including gltf, obj, ply, stl and the like). The point coordinate format data is composed of point coordinates (x, y, z) and color information (r, g, b), and may further include intensity information (I), normal information (Nx, Ny, Nz) and other optional information. Such a format file can be obtained by LiDAR scanning and sampling conversion, or exported from three-dimensional software such as MeshLab or CloudCompare.
The three-dimensional mesh format data may be exported by three-dimensional modeling software. After the user uploads such a file, the cloud platform converts it into a point-coordinate-based format and then stores it. The three-dimensional mesh model is called Mesh for short; because a Mesh consists of vertices and triangular faces, uniform point sampling must be performed on the Mesh before the corresponding vertex information is extracted. The format conversion comprises steps A1-A5:
A1, given the Mesh, set the total number of sample points number_of_points.
A2, calculating the area of all triangles in the three-dimensional grid format data, and establishing a cumulative distribution function according to the calculated area of the triangles. The distribution function is:
F_X(x) = P(X ≤ x)
where P(X ≤ x) is the cumulative probability that a sampled point falls in a triangle whose index is no greater than x, so that the number of sample points assigned to each triangle is proportional to its area.
And A3, performing uniform point cloud sampling on each triangle according to the cumulative distribution function to obtain vertex information.
For given triangle vertices A, B and C, a uniformly distributed point P inside the triangle is generated as follows:
P = (1 - sqrt(r1)) * A + sqrt(r1) * (1 - r2) * B + sqrt(r1) * r2 * C
wherein r1 and r2 are random numbers in the range [0, 1].
A4, extracting Mesh vertex information including vertex coordinates P (x, y, z), vertex color information C (r, g, b), and vertex normal information N (x, y, z).
And A5, converting the data into data based on a point coordinate format.
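By way of illustration, a minimal Javascript sketch of the area-weighted sampling described in steps A1-A5 is given below; all function and variable names are illustrative and not part of the disclosed method:
// Uniformly sample numberOfPoints points on a triangle mesh.
// triangles: array of { a: [x, y, z], b: [x, y, z], c: [x, y, z] }
function samplePointsOnMesh(triangles, numberOfPoints) {
  // A2: triangle areas and the cumulative distribution function
  const areas = triangles.map(t => triangleArea(t.a, t.b, t.c));
  const total = areas.reduce((sum, a) => sum + a, 0);
  const cdf = [];
  let acc = 0;
  for (const a of areas) { acc += a / total; cdf.push(acc); }
  const points = [];
  for (let i = 0; i < numberOfPoints; i++) {
    // pick a triangle with probability proportional to its area
    const u = Math.random();
    let idx = cdf.findIndex(c => u <= c);
    if (idx < 0) idx = triangles.length - 1;
    const { a, b, c } = triangles[idx];
    // A3: barycentric sampling with random numbers r1, r2 in [0, 1]
    const r1 = Math.random(), r2 = Math.random();
    const s = Math.sqrt(r1);
    const w0 = 1 - s, w1 = s * (1 - r2), w2 = s * r2;
    points.push([
      w0 * a[0] + w1 * b[0] + w2 * c[0],
      w0 * a[1] + w1 * b[1] + w2 * c[1],
      w0 * a[2] + w1 * b[2] + w2 * c[2],
    ]);
  }
  return points; // A5: the sampled points form the point coordinate format data
}
function triangleArea(a, b, c) {
  // half the magnitude of the cross product of two triangle edges
  const ab = [b[0] - a[0], b[1] - a[1], b[2] - a[2]];
  const ac = [c[0] - a[0], c[1] - a[1], c[2] - a[2]];
  const cx = ab[1] * ac[2] - ab[2] * ac[1];
  const cy = ab[2] * ac[0] - ab[0] * ac[2];
  const cz = ab[0] * ac[1] - ab[1] * ac[0];
  return 0.5 * Math.sqrt(cx * cx + cy * cy + cz * cz);
}
Color and normal interpolation (step A4) would follow the same barycentric weights.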
After the uploaded point cloud data are received, octrees are respectively generated by the point cloud data according to scene or site space positions, and the results are stored in a file system, wherein each octree has a folder, and each node has a file. And dividing the points of the octree into detail levels according to the point distance.
The node file naming method comprises the following steps:
The root node is named r, and each of its eight child nodes is identified by appending a digit 0-7 to the parent's name; the children of those child nodes are named in the same way. Referring to fig. 3, all points of the r10 leaf node are stored in the r10.bin file.
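By way of illustration, a node file name can be derived from the child indices along the path from the root; the helper below is a hypothetical example, not part of the disclosed method:
function nodeFileName(childIndices) {
  // e.g. nodeFileName([1, 0]) -> "r10.bin": child 1 of the root, then child 0 of that child
  return 'r' + childIndices.join('') + '.bin';
}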
The process of converting the point cloud data into the octree binary format for storage is as follows:
1. The octree starts from a single root node r, to which points are added one by one.
2. If a point respects the minimum point spacing SPACING, the internal node retains the point; otherwise it is passed down to a child node.
3. The point spacing is halved at each LEVEL.
4. Leaf nodes initially keep all points passed to them.
5. If the number of points in a leaf node reaches a set threshold, the leaf node is expanded: it becomes an internal node and re-inserts all of its stored points into itself. Points that respect the minimum spacing remain in the former leaf node, and all other points are passed down to its newly created child nodes.
6. Data is periodically flushed and stored to disk, for example every 10 million points processed.
7. If the node has not been touched since the last refresh, its data will be removed from the memory during the next refresh.
8. If a point is to be added to a node that has been deleted from memory, data will first be read from disk back to memory.
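By way of illustration, a simplified Javascript sketch of the spacing-based insertion in steps 1-5 is given below; disk flushing and eviction (steps 6-8) are omitted, and the class name, field names and threshold value are illustrative assumptions:
// A node keeps only points that are at least `spacing` apart; the rest are passed down.
class OctreeNode {
  constructor(box, spacing, level = 0) {
    this.box = box;            // { min: [x, y, z], max: [x, y, z] }
    this.spacing = spacing;    // minimum distance between points kept in this node
    this.level = level;
    this.points = [];
    this.children = new Array(8).fill(null);
    this.isLeaf = true;
  }
  add(point, threshold = 20000) {
    if (this.isLeaf) {
      this.points.push(point);                                    // step 4: leaves keep everything
      if (this.points.length > threshold) this.split(threshold);  // step 5: expand the leaf
      return;
    }
    if (this.accepts(point)) this.points.push(point);             // step 2: spacing respected, keep it
    else this.childFor(point).add(point, threshold);              // otherwise pass it down
  }
  accepts(point) {
    return this.points.every(p => distance(p, point) >= this.spacing);
  }
  split(threshold) {
    this.isLeaf = false;
    const stored = this.points;
    this.points = [];
    for (const p of stored) this.add(p, threshold);               // re-insert: some stay, others go down
  }
  childFor(point) {
    const mid = this.box.min.map((m, i) => (m + this.box.max[i]) / 2);
    const ix = (point[0] >= mid[0] ? 4 : 0) + (point[1] >= mid[1] ? 2 : 0) + (point[2] >= mid[2] ? 1 : 0);
    if (!this.children[ix]) {
      // step 3: the spacing of each level is halved
      this.children[ix] = new OctreeNode(childBox(this.box, mid, ix), this.spacing / 2, this.level + 1);
    }
    return this.children[ix];
  }
}
function distance(a, b) {
  return Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
}
function childBox(box, mid, ix) {
  const min = [], max = [];
  for (let i = 0; i < 3; i++) {
    const hi = (ix >> (2 - i)) & 1;   // bit order matches childFor: x, y, z
    min.push(hi ? mid[i] : box.min[i]);
    max.push(hi ? box.max[i] : mid[i]);
  }
  return { min, max };
}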
And S102, after the viewing instruction is obtained, asynchronously loading a point cloud file from the cloud platform according to the viewing instruction, and visually rendering the point cloud according to the point cloud file.
Step S102 specifically includes steps S1021-S1023:
s1021, asynchronously loading point cloud and executing in parallel.
Referring to fig. 4, Javascript code (except Web Workers) is single threaded, so all tasks are executed in one main thread. Resources requested with the asynchronous mode of XMLHttpRequest are fetched in parallel with the main thread, and their callback functions run when loading completes. Web Workers are Javascript code that runs in parallel with the main thread. In this system, a task queue is set in the Javascript main thread, and one or more callback functions generated by XMLHttpRequest load point cloud files at the same time. After XMLHttpRequest loads a point cloud file from the remote server, the loaded data is parsed in a parallel thread using Web Workers. In fig. 4, 1 represents the main thread running the task queue; 2 represents XMLHttpRequest asynchronously loading node data; 3 represents the data being returned to the main thread after loading is finished; 4 represents Web Workers parsing the data in parallel; and 5 represents the parsing being completed and the result returned to the main thread, ready for rendering.
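By way of illustration, a browser-side sketch of this loading pattern is given below; the node URL and the worker script name are illustrative assumptions:
// Request a node file asynchronously; parsing happens in a Web Worker so the
// main thread stays free for rendering.
function loadNode(url, onReady) {
  const xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);                         // asynchronous request
  xhr.responseType = 'arraybuffer';
  xhr.onload = () => {
    const worker = new Worker('parse-points.js');     // hypothetical parsing script
    worker.onmessage = (e) => onReady(e.data);        // parsed positions/colors back on the main thread
    worker.postMessage(xhr.response, [xhr.response]); // transfer the buffer without copying
  };
  xhr.send();
}
// Usage: several nodes can be requested at once from the task queue, e.g.
// loadNode('/pointclouds/scene/r10.bin', points => addNodeToScene(points));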
S1022, visibility-based dynamic loading. This includes steps C1-C3:
c1, determining the visible region range based on the view cone model of the camera.
All loaded point cloud data should be within the camera view frustum. A view frustum model is obtained from the camera parameters. For each of the six faces of the view frustum, the maximum projection distance of all vertices of the node bounding box AABB along the face normal is computed; if it is less than 0 for any face, the node is outside the view frustum, otherwise the node is visible.
The method for judging the scope of the octree nodes in the view frustum comprises the following steps:
each octree node has a cube bounding box AABB (lx, ly, lz, ux, uy, uz), which is visible if the AABB is within the view frustum, or marked as invisible, as judged from the root node (i.e. the largest bounding box). If the node is visible, traversing its child nodes, and determining whether the cube bounding box AABB of each child node is within the view frustum range, if so, continuing to traverse until the last leaf node or being limited by other constraints (such as the maximum number of levels of detail level LOD, the total point budget, the node size, etc.), otherwise, marking the node as invisible. Nodes marked as invisible will be discarded and their children will not be traversed.
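By way of illustration, a conservative Javascript sketch of this test is given below; it assumes the six frustum planes are given as inward-pointing unit normals with plane constants, so that a point p lies inside a plane when dot(normal, p) + d >= 0:
// Returns false only when all eight AABB corners are outside one of the planes.
function aabbInFrustum(aabb, planes) {
  const corners = aabbCorners(aabb);
  for (const plane of planes) {
    const maxDist = Math.max(...corners.map(c =>
      plane.normal[0] * c[0] + plane.normal[1] * c[1] + plane.normal[2] * c[2] + plane.d));
    if (maxDist < 0) return false;    // the whole box lies outside this plane
  }
  return true;                        // not rejected by any plane: treat the node as visible
}
function aabbCorners({ lx, ly, lz, ux, uy, uz }) {
  const corners = [];
  for (const x of [lx, ux]) for (const y of [ly, uy]) for (const z of [lz, uz]) corners.push([x, y, z]);
  return corners;
}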
C2, dynamically loading child nodes based on the distance of the camera.
To calculate the distance from the camera to a child node, first obtain the minimum enclosing sphere of the child node's cube bounding box, which gives a center point Pc and a radius. With camPos denoting the camera position, the distance from the camera to the child node is:
distance=sqrt((camPos.x-Pc.x)^2+(camPos.y-Pc.y)^2+(camPos.z-Pc.z)^2)
Referring to fig. 5, according to the camera view frustum slope model, the size of the child node's projection on the screen, in pixels, can be obtained from the camera FOV, the rendering window height ScreenHeight and so on.
slope=tan(fov/2)
screenPixelRadius=(ScreenHeight/2)/(slope*distance)*radius
If the pixel size of the child node's projection on the screen is smaller than a given value, the node is discarded; otherwise the child node is loaded. This ensures that when the octree is far from the camera, only the lower-level nodes (near the root node) are loaded, and when it is close to the camera, the higher-level children (near the leaf nodes) are loaded. The camera distance thus determines how much detail of a node is seen, and the area near the camera has a higher level of detail.
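By way of illustration, the refinement decision can be sketched as follows; fov is in radians, sizes are in pixels, and the boundingSphere field and the pixel threshold are illustrative assumptions:
// Load a child node only if its projection on screen is large enough to matter.
function shouldLoadChild(camPos, node, fov, screenHeight, minPixels = 100) {
  const { center, radius } = node.boundingSphere;   // minimum enclosing sphere of the child's AABB
  const distance = Math.hypot(camPos.x - center.x, camPos.y - center.y, camPos.z - center.z);
  const slope = Math.tan(fov / 2);
  const screenPixelRadius = (screenHeight / 2) / (slope * distance) * radius;
  return screenPixelRadius >= minPixels;            // too small on screen: skip this child
}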
The visibility of the nodes in the screen may vary with the position of the camera, the field angle FOV, the near-far viewpoint and the direction.
C3, loading constraints.
One or more octrees can be loaded at the same time; each octree is loaded from the root node down to its child nodes, and so on. Octree nodes of different levels are loaded in an overlapping manner and rendered jointly to increase the level of detail; the total point limit can be increased to meet the requirements of devices with different performance, and selectable point-display intervals can be defined for different devices. Setting the highest level of detail defines the depth to which the octree is loaded. An octree node can be regarded as a cube bounding box, and its child nodes are equivalent to dividing the cube into eight parts, and so on. A higher level of detail indicates that the region has more detail, with finer cubes.
And S1023, rendering the point cloud.
The method comprises the steps of using a WEBGL rendering technology based on a browser to transmit point cloud data information to a Shader through a vertex buffer object VBO, carrying out model space transformation on point cloud coordinate information through a vertex Shader, carrying out pixelation on point cloud colors through a fragment Shader, and processing the whole rendering display process through a rendering pipeline.
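By way of illustration, a minimal vertex/fragment shader pair for this kind of colored point rendering is sketched below; the attribute and uniform names are assumptions, and the fixed point size is a simplification:
// GLSL sources passed to WebGL as strings; position and color come from the vertex buffer object (VBO).
const vertexShaderSource = `
  attribute vec3 position;       // point coordinate P
  attribute vec3 color;          // point color C
  uniform mat4 projectionMatrix;
  uniform mat4 modelViewMatrix;  // model space transformation
  varying vec3 vColor;
  void main() {
    vColor = color;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    gl_PointSize = 2.0;          // fixed size for simplicity; adaptive sizing is also common
  }
`;
const fragmentShaderSource = `
  precision mediump float;
  varying vec3 vColor;
  void main() {
    gl_FragColor = vec4(vColor, 1.0);  // write the point color to the fragment
  }
`;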
S103, uploading the region of interest and the parameters to enable the cloud platform to estimate the number of points in the region of interest and return the estimated point count.
Wherein the step of setting the region of interest and the parameters comprises S1031-S1033:
S1031, the user can adjust the position, angle, near and far viewpoints and other parameters of the camera to obtain the point cloud region within the screen range.
S1032, acquiring a finer target region of interest by using a cuboid.
A new cuboid with its center at the origin and a side length of 1 is defined in space by eight vertex coordinates, as follows:
[
{-0.5,-0.5,-0.5},
{-0.5,-0.5,+0.5},
{-0.5,+0.5,-0.5},
{-0.5,+0.5,+0.5},
{+0.5,-0.5,-0.5},
{+0.5,-0.5,+0.5},
{+0.5,+0.5,-0.5},
{+0.5,+0.5,+0.5}
]
The user scales, rotates and translates the cuboid through interactive means such as mouse or keyboard input. Since a homogeneous transformation matrix can combine scaling, rotation, translation and similar transformations, the final scaling, rotation and translation of the cuboid can be represented by a four-dimensional homogeneous transformation matrix T:
T =
[ r11  r12  r13  tx ]
[ r21  r22  r23  ty ]
[ r31  r32  r33  tz ]
[ 0    0    0    1  ]
where the upper-left 3x3 block combines the rotation and scaling, and (tx, ty, tz) is the translation.
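By way of illustration, T can be composed from a scaling, a rotation and a translation as a column-major 4x4 array (the layout used by WebGL); the sketch below rotates about the Z axis only and uses illustrative parameter names:
// T = Translate * RotateZ * Scale, stored column by column.
function composeT(scale, angleZ, translate) {
  const c = Math.cos(angleZ), s = Math.sin(angleZ);
  const [sx, sy, sz] = scale, [tx, ty, tz] = translate;
  return [
     c * sx, s * sx, 0,  0,   // column 0
    -s * sy, c * sy, 0,  0,   // column 1
     0,      0,      sz, 0,   // column 2
     tx,     ty,     tz, 1,   // column 3
  ];
}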
s1033, storing the cuboid transformation matrix T, and setting parameters such as a projection plane, data quality and the like.
And S104, uploading a starting generation instruction to enable the cloud platform to cut the region of interest to obtain target data.
The server of the cloud platform sections the region of interest set by the user to obtain the target region data. This comprises steps S1041-S1045:
S1041, acquiring the level-of-detail depth of the octree child nodes according to the data quality parameter.
S1042, estimating the number of points in the point cloud nodes according to the level-of-detail depth.
And S1043, obtaining a cuboid bounding box OBB from the cuboid matrix T.
S1044, see fig. 6, collision detection is performed on the octree using the cuboid bounding box OBB.
And S1045, acquiring point cloud data of the target area.
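By way of illustration, after a node survives the broad-phase collision test, each of its points can be kept or discarded with a narrow-phase test in the cuboid's local frame; invT below denotes the inverse of the homogeneous matrix T (column-major), and its computation is omitted:
// A point lies inside the user's cuboid if its coordinates, transformed back into
// the unit-cube frame by the inverse of T, fall within [-0.5, 0.5] on every axis.
function pointInOBB(p, invT) {
  const x = invT[0] * p[0] + invT[4] * p[1] + invT[8] * p[2] + invT[12];
  const y = invT[1] * p[0] + invT[5] * p[1] + invT[9] * p[2] + invT[13];
  const z = invT[2] * p[0] + invT[6] * p[1] + invT[10] * p[2] + invT[14];
  return Math.abs(x) <= 0.5 && Math.abs(y) <= 0.5 && Math.abs(z) <= 0.5;
}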
And S105, uploading the downloading instruction to enable the cloud platform to convert the target data into a corresponding output data type and return the output data type.
Wherein, step S105 includes steps S1051-S1053:
s1051, directly outputting the sub-point cloud data, including point coordinates P (x, y, z), color information C (r, g, b), and the like.
And S1052, outputting the two-dimensional orthographic projection image.
From the user-defined bounding box OBB, the minimum circumscribed axis-aligned bounding box AABB (lx, ly, lz, ux, uy, uz) of the OBB is calculated, and the point cloud data is projected onto the corresponding face of this AABB according to the preset projection plane. For a point P inside the bounding box AABB and a projection plane Plane with normal Normal, the projection of P onto Plane is:
D = dot(P, Normal)
Projected = P - Normal * D
where dot denotes the dot product, D is the distance from the point P to the projection plane, and Projected is the coordinate of the point P after projection. The scale of the projection plane is adjusted by a scaling factor to finally obtain the orthographic projection image.
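By way of illustration, the projection formula can be sketched as follows; the plane is assumed to pass through the origin of the projection coordinate frame, and an offset term would be added otherwise:
// Project point P onto the plane with unit normal Normal.
function projectOntoPlane(P, Normal) {
  const D = P[0] * Normal[0] + P[1] * Normal[1] + P[2] * Normal[2];          // dot(P, Normal)
  return [P[0] - Normal[0] * D, P[1] - Normal[1] * D, P[2] - Normal[2] * D]; // P - Normal * D
}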
S1053, outputting the mesh model.
A three-dimensional Mesh surface is reconstructed from the point cloud data and the Mesh model data is output. The reconstruction can be implemented with the Ball-Pivoting Algorithm (BPA); similarly, methods such as Alpha Shape and Poisson surface reconstruction can also be used.
The idea of the Ball-Pivoting Algorithm (BPA) is to simulate rolling a virtual ball over the point cloud to generate a mesh. It is first assumed that the given point cloud consists of points sampled from the object surface. Based on this assumption, one can imagine rolling a ball over the point cloud surface. The size of the ball depends on the scale of the mesh and is slightly larger than the average point spacing of the point cloud. When the ball is placed on the surface of the point cloud, it first touches three points, which form a triangular face and serve as the seed triangle. From this position, the ball pivots along the edge formed by two of the vertices. Each time the ball touches three points it creates a triangle and adds it to the mesh. The ball continues to pivot until the mesh is fully created.
In the method, the use flow of the user is as follows:
The user workflow mainly has three parts: uploading a model file, selecting the data of a target region, and exporting a data model.
1 uploading three-dimensional files
The model file is uploaded directly from the browser to the server, where the relevant model information can be edited. The uploaded model file may be a file stored in point-coordinate form, such as a point cloud format file (txt, xyz, pcd, pts, ply), or three-dimensional mesh data formed by the vertices, triangular faces, materials and the like of a three-dimensional Mesh data model (gltf, obj, ply, stl). If a three-dimensional Mesh model is uploaded, the system converts it into a point cloud format file; the conversion process does not require user intervention.
2 selecting a target area
In the visual interface of the model, the user uses mouse dragging or keyboard input parameters to zoom, rotate and translate the bounding box OBB in the direction of the target area.
3 deriving data models
After the user selects the target region and sets parameters such as the projection plane, image quality parameters and output data type, the system estimates the number of points in the target region and returns the estimated point count to the user; after the user confirms, the relevant model data is generated.
In summary, compared with the prior art, the method of the present embodiment has at least the following beneficial effects:
(1) Being browser-based, the method supports various terminal devices and operating systems, with good compatibility.
(2) Large data redundancy is avoided, and storage space is saved.
(3) Streaming loading based on the visible-area range gives lightweight loading and rendering and a better user experience.
(4) Based on octree interception, rapid retrieval and extraction are realized, and the retrieval speed is improved.
(5) Multiple data models are output based on the region of interest, forming digital assets.
(6) The point cloud based data processing gives high precision.
(7) Big data management based on the cloud service is easy to share, and three-dimensional digital materials, plan drawings, digital archives and model information can be provided quickly, improving efficiency while providing more digital foundations for smart city construction.
This embodiment also provides an intelligent terminal, includes:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement a point cloud based rapid sectioning method as shown in fig. 2.
The intelligent terminal of the embodiment can execute the point cloud-based rapid sectioning method provided by the embodiment of the method of the invention, can execute any combination of implementation steps of the embodiment of the method, and has corresponding functions and beneficial effects of the method.
As shown in fig. 7, the embodiment further provides a point cloud-based rapid sectioning method, which is performed by a cloud platform, and includes, but is not limited to, the following steps:
s201, acquiring point cloud data uploaded by an intelligent terminal, converting the point cloud data into an octree binary format for storage, and obtaining a point cloud file;
s202, obtaining a viewing instruction uploaded by the intelligent terminal, and sending the point cloud file to the intelligent terminal according to the viewing instruction so that the intelligent terminal can visually render the point cloud according to the point cloud file;
s203, after acquiring the region of interest and parameters uploaded by the intelligent terminal, estimating the number of points in the region of interest, and returning the estimated point count to the intelligent terminal;
s204, acquiring a starting generation instruction uploaded by the intelligent terminal, and sectioning the region of interest to acquire target data;
s205, acquiring a downloading instruction uploaded by the intelligent terminal, converting the target data into a corresponding output data type, and sending the output data type to the intelligent terminal.
This embodiment also provides a cloud platform, comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement a point cloud based rapid sectioning method as shown in fig. 7.
In alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flow charts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed and in which sub-operations described as part of larger operations are performed independently.
Furthermore, although the present invention is described in the context of functional modules, it should be understood that, unless otherwise stated to the contrary, one or more of the described functions and/or features may be integrated in a single physical device and/or software module, or one or more functions and/or features may be implemented in a separate physical device or software module. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary for an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be understood within the ordinary skill of an engineer, given the nature, function, and internal relationship of the modules. Accordingly, those skilled in the art can, using ordinary skill, practice the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative of and not intended to limit the scope of the invention, which is defined by the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the foregoing description of the specification, reference to the description of "one embodiment/example," "another embodiment/example," or "certain embodiments/examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A point cloud-based rapid sectioning method is characterized by comprising the following steps:
uploading point cloud data to enable a cloud platform to convert the point cloud data into an octree binary format for storage, and obtaining a point cloud file;
after a viewing instruction is obtained, asynchronously loading the point cloud file from the cloud platform according to the viewing instruction, and visually rendering the point cloud according to the point cloud file;
uploading the region of interest and parameters to enable a cloud platform to estimate the number of points in the region of interest and return the estimated point count;
uploading a generation starting instruction to enable the cloud platform to dissect the region of interest to obtain target data;
and uploading a downloading instruction to enable the cloud platform to convert the target data into a corresponding output data type and return the output data type.
2. The point cloud-based rapid sectioning method according to claim 1, wherein the point cloud data comprises point coordinate format data and three-dimensional mesh format data, and when the point cloud data is the three-dimensional mesh format data, the method further comprises a format conversion step before storing the point cloud data:
calculating the areas of all triangles in the three-dimensional grid format data, and establishing an accumulative distribution function according to the calculated areas of the triangles;
carrying out point cloud balanced sampling on each triangle according to the cumulative distribution function to obtain vertex information;
converting the three-dimensional grid format data into point coordinate format data according to the vertex information;
the vertex information includes vertex coordinates P, vertex color information C, and vertex normal information N.
3. The point cloud based rapid sectioning method according to claim 1, wherein the asynchronously loading the point cloud file from the cloud platform according to the viewing instruction comprises:
setting a task queue in a Javascript main thread according to a viewing instruction, and simultaneously loading a point cloud file by a plurality of callback functions generated by an XMLHttpRequest;
after the XMLHttpRequest loads the point cloud file from the cloud platform, the data to be loaded is analyzed in the parallel thread by using Web Workers.
4. The point cloud based rapid sectioning method according to claim 1, wherein the visualized rendering of the point cloud according to the point cloud file comprises:
determining a visible region range based on a view frustum model of the camera;
dynamically loading child nodes of the point cloud file according to the visible region range based on the distance of a camera;
loading constraint conditions to adapt to equipment terminals with different memory performances and display performances;
the WEBGL rendering technology based on the browser renders point clouds, so that different operating systems are compatible.
5. The point cloud-based rapid sectioning method according to claim 4, further comprising the step of setting an area of interest:
acquiring camera adjustment parameters input by a user, and acquiring a point cloud area of a screen range according to the camera adjustment parameters;
obtaining an interested target region by adopting a cuboid according to the point cloud region, and obtaining a cuboid transformation matrix T;
saving the cuboid transformation matrix T, and setting projection plane parameters and data quality parameters according to the cuboid transformation matrix T;
the camera adjustment parameters include adjusting at least one of a position, an angle, or a near-far viewpoint of the camera.
6. The point cloud-based rapid sectioning method according to claim 5, wherein the sectioning the region of interest to obtain target data comprises:
acquiring the depth of the detail layer number of the octree subnode according to the data quality parameter;
estimating the point number of the point cloud node according to the depth of the detail layer number;
acquiring a cuboid bounding box OBB according to the cuboid transformation matrix T;
and performing collision detection on the octree by using the cuboid bounding box OBB, and acquiring point cloud data of a target area as target data.
7. The point cloud-based rapid sectioning method according to claim 1, wherein the converting the target data into a corresponding output data type and returning the output data type comprises:
acquiring sub-point cloud data according to the target data, wherein the sub-point cloud data comprises a point coordinate P and color information C;
outputting a two-dimensional orthographic projection image according to the sub-point cloud data;
and reconstructing a three-dimensional grid surface according to the sub-point cloud data, and outputting Mesh model data.
8. An intelligent terminal, comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement a point cloud based rapid sectioning method as claimed in any one of claims 1 to 7.
9. A point cloud-based rapid sectioning method is characterized by comprising the following steps:
acquiring point cloud data uploaded by an intelligent terminal, converting the point cloud data into an octree binary format for storage, and obtaining a point cloud file;
obtaining a viewing instruction uploaded by an intelligent terminal, and sending the point cloud file to the intelligent terminal according to the viewing instruction so that the intelligent terminal can visually render the point cloud according to the point cloud file;
after acquiring the region of interest and parameters uploaded by the intelligent terminal, estimating the number of points in the region of interest, and returning the estimated point count to the intelligent terminal;
acquiring a starting generation instruction uploaded by the intelligent terminal, and sectioning the region of interest to acquire target data;
and acquiring a downloading instruction uploaded by the intelligent terminal, converting the target data into a corresponding output data type, and sending the output data type to the intelligent terminal.
10. A cloud platform, comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the point cloud-based rapid sectioning method of claim 9.
CN202011107623.0A 2020-10-16 2020-10-16 Point cloud based rapid sectioning method, intelligent terminal and cloud platform Withdrawn CN112241998A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011107623.0A CN112241998A (en) 2020-10-16 2020-10-16 Point cloud based rapid sectioning method, intelligent terminal and cloud platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011107623.0A CN112241998A (en) 2020-10-16 2020-10-16 Point cloud based rapid sectioning method, intelligent terminal and cloud platform

Publications (1)

Publication Number Publication Date
CN112241998A true CN112241998A (en) 2021-01-19

Family

ID=74169348

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011107623.0A Withdrawn CN112241998A (en) 2020-10-16 2020-10-16 Point cloud based rapid sectioning method, intelligent terminal and cloud platform

Country Status (1)

Country Link
CN (1) CN112241998A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112749502A (en) * 2021-01-27 2021-05-04 天津博迈科海洋工程有限公司 Regional virtual assembly lightweight method for oil-gas platform module
CN112927292A (en) * 2021-03-19 2021-06-08 南京市测绘勘察研究院股份有限公司 Ultrafast LAS format point cloud coordinate conversion method
CN113240786A (en) * 2021-05-10 2021-08-10 北京奇艺世纪科技有限公司 Video point cloud rendering method and device, electronic equipment and storage medium
CN113240786B (en) * 2021-05-10 2023-06-13 北京奇艺世纪科技有限公司 Video point cloud rendering method and device, electronic equipment and storage medium
CN116029254A (en) * 2023-01-06 2023-04-28 中山大学 Integrated circuit layout automatic wiring method and system based on path optimization
CN116029254B (en) * 2023-01-06 2024-04-12 中山大学 Integrated circuit layout automatic wiring method and system based on path optimization
CN116109470A (en) * 2023-04-13 2023-05-12 深圳市其域创新科技有限公司 Real-time point cloud data rendering method, device, terminal and storage medium
CN116109470B (en) * 2023-04-13 2023-06-20 深圳市其域创新科技有限公司 Real-time point cloud data rendering method, device, terminal and storage medium
CN116188660A (en) * 2023-04-24 2023-05-30 深圳优立全息科技有限公司 Point cloud data processing method and related device based on stream rendering
CN116188660B (en) * 2023-04-24 2023-07-11 深圳优立全息科技有限公司 Point cloud data processing method and related device based on stream rendering

Similar Documents

Publication Publication Date Title
CN112241998A (en) Point cloud based rapid sectioning method, intelligent terminal and cloud platform
Schütz Potree: Rendering large point clouds in web browsers
CN108564527B (en) Panoramic image content completion and restoration method and device based on neural network
US9852544B2 (en) Methods and systems for providing a preloader animation for image viewers
JP6725110B2 (en) Image rendering of laser scan data
Richter et al. Out-of-core real-time visualization of massive 3D point clouds
US20130321413A1 (en) Video generation using convict hulls
US10885705B2 (en) Point cloud rendering on GPU using dynamic point retention
US20130027417A1 (en) Alternate Scene Representations for Optimizing Rendering of Computer Graphics
US20220375152A1 (en) Method for Efficiently Computing and Specifying Level Sets for Use in Computer Simulations, Computer Graphics and Other Purposes
Schmohl et al. Stuttgart city walk: A case study on visualizing textured dsm meshes for the general public using virtual reality
JP2023178274A (en) Method and system for generating polygon meshes approximating surfaces using root-finding and iteration for mesh vertex positions
WO2023056879A1 (en) Model processing method and apparatus, device, and medium
WO2023088059A1 (en) Three-dimensional model visibility data storage method and apparatus, device, and storage medium
US9007374B1 (en) Selection and thematic highlighting using terrain textures
Santana Núñez et al. Visualization of large point cloud in unity
Romphf et al. Resurrect3D: An open and customizable platform for visualizing and analyzing cultural heritage artifacts
CN112215959A (en) Three-dimensional model mapping system using picture cutting
CN112802183A (en) Method and device for reconstructing three-dimensional virtual scene and electronic equipment
Ellul et al. LOD 1 VS. LOD 2–Preliminary investigations into differences in mobile rendering performance
Conde et al. LiDAR Data Processing for Digitization of the Castro of Santa Trega and Integration in Unreal Engine 5
WO2023184139A1 (en) Methods and systems for rendering three-dimensional scenes
CN116563505B (en) Avatar generation method, apparatus, electronic device, and storage medium
US11954802B2 (en) Method and system for generating polygon meshes approximating surfaces using iteration for mesh vertex positions
US20230394767A1 (en) Method and system for generating polygon meshes approximating surfaces using root-finding and iteration for mesh vertex positions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210119