CN117496065A - Three-dimensional real-time quick display method and system for digital factory - Google Patents

Three-dimensional real-time quick display method and system for digital factory

Info

Publication number
CN117496065A
CN117496065A (Application No. CN202311531485.2A)
Authority
CN
China
Prior art keywords
real
model
time
digital
digital factory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311531485.2A
Other languages
Chinese (zh)
Inventor
赵荣丽
谢源
刘强
姚福康
邹尚文
冷杰武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN202311531485.2A priority Critical patent/CN117496065A/en
Publication of CN117496065A publication Critical patent/CN117496065A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/10 Geometric CAD
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/20 Design optimisation, verification or simulation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G06T 15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2111/00 Details relating to CAD techniques
    • G06F 2111/20 Configuration CAD, e.g. designing by assembling or positioning modules selected from libraries of predesigned modules

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Computing Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the technical field of digital factories and provides a three-dimensional real-time quick display method and system for a digital factory. The method comprises the following steps: S1, establishing a real-time display key framework oriented to the digital factory; S2, performing lightweight processing on the digital factory mechanical CAD models according to the real-time display key framework; S3, culling the digital factory mechanical CAD models in real time through the three-dimensional rendering model and generating a visibility set; S4, optimizing rendering calls based on batching and GPU instancing; S5, traversing and submitting the optimized image data and displaying it in real time. The invention improves the three-dimensional real-time quick display effect of the digital factory and has a wide application range.

Description

Three-dimensional real-time quick display method and system for digital factory
Technical Field
The invention relates to the technical field of digital factories, and in particular to a three-dimensional real-time quick display method and system for a digital factory.
Background
China is moving from being a large manufacturing country to being a strong one. In recent years, China has been actively reforming and upgrading its traditional manufacturing industry; the transition from traditional factories to intelligent factories requires a road of digital and intelligent transformation, and factory digitalization is an important part of that road. A Digital Factory (DF) is a new production organization mode that, based on data covering the whole product life cycle, simulates, evaluates and optimizes the entire production process in a computer virtual environment, and further extends to the whole product life cycle. To reflect the real physical factory, the digital factory models need to correspond one-to-one with the physical factory equipment, which leads to a large number of mechanical CAD models in the digital factory, causes the data volume in the three-dimensional scene to surge, and affects the virtual-real linkage and fluent display of the digital factory.
Although some existing optimization methods achieve certain effects on the three-dimensional scenes they target, for the highly complex scenes with distinct mechanical characteristics found in a digital factory, these methods cannot be combined well to improve scene smoothness. Some even produce program errors or poor runtime performance because the characteristics of digital factory scenes are not considered during optimization, and therefore cannot meet the basic requirements of such scenes.
Meanwhile, technical exchange and resource sharing in this field within domestic open-source communities are still at an early stage, so the overall programming technology lags behind. In terms of rendering optimization algorithms, domestic three-dimensional visualization software has not been studied in depth, and good rendering efficiency is often achieved only by relying on high-end hardware, which is unfriendly to lower-end computer equipment.
In addition, cutting-edge technologies for fluent real-time rendering tend to concentrate on the game industry, whereas industrial applications show a degree of lag in developing such optimization technologies.
Disclosure of Invention
The invention provides a three-dimensional real-time quick display method for a digital factory, which aims to solve the problems of poor virtual-real linkage and display effect, poor rendering effect and narrow application range of existing digital factories.
In a first aspect, an embodiment of the present invention provides a three-dimensional real-time quick display method for a digital factory, including the following steps:
S1, establishing a real-time display key framework oriented to the digital factory;
S2, performing lightweight processing on the digital factory mechanical CAD models according to the real-time display key framework;
S3, culling the digital factory mechanical CAD models in real time through the three-dimensional rendering model and generating a visibility set;
S4, optimizing rendering calls based on batching and GPU instancing;
S5, traversing and submitting the optimized image data and displaying it in real time.
Preferably, the step S1 specifically includes the following substeps:
analyzing and summarizing the characteristics of the mechanical CAD models;
establishing the real-time display key framework of the digital factory according to the characteristics of the mechanical CAD models.
Preferably, the step S2 specifically includes the following substeps:
S21, extracting the vertex information from the redundant information of the digital factory mechanical CAD model, comparing the X, Y and Z coordinates of the vertices in turn and sorting them, discarding vertices with identical position information, updating the corresponding index information, and finally welding the vertices to generate a new vertex set; wherein the redundant information further comprises normal sizes, texture coordinates and index values;
S22, processing the specific detail features according to the vertex set;
S23, simplifying the specific detail features based on an edge-collapse model;
S24, generating levels of detail for the edge-collapsed model;
S25, selecting a level of detail based on the GPU.
Preferably, the step S22 specifically includes the following substeps:
S221, randomly taking, from the point set of the region, the minimal point set that can constitute a model feature;
S222, fitting the selected vertices to generate a geometric feature;
S223, selecting an unprocessed vertex and comparing it with the generated geometric feature; if the vertex lies on the geometric feature, adding it to the feature, otherwise only marking it as processed;
S224, if the number of vertices in the fitted geometric feature is larger than a preset threshold, considering that the correct geometric feature has been obtained, otherwise iterating steps S222-S223;
S225, if the correct geometric feature has not been found after a certain number of iterations, classifying the region as another feature type.
Preferably, the step S23 specifically includes the following substeps:
S231, establishing the basic relationships for the edge-collapse model;
S232, calculating the optimal candidate collapse point;
S233, iterating the edge-collapse simplification operation.
Preferably, the step S25 specifically includes the following steps:
S251, establishing buffer objects for the model data to be rendered;
S252, performing geometry shading on the buffer objects to generate a plurality of storage streams;
S253, outputting the plurality of buffer objects to the plurality of storage streams respectively;
S254, judging whether the output buffer objects are empty; if yes, returning to step S251, and if not, the actual rendering model is complete.
Preferably, the step S3 specifically includes the following substeps:
S31, octree-based view frustum culling;
S32, culling based on pixel size;
S33, occlusion culling based on occlusion queries.
Preferably, the step S4 specifically includes the following steps:
S41, batching the static models in the scene, and extracting effective vertex information from mesh models with the same material;
S42, batching the dynamic models with simple rendering effects and small data volumes;
S43, processing the complex and numerous dynamic models in the scene with GPU instancing, and storing the instanced objects.
Preferably, in the step S41, the batching includes static batching and dynamic batching.
In a second aspect, the present invention provides a three-dimensional real-time quick display system for a digital factory, comprising:
a framework establishment module, used for establishing a real-time display key framework oriented to the digital factory;
a processing module, used for performing lightweight processing on the digital factory mechanical CAD models according to the real-time display key framework;
a culling and generation module, used for culling the digital factory mechanical CAD models in real time through the three-dimensional rendering model and generating a visibility set;
an optimization module, used for optimizing rendering calls based on batching and GPU instancing;
and a display module, used for traversing and submitting the optimized image data and displaying it in real time.
Compared with the prior art, the invention has the following beneficial effects. A real-time display key framework for the digital factory is established; lightweight processing is performed on the digital factory mechanical CAD models according to the real-time display key framework; the digital factory mechanical CAD models are culled in real time through the three-dimensional rendering model and a visibility set is generated; rendering calls are optimized based on batching and GPU instancing; and the optimized image data is traversed, submitted and displayed in real time. Thus, the characteristics of mechanical CAD models are analyzed and summarized and, combined with the characteristics of the digital factory, a key technical framework for real-time display of the digital factory is proposed; the detail features of the mesh models are simplified by a simplification algorithm to reduce the data volume, and levels of detail suited to the real-time display requirements of the digital factory are generated automatically by the related algorithms and processed quickly in real time; then, the scene space of the digital factory is partitioned, suitable data structures are established, visibility analysis and classification of the three-dimensional models in the scene are carried out using the partitioned spatial structure, invisible models are culled and visible models are retained; finally, according to the different mesh models in the digital factory, static/dynamic batching and instancing are applied by category, invisible models in the batch queues are removed, and the rendering queue is optimally designed. The three-dimensional real-time quick display effect of the digital factory is thereby improved, and the application range is wide.
Drawings
The present invention will be described in detail with reference to the accompanying drawings. The foregoing and other aspects of the invention will become more apparent and more readily appreciated from the following detailed description taken in conjunction with the accompanying drawings. In the accompanying drawings:
FIG. 1 is a flow chart of a three-dimensional real-time quick display method for a digital factory provided by an embodiment of the invention;
FIG. 2 is a specific flowchart of S2 provided by an embodiment of the present invention;
FIG. 3 is a simplified process schematic of a round hole feature provided by an embodiment of the present invention;
FIG. 4 is a schematic illustration of edge collapse provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of real-time LOD calculation based on a GPU according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a time delay caused by a GPU occlusion query provided by an embodiment of the present invention;
FIG. 7 is a schematic diagram of the memory storage manner of instanced objects according to an embodiment of the present invention;
fig. 8 is a block diagram of a three-dimensional real-time quick display system for a digital factory according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Example 1
Referring to fig. 1-7, an embodiment of the present invention provides a three-dimensional real-time quick display method for a digital factory, which includes the following steps:
S1, establishing a real-time display key framework oriented to the digital factory.
S2, performing lightweight processing on the digital factory mechanical CAD models according to the real-time display key framework.
S3, culling the digital factory mechanical CAD models in real time through the three-dimensional rendering model and generating a visibility set.
S4, optimizing rendering calls based on batching and GPU instancing.
S5, traversing and submitting the optimized image data and displaying it in real time.
Specifically, by analyzing and summarizing the characteristics of mechanical CAD models and combining them with the characteristics of a digital factory, a key technical framework for real-time display of the digital factory is proposed; the detail features of the mesh models are simplified by a simplification algorithm to reduce the data volume, and levels of detail suited to the real-time display requirements of the digital factory are generated automatically by the related algorithms and processed quickly in real time; then, the scene space of the digital factory is partitioned, suitable data structures are established, visibility analysis and classification of the three-dimensional models in the scene are carried out using the partitioned spatial structure, invisible models are culled and visible models are retained; finally, according to the different mesh models in the digital factory, static/dynamic batching and instancing are applied by category, invisible models in the batch queues are removed, and the rendering queue is optimally designed. The three-dimensional real-time quick display effect of the digital factory is thereby improved, and the application range is wide.
In this embodiment, the step S1 specifically includes the following substeps:
analyzing and summarizing the characteristics of the mechanical CAD models;
establishing the real-time display key framework of the digital factory according to the characteristics of the mechanical CAD models.
In this embodiment, the step S2 specifically includes the following substeps:
S21, extracting the vertex information from the redundant information of the digital factory mechanical CAD model, comparing the X, Y and Z coordinates of the vertices in turn and sorting them, discarding vertices with identical position information, updating the corresponding index information, and finally welding the vertices to generate a new vertex set; wherein the redundant information further includes normal sizes, texture coordinates and index values.
The redundant information of a CAD model in the digital factory mainly comprises vertex positions, normal sizes, texture coordinates and index values, where the normal and texture information are bound to the vertex position information and the index values indicate the positions of the vertices. Therefore, to remove redundant data from the model, attention only needs to be paid to the vertex positions and index information.
The invention performs vertex welding by sorting and de-duplication. The main idea is as follows: extract the vertex information of the model to be processed, compare the X, Y and Z coordinates of the vertex positions in turn and sort them, discard vertices with identical position information and update the corresponding index information, process the vertex information with an auxiliary array, add the vertices with distinct positions to the result set, and finally generate a new vertex set.
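As an illustration of this sort-and-deduplicate welding idea, the following C++ sketch welds identical vertex positions and remaps the index buffer; the Vertex struct, the tolerance value and the function name are assumptions made for this example, not code taken from the patent.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Vertex { float x, y, z; };

// Weld vertices that share the same position: sort by X, then Y, then Z,
// drop duplicates, and remap the index buffer to the surviving vertices.
void weldVertices(std::vector<Vertex>& verts, std::vector<std::uint32_t>& indices,
                  float eps = 1e-6f)
{
    auto same = [eps](const Vertex& a, const Vertex& b) {
        return std::fabs(a.x - b.x) < eps && std::fabs(a.y - b.y) < eps &&
               std::fabs(a.z - b.z) < eps;
    };

    // Auxiliary array: original position of each vertex before sorting.
    std::vector<std::uint32_t> order(verts.size());
    for (std::uint32_t i = 0; i < order.size(); ++i) order[i] = i;

    std::sort(order.begin(), order.end(), [&](std::uint32_t a, std::uint32_t b) {
        const Vertex &va = verts[a], &vb = verts[b];
        if (va.x != vb.x) return va.x < vb.x;
        if (va.y != vb.y) return va.y < vb.y;
        return va.z < vb.z;
    });

    std::vector<Vertex> welded;                       // result set of unique vertices
    std::vector<std::uint32_t> remap(verts.size());   // old index -> new index
    for (std::uint32_t k = 0; k < order.size(); ++k) {
        if (welded.empty() || !same(welded.back(), verts[order[k]]))
            welded.push_back(verts[order[k]]);        // keep first occurrence only
        remap[order[k]] = static_cast<std::uint32_t>(welded.size() - 1);
    }

    for (auto& idx : indices) idx = remap[idx];       // update the index information
    verts.swap(welded);                               // the new vertex set
}
```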
S22, processing the specific detail features according to the vertex set.
Specifically, the topological relationships of the model, such as regions, clusters and edges, are first established, with edges used as dividing boundaries between regions. Edges are divided into two types, sharp feature edges and non-sharp feature edges, distinguished by whether the included angle formed by the triangular patches around the edge exceeds a set threshold: if the angle is larger than the threshold, the edge is considered a sharp feature edge and serves as a dividing line for the subsequent region partition; otherwise it is a non-sharp feature edge. Secondly, the mesh is divided into different regions using a region-growing algorithm, whose main steps are: traverse the unmerged clusters in the mesh, select one as the initial cluster, traverse the non-sharp clusters around it, assign the traversed non-sharp clusters and the initial cluster to the same region, and keep iterating until the partition of the whole mesh model is completed. Then the RANSAC algorithm is applied to the segmented regions to achieve feature recognition. Preferably, the angle threshold is set to 25°.
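The sharp-edge test described above can be sketched as follows: compute the angle between the normals of the two triangular patches sharing an edge and flag the edge as sharp when the angle exceeds the 25° threshold. The vector type and helper functions below are assumptions made for this illustration.

```cpp
#include <array>
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
};

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float length(const Vec3& a) { return std::sqrt(dot(a, a)); }

// Unit normal of a triangle given its three vertices.
static Vec3 normal(const std::array<Vec3, 3>& tri) {
    Vec3 n = cross(tri[1] - tri[0], tri[2] - tri[0]);
    float len = length(n);
    return {n.x / len, n.y / len, n.z / len};
}

// An edge is a sharp feature edge if the angle between the normals of the
// two triangular patches sharing it exceeds the threshold (25 degrees here).
bool isSharpFeatureEdge(const std::array<Vec3, 3>& triA,
                        const std::array<Vec3, 3>& triB,
                        float thresholdDeg = 25.0f)
{
    float c = dot(normal(triA), normal(triB));
    c = std::fmax(-1.0f, std::fmin(1.0f, c));          // clamp for acos
    float angleDeg = std::acos(c) * 180.0f / 3.14159265f;
    return angleDeg > thresholdDeg;
}
```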
S23, simplifying the specific detail features based on the edge-collapse model.
The final goal of model simplification is to obtain mesh models with different face counts, which requires an automatic face-reduction method. The invention reduces the triangular patches of the mesh model by repeatedly applying a simple edge-collapse operation, as shown in fig. 4.
In this operation, two vertices A and B (edge AB) are selected and one of them (here A) is "moved", or "collapsed", onto the other (here B); the number of triangular patches after the collapse is reduced by 2 relative to before the collapse. The operation does not generate new vertices, so this edge-collapse algorithm is faster than one based on the Quadric Error Metric (QEM).
Since a digital factory often contains large plant models with large planar surfaces, and the distances between vertices on these planes are large even though the curvature on the planes is small, the collapse cost becomes large because its calculation depends on both the length and the curvature of the edge; yet if the edge length were removed from the collapse-cost calculation entirely, the procedure obviously could not account for the size of the collapse range. The invention therefore scales the collapse cost by a constant, the radius of the bounding sphere of the mesh model, which gives the cost a certain elasticity. In addition, when an edge lies on a boundary, that is, the edge is shared by only one triangular patch, the influence of its collapse on the mesh model cannot be measured by curvature values; instead, its influence on the surrounding boundary edges should be calculated: the stronger the collinearity between the edge and the boundary, the smaller the influence, and vice versa. The invention thus uses the following pair of equations to calculate the collapse cost of an edge, where equation a computes the cost for a non-boundary edge and equation b computes the cost for a boundary edge:
where radius is the radius of the model's bounding sphere, u and v denote vertices, T_u is the set of triangular faces incident to vertex u, T_uv is the set of triangular faces shared by u and v, O_u is the set of edges around vertex u, and collapseEdge is the edge shared by vertex u and vertex v.
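A minimal sketch of an edge-collapse cost in this spirit is given below. It assumes a Melax-style cost (edge length multiplied by a curvature term derived from the normals of T_u and T_uv), with the edge length normalized by the bounding-sphere radius; the patent's exact equations a and b are not reproduced, so the coefficients here are illustrative assumptions only.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
static float dot3(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float dist3(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}

// Collapse cost of moving vertex u onto vertex v at a non-boundary edge.
// Tu:  unit normals of all triangles incident to u.
// Tuv: unit normals of the triangles shared by u and v.
// The edge length is normalized by the bounding-sphere radius so that large
// planar models (e.g. plant floors) are not penalized purely for edge length.
float collapseCost(const Vec3& u, const Vec3& v,
                   const std::vector<Vec3>& Tu, const std::vector<Vec3>& Tuv,
                   float boundingSphereRadius)
{
    float curvature = 0.0f;
    for (const Vec3& f : Tu) {
        float minTerm = 1.0f;
        for (const Vec3& n : Tuv)
            minTerm = std::min(minTerm, (1.0f - dot3(f, n)) * 0.5f);
        curvature = std::max(curvature, minTerm);      // curvature term in [0, 1]
    }
    return (dist3(u, v) / boundingSphereRadius) * curvature;
}
```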
S24, generating levels of detail for the edge-collapsed model.
During real-time display of the digital factory, as the camera moves farther away from some models, the pixel area those models project onto the screen shrinks and many details on them become hard to notice; at that point simplified versions of the models can be supplied to the rendering queue, reducing both the amount of data transmitted from the CPU to the GPU and the number of vertices the GPU must process. The invention simplifies all mesh models imported into the digital factory to different degrees, thereby generating different levels of detail with clear dividing boundaries between them, expressed as pixel counts. To balance performance and display effect, four levels of detail are generated, the pixel count of each level being i^4 with i ∈ [2,5], where i = 2 represents the nearest level of detail and i = 5 the farthest; setting the power of i to 4 helps to widen the boundaries between levels. Besides generating the level boundaries, the invention uses the radius of the model's bounding sphere as the base value in the calculation, and the other coefficients are tied to each level boundary, so a collapse-cost budget is associated with each level, i ∈ [2,5]. The collapse cost for the corresponding level of detail is calculated from the bounding-sphere radius and the level boundary, where radius is the radius of the model's bounding sphere; the costs for the farthest and the nearest levels of detail follow directly from the corresponding boundaries.
S25, selecting a level of detail based on the GPU.
The invention converts the pixel count of a model into a distance between the model and the camera, which reduces the amount of computation and improves runtime efficiency. When computing the selection value, the invention exploits the highly parallel nature of the GPU to accelerate the calculation of the model's level of detail.
The overall idea of the algorithm is as follows: for each simplified model to be processed, upload the pre-computed level boundary distances and the model-view transformation matrix to the geometry shader, multiply the model position by the transformation matrix to obtain the distance between the model and the camera, compare this distance with the set level boundary distances to obtain the level-of-detail selection value, output it using the OpenGL transform feedback query mechanism, and finally draw the different levels with OpenGL commands according to the output parameters.
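A CPU-side sketch of this conversion is shown below: the pixel-count boundaries i^4 (i ∈ [2,5]) are turned into camera-distance boundaries for a model of bounding-sphere radius R, and a level index is selected by comparing the current distance with them. The perspective-projection formula, the class name and the level numbering are assumptions made for this example; the patent performs the comparison in a geometry shader with transform feedback rather than on the CPU.

```cpp
#include <algorithm>
#include <array>
#include <cmath>

// Convert the pixel-count thresholds i^4 (i = 2..5) into camera-distance
// boundaries for a model with bounding-sphere radius R, then pick a level of
// detail by comparing the model's current camera distance with them.  The
// projection formula treats the projected bounding sphere as a circle of
// area pi * r_px^2 on the screen (an assumption for this sketch).
struct LodSelector {
    std::array<float, 4> boundaryDist{};   // distance at which the model covers i^4 pixels

    LodSelector(float radius, float fovYRadians, float screenHeightPx) {
        for (int i = 2; i <= 5; ++i) {
            float areaPx = std::pow(static_cast<float>(i), 4.0f);   // 16, 81, 256, 625
            float rPx = std::sqrt(areaPx / 3.14159265f);            // equivalent pixel radius
            boundaryDist[i - 2] =
                radius * (screenHeightPx * 0.5f) / (std::tan(fovYRadians * 0.5f) * rPx);
        }
    }

    // Larger distance -> coarser level.  Level 0 is the most detailed mesh.
    int select(float cameraDistance) const {
        int level = 0;
        for (float d : boundaryDist)
            if (cameraDistance > d) ++level;           // passed another boundary
        return std::min(level, 3);                     // clamp to the four generated levels
    }
};
```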
In this embodiment, the step S22 specifically includes the following substeps:
S221, randomly taking, from the point set of the region, the minimal point set that can constitute a model feature;
S222, fitting the selected vertices to generate a geometric feature;
S223, selecting an unprocessed vertex and comparing it with the generated geometric feature; if the vertex lies on the geometric feature, adding it to the feature, otherwise only marking it as processed;
S224, if the number of vertices in the fitted geometric feature is larger than a preset threshold, considering that the correct geometric feature has been obtained, otherwise iterating steps S222-S223;
S225, if the correct geometric feature has not been found after a certain number of iterations, classifying the region as another feature type.
In particular, on a mechanical CAD model the detail geometric features are generally formed by curved surfaces, so their geometric data are relatively complex and often contain a large number of vertices and patches. Repeating the iterative process in Step 4 therefore takes longer, but the more iterations, the more accurate the generated geometric features, so a fixed number of iterations is determined; however, in the RANSAC algorithm the number of iterations grows with the number of vertices, which greatly increases the time consumption.
The detail features of the mechanical models studied by the invention mainly include round holes, cones and the like. The Gaussian map of such features is usually a circle, so the Gauss-mapped vertices can be fitted with the RANSAC algorithm and the feature can then be judged from the fitted information. The round-hole features in a mechanical CAD model mainly take the two forms of cylinder and cone, so recognizing round-hole features is essentially recognizing cylinders and cones. The recognition flow of the cylindrical and conical features is as follows:
first, performing Gaussian mapping on normals of all clusters in a region to be identified to obtain a processed point set.
Second, fit the Gaussian-mapped point set using the RANSAC algorithm. If the radius length of the fitted circle is approximately 1 or within the interval (0, 1), then the correct result is considered to be obtained, otherwise failure is considered to be occurred.
Third, perform several RANSAC circle fits on all vertices in the region, marking vertices that have already been fitted so that they do not participate in subsequent fits, ensuring that each fit uses new vertices. Each circle fit yields a center and a radius; the fitted circles with the lowest and highest relative positions are taken as the bottom and top faces respectively. If the radii of the fitted circles are approximately equal, the feature is considered a cylindrical feature; otherwise it is recognized as a conical feature.
Fourth, simplify the typical detail features. Taking the round-hole feature as an example, it is easy to see that the sharp feature edges are concentrated at the top or bottom of the hole, so the vertices on sharp feature edges are set as sharp feature points and the rest as non-sharp feature points. When simplification is required, the non-sharp feature points inside the hole are usually invisible, so they are removed directly to reduce the data in the mesh model, that is, all vertex information in the middle of the hole is removed. For the top and bottom circles of the hole, the invention stitches the hole to prevent voids appearing in the mesh model. The stitching process is shown in fig. 3, where the blue lines are the stitched edges, the green points are non-sharp feature points and the blue points are sharp feature points; a vertex is added at the circle center and connected with the surrounding sharp feature points to generate a plane.
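The cylinder/cone recognition above relies on RANSAC circle fitting of the Gauss-mapped normals. The sketch below shows a generic 2D RANSAC circle fit (assuming the Gauss-mapped points have already been projected into a fitting plane); the point type, tolerance, iteration count and inlier threshold are illustrative assumptions rather than values from the patent.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdlib>
#include <vector>

struct Pt2 { float x, y; };
struct Circle { float cx, cy, r; bool valid; };

// Circle through three points via the circumcenter formula.
static Circle circleFrom3(const Pt2& a, const Pt2& b, const Pt2& c) {
    float d = 2.0f * (a.x*(b.y - c.y) + b.x*(c.y - a.y) + c.x*(a.y - b.y));
    if (std::fabs(d) < 1e-9f) return {0, 0, 0, false};           // collinear points
    float a2 = a.x*a.x + a.y*a.y, b2 = b.x*b.x + b.y*b.y, c2 = c.x*c.x + c.y*c.y;
    float cx = (a2*(b.y - c.y) + b2*(c.y - a.y) + c2*(a.y - b.y)) / d;
    float cy = (a2*(c.x - b.x) + b2*(a.x - c.x) + c2*(b.x - a.x)) / d;
    float r  = std::hypot(a.x - cx, a.y - cy);
    return {cx, cy, r, true};
}

// RANSAC circle fit: repeatedly take the minimal sample (3 points), fit a
// circle, count inliers, and accept once the inlier count passes a threshold.
Circle ransacCircle(const std::vector<Pt2>& pts, int maxIters,
                    float inlierTol, std::size_t minInliers)
{
    Circle best{0, 0, 0, false};
    std::size_t bestInliers = 0;
    for (int it = 0; it < maxIters && pts.size() >= 3; ++it) {
        const Pt2& a = pts[std::rand() % pts.size()];
        const Pt2& b = pts[std::rand() % pts.size()];
        const Pt2& c = pts[std::rand() % pts.size()];
        Circle cand = circleFrom3(a, b, c);
        if (!cand.valid) continue;

        std::size_t inliers = 0;
        for (const Pt2& p : pts)                       // vertices lying on the circle
            if (std::fabs(std::hypot(p.x - cand.cx, p.y - cand.cy) - cand.r) < inlierTol)
                ++inliers;

        if (inliers > bestInliers) { best = cand; bestInliers = inliers; }
        if (bestInliers >= minInliers) break;          // correct feature found
    }
    best.valid = best.valid && bestInliers >= minInliers;
    return best;
}
```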
In this embodiment, the step S23 specifically includes the following substeps:
S231, establishing the basic relationships for the edge-collapse model;
S232, calculating the optimal candidate collapse point;
S233, iterating the edge-collapse simplification operation.
In this embodiment, as shown in fig. 5, the step S25 specifically includes the following steps:
S251, establishing buffer objects for the model data to be rendered;
S252, performing geometry shading on the buffer objects to generate a plurality of storage streams;
S253, outputting the plurality of buffer objects to the plurality of storage streams respectively;
S254, judging whether the output buffer objects are empty; if yes, returning to step S251, and if not, the actual rendering model is complete.
Specifically, some mesh models in the digital factory are too large or too small to have four levels of detail generated for them, so four storage streams cannot always be produced in the geometry shader (one or more storage streams may be empty).
In this embodiment, the step S3 specifically includes the following substeps:
S31, octree-based view frustum culling.
A bounding box established in the previous section is used in place of the model for a conservative test: the bounding box is tested against the six planes defining the view frustum to determine whether it lies outside, inside or intersects each plane. If the bounding box is outside any plane of the frustum, the test ends immediately and the box is outside the frustum; if the bounding box is inside all planes of the frustum or intersects a plane, it is inside the frustum. By projecting the normal of each frustum plane onto the bounding-box coordinate system and testing the signs of the projected X, Y and Z components, the whole process performs a total of 9 calculations and 3 comparisons. To simplify the projection operation, the plane equations of the frustum (mainly the plane normals and offsets) can be transformed into the world coordinate system at the beginning of each frame, so that the AABB bounding box and the frustum planes are in the same coordinate system; the signs of the X, Y and Z components of the frustum plane normals can then be used directly, and only 3 comparisons are needed.
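The sign test described above corresponds to the classic positive-vertex/negative-vertex AABB-frustum check. A minimal C++ sketch is given below, assuming the six frustum planes have already been transformed into world space with inward-pointing normals; the types and function name are assumptions for this example.

```cpp
#include <array>

struct Plane { float nx, ny, nz, d; };                 // world-space plane: n·p + d = 0
struct AABB  { float minX, minY, minZ, maxX, maxY, maxZ; };

enum class CullResult { Outside, Inside, Intersect };

// Conservative AABB / view-frustum test.  For each plane, the "positive
// vertex" of the box (chosen from the signs of the plane normal's X, Y, Z
// components -- the 3 comparisons mentioned above) is tested against the
// plane; if even that vertex is behind one plane, the box is outside.
CullResult frustumCullAABB(const AABB& box, const std::array<Plane, 6>& frustum)
{
    bool intersects = false;
    for (const Plane& pl : frustum) {
        // Positive (p) vertex: farthest corner along the plane normal.
        float px = pl.nx >= 0.0f ? box.maxX : box.minX;
        float py = pl.ny >= 0.0f ? box.maxY : box.minY;
        float pz = pl.nz >= 0.0f ? box.maxZ : box.minZ;
        if (pl.nx * px + pl.ny * py + pl.nz * pz + pl.d < 0.0f)
            return CullResult::Outside;                // completely behind this plane

        // Negative (n) vertex: nearest corner along the plane normal.
        float nx = pl.nx >= 0.0f ? box.minX : box.maxX;
        float ny = pl.ny >= 0.0f ? box.minY : box.maxY;
        float nz = pl.nz >= 0.0f ? box.minZ : box.maxZ;
        if (pl.nx * nx + pl.ny * ny + pl.nz * nz + pl.d < 0.0f)
            intersects = true;                         // straddles this plane
    }
    return intersects ? CullResult::Intersect : CullResult::Inside;
}
```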
S32, culling based on pixel size.
When the pixel area a model projects onto the screen after rendering does not reach a threshold, the model does not need to be submitted to the rendering queue, which reduces the number of Draw Calls; pixel culling can remove the large number of small mechanical part models in a digital factory. The bounding sphere is used to compute the screen area: the shape of a bounding sphere projected onto the screen is always a circle, so only one unknown is needed for the area calculation. The node's bounding sphere is therefore used to conservatively estimate the number of screen pixels it occupies.
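A conservative estimate of the screen pixels occupied by a bounding sphere can be sketched as follows; the perspective approximation used here is a common one and is an assumption for this example, not the patent's exact expression.

```cpp
#include <cmath>

// Conservative estimate of the screen pixels covered by a bounding sphere.
// The sphere always projects to (approximately) a circle, so a single radius
// value is enough for the area calculation.
float projectedPixelArea(float sphereRadius, float distanceToCamera,
                         float fovYRadians, float screenHeightPx)
{
    if (distanceToCamera <= sphereRadius) return 1e30f;   // camera inside: never cull
    float rPx = sphereRadius / (distanceToCamera * std::tan(fovYRadians * 0.5f))
                * (screenHeightPx * 0.5f);                // projected radius in pixels
    return 3.14159265f * rPx * rPx;                       // circle area in pixels
}

// Usage idea: skip the Draw Call for small parts (e.g. bolts) below a threshold.
// if (projectedPixelArea(r, dist, fovY, 1080.0f) < kPixelCullThreshold) return;
```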
S33, occlusion culling based on occlusion queries.
A digital factory scene contains a large number of models that occlude each other, and these models can cause serious overdraw, wasting work in the vertex and pixel processing stages of the rendering pipeline; occlusion culling is therefore required to reject occluded models and reduce the number of draw calls. In current OpenGL there is a delay between issuing a query command and its completion in the rendering pipeline, mainly because the computing capacity of the CPU and the GPU is not fully utilized: at times the CPU waits for the GPU's result, or the GPU waits for data from the CPU. To reduce this mutual waiting, when the CPU has transmitted data for GPU computation it should continue with subsequent calculations instead of waiting for the GPU to finish; and as soon as the GPU finishes a computation, the CPU should immediately transmit the next data to it, reducing GPU idle time. With these measures the original occlusion-query operation becomes more efficient, as shown in fig. 6.
Issuing occlusion-query instructions also has a performance cost. To reduce the number of issued queries, nodes are divided into groups and one occlusion query is issued per group; if the query result shows that the pixel count is still 0, all nodes in the group remain invisible, otherwise the continuity is broken and the program issues queries for all nodes in the group individually. While such continuity is not broken, the number of occlusion-query instructions decreases as the number of nodes in a group increases; when continuity is broken, the number of query instructions is only one more than querying the nodes individually without groups, so the number of occlusion-query instructions can be greatly reduced. Besides using groups, temporal continuity is used to reduce the number of issued instructions: if a node is found visible by an occlusion query at time t, it is assumed to keep that state until time t+n, which reduces the node's average number of queries by n; to balance rendering smoothness and correctness, n is assigned a random value in the range 5-10.
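The grouped occlusion queries with temporal coherence described above can be sketched with standard OpenGL query objects as below. The NodeGroup structure, the GLEW header and the drawBoundingGeometry placeholder are assumptions made for this example; only the overall pattern (one query per group, non-blocking result readback, 5-10 frame reuse of a visible result) follows the description.

```cpp
#include <GL/glew.h>
#include <cstdlib>
#include <vector>

struct NodeGroup {
    std::vector<int> nodeIds;      // octree nodes batched into one query
    GLuint query = 0;
    int framesStillVisible = 0;    // temporal coherence counter (5-10 frames)
};

// Issue one occlusion query per group instead of per node.  Groups that were
// recently visible skip the query for a random 5-10 frames.  The function
// pointer is a placeholder for drawing the group's cheap bounding geometry
// with color/depth writes disabled.
void issueGroupQueries(std::vector<NodeGroup>& groups,
                       void (*drawBoundingGeometry)(const NodeGroup&))
{
    for (NodeGroup& g : groups) {
        if (g.framesStillVisible > 0) {                // reuse the last visible result
            --g.framesStillVisible;
            continue;
        }
        if (g.query == 0) glGenQueries(1, &g.query);
        glBeginQuery(GL_SAMPLES_PASSED, g.query);
        drawBoundingGeometry(g);                       // proxy geometry only
        glEndQuery(GL_SAMPLES_PASSED);
    }
}

// Later in the frame (or next frame), collect results without stalling:
// the CPU keeps working and only reads back queries whose result is ready.
void collectGroupResults(std::vector<NodeGroup>& groups)
{
    for (NodeGroup& g : groups) {
        if (g.query == 0) continue;
        GLuint ready = 0;
        glGetQueryObjectuiv(g.query, GL_QUERY_RESULT_AVAILABLE, &ready);
        if (!ready) continue;                          // avoid CPU/GPU stalls
        GLuint samples = 0;
        glGetQueryObjectuiv(g.query, GL_QUERY_RESULT, &samples);
        if (samples > 0)                               // group visible again:
            g.framesStillVisible = 5 + std::rand() % 6;    // keep state for 5-10 frames
        // samples == 0: every node in the group stays culled; if continuity
        // breaks later, per-node queries would be issued individually (not shown).
    }
}
```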
In this embodiment, the step S4 specifically includes the following steps:
S41, batching the static models in the scene, and extracting effective vertex information from mesh models with the same material;
S42, batching the dynamic models with simple rendering effects and small data volumes;
S43, processing the complex and numerous dynamic models in the scene with GPU instancing, and storing the instanced objects.
GPU instancing acquires dynamic models at different positions, classifies and groups identical mesh models, submits the instance groups to the rendering queue, updates the information of each group in real time, and submits it to the rendering queue.
Specifically, the static models in the scene are first batched: for mesh models with the same material, the effective vertex information, namely index information, vertex positions, normal information, texture information and so on, is extracted. The vertex positions are converted into world-space coordinates and then merged into the corresponding buffers, that is, mesh models with the same material are merged into one larger mesh model. When these mesh models are to be rendered, the program submits only the merged mesh model, which greatly reduces the number of submissions. Secondly, dynamic models with simple rendering effects and small data volumes are batched; the world-space position information of each sub-mesh is stored in the batch object unit, and because the positions of the dynamic models being batched keep changing, the vertex positions in the combined mesh model keep changing too and need to be updated accordingly. Finally, the more complex and numerous dynamic models in the scene are processed with GPU instancing, and the instanced objects use the storage approach shown in fig. 7.
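For the GPU-instancing step, a minimal OpenGL sketch is shown below: per-instance model matrices are stored in a vertex buffer consumed with an attribute divisor of 1, so all copies of the same mesh are drawn with one call. The attribute locations, buffer layout and structure names are assumptions made for this example rather than the storage scheme of fig. 7.

```cpp
#include <GL/glew.h>
#include <vector>

struct InstancedBatch {
    GLuint vao = 0, instanceVbo = 0;
    GLsizei indexCount = 0;
    std::vector<float> modelMatrices;   // 16 floats per instance, column-major
};

void uploadInstances(InstancedBatch& b)
{
    glBindVertexArray(b.vao);
    if (b.instanceVbo == 0) glGenBuffers(1, &b.instanceVbo);
    glBindBuffer(GL_ARRAY_BUFFER, b.instanceVbo);
    glBufferData(GL_ARRAY_BUFFER, b.modelMatrices.size() * sizeof(float),
                 b.modelMatrices.data(), GL_DYNAMIC_DRAW);   // updated when models move

    // A mat4 occupies four consecutive attribute slots (locations 3..6 here).
    for (int i = 0; i < 4; ++i) {
        glEnableVertexAttribArray(3 + i);
        glVertexAttribPointer(3 + i, 4, GL_FLOAT, GL_FALSE, 16 * sizeof(float),
                              reinterpret_cast<void*>(sizeof(float) * 4 * i));
        glVertexAttribDivisor(3 + i, 1);               // advance once per instance
    }
    glBindVertexArray(0);
}

void drawInstances(const InstancedBatch& b)
{
    glBindVertexArray(b.vao);
    glDrawElementsInstanced(GL_TRIANGLES, b.indexCount, GL_UNSIGNED_INT, nullptr,
                            static_cast<GLsizei>(b.modelMatrices.size() / 16));
    glBindVertexArray(0);
}
```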
In this embodiment, in S41, the batching includes static batching and dynamic batching.
Static batching comprises obtaining static models with the same material, merging identical models, and submitting them to the rendering queue.
Dynamic batching comprises obtaining dynamic models with different materials, excluding models that do not satisfy the conditions, merging identical models, updating the model information in real time, and submitting it to the rendering queue.
In this embodiment, redundant information in the mesh model data is removed by techniques such as vertex welding to reduce the storage load and to provide a data basis for subsequent operations; the specific detail features of each mechanical CAD model in the scene are identified by mesh segmentation and feature recognition algorithms; meanwhile, relatively complex mesh models are iteratively simplified, an appropriate number of detail levels is generated from the simplified data and detail features, and the parallel computation of the GPU is used to control the selection of the model's detail level.
Secondly, the model data are computed and merged with the support of software and hardware such as CPU multithreading, which improves the real-time data computation speed and reduces the number of Draw Calls.
Finally, because part of the models are occluded relative to the camera or lie outside the camera's view frustum during real-time display of the digital factory, real-time, dynamic and low-cost visibility analysis and culling is performed on the models in the scene.
In addition, since the data of a mechanical CAD model is discrete in nature, that is, composed of discrete point information or point information forming topological relationships, a regular basic relationship must be established for the data after the model's redundant information has been processed, to support subsequent operations such as simplification. Furthermore, for requirements such as visibility analysis, culling and scene traversal, certain data structures need to be established to accelerate the computation; to meet both time and accuracy requirements, the invention combines three data structures: the octree, the bounding volume hierarchy and the Scene Graph.
Example two
As shown in fig. 8, the present invention provides a three-dimensional real-time quick display system 200 for a digital factory, comprising:
a framework establishment module 201, used for establishing a real-time display key framework oriented to the digital factory;
a processing module 202, used for performing lightweight processing on the digital factory mechanical CAD models according to the real-time display key framework;
a culling and generation module 203, used for culling the digital factory mechanical CAD models in real time through the three-dimensional rendering model and generating a visibility set;
an optimization module 204, used for optimizing rendering calls based on batching and GPU instancing;
and a display module 205, used for traversing and submitting the optimized image data and displaying it in real time.
Specifically, a real-time display key framework oriented to the digital factory is established by the framework establishment module 201; the processing module 202 performs lightweight processing on the digital factory mechanical CAD models according to the real-time display key framework; the culling and generation module 203 culls the digital factory mechanical CAD models in real time through the three-dimensional rendering model and generates a visibility set; the optimization module 204 optimizes rendering calls based on batching and GPU instancing; and the display module 205 traverses, submits and displays the optimized image data in real time.
Thus, the characteristics of mechanical CAD models are analyzed and summarized and, combined with the characteristics of the digital factory, a key technical framework for real-time display of the digital factory is proposed; the detail features of the mesh models are simplified by a simplification algorithm to reduce the data volume, and levels of detail suited to the real-time display requirements of the digital factory are generated automatically by the related algorithms and processed quickly in real time; then, the scene space of the digital factory is partitioned, suitable data structures are established, visibility analysis and classification of the three-dimensional models in the scene are carried out using the partitioned spatial structure, invisible models are culled and visible models are retained; finally, according to the different mesh models in the digital factory, static/dynamic batching and instancing are applied by category, invisible models in the batch queues are removed, and the rendering queue is optimally designed. The three-dimensional real-time quick display effect of the digital factory is thereby improved, and the application range is wide.
The three-dimensional real-time quick display system 200 for a digital factory can implement the steps of the three-dimensional real-time quick display method for a digital factory in the above embodiment and achieve the same technical effects, which are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
While the embodiments of the present invention have been illustrated and described in connection with the drawings as what are presently considered to be the most practical and preferred embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, but on the contrary is intended to cover various equivalent modifications and arrangements included within the spirit and scope of the appended claims.

Claims (10)

1. A three-dimensional real-time quick display method for a digital factory, characterized by comprising the following steps:
S1, establishing a real-time display key framework oriented to the digital factory;
S2, performing lightweight processing on the digital factory mechanical CAD models according to the real-time display key framework;
S3, culling the digital factory mechanical CAD models in real time through the three-dimensional rendering model and generating a visibility set;
S4, optimizing rendering calls based on batching and GPU instancing;
S5, traversing and submitting the optimized image data and displaying it in real time.
2. The three-dimensional real-time quick display method for a digital factory according to claim 1, characterized in that S1 specifically comprises the following substeps:
analyzing and summarizing the characteristics of the mechanical CAD models;
establishing the real-time display key framework of the digital factory according to the characteristics of the mechanical CAD models.
3. The three-dimensional real-time quick display method for a digital factory according to claim 1, characterized in that S2 specifically comprises the following substeps:
S21, extracting the vertex information from the redundant information of the digital factory mechanical CAD model, comparing the X, Y and Z coordinates of the vertices in turn and sorting them, discarding vertices with identical position information, updating the corresponding index information, and finally welding the vertices to generate a new vertex set; wherein the redundant information further comprises normal sizes, texture coordinates and index values;
S22, processing the specific detail features according to the vertex set;
S23, simplifying the specific detail features based on an edge-collapse model;
S24, generating levels of detail for the edge-collapsed model;
S25, selecting a level of detail based on the GPU.
4. The three-dimensional real-time quick display method for a digital factory according to claim 3, characterized in that S22 comprises the following substeps:
S221, randomly taking, from the point set of the region, the minimal point set that can constitute a model feature;
S222, fitting the selected vertices to generate a geometric feature;
S223, selecting an unprocessed vertex and comparing it with the generated geometric feature; if the vertex lies on the geometric feature, adding it to the feature, otherwise only marking it as processed;
S224, if the number of vertices in the fitted geometric feature is larger than a preset threshold, considering that the correct geometric feature has been obtained, otherwise iterating steps S222-S223;
S225, if the correct geometric feature has not been found after a certain number of iterations, classifying the region as another feature type.
5. The three-dimensional real-time quick display method for a digital factory according to claim 3, characterized in that S23 comprises the following substeps:
S231, establishing the basic relationships for the edge-collapse model;
S232, calculating the optimal candidate collapse point;
S233, iterating the edge-collapse simplification operation.
6. The three-dimensional real-time quick display method for a digital factory according to claim 3, characterized in that S25 specifically comprises the following steps:
S251, establishing buffer objects for the model data to be rendered;
S252, performing geometry shading on the buffer objects to generate a plurality of storage streams;
S253, outputting the plurality of buffer objects to the plurality of storage streams respectively;
S254, judging whether the output buffer objects are empty; if yes, returning to step S251, and if not, the actual rendering model is complete.
7. The three-dimensional real-time quick display method for a digital factory according to claim 1, characterized in that S3 comprises the following substeps:
S31, octree-based view frustum culling;
S32, culling based on pixel size;
S33, occlusion culling based on occlusion queries.
8. The three-dimensional real-time quick display method for a digital factory according to claim 1, characterized in that S4 specifically comprises the following steps:
S41, batching the static models in the scene, and extracting effective vertex information from mesh models with the same material;
S42, batching the dynamic models with simple rendering effects and small data volumes;
S43, processing the complex and numerous dynamic models in the scene with GPU instancing, and storing the instanced objects.
9. The three-dimensional real-time quick display method for a digital factory according to claim 8, characterized in that in S41, the batching comprises static batching and dynamic batching.
10. A three-dimensional real-time quick display system for a digital factory, characterized by comprising:
a framework establishment module, used for establishing a real-time display key framework oriented to the digital factory;
a processing module, used for performing lightweight processing on the digital factory mechanical CAD models according to the real-time display key framework;
a culling and generation module, used for culling the digital factory mechanical CAD models in real time through the three-dimensional rendering model and generating a visibility set;
an optimization module, used for optimizing rendering calls based on batching and GPU instancing;
and a display module, used for traversing and submitting the optimized image data and displaying it in real time.
CN202311531485.2A 2023-11-16 2023-11-16 Three-dimensional real-time quick display method and system for digital factory Pending CN117496065A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311531485.2A CN117496065A (en) 2023-11-16 2023-11-16 Three-dimensional real-time quick display method and system for digital factory

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311531485.2A CN117496065A (en) 2023-11-16 2023-11-16 Three-dimensional real-time quick display method and system for digital factory

Publications (1)

Publication Number Publication Date
CN117496065A 2024-02-02

Family

ID=89676298

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311531485.2A Pending CN117496065A (en) 2023-11-16 2023-11-16 Three-dimensional real-time quick display method and system for digital factory

Country Status (1)

Country Link
CN (1) CN117496065A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019088865A1 (en) * 2017-11-01 2019-05-09 Вебгирз А Гэ Method and system for removing hidden surfaces from a three-dimensional scene
CN113808247A (en) * 2021-11-19 2021-12-17 武汉方拓数字科技有限公司 Method and system for rendering and optimizing three-dimensional model of massive three-dimensional scene
WO2023106710A1 (en) * 2021-12-10 2023-06-15 주식회사 이안에스아이티 Method and device for generating time series three-dimensional visualization data

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019088865A1 (en) * 2017-11-01 2019-05-09 Вебгирз А Гэ Method and system for removing hidden surfaces from a three-dimensional scene
CN113808247A (en) * 2021-11-19 2021-12-17 武汉方拓数字科技有限公司 Method and system for rendering and optimizing three-dimensional model of massive three-dimensional scene
WO2023106710A1 (en) * 2021-12-10 2023-06-15 주식회사 이안에스아이티 Method and device for generating time series three-dimensional visualization data

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Li Daping et al., "A WebGL-based Lightweighting Solution for BIM Models", Construction Technology (《施工技术》), vol. 52, no. 17, 30 September 2023 (2023-09-30), pages 22-25 *
Li Mengda et al., "Research on Model Lightweighting Based on Operation and Maintenance Simulation of a Three-dimensional Digital Factory", Electrical Engineering Technology (《电工技术》), no. 4, 30 April 2023 (2023-04-30), pages 75-76 *

Similar Documents

Publication Publication Date Title
US11062501B2 (en) Vertex processing pipeline for building reduced acceleration structures for ray tracing systems
Xu et al. Voxel-based representation of 3D point clouds: Methods, applications, and its potential use in the construction industry
JP5291798B2 (en) Ray tracing system architecture and method
US7844106B2 (en) Method and system for determining poses of objects from range images using adaptive sampling of pose spaces
CN110120097A (en) Airborne cloud Semantic Modeling Method of large scene
CN112257597B (en) Semantic segmentation method for point cloud data
CN113781667B (en) Three-dimensional structure simplified reconstruction method and device, computer equipment and storage medium
US7990380B2 (en) Diffuse photon map decomposition for parallelization of global illumination algorithm
JP2004348702A (en) Image processing method, its apparatus, and its processing system
CN109118588B (en) Automatic color LOD model generation method based on block decomposition
CN112085840A (en) Semantic segmentation method, device, equipment and computer readable storage medium
Chang et al. GPU-friendly multi-view stereo reconstruction using surfel representation and graph cuts
CN115661374B (en) Rapid retrieval method based on space division and model voxelization
CN117280387A (en) Displacement micro-grid for ray and path tracing
CN111783798B (en) Mask generation method for simulated residual point cloud based on significance characteristics
GB2583513A (en) Apparatus, system and method for data generation
CN115221580A (en) Godot-based park digital twin building model construction method
Lee et al. Geometry splitting: an acceleration technique of quadtree-based terrain rendering using GPU
CN106780716A (en) Historical and cultural heritage digital display method
Zhao et al. Completing point clouds using structural constraints for large-scale points absence in 3D building reconstruction
Yang et al. Connectivity-aware Graph: A planar topology for 3D building surface reconstruction
CN117496065A (en) Three-dimensional real-time quick display method and system for digital factory
Sahebdivani et al. Deep learning based classification of color point cloud for 3D reconstruction of interior elements of buildings
JP2005293021A (en) Triangular mesh generation method using maximum opposite angulation, and program
CN113763563A (en) Three-dimensional point cloud geometric grid structure generation method based on plane recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination