Disclosure of Invention
Based on this, it is necessary to provide a STEP format rendering method at the WEB end, so as to solve at least one of the above technical problems.
In order to achieve the above purpose, a STEP format rendering method at the WEB end includes the following steps:
step S1: obtaining a STEP format original model file through a WEB end file transmission channel; carrying out file data analysis processing on the STEP format model file according to a STEP file analysis library to generate original analysis standard data;
step S2: carrying out data preprocessing on the original analysis standard data to generate model structured data; performing three-dimensional grid conversion processing on the model structured data by utilizing a polygonal grid generation algorithm to obtain a standard three-dimensional model grid;
step S3: performing three-dimensional model rendering processing on the three-dimensional model grid by utilizing a progressive model analysis mechanism to generate an updated rendering model; performing rendering optimization processing on the updated rendering model according to a visual field rejection optimization algorithm to generate an optimized rendering model;
step S4: performing file compression processing on the optimized rendering model based on a geometric coding compression algorithm to generate a rendering model data packet; performing simulated unpacking processing on the rendering model data packet by adopting a streaming technology to generate model rendering entity information weight data; comparing the model rendering entity information weight data with a preset rendering precision threshold to generate a STEP format model file;
step S5: transmitting the STEP format model file to a user side through a network node for task packaging processing to generate a real-time rendering task; performing node parallel processing on the real-time rendering task by using a distributed rendering technology to generate a model rendering effect image;
step S6: acquiring user equipment performance data; performing self-adaptive rendering precision adaptation processing on the model rendering effect image according to the user equipment performance data to generate a STEP format real-time rendering model.
According to the invention, the STEP format original model file is acquired through the WEB end file transmission channel and parsed according to the STEP file analysis library, which ensures that the data used in the subsequent processing steps is effectively obtained from the file uploaded by the user and converted into original analysis standard data, laying a solid foundation for subsequent model processing and analysis. Data preprocessing of the original analysis standard data reduces noise, fills missing data, improves the effect of the subsequent processing steps and reduces uncertainty and errors, converting the original data into a more structured form; the three-dimensional grid conversion expresses continuous geometric shapes as a grid structure composed of simple shapes such as triangles or quadrilaterals, improving the quality and degree of structuring of the model data. Rendering the three-dimensional model grid with a progressive model analysis mechanism allows model details to be loaded gradually as required instead of loading the whole complex three-dimensional model at once, which improves rendering efficiency; visual field rejection optimization reduces the rendering workload spent on invisible objects, further improving rendering efficiency and making real-time rendering smoother and faster, while progressive model analysis loads model details at different resolutions on demand and thus lowers the memory requirement. Optimizing the updated rendering model according to the visual field rejection optimization algorithm reduces the rendering of invisible objects and further saves memory, providing more vivid and detailed rendering results and improving the realism and visual quality of the scene. File compression of the optimized rendering model based on the geometric coding compression algorithm markedly reduces the volume of the rendering model data, lowering storage occupation and the bandwidth required for network transmission and improving data transmission efficiency; simulated unpacking of the rendering model data packet with the streaming technology allows the rendering model data to be loaded while it is being transmitted, shortening waiting time, enabling faster loading and rendering start-up and speeding up user interaction response; comparing the model rendering entity information weight data with the preset rendering precision threshold allows the rendering precision of the model to be controlled dynamically, flexibly balancing rendering quality against performance according to the requirements and capability of the display equipment. Transmitting the STEP format model file to the user side through the network node for task packaging processing makes full use of computing resources and increases rendering speed.
This shortens the completion time of rendering tasks and improves user experience and working efficiency. Node parallel processing of the real-time rendering task with a distributed rendering technology can be expanded to multiple rendering nodes, so that large-scale and complex model rendering tasks can be handled; even under memory and computing resource limitations, large models can be rendered efficiently without system breakdown or performance degradation. Acquiring user equipment performance data and performing self-adaptive rendering precision adaptation on the model rendering effect image according to that data saves computing resources, accelerates rendering, supports a wide range of equipment types and configurations, and improves the rendering effect and level of detail. Therefore, the invention carries out progressive hierarchical rendering on the STEP format model and parallel processing through the nodes, so as to improve the rendering precision and load capacity of the WEB terminal.
Preferably, step S4 comprises the steps of:
step S41: carrying out data packet compression processing on the updated rendering model according to a geometric coding compression algorithm to generate a rendering model data packet; performing format conversion processing on the rendering model data packet based on the STEP format to generate a STEP format rendering model data packet;
step S42: performing data packet cutting processing on the STEP format rendering model data packet to generate STEP format rendering model data blocks; performing sequencing mark processing on the STEP format rendering model data blocks to generate a STEP format rendering sequencing link; based on the STEP format rendering sequencing link, carrying out stream receiving processing on the STEP format rendering model data blocks to generate model rendering entity information weight data;
step S43: comparing the model rendering entity information weight data with a preset rendering precision threshold, and generating a STEP format model file when the model rendering entity information weight data is greater than the rendering precision threshold.
The invention carries out data packet compression processing on the updated rendering model through the geometric coding compression algorithm to generate a rendering model data packet and converts the packet into the STEP format to generate the STEP format rendering model data packet, which helps to reduce the packet size and convert it into a universal STEP format convenient for transmission and storage. Cutting the STEP format rendering model data packet into STEP format rendering model data blocks and applying sequencing mark processing to generate the STEP format rendering sequencing link makes it easy to split a large packet into smaller blocks and provides the ordering and linking information for the subsequent stream receiving processing. Stream receiving of the STEP format rendering model data blocks based on the sequencing link generates the model rendering entity information weight data, enabling block-by-block reception and processing, reducing memory use and producing the entity information and weight data related to model rendering. Comparing the model rendering entity information weight data with the preset rendering precision threshold helps to control the rendering precision, ensures that only sufficiently important entity information is included in the final model file, reduces the file size and improves rendering efficiency.
Preferably, the functional formula of the geometric coding compression algorithm in step S41 is as follows:
where K is the encoded compressed data size, m is the number of data packets, S is the original data size before updating, G_i is the difference between the i-th vertex data and the corresponding reconstructed vertex data, e is the maximum error allowed for data transmission loss, P_ij is the vertex position in the original rendering model, R_ij is the vertex position in the reconstructed rendering model, V_ij is the vertex normal in the original rendering model, the corresponding vertex normal being taken in the reconstructed rendering model, and μ is the model coding compression anomaly correction quantity.
The invention constructs the functional formula of the geometric coding compression algorithm, which calculates the loss during data packet transmission from the original data size before updating, the i-th vertex data and the maximum error allowed for data transmission loss. The geometric coding compression algorithm reconstructs and compresses the rendered data packet of the model from the vertex positions in the original rendering model and the vertex positions in the reconstructed rendering model, achieving optimal data packet coding, and dynamically restores the rendering precision after model compression from the vertex normals in the original and reconstructed rendering models, so that the size of the encoded compressed data can be determined accurately. In practical application, the formula samples the data in a region, keeps only a subset of important data points, and then uses these points for reconstruction or interpolation to restore the data of the whole region; the reconstructed model is compared with the original model data, so that the data packet is compressed and encoded as far as possible while the rendering precision is preserved, saving system space. The formula fully considers the number of data packets m, the original data size before updating S, the difference G_i between the i-th vertex data and the corresponding reconstructed vertex data, the maximum error e allowed for data transmission loss, the vertex position P_ij in the original rendering model, the vertex position R_ij in the reconstructed rendering model, the vertex normal V_ij in the original rendering model, the corresponding vertex normal in the reconstructed rendering model, and the model coding compression anomaly correction quantity μ, and forms a functional relation according to the original data size S before updating and the correlations between these parameters:
Through the interaction between the difference between the i-th vertex data and the corresponding reconstructed vertex data and the vertex positions in the original rendering model, the error that the compressed data packet introduces with respect to the original rendering model can be determined, and geometric coding compression of the data packet is performed while the accuracy of the regional data is preserved. Using the maximum error allowed for data transmission loss, data redundancy is reduced without sacrificing accuracy, computation is saved and the calculation converges quickly; the data packet coding compression is regulated through the model coding compression anomaly correction quantity μ, so that the encoded compressed data size K is generated more accurately, improving the accuracy and reliability of the geometric coding compression. Meanwhile, parameters such as the maximum error allowed for data transmission loss and the number of data packets can be adjusted according to the actual situation, so that the formula adapts to different geometric coding compression scenarios, improving the applicability and flexibility of the algorithm.
Preferably, step S5 comprises the steps of:
step S51: transmitting the STEP format model file from the server side to the user side according to a network transmission protocol to generate a transmitted STEP format model file;
step S52: when the user side receives the transmitted STEP format model file, performing task packaging processing on the transmitted STEP format model file to generate a real-time rendering task;
step S53: performing rendering node allocation processing on the STEP format model file task package by using a distributed rendering technology to generate model file task nodes; performing node parallel processing on the model file task nodes to generate model rendering results;
step S54: carrying out image synthesis processing on the model rendering results generated by each rendering node to generate a model rendering effect image.
According to the invention, the STEP format model file is transmitted from the server side to the user side using a network transmission protocol to generate the transmitted STEP format model file, so that the model file is delivered effectively from the server to the user side in preparation for the subsequent rendering task; after receiving the transmitted STEP format model file, the user side performs task packaging processing on it to generate a real-time rendering task, and packaging the related rendering tasks together improves the efficiency of task processing and facilitates the subsequent rendering node allocation; performing rendering node allocation processing on the STEP format model file task package using a distributed rendering technology generates model file task nodes, and node parallel processing of these task nodes produces model rendering results, effectively improving rendering speed and efficiency; image synthesis processing of the model rendering results generated by each rendering node then yields a more comprehensive, high-quality model rendering effect image.
Preferably, step S6 comprises the steps of:
step S61: acquiring user equipment performance data of a user by calling a system API;
step S62: performing equipment performance evaluation processing on the user equipment performance data to generate user equipment performance evaluation indexes; and performing rendering loading resource management on the model rendering effect image based on the user equipment performance evaluation index to generate a STEP format real-time rendering model.
The invention acquires the equipment performance data of the user side by calling the system API, processes and analyzes this data, and generates user equipment performance evaluation indexes that reveal the processing capability and rendering performance of the user equipment. These evaluation indexes can be used to determine whether the device has sufficient performance to carry out real-time rendering tasks. Rendering loading resource management of the model rendering effect image optimizes and manages the resources needed during rendering, so that real-time rendering runs smoothly on the user equipment, the performance of the user equipment is exploited to the greatest extent, and a smooth, high-quality rendering experience is provided.
The invention has the advantages that, by obtaining and parsing the original model file, the STEP format model is converted into original analysis standard data, providing an operable data structure for the subsequent processing steps; through data preprocessing and three-dimensional grid conversion, the original analysis standard data is converted into operable model structured data, giving the subsequent rendering and optimization steps a more convenient data format so that the model can be processed and rendered more efficiently; through compression processing and the streaming technology, the optimized rendering model is converted into a more compact rendering model data packet, and by comparing the model rendering entity information weight data with the preset rendering precision threshold, an adaptable STEP format model file is generated to meet different requirements and bandwidth limitations; through network transmission and distributed node parallel processing, the real-time rendering task is completed quickly and the model rendering effect image is generated; and the rendering precision is adapted adaptively according to the performance data of the user equipment, so that a better balance between image quality and rendering speed is obtained and flexible processing and adaptation are performed according to the performance characteristics of the user equipment. Therefore, the invention carries out progressive hierarchical rendering on the STEP format model and carries out parallel processing through the nodes so as to improve the rendering precision and the load capacity of the WEB terminal.
Detailed Description
The following is a clear and complete description of the technical method of the present patent in conjunction with the accompanying drawings, and it is evident that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, are intended to fall within the scope of the present invention.
Furthermore, the drawings are merely schematic illustrations of the present invention and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. The functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
It will be understood that, although the terms "first," "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
In order to achieve the above objective, please refer to fig. 1 to 4; a STEP format rendering method at a WEB end includes the following steps:
step S1: obtaining a STEP format original model file through a WEB end file transmission channel; carrying out file data analysis processing on the STEP format model file according to a STEP file analysis library to generate original analysis standard data;
step S2: carrying out data preprocessing on the original analysis standard data to generate model structured data; performing three-dimensional grid conversion processing on the model structured data by utilizing a polygonal grid generation algorithm to obtain a standard three-dimensional model grid;
step S3: performing three-dimensional model rendering processing on the three-dimensional model grid by utilizing a progressive model analysis mechanism to generate an updated rendering model; performing rendering optimization processing on the updated rendering model according to a visual field rejection optimization algorithm to generate an optimized rendering model;
step S4: performing file compression processing on the optimized rendering model based on a geometric coding compression algorithm to generate a rendering model data packet; performing simulated unpacking processing on the rendering model data packet by adopting a streaming technology to generate model rendering entity information weight data; comparing the model rendering entity information weight data with a preset rendering precision threshold to generate a STEP format model file;
step S5: transmitting the STEP format model file to a user side through a network node for task packaging processing to generate a real-time rendering task; performing node parallel processing on the real-time rendering task by using a distributed rendering technology to generate a model rendering effect image;
step S6: acquiring user equipment performance data; performing self-adaptive rendering precision adaptation processing on the model rendering effect image according to the user equipment performance data to generate a STEP format real-time rendering model.
According to the invention, the STEP format original model file is acquired through the WEB end file transmission channel and parsed according to the STEP file analysis library, which ensures that the data used in the subsequent processing steps is effectively obtained from the file uploaded by the user and converted into original analysis standard data, laying a solid foundation for subsequent model processing and analysis. Data preprocessing of the original analysis standard data reduces noise, fills missing data, improves the effect of the subsequent processing steps and reduces uncertainty and errors, converting the original data into a more structured form; the three-dimensional grid conversion expresses continuous geometric shapes as a grid structure composed of simple shapes such as triangles or quadrilaterals, improving the quality and degree of structuring of the model data. Rendering the three-dimensional model grid with a progressive model analysis mechanism allows model details to be loaded gradually as required instead of loading the whole complex three-dimensional model at once, which improves rendering efficiency; visual field rejection optimization reduces the rendering workload spent on invisible objects, further improving rendering efficiency and making real-time rendering smoother and faster, while progressive model analysis loads model details at different resolutions on demand and thus lowers the memory requirement. Optimizing the updated rendering model according to the visual field rejection optimization algorithm reduces the rendering of invisible objects and further saves memory, providing more vivid and detailed rendering results and improving the realism and visual quality of the scene. File compression of the optimized rendering model based on the geometric coding compression algorithm markedly reduces the volume of the rendering model data, lowering storage occupation and the bandwidth required for network transmission and improving data transmission efficiency; simulated unpacking of the rendering model data packet with the streaming technology allows the rendering model data to be loaded while it is being transmitted, shortening waiting time, enabling faster loading and rendering start-up and speeding up user interaction response; comparing the model rendering entity information weight data with the preset rendering precision threshold allows the rendering precision of the model to be controlled dynamically, flexibly balancing rendering quality against performance according to the requirements and capability of the display equipment. Transmitting the STEP format model file to the user side through the network node for task packaging processing makes full use of computing resources and increases rendering speed.
This shortens the completion time of rendering tasks and improves user experience and working efficiency. Node parallel processing of the real-time rendering task with a distributed rendering technology can be expanded to multiple rendering nodes, so that large-scale and complex model rendering tasks can be handled; even under memory and computing resource limitations, large models can be rendered efficiently without system breakdown or performance degradation. Acquiring user equipment performance data and performing self-adaptive rendering precision adaptation on the model rendering effect image according to that data saves computing resources, accelerates rendering, supports a wide range of equipment types and configurations, and improves the rendering effect and level of detail. Therefore, the invention carries out progressive hierarchical rendering on the STEP format model and parallel processing through the nodes, so as to improve the rendering precision and load capacity of the WEB terminal.
In the embodiment of the present invention, referring to fig. 1, which shows a step flow diagram of a STEP format rendering method at a WEB end of the present invention, the STEP format rendering method at the WEB end in this example includes the following steps:
step S1: obtaining a STEP format original model file through a WEB end file transmission channel; carrying out file data analysis processing on the STEP format model file according to a STEP file analysis library to generate original analysis standard data;
In the embodiment of the invention, a web page or application program with an upload function is created so that the user can upload a model file in STEP format; the user is allowed to select the file and upload it to the server through an HTML form or an API interface. An analysis library or tool suitable for processing the STEP format, such as an open-source STEP file analysis library or a professional CAD software library, is selected, the operating environment and dependencies required by the library (for example a Python environment or other related software) are installed, and the library is imported into the server-side code. In the back-end server code, the selected analysis library is used to read and parse the STEP file uploaded by the user; the library parses each data item of the STEP file into an operable data structure, such as geometric data, metadata and attributes, so that the STEP format original model file is obtained. Using the functions or API provided by the analysis library, the required geometric information, metadata and attribute information are extracted from the returned result and the original analysis standard data is generated.
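By way of illustration only, a minimal Python sketch of this step is given below. It assumes a Flask upload endpoint on the server and that FreeCAD's Part module is importable; the upload directory, route and form field names are hypothetical choices made for the example.

```python
# Sketch: receive a STEP upload on the WEB end and parse it with the
# FreeCAD Python API. Paths, route and field names are illustrative.
import os
from flask import Flask, request, jsonify
import Part  # FreeCAD geometry module

app = Flask(__name__)
UPLOAD_DIR = "/tmp/uploads"

@app.route("/upload-step", methods=["POST"])
def upload_step():
    f = request.files["model"]                 # STEP file selected by the user
    os.makedirs(UPLOAD_DIR, exist_ok=True)
    path = os.path.join(UPLOAD_DIR, f.filename)
    f.save(path)

    shape = Part.read(path)                    # parse the STEP file into a B-rep shape
    # "Original analysis standard data": basic geometry counts and metadata.
    parsed = {
        "solids": len(shape.Solids),
        "faces": len(shape.Faces),
        "edges": len(shape.Edges),
        "bbox": [shape.BoundBox.XLength, shape.BoundBox.YLength, shape.BoundBox.ZLength],
    }
    return jsonify(parsed)
```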
Step S2: carrying out data preprocessing on the original analysis standard data to generate model structured data; performing three-dimensional grid conversion processing on the model structured data by utilizing a polygonal grid generation algorithm to obtain a standard three-dimensional model grid;
In the embodiment of the invention, the original analysis standard data is preprocessed to generate the model structured data, including removing unnecessary data, merging repeated entities, repairing geometric errors or performing other processing; the model structured data is then converted into a standard three-dimensional model grid by selecting a suitable polygonal grid generation algorithm, for example a Triangulation algorithm, a Boundary Representation algorithm or a Voxelization algorithm, so that the standard three-dimensional model grid is obtained.
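As a sketch of the grid conversion, assuming the B-rep shape parsed in the previous step, FreeCAD's Shape.tessellate() can be used; the tolerance value and the dictionary layout of the structured output are assumptions made for the example.

```python
# Sketch: convert the parsed B-rep shape into a triangle mesh.
# The 0.1 tolerance and the output dictionary layout are assumptions.
import Part

def shape_to_mesh(shape, tolerance=0.1):
    vertices, triangles = shape.tessellate(tolerance)   # (points, index triples)
    return {
        "positions": [(v.x, v.y, v.z) for v in vertices],
        "indices": [tuple(t) for t in triangles],
    }

# mesh = shape_to_mesh(Part.read("model.step"))
```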
Step S3: performing three-dimensional model rendering processing on the three-dimensional model grids by utilizing a progressive model analysis mechanism to generate an updated rendering model; performing rendering optimization treatment on the updated rendering model according to the visual field rejection optimization algorithm to generate an optimized rendering model;
In the embodiment of the invention, the three-dimensional model grid is initially subdivided by selecting a proper initial subdivision level, the grid of the current subdivision level is iteratively refined by a subdivision algorithm to generate a finer grid, and whether to stop the refinement is decided according to a preset termination condition; the termination condition may be based on rendering effect, resource limitations, user demand or the like, and the result is an updated rendering model. Rendering optimization processing is then performed on the updated rendering model according to the visual field rejection optimization algorithm. The visual field processing includes: determining the viewpoint position of the observer and the camera parameters, including the viewing angle, near clipping plane and far clipping plane; determining, by computing the intersection between the view frustum and the model, which parts of the model intersect or are contained in the frustum; and removing the invisible parts of the model according to the result of this visibility calculation, so that only the visible parts are retained for rendering. The rendering optimization processing includes: compressing the textures of the model to reduce texture storage space and transmission bandwidth; improving rendering speed and quality through special graphics hardware functions and acceleration techniques such as graphics accelerators and shaders; and grouping the objects to be rendered to reduce the number of rendering calls, thereby generating the optimized rendering model.
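A schematic Python loop for the progressive part of this step is shown below; refine_once() stands in for any concrete subdivision routine, and the triangle budget used as the termination condition is an assumed resource limit, not a value given by the invention.

```python
# Sketch: iterative refinement with an explicit termination condition.
# refine_once() is a placeholder for a concrete subdivision algorithm.
def progressive_refine(base_mesh, refine_once, max_triangles=200_000, max_rounds=6):
    levels = [base_mesh]                     # level 0: coarse base mesh
    for _ in range(max_rounds):
        candidate = refine_once(levels[-1])
        if len(candidate["indices"]) > max_triangles:
            break                            # stop before exceeding the resource budget
        levels.append(candidate)
    return levels                            # coarse-to-fine levels of the updated rendering model
```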
Step S4: performing file compression processing on the optimized rendering model based on a geometric coding compression algorithm to generate a rendering model data packet; simulating unpacking processing is carried out on the rendering model data packet by adopting a streaming technology, and model rendering entity information weight data is generated; comparing the model rendering entity information weight data with a preset rendering precision threshold value to generate a STEP format model file;
In the embodiment of the invention, the geometric data of the optimized rendering model is encoded and the encoded data is compressed with a suitable compression algorithm to generate the rendering model data packet containing the encoded and compressed model data. The streaming technology allows a large data packet to be divided into smaller data blocks during network transmission, and the streaming of the rendering model data packet is implemented as follows: the rendering model data packet is divided into smaller data blocks, the blocks are transmitted and received one by one using streaming techniques such as segmented transmission and frame transmission, and the receiving end performs simulated unpacking processing, recombining the received blocks into a complete rendering model data packet. The model rendering entity information weight data, which refers to data related to the entities of the rendering model and their attributes, is processed as follows: the entity information and related attributes of the rendering model are extracted with corresponding algorithms and data structures, weight information such as texture weight and color weight is calculated for each entity, the model rendering entity information weight data is matched and associated with the optimized rendering model data, and the weight data is compared with the preset rendering precision threshold to generate the STEP format model file.
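The source does not fix a concrete encoding, so the sketch below uses simple vertex quantization plus zlib as a stand-in for the geometric coding compression, splits the packet into blocks for streaming, and filters entities against a precision threshold; the quantization step, block size and threshold are illustrative values.

```python
# Sketch: quantize vertex positions (a simple geometric coding), compress,
# split into streaming blocks and filter entities by weight threshold.
import struct
import zlib

def compress_mesh(positions, step=0.01):
    quantized = [round(c / step) for xyz in positions for c in xyz]
    raw = struct.pack(f"<{len(quantized)}i", *quantized)
    return zlib.compress(raw)                       # rendering model data packet

def split_into_blocks(packet, block_size=64 * 1024):
    return [packet[i:i + block_size] for i in range(0, len(packet), block_size)]

def keep_entity(entity_weight, precision_threshold=0.5):
    # Only entities whose weight exceeds the threshold enter the output model file.
    return entity_weight > precision_threshold
```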
Step S5: transmitting the STEP format model file to a user side through a network node to carry out task packaging processing, and generating a real-time rendering task; performing node parallel processing on the real-time rendering task by using a distributed rendering technology to generate a model rendering effect image;
In the embodiment of the invention, the file is transmitted using a file transfer protocol (such as FTP or HTTP) or a network sharing mode (such as network file sharing or cloud storage), and the generated STEP format model file is delivered to the user side through a network node. Task packaging processing is carried out on the user side, including parsing the received STEP file, extracting the model data and rendering parameters, and generating a real-time rendering task, after which the real-time rendering task is processed in parallel across nodes using a distributed rendering technology. The distributed rendering technology exploits the parallel computing capability of multiple computers or servers to accelerate the rendering task; a distributed task scheduling framework (such as Apache Hadoop or Apache Spark) or custom task scheduling logic is used to perform node parallel processing of the real-time rendering task. After each node receives its subtask, it performs the actual rendering computation with a rendering engine or rendering software, and data exchange and synchronization between nodes can use message passing or shared storage. The final rendering result is transmitted to the user side through the network node to generate the model rendering effect image.
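As a local stand-in for the distributed rendering nodes, the sketch below splits the frame into tiles, renders them in parallel worker processes and composites the result; render_tile() is a hypothetical per-node routine, and a real deployment would replace the process pool with a cluster scheduler such as Spark.

```python
# Sketch: tile-parallel rendering and image composition; the process pool
# stands in for distributed rendering nodes, render_tile() is hypothetical.
from multiprocessing import Pool
import numpy as np

def render_tile(task):
    x0, y0, w, h, scene = task
    # ... actual per-node rendering of the sub-region would happen here ...
    return x0, y0, np.zeros((h, w, 3), dtype=np.uint8)

def render_frame(scene, width=1920, height=1080, tile=256):
    tasks = [(x, y, min(tile, width - x), min(tile, height - y), scene)
             for y in range(0, height, tile) for x in range(0, width, tile)]
    frame = np.zeros((height, width, 3), dtype=np.uint8)
    with Pool() as pool:
        for x0, y0, pixels in pool.map(render_tile, tasks):
            frame[y0:y0 + pixels.shape[0], x0:x0 + pixels.shape[1]] = pixels
    return frame                                  # composited model rendering effect image
```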
Step S6: acquiring performance data of user equipment; and performing self-adaptive rendering precision adaptation processing on the model rendering effect image according to the user equipment performance data to generate a STEP format real-time rendering model.
In the embodiment of the invention, in the user-side application program, a suitable API or library is used to acquire the performance data of the user equipment, which may include the type and speed of the processor (CPU), the model and performance indexes of the graphics processor (GPU), the operating system and the memory. The acquired user equipment performance data is analyzed and evaluated, and a rendering precision level suited to the user equipment is determined from the device performance data: lower-performance equipment may use a lower rendering precision and higher-performance equipment a higher one. The model rendering effect image is adapted according to the chosen rendering precision level, adjusting parameters such as illumination effects, material quality and level of detail; based on the device performance data and the adaptive rendering precision adaptation, the model is re-rendered with a rendering engine or rendering software, and the generated real-time rendering model is stored in the STEP format.
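A minimal sketch of the adaptive precision decision is shown below; the scoring weights and tier boundaries are assumptions and would be calibrated against real benchmark data in practice.

```python
# Sketch: map collected device performance data to a rendering precision level.
# The weights and tier thresholds are illustrative assumptions.
def pick_precision(device):
    score = (device.get("cpu_cores", 2) * 1.0
             + device.get("memory_gb", 4) * 0.5
             + (2.0 if device.get("dedicated_gpu") else 0.0))
    if score >= 12:
        return {"lod": "high", "texture_size": 2048, "shadows": True}
    if score >= 6:
        return {"lod": "medium", "texture_size": 1024, "shadows": True}
    return {"lod": "low", "texture_size": 512, "shadows": False}

# pick_precision({"cpu_cores": 8, "memory_gb": 16, "dedicated_gpu": True}) -> "high" tier
```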
Preferably, step S1 comprises the steps of:
step S11: receiving an STEP format original model file uploaded by a user through an uploading channel in a WEB terminal application program;
step S12: carrying out file initialization on the STEP format original model file based on the FreeCAD analysis library to obtain an original model initialization file;
step S13: carrying out file analysis processing on the original model initialization file by utilizing an analysis function built in the analysis library to generate original model analysis data;
step S14: performing anomaly detection processing on the original model analysis data, and performing data restoration processing on the original model analysis data when the original model analysis data detects that the data is missing, so as to generate original model restoration data; when the original model analysis data does not detect the abnormality, STEP standard processing is carried out on the original model analysis data to generate original analysis standard data.
According to the invention, the STEP format original model file uploaded by the user is received through the upload channel in the WEB side application program and initialized based on the FreeCAD analysis library, which ensures that the model file is loaded and handled correctly and provides the necessary basis for the subsequent analysis steps; parsing the original model initialization file with the analysis function built into the analysis library yields the geometric information, topological relations and other related data of the model while preserving the file format; and performing STEP standard processing on the original model analysis data when no abnormality is detected ensures that the model data conforms to the STEP standard specification.
In the embodiment of the invention, a file upload channel is provided in the WEB end application program, allowing the user to select and upload a STEP format original model file; a suitable back-end technology and programming language (such as Python or Node.js) is used to receive the uploaded file and store it at a designated location on the server. The FreeCAD analysis library is used to read and process the STEP format model file: FreeCAD is open-source computer-aided design (CAD) software that provides functions for handling the STEP format. The initialization function of the FreeCAD analysis library is used to initialize the uploaded STEP format original model file and create the initialization file of the model; the analysis function in the FreeCAD analysis library is then called with the original model initialization file as input, so that the content of the file is turned into a data structure that can be operated on and processed, and the original model analysis data, containing geometric information, entity attributes and so on, is obtained from the return result of the analysis function. Anomaly detection is performed on this analysis data to find missing data, faults or errors; for any detected data-missing condition, a corresponding data restoration algorithm or method is applied, deducing, estimating or completing the data from the existing information so as to restore the integrity of the original model, and after restoration the repaired original model data, namely the original model restoration data, is obtained. If no abnormality is detected, the original model analysis data conforms to the STEP standard and can be used directly to generate the original analysis standard data.
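The anomaly detection and repair of step S14 can be sketched as follows; the required field names and the fallback values are assumptions made for the example.

```python
# Sketch: detect missing fields in the parsed model data and repair them.
# Field names and fallback values are illustrative assumptions.
REQUIRED_FIELDS = ("positions", "indices", "metadata")

def detect_and_repair(parsed):
    missing = [k for k in REQUIRED_FIELDS if not parsed.get(k)]
    if not missing:
        return parsed                                # treated as STEP-standard data
    repaired = dict(parsed)
    if "metadata" in missing:
        repaired["metadata"] = {"source": "unknown", "units": "mm"}
    if "indices" in missing and repaired.get("positions"):
        # Trivial completion: index the vertices consecutively as triangles.
        n = len(repaired["positions"]) // 3 * 3
        repaired["indices"] = [(i, i + 1, i + 2) for i in range(0, n, 3)]
    return repaired                                  # original model restoration data
```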
Preferably, step S2 comprises the steps of:
step S21: carrying out data denoising processing on the original analysis standard data to generate original analysis denoising data; performing outlier detection processing on the original analysis denoising data by using an absolute deviation median method to generate original analysis outlier data; performing outlier substitution processing on the original analysis outlier data through mean calculation to generate standard analysis data;
step S22: presetting a time domain signal according to modeling requirements; performing detrending processing on the preset time domain signal to obtain a standard time domain signal; performing signal transformation processing on the standard time domain signal by using a fast Fourier transform algorithm to generate a model frequency domain signal; carrying out frequency domain feature extraction processing on the model frequency domain signal according to the phase spectrum to generate frequency domain feature data;
step S23: carrying out data frame integration processing on the frequency domain characteristic data and the standard analysis data to generate model integration data; performing data set balance processing on the model integrated data based on an oversampling method to generate model structured data;
step S24: performing three-dimensional grid conversion processing on the model structured data based on a Delaunay triangulation algorithm to generate a three-dimensional model grid; and carrying out smoothing treatment on the three-dimensional model grid by using a grid simplification algorithm to generate a standard three-dimensional model grid.
According to the invention, data denoising of the original analysis standard data eliminates noise and abnormal data and improves the quality and accuracy of the data; outlier detection on the original analysis denoising data with the median absolute deviation method improves the quality, accuracy and usability of the model data, and replacing the detected outliers through mean calculation improves the reliability of the data so that subsequent analysis and modeling can proceed on a better basis. Presetting a time domain signal according to the modeling requirements and detrending it reveals the frequency domain characteristics and important signal components of the model; transforming the signal with the fast Fourier transform algorithm improves the visibility of the data so that its features can be extracted more easily, and extracting frequency domain features from the model frequency domain signal according to the phase spectrum makes the data clearer and easier to observe. Integrating the frequency domain feature data with the standard analysis data into one data frame improves the distribution of the data and the balance of the samples, and balancing the model integration data with an oversampling method improves the accuracy of the modeling and analysis results and resolves the data imbalance problem. Converting the model structured data into a three-dimensional grid based on the Delaunay triangulation algorithm reduces the complexity and storage requirements of the model, and smoothing the three-dimensional model grid with a grid simplification algorithm improves the visualization effect and performance of the model.
As an example of the present invention, referring to fig. 2, the step S2 in this example includes:
step S21: carrying out data denoising processing on the original analysis standard data to generate original analysis denoising data; performing outlier detection processing on the original analysis denoising data by using an absolute deviation median method to generate original analysis outlier data; performing outlier substitution processing on the original analysis outlier data through mean calculation to generate standard analysis data;
In the embodiment of the invention, a suitable data denoising algorithm, such as a moving average or median filtering, is applied to the original analysis standard data to reduce the noise in the data and generate the original analysis denoising data; outlier detection is then performed on the denoised data with the median absolute deviation (MAD) method to find possible abnormal values and generate the original analysis outlier data, and the detected outliers are replaced through mean calculation to generate the standard analysis data.
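Step S21 can be sketched with NumPy/SciPy as below; the median-filter kernel, the 3.5 modified z-score cut-off and the mean substitution are common conventions rather than values given in the source.

```python
# Sketch of S21: median-filter denoising, MAD-based outlier detection and
# mean substitution. Kernel size and cut-off are conventional assumptions.
import numpy as np
from scipy.signal import medfilt

def preprocess(values, kernel=5, cutoff=3.5):
    denoised = medfilt(np.asarray(values, dtype=float), kernel_size=kernel)
    med = np.median(denoised)
    mad = np.median(np.abs(denoised - med)) or 1e-12
    z = 0.6745 * (denoised - med) / mad              # modified z-score
    outliers = np.abs(z) > cutoff                    # original analysis outlier data
    cleaned = denoised.copy()
    if outliers.any() and (~outliers).any():
        cleaned[outliers] = denoised[~outliers].mean()   # mean substitution
    return cleaned                                   # standard analysis data
```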
step S22: presetting a time domain signal according to modeling requirements; performing detrending processing on the preset time domain signal to obtain a standard time domain signal; performing signal transformation processing on the standard time domain signal by using a fast Fourier transform algorithm to generate a model frequency domain signal; carrying out frequency domain feature extraction processing on the model frequency domain signal according to the phase spectrum to generate frequency domain feature data;
In the embodiment of the invention, the required time domain signal is determined according to the modeling requirements, which may be a preset signal, an observed signal or another specific signal type; detrending processing is carried out on the preset time domain signal to remove the trend component and obtain a standard time domain signal; a fast Fourier transform (FFT) algorithm is used to convert the standard time domain signal into a frequency domain signal, giving the frequency domain representation of the model and generating the model frequency domain signal; and feature extraction processing based on phase spectrum analysis is performed on the model frequency domain signal to obtain the frequency domain feature data.
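A compact sketch of step S22 with NumPy/SciPy follows; taking the first ten phase coefficients as the feature vector is an assumption made for the example.

```python
# Sketch of S22: detrend the preset signal, apply the FFT and extract
# phase-spectrum features. The feature count is an assumption.
import numpy as np
from scipy.signal import detrend

def frequency_features(signal, n_features=10):
    standard = detrend(np.asarray(signal, dtype=float))   # standard time domain signal
    spectrum = np.fft.rfft(standard)                       # model frequency domain signal
    phase = np.angle(spectrum)                             # phase spectrum
    return phase[:n_features]                              # frequency domain feature data
```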
Step S23: carrying out data frame integration processing on the frequency domain characteristic data and the standard analysis data to generate model integration data; performing data set balance processing on the model integrated data based on an oversampling method to generate model structured data;
in the embodiment of the invention, the frequency domain characteristic data and the standard analysis data are combined or integrated to create a comprehensive data set containing two data sources to generate the model integration data, and if the model integration data is unbalanced, an oversampling method (such as an SMOTE algorithm) can be adopted to generate a synthetic sample so as to balance the data set, and the generated data is the structured data of the model after the processing is completed.
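Step S23 can be sketched with pandas and imbalanced-learn as below; the column names and the presence of a label column are assumptions made so that SMOTE has classes to balance.

```python
# Sketch of S23: integrate the two data sources into one frame and balance
# the set with SMOTE. Column names and the label column are assumptions.
import pandas as pd
from imblearn.over_sampling import SMOTE

def integrate_and_balance(freq_features: pd.DataFrame,
                          standard_data: pd.DataFrame,
                          label_column: str = "label"):
    merged = pd.concat([standard_data.reset_index(drop=True),
                        freq_features.reset_index(drop=True)], axis=1)
    X = merged.drop(columns=[label_column])
    y = merged[label_column]
    X_bal, y_bal = SMOTE().fit_resample(X, y)        # oversample the minority class
    balanced = pd.DataFrame(X_bal, columns=X.columns)
    balanced[label_column] = list(y_bal)
    return balanced                                  # model structured data
```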
Step S24: performing three-dimensional grid conversion processing on the model structured data based on a Delaunay triangulation algorithm to generate a three-dimensional model grid; and carrying out smoothing treatment on the three-dimensional model grid by using a grid simplification algorithm to generate a standard three-dimensional model grid.
In the embodiment of the invention, the structured data of the model is converted into the three-dimensional grid representation by using a Delaunay triangulation algorithm, delaunay triangulation is a common three-dimensional grid generation method, non-overlapping triangular grids are generated according to a given point set, the generated three-dimensional model grids are subjected to smoothing treatment, and a grid simplification algorithm (such as Laplacian Smoothing) can be used for eliminating noise and irregularity on the grid surfaces, so that smoother standard three-dimensional model grids are obtained.
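The triangulation and smoothing of step S24 can be sketched as follows; triangulating on the XY plane and using a single fixed smoothing weight are simplifying assumptions for the example.

```python
# Sketch of S24: Delaunay triangulation with SciPy plus a simple Laplacian
# smoothing pass. Projection to XY and the 0.5 weight are assumptions.
import numpy as np
from scipy.spatial import Delaunay

def build_and_smooth(points, iterations=3, weight=0.5):
    points = np.asarray(points, dtype=float)
    faces = Delaunay(points[:, :2]).simplices        # triangulate on the XY plane
    neighbours = [set() for _ in range(len(points))]
    for a, b, c in faces:
        neighbours[a].update((b, c)); neighbours[b].update((a, c)); neighbours[c].update((a, b))
    for _ in range(iterations):
        means = np.array([points[list(nb)].mean(axis=0) if nb else points[i]
                          for i, nb in enumerate(neighbours)])
        points = (1 - weight) * points + weight * means   # move toward neighbour mean
    return points, faces                             # standard three-dimensional model grid
```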
Preferably, step S3 comprises the steps of:
step S31: carrying out first-round layering treatment on the standard three-dimensional model grid by utilizing a progressive model analysis mechanism to generate a three-dimensional model rough level; carrying out model detail feature stripping treatment on the three-dimensional model rough level to generate model detail fragments; layering the rough three-dimensional model level based on the model detail fragments in a second round to generate a three-dimensional model detail level; performing level priority rendering sequencing treatment on the rough level of the three-dimensional model and the detail level of the three-dimensional model to generate a three-dimensional rendering progressive model;
Step S32: performing model rendering processing on the three-dimensional rendering progressive model based on texture mapping to generate a three-dimensional rendering model; performing model updating processing on the three-dimensional rendering model to generate an updated rendering model;
step S33: performing visual field elimination processing on the updated rendering model according to a preset camera view cone to generate a visual field rendering model; judging the visual field rendering model by using a rejection algorithm, and when the grids in the visual field rendering model are not in the visual field, performing rendering queue rejection by using a visual field rejection optimization algorithm to generate an optimized rendering model.
The invention first performs a first round of layering processing to generate the rough level of the three-dimensional model, then strips the model detail features from the rough level to generate model detail fragments, and performs a second round of layering based on the detail fragments to generate the detail level of the three-dimensional model, thereby realizing progressive loading and optimized rendering and improving rendering efficiency and performance. Level-priority rendering ordering of the rough level and the detail level, that is, rendering the rough level first and then gradually adding detail, provides a better visual experience and rendering quality while preserving rendering efficiency. Rendering the three-dimensional rendering progressive model with texture mapping generates a three-dimensional rendering model that can be presented as a visual object with colour, texture and illumination effects. Finally, the visual field rendering model is judged by a rejection algorithm and grids that are not within the field of view are excluded from the rendering queue, reducing unnecessary rendering computation, improving rendering performance and generating the optimized rendering model, which helps to improve the performance, efficiency and user experience of displaying and rendering the three-dimensional model.
As an example of the present invention, referring to fig. 3, the step S3 in this example includes:
step S31: carrying out first-round layering treatment on the standard three-dimensional model grid by utilizing a progressive model analysis mechanism to generate a three-dimensional model rough level; carrying out model detail feature stripping treatment on the three-dimensional model rough level to generate model detail fragments; layering the rough three-dimensional model level based on the model detail fragments in a second round to generate a three-dimensional model detail level; performing level priority rendering sequencing treatment on the rough level of the three-dimensional model and the detail level of the three-dimensional model to generate a three-dimensional rendering progressive model;
In the embodiment of the invention, the model grid is decomposed into rough levels in a progressive manner; in this process, algorithms and data structures such as an octree or a quadtree can be used to represent and manage the hierarchical data structure effectively. Detail features of the model, such as edges and curves, are extracted from the rough level and represented as model detail fragments; the layering result of the first round is then further refined and the detail information of the model is added to the hierarchical structure to obtain the three-dimensional model detail levels. Finally, rendering optimization algorithms and techniques are used to order the different levels of the model by priority, so that the model is displayed progressively in hierarchical order.
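The level-priority ordering of step S31 can be sketched as below; ranking detail fragments by an estimated on-screen size is an assumed heuristic, not a rule prescribed by the source.

```python
# Sketch of S31: order the coarse level and detail fragments for progressive
# rendering. The screen-size ranking heuristic is an assumption.
def order_levels(coarse_level, detail_fragments):
    # Each fragment is assumed to carry an estimated on-screen size in pixels.
    ordered = sorted(detail_fragments,
                     key=lambda frag: frag["screen_size_px"],
                     reverse=True)
    return [coarse_level] + ordered                  # three-dimensional rendering progressive model
```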
Step S32: performing model rendering processing on the three-dimensional rendering progressive model based on texture mapping to generate a three-dimensional rendering model; performing model updating processing on the three-dimensional rendering model to generate an updated rendering model;
In the embodiment of the invention, the texture image is mapped onto the model surface with the texture mapping technique to enhance the appearance and detail of the model; for example, this can be realized by assigning texture coordinates to the vertices of the model and fetching colour information from the texture image according to those coordinates. Model rendering processing is performed with a common rendering engine such as Unity, Unreal Engine, OpenGL or DirectX to apply the texture mapping to the model and produce a realistic rendering result. Professional model editing software such as Blender, 3ds Max or Maya is used to modify and update the three-dimensional model; in the model update processing, algorithms and data structures may be used to carry out the modifications, for example a surface reconstruction algorithm may be used to smooth the model, repair broken parts and remove invalid geometric data, so as to generate the updated rendering model.
Step S33: performing visual field elimination processing on the updated rendering model according to a preset camera view cone to generate a visual field rendering model; judging the visual field rendering model by using a rejection algorithm, and when the grids in the visual field rendering model are not in the visual field, performing rendering queue rejection by using a visual field rejection optimization algorithm to generate an optimized rendering model.
In the embodiment of the invention, the camera view frustum is a geometric volume representing the viewing range of the camera, usually a truncated pyramid; from the view frustum it can be judged which parts of the model lie within the visible range of the camera, and according to the result of the visual field rejection processing, namely the part of the model inside the camera frustum, the visual field rendering model is generated and treated as the object to be displayed and processed during rendering. A suitable rejection algorithm is then applied to the visual field rendering model: an intersection test between the model and the frustum determines whether the model is completely or partially inside the frustum, and according to the result the parts that are not within the visual field rendering model are removed, forming an optimized rendering queue and generating the optimized rendering model.
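The culling decision of step S33 can be sketched with a standard axis-aligned bounding box versus frustum-plane test; the plane convention (normal·p + d ≥ 0 meaning inside) and the mesh data layout are assumptions made for the example.

```python
# Sketch of S33: remove meshes whose bounding box lies entirely outside the
# camera frustum. Planes are (normal, d) with normal.p + d >= 0 meaning inside.
import numpy as np

def outside_plane(aabb_min, aabb_max, normal, d):
    # Take the AABB corner furthest along the plane normal (the "p-vertex").
    p = np.where(np.asarray(normal) >= 0, aabb_max, aabb_min)
    return float(np.dot(normal, p)) + d < 0          # box is fully behind this plane

def visible_meshes(meshes, frustum_planes):
    kept = []
    for mesh in meshes:
        culled = any(outside_plane(mesh["aabb_min"], mesh["aabb_max"], n, d)
                     for n, d in frustum_planes)
        if not culled:
            kept.append(mesh)                        # remains in the rendering queue
    return kept                                      # optimized rendering model
```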
Preferably, the function formula of the visual field rejection optimization algorithm in step S33 is specifically as follows:
where f(x, y) is the visibility determination value function of the model in the grid, x is the abscissa of a point in the grid model, y is the ordinate of a point in the grid model, z is the vertical coordinate of a point in the grid model, C_x, C_y and C_z are the horizontal, longitudinal and vertical coordinates of the camera position, D is the camera field of view, n is the number of grids, θ is the illumination intensity received by the model, a further angular parameter denotes the camera rotation angle, L_x, L_y and L_z are the abscissa, ordinate and vertical coordinate of the model illumination area, I is the model rendering threshold, and ε is the visual field optimization anomaly adjustment value.
The invention constructs a visual field rejection optimization algorithm which confirms the visual field of the model through the coordinates of the grid model, the position coordinates of the camera and the illumination intensity received by the model. The algorithm can evaluate the influence of illumination on the visual field of the model according to the number of grids and the illumination intensity received by the model, so as to realize optimal visual field judgment, and it can dynamically and accurately position the model according to the rotation angle of the camera, thereby accurately determining the visibility determination value function of the model in the grid. In practical application, the formula can judge the visual field of the model according to the spatial coordinates of the model; if a model area is in the visual field, the illumination influence received within the visual field range is accurately measured, so that model areas that are little affected by illumination or lie outside the visual field range are excluded from rendering, reducing the huge computational load their rendering would cause. The formula fully considers the abscissa x in the grid model, the ordinate y in the grid model, the vertical coordinate z in the grid model, the horizontal coordinate C_x, ordinate C_y and vertical coordinate C_z of the camera position, the camera field of view D, the grid number n, the illumination intensity θ received by the model, the camera rotation angle, the abscissa L_x, ordinate L_y and vertical coordinate L_z of the model illumination area, the model rendering threshold I and the visual field optimization anomaly adjustment value ε, and a functional relation is formed according to the correlation between the grid number n and these parameters:
Through the interaction between the illumination intensity received by the model and the rotation angle of the camera, the extent of the edge and center areas within the visual field of the model can be determined, and the visual field is optimized while the accuracy of the area data is ensured. By utilizing the model rendering threshold, data redundancy is reduced while data accuracy is maintained, computing power is saved, and the calculation converges rapidly. The visual field judgment is adjusted through the visual field optimization anomaly adjustment value ε, so that the visibility determination value function f(x, y) of the model in the grid is generated more accurately, improving the accuracy and reliability of the visual field rejection optimization. Meanwhile, parameters in the formula such as the model rendering threshold and the camera rotation angle can be adjusted according to actual conditions, so that the algorithm adapts to different visual field rejection optimization scenes, improving its applicability and flexibility.
Preferably, step S32 comprises the steps of:
step S321: setting a rendering environment based on the STEP format, wherein the rendering environment includes a camera position and an illumination intensity; performing three-dimensional model construction processing on the model structured data based on the standard three-dimensional model grid to generate a three-dimensional construction model object;
step S322: performing view transformation processing on the position of the camera to generate a three-dimensional view coordinate system; performing projection transformation processing on the three-dimensional building model object and the three-dimensional view coordinate system to generate a standardized equipment coordinate system;
step S323: performing dimension reduction processing on the three-dimensional building model object and a standardized equipment coordinate system to generate a two-dimensional model pixel point; rasterizing the two-dimensional model pixel points to generate two-dimensional model raster pixel points;
step S324: performing texture mapping processing on the two-dimensional model grating pixel points according to the illumination intensity to generate two-dimensional model texture data; performing planar geometry clipping processing on texture data of the two-dimensional model based on a depth buffer algorithm, and eliminating invisible textures; generating two-dimensional model clipping data;
step S325: cutting the two-dimensional model cutting data into data fragments to generate two-dimensional model cutting fragments; applying a preset fragment shader to a two-dimensional model cutting fragment to perform light ray compound calculation processing, and generating two-dimensional model light ray data;
Step S326: performing data three-dimensional display processing on the two-dimensional model light data by utilizing a frame buffer technology to generate a three-dimensional rendering model;
step S327: obtaining model update data through an external data source; and importing the model update data into the three-dimensional rendering model according to a preset time stamp to perform model update processing, and generating an updated rendering model.
According to the invention, the rendering environment is set based on STEP format, wherein the rendering environment comprises the position of a camera and illumination intensity, structured data of the model is processed according to standard three-dimensional model grids, and a three-dimensional model building object is generated, so that the rendering environment is established and a visual model is built; performing view transformation processing on the position of a camera to generate a three-dimensional view coordinate system, performing projection transformation processing on a three-dimensional building model object and the three-dimensional view coordinate system to generate a standardized equipment coordinate system, and determining the position and projection effect of the model in rendering; performing dimension reduction processing on the three-dimensional building model object and a standardized equipment coordinate system to generate a two-dimensional model pixel point, performing rasterization processing on the two-dimensional model pixel point to generate a two-dimensional model raster pixel point, and converting the three-dimensional model into a two-dimensional pixel point on a screen to prepare data for subsequent texture mapping and rendering; performing texture mapping processing on the two-dimensional model grating pixel points according to illumination intensity to generate two-dimensional model texture data, performing plane geometry clipping processing on the texture data based on a depth buffer algorithm, removing invisible textures, generating two-dimensional model clipping data, and being beneficial to applying proper textures and removing invisible parts to a user during rendering; and performing data segment cutting processing on the two-dimensional model cutting data to generate two-dimensional model cutting segments. Then, applying a preset fragment shader to the cut fragments, performing light ray composite calculation processing, generating two-dimensional model light ray data, and being beneficial to calculating illumination and color for each fragment in the rendering model; and carrying out data three-dimensional display processing on the two-dimensional model light data by utilizing a frame buffer technology to generate a three-dimensional rendering model, acquiring update data of the model by an external data source, importing the update data into the three-dimensional rendering model according to a preset time stamp to carry out model update processing, generating an update rendering model, and being beneficial to displaying the model and synchronously updating the model and the external data so as to maintain the accuracy of rendering and provide a high-quality visual result.
As an example of the present invention, referring to fig. 4, the step S32 in this example includes:
step S321: setting a rendering environment based on the STEP format, wherein the rendering environment includes a camera position and an illumination intensity; performing three-dimensional model construction processing on the model structured data based on the standard three-dimensional model grid to generate a three-dimensional construction model object;
In the embodiment of the invention, the information required for rendering, including parameters such as the camera position and the illumination intensity, is extracted from the STEP format data, and an empty three-dimensional construction model object is created in a rendering engine by using the corresponding APIs and tools; this object is used to store the generated three-dimensional model data. For example, in Unity a camera component can be used to set the camera position and a light component can be used to adjust the illumination intensity. The standard three-dimensional model data can be converted into the three-dimensional construction model object by loading the model file or creating a mesh object. The model structured data based on the standard three-dimensional model grid comprises vertex coordinates, surface information, texture coordinates and the like; the vertex information in the standard three-dimensional model grid is traversed, processed and stored according to the required format or data structure, and the generated three-dimensional construction model object contains the structured data of the whole model for subsequent rendering and processing.
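For a WEB-end implementation, a scene with a camera, a light and a mesh built from structured vertex data might be set up as in the following sketch; the use of three.js, the parameter values and all variable names are illustrative assumptions, not requirements of the invention.

```typescript
// Illustrative sketch: setting up the rendering environment (camera position, light intensity)
// and building a mesh object from structured model data, using three.js as an example engine.
import * as THREE from 'three';

interface ModelStructuredData {
  positions: Float32Array;  // x,y,z per vertex
  normals: Float32Array;    // x,y,z per vertex
  uvs: Float32Array;        // u,v per vertex
  indices: Uint32Array;     // triangle indices
}

function buildScene(data: ModelStructuredData) {
  const scene = new THREE.Scene();

  // Rendering environment: camera position and illumination intensity.
  const camera = new THREE.PerspectiveCamera(60, 16 / 9, 0.1, 1000);
  camera.position.set(0, 2, 5);
  const light = new THREE.DirectionalLight(0xffffff, 1.2); // intensity taken from parsed data
  light.position.set(5, 10, 7);
  scene.add(light);

  // Three-dimensional construction model object built from the structured grid data.
  const geometry = new THREE.BufferGeometry();
  geometry.setAttribute('position', new THREE.BufferAttribute(data.positions, 3));
  geometry.setAttribute('normal', new THREE.BufferAttribute(data.normals, 3));
  geometry.setAttribute('uv', new THREE.BufferAttribute(data.uvs, 2));
  geometry.setIndex(new THREE.BufferAttribute(data.indices, 1));

  const material = new THREE.MeshStandardMaterial({ color: 0xcccccc });
  const mesh = new THREE.Mesh(geometry, material);
  scene.add(mesh);

  return { scene, camera, mesh };
}
```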
Step S322: performing view transformation processing on the position of the camera to generate a three-dimensional view coordinate system; performing projection transformation processing on the three-dimensional building model object and the three-dimensional view coordinate system to generate a standardized equipment coordinate system;
In the embodiment of the invention, parameters such as the position, orientation and up direction of the camera are determined as the input of a view matrix, and a mathematical library or graphics API is used to construct the view matrix from these camera parameters. The view matrix converts objects in the scene from the world coordinate system to the camera coordinate system, generating the three-dimensional view coordinate system. After the view transformation, projection transformation processing is carried out on the three-dimensional construction model object in the three-dimensional view coordinate system, converting it into clip space and then into the standardized device coordinate system. Clip space is a coordinate space centered on the camera and bounded by the near and far clipping planes; in this space, objects in the scene are clipped and perspective-projected. The standardized device coordinate system is a normalized coordinate space with the range [-1, 1]; in this space, objects in the scene have been perspective-projected and normalized and are finally rendered to the screen, and the standardized device coordinate system is thus generated.
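A minimal sketch of the view and projection transformations described above, using the gl-matrix library; the specific camera parameters and the helper name toNdc are assumptions for illustration.

```typescript
// Illustrative sketch: view transformation and projection transformation of a vertex.
import { mat4, vec4 } from 'gl-matrix';

// Build the view matrix from camera position, look-at target and up direction.
const view = mat4.create();
mat4.lookAt(view, [0, 2, 5], [0, 0, 0], [0, 1, 0]);

// Build a perspective projection matrix (vertical FOV in radians, aspect, near, far).
const proj = mat4.create();
mat4.perspective(proj, Math.PI / 3, 16 / 9, 0.1, 1000);

// Combined view-projection matrix.
const viewProj = mat4.create();
mat4.multiply(viewProj, proj, view);

// Transform a world-space vertex into clip space, then divide by w to obtain
// normalized device coordinates in the range [-1, 1].
function toNdc(worldVertex: [number, number, number]): [number, number, number] {
  const clip = vec4.fromValues(worldVertex[0], worldVertex[1], worldVertex[2], 1);
  vec4.transformMat4(clip, clip, viewProj);
  return [clip[0] / clip[3], clip[1] / clip[3], clip[2] / clip[3]];
}
```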
Step S323: performing dimension reduction processing on the three-dimensional building model object and a standardized equipment coordinate system to generate a two-dimensional model pixel point; rasterizing the two-dimensional model pixel points to generate two-dimensional model raster pixel points;
In the embodiment of the invention, the three-dimensional construction model object is converted from its original coordinate system into the standardized device coordinate system, a normalized coordinate space generally ranging over [-1, 1]. The vertex coordinates of the model object undergo perspective or orthogonal projection and are converted into two-dimensional coordinates in the standardized device coordinate system; the x, y and z coordinates of each vertex are calculated and mapped to obtain the coordinates of the corresponding two-dimensional model pixel point, generating the two-dimensional model pixel points. Rasterization is the process of mapping the two-dimensional model pixel points to actual pixels on the screen: the two-dimensional model pixel coordinates are mapped to screen pixel coordinates, a scanline algorithm or another rasterization algorithm is used for polygon or curved surface models, and the interpolated color and depth values of the pixels are determined along each scanline. During rasterization, invisible parts can also be removed with a clipping algorithm, improving rendering efficiency.
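The mapping from normalized device coordinates to screen pixel coordinates described above can be expressed as the small sketch below; the viewport convention (origin at the top-left, y pointing down) is an assumption chosen to match typical WEB canvases.

```typescript
// Illustrative sketch: viewport transform from normalized device coordinates (NDC, [-1, 1])
// to integer screen pixel coordinates, keeping the depth value for the depth buffer.
interface ScreenPixel { px: number; py: number; depth: number; }

function ndcToScreen(ndc: [number, number, number],
                     width: number, height: number): ScreenPixel {
  const [x, y, z] = ndc;
  return {
    px: Math.round((x * 0.5 + 0.5) * (width - 1)),
    // Flip y: NDC y grows upward, canvas pixel y grows downward.
    py: Math.round((1 - (y * 0.5 + 0.5)) * (height - 1)),
    depth: z * 0.5 + 0.5,   // remap depth from [-1, 1] to [0, 1]
  };
}

// Example: the NDC origin maps to roughly the center of an 800x600 viewport.
const center = ndcToScreen([0, 0, 0], 800, 600);
```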
Step S324: performing texture mapping processing on the two-dimensional model grating pixel points according to the illumination intensity to generate two-dimensional model texture data; performing planar geometry clipping processing on texture data of the two-dimensional model based on a depth buffer algorithm, and eliminating invisible textures; generating two-dimensional model clipping data;
In the embodiment of the invention, texture mapping processing is performed on the two-dimensional model raster pixel points according to the illumination intensity; texture mapping is the process of applying a texture image to the model. Each rasterized pixel corresponds to a pixel in the texture image, and its color value is calculated according to the illumination intensity or another illumination model. During texture mapping, texture coordinates are used to locate pixels in the texture image, and the texture color value corresponding to each rasterized pixel is calculated by interpolation from the texture coordinates and the texture image, generating the two-dimensional model texture data. A depth buffer algorithm is used to determine which parts of the model are visible: for each rasterized pixel, before texture mapping its depth value is compared with the value stored in the depth buffer. If the depth value of the current pixel is smaller than the value at the corresponding position in the depth buffer, the pixel is visible and texture mapping can be performed; if it is larger than or equal to that value, the pixel is occluded and texture mapping is not performed. After the planar geometry clipping processing, the two-dimensional model clipping data can be generated, which may include the coordinate information and color values of the clipped pixel points.
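The depth-buffer visibility test and a simple nearest-neighbour texture lookup modulated by illumination intensity, as described above, might be sketched as follows; the data layouts (row-major buffers, RGBA texture) are assumptions for the example.

```typescript
// Illustrative sketch: per-pixel depth test followed by a nearest-neighbour texture lookup
// modulated by a scalar illumination intensity.
interface Texture { width: number; height: number; rgba: Uint8ClampedArray; } // 4 bytes/texel

function sampleTexture(tex: Texture, u: number, v: number): [number, number, number] {
  const tx = Math.min(tex.width - 1, Math.max(0, Math.floor(u * tex.width)));
  const ty = Math.min(tex.height - 1, Math.max(0, Math.floor(v * tex.height)));
  const i = (ty * tex.width + tx) * 4;
  return [tex.rgba[i], tex.rgba[i + 1], tex.rgba[i + 2]];
}

// Returns the shaded color if the fragment passes the depth test, or null if it is occluded.
function shadeIfVisible(px: number, py: number, depth: number,
                        u: number, v: number, lightIntensity: number,
                        depthBuffer: Float32Array, bufferWidth: number,
                        tex: Texture): [number, number, number] | null {
  const idx = py * bufferWidth + px;
  if (depth >= depthBuffer[idx]) return null;   // occluded: skip texture mapping
  depthBuffer[idx] = depth;                      // visible: update the depth buffer
  const [r, g, b] = sampleTexture(tex, u, v);
  const k = Math.min(1, Math.max(0, lightIntensity));
  return [r * k, g * k, b * k];
}
```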
Step S325: cutting the two-dimensional model cutting data into data fragments to generate two-dimensional model cutting fragments; applying a preset fragment shader to a two-dimensional model cutting fragment to perform light ray compound calculation processing, and generating two-dimensional model light ray data;
In the embodiment of the invention, the two-dimensional model clipping data is cut into data fragments. Data fragment cutting is the process of dividing the clipping data into small discrete fragments, each of which corresponds to one rasterized pixel; the clipping data generally comprises information such as the coordinates and color values of the clipped texture pixels, and the two-dimensional model cutting fragments are generated from it. A fragment shader is a program that calculates the color of each fragment and generally includes operations such as illumination calculation, shadow processing and texture sampling. A preset fragment shader is applied to each cut fragment, ray composite calculation can be performed within the fragment shader, and factors such as illumination, shadow and texture are taken into account to determine the final color of each fragment, generating the two-dimensional model light data.
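A fragment shader of the kind referred to above, combining a texture sample with a simple Lambert lighting term, is sketched below as a GLSL ES source string embedded in TypeScript; the uniform/varying names and the lighting model are illustrative assumptions, not the preset shader of the invention.

```typescript
// Illustrative sketch: a fragment shader (GLSL ES 1.0) that composes texture color
// with a basic diffuse lighting term; uniform and varying names are hypothetical.
const fragmentShaderSource = `
  precision mediump float;

  varying vec2 vUv;          // interpolated texture coordinates
  varying vec3 vNormal;      // interpolated surface normal

  uniform sampler2D uTexture;      // two-dimensional model texture data
  uniform vec3 uLightDirection;    // light direction
  uniform float uLightIntensity;   // illumination intensity received by the model

  void main() {
    vec3 baseColor = texture2D(uTexture, vUv).rgb;
    float diffuse = max(dot(normalize(vNormal), normalize(uLightDirection)), 0.0);
    vec3 shaded = baseColor * (0.2 + uLightIntensity * diffuse); // ambient + diffuse
    gl_FragColor = vec4(shaded, 1.0);
  }
`;

// The string would be compiled with gl.createShader(gl.FRAGMENT_SHADER),
// gl.shaderSource(...) and gl.compileShader(...) in a WebGL 1 context.
export { fragmentShaderSource };
```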
Step S326: performing data three-dimensional display processing on the two-dimensional model light data by utilizing a frame buffer technology to generate a three-dimensional rendering model;
In the embodiment of the invention, a frame buffer object is created by using the API provided by the graphics library or rendering engine, and the created frame buffer object is bound to the graphics rendering context so that subsequent rendering operations take effect. A texture buffer object is created for storing the two-dimensional model light data; the texture buffer object is a special image data object used to store and process texture data. The generated two-dimensional model light data is bound to the previously created texture buffer object, and the texture buffer object is attached to the frame buffer object; in this way, rendering operations write directly into the texture buffer object. The API of the rendering engine or graphics library is used to set the render target and the viewport: the render target can be the screen, a texture or another target, and the viewport defines the display area of the rendering result on the target. The rendering function of the engine or graphics library then performs the rendering operation; during rendering, the light data is passed to the fragment shader for illumination calculation and shading, and the result is written into the frame buffer object. After rendering is completed, the rendering result can be read back from the texture attachment of the frame buffer object, or the frame buffer content can be presented directly, thereby performing the three-dimensional display of the data and generating the three-dimensional rendering model.
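In a WebGL context, the frame buffer and texture attachment workflow described above might look like the sketch below; the texture size and format choices are assumptions.

```typescript
// Illustrative sketch: creating a frame buffer object with a texture color attachment
// in WebGL, so that rendering results can be written to (and later read from) a texture.
function createRenderTarget(gl: WebGLRenderingContext, width: number, height: number) {
  // Texture buffer object that will receive the rendered image.
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);

  // Frame buffer object bound to the rendering context, with the texture attached.
  const framebuffer = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, 0);

  if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) !== gl.FRAMEBUFFER_COMPLETE) {
    throw new Error('Frame buffer is incomplete');
  }

  // Set the viewport to the display area of the render target, then unbind.
  gl.viewport(0, 0, width, height);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return { framebuffer, texture };
}
```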
Step S327: obtaining model update data through an external data source; and importing the model update data into the three-dimensional rendering model according to a preset time stamp to perform model update processing, and generating an updated rendering model.
In the embodiment of the invention, the source of the model update data is determined, which may be a network interface, a file system or another storage device; the model update data is read from the external data source, parsed according to a preset data format and structure, and the required model update information is extracted. For a scene updated in real time, the latest model update data is selected according to its time stamp: if the external data source provides several pieces of update data, each with a time stamp, the appropriate data is chosen according to the current time or a designated time point. The selected model update data is imported into the three-dimensional rendering model, which may involve applying the updated vertex positions, normal information, texture coordinates and the like to the corresponding parts of the rendering model, and the attributes and state of the three-dimensional rendering model are updated through the API of the rendering engine or graphics library, including operations such as vertex coordinate transformation, normal calculation and texture mapping. After the model update processing is completed, the updated three-dimensional rendering model is re-rendered, generating the updated rendering model.
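One possible way to select and apply the time-stamped update described above is sketched below; the record shape and the selection rule (latest update not later than the target time) are assumptions.

```typescript
// Illustrative sketch: choosing the model update closest to (but not later than) a target
// time stamp and applying its vertex data to the rendered mesh.
interface ModelUpdate {
  timestamp: number;            // milliseconds since epoch
  positions: Float32Array;      // updated vertex positions
  normals?: Float32Array;       // optional updated normals
}

function selectUpdate(updates: ModelUpdate[], targetTime: number): ModelUpdate | undefined {
  return updates
    .filter(u => u.timestamp <= targetTime)
    .sort((a, b) => b.timestamp - a.timestamp)[0];
}

// `applyToMesh` stands for whatever the rendering engine offers to overwrite vertex buffers.
function applyUpdate(update: ModelUpdate | undefined,
                     applyToMesh: (positions: Float32Array, normals?: Float32Array) => void) {
  if (!update) return;                       // nothing newer than the last applied update
  applyToMesh(update.positions, update.normals);
}

// Usage: apply the newest update available at the current frame time.
// (mesh.updateVertexData is a hypothetical engine-specific setter.)
// applyUpdate(selectUpdate(pendingUpdates, Date.now()), mesh.updateVertexData);
```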
Preferably, step S4 comprises the steps of:
step S41: carrying out data packet compression processing on the updated rendering model according to a geometric coding compression algorithm to generate a rendering model data packet; performing format conversion processing on the rendering model data packet based on the STEP format to generate the STEP format rendering model data packet;
step S42: performing data packet cutting processing on the STEP format rendering model data packet to generate a STEP format rendering model data block; performing sequencing mark processing on STEP format rendering model data blocks to generate STEP format rendering sequencing links; based on STEP format rendering ordering links, carrying out stream receiving processing on STEP format rendering model data blocks to generate model rendering entity information weight data;
step S43: comparing the model rendering entity information weight data with a preset rendering precision threshold, and generating a STEP format model file when the model rendering entity information weight data is larger than the rendering precision threshold.
The invention carries out data packet compression processing on the updated rendering model through a geometric coding compression algorithm to generate a rendering model data packet, carries out format conversion processing on the rendering model data packet based on STEP format to generate STEP format rendering model data packet, is beneficial to reducing the size of the data packet and converting the data packet into a universal STEP format, and is convenient for transmission and storage; performing data packet cutting processing on STEP format rendering model data packets to generate STEP format rendering model data blocks, performing sequencing marking processing on the rendering model data blocks to generate STEP format rendering sequencing links, facilitating cutting of large data packets into smaller data blocks, and providing sequencing and linking information for subsequent stream receiving processing; based on STEP format rendering ordering links, stream receiving processing is carried out on STEP format rendering model data blocks to generate model rendering entity information weight data, so that block-by-block receiving and data processing can be realized, memory use is reduced, and entity information and weight data related to model rendering are generated; and comparing the model rendering entity information weight data with a preset rendering precision threshold value, which is beneficial to controlling the rendering precision, ensuring that only entity information with enough importance is contained in a final model file, reducing the file size and improving the rendering efficiency.
In the embodiment of the invention, the updated rendering model is compressed into a data packet by a geometric coding compression algorithm, and the compressed data packet is converted into a STEP (Standard for the Exchange of Product Data) format rendering model data packet. The STEP format rendering model data packet is segmented into several smaller data blocks according to the packet size, performance requirements or network transmission capacity, and the segmented STEP format rendering model data blocks are given ordering marks, generating the STEP format rendering ordering links; the ordering marks can be added according to the order of the data blocks or other specific attributes so that the data blocks can be correctly reassembled during the subsequent streaming reception. Based on the STEP format rendering ordering links, the STEP format rendering model data blocks are received as a stream; streaming reception allows the data blocks to be received and processed in batches, reducing memory occupation and processing load. The received data blocks are assembled and processed according to the mark information in the ordering links, generating the model rendering entity information weight data. The model rendering entity information weight data is compared with the preset rendering precision threshold, and if the weight data is larger than the threshold, a STEP file is generated according to the STEP format specification using a STEP tool or library, producing the STEP format model file.
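The chunking, ordering-mark and streamed reassembly steps might be realized as in this sketch; the chunk header layout and the reassembly strategy are assumptions for a WEB-end implementation.

```typescript
// Illustrative sketch: cut a compressed rendering-model packet into ordered chunks,
// then reassemble chunks (possibly arriving out of order) using their sequence marks.
interface Chunk { sequence: number; total: number; payload: Uint8Array; }

function cutIntoChunks(packet: Uint8Array, chunkSize: number): Chunk[] {
  const total = Math.ceil(packet.length / chunkSize);
  const chunks: Chunk[] = [];
  for (let i = 0; i < total; i++) {
    chunks.push({
      sequence: i,
      total,
      payload: packet.subarray(i * chunkSize, Math.min((i + 1) * chunkSize, packet.length)),
    });
  }
  return chunks;
}

// Collects chunks as they stream in and reports when the ordered packet is complete.
class ChunkAssembler {
  private received = new Map<number, Uint8Array>();
  private total = 0;

  add(chunk: Chunk): Uint8Array | null {
    this.total = chunk.total;
    this.received.set(chunk.sequence, chunk.payload);
    if (this.received.size < this.total) return null;   // still waiting for more blocks

    // All blocks present: concatenate in sequence order.
    const length = Array.from(this.received.values()).reduce((n, p) => n + p.length, 0);
    const out = new Uint8Array(length);
    let offset = 0;
    for (let i = 0; i < this.total; i++) {
      const part = this.received.get(i)!;
      out.set(part, offset);
      offset += part.length;
    }
    return out;
  }
}
```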
Preferably, the functional formula of the geometric coding compression algorithm in step S41 is as follows:
where K is the encoded compressed data size, m is the number of data packets, S is the original data size before updating, G_i is the difference between the i-th vertex data and the corresponding reconstructed vertex data, e is the maximum error allowed by data transmission loss, P_ij is a vertex position in the original rendering model, R_ij is the corresponding vertex position in the reconstructed rendering model, V_ij is a vertex normal in the original rendering model, the corresponding symbol denotes the vertex normal in the reconstructed rendering model, and μ is the model coding compression anomaly correction amount.
The invention constructs a functional formula of the geometric coding compression algorithm which calculates the loss incurred during data packet transmission from the original data size before updating, the i-th vertex difference and the maximum error allowed by data transmission loss. The geometric coding compression algorithm can reconstruct and compress the rendered data packet of the model according to the vertex positions in the original rendering model and in the reconstructed rendering model, so as to realize optimal data packet coding, and it can dynamically restore the rendering precision after model compression according to the vertex normals in the original and reconstructed rendering models, thereby accurately determining the size of the encoded compressed data. In practical application, the formula samples the data in a region, retains only a subset of important data points, and then reconstructs or interpolates from those points to restore the data of the whole region; the reconstructed model is compared with the original model data, and the data packet is compressed and encoded as much as possible while the rendering precision is preserved, saving system space. The formula fully considers the number of data packets m, the original data size S before updating, the difference G_i between the i-th vertex data and the corresponding reconstructed vertex data, the maximum error e allowed by data transmission loss, the vertex position P_ij in the original rendering model, the vertex position R_ij in the reconstructed rendering model, the vertex normal V_ij in the original rendering model and its counterpart in the reconstructed rendering model, and the model coding compression anomaly correction amount μ, and forms a functional relation based on the original data size S before updating and these parameters:
Through the interaction between the difference between the i-th vertex data and the corresponding reconstructed vertex data and the vertex positions in the original rendering model, the error that compression introduces into the original rendering model can be assessed, and the geometric coding compression of the data packet is carried out while the accuracy of the regional data is ensured. By utilizing the maximum error allowed by data transmission loss, data redundancy is reduced while data accuracy is maintained, computing power is saved, and the calculation converges rapidly. The data packet coding compression is adjusted through the model coding compression anomaly correction amount μ, so that the encoded compressed data size K is generated more accurately, improving the accuracy and reliability of the geometric coding compression. Meanwhile, parameters in the formula such as the maximum error allowed by data transmission loss and the number of data packets can be adjusted according to actual conditions, so that the algorithm adapts to different geometric coding compression scenes, improving its applicability and flexibility.
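Since the exact functional form is given by the formula of the invention, the sketch below only illustrates the general principle described around it: quantize vertex positions, reconstruct them, measure the per-vertex difference G_i, and accept the compressed representation when every difference stays within the allowed maximum error e. The 16-bit quantization scheme and all names are assumptions, not the claimed algorithm.

```typescript
// Illustrative sketch of geometric coding compression: quantize vertex coordinates to a
// 16-bit grid, reconstruct them, and check the reconstruction error against a bound e.
interface CompressedGeometry {
  quantized: Uint16Array;                   // encoded vertex data
  min: [number, number, number];            // bounding-box origin used for dequantization
  scale: [number, number, number];          // bounding-box extent per axis
}

function compressVertices(positions: Float32Array): CompressedGeometry {
  const min: [number, number, number] = [Infinity, Infinity, Infinity];
  const max: [number, number, number] = [-Infinity, -Infinity, -Infinity];
  for (let i = 0; i < positions.length; i += 3) {
    for (let a = 0; a < 3; a++) {
      min[a] = Math.min(min[a], positions[i + a]);
      max[a] = Math.max(max[a], positions[i + a]);
    }
  }
  const scale: [number, number, number] =
    [max[0] - min[0] || 1, max[1] - min[1] || 1, max[2] - min[2] || 1];
  const quantized = new Uint16Array(positions.length);
  for (let i = 0; i < positions.length; i += 3) {
    for (let a = 0; a < 3; a++) {
      quantized[i + a] = Math.round(((positions[i + a] - min[a]) / scale[a]) * 65535);
    }
  }
  return { quantized, min, scale };
}

function reconstructVertex(c: CompressedGeometry, i: number, axis: number): number {
  return (c.quantized[i + axis] / 65535) * c.scale[axis] + c.min[axis];
}

// Returns true when every per-vertex difference G_i is within the allowed maximum error e.
function withinAllowedError(original: Float32Array, c: CompressedGeometry, e: number): boolean {
  for (let i = 0; i < original.length; i += 3) {
    const gi = Math.hypot(
      original[i] - reconstructVertex(c, i, 0),
      original[i + 1] - reconstructVertex(c, i, 1),
      original[i + 2] - reconstructVertex(c, i, 2),
    );
    if (gi > e) return false;
  }
  return true;
}
```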
Preferably, step S5 comprises the steps of:
step S51: transmitting the STEP format model file from the server side to the user side according to a network transmission protocol, and generating a transmission STEP format model file;
step S52: when a user receives a transmission STEP format model file, performing task packaging processing on the transmission STEP format model file to generate a real-time rendering task;
step S53: performing rendering node allocation processing on STEP format model file task packages by using a distributed rendering technology to generate model file task nodes; performing node parallel processing on the model file task nodes to generate a model rendering result;
step S54: and carrying out image synthesis processing on the model rendering results generated by each rendering node to generate a model rendering effect image.
According to the invention, the STEP format model file is transmitted from the server side to the user side through a network transmission protocol, generating the transmitted STEP format model file, so that the model file can be delivered effectively from the server to the user side in preparation for the subsequent rendering tasks; after the user side receives the transmitted STEP format model file, task packaging processing is performed on it to generate a real-time rendering task, and packaging related rendering tasks together improves the efficiency of task processing and facilitates the subsequent rendering node allocation; the STEP format model file task package is allocated to rendering nodes by using distributed rendering technology, generating model file task nodes, and the model file task nodes are processed in parallel to generate the model rendering results, which effectively improves rendering speed and efficiency; finally, the model rendering results generated by each rendering node undergo image synthesis processing to generate the model rendering effect image, so that a more comprehensive and higher-quality model rendering effect image can be obtained.
In the embodiment of the invention, the STEP format model file on the server side is transmitted to the user side using a suitable network transmission protocol (such as HTTP, TCP/IP or FTP); on the server side the STEP format model file is packaged into a transmission format (such as binary data or a specific file format) for transmission, and on the user side the transmitted STEP format model file is received and stored, generating the transmitted STEP format model file. The received file is then packaged into processing tasks on the user side; task packaging can divide the model file into different task units as required, for example by region, by file size or by rendering parameters, and the packaged tasks are used for subsequent distributed rendering processing, generating real-time rendering tasks. The task packages are allocated and processed using distributed rendering technology according to the system configuration and a load-balancing strategy; distributed rendering can use multiple computers or nodes for parallel processing, generating model file task nodes. Each model file task node then executes its rendering task in parallel, applying rendering techniques such as fragment shaders, depth buffer algorithms or ray tracing to generate its model rendering result, and the results produced by the individual rendering nodes are combined by image synthesis into the final model rendering effect image.
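On the WEB side, node-parallel processing of rendering tasks could be approximated with Web Workers, as in the sketch below; the worker script path, the message shapes and the tile-based compositing scheme are assumptions, not the distributed rendering architecture claimed by the invention.

```typescript
// Illustrative sketch: distribute rendering tasks across Web Workers and composite the
// per-node results (here: image tiles) into one effect image on an offscreen canvas.
interface RenderTask { tileX: number; tileY: number; tileSize: number; payload: ArrayBuffer; }
interface RenderResult { tileX: number; tileY: number; pixels: ImageData; }

async function renderInParallel(tasks: RenderTask[], width: number, height: number,
                                workerCount: number): Promise<ImageData> {
  const canvas = new OffscreenCanvas(width, height);
  const ctx = canvas.getContext('2d')!;
  const workers = Array.from({ length: workerCount },
    () => new Worker('render-worker.js'));          // hypothetical worker script

  let next = 0;
  await Promise.all(workers.map(worker => new Promise<void>(resolve => {
    const feed = () => {
      if (next >= tasks.length) { worker.terminate(); resolve(); return; }
      worker.postMessage(tasks[next++]);
    };
    worker.onmessage = (ev: MessageEvent<RenderResult>) => {
      // Composite the tile returned by this rendering node into the full image.
      ctx.putImageData(ev.data.pixels, ev.data.tileX, ev.data.tileY);
      feed();
    };
    feed();
  })));

  return ctx.getImageData(0, 0, width, height);
}
```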
Preferably, step S6 comprises the steps of:
step S61: acquiring user equipment performance data of a user by calling a system API;
step S62: performing equipment performance evaluation processing on the user equipment performance data to generate user equipment performance evaluation indexes; and performing rendering loading resource management on the model rendering effect image based on the user equipment performance evaluation index to generate a STEP format real-time rendering model.
The invention can acquire the equipment performance data of the user side by calling the system API, process and analyze the equipment performance data of the user side, generate the performance evaluation index of the user equipment, and know the processing capacity and rendering performance of the user equipment. These evaluation metrics may be used to determine whether the device has sufficient performance to perform real-time rendering tasks; the method has the advantages that the resources required in the rendering process can be optimized and managed by performing rendering loading resource management on the model rendering effect image, so that real-time rendering can be smoothly performed on the user equipment, the performance of the user equipment can be utilized to the greatest extent, and smooth and high-quality rendering experience is provided.
In the embodiment of the invention, the performance data of the user equipment is acquired at the user end by calling system APIs (Application Programming Interfaces). Device information such as the processor model, memory size and graphics card information can be obtained through APIs provided by the operating system, such as the android.os.Build class on Android or the UIDevice class on iOS, or through browser Web APIs, for example navigator.hardwareConcurrency for the number of CPU cores and navigator.deviceMemory for the device memory. The acquired user equipment performance data undergoes device performance evaluation processing to generate the device performance evaluation indexes, which can cover aspects such as processor performance, memory capacity and graphics card performance. Rendering loading resource management is then performed based on the user equipment performance evaluation indexes to determine the level of detail and resource usage of model rendering; rendering resource management can dynamically adjust parameters such as resolution, quality and model complexity according to the performance of the user equipment, and the rendering parameters can be tuned in real time to keep rendering within the performance range of the user equipment, so that the STEP format real-time rendering model is generated.
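A WEB-end version of this adaptive precision adaptation might look like the sketch below; navigator.deviceMemory is a non-standard (Chromium-only) API, and the tier thresholds and quality settings are illustrative assumptions.

```typescript
// Illustrative sketch: derive a device performance tier from browser APIs and map it to
// rendering parameters (resolution scale, texture quality, model complexity).
interface QualitySettings { resolutionScale: number; textureSize: number; maxTriangles: number; }

function evaluateDevice(): 'low' | 'medium' | 'high' {
  const cores = navigator.hardwareConcurrency || 2;
  // deviceMemory is only available in Chromium-based browsers; fall back to 4 GB.
  const memoryGb = (navigator as Navigator & { deviceMemory?: number }).deviceMemory ?? 4;
  if (cores >= 8 && memoryGb >= 8) return 'high';
  if (cores >= 4 && memoryGb >= 4) return 'medium';
  return 'low';
}

function selectQuality(tier: 'low' | 'medium' | 'high'): QualitySettings {
  switch (tier) {
    case 'high':   return { resolutionScale: 1.0,  textureSize: 2048, maxTriangles: 2_000_000 };
    case 'medium': return { resolutionScale: 0.75, textureSize: 1024, maxTriangles: 500_000 };
    case 'low':    return { resolutionScale: 0.5,  textureSize: 512,  maxTriangles: 100_000 };
  }
}

// Usage: tune the renderer once at start-up (and optionally again when the frame rate drops).
const settings = selectQuality(evaluateDevice());
```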
The invention has the advantages that: by obtaining and parsing the original model file, the STEP format model can be converted into original analysis standard data, providing an operable data structure for the subsequent processing steps; through data preprocessing and three-dimensional grid conversion processing, the original analysis standard data can be converted into operable model structured data, providing a more convenient data format for the subsequent rendering and optimization steps so that the model is processed and rendered more efficiently; through compression processing and streaming technology, the optimized rendering model can be converted into a more compact rendering model data packet, and by comparing the model rendering entity information weight data with the preset rendering precision threshold, a STEP format model file with adaptive precision can be generated to meet different requirements and bandwidth limitations; through network transmission and distributed parallel node processing, the rendering tasks can be executed on multiple nodes simultaneously, improving rendering speed and rendering precision and producing a high-quality model rendering effect image; and by acquiring the performance data of the user equipment, adaptive rendering precision adaptation can be performed on the model rendering effect image, so that rendering quality and performance are flexibly balanced according to the performance characteristics of the user equipment, providing a smooth and high-quality rendering experience. Therefore, the invention carries out progressive hierarchical rendering on the STEP format model and performs parallel processing through nodes, so as to improve the rendering precision and the load capacity of the WEB terminal.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
The foregoing is only a specific embodiment of the invention to enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.