CN116977523B - STEP format rendering method at WEB terminal - Google Patents


Info

Publication number
CN116977523B
Authority
CN
China
Prior art keywords
model
rendering
data
generate
processing
Prior art date
Legal status
Active
Application number
CN202310924068.8A
Other languages
Chinese (zh)
Other versions
CN116977523A (en)
Inventor
江慧明
黄红亮
Current Assignee
Quick Direct Shenzhen Precision Manufacturing Co., Ltd.
Original Assignee
Quick Direct Shenzhen Precision Manufacturing Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Quick Direct Shenzhen Precision Manufacturing Co., Ltd.
Priority to CN202310924068.8A
Publication of CN116977523A
Application granted
Publication of CN116977523B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tessellation

Abstract

The invention relates to the technical field of data rendering, and in particular to a method for rendering the STEP format at the WEB end. The method comprises the following steps: obtaining an original STEP-format model file through a WEB-end file transmission channel; parsing the STEP-format model file with a STEP file parsing library to generate original parsed standard data; preprocessing the original parsed standard data to generate model structured data; converting the model structured data into a standard three-dimensional model mesh with a polygon-mesh generation algorithm; rendering the three-dimensional model mesh with a progressive model parsing mechanism to generate an updated rendering model; and optimizing the updated rendering model with a view-culling optimization algorithm to generate an optimized rendering model. By rendering the STEP-format model in layers and processing it in parallel, the invention realizes a STEP-format rendering method at the WEB end.

Description

STEP format rendering method at WEB terminal
Technical Field
The invention relates to the technical field of data rendering, in particular to a STEP format rendering method at a WEB end.
Background
The predecessor of STEP is IGES (Initial Graphics Exchange Specification), which was created to solve the exchange of geometric data between different computer systems. The STEP format provides a standardized way to represent complex product model data, including geometry, topology, attributes, and relationships, and can be used to exchange and share product data between different computer systems and applications, promoting collaboration and integration in industrial manufacturing and product development. However, rendering and displaying STEP-format data on the WEB side has always been challenging: existing methods still have limitations in rendering speed, precision, and interactivity. A new method is therefore needed to solve these problems and improve the experience of displaying STEP-format data on the WEB side.
Disclosure of Invention
Based on this, it is necessary to provide a STEP-format rendering method at the WEB end that solves at least one of the above technical problems.
To achieve the above purpose, a method for rendering the STEP format on the WEB side includes the following steps:
Step S1: obtaining an original STEP-format model file through a WEB-end file transmission channel; parsing the STEP-format model file with a STEP file parsing library to generate original parsed standard data;
Step S2: preprocessing the original parsed standard data to generate model structured data; converting the model structured data into a standard three-dimensional model mesh with a polygon-mesh generation algorithm;
Step S3: rendering the three-dimensional model mesh with a progressive model parsing mechanism to generate an updated rendering model; optimizing the updated rendering model with a view-culling optimization algorithm to generate an optimized rendering model;
Step S4: compressing the optimized rendering model with a geometric coding compression algorithm to generate a rendering-model data packet; unpacking the rendering-model data packet with a streaming technique to generate model-rendering entity-information weight data; comparing the weight data with a preset rendering-precision threshold to generate a STEP-format model file;
Step S5: transmitting the STEP-format model file to the user side through network nodes for task packaging to generate real-time rendering tasks; processing the real-time rendering tasks in parallel across nodes with a distributed rendering technique to generate a model rendering effect image;
Step S6: acquiring user-device performance data; adapting the rendering precision of the model rendering effect image to the user-device performance data to generate a STEP-format real-time rendering model.
According to the invention, the original STEP-format model file is acquired through the WEB-end file transmission channel and parsed with the STEP file parsing library. This ensures that the data needed by the subsequent steps is effectively obtained from the user-uploaded file and converted into original parsed standard data, laying a solid foundation for later model processing and analysis. Preprocessing the parsed data reduces noise, fills in missing values, improves the effect of the subsequent steps, and reduces uncertainty and error by converting the raw data into a more structured form; expressing continuous geometry as a mesh of simple shapes such as triangles or quadrilaterals further improves the quality and degree of structure of the model data. Rendering the mesh with a progressive model parsing mechanism loads model detail gradually on demand instead of loading the entire complex model at once, which improves rendering efficiency, and model detail at different resolutions can be loaded as needed, reducing memory requirements.
Optimizing the updated rendering model with the view-culling optimization algorithm reduces the rendering of invisible objects and saves further memory, yielding more vivid and detailed results and improving the realism and visual quality of the scene. Compressing the optimized rendering model with the geometric coding compression algorithm markedly reduces the volume of the rendering-model data, cutting storage occupancy and the bandwidth needed for network transmission and improving transmission efficiency. Unpacking the rendering-model data packet with a streaming technique lets the data load while it is still being transmitted, reducing waiting time, starting rendering sooner, and speeding up interactive response. Comparing the model-rendering entity-information weight data with the preset rendering-precision threshold dynamically controls the model's rendering precision, flexibly balancing rendering quality against the demands and performance of the display device. Transmitting the STEP-format model file to the user side through network nodes for task packaging makes full use of computing resources and accelerates rendering.
Node-parallel processing of the real-time rendering tasks with a distributed rendering technique shortens the completion time of rendering tasks and improves user experience and working efficiency; because the work can be spread across multiple rendering nodes, large and complex rendering tasks can be handled efficiently even under memory and compute limits, avoiding system crashes or performance degradation. Acquiring user-device performance data and adaptively matching the rendering precision of the effect image to it saves computing resources, accelerates rendering, supports a wide range of device types and configurations, and improves rendering quality and detail. The invention thus renders the STEP-format model progressively in layers and processes it in parallel across nodes, improving the rendering precision and load capacity of the WEB end.
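The visual-field rejection (view-frustum culling) optimization described above can be sketched as follows. This is a hedged illustration, not the patent's published algorithm: the visible volume is modelled as inward-facing planes, and any object whose bounding sphere lies entirely behind one plane is skipped before rendering.

```python
# Illustrative view-frustum culling sketch (an assumption for clarity; the
# patent does not publish its exact culling algorithm). The visible volume
# is the set of points x with n . x + d >= 0 for every plane (n, d).

# Six planes bounding the cube -1 <= x, y, z <= 1 (hypothetical frustum).
UNIT_BOX_FRUSTUM = [
    ((1, 0, 0), 1), ((-1, 0, 0), 1),
    ((0, 1, 0), 1), ((0, -1, 0), 1),
    ((0, 0, 1), 1), ((0, 0, -1), 1),
]

def cull(objects, frustum_planes):
    """Keep only the objects whose bounding spheres may be visible.

    objects: iterable of (center, radius) with center = (x, y, z).
    """
    visible = []
    for center, radius in objects:
        # An object is invisible when its sphere is fully behind any plane.
        outside = any(
            sum(n * c for n, c in zip(normal, center)) + d < -radius
            for normal, d in frustum_planes
        )
        if not outside:
            visible.append((center, radius))
    return visible

scene = [
    ((0, 0, 0), 0.5),    # well inside the frustum: rendered
    ((5, 0, 0), 0.5),    # far beyond the +x plane: culled, no render work
    ((1.2, 0, 0), 0.5),  # straddling the +x plane: conservatively kept
]
```

Only two of the three spheres survive the cull, so the renderer never touches the third; that skipped work is the memory and workload saving the passage describes.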
Preferably, step S4 comprises the following steps:
Step S41: compressing the updated rendering model into a data packet according to the geometric coding compression algorithm to generate a rendering-model data packet; converting the packet into STEP format to generate a STEP-format rendering-model data packet;
Step S42: cutting the STEP-format rendering-model data packet into STEP-format rendering-model data blocks; marking the blocks in sequence to generate a STEP-format rendering ordering link; stream-receiving the data blocks based on the ordering link to generate model-rendering entity-information weight data;
Step S43: comparing the model-rendering entity-information weight data with a preset rendering-precision threshold, and generating the STEP-format model file when the weight data exceeds the threshold.
The invention compresses the updated rendering model into a data packet through the geometric coding compression algorithm and converts the packet into STEP format, which reduces the packet size and puts it into a common format that is easy to transmit and store. Cutting the STEP-format rendering-model data packet into data blocks and marking the blocks in sequence produces a STEP-format rendering ordering link, which splits a large packet into smaller blocks and supplies the ordering and linking information needed for the subsequent stream reception. Stream-receiving the data blocks according to the ordering link yields the model-rendering entity-information weight data, enabling block-by-block reception and processing, reducing memory use, and producing the entity information and weight data relevant to model rendering. Finally, comparing the weight data with the preset rendering-precision threshold controls the rendering precision, ensures that only sufficiently important entity information is included in the final model file, reduces the file size, and improves rendering efficiency.
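The cut, order-mark, and stream-receive sequence of steps S41 to S43 can be sketched as below. The block size, the use of zlib as a stand-in for the geometric coding compression, and the definition of the weight as the recovered fraction of the payload are all illustrative assumptions, not details from the patent.

```python
import zlib

def cut_into_blocks(packet, block_size=8):
    """Step S42 sketch: cut a rendering-model data packet into blocks,
    each tagged with its sequence number (the 'ordering link')."""
    return [(seq, packet[i:i + block_size])
            for seq, i in enumerate(range(0, len(packet), block_size))]

def stream_receive(blocks, expected_len):
    """Reassemble blocks that may arrive out of order, and report the
    entity-information weight as the fraction of the payload recovered
    (a hypothetical definition; the patent does not give one)."""
    payload = b"".join(chunk for _, chunk in sorted(blocks))
    return payload, len(payload) / expected_len

# zlib stands in for the geometric coding compression of step S41.
packet = zlib.compress(b"vertex data " * 50)
blocks = cut_into_blocks(packet)
# Simulate out-of-order arrival by reversing the block stream.
payload, weight = stream_receive(list(reversed(blocks)), len(packet))

RENDERING_PRECISION_THRESHOLD = 0.95   # preset threshold of step S43
model_file_ready = weight > RENDERING_PRECISION_THRESHOLD
```

Because every block carries its sequence number, the receiver can process blocks as they arrive and still reconstruct the exact packet, which is what lets loading overlap with transmission.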
Preferably, the geometric coding compression algorithm in step S41 is built from the following quantities: C, the encoded compressed data size; n, the number of data packets; S0, the original data size before the update; d_i, the difference between the i-th vertex data and the corresponding reconstructed vertex data; ε, the maximum error allowed for data-transmission loss; p_i and p'_i, the vertex positions in the original and reconstructed rendering models; v_i and v'_i, the vertex normals in the original and reconstructed rendering models; and Δ, the model-encoding compression anomaly correction.
The invention constructs the functional formula of the geometric coding compression algorithm so as to calculate the loss incurred during data-packet transmission from the original data size S0 and the maximum error ε permitted for transmission loss. The algorithm reconstructs and compresses the model's rendering data packet from the vertex positions p_i of the original rendering model and p'_i of the reconstructed one, achieving optimal packet encoding, and dynamically restores the rendering precision after model compression from the vertex normals v_i and v'_i, so that the size C of the encoded compressed data is determined accurately. In practical application, the formula samples the data within a region, retains only a subset of the important data points, and then reconstructs or interpolates from them to restore the data of the whole region; the reconstructed model is compared with the original model data, and the data packet is compressed and encoded as far as possible while the rendering precision is preserved, saving system space. Taking the interrelation of all the parameters above into account, they constitute a functional relationship of the form
C = f(n, S0, d_1, ..., d_n, ε; p_i, p'_i, v_i, v'_i) + Δ, with d_i = ||p_i - p'_i|| + ||v_i - v'_i|| ≤ ε.
Through the interaction between each difference d_i and the corresponding vertex position p_i in the original rendering model, the error that packet compression imposes on the original model can be assessed, so the packet is geometrically encoded and compressed while the accuracy of the regional data is guaranteed. The maximum permitted error ε is used to cut data redundancy without sacrificing accuracy, saving computation and letting the calculation converge quickly, while the anomaly correction Δ adjusts the packet encoding so that the compressed data size C is generated more accurately, improving the precision and reliability of the geometric coding compression. Parameters such as ε and the packet count n can also be adjusted to the actual situation, adapting the method to different geometric-coding compression scenarios and improving the algorithm's applicability and flexibility.
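The quantities in the formula can be made concrete with a small sketch. Uniform quantization here is an assumed stand-in for the patented coding scheme: vertex coordinates are snapped to a grid of cell size ε so that every per-vertex difference d_i between the original and reconstructed model stays within the allowed error.

```python
def quantize(vertices, eps=0.01):
    """Lossy geometric-coding stand-in (an assumption, not the patented
    scheme): snap each coordinate to a grid of cell size eps."""
    return [tuple(round(c / eps) for c in v) for v in vertices]

def reconstruct(quantized, eps=0.01):
    """Rebuild approximate vertex positions from grid indices."""
    return [tuple(k * eps for k in v) for v in quantized]

def max_vertex_error(original, rebuilt):
    """The largest d_i: per-coordinate difference between each original
    vertex and its reconstruction."""
    return max(
        abs(a - b)
        for v, r in zip(original, rebuilt)
        for a, b in zip(v, r)
    )

EPS = 0.01                                   # maximum permitted error (epsilon)
verts = [(0.1234, 0.5678, 0.9), (1.0001, 2.0, 3.5)]
rebuilt = reconstruct(quantize(verts, EPS), EPS)
worst = max_vertex_error(verts, rebuilt)     # bounded by EPS / 2 per coordinate
```

Storing small grid indices instead of full-precision floats is what shrinks the packet; the error bound ε is exactly the knob the passage describes for trading size against rendering precision.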
Preferably, step S5 comprises the following steps:
Step S51: transmitting the STEP-format model file from the server side to the user side according to a network transmission protocol to generate a transmitted STEP-format model file;
Step S52: when the user side receives the transmitted STEP-format model file, packaging it into tasks to generate real-time rendering tasks;
Step S53: allocating rendering nodes to the STEP-format model-file task packages with a distributed rendering technique to generate model-file task nodes; processing the task nodes in parallel to generate model rendering results;
Step S54: synthesizing the rendering results generated by each rendering node into a model rendering effect image.
According to the invention, transmitting the STEP-format model file from the server side to the user side with a network transmission protocol delivers the model file effectively and prepares for the subsequent rendering work. After receiving the transmitted file, the user side packages it into real-time rendering tasks; bundling related rendering work together improves task-processing efficiency and eases the subsequent allocation of rendering nodes. Allocating rendering nodes to the task packages with a distributed rendering technique and processing the task nodes in parallel effectively improves rendering speed and efficiency. Finally, synthesizing the rendering results produced by each node yields a more complete, higher-quality model rendering effect image.
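The split, parallel-render, and compose flow of steps S53 and S54 can be sketched with thread-based tiles. The tile count, the flat-shade "renderer", and the row-wise split are illustrative assumptions standing in for real distributed rendering nodes.

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT, NODES = 8, 8, 4   # illustrative sizes, not from the patent

def split_task(height, nodes):
    """Step S53 sketch: cut the frame into one row-band task per node."""
    rows = height // nodes
    return [(node, node * rows, (node + 1) * rows) for node in range(nodes)]

def render_tile(task):
    """Stand-in per-node renderer: fill the band with a flat shade equal to
    the node id. A real node would rasterize its share of the model here."""
    node, row_start, row_end = task
    return node, [[node] * WIDTH for _ in range(row_start, row_end)]

def compose(results):
    """Step S54 sketch: stitch per-node bands back into one effect image."""
    image = []
    for _, band in sorted(results):
        image.extend(band)
    return image

tasks = split_task(HEIGHT, NODES)
with ThreadPoolExecutor(max_workers=NODES) as pool:   # node-parallel processing
    results = list(pool.map(render_tile, tasks))
image = compose(results)
```

Tagging each result with its node id lets the composer stitch bands back in order even if nodes finish out of sequence, which is the property a distributed compositor relies on.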
Preferably, step S6 comprises the steps of:
Step S61: acquiring user equipment performance data of a user by calling a system API;
Step S62: performing equipment performance evaluation processing on the user equipment performance data to generate user equipment performance evaluation indexes; and performing rendering loading resource management on the model rendering effect image based on the user equipment performance evaluation index to generate a STEP format real-time rendering model.
The invention obtains device performance data from the user side by calling the system API, processes and analyzes it, and generates user-device performance evaluation indexes that reveal the device's processing capacity and rendering performance. These indexes determine whether the device has sufficient performance to carry out real-time rendering tasks. Managing rendering-load resources for the model rendering effect image optimizes the resources required during rendering, so real-time rendering runs smoothly on the user device, exploits its performance to the fullest, and delivers a fluent, high-quality rendering experience.
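The evaluate-then-adapt logic of step S62 can be sketched as a scoring function over device data. The field names, weights, and level thresholds below are all hypothetical assumptions; real values would come from the system API call of step S61.

```python
def performance_score(device):
    """Combine system-API readings into one evaluation index (field names
    and weights are hypothetical assumptions, not from the patent)."""
    return (device["cpu_cores"] * 1.0
            + device["memory_gb"] * 0.5
            + (4.0 if device["webgl2"] else 0.0))

def pick_render_level(score):
    """Map the evaluation index to a rendering-precision level, so strong
    devices load full detail and weak ones a coarse mesh."""
    if score >= 12:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

workstation = {"cpu_cores": 8, "memory_gb": 16, "webgl2": True}   # scores 20.0
phone = {"cpu_cores": 4, "memory_gb": 3, "webgl2": False}         # scores 5.5
```

A renderer would use the chosen level to decide which mesh resolution to request, which is the adaptive precision adaptation the passage describes.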
In summary, obtaining and parsing the original model file converts the STEP-format model into original parsed standard data, giving the subsequent steps an operable data structure. Data preprocessing and three-dimensional mesh conversion turn that data into operable model structured data, a format more convenient for the later rendering and optimization steps, so the model is processed and rendered more efficiently. Compression and streaming turn the optimized rendering model into a compact rendering-model data packet, and comparing the entity-information weight data against the preset rendering-precision threshold yields an adaptive STEP-format model file that meets different requirements and bandwidth limits. Network transmission and distributed parallel processing spread the rendering work across nodes, improving rendering speed and image quality, and adaptive precision adjustment tailors the result to the performance characteristics of the user's device, providing a smoother, higher-quality rendering experience with flexible processing and adaptation according to the user equipment.
Therefore, the invention carries out progressive hierarchical rendering on the STEP format model and carries out parallel processing through the nodes so as to improve the rendering precision and the load capacity of the WEB terminal.
Drawings
FIG. 1 is a flow chart of the steps of a STEP format rendering method at the WEB end;
FIG. 2 is a flowchart illustrating the detailed implementation of step S2 in FIG. 1;
FIG. 3 is a flowchart illustrating the detailed implementation of step S3 in FIG. 1;
FIG. 4 is a flowchart illustrating the detailed implementation of step S32 in FIG. 3;
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The following is a clear and complete description of the technical method of the present patent in conjunction with the accompanying drawings, and it is evident that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, are intended to fall within the scope of the present invention.
Furthermore, the drawings are merely schematic illustrations of the present invention and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and repeated description of them is omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. The functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different network and/or processor devices and/or microcontroller devices.
It will be understood that, although the terms "first," "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
To achieve the above objective, referring to FIGS. 1 to 4, a STEP format rendering method at the WEB end comprises the following steps:
Step S1: obtaining an original STEP-format model file through a WEB-end file transmission channel; parsing the STEP-format model file with a STEP file parsing library to generate original parsed standard data;
Step S2: preprocessing the original parsed standard data to generate model structured data; converting the model structured data into a standard three-dimensional model mesh with a polygon-mesh generation algorithm;
Step S3: rendering the three-dimensional model mesh with a progressive model parsing mechanism to generate an updated rendering model; optimizing the updated rendering model with a view-culling optimization algorithm to generate an optimized rendering model;
Step S4: compressing the optimized rendering model with a geometric coding compression algorithm to generate a rendering-model data packet; unpacking the rendering-model data packet with a streaming technique to generate model-rendering entity-information weight data; comparing the weight data with a preset rendering-precision threshold to generate a STEP-format model file;
Step S5: transmitting the STEP-format model file to the user side through network nodes for task packaging to generate real-time rendering tasks; processing the real-time rendering tasks in parallel across nodes with a distributed rendering technique to generate a model rendering effect image;
Step S6: acquiring user-device performance data; adapting the rendering precision of the model rendering effect image to the user-device performance data to generate a STEP-format real-time rendering model.
According to the invention, the STEP format original model file is acquired through the WEB end file transmission channel, file data analysis processing is carried out on the STEP format model file according to the STEP file analysis library, so that effective acquisition of data used in the subsequent processing STEPs from the file uploaded by a user can be ensured, model data can be ensured to be effectively acquired and converted into original analysis standard data, and a solid foundation is laid for subsequent model processing and analysis; the original analysis standard data is subjected to data preprocessing, so that noise can be reduced, missing data can be filled, the effect of the subsequent processing STEPs can be improved, uncertainty and errors can be reduced, the original data can be converted into a more structured data form by preprocessing, the STEP format model file is subjected to file data analysis processing according to an STEP file analysis library, continuous geometric shapes can be expressed into a grid structure formed by simple shapes such as triangles or quadrilaterals, and the quality and the structuring degree of the model data can be improved; the three-dimensional model mesh is subjected to three-dimensional model rendering processing by utilizing a progressive model analysis mechanism, model details can be loaded gradually as required, the whole complex three-dimensional model is prevented from being loaded at one time, so that the rendering efficiency is improved, meanwhile, the rendering workload on invisible objects can be reduced by visual field rejection optimization, the rendering efficiency is further improved, the real-time rendering is smoother and faster, and model details with different resolutions can be loaded as required through progressive model analysis, so that the memory requirement is reduced. 
The updated rendering model is rendered and optimized according to the visual field eliminating and optimizing algorithm, so that the rendering of invisible objects can be reduced, and the use of memory is further saved, thereby providing more vivid and detailed rendering results and improving the sense of reality and visual quality of scenes; the method has the advantages that the file compression processing is carried out on the optimized rendering model based on the geometric coding compression algorithm, the volume of rendering model data can be obviously reduced, the occupation of storage space and the bandwidth requirement during network transmission are reduced, the data transmission efficiency is improved, the streaming technology is adopted to carry out the simulation unpacking processing on the rendering model data packet, the rendering model data can be loaded while being transmitted, the waiting time is reduced, faster loading and rendering starting are realized, the user interaction response speed is accelerated, the rendering precision of the model rendering entity information weight data is compared with the preset rendering precision threshold value, the rendering precision of the model can be dynamically controlled, and the rendering quality and performance requirement are flexibly balanced according to the requirement and the performance of display equipment; the STEP format model file is transmitted to the user side through the network node for task packaging processing, so that the computing resource can be fully utilized, and the rendering speed is increased. 
This shortens the completion time of rendering tasks and improves user experience and working efficiency. Node parallel processing of the real-time rendering task using distributed rendering technology can be expanded to multiple rendering nodes, so that large-scale and complex model rendering tasks can be handled; even when facing the limitations of memory and computing resources, large models can be rendered efficiently without system crashes or performance degradation. Obtaining user equipment performance data and carrying out self-adaptive rendering precision adaptation on the model rendering effect image according to that data saves computing resources, accelerates rendering, supports a wide range of device types and configurations, and improves the rendering effect and level of detail. Therefore, the invention performs progressive hierarchical rendering on the STEP format model and parallel processing across nodes, so as to improve the rendering precision and load capacity of the WEB terminal.
In the embodiment of the present invention, referring to fig. 1, a flow diagram of the STEP format rendering method at the WEB end of the present invention is shown; in this example, the STEP format rendering method at the WEB end includes the following steps:
Step S1: obtaining an STEP format original model file through a WEB end file transmission channel; carrying out file data analysis processing on the STEP format model file according to the STEP file analysis library to generate original analysis standard data;
In the embodiment of the invention, a web page or application program with an uploading function is created for the user to upload a model file in the STEP format; the user is allowed to select a file and upload it to the server through an HTML form or an API interface. An analysis library or tool suitable for processing the STEP format, such as an open-source STEP file analysis library or a professional CAD software library, is selected, and the back-end server code uses it to read and analyze the STEP file uploaded by the user. The analysis library can parse each data item of the STEP file into an operable data structure, such as geometric data, metadata and attributes, so that the STEP format original model file is obtained. The operating environment and dependencies required by the analysis library or tool, such as a Python environment or other related software, are installed and imported into the server-side code; functions or APIs provided by the analysis library are used to read the STEP file data and extract the required geometric, metadata and attribute information from the returned result, generating the original analysis standard data.
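As a minimal sketch, independent of any particular analysis library, the entity records of a STEP (ISO-10303-21) file's DATA section can be pulled into an operable dictionary with the Python standard library alone; the sample file content and the regular expression below are illustrative assumptions, not the behavior of a specific parser:

```python
import re

# Illustrative STEP (ISO-10303-21) file content; entity ids and types are made up.
SAMPLE_STEP = """ISO-10303-21;
HEADER;
FILE_NAME('bracket.step','2023-07-26',(''),(''),'','','');
ENDSEC;
DATA;
#10=CARTESIAN_POINT('',(0.,0.,0.));
#11=CARTESIAN_POINT('',(10.,0.,5.));
#12=ADVANCED_FACE('',(#20),#30,.T.);
ENDSEC;
END-ISO-10303-21;
"""

# One record per line: "#<id>=<TYPE>(<arguments>);"
ENTITY_RE = re.compile(r"#(\d+)\s*=\s*([A-Z0-9_]+)\s*\((.*)\);")

def parse_step_entities(text):
    """Map entity id -> (type, raw argument string) for each DATA-section record."""
    entities = {}
    for m in ENTITY_RE.finditer(text):
        entities[int(m.group(1))] = (m.group(2), m.group(3))
    return entities

entities = parse_step_entities(SAMPLE_STEP)
```

A production parser would additionally resolve the `#n` cross-references between records and interpret each entity type's argument list; this sketch only shows the first stage of turning the uploaded file into an operable data structure.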
Step S2: carrying out data preprocessing on the original analysis standard data to generate model structured data; performing three-dimensional grid conversion processing on the model structured data by utilizing a polygonal grid generation algorithm to obtain a standard three-dimensional model grid;
In the embodiment of the invention, the original analysis standard data is preprocessed to generate the model structured data, including removing unnecessary data, merging repeated entities, repairing geometric errors or performing other processing; the model structured data is then converted into a standard three-dimensional model grid by selecting a suitable polygonal grid generation algorithm, such as a triangulation (Triangulation) algorithm, a boundary representation (Boundary Representation) algorithm or a voxelization (Voxelization) algorithm, to obtain the standard three-dimensional model grid.
Step S3: performing three-dimensional model rendering processing on the three-dimensional model grids by utilizing a progressive model analysis mechanism to generate an updated rendering model; performing rendering optimization treatment on the updated rendering model according to the visual field rejection optimization algorithm to generate an optimized rendering model;
In the embodiment of the invention, the three-dimensional model grids are initially subdivided by selecting a proper initial subdivision level; the grids of the current subdivision level are iteratively refined by using a subdivision algorithm to generate finer grids, and whether the refinement process should stop is judged according to a preset termination condition. The termination condition may be based on rendering effect, resource limitations, user demand and the like; this generates the updated rendering model, which is then optimized according to the visual field rejection optimization algorithm. The visual field processing includes: determining the viewpoint position of the observer and the camera parameters, including the viewing angle, near clipping plane, far clipping plane and the like; determining which parts of the model intersect with or are contained in the view frustum by calculating the intersection relation between the view frustum and the model; and removing invisible model parts according to the result of the visual field calculation, retaining only the visible parts for rendering. The rendering optimization process includes: compressing the textures of the model to reduce texture storage space and transmission bandwidth; improving rendering speed and quality through special functions of graphics hardware and acceleration algorithms such as graphics accelerators and shaders; and grouping objects to be rendered to reduce the number of rendering calls, thereby generating the optimized rendering model.
Step S4: performing file compression processing on the optimized rendering model based on a geometric coding compression algorithm to generate a rendering model data packet; simulating unpacking processing is carried out on the rendering model data packet by adopting a streaming technology, and model rendering entity information weight data is generated; comparing the model rendering entity information weight data with a preset rendering precision threshold value to generate a STEP format model file;
In the embodiment of the invention, the geometric data of the optimized rendering model is encoded, and a suitable compression algorithm is applied to the encoded data to generate the rendering model data packet containing the encoded and compressed model data. Streaming technology allows a large data packet to be divided into smaller data blocks during network transmission; streaming of the rendering model data packet is implemented as follows: the rendering model data packet is divided into smaller data blocks, which are transmitted and received one by one using streaming techniques such as segmented transmission and frame transmission, and the receiving end performs simulated unpacking processing, recombining the received data blocks into the complete rendering model data packet. The model rendering entity information weight data, which refers to data related to the entities of the rendering model and their attributes, is processed as follows: entity information and related attributes of the rendering model are extracted using corresponding algorithms and data structures; weight information of each entity, such as texture weight and color weight, is calculated; the model rendering entity information weight data is matched and correlated with the optimized rendering model data; and the model rendering entity information weight data is compared with the preset rendering precision threshold, generating the STEP format model file.
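The compress-then-chunk-then-reassemble flow above can be sketched in a few lines; here the general-purpose `zlib` codec stands in for the patent's geometric coding compression algorithm, and the chunk size is an arbitrary illustrative value:

```python
import zlib

def pack_model(payload: bytes, chunk_size: int = 64) -> list:
    """Compress the encoded model data, then split it into fixed-size blocks
    for streaming transmission (the last block may be shorter)."""
    compressed = zlib.compress(payload, level=9)
    return [compressed[i:i + chunk_size] for i in range(0, len(compressed), chunk_size)]

def unpack_stream(chunks) -> bytes:
    """Receiver side: reassemble the blocks one by one, then decompress,
    recovering the complete rendering model data packet."""
    return zlib.decompress(b"".join(chunks))

# Stand-in model data: repetitive mesh text compresses well.
model_bytes = b"v 0.0 0.0 0.0\nv 1.0 0.0 0.0\nv 0.0 1.0 0.0\nf 1 2 3\n" * 50
chunks = pack_model(model_bytes)
restored = unpack_stream(chunks)
```

In a real streaming pipeline the receiver would begin decoding as soon as the first blocks arrive rather than waiting for the full packet; the round-trip here only demonstrates that chunked transmission is lossless.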
Step S5: transmitting the STEP format model file to a user side through a network node to carry out task packaging processing, and generating a real-time rendering task; performing node parallel processing on the real-time rendering task by using a distributed rendering technology to generate a model rendering effect image;
In the embodiment of the invention, the file is transmitted using a file transfer protocol (such as FTP or HTTP) or a network sharing mode (such as network file sharing or cloud storage), and the generated STEP format model file is transmitted to the user side through the network node. Task packaging processing is carried out on the user side, including analyzing the received STEP file, extracting model data and rendering parameters, and generating the real-time rendering task, which is then processed in parallel across nodes using distributed rendering technology. Distributed rendering can utilize the parallel computing capacity of multiple computers or servers to accelerate rendering tasks; a distributed task scheduling framework (such as Apache Hadoop or Apache Spark) or custom task scheduling logic is used to process the real-time rendering task across nodes in parallel. After each node receives a subtask, it performs the actual rendering computation using a rendering engine or rendering software; data exchange and synchronization among nodes can use message passing or shared storage. The final rendering result is transmitted to the user terminal through the network node, generating the model rendering effect image.
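The task packaging and node-parallel pattern can be illustrated with a tile-based split, where a thread pool stands in for the distributed rendering nodes and a trivial pixel count stands in for the per-node rendering computation (both are assumptions for demonstration):

```python
from concurrent.futures import ThreadPoolExecutor

def split_into_tiles(width, height, tile):
    """Package the full-frame rendering task as independent tile subtasks."""
    return [(x, y, min(tile, width - x), min(tile, height - y))
            for y in range(0, height, tile)
            for x in range(0, width, tile)]

def render_tile(task):
    # Stand-in for the per-node rendering computation: count covered pixels.
    x, y, w, h = task
    return w * h

tasks = split_into_tiles(640, 480, 256)
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(render_tile, tasks))
total_pixels = sum(results)
```

In an actual distributed setup the tiles would be dispatched to remote workers over a scheduling framework and the rendered fragments merged back into the model rendering effect image; the split/gather structure stays the same.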
Step S6: acquiring performance data of user equipment; and performing self-adaptive rendering precision adaptation processing on the model rendering effect image according to the user equipment performance data to generate a STEP format real-time rendering model.
In the embodiment of the invention, in the user side application program, a suitable API or library is used to acquire performance data of the user equipment, which may include information such as the type and speed of the processor (CPU), the model and performance indices of the graphics processor (GPU), the operating system and the memory. The acquired user equipment performance data is analyzed and evaluated, and a rendering precision level suitable for the device is determined according to it: lower-performance devices may adopt lower rendering precision, while higher-performance devices may adopt higher rendering precision. The model rendering effect image is adapted according to the rendering precision level, adjusting parameters such as illumination effect, material quality and level of detail; based on the device performance data and the self-adaptive rendering precision adaptation, the model is re-rendered using a rendering engine or rendering software, and the generated real-time rendering model is stored in the STEP format.
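The device-to-precision mapping can be sketched as a simple scoring rule; the thresholds and the three-tier scale below are illustrative assumptions, not values from the patent:

```python
def choose_render_precision(cpu_ghz: float, ram_gb: int, gpu_tier: int) -> str:
    """Score the device performance data and pick a rendering precision level.
    gpu_tier: 0 = integrated, 1 = entry, 2 = mid-range, 3 = high-end (assumed scale)."""
    score = 0
    if cpu_ghz >= 2.5:
        score += 1
    if ram_gb >= 8:
        score += 1
    if gpu_tier >= 2:
        score += 1
    if score >= 3:
        return "high"    # full detail level, full-quality materials
    if score == 2:
        return "medium"  # reduced texture resolution, simplified lighting
    return "low"         # coarse level-of-detail only
```

The chosen level would then drive the adaptation parameters (level of detail, material quality, illumination effects) before re-rendering.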
Preferably, step S1 comprises the steps of:
step S11: receiving a STEP format original model file uploaded by a user through an uploading channel in the WEB terminal application program;
step S12: carrying out file initialization on the STEP format original model file based on the FreeCAD analysis library to obtain an original model initialization file;
Step S13: carrying out file analysis processing on the original model initialization file by utilizing an analysis function built in the analysis library to generate original model analysis data;
Step S14: performing anomaly detection processing on the original model analysis data; when missing data is detected in the original model analysis data, performing data restoration processing on the original model analysis data to generate original model restoration data; when no abnormality is detected in the original model analysis data, performing STEP standard processing on the original model analysis data to generate original analysis standard data.
According to the invention, the STEP format original model file uploaded by the user is received through the uploading channel in the WEB terminal application program, and the STEP format original model file is initialized based on the FreeCAD analysis library, so that correct loading and processing of the model file can be ensured and a necessary basis provided for the subsequent analysis steps; the original model initialization file is subjected to file analysis processing by utilizing the analysis function built into the analysis library, so that the geometric information, topological relations and other related data of the model can be extracted into an operable form; anomaly detection on the original model analysis data, with data restoration when data is missing, ensures the integrity of the model data; and when no abnormality is detected, STEP standard processing of the original model analysis data ensures that the model data conforms to the STEP standard specification.
In the embodiment of the invention, a file uploading channel is provided in the WEB end application program, allowing the user to select and upload a STEP format original model file; a suitable back-end technology and programming language (such as Python or Node.js) is used to receive the uploaded file and store it at a designated location on the server. The FreeCAD analysis library is used to read and process the STEP format model file: FreeCAD is open-source computer aided design (CAD) software that provides functions for handling the STEP format. The initialization function of the FreeCAD analysis library is used to initialize the uploaded STEP format original model file and create an initialization file of the model, obtaining the original model initialization file. The analysis function in the FreeCAD analysis library is then called with the original model initialization file as input, parsing the file content into an operable and processable data structure; according to the return result of the function, analysis data of the original model is obtained, which may include geometric information, entity attributes and the like. Anomaly detection is performed on the analysis data; if missing data or error conditions are found, a corresponding data restoration algorithm or method is applied to the detected gaps, inferring, estimating or completing the data according to the existing information so as to restore the integrity of the original model. After the restoration processing is completed, the restored original model data, namely the original model restoration data, is obtained. If no abnormality is detected, the original model analysis data conforms to the STEP standard and can be used directly to generate the original analysis standard data.
Preferably, step S2 comprises the steps of:
Step S21: carrying out data denoising processing on the original analysis standard data to generate original analysis denoising data; performing outlier detection processing on the original analysis denoising data by using an absolute deviation median method to generate original analysis outlier data; performing outlier substitution processing on the original analysis outlier data through mean calculation to generate standard analysis data;
step S22: setting a time domain signal according to modeling requirements; detrending the preset time domain signal to obtain a standard time domain signal; performing signal transformation processing on the preset time domain signal by using a fast Fourier transform algorithm to generate a model frequency domain signal; carrying out frequency domain feature extraction processing on the model frequency domain signal according to the phase spectrum to generate frequency domain feature data;
Step S23: carrying out data frame integration processing on the frequency domain characteristic data and the standard analysis data to generate model integration data; performing data set balance processing on the model integrated data based on an oversampling method to generate model structured data;
Step S24: performing three-dimensional grid conversion processing on the model structured data based on a Delaunay triangulation algorithm to generate a three-dimensional model grid; and carrying out smoothing treatment on the three-dimensional model grid by using a grid simplification algorithm to generate a standard three-dimensional model grid.
According to the method, data denoising of the original analysis standard data eliminates noise and abnormal data and improves data quality and accuracy; outlier detection on the original analysis denoising data using the absolute deviation median (MAD) method improves the quality, accuracy and usability of the model data; and replacing outliers in the original analysis outlier data through mean calculation improves data reliability, enabling better subsequent analysis and modeling. A time domain signal is set according to modeling requirements and detrended, revealing the frequency domain characteristics and important signal components of the model; signal transformation of the preset time domain signal using the fast Fourier transform algorithm improves the visualization of the data so that its features can be better extracted, and frequency domain feature extraction on the model frequency domain signal according to the phase spectrum improves the clarity of the data, making it easier to observe. Data frame integration of the frequency domain feature data and the standard analysis data improves the distribution of the data and the balance of samples; data set balancing of the model integrated data based on an oversampling method improves the accuracy of modeling and analysis results and addresses the problem of data imbalance. Three-dimensional grid conversion of the model structured data based on the Delaunay triangulation algorithm reduces the complexity and storage space requirements of the model, and smoothing the three-dimensional model grid with a grid simplification algorithm improves the visualization effect and performance of the model.
As an example of the present invention, referring to fig. 2, the step S2 in this example includes:
Step S21: carrying out data denoising processing on the original analysis standard data to generate original analysis denoising data; performing outlier detection processing on the original analysis denoising data by using an absolute deviation median method to generate original analysis outlier data; performing outlier substitution processing on the original analysis outlier data through mean calculation to generate standard analysis data;
In the embodiment of the invention, a suitable data denoising algorithm, such as moving average or median filtering, is applied to the original analysis standard data to reduce noise, generating the original analysis denoising data; outlier detection is then performed on the denoised data using the absolute deviation median (MAD) method to find possible abnormal values, generating the original analysis outlier data; finally, each detected outlier is replaced by a value obtained through mean calculation, generating the standard analysis data.
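The MAD detection and mean substitution steps can be sketched in pure Python; the cutoff factor k = 3 is a conventional choice assumed here, not a value from the patent:

```python
def median(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def mad_outlier_mask(data, k=3.0):
    """Flag points whose absolute deviation from the median exceeds
    k times the median absolute deviation (MAD)."""
    med = median(data)
    mad = median([abs(x - med) for x in data])
    if mad == 0:
        return [False] * len(data)
    return [abs(x - med) > k * mad for x in data]

def replace_outliers_with_mean(data, k=3.0):
    """Substitute each detected outlier with the mean of the inlier points."""
    mask = mad_outlier_mask(data, k)
    inliers = [x for x, bad in zip(data, mask) if not bad]
    mean = sum(inliers) / len(inliers)
    return [mean if bad else x for x, bad in zip(data, mask)]

noisy = [1.0, 1.1, 0.9, 1.05, 0.95, 50.0]   # illustrative sample with one outlier
cleaned = replace_outliers_with_mean(noisy)
```

MAD is preferred over the standard deviation here because a single extreme value (the 50.0) barely shifts the median, so the detector is not corrupted by the very outliers it is meant to find.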
Step S22: setting a time domain signal according to modeling requirements; detrending the preset time domain signal to obtain a standard time domain signal; performing signal transformation processing on the preset time domain signal by using a fast Fourier transform algorithm to generate a model frequency domain signal; carrying out frequency domain feature extraction processing on the model frequency domain signal according to the phase spectrum to generate frequency domain feature data;
In the embodiment of the invention, the required time domain signal is determined according to modeling requirements; this may be a preset signal, an observed signal or another specific signal type. Detrending processing is carried out on the preset time domain signal to remove trend components, obtaining the standard time domain signal; a fast Fourier transform (FFT) algorithm converts the standard time domain signal into a frequency domain signal, obtaining the frequency domain representation of the model and generating the model frequency domain signal; and feature extraction is carried out on the model frequency domain signal based on an analysis method of the phase spectrum, obtaining the frequency domain feature data.
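The detrend-then-FFT pipeline can be sketched with NumPy; the 5 Hz test tone riding on a linear trend is an invented example signal, and the least-squares linear fit is one common detrending choice:

```python
import numpy as np

fs = 100.0                           # sample rate in Hz (illustrative)
t = np.arange(0, 1.0, 1.0 / fs)      # 100 samples over one second
# Preset time-domain signal: a 5 Hz tone plus a linear trend component.
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * t

# Detrend: subtract the least-squares linear fit -> standard time-domain signal.
coeffs = np.polyfit(t, signal, 1)
detrended = signal - np.polyval(coeffs, t)

# FFT to the frequency domain; magnitude and phase spectra serve as features.
spectrum = np.fft.rfft(detrended)
freqs = np.fft.rfftfreq(len(detrended), d=1.0 / fs)
magnitude = np.abs(spectrum)
phase = np.angle(spectrum)           # phase spectrum used for feature extraction
dominant_hz = freqs[np.argmax(magnitude)]
```

Removing the trend before transforming matters: an uncorrected linear ramp leaks energy across all low-frequency bins and can mask the genuine dominant component.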
Step S23: carrying out data frame integration processing on the frequency domain characteristic data and the standard analysis data to generate model integration data; performing data set balance processing on the model integrated data based on an oversampling method to generate model structured data;
In the embodiment of the invention, the frequency domain characteristic data and the standard analysis data are combined or integrated to create a comprehensive data set containing two data sources to generate the model integration data, and if the model integration data is unbalanced, an oversampling method (such as an SMOTE algorithm) can be adopted to generate a synthetic sample so as to balance the data set, and the generated data is the structured data of the model after the processing is completed.
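A simplified, SMOTE-style oversampling step can be sketched in pure Python; interpolating toward the single nearest minority neighbour is a minimal stand-in for full SMOTE, which samples among k nearest neighbours:

```python
import random

def smote_like_oversample(minority, n_new, seed=0):
    """Generate synthetic minority-class samples by interpolating between a
    random minority point and its nearest minority neighbour (simplified SMOTE)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # Nearest other minority point by squared Euclidean distance.
        b = min((p for p in minority if p is not a),
                key=lambda p: sum((pa - pb) ** 2 for pa, pb in zip(a, p)))
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(pa + t * (pb - pa) for pa, pb in zip(a, b)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # illustrative minority samples
synthetic = smote_like_oversample(minority, 5)
```

Because every synthetic point lies on a segment between two existing minority samples, the new points stay inside the minority region rather than duplicating existing rows, which is what distinguishes SMOTE-style balancing from naive copying.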
Step S24: performing three-dimensional grid conversion processing on the model structured data based on a Delaunay triangulation algorithm to generate a three-dimensional model grid; and carrying out smoothing treatment on the three-dimensional model grid by using a grid simplification algorithm to generate a standard three-dimensional model grid.
In the embodiment of the invention, the structured data of the model is converted into a three-dimensional grid representation using the Delaunay triangulation algorithm; Delaunay triangulation is a common three-dimensional grid generation method that generates non-overlapping triangular meshes from a given point set. The generated three-dimensional model grid is then smoothed; a smoothing algorithm (such as Laplacian smoothing) can be used to eliminate noise and irregularity on the grid surface, obtaining a smoother standard three-dimensional model grid.
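Laplacian smoothing itself is compact enough to sketch directly; each vertex is pulled toward the average of its neighbours. The three-vertex line with a displaced middle point is an invented toy mesh (vertices with an empty neighbour list are treated as fixed boundary points):

```python
def laplacian_smooth(vertices, neighbors, iterations=10, lam=0.5):
    """Laplacian smoothing: move each vertex a fraction `lam` of the way toward
    the centroid of its neighbours. `neighbors[i]` lists indices adjacent to
    vertex i; a vertex with no listed neighbours is left fixed."""
    verts = [list(v) for v in vertices]
    for _ in range(iterations):
        new = []
        for i, v in enumerate(verts):
            if not neighbors[i]:
                new.append(v[:])          # boundary vertex: unchanged
                continue
            avg = [sum(verts[j][k] for j in neighbors[i]) / len(neighbors[i])
                   for k in range(3)]
            new.append([v[k] + lam * (avg[k] - v[k]) for k in range(3)])
        verts = new
    return verts

# Toy mesh: a spike at the middle of a straight segment.
vertices = [(0.0, 0.0, 0.0), (0.5, 0.0, 1.0), (1.0, 0.0, 0.0)]
neighbors = [[], [0, 2], []]
smoothed = laplacian_smooth(vertices, neighbors)
```

Each iteration halves the spike height here (lam = 0.5 against a neighbour average of zero), so the irregularity decays geometrically while the fixed endpoints preserve the overall shape.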
Preferably, step S3 comprises the steps of:
Step S31: carrying out first-round layering treatment on the standard three-dimensional model grid by utilizing a progressive model analysis mechanism to generate a three-dimensional model rough level; carrying out model detail feature stripping treatment on the three-dimensional model rough level to generate model detail fragments; layering the rough three-dimensional model level based on the model detail fragments in a second round to generate a three-dimensional model detail level; performing level priority rendering sequencing treatment on the rough level of the three-dimensional model and the detail level of the three-dimensional model to generate a three-dimensional rendering progressive model;
step S32: performing model rendering processing on the three-dimensional rendering progressive model based on texture mapping to generate a three-dimensional rendering model; performing model updating processing on the three-dimensional rendering model to generate an updated rendering model;
Step S33: performing visual field elimination processing on the updated rendering model according to a preset camera view cone to generate a visual field rendering model; judging the visual field rendering model by using a rejection algorithm, and when the grids in the visual field rendering model are not in the visual field, performing rendering queue rejection by using a visual field rejection optimization algorithm to generate an optimized rendering model.
The invention generates a rough level of the three-dimensional model by first performing a first round of layering processing. And then carrying out model detail characteristic stripping treatment on the rough level to generate model detail fragments. Then, performing a second round of layering processing based on the model detail fragments to generate a detail level of the three-dimensional model, so that progressive loading and optimized rendering are realized, and the rendering efficiency and performance are improved; the rough level and the detail level of the three-dimensional model are subjected to level priority rendering sorting treatment, namely, the rough level is rendered firstly, then the detail is gradually added, so that better visual experience and rendering quality can be provided while the rendering efficiency is ensured; performing model rendering processing on the three-dimensional rendering progressive model by using texture mapping to generate a three-dimensional rendering model, wherein the three-dimensional rendering model can be presented as a visualized object with color, texture and illumination effect; and then, judging the visual field rendering model through a rejection algorithm, and excluding grids which are not in the visual field from a rendering queue, so that unnecessary rendering calculation can be reduced, rendering performance can be improved, and an optimized rendering model is generated, thereby being beneficial to improving performance, efficiency and user experience in the display and rendering processes of the three-dimensional model.
As an example of the present invention, referring to fig. 3, the step S3 in this example includes:
Step S31: carrying out first-round layering treatment on the standard three-dimensional model grid by utilizing a progressive model analysis mechanism to generate a three-dimensional model rough level; carrying out model detail feature stripping treatment on the three-dimensional model rough level to generate model detail fragments; layering the rough three-dimensional model level based on the model detail fragments in a second round to generate a three-dimensional model detail level; performing level priority rendering sequencing treatment on the rough level of the three-dimensional model and the detail level of the three-dimensional model to generate a three-dimensional rendering progressive model;
In the embodiment of the invention, the model grid is decomposed into rough levels in a progressive manner; in this process, algorithms and data structures such as octrees or quadtrees can be used to represent and manage the hierarchical data efficiently. Detailed features of the model, such as edges and curves, are extracted from the rough level and represented as model detail fragments, generating the model detail fragments. The first round layering result is then further refined: the detail information of the model is added into the hierarchical structure to obtain finer levels, generating the three-dimensional model detail levels. Finally, the different levels of the model are ordered by priority using rendering optimization algorithms and techniques, and the model is displayed progressively in hierarchical order.
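The level-priority ordering can be sketched as a sort over the fragment queue; the `level`/`size` fields and the rule "coarsest level first, larger fragments first within a level" are illustrative assumptions about how priority might be assigned:

```python
def order_render_queue(fragments):
    """Level-priority rendering order: coarse levels (level 0) first, and within
    a level, fragments with larger visual contribution first."""
    return sorted(fragments, key=lambda f: (f["level"], -f["size"]))

queue = order_render_queue([
    {"id": "detail-a", "level": 2, "size": 1.0},
    {"id": "coarse",   "level": 0, "size": 40.0},
    {"id": "mid-b",    "level": 1, "size": 3.0},
    {"id": "mid-a",    "level": 1, "size": 8.0},
])
```

Rendering the queue front-to-back gives the progressive effect described above: a recognizable coarse model appears immediately, and detail fragments refine it as they are processed.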
Step S32: performing model rendering processing on the three-dimensional rendering progressive model based on texture mapping to generate a three-dimensional rendering model; performing model updating processing on the three-dimensional rendering model to generate an updated rendering model;
In the embodiment of the invention, the texture mapping technique maps a texture image onto the surface of the model to enrich its appearance and detail; for example, texture coordinates are assigned to the vertices of the model, and color information is fetched from the texture image according to the texture coordinates of each vertex. Model rendering processing is performed using, for example, a common rendering engine such as Unity, Unreal Engine, OpenGL or DirectX, applying the texture mapping to the model and generating realistic rendering results. The three-dimensional model is modified and updated using professional model editing software such as Blender, 3ds Max or Maya; during model updating, algorithms and data structures may be used to modify the model, for example a surface reconstruction algorithm may smooth the model, repair broken parts and remove invalid geometric data, generating the updated rendering model.
Step S33: performing visual field elimination processing on the updated rendering model according to a preset camera view cone to generate a visual field rendering model; judging the visual field rendering model by using a rejection algorithm, and when the grids in the visual field rendering model are not in the visual field, performing rendering queue rejection by using a visual field rejection optimization algorithm to generate an optimized rendering model.
In the embodiment of the invention, the camera view frustum is a geometric body representing the view range of the camera, usually a cone or a cube; according to the camera view frustum, it can be judged which parts of the model lie within the visual range of the camera. According to the result of the visual field rejection processing, namely the part of the model inside the camera view frustum, the visual field rendering model is generated and treated as the object to be displayed and processed during rendering. A suitable rejection algorithm then processes the visual field rendering model: an intersection test between the model and the view frustum judges whether the model is completely or partially inside the frustum; according to the result of the rejection algorithm, the parts of the visual field rendering model that are not in the visual field are removed, forming an optimized rendering queue and generating the optimized rendering model.
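The frustum intersection test can be sketched with the standard bounding-sphere-versus-plane check; the axis-aligned box frustum and the three test objects below are illustrative simplifications, not the patent's visibility formula:

```python
def sphere_visible(center, radius, planes):
    """A bounding sphere is rejected only when it lies entirely outside at least
    one frustum plane. Planes are (a, b, c, d) with unit normals pointing
    inward, so a*x + b*y + c*z + d >= 0 holds for points inside."""
    cx, cy, cz = center
    for a, b, c, d in planes:
        if a * cx + b * cy + c * cz + d < -radius:
            return False   # fully outside this plane -> cull
    return True            # intersects or lies inside every plane -> keep

# Assumed stand-in frustum: the box |x| <= 1, |y| <= 1, 1 <= z <= 10
# (near clipping plane z = 1, far clipping plane z = 10, camera looking +z).
FRUSTUM = [
    (1.0, 0.0, 0.0, 1.0),    # x >= -1
    (-1.0, 0.0, 0.0, 1.0),   # x <= 1
    (0.0, 1.0, 0.0, 1.0),    # y >= -1
    (0.0, -1.0, 0.0, 1.0),   # y <= 1
    (0.0, 0.0, 1.0, -1.0),   # z >= 1  (near plane)
    (0.0, 0.0, -1.0, 10.0),  # z <= 10 (far plane)
]

scene = [("teapot", (0.0, 0.0, 5.0), 0.5),    # in front of the camera
         ("offscreen", (5.0, 0.0, 5.0), 0.5), # far to the side
         ("behind", (0.0, 0.0, 0.2), 0.3)]    # in front of the near plane
visible = [name for name, c, r in scene if sphere_visible(c, r, FRUSTUM)]
```

Only objects surviving this test enter the rendering queue, which is what saves the per-frame work on invisible geometry.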
Preferably, the function formula of the visual field rejection optimization algorithm in step S33 is specifically as follows:
/>
In the method, in the process of the invention, Expressed as a function of the visibility decision value of the model in the grid,/>Expressed as abscissa, point,/>, in the mesh modelExpressed as ordinate in the mesh model,/>Expressed as vertical coordinates in the mesh model,/>Horizontal coordinate point expressed as camera position,/>Ordinate points expressed as camera position,/>Vertical coordinate point expressed as camera position,/>Expressed as camera field of view,/>Expressed as the number of meshes,/>Expressed as the intensity of illumination experienced by the model,/>Expressed as camera rotation angle,/>Expressed as the abscissa of the model illumination area,/>Expressed as the ordinate of the model illumination area,/>Expressed as vertical coordinates of the model illumination area,/>Expressed as model rendering threshold,/>Represented as a visual field optimization anomaly adjustment value.
The invention constructs a visual field eliminating optimization algorithm which is used for confirming the visual field of the model through the coordinates of the grid model, the position coordinates of the camera and the illumination intensity received by the model, and the visual field eliminating optimization algorithm can evaluate the influence of illumination on the visual field of the model according to the number of grids and the illumination intensity received by the model, so as to realize optimal visual field judgment, and dynamically and accurately position the model according to the rotation angle of the camera, thereby accurately determining the visual field judgment value function of the model in the grid. In practical application, the formula can judge the visual field of the model according to the spatial coordinates of the model, if the model area is in the visual field, the visual field range is accurately measured to receive illumination influence intensity, so that the model area rendering which is less influenced by illumination and is not in the visual field range is removed, and the huge system calculation amount caused by rendering is reduced. The formula fully considers the abscissa in the grid modelOrdinate/>, in a mesh modelVertical coordinates/>, in a mesh modelHorizontal coordinate point of camera position/>Ordinate point of camera position/>Vertical coordinate point of camera position/>Camera field of view/>Grid number/>The model is subjected to illumination intensity/>Camera rotation angle/>Abscissa/>, of model illumination areaOrdinate/>, of model illumination areaVertical coordinates/>, of model illumination areaModel rendering threshold/>Visual field optimization anomaly adjustment value/>According to the number of grids/>The interrelationship between the above parameters constitutes a functional relationship:
Through the interaction between the illumination intensity received by the model and the camera rotation angle, the extent of the edge and center regions of the model's visual field can be determined, and the visual field is optimized while the accuracy of the region data is preserved. The model rendering threshold reduces data redundancy without sacrificing data accuracy, saving computation and allowing the calculation to converge quickly, while the visual-field optimization anomaly adjustment value corrects the visibility decision so that the visibility decision value of the model in the grid is generated more accurately, improving the precision and reliability of visual-field culling optimization. Meanwhile, parameters in the formula such as the user's model rendering threshold and the camera rotation angle can be adjusted to the situation at hand, adapting the algorithm to different culling-optimization scenarios and improving its applicability and flexibility.
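As a concrete illustration of this culling idea, the sketch below combines a view-cone test with an illumination-weighted rendering threshold. All names, the forward-axis convention, and the distance-based light falloff are illustrative assumptions; the patent's actual decision function appears only as an image in the source and is not reproduced here.

```typescript
type Vec3 = { x: number; y: number; z: number };

interface CullingParams {
  cameraPos: Vec3;         // camera position (x0, y0, z0)
  fovDeg: number;          // camera field of view in degrees
  lightIntensity: number;  // illumination received by the model
  renderThreshold: number; // model rendering threshold
}

// Returns true when a grid point should be rendered: it must lie inside the
// view cone AND its illumination contribution must exceed the threshold.
function isVisible(p: Vec3, params: CullingParams): boolean {
  const { cameraPos, fovDeg, lightIntensity, renderThreshold } = params;
  // Vector from the camera to the grid point.
  const dx = p.x - cameraPos.x, dy = p.y - cameraPos.y, dz = p.z - cameraPos.z;
  const dist = Math.sqrt(dx * dx + dy * dy + dz * dz);
  if (dist === 0) return true;
  // Angle between the camera's forward axis (+z here, for simplicity) and the point.
  const cosAngle = dz / dist;
  const halfFov = (fovDeg / 2) * (Math.PI / 180);
  const inFrustum = cosAngle >= Math.cos(halfFov);
  // Assumed inverse-square falloff: cull cells whose illumination contribution
  // drops below the rendering threshold even when geometrically visible.
  const contribution = lightIntensity / (1 + dist * dist);
  return inFrustum && contribution >= renderThreshold;
}
```

A point straight ahead of the camera passes both tests, while a point behind the camera fails the frustum test and is culled before any rendering work is spent on it.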
Preferably, step S32 comprises the steps of:
Step S321: setting a rendering environment based on the STEP format, wherein the rendering environment includes a camera position and an illumination intensity; performing three-dimensional model construction processing on the model structured data based on the standard three-dimensional model grid to generate a three-dimensional construction model object;
Step S322: performing view transformation processing on the position of the camera to generate a three-dimensional view coordinate system; performing projection transformation processing on the three-dimensional building model object and the three-dimensional view coordinate system to generate a standardized equipment coordinate system;
step S323: performing dimension reduction processing on the three-dimensional building model object and a standardized equipment coordinate system to generate a two-dimensional model pixel point; rasterizing the two-dimensional model pixel points to generate two-dimensional model raster pixel points;
Step S324: performing texture mapping processing on the two-dimensional model grating pixel points according to the illumination intensity to generate two-dimensional model texture data; performing planar geometry clipping processing on texture data of the two-dimensional model based on a depth buffer algorithm, and eliminating invisible textures; generating two-dimensional model clipping data;
Step S325: cutting the two-dimensional model cutting data into data fragments to generate two-dimensional model cutting fragments; applying a preset fragment shader to a two-dimensional model cutting fragment to perform light ray compound calculation processing, and generating two-dimensional model light ray data;
step S326: performing data three-dimensional display processing on the two-dimensional model light data by utilizing a frame buffer technology to generate a three-dimensional rendering model;
Step S327: obtaining model update data through an external data source; and importing the model update data into the three-dimensional rendering model according to a preset time stamp to perform model update processing, and generating an updated rendering model.
According to the invention, the rendering environment is set based on the STEP format, including the camera position and the illumination intensity, and the structured model data is processed against a standard three-dimensional model mesh to generate a three-dimensional construction model object, establishing the rendering environment and building a visual model. View transformation of the camera position generates a three-dimensional view coordinate system, and projection transformation of the construction model object together with that coordinate system generates a normalized device coordinate system, determining the model's position and projection effect in rendering. Dimension reduction of the construction model object against the normalized device coordinate system generates two-dimensional model pixel points, and rasterizing them generates raster pixel points, converting the three-dimensional model into two-dimensional pixels on screen and preparing data for the subsequent texture mapping and rendering. Texture mapping of the raster pixel points according to the illumination intensity generates two-dimensional model texture data; planar geometry clipping of that texture data, based on a depth-buffer algorithm, removes invisible textures and generates two-dimensional model clipping data, which helps apply the proper textures and hide invisible parts from the user during rendering. The clipping data is then cut into data fragments, generating two-dimensional model clipping fragments.

Next, a preset fragment shader is applied to the clipping fragments to perform the light composite calculation, generating two-dimensional model light data and allowing illumination and color to be computed for each fragment of the rendering model. The light data is then displayed three-dimensionally using frame-buffer technology to generate a three-dimensional rendering model; update data for the model is obtained from an external data source and imported into the rendering model at a preset timestamp, generating an updated rendering model. This keeps the displayed model synchronized with external data, maintaining rendering accuracy and providing a high-quality visual result.
As an example of the present invention, referring to fig. 4, the step S32 in this example includes:
Step S321: setting a rendering environment based on the STEP format, wherein the rendering environment includes a camera position and an illumination intensity; performing three-dimensional model construction processing on the model structured data based on the standard three-dimensional model grid to generate a three-dimensional construction model object;
In the embodiment of the invention, the information required for rendering, including parameters such as the camera position and illumination intensity, is extracted from the STEP format data, and a three-dimensional model object is constructed using the corresponding APIs and tools: an empty three-dimensional construction model object is created in the rendering engine to hold the generated model data. In Unity, for example, a camera component can set the camera position and a light component can adjust the illumination intensity. The standard three-dimensional model data can be converted into the construction model object by loading a model file or creating a mesh object. The model is structured against the standard three-dimensional model mesh, whose data include vertex coordinates, face information, texture coordinates, and so on; the vertex information in the mesh is traversed, processed, and stored in the required format or data structure, and the resulting construction model object contains the structured data of the entire model for subsequent rendering and processing.
Step S322: performing view transformation processing on the position of the camera to generate a three-dimensional view coordinate system; performing projection transformation processing on the three-dimensional building model object and the three-dimensional view coordinate system to generate a standardized equipment coordinate system;
In the embodiment of the invention, parameters such as the camera's position, orientation, and up direction are taken as inputs to the view matrix, which is built from the camera parameters using a math library or graphics API. The view matrix transforms objects in the scene from the world coordinate system into the camera coordinate system, generating the three-dimensional view coordinate system. After the view transformation, the three-dimensional construction model object and the generated view coordinate system undergo projection transformation, converting them into clip space and then into the normalized device coordinate system. Clip space is a coordinate space centered on the camera and bounded by the near and far clipping planes; within it, scene objects are clipped and perspective-projected. The normalized device coordinate system is a standardized coordinate space with range [-1, 1]; within it, scene objects are perspective-projected and normalized before finally being rendered to the screen, generating the normalized device coordinate system.
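The projection step described above can be sketched with a standard OpenGL-style perspective matrix and perspective divide; this is generic graphics math rather than code from the patent, and all names are illustrative.

```typescript
type Vec4 = [number, number, number, number];
type Mat4 = number[]; // 16 entries, column-major

// Standard OpenGL-style perspective projection: camera space -> clip space.
function buildPerspective(fovYDeg: number, aspect: number, near: number, far: number): Mat4 {
  const f = 1 / Math.tan((fovYDeg * Math.PI) / 360);
  const nf = 1 / (near - far);
  return [
    f / aspect, 0, 0, 0,
    0, f, 0, 0,
    0, 0, (far + near) * nf, -1,
    0, 0, 2 * far * near * nf, 0,
  ];
}

// Column-major 4x4 matrix times column vector.
function mulMat4Vec4(m: Mat4, v: Vec4): Vec4 {
  const out: Vec4 = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    out[row] = m[row] * v[0] + m[4 + row] * v[1] + m[8 + row] * v[2] + m[12 + row] * v[3];
  }
  return out;
}

// Perspective divide: clip space -> normalized device coordinates in [-1, 1].
function toNDC(clip: Vec4): [number, number, number] {
  return [clip[0] / clip[3], clip[1] / clip[3], clip[2] / clip[3]];
}
```

With this convention, a camera-space point on the near plane maps to NDC depth -1 and a point on the far plane maps to +1, which is exactly the [-1, 1] normalized range the text describes.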
Step S323: performing dimension reduction processing on the three-dimensional building model object and a standardized equipment coordinate system to generate a two-dimensional model pixel point; rasterizing the two-dimensional model pixel points to generate two-dimensional model raster pixel points;
In the embodiment of the invention, the three-dimensional construction model object is converted from its original coordinate system into the normalized device coordinate system, a standardized coordinate space whose range is typically [-1, 1]. The vertex coordinates of the model object undergo perspective or orthographic projection and are converted into two-dimensional coordinates in the normalized device coordinate system: the x, y, and z coordinates of each vertex are computed and mapped to the coordinates of a corresponding two-dimensional model pixel point, generating the two-dimensional model pixel points. Rasterization is the process of mapping these pixel points onto actual pixels on the screen: the two-dimensional model pixel coordinates are mapped to screen pixel coordinates and, for a polygonal or curved-surface model, a scanline algorithm or another rasterization algorithm performs the mapping, interpolating color and depth values along each scanline. During rasterization, clipping algorithms can remove invisible parts to improve rendering efficiency.
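The mapping from normalized device coordinates to screen pixels might look like the sketch below; the top-left pixel origin (hence the y flip) is an assumption matching an HTML canvas, not something the patent specifies.

```typescript
// Viewport transform: NDC in [-1, 1] -> integer pixel coordinates.
function ndcToPixel(ndcX: number, ndcY: number, width: number, height: number): [number, number] {
  const px = Math.floor(((ndcX + 1) / 2) * width);
  // Flip y so that ndcY = +1 is the top row (top-left pixel origin).
  const py = Math.floor(((1 - ndcY) / 2) * height);
  // Clamp so ndc = +1 exactly still lands inside the framebuffer.
  return [Math.min(px, width - 1), Math.min(py, height - 1)];
}
```

For an 800x600 framebuffer, the NDC origin lands at pixel (400, 300), NDC (-1, 1) at the top-left corner (0, 0), and NDC (1, -1) at the bottom-right corner (799, 599).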
Step S324: performing texture mapping processing on the two-dimensional model grating pixel points according to the illumination intensity to generate two-dimensional model texture data; performing planar geometry clipping processing on texture data of the two-dimensional model based on a depth buffer algorithm, and eliminating invisible textures; generating two-dimensional model clipping data;
In the embodiment of the invention, texture mapping, the process of applying a texture image to the model, is performed on the rasterized two-dimensional model pixel points according to the illumination intensity. Each rasterized pixel corresponds to a pixel in the texture image, and its color value is computed from the illumination intensity or another lighting model. During texture mapping, texture coordinates locate the pixels in the texture image, and the texture color value of each rasterized pixel is interpolated from the texture coordinates and the texture image, generating the two-dimensional model texture data. A depth-buffer algorithm determines which parts of the model are visible: before texture mapping, each rasterized pixel's depth value is compared against the value stored in the depth buffer. If the current pixel's depth is smaller than the stored value at that position, the pixel is visible and texture mapping proceeds; if its depth is greater than or equal to the stored value, the pixel is occluded and no texture mapping is performed. After this planar geometry clipping, the two-dimensional model clipping data is generated, which may include the coordinate information and color values of the clipped pixel points.
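The depth-buffer visibility rule described above ("map the texture only when the fragment's depth is smaller than the stored value") can be sketched as:

```typescript
// Minimal z-buffer: every cell starts at Infinity ("nothing drawn yet").
class DepthBuffer {
  private depths: Float32Array;
  constructor(private width: number, private height: number) {
    this.depths = new Float32Array(width * height).fill(Infinity);
  }

  // Returns true (and records the depth) when the fragment is visible;
  // returns false when an equal or nearer fragment is already stored.
  testAndSet(x: number, y: number, depth: number): boolean {
    if (x < 0 || x >= this.width || y < 0 || y >= this.height) return false;
    const i = y * this.width + x;
    if (depth < this.depths[i]) {
      this.depths[i] = depth;
      return true;
    }
    return false;
  }
}
```

Only fragments for which `testAndSet` returns true go on to texture mapping, which is how the occluded textures are rejected.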
Step S325: cutting the two-dimensional model cutting data into data fragments to generate two-dimensional model cutting fragments; applying a preset fragment shader to a two-dimensional model cutting fragment to perform light ray compound calculation processing, and generating two-dimensional model light ray data;
In the embodiment of the invention, the two-dimensional model clipping data undergoes data fragment cutting, the process of dividing the clipping data into small discrete fragments, each corresponding to one rasterized pixel. The clipping data generally includes the coordinates, color values, and other information of the clipped texture pixels; matching each fragment to a rasterized pixel according to the clipping data generates the two-dimensional model clipping fragments. A fragment shader is a program that computes the color of each fragment, typically covering operations such as lighting calculation, shadow processing, and texture sampling. The preset fragment shader is applied to every clipping fragment, and the light composite calculation inside the shader weighs factors such as illumination, shadow, and texture to determine each fragment's final color, generating the two-dimensional model light data.
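A minimal stand-in for the fragment shader's light composite calculation is a Lambert diffuse term; the actual shader contents are not specified in the patent, so everything below is an illustrative assumption.

```typescript
type RGB = [number, number, number];

// Per-fragment shading: clamped dot product of normal and light direction
// (Lambert's cosine law), scaled by the light intensity.
function shadeFragment(
  baseColor: RGB,
  normal: [number, number, number],
  lightDir: [number, number, number],
  lightIntensity: number
): RGB {
  const norm = (v: [number, number, number]) => {
    const len = Math.hypot(v[0], v[1], v[2]);
    return [v[0] / len, v[1] / len, v[2] / len] as [number, number, number];
  };
  const n = norm(normal), l = norm(lightDir);
  const diffuse = Math.max(0, n[0] * l[0] + n[1] * l[1] + n[2] * l[2]) * lightIntensity;
  // Clamp each channel to the displayable range.
  return baseColor.map(c => Math.min(255, c * diffuse)) as RGB;
}
```

A fragment lit head-on keeps its base color at unit intensity, while a fragment facing away from the light shades to black.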
Step S326: performing data three-dimensional display processing on the two-dimensional model light data by utilizing a frame buffer technology to generate a three-dimensional rendering model;
In the embodiment of the invention, a frame buffer object is created using the API provided by the graphics library or rendering engine and bound to the graphics rendering context so that subsequent rendering operations take effect. A texture buffer object, a special image data object for storing and processing texture data, is created to hold the two-dimensional model light data: the generated light data is bound to it, and the texture buffer object is attached to the frame buffer object so that rendering operations write directly into the texture. The rendering engine or graphics library API then sets the render target, which may be the screen, a texture, or another target, and the viewport, which defines the display area of the rendering result on that target. The rendering function of the engine or graphics library performs the rendering operation; during it, the light data is passed to the fragment shader for lighting calculation, and the result is written into the frame buffer object. After rendering completes, the attached texture buffer object can be read back from the frame buffer object to obtain the rendering result, which is then displayed, generating the three-dimensional rendering model.
Step S327: obtaining model update data through an external data source; and importing the model update data into the three-dimensional rendering model according to a preset time stamp to perform model update processing, and generating an updated rendering model.
In the embodiment of the invention, the source of the model update data, which may be a network interface, a file system, or another storage device, is determined, and the update data is read from the external data source; the data is parsed and the required model update information extracted according to a preset data format and structure. For a scene updated in real time, the latest model update data is selected by timestamp: if the external data source provides several updates, each carrying a timestamp, the appropriate data is chosen according to the current time or a designated time point. The selected update data is imported into the three-dimensional rendering model, which may involve applying the updated vertex positions, normal information, texture coordinates, and so on to the corresponding parts of the rendering model. The attributes and state of the rendering model are updated through the rendering engine or graphics library API to reflect the imported data, covering operations such as vertex coordinate transformation, normal calculation, and texture mapping; after the update processing completes, the model is re-rendered, generating the updated rendering model.
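Selecting the appropriate update by timestamp, as described for step S327, might be sketched like this; the field names are illustrative.

```typescript
interface ModelUpdate {
  timestamp: number; // when the update was produced
  payload?: unknown; // vertex positions, normals, texture coordinates, ...
}

// Pick the newest update at or before the preset timestamp, or null if
// every available update lies in the future relative to it.
function selectUpdate(updates: ModelUpdate[], presetTime: number): ModelUpdate | null {
  let best: ModelUpdate | null = null;
  for (const u of updates) {
    if (u.timestamp <= presetTime && (best === null || u.timestamp > best.timestamp)) {
      best = u;
    }
  }
  return best;
}
```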
Preferably, step S4 comprises the steps of:
Step S41: Carrying out data packet compression processing on the updated rendering model according to a geometric coding compression algorithm to generate a rendering model data packet; performing format conversion processing on the rendering model data packet based on the STEP format to generate the STEP format rendering model data packet;
Step S42: performing data packet cutting processing on the STEP format rendering model data packet to generate a STEP format rendering model data block; performing sequencing mark processing on STEP format rendering model data blocks to generate STEP format rendering sequencing links; based on STEP format rendering ordering links, carrying out stream receiving processing on STEP format rendering model data blocks to generate model rendering entity information weight data;
Step S43: comparing the model rendering entity information weight data with a preset rendering precision threshold, and generating a STEP format model file when the model rendering entity information weight data is larger than the rendering precision threshold.
The invention compresses the updated rendering model into a rendering model data packet via the geometric coding compression algorithm and converts that packet into a STEP format rendering model data packet, which helps shrink the packet and cast it into the universal STEP format for convenient transmission and storage. Cutting the STEP format rendering model data packet into STEP format rendering model data blocks and marking their ordering generates STEP format rendering ordering links, making it easy to split a large packet into smaller blocks and providing ordering and linking information for the subsequent stream reception. Stream-receiving the data blocks based on the ordering links generates the model rendering entity information weight data, enabling block-by-block reception and processing, reducing memory use, and producing the entity information and weight data relevant to model rendering. Comparing that weight data with a preset rendering precision threshold helps control rendering precision, ensuring the final model file contains only sufficiently important entity information, reducing file size, and improving rendering efficiency.
In the embodiment of the invention, the updated rendering model is compressed into a data packet by the geometric coding compression algorithm, and the compressed packet is converted into a STEP (Standard for the Exchange of Product Data) format rendering model data packet. The packet is segmented into several smaller data blocks according to its size, the performance requirements, or the network transmission capacity; the segmented blocks are given ordering marks, generating the STEP format rendering ordering links. The marks can follow the sequence of the blocks or other specific attributes so that the blocks can be correctly reassembled during the subsequent stream reception. Based on the ordering links, the data blocks are stream-received, which allows them to be received and processed in batches, reducing memory occupation and processing load. The received blocks are assembled and processed according to the mark information in the ordering links, generating the model rendering entity information weight data, which is compared against the preset precision threshold; if the weight data exceeds the threshold, a STEP format model file is generated with a STEP library or tool according to the STEP format specification.
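The chunk-splitting and ordered reassembly of step S42 can be sketched as follows; the sequence mark carried by each block is what allows correct reassembly after out-of-order delivery.

```typescript
interface Chunk {
  seq: number;      // ordering mark
  total: number;    // total number of chunks in the packet
  data: Uint8Array; // this block's bytes
}

// Split a packet into fixed-size blocks, each tagged with its sequence mark.
function splitIntoChunks(packet: Uint8Array, chunkSize: number): Chunk[] {
  const total = Math.ceil(packet.length / chunkSize);
  const chunks: Chunk[] = [];
  for (let seq = 0; seq < total; seq++) {
    chunks.push({ seq, total, data: packet.slice(seq * chunkSize, (seq + 1) * chunkSize) });
  }
  return chunks;
}

// Reassemble blocks received in any order: sort by the mark, then concatenate.
function reassemble(received: Chunk[]): Uint8Array {
  const sorted = [...received].sort((a, b) => a.seq - b.seq);
  const length = sorted.reduce((n, c) => n + c.data.length, 0);
  const out = new Uint8Array(length);
  let offset = 0;
  for (const c of sorted) {
    out.set(c.data, offset);
    offset += c.data.length;
  }
  return out;
}
```

Because each block carries its own mark, the receiver can process blocks in batches as they arrive and still recover the original packet byte-for-byte.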
Preferably, the functional formula of the geometric coding compression algorithm in step S41 is as follows:
(The formula is presented as an image in the original publication.)
In the formula, S denotes the encoded compressed data size; n is the number of data packets; D is the original data size before the update; d_i is the difference between the i-th vertex data and the corresponding reconstructed vertex data; e_max is the maximum error allowed for data transmission loss; v_i is a vertex position in the original rendering model and v'_i the corresponding vertex position in the reconstructed rendering model; m_i is a vertex normal in the original rendering model and m'_i the corresponding vertex normal in the reconstructed rendering model; and c is the model coding compression anomaly correction amount.
The invention constructs the functional formula of the geometric coding compression algorithm to calculate the loss incurred during packet transmission from the original data size before the update and the maximum error allowed for data transmission loss. The algorithm reconstructs and compresses the model's rendering data packet according to the vertex positions in the original and reconstructed rendering models, achieving optimal packet coding, and dynamically restores the post-compression rendering precision according to the vertex normals in the original and reconstructed models, thereby accurately determining the encoded compressed data size. In practical application, the formula samples the data in a region, retains only a subset of important data points, and then reconstructs or interpolates from those points to restore the whole region's data; by comparing the reconstructed model with the original model data, the packet is encoded and compressed as far as possible while rendering precision is preserved, saving system space. The formula fully considers the number of data packets, the original data size before the update, the per-vertex differences between the original and reconstructed vertex data, the maximum error allowed for transmission loss, the vertex positions and vertex normals in the original and reconstructed rendering models, and the model coding compression anomaly correction amount, and combines the interrelationships among these parameters into a single functional relationship.
Through the interaction between the per-vertex differences and the vertex positions in the original rendering model, the error that packet compression imposes on the original rendering model can be understood, and the packet is geometrically encoded and compressed while the accuracy of the region data is preserved. The maximum error allowed for data transmission loss reduces data redundancy without sacrificing accuracy, saving computation and allowing the calculation to converge quickly, while the model coding compression anomaly correction amount adjusts the packet encoding so that the encoded compressed data size is generated more accurately, improving the precision and reliability of the geometric coding compression. Meanwhile, parameters in the formula such as the maximum allowed transmission error and the number of data packets can be adjusted to the situation at hand, adapting the algorithm to different geometric coding compression scenarios and improving its applicability and flexibility.
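The sample-and-reconstruct compression idea can be illustrated with bounded-error vertex quantization; this is an assumed stand-in, since the patent's exact encoding formula appears only as an image in the source.

```typescript
// Quantize vertex coordinates to a uniform grid. Rounding to the nearest
// grid point bounds the per-vertex reconstruction error by step / 2, so we
// reject any step that cannot honor the allowed maximum error.
function quantizeVertices(vertices: number[], step: number, maxError: number): Int32Array {
  if (step / 2 > maxError) {
    throw new Error("quantization step too coarse for the allowed error");
  }
  const out = new Int32Array(vertices.length);
  for (let i = 0; i < vertices.length; i++) {
    out[i] = Math.round(vertices[i] / step); // small integer codes compress well
  }
  return out;
}

// Reconstruct (decode) the vertices from their integer codes.
function dequantizeVertices(codes: Int32Array, step: number): number[] {
  return Array.from(codes, c => c * step);
}
```

The decoded model differs from the original at each vertex by at most half the quantization step, mirroring the formula's comparison of per-vertex differences against the maximum allowed transmission error.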
Preferably, step S5 comprises the steps of:
Step S51: transmitting the STEP format model file from the server side to the user side according to a network transmission protocol, and generating a transmission STEP format model file;
step S52: when a user receives a transmission STEP format model file, performing task packaging processing on the transmission STEP format model file to generate a real-time rendering task;
step S53: performing rendering node allocation processing on STEP format model file task packages by using a distributed rendering technology to generate model file task nodes; performing node parallel processing on the model file task nodes to generate a model rendering result;
Step S54: and carrying out image synthesis processing on the model rendering results generated by each rendering node to generate a model rendering effect image.
According to the invention, the STEP format model file is transmitted from the server side to the user side using a network transport protocol, generating the transmitted STEP format model file; the model file can thus be delivered effectively from the server to the user side, preparing for the subsequent rendering tasks. After the user side receives the transmitted file, it packages the file into tasks, generating real-time rendering tasks; packaging related rendering tasks together improves processing efficiency and eases the subsequent rendering node allocation. The task packages are allocated to rendering nodes using distributed rendering technology, generating model file task nodes, and processing the task nodes in parallel generates the model rendering results, effectively improving rendering speed and efficiency. Compositing the rendering results produced by each node then yields a comprehensive, high-quality model rendering effect image.
In the embodiment of the invention, the STEP format model file at the server side is transmitted to the user side using a suitable network transport protocol (such as HTTP, TCP/IP, or FTP). At the server side, the file is packaged into a transmission format (such as binary data or a specific file format); at the user side, it is received and stored, generating the transmitted STEP format model file. The received file is then packaged into processing tasks: the task packaging can divide the model file into task units as required, for example by region, by file size, or by rendering parameters, and the packaged tasks are used for the subsequent distributed rendering, generating real-time rendering tasks. According to the system configuration and a load-balancing strategy, the task packages are allocated using distributed rendering technology, which can process them in parallel across multiple computers or nodes, generating model file task nodes. Each task node then processes its unit in parallel, applying rendering techniques such as fragment shaders, depth-buffer algorithms, and ray tracing to generate model rendering results; finally, the results from all rendering nodes are composited into a single model rendering effect image with the desired visual quality.
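The node allocation and compositing of steps S53 and S54 can be sketched schematically; here the "nodes" are plain functions executed in task order, whereas a real deployment would dispatch them in parallel to separate machines or workers. All names are illustrative.

```typescript
interface RenderTask { id: number; region: string; }
interface RenderResult { id: number; pixels: string; }

// Stand-in for one node rendering its assigned task unit.
function renderOnNode(task: RenderTask): RenderResult {
  return { id: task.id, pixels: `rendered:${task.region}` };
}

// Allocate tasks to nodes, collect their results, and composite them back
// in task order (the image synthesis of step S54).
function distributedRender(tasks: RenderTask[]): string {
  const results = tasks.map(renderOnNode); // in a real system, dispatched in parallel
  results.sort((a, b) => a.id - b.id);
  return results.map(r => r.pixels).join("|");
}
```

Sorting by task id before compositing is what keeps the final image deterministic even when node results arrive out of order.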
Preferably, step S6 comprises the steps of:
Step S61: acquiring user equipment performance data of a user by calling a system API;
Step S62: performing equipment performance evaluation processing on the user equipment performance data to generate user equipment performance evaluation indexes; and performing rendering loading resource management on the model rendering effect image based on the user equipment performance evaluation index to generate a STEP format real-time rendering model.
The invention can obtain the performance data of the user-side equipment by calling the system API, then process and analyze it to generate user equipment performance evaluation indexes, revealing the processing capacity and rendering performance of the user equipment; these evaluation indexes can be used to decide whether the device has sufficient performance for real-time rendering tasks. Managing the rendering loading resources for the model rendering effect image optimizes the resources required during rendering, so that real-time rendering runs smoothly on the user equipment, the device's performance is exploited to the greatest extent, and a smooth, high-quality rendering experience is provided.
In the embodiment of the invention, the performance data of the user equipment is acquired at the user side by calling a system API (Application Programming Interface). Device information such as the processor model, memory size, and graphics card information can be obtained through APIs provided by the operating system, for example the android.os.Build class on Android or the UIDevice class on iOS; in a browser, Web APIs can be used, such as navigator.hardwareConcurrency to obtain the number of CPU cores and navigator.deviceMemory to obtain the device memory. The acquired user equipment performance data is then subjected to device performance evaluation processing to generate device performance evaluation indexes, which may cover aspects such as processor performance, memory capacity, and graphics card performance. Rendering loading resource management is performed based on these performance evaluation indexes to decide the level of detail and resource usage of model rendering: parameters such as resolution, quality, and model complexity can be adjusted dynamically and in real time according to the performance of the user equipment, ensuring that the STEP format real-time rendering model is generated within the optimal performance range of the user equipment.
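The device probing and adaptive quality selection described above can be sketched as follows. The browser properties navigator.hardwareConcurrency and navigator.deviceMemory are real Web APIs; the scoring weights, the quality tiers, and the fallback defaults used when those APIs are unavailable are illustrative assumptions, not values taken from the patent.

```javascript
// Use browser-reported capabilities when available; fall back to modest
// defaults (e.g. when running under Node.js or an older browser).
const caps = typeof navigator !== 'undefined'
  ? { cores: navigator.hardwareConcurrency || 2, memoryGB: navigator.deviceMemory || 2 }
  : { cores: 4, memoryGB: 4 };

// Combine CPU cores and memory (GB) into a single 0..1 performance index.
// The 0.6/0.4 weighting and the cap at 16 are arbitrary example choices.
function deviceScore({ cores, memoryGB }) {
  const cpu = Math.min(cores, 16) / 16;
  const mem = Math.min(memoryGB, 16) / 16;
  return 0.6 * cpu + 0.4 * mem;
}

// Map the index onto render-loading parameters (resolution scale, LOD depth).
function renderSettings(score) {
  if (score > 0.6) return { resolutionScale: 1.0, lodLevels: 4 };
  if (score > 0.3) return { resolutionScale: 0.75, lodLevels: 3 };
  return { resolutionScale: 0.5, lodLevels: 2 };
}

const settings = renderSettings(deviceScore(caps));
```

Because the score can be recomputed at any time (for example when frame times drop), the same mapping supports the real-time parameter adjustment the embodiment describes.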
The invention has the advantages that, by obtaining and parsing the original model file, the STEP format model can be converted into original analysis standard data, providing an operable data structure for the subsequent processing steps. Through data preprocessing and three-dimensional mesh conversion, the original analysis standard data can be converted into operable model structured data, providing a more convenient data format for the subsequent rendering and optimization steps, so that the model is processed and rendered more efficiently. Through compression processing and streaming technology, the optimized rendering model can be converted into a more compact rendering model data packet; at the same time, by comparing the model rendering entity information weight data with a preset rendering precision threshold, an adaptive STEP format model file can be generated to meet different requirements and bandwidth limitations. Through network transmission and distributed parallel processing, the rendering tasks are allocated across nodes, improving rendering efficiency and image quality. Finally, by evaluating the performance of the user equipment, the rendering precision and image quality can be adapted to the device, providing a better rendering experience, with flexible processing and adaptation according to the performance characteristics of the user equipment.
Therefore, the invention carries out progressive hierarchical rendering on the STEP format model and carries out parallel processing through the nodes so as to improve the rendering precision and the load capacity of the WEB terminal.
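The progressive hierarchical rendering order mentioned above (coarse level first, detail levels afterwards) can be illustrated with a simple priority queue. The data shapes and names here are hypothetical; the point is only that the coarse level carries the lowest priority number so a usable image appears quickly and is then refined.

```javascript
// Order render levels so that lower priority values are rendered earlier.
function buildRenderQueue(levels) {
  return levels
    .slice()
    .sort((a, b) => a.priority - b.priority)
    .map((level) => level.name);
}

const queue = buildRenderQueue([
  { name: 'detail-level-2', priority: 3 },
  { name: 'coarse-level', priority: 1 },
  { name: 'detail-level-1', priority: 2 },
]);
// queue renders coarse first: ['coarse-level', 'detail-level-1', 'detail-level-2']
```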
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
The foregoing is only a specific embodiment of the invention to enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. A STEP format rendering method at a WEB terminal is characterized by comprising the following steps:
Step S1: obtaining a STEP format original model file through a WEB end file transmission channel; carrying out file data analysis processing on the STEP format model file according to a STEP file analysis library to generate original analysis standard data;
Step S2: carrying out data preprocessing on the original analysis standard data to generate model structured data; performing three-dimensional grid conversion processing on the model structured data by utilizing a polygonal grid generation algorithm to obtain a standard three-dimensional model grid;
Step S3: performing three-dimensional model rendering processing on the three-dimensional model grids by utilizing a progressive model analysis mechanism to generate an updated rendering model; performing rendering optimization treatment on the updated rendering model according to the visual field rejection optimization algorithm to generate an optimized rendering model; step S3 comprises the steps of:
Step S31: carrying out first-round layering treatment on the standard three-dimensional model grid by utilizing a progressive model analysis mechanism to generate a three-dimensional model rough level; carrying out model detail feature stripping treatment on the three-dimensional model rough level to generate model detail fragments; layering the rough three-dimensional model level based on the model detail fragments in a second round to generate a three-dimensional model detail level; performing level priority rendering sequencing treatment on the rough level of the three-dimensional model and the detail level of the three-dimensional model to generate a three-dimensional rendering progressive model;
step S32: performing model rendering processing on the three-dimensional rendering progressive model based on texture mapping to generate a three-dimensional rendering model; performing model updating processing on the three-dimensional rendering model to generate an updated rendering model; step S32 includes the steps of:
Step S321: setting a rendering environment based on the STEP format, wherein the rendering environment includes a camera position and an illumination intensity; performing three-dimensional model construction processing on the model structured data based on the standard three-dimensional model grid to generate a three-dimensional construction model object;
Step S322: performing view transformation processing on the position of the camera to generate a three-dimensional view coordinate system; performing projection transformation processing on the three-dimensional building model object and the three-dimensional view coordinate system to generate a standardized equipment coordinate system;
step S323: performing dimension reduction processing on the three-dimensional building model object and a standardized equipment coordinate system to generate a two-dimensional model pixel point; rasterizing the two-dimensional model pixel points to generate two-dimensional model raster pixel points;
Step S324: performing texture mapping processing on the two-dimensional model grating pixel points according to the illumination intensity to generate two-dimensional model texture data; performing planar geometry clipping processing on texture data of the two-dimensional model based on a depth buffer algorithm, and eliminating invisible textures; generating two-dimensional model clipping data;
Step S325: cutting the two-dimensional model cutting data into data fragments to generate two-dimensional model cutting fragments; applying a preset fragment shader to a two-dimensional model cutting fragment to perform light ray compound calculation processing, and generating two-dimensional model light ray data;
step S326: performing data three-dimensional display processing on the two-dimensional model light data by utilizing a frame buffer technology to generate a three-dimensional rendering model;
step S327: obtaining model update data through an external data source; importing model update data into the three-dimensional rendering model according to a preset timestamp to perform model update processing, and generating an updated rendering model;
Step S33: performing visual field elimination processing on the updated rendering model according to a preset camera view cone to generate a visual field rendering model; judging the visual field rendering model by using a rejection algorithm, and when the grids in the visual field rendering model are not in the visual field, performing rendering queue rejection by using a visual field rejection optimization algorithm to generate an optimized rendering model;
Step S4: performing file compression processing on the optimized rendering model based on a geometric coding compression algorithm to generate a rendering model data packet; simulating and unpacking the rendering model data packet by adopting a streaming technology, extracting entity information and related attributes of a rendering model, calculating weight information of each entity, and generating model rendering entity information weight data; comparing the model rendering entity information weight data with a preset rendering precision threshold, and generating a STEP format model file when the model rendering entity information weight data is larger than the rendering precision threshold;
Step S5: transmitting the STEP format model file to a user side through a network node to carry out task packaging processing, and generating a real-time rendering task; performing node parallel processing on the real-time rendering task by using a distributed rendering technology to generate a model rendering effect image;
Step S6: acquiring performance data of user equipment; and performing self-adaptive rendering precision adaptation processing on the model rendering effect image according to the user equipment performance data to generate a STEP format real-time rendering model.
2. The method for rendering the STEP format on the WEB side according to claim 1, wherein step S1 comprises the steps of:
step S11: receiving a STEP format original model file uploaded by a user through an uploading channel in a WEB terminal application program;
step S12: carrying out file initialization on the STEP format original model file based on FreeCAD analysis library to obtain an original model initialization file;
Step S13: carrying out file analysis processing on the original model initialization file by utilizing an analysis function built in the analysis library to generate original model analysis data;
Step S14: performing anomaly detection processing on the original model analysis data, and performing data restoration processing on the original model analysis data when the original model analysis data detects that the data is missing, so as to generate original model restoration data; when the original model analysis data does not detect the abnormality, STEP standard processing is carried out on the original model analysis data to generate original analysis standard data.
3. The method for rendering the STEP format on the WEB side according to claim 1, wherein step S2 comprises the steps of:
Step S21: carrying out data denoising processing on the original analysis standard data to generate original analysis denoising data; performing outlier detection processing on the original analysis denoising data by using an absolute deviation median method to generate original analysis outlier data; performing outlier substitution processing on the original analysis outlier data through mean calculation to generate standard analysis data;
step S22: setting a time domain signal according to modeling requirements; trending the preset time domain signal to obtain a standard time domain signal; performing signal transformation processing on a preset time domain signal by using a fast Fourier transform algorithm to generate a model frequency domain signal; carrying out frequency domain feature extraction processing on the model frequency domain signal according to the phase spectrum to generate frequency domain feature data;
Step S23: carrying out data frame integration processing on the frequency domain characteristic data and the standard analysis data to generate model integration data; performing data set balance processing on the model integrated data based on an oversampling method to generate model structured data;
Step S24: performing three-dimensional grid conversion processing on the model structured data based on a Delaunay triangulation algorithm to generate a three-dimensional model grid; and carrying out smoothing treatment on the three-dimensional model grid by using a grid simplification algorithm to generate a standard three-dimensional model grid.
4. The method for rendering the STEP format on the WEB side according to claim 1, wherein step S4 comprises the steps of:
step S41: carrying out data packet compression processing on the updated rendering model according to a geometric coding compression algorithm to generate a rendering model data packet; performing format conversion processing on the rendering model data packet based on the STEP format to generate the STEP format rendering model data packet;
Step S42: performing data packet cutting processing on the STEP format rendering model data packet to generate a STEP format rendering model data block; performing sequencing mark processing on STEP format rendering model data blocks to generate STEP format rendering sequencing links; based on STEP format rendering ordering links, carrying out stream receiving processing on STEP format rendering model data blocks to generate model rendering entity information weight data;
Step S43: comparing the model rendering entity information weight data with a preset rendering precision threshold, and generating a STEP format model file when the model rendering entity information weight data is larger than the rendering precision threshold.
5. The method for rendering STEP format on WEB side according to claim 4, wherein the function formula of the geometric coding compression algorithm in step S41 is as follows:
In the formula, the encoded compressed data size is expressed in terms of: the number of data packets; the original data size before the update; the difference between each vertex datum and the corresponding reconstructed vertex datum; the maximum error allowed for data transmission loss; the vertex positions in the original rendering model and in the reconstructed rendering model; the vertex normals in the original rendering model and in the reconstructed rendering model; and a model encoding compression anomaly correction term.
6. The method for rendering the STEP format on the WEB side according to claim 1, wherein step S5 comprises the steps of:
Step S51: transmitting the STEP format model file from the server side to the user side according to a network transmission protocol, and generating a transmission STEP format model file;
step S52: when a user receives a transmission STEP format model file, performing task packaging processing on the transmission STEP format model file to generate a real-time rendering task;
step S53: performing rendering node allocation processing on STEP format model file task packages by using a distributed rendering technology to generate model file task nodes; performing node parallel processing on the model file task nodes to generate a model rendering result;
Step S54: and carrying out image synthesis processing on the model rendering results generated by each rendering node to generate a model rendering effect image.
7. The method for rendering the STEP format on the WEB side according to claim 1, wherein step S6 includes the steps of:
Step S61: acquiring user equipment performance data of a user by calling a system API;
Step S62: performing equipment performance evaluation processing on the user equipment performance data to generate user equipment performance evaluation indexes; and performing rendering loading resource management on the model rendering effect image based on the user equipment performance evaluation index to generate a STEP format real-time rendering model.
CN202310924068.8A 2023-07-25 2023-07-25 STEP format rendering method at WEB terminal Active CN116977523B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310924068.8A CN116977523B (en) 2023-07-25 2023-07-25 STEP format rendering method at WEB terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310924068.8A CN116977523B (en) 2023-07-25 2023-07-25 STEP format rendering method at WEB terminal

Publications (2)

Publication Number Publication Date
CN116977523A CN116977523A (en) 2023-10-31
CN116977523B true CN116977523B (en) 2024-04-26

Family

ID=88476225

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310924068.8A Active CN116977523B (en) 2023-07-25 2023-07-25 STEP format rendering method at WEB terminal

Country Status (1)

Country Link
CN (1) CN116977523B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117456550B (en) * 2023-12-21 2024-03-15 绘见科技(深圳)有限公司 MR-based CAD file viewing method, device, medium and equipment

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103077497A (en) * 2011-10-26 2013-05-01 中国移动通信集团公司 Method and device for zooming image in level-of-detail model
CN104021255A (en) * 2014-06-20 2014-09-03 上海交通大学 Multi-resolution hierarchical presenting and hierarchical matching weighted comparison method for CAD (computer aided design) model
CN111127610A (en) * 2019-12-23 2020-05-08 武汉真蓝三维科技有限公司 Point cloud data three-dimensional visualization rendering method and calculation method
CN112270756A (en) * 2020-11-24 2021-01-26 山东汇颐信息技术有限公司 Data rendering method applied to BIM model file
CN112614228A (en) * 2020-12-17 2021-04-06 北京达佳互联信息技术有限公司 Method and device for simplifying three-dimensional grid, electronic equipment and storage medium
CN113178014A (en) * 2021-05-27 2021-07-27 网易(杭州)网络有限公司 Scene model rendering method and device, electronic equipment and storage medium
CN113282290A (en) * 2021-05-31 2021-08-20 上海米哈游璃月科技有限公司 Object rendering method, device and equipment and storage medium
CN114254501A (en) * 2021-12-14 2022-03-29 重庆邮电大学 Large-scale grassland rendering and simulating method
CN114359511A (en) * 2021-12-22 2022-04-15 深圳市菲森科技有限公司 Scheme for real-time rendering of three-dimensional graph
CN114386118A (en) * 2022-01-14 2022-04-22 杭州电子科技大学 CAD model multi-resolution gridding method for self-adaptive maintenance of mechanism semantics
CN114513520A (en) * 2021-12-27 2022-05-17 浙江中测新图地理信息技术有限公司 Web three-dimensional visualization technology based on synchronous rendering of client and server
CN115659445A (en) * 2022-10-31 2023-01-31 大连理工大学 Method for rendering and displaying CAD model on webpage in lightweight mode based on Open Cascade
CN115908715A (en) * 2022-12-12 2023-04-04 深圳市城市公共安全技术研究院有限公司 Loading method and device of building information model, equipment and storage medium
CN115908672A (en) * 2022-11-18 2023-04-04 西安电子科技大学青岛计算技术研究院 Three-dimensional scene rendering acceleration method, system, medium, device and terminal
CN116051708A (en) * 2023-01-30 2023-05-02 四川视慧智图空间信息技术有限公司 Three-dimensional scene lightweight model rendering method, equipment, device and storage medium
CN116310060A (en) * 2023-04-11 2023-06-23 深圳优立全息科技有限公司 Method, device, equipment and storage medium for rendering data

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6456285B2 (en) * 1998-05-06 2002-09-24 Microsoft Corporation Occlusion culling for complex transparent scenes in computer generated graphics
CA2373707A1 (en) * 2001-02-28 2002-08-28 Paul Besl Method and system for processing, compressing, streaming and interactive rendering of 3d color image data
US7321364B2 (en) * 2003-05-19 2008-01-22 Raytheon Company Automated translation of high order complex geometry from a CAD model into a surface based combinatorial geometry format


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Design and Implementation of a Three-Dimensional Multi-Level Browsing System; Huang Liping; Wang Tingjun; Zhao Donghong; Journal of Chongqing Electric Power College; 2020-02-28 (No. 01); pp. 24-26, 36 *
Research on STEP-Based Finite Element Mesh Generation for Three-Dimensional Surfaces; Wang Yuhuai; Lu Yanlin; Zhou Xiao; Huang Jianfang; Application Research of Computers; 2006-10-10 (No. 10); pp. 144-145, 153 *

Also Published As

Publication number Publication date
CN116977523A (en) 2023-10-31

Similar Documents

Publication Publication Date Title
CN108648269B (en) Method and system for singulating three-dimensional building models
CN113178014B (en) Scene model rendering method and device, electronic equipment and storage medium
KR101145260B1 (en) Apparatus and method for mapping textures to object model
US7164420B2 (en) Ray tracing hierarchy
CN102332179B (en) Three-dimensional model data simplification and progressive transmission methods and devices
US7843463B1 (en) System and method for bump mapping setup
US10713844B2 (en) Rendering based generation of occlusion culling models
CN116977523B (en) STEP format rendering method at WEB terminal
US10089782B2 (en) Generating polygon vertices using surface relief information
Merlo et al. 3D model visualization enhancements in real-time game engines
RU2680355C1 (en) Method and system of removing invisible surfaces of a three-dimensional scene
CN116843841B (en) Large-scale virtual reality system based on grid compression
CN106445445B (en) Vector data processing method and device
CN112818450B (en) BIM (building information modeling) model organization method based on block index
Willmott Rapid simplification of multi-attribute meshes
CN116310060B (en) Method, device, equipment and storage medium for rendering data
Scholz et al. Real‐time isosurface extraction with view‐dependent level of detail and applications
CN114461959A (en) WEB side online display method and device of BIM data and electronic equipment
Lee et al. Bimodal vertex splitting: Acceleration of quadtree triangulation for terrain rendering
CA3169797A1 (en) Visualisation of surface features of a virtual fluid
CN113032699A (en) Robot model construction method, robot model construction device and robot processor
WO2023184139A1 (en) Methods and systems for rendering three-dimensional scenes
Kang et al. An efficient simplification and real-time rendering algorithm for large-scale terrain
Deussen et al. Interactive high quality trimmed nurbs visualization using appearance preserving tessellation
CN117635791A (en) 3D model presentation method and system, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240402

Address after: 518000, 1st Floor, Building A12, Industrial Zone, Hulin Pokeng, No. 165 Nanpu Road, Shangliao Community, Xinqiao Street, Bao'an District, Shenzhen, Guangdong Province

Applicant after: Quick Direct (Shenzhen) Precision Manufacturing Co.,Ltd.

Country or region after: China

Address before: 518000 407, Building F, Tianyou Maker Industrial Park, No. 2, Lixin Road, Qiaotou Community, Fuhai Street, Bao'an District, Shenzhen, Guangdong

Applicant before: Shenzhen Fast Direct Industrial Technology Co.,Ltd.

Country or region before: China

GR01 Patent grant