CN115908672A - Three-dimensional scene rendering acceleration method, system, medium, device and terminal - Google Patents

Three-dimensional scene rendering acceleration method, system, medium, device and terminal

Info

Publication number
CN115908672A
CN115908672A
Authority
CN
China
Prior art keywords
data
model
compression
animation
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211450581.XA
Other languages
Chinese (zh)
Inventor
王建东
李昌令
董学文
赵双睿
葛瑞崟
沈鸿博
张渊
龚少田
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Institute Of Computing Technology Xi'an University Of Electronic Science And Technology
Original Assignee
Qingdao Institute Of Computing Technology Xi'an University Of Electronic Science And Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Institute Of Computing Technology Xi'an University Of Electronic Science And Technology filed Critical Qingdao Institute Of Computing Technology Xi'an University Of Electronic Science And Technology
Priority to CN202211450581.XA priority Critical patent/CN115908672A/en
Publication of CN115908672A publication Critical patent/CN115908672A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention belongs to the technical fields of WebGL, computer graphics, digital twinning, rendering optimization and acceleration, redundancy elimination, and scene compression, and discloses a three-dimensional scene rendering acceleration method, system, medium, device, and terminal. The method judges the data type and, if the data is a three-dimensional model, performs a data separation judgment: if the three-dimensional model contains animation data, a data separation operation extracts the animation from the model before data compression; if not, the data are compressed directly. Data compression is divided into model compression and map compression; different compression algorithms perform redundant-data elimination on data in different formats, a multi-step compression algorithm compresses the model maps, and the data are finally returned to the client. Experimental analysis shows that, compared with a traditional data processing scheme, the enhanced redundant-data elimination method reduces storage cost by 66.7% and reduces both data acquisition time and storage space.

Description

Three-dimensional scene rendering acceleration method, system, medium, device and terminal
Technical Field
The invention belongs to the technical field of WebGL, computer graphics, digital twinning, rendering optimization acceleration, redundancy elimination and scene compression, and particularly relates to a three-dimensional scene rendering acceleration method, system, medium, equipment and terminal.
Background
At present, the digital twin is an important means of digitalizing physical systems and is widely applied in major national industries such as aerospace, grain production, and transportation. WebGL is the most promising implementation route for the digital twin; its core is to use rendering technology to map the physical world onto a virtual world, realizing cross-platform access and three-dimensional visualization. Because digital twin models are complex and their scenes are huge, data acquisition during WebGL rendering is time-consuming and model loading is costly. Existing WebGL rendering optimization research shortens data acquisition time by eliminating redundant data through methods such as data separation and compression, but the following defect remains: for data elimination, only a single processing mode such as data separation or compression is used, and the shortcomings in reducing acquisition time and storage space are still obvious. It is therefore necessary to design a new three-dimensional scene rendering acceleration method.
Through the above analysis, the problems and defects of the prior art are as follows:
(1) Because existing digital twin models are complex and their scenes are huge, data acquisition during WebGL rendering is time-consuming and model loading is costly.
(2) For data elimination, existing WebGL rendering optimization methods use only a single processing mode such as data separation or compression, and are still clearly insufficient in reducing acquisition time and storage space.
Disclosure of Invention
The invention provides a three-dimensional scene rendering acceleration method, system, medium, device, and terminal, and in particular relates to a three-dimensional scene rendering acceleration method, system, medium, device, and terminal based on separation and compression.
The invention is realized as follows: a three-dimensional scene rendering acceleration method comprises the following steps: judging the data type and, if the data type is a three-dimensional model, performing a data separation judgment; judging whether the three-dimensional model contains animation data and, if so, performing a data separation operation that extracts the animation from the model before data compression; if not, compressing the data directly. Data compression comprises model compression and map compression: different compression algorithms perform redundant-data elimination on data in different formats, a multi-step compression algorithm compresses the model maps, and the data are finally returned to the client.
Further, the three-dimensional scene rendering acceleration method comprises the following steps:
step one, data separation: dividing the model data into animation data and topology data;
step two, data compression: performing compression operation on the data after the data separation;
step three, map compression: compressing the model by adopting a multi-step compression algorithm for the model maps.
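As a non-limiting illustration, the judgment flow above can be sketched as a server-side dispatch in Python. The record fields (`type`, `has_animation`, `payload`) and the placeholder `compress` are hypothetical stand-ins for the format-specific algorithms described later:

```python
def compress(data: dict) -> dict:
    """Placeholder for format-specific compression (gzip / Draco / map compression)."""
    return {"compressed": True, "of": data["type"]}

def process_asset(asset: dict) -> dict:
    """Dispatch an incoming asset through the separation/compression pipeline.

    Non-model data is returned unchanged; an animated model is first split into
    animation data and topology data, then both parts are compressed.
    """
    if asset["type"] != "3d_model":
        return asset  # only three-dimensional models enter the pipeline

    if asset.get("has_animation"):
        # Step 1 (data separation): extract the animation from the model payload
        animation = {"type": "animation", "payload": asset["payload"].pop("animation")}
        compressed_anim = compress(animation)
    else:
        compressed_anim = None  # static model: compress directly

    # Step 2 (data compression): compress the remaining model data
    compressed_model = compress(asset)
    return {"model": compressed_model, "animation": compressed_anim}
```

In this sketch the separated animation becomes an independently compressible resource, matching the on-demand loading described below.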
Further, the data separation in the first step includes:
the three-dimensional model comprises topology data, map data, and animation data. The map data are stored outside the model file, and the model accesses the map resources through a path. When data separation is performed on a dynamic object, the single model file is decomposed into animation data and topology data. The data separation operation is carried out on the server: the server stores all model files, parses them on the server side, and separates the animation data from the other data. During model parsing, the animation file is parsed into fine-grained actions, which are then combined to form complex actions. After the animation file is separated into single files, the files are identified; the animation data use the same data format as the model data and indicate the animation file type. Subsequent compression and caching of the model also act on the animation data, and all animation data are combined with one another to form a multi-granularity animation library.
Semantic separation is performed on the animation data, dividing them into interaction, transition, accompanying, and other types. The interaction type refers to animations played by the model in response after a user operates it; the transition type refers to transition animations generated when the model changes to another action midway after completing one action; the accompanying type refers to animations that always play along with the model; a new animation that belongs to none of these types is classified as other. The animation data and their corresponding commands form a mapping relation stored in a database; when a client initiates a request to acquire an animation, all the data are received and visualized. The model walking animation shows the walking action of a simulated character model; the animation data belong to the accompanying type, and after path data are provided the model walks along the path and stops at the destination.
Further, the animation data of the model are decomposed into a separate file that keeps the format of the original model and is identified by its action type. The file is converted into a JSON object through serialization and used in that form. The animation data are divided into basic actions and complex actions: fine-grained animations are combined into complex animations, which are applied to models without animation to form models that play the complex animations, displayed on the terminal screen in three-dimensional visualization.
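As an illustrative sketch of the serialization step, a separated animation can be converted to and from a JSON object tagged with its semantic type. The field names (`name`, `type`, `keyframes`) are hypothetical, not taken from the patent:

```python
import json

VALID_TYPES = {"interaction", "transition", "accompanying", "other"}

def serialize_animation(name: str, semantic_type: str, keyframes: list) -> str:
    """Serialize one separated animation into a JSON string tagged with its type."""
    if semantic_type not in VALID_TYPES:
        semantic_type = "other"  # unclassified animations fall into the other type
    return json.dumps({"name": name, "type": semantic_type, "keyframes": keyframes})

def deserialize_animation(blob: str) -> dict:
    """Restore the JSON object form used on the client."""
    return json.loads(blob)
```

A round trip preserves the action identifier and keyframe data, which is what allows the compressed animation files to be cached and recombined into a multi-granularity library.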
R_or represents the proportion of the current file size in the original file, and is calculated as:
R_or = currentSize / originalModelSize × 100%;
wherein currentSize represents the size of the current file; originalModelSize represents the size of the original file; t represents the time required to parse the current file; and δ_or represents the optimization rate of the file. If a model containing animation needs to be loaded, model.fbx and the corresponding animation file must be loaded at the same time; if the current model does not need animation data, only model.fbx is loaded, and when the original model.fbx is loaded directly the optimization rate is 0.
When only the model is displayed, animation data need not be considered: the proportion rate of the model within the original model is calculated first, and the optimization rate δ_or is then calculated as follows:
rate = model.time / originalModel.time;
δ_or = (1 - rate) × 100%;
When the initial model is displayed, the optimization rate is 0. When a model with animation is displayed, the animation data and model data are combined first, the proportion of the combined data in the original model is calculated as a whole, and the optimization rate is then calculated.
Further, the data compression in the step two comprises:
and dividing the three-dimensional model into models of pure character coding and JSON coding according to the coding format, respectively compressing by using gzip and Draco algorithms, and converting the models of other formats into one of the formats for processing. And (4) carrying out mapping compression on the model containing a large amount of mapping contents by adopting a multi-class method.
Model compression refers to compression of the model topology data. The topology data describe all the spatial information of the model, from which WebGL renders a Mesh, where Mesh denotes a three-dimensional model formed from a series of grids. The model compression method divides all models into plain-character-encoded and JSON-encoded models: all plain-character-encoded models are converted to obj and compressed with the gzip algorithm, while JSON-encoded model formats are converted to gltf and compressed with the Draco algorithm. Before gzip compression is used, whether the client supports gzip is judged via the Accept-Encoding identifier.
The compression ratio is calculated as:
compression ratio = (1 - compressed size / original size) × 100%;
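A minimal sketch of the gzip path using Python's standard library: the compression-ratio formula above, plus the Accept-Encoding check described for the client. `gzip_if_supported` is an illustrative helper, not the patent's implementation:

```python
import gzip

def compression_ratio(original_size: int, compressed_size: int) -> float:
    """(1 - compressed size / original size) x 100%, as in the formula above."""
    return (1 - compressed_size / original_size) * 100

def gzip_if_supported(payload: bytes, accept_encoding: str) -> tuple:
    """Compress with gzip only when the client's Accept-Encoding header allows it."""
    if "gzip" in accept_encoding.lower():
        return gzip.compress(payload), "gzip"
    return payload, "identity"  # fall back to the uncompressed payload
```

Plain-character model text such as repeated obj vertex records compresses very well under gzip, which is why the method routes plain-character-encoded models through it.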
and (3) introducing a JSON (Java Server object notation) coded model format into data compression, and uniformly converting the JSON coded model format into a gltf format for processing. The model with the format of gltf describes the whole scene, including the scene, the camera, the light source and the three-dimensional model;
the topology data of the three-dimensional model are stored in binary, and the other parts reference related content through indexes into the binary data. The structural meanings in gltf are as follows: scene represents the scene entry of the gltf; after the scene is converted to a tree structure it sits at the top layer, and all content in the tree is accessed through scene. A node is a single node in the scene, used for rotation, translation, and scaling transformations; a node may be composed of other nodes and act as a virtual node managing them. Node types are divided into mesh, camera, skin, and virtual nodes.
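The gltf structure described above can be illustrated with a toy document and a traversal from the scene entry. In glTF, nodes live in a flat array and refer to children by index; the node names below are invented for the example:

```python
# Minimal glTF-like document: a flat node array with child indices, entered via "scene".
gltf = {
    "scene": 0,
    "scenes": [{"nodes": [0]}],
    "nodes": [
        {"name": "root", "children": [1, 2]},   # virtual node managing other nodes
        {"name": "hull", "mesh": 0},            # mesh node
        {"name": "mainCamera", "camera": 0},    # camera node
    ],
}

def visit(doc: dict) -> list:
    """Depth-first traversal from the scene entry, collecting node names."""
    order = []

    def walk(index: int) -> None:
        node = doc["nodes"][index]
        order.append(node["name"])
        for child in node.get("children", []):
            walk(child)

    for root in doc["scenes"][doc["scene"]]["nodes"]:
        walk(root)
    return order
```

All content is reached through `scene`, matching the tree-structure access described above; the binary topology buffers referenced by mesh nodes are omitted from this sketch.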
Dividing the three-dimensional model into models of pure character coding and JSON coding, respectively storing the models by using obj and gltf formats, and converting other types into one of obj and gltf formats; the model of pure character coding is compressed by adopting a gzip algorithm, and the model of JSON coding is processed by adopting a Draco lossy compression algorithm.
Further, the mapping compression in step three comprises:
map compression is divided into three classes: the first compresses pictures; the second compresses the converted texture; the third uses a single picture, through extension, to replace pictures of various precisions. Map compression first uses the first class, picture compression, to accelerate picture acquisition; then uses the second class, texture compression; and finally uses the third class to dynamically generate multi-precision pictures from a single picture. In the first class of map compression, JPEG pictures are compressed with the Guetzli algorithm, whose multi-stage compression process comprises color-space conversion, discrete cosine transform, and quantization. Guetzli generates a smaller picture file by optimizing the quantization stage, at some loss of visual quality, and adopts a search algorithm to balance file size against minimal loss; the search algorithm describes color perception and visual masking using a color transform and a discrete cosine transform. The Guetzli compression algorithm compresses a JPEG picture only the first time it is loaded; afterwards the same picture is obtained locally through the data cache.
In the second class of map compression, the texture converted from the map is losslessly compressed with the Basis Universal algorithm, which converts a png-format file into a basis file. In the third class of map compression, an algorithm automatically generates maps of various precisions from a single map file, replacing manually provided maps of multiple precisions. During rendering, maps of different precisions are rendered according to the viewing distance: when the viewing distance is long, a low-precision map is rendered; when it is short, a high-precision map is rendered, enhancing the visual effect. The automatic generation algorithm adopts the Mipmap algorithm, an anisotropic filtering algorithm, and a bilinear interpolation algorithm.
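As a sketch of the Mipmap idea (automatically deriving lower-precision images from a single high-precision one), the following Python generates a mip chain by 2×2 box-filter downsampling. It is a simplified stand-in for illustration, not the patent's generation algorithm, and operates on a square power-of-two grayscale grid:

```python
def downsample(image):
    """Halve a square grayscale image by averaging each 2x2 block (box filter)."""
    n = len(image)
    return [
        [
            (image[2 * r][2 * c] + image[2 * r][2 * c + 1]
             + image[2 * r + 1][2 * c] + image[2 * r + 1][2 * c + 1]) / 4
            for c in range(n // 2)
        ]
        for r in range(n // 2)
    ]

def mipmap_chain(image):
    """Generate every mip level down to 1x1 from a power-of-two image."""
    levels = [image]
    while len(levels[-1]) > 1:
        levels.append(downsample(levels[-1]))
    return levels
```

At render time the level is chosen by viewing distance: distant objects sample the small low-precision levels, near objects the large high-precision ones, so only one source picture needs to be prepared.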
Data compression comprises model compression and map compression. Model compression handles plain-character-encoded and JSON-encoded models, whose corresponding formats are obj and gltf, compressed with the gzip and Draco algorithms respectively. Map compression is divided into three classes: compressing the pictures; compressing the texture; and using a single picture, through expansion, to replace multi-precision pictures. Picture compression adopts the Guetzli algorithm; texture compression adopts the Basis Universal algorithm; picture expansion adopts the Mipmap algorithm and bilinear interpolation, generating pictures of various precisions from one picture so that pictures of multiple precisions need not be prepared in advance.
Another object of the present invention is to provide a three-dimensional scene rendering acceleration system using the three-dimensional scene rendering acceleration method, the three-dimensional scene rendering acceleration system including:
the data separation module is used for separating the model data into animation data and topology data;
the data compression module is used for carrying out compression operation on the data after the data separation;
and the map compression module is used for compressing the model by adopting a multi-step compression algorithm aiming at the model map.
Another object of the present invention is to provide a computer device, which includes a memory and a processor, the memory stores a computer program, and the computer program, when executed by the processor, causes the processor to execute the steps of the three-dimensional scene rendering acceleration method.
Another object of the present invention is to provide a computer-readable storage medium, which stores a computer program, which, when executed by a processor, causes the processor to execute the steps of the three-dimensional scene rendering acceleration method.
Another object of the present invention is to provide an information data processing terminal, which is used for implementing the three-dimensional scene rendering acceleration system.
By combining the technical scheme and the technical problem to be solved, the technical scheme to be protected by the invention has the advantages and positive effects that:
first, in view of the technical problems in the prior art and the difficulty of solving them, and in close combination with the claimed technical solution and the results and data obtained during research and development, the creative technical effects brought about after the problems are solved are analyzed in detail below. The specific description is as follows:
the invention focuses on digital twin WebGL rendering optimization and, based on redundant-data elimination technology, proposes an enhanced redundant-data elimination method. Aiming at the problem of excessive consumption of data acquisition time and storage space, the method reduces file size and transmission time through data separation and compression. When the model is loaded for the first time, data separation decomposes the model into animation data and topology data, preliminarily reducing the model size; data compression treats the model data differentially, eliminating redundant model information to the greatest extent and further reducing the model size. Experimental analysis shows that, compared with a traditional data processing scheme, the enhanced redundant-data elimination method proposed by the invention reduces storage cost by 66.7%.
The three-dimensional scene rendering acceleration method based on separation compression provided by the invention aims to reduce data acquisition time and storage space and is divided into data separation and data compression according to an application sequence. The data separation provided by the invention divides the model data into animation data and topology data, reduces the size of a single file, and further accelerates the transmission time of the file in a network; the data compression is carried out on the data after the data separation, different compression algorithms are adopted for different models with different formats to achieve the optimal compression ratio, a multi-step compression algorithm is adopted for model mapping to further compress the models, and redundant data of the models are eliminated to the maximum extent.
Aiming at the problem of excessive consumption of data acquisition time and storage space, the invention provides a separation-compression three-dimensional scene rendering acceleration method that combines data separation and scene compression to reduce file size and transmission time. When the model is loaded for the first time, data separation decomposes the model data into animation data and topology data, and the optimization rate can reach 66.7%; data compression treats the separated data differentially, applying gzip compression to plain-text-encoded models and Draco compression to JSON-format-encoded models, with compression ratios of up to 86.7% and 61.1% respectively. Experiments show that the enhanced redundant-data elimination method reduces data acquisition time and storage space.
Secondly, considering the technical scheme as a whole or from the perspective of products, the technical effect and advantages of the technical scheme to be protected by the invention are specifically described as follows:
the result shows that the three-dimensional scene rendering acceleration method provided by the invention can reduce the time and memory for data acquisition, reduce the data rendering calculation overhead and improve the image display frame rate under the similar rendering effect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the embodiments of the present invention will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a three-dimensional scene rendering acceleration method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for enhanced redundant data elimination provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of a model walking animation provided by an embodiment of the invention;
FIG. 4 is a schematic diagram of a GLTF architecture provided by an embodiment of the present invention;
FIG. 5 is a comparison graph of the Draco algorithm model before and after compression according to an embodiment of the present invention; figure (a) is a schematic diagram of difference.gltf before compression, and figure (b) is a schematic diagram of difference.gltf after compression;
FIG. 6 is a schematic diagram of an appearance of a model before texture compression according to an embodiment of the present invention; fig. (a) is a schematic diagram of a model before texture compression, and fig. (b) is a model map.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In view of the problems in the prior art, the present invention provides a method, a system, a medium, a device, and a terminal for accelerating three-dimensional scene rendering, which are described in detail below with reference to the accompanying drawings.
This section is an explanatory embodiment expanding on the claims so as to fully understand how the present invention is embodied by those skilled in the art.
As shown in fig. 1, the three-dimensional scene rendering acceleration method provided by the embodiment of the present invention includes the following steps:
s101, data separation: dividing the model data into animation data and topology data;
s102, data compression: performing compression operation on the data after data separation;
s103, map compression: compressing the model by adopting a multi-step compression algorithm for the model maps.
As a preferred embodiment, the three-dimensional scene rendering acceleration method provided in the embodiment of the present invention specifically includes the following steps:
first step, data separation: the three-dimensional model contains topology data, map data, and animation data, where the map data are stored outside the model file and the model accesses the map resources through a path. Animation data do not exist on static objects (ships, seats, buildings, and other objects that cannot deform), while models of dynamic objects such as people, animals, and machines, which can complete deformation or limb animation, may store a large amount of animation. Data separation is performed on the dynamic objects: the single model file is decomposed into animation data and topology data, reducing the size of each single file. For a model without animation, eliminating the animation data accelerates file parsing and reduces the storage space consumed; the animation data become resources that can be loaded on demand, improving parsing efficiency at first load. The data separation operation is carried out on the server: the server stores all model files, parses them on the server side, and separates the animation data from the other data. During model parsing, the animation file is parsed into fine-grained actions, which can then be combined into complex actions. After the animation file is separated into single files, the files must be identified; the animation data use the same data format as the model data and indicate the animation file type. Subsequent compression and caching of the model can act on the animation data, and all animation data are combined with one another to form a multi-granularity animation library. Semantic separation is performed on the animation data, dividing them into interaction, transition, accompanying, and other types.
The interaction type refers to animations played by the model in response after the user operates it, such as various gestures, a startled twitch, movement, and the like; the transition type refers to transition animations generated when the model changes to another action midway after completing one action; the accompanying type refers to animations played along with the model, such as walking, standing, and running; a newly added animation that belongs to none of these types is classified as other. The animation data and their corresponding commands form a mapping relation stored in a database, and when a client initiates a request to acquire an animation, all the data can be received immediately and visualized. One example is the walking animation of the model, as shown in fig. 3. The model walking animation shows the walking action of a simulated character model; the animation belongs to the accompanying type, and after path data are provided the model walks along the path and stops at the destination.
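The command-to-animation mapping stored in the database can be sketched with an in-memory table; the commands, file names, and record fields below are hypothetical illustrations, not the patent's schema:

```python
# In-memory stand-in for the database mapping client commands to separated
# animation files, each tagged with its semantic type.
ANIMATION_DB = {
    "wave": {"file": "wave.json", "semantic": "interaction"},
    "turn": {"file": "turn.json", "semantic": "transition"},
    "walk": {"file": "walk.json", "semantic": "accompanying"},
}

def animation_for_command(command: str) -> dict:
    """Resolve a client command to its animation record, defaulting to 'other'."""
    return ANIMATION_DB.get(command, {"file": None, "semantic": "other"})
```

A client request such as "walk" resolves to an accompanying-type animation, which is then combined with path data so the model walks along the path, as in fig. 3.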
The animation data of the model are decomposed into a separate file that keeps the format of the original model and is identified by its action type. To facilitate storage of the animation file, the file is converted into a JSON object through serialization and used in that form. The animation data are divided into basic actions and complex actions: fine-grained animations are combined into more complex animations, which are applied to models without animation to form models that can play the complex animations, displayed on the terminal screen in three-dimensional visualization. After the model's data are separated, the size of each single file is reduced, saving a large amount of transmission and parsing time when the model is first loaded; experimental analysis of model size and parse time gives the results shown in Table 1.
TABLE 1 animation data separation results
[Table 1 is reproduced as an image in the original document.]
model.fbx represents a file containing no animation data, and the original model file contains all animation data. R_or represents the proportion of the current file size in the original file, and is calculated as:
R_or = currentSize / originalModelSize × 100%
wherein currentSize represents the size of the current file; originalModelSize represents the size of the original file; t represents the time required to parse the current file; and δ_or represents the optimization rate of the file. The data in Table 1 show that after data separation the size of each single file is reduced by more than 50%, with the highest optimization rate reaching 66.7%. Loading only the basic model.fbx takes just 357 milliseconds: the reduced file size shortens parsing time, a saving of 715 milliseconds compared with the 1072 milliseconds of the original model, greatly reducing model parse time. The calculation of the optimization rate δ_or is shown in Algorithm 1. If a model containing animation needs to be loaded, model.fbx and the corresponding animation file must be loaded at the same time; if the current model does not need animation data, only model.fbx needs to be loaded, and when the original model.fbx is loaded directly the optimization rate is 0.
Algorithm 1:
[Algorithm 1 is reproduced as an image in the original document.]
when only a model is displayed, animation data do not need to be considered: the proportion rate of the model to the original model originalModel is solved first, and the optimization rate δ_or is then solved; the calculation process is as follows:
rate = model.time / originalModel.time
δ_or = (1 - rate) × 100%
when the initial model is displayed, the optimization rate is 0. When the model with animation is displayed, the animation data and model data are combined first, the combined data are taken as a whole to calculate their proportion in the original model, and the optimization rate is finally calculated.
And step two, data compression: in the enhanced redundant-data-elimination method, data compression is the second step of processing the three-dimensional model. Three-dimensional models come in a large number of formats; for compatible handling of all of them, models are divided by encoding format into plain character-encoded and JSON-encoded models, which are compressed with the gzip and Draco algorithms respectively, and models in other formats are converted into one of the two for processing. For models containing a large amount of map content, map compression is performed with several classes of methods, reducing the redundant information in the model file as far as possible.
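The classification step described above can be sketched as a dispatch function. The format sets below are assumptions for illustration: the text names obj, 3mf, dae, fbx and gltf but does not give an exhaustive mapping, and the function name is invented.

```python
# Hypothetical dispatch for the compression step: plain character-encoded
# formats are converted to obj and gzip-compressed; JSON-encoded formats are
# converted to gltf and Draco-compressed. The set memberships are an
# assumption, not taken from the patent.

PLAIN_TEXT_FORMATS = {"obj", "3mf", "dae", "fbx"}   # treated as character-encoded
JSON_FORMATS = {"gltf"}                              # treated as JSON-encoded

def choose_pipeline(ext: str) -> tuple[str, str]:
    """Return (target_format, compression_algorithm) for a model file."""
    ext = ext.lower().lstrip(".")
    if ext in JSON_FORMATS:
        return ("gltf", "draco")
    if ext in PLAIN_TEXT_FORMATS:
        return ("obj", "gzip")
    raise ValueError(f"unsupported model format: {ext}")

print(choose_pipeline(".fbx"))   # ('obj', 'gzip')
print(choose_pipeline("gltf"))   # ('gltf', 'draco')
```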
Model compression
Model compression means compressing the model's topology data, which describe all of the model's spatial information; from this information WebGL can render the topology data as a Mesh, a three-dimensional model formed from a series of grids. Common model formats include obj, 3mf, dae, fbx, and gltf. For unified processing, the model compression method divides all models into plain character-encoded and JSON-encoded models: all plain character-encoded models are converted to obj and compressed with the gzip algorithm, while JSON-encoded models are converted to gltf and compressed with the Draco algorithm. The gzip algorithm, whose full name is GNU zip, was first used for file compression on UNIX systems; its core is the Deflate algorithm, which combines LZ77 and Huffman coding and takes plain character-encoded files as its compression target. Before gzip compression is used, the client must be checked for gzip support, the identifier being the Accept-Encoding header. The compressed data for the ship model and other models are shown in Table 2.
TABLE 2 Compressed data for the ship and other models
(Table 2 appears as an image in the original publication and is not reproduced here.)
The compression ratio calculation formula is as follows:
compression ratio = (1 - compressed size/original size) × 100%
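As a rough illustration of the gzip step and the ratio formula above (in practice the server first checks the client's Accept-Encoding header), applied to a toy obj-style payload; real model ratios depend entirely on the data:

```python
import gzip

def gzip_ratio(data: bytes) -> float:
    """(1 - compressed size / original size) * 100, in percent."""
    return (1.0 - len(gzip.compress(data, compresslevel=9)) / len(data)) * 100.0

# Toy obj-like text; repetitive vertex and face lines compress extremely well.
obj_text = ("v 0.000000 1.000000 0.000000\n" * 500 +
            "f 1 2 3\n" * 500).encode("ascii")
print(f"{gzip_ratio(obj_text):.1f}% smaller after gzip")
```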
As the compression ratios show, gzip compresses plain character-encoded models well: every sample exceeds 80%, and since model size is proportional to transmission time, gzip compression cuts the network transmission time of these models by more than 80%. The gzip algorithm is clearly effective, but it acts only on plain character-encoded models and so is not universal. To be compatible with other formats, data compression also introduces JSON-encoded model formats, which are uniformly converted to gltf for processing. The gltf format aims to unify three-dimensional model format standards and is essentially a JSON object. A model in gltf format describes the whole scene, including the scene graph, cameras, light sources, and three-dimensional models. The topology data of the three-dimensional model are stored in binary, and the other parts reference the related content through indexes into the binary data; because most of the content is stored in binary, the file is smaller than in other formats. gltf stores data more efficiently and is widely used today; its structure is shown in fig. 4. The individual structures in the figure have the following meanings: scene: the scene entry in gltf, at the top level once the scene is transformed into a tree structure; all contents of the tree are accessible through scene. node: a single node in the scene; it can undergo transformations such as rotation, translation, and scaling, can be composed of other nodes, and can also serve as a virtual node managing other nodes. Node types are mesh, camera, skin, and virtual nodes.
camera: defines the parameters, coordinates, and orientation of the camera that renders the scene. mesh: represents the three-dimensional objects in the scene and defines their appearance during rendering; it serves as the index of a three-dimensional object. The topology data of a three-dimensional object are stored in accessor, and the rendering material in material. animation: defines how the state of the related nodes changes over time; the animation component plays it as the model's animation behavior.
accessor: virtual storage of data, including geometric data, skinning parameters, and time-dependent animation behaviors. The data are queried through bufferView; geometric data are supplied to mesh, skinning parameters to skin, and animation behaviors to animation. Converting data to binary reduces storage overhead, and the accessor is a key component in keeping gltf files small.
material: defines the rendered appearance of a three-dimensional object. During rendering, the renderer defines the object surface according to this content and renders it through graphics computation. To make the appearance easy to define, all of its information is defined within the texture, which is then referenced to give the three-dimensional object its rendered appearance.
texture: consists of a sampler and an image; its purpose is to map the texture image onto the surface of a three-dimensional object.
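A minimal glTF 2.0 asset illustrating this layering (scene, node, mesh, accessor, bufferView, buffer): field names follow the glTF 2.0 specification, while the buffer URI and byte counts here describe a hypothetical single triangle.

```python
import json

# Hand-written minimal glTF asset: one scene, one node, one triangle mesh.
# The topology itself lives in an external binary buffer ("triangle.bin",
# a placeholder); the JSON only indexes into it via accessor/bufferView.
gltf = {
    "asset": {"version": "2.0"},
    "scene": 0,
    "scenes": [{"nodes": [0]}],
    "nodes": [{"mesh": 0, "name": "triangle"}],
    "meshes": [{"primitives": [{"attributes": {"POSITION": 0}}]}],
    "accessors": [{"bufferView": 0, "componentType": 5126,  # FLOAT
                   "count": 3, "type": "VEC3"}],
    "bufferViews": [{"buffer": 0, "byteOffset": 0, "byteLength": 36}],
    "buffers": [{"uri": "triangle.bin", "byteLength": 36}],  # 3 verts * 12 B
}
doc = json.dumps(gltf)
print(len(doc), "bytes of JSON; the topology lives in the binary buffer")
```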
The Draco algorithm compresses the gltf format best. Draco, a library released by the Google Chrome team in January 2017, mainly provides compression and decompression of 3D geometric meshes and point clouds; it supports compressing point information, connectivity information, texture mapping, color information, normal information, and other geometry-related attributes, and is a quantization-based compression method. Its encoder achieves the best compression scale by rearranging points based on a KD-tree, with a quantization parameter between 1 and 31 chosen according to the tolerable loss of detail. Because point clouds and geometric meshes have essentially different data structures, the two types of data are compressed differently. The geometric-mesh path relies on the Edgebreaker algorithm, which encodes the mesh in a spiral manner, encoding the connectivity of all triangular faces into strings while recording visited vertices and faces, and compressing each string independently. The compression ratio is influenced by the number of quantization bits, which lies in the interval [1, 31], and Draco processes each attribute with its own quantization setting.
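The quantization principle behind such compressors can be sketched in a few lines: coordinates are snapped onto an integer grid with `bits` bits of resolution, so reconstruction error is bounded by half a grid step. This illustrates the idea only; it is not Draco's actual encoder.

```python
# Quantization sketch: map floats in [lo, hi] to integers in [0, 2^bits - 1],
# as in quantization-based compressors such as Draco (1-31 bits per attribute).

def quantize(values, lo, hi, bits):
    steps = (1 << bits) - 1
    return [round((v - lo) / (hi - lo) * steps) for v in values]

def dequantize(codes, lo, hi, bits):
    steps = (1 << bits) - 1
    return [lo + c / steps * (hi - lo) for c in codes]

verts = [0.0, 0.1234567, 0.5, 0.9999]
codes = quantize(verts, 0.0, 1.0, bits=14)   # 14 bits: a plausible position setting
back = dequantize(codes, 0.0, 1.0, bits=14)
err = max(abs(a - b) for a, b in zip(verts, back))
print(f"max error at 14 bits: {err:.2e}")    # bounded by half a grid step
```

Lowering `bits` shrinks the encoded integers (and thus the file) at the cost of a coarser grid, which is exactly the detail-loss trade-off described above.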
The Draco algorithm supports both lossy and lossless compression. In a digital twin, the slight image-quality change caused by lossy compression lies within the fault-tolerance range, the lost detail being indistinguishable to the naked eye, and lossy compression achieves a higher compression ratio. When a three-dimensional model has few faces it cannot tolerate any loss of detail and cannot be compressed; but in that case the data volume is small and the transmission time short, the savings from compression are small, and compression adds computational overhead, so this situation needs no processing. The experimental results of the Draco algorithm are shown in Table 3.
TABLE 3 Draco compression results
(Table 3 appears as an image in the original publication and is not reproduced here.)
The compression ratio is calculated with the same formula as above.
The experimental data show that the Draco algorithm achieves a high compression ratio on building models. The hgj.gltf model contains a large number of particle models, for which the Draco compression ratio is low. Compared with plain-text formats, the gltf format already stores data in binary, yet the model can be compressed further: the Draco compression ratio reaches 61.1%, so Draco compression matters for accelerating data transmission. Experiments with Draco's lossy compression give the before/after ratios shown in fig. 5. After the three models are compressed, their sizes are reduced by more than 40%, yet the rendering results before and after compression cannot be distinguished by the naked eye, further confirming the advantages of the Draco lossy compression algorithm.
Data compression removes a large amount of redundant model data. For compatible processing, all three-dimensional models are divided into plain character-encoded and JSON-encoded models, stored in the obj and gltf formats respectively, with other types converted to one of the two. Plain character-encoded models are compressed with the gzip algorithm, reaching a compression ratio of up to 86.7%; JSON-encoded models are processed with the Draco lossy compression algorithm, reaching up to 61.1%. Experiments confirm that data compression effectively reduces the redundant information in model data and shrinks the models.
Thirdly, map compression: for a model whose appearance matters, the maps applied to its surface are crucial, and map data are often larger than topology data, so data compression must address the maps as well as the topology. The purpose of map compression is to reduce map data, lower the memory occupied during parsing, and speed up parsing. In a digital twin the model cannot use a map directly; the map must be converted into a texture and transferred to a buffer for the GPU to use. Texture dimensions must be 2^n × 2^m, where n and m are any positive integers; if the size does not comply, extra processing is required to correct it. A map is essentially a picture, commonly in jpg, png, gif, or similar formats, and after conversion to texture, textures of the same size occupy the same amount of video memory. A picture that is compressed and then decompressed at the client is converted into a texture the same size as before compression, so compressing the picture cannot reduce video-memory usage; only compressing the texture can. Map optimization therefore splits into picture compression and texture compression: picture compression speeds up transmission of the map over the network, while texture compression reduces video-memory usage.
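The power-of-two size constraint mentioned above can be checked with bit arithmetic, and the usual correction is to resample the image up to the next power of two. A small sketch, not tied to any particular engine:

```python
# WebGL 1 requires power-of-two texture dimensions for mipmapping and repeat
# wrapping; non-conforming images need extra processing, e.g. resizing to the
# next power of two.

def is_pow2(x: int) -> bool:
    return x > 0 and (x & (x - 1)) == 0

def next_pow2(x: int) -> int:
    p = 1
    while p < x:
        p <<= 1
    return p

print(is_pow2(256), is_pow2(300))    # True False
print(next_pow2(300), next_pow2(1))  # 512 1
```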
In summary, map compression falls into three classes. The first compresses the picture itself, reducing its transmission time over the network; the second compresses the converted texture, reducing video-memory usage; the third uses a single picture, expanded algorithmically, in place of pictures at multiple precisions, greatly reducing data volume. Map compression first applies the first class to speed up picture acquisition, then the second class to reduce video-memory usage, and finally the third class to generate multi-precision pictures dynamically from a single picture. In the first class of map compression, Jpeg pictures are compressed with the Guetzli algorithm, whose compression ratio is 20-30% higher than current compression algorithms. Jpeg quality depends mainly on a multi-stage compression process comprising color-space conversion, discrete cosine transform, and quantization. Guetzli generates smaller picture files by optimizing the quantization stage at a cost in visual quality, and balances file size against minimum loss with a search algorithm that describes color perception and visual masking more thoroughly and in more detail than simple color transforms and discrete cosine transforms. Guetzli produces smaller pictures but takes longer to compress. In the experiments, the compression algorithm runs only the first time a Jpeg picture is loaded, accelerating its transmission over the network; subsequent requests for the same picture are served locally from the data cache, so the algorithm need not run again.
In the second class of map compression, the converted texture is compressed losslessly with the Basis Universal algorithm, released jointly by Google and Binomial in 2019. Basis Universal is a GPU texture compression codec that can be used like a picture codec and supports various common texture formats. A png file is converted into a basis file; after the picture is converted into a texture, GPU video-memory usage is 6-8 times lower, improving texture transmission efficiency. The conversion process is shown in fig. 6.
In the third class of map compression, a single map file is used to generate maps of multiple precisions automatically by algorithm, replacing manually supplied maps of multiple precisions. During rendering, maps of different precision are chosen according to the viewing distance: when the object is far away a low-precision map is rendered, the lost precision being negligible at that distance, and when it is near a high-precision map is rendered to enhance the visual effect. The automatic generation can use the Mipmap algorithm, anisotropic filtering, or bilinear interpolation: bilinear interpolation generates higher-precision maps and avoids aliasing, while the Mipmap algorithm and anisotropic filtering generate lower-precision maps to improve storage efficiency. With the Mipmap algorithm, each sub-level costs only 1/4 of the storage of the level above it, since its width and height are the upper level's divided by 2; however, sub-level maps represent regions with inconsistent aspect ratios poorly. Anisotropic filtering accounts for inconsistent aspect ratios but requires more computation and storage. Where performance matters, mipmaps are the better fit.
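The mipmap storage cost can be checked by enumerating the chain: each level quarters the pixel count, so all sub-levels together add roughly one third of the base level (the geometric series 1/4 + 1/16 + ...):

```python
# Enumerate a mipmap chain: each level halves width and height (minimum 1)
# until a 1x1 level is reached.

def mip_chain(w: int, h: int):
    levels = [(w, h)]
    while w > 1 or h > 1:
        w, h = max(1, w // 2), max(1, h // 2)
        levels.append((w, h))
    return levels

chain = mip_chain(256, 256)
base = 256 * 256
overhead = sum(w * h for w, h in chain[1:]) / base
print(len(chain), f"levels, sub-level overhead {overhead:.3f} of the base size")
```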
Data compression improves first-load speed and reduces the memory occupied by stored resources. It comprises model compression and map compression. Model compression handles plain character-encoded and JSON-encoded models, in the obj and gltf formats respectively, compressing them with the gzip and Draco algorithms; the highest compression ratios are 86.7% and 61.1%. Map compression falls into three classes: compressing the picture; compressing the texture; and replacing multi-precision pictures through expansion of a single picture. Picture compression uses the Guetzli algorithm; texture compression uses the Basis Universal algorithm; picture expansion uses the Mipmap algorithm and bilinear interpolation, generating pictures of multiple precisions from one picture without preparing pictures of multiple precisions in advance.
To address the excessive consumption of data-acquisition time and storage space, the invention provides a separation-compression three-dimensional scene rendering acceleration method, which combines data separation and scene compression to reduce file size and transmission time. When a model is loaded for the first time, data separation decomposes the model data into animation data and topology data, with an optimization rate of up to 66.7%; data compression then processes the separated data differentially, applying gzip compression to plain-text-encoded models and Draco compression to JSON-format-encoded models, with compression ratios of up to 86.7% and 61.1% respectively. Experiments show that the enhanced redundant-data-elimination method reduces data-acquisition time and storage space.
The three-dimensional scene rendering acceleration system provided by the embodiment of the invention comprises:
the data separation module is used for separating the model data into animation data and topology data;
the data compression module is used for carrying out compression operation on the data after the data separation;
and the map compression module is used for compressing the model by adopting a multi-step compression algorithm aiming at the model map.
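A hypothetical skeleton of these three modules; the text does not specify interfaces, so all class and method names here are invented, and gzip of serialized JSON stands in for the full gzip/Draco step.

```python
# Invented module skeleton mirroring the three modules listed above.
import gzip
import json

class DataSeparationModule:
    def separate(self, model: dict) -> tuple[dict, dict]:
        """Split a model dict into (topology_data, animation_data)."""
        animation = model.pop("animations", {})
        return model, animation

class DataCompressionModule:
    def compress(self, topology: dict) -> bytes:
        """Stand-in for the gzip/Draco step: gzip the serialized topology."""
        return gzip.compress(json.dumps(topology).encode("utf-8"))

class MapCompressionModule:
    def compress_maps(self, maps: list) -> list:
        """Placeholder for the Guetzli / Basis Universal / mipmap steps."""
        return maps

model = {"vertices": [0, 1, 2], "animations": {"walk": [1, 2]}}
topology, animation = DataSeparationModule().separate(model)
blob = DataCompressionModule().compress(topology)
print(sorted(topology), sorted(animation), len(blob) > 0)
```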
It should be noted that the embodiments of the present invention can be realized by hardware, software, or a combination of software and hardware. The hardware portions may be implemented using dedicated logic; the software portions may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the apparatus and methods described above may be implemented using computer-executable instructions and/or embodied in processor control code, such code being provided on a carrier medium such as a disk, CD- or DVD-ROM, programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The apparatus of the present invention and its modules may be implemented by hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field-programmable gate arrays and programmable logic devices, or by software executed by various types of processors, or by a combination of hardware circuits and software, e.g., firmware.
The above description is only for the purpose of illustrating the present invention and the appended claims are not to be construed as limiting the scope of the invention, which is intended to cover all modifications, equivalents and improvements that are within the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A three-dimensional scene rendering acceleration method is characterized by comprising the following steps: judging the data type, and if the data type is a three-dimensional model, performing data separation judgment; judging whether the three-dimensional model contains animation data, if so, performing data separation operation, extracting animation from the model and performing data compression; if not, directly compressing the data; the data compression is divided into model compression and mapping compression, redundant data elimination processing is carried out on data in different formats by adopting different compression algorithms respectively, the model is compressed by adopting a multi-step compression algorithm aiming at the model mapping, and finally the data are returned to the client.
2. The three-dimensional scene rendering acceleration method according to claim 1, characterized in that the three-dimensional scene rendering acceleration method comprises the steps of:
step one, data separation: dividing the model data into animation data and topology data;
step two, data compression: performing compression operation on the data after data separation;
step three, mapping compression: and compressing the model by adopting a multi-step compression algorithm aiming at the model map.
3. The three-dimensional scene rendering acceleration method according to claim 2, characterized in that the data separation in the first step includes:
the three-dimensional model comprises topology data, map data and animation data; the map data are stored outside the model file, and the model accesses the map resources through a path; when data separation is performed on a dynamic object, a single model file is decomposed into animation data and topology data; the data separation operation is performed on a server, which stores all model files, parses them on the server side, and separates the animation data from the other data; in the process of parsing the model, the animation file is parsed into fine-grained actions, which are then combined into complex actions; the animation file is separated into individual files and then identified, the animation data using the same data format as the model data to indicate the animation file type; subsequent compression and caching of the model also act on the animation data, and all animation data are combined with one another to form a multi-granularity animation library;
performing semantic separation on the animation data, dividing it into interaction, transition, accompanying and other types; the interaction type refers to animations played by the model in response to user operations; the transition type refers to transition animations generated when the model changes from one completed action to another; the accompanying type refers to animations that play alongside the model at all times; newly added animations belonging to none of these types are classified as other; the animation data and the corresponding commands form a mapping relation stored in a database, and when a client initiates a request to acquire an animation, all data are received and visualized; the model walking animation shows the walking action of a simulated character model, the animation data belong to the accompanying type, and after path data are provided the model walks along the path and stops at the destination.
4. The three-dimensional scene rendering acceleration method of claim 3, characterized in that the animation data of the model is decomposed as a separate file, has the format of the original model, and is identified by the action type; converting the file into a JSON object through serialization and using the JSON object; the animation data is divided into basic actions and complex actions, fine-grained animations are combined into complex animations through combination, the complex animations are applied to models without the animations, models for playing the complex animations are formed, and the models are displayed on a terminal screen in a three-dimensional visualization mode;
Ror represents the proportion of the current file size to the original file size, calculated as:
Ror = currentSize/originalModelSize × 100%;
wherein currentSize represents the size of the current file; originalModelSize represents the size of the original file; T represents the time required for parsing the current file; δor represents the optimization rate of the file; if a model containing animation needs to be loaded, model.fbx and the corresponding animation file are loaded at the same time; if the current model does not need animation data, only model.fbx is loaded; when the original model.fbx is loaded directly, the optimization rate is 0;
when only the model is displayed, animation data need not be considered; the proportion rate of the model to the original model is calculated first, and then the optimization rate δor, as follows:
rate = model.time/originalModel.time;
δor = (1 - rate) × 100%;
when the initial model is displayed, the optimization rate is 0; when displaying the model with animation, combining the animation data and the model data, taking the combined data as a whole, calculating the proportion of the data in the original model, and then calculating the optimization rate.
5. The three-dimensional scene rendering acceleration method of claim 2, characterized in that the data compression in the second step comprises:
dividing the three-dimensional model into models of pure character coding and JSON coding according to the coding format, respectively compressing by using gzip and Draco algorithms, and converting the models of other formats into one of the formats for processing; performing mapping compression on a model containing a large amount of mapping contents by adopting a multi-class method;
the model compression represents compression of the model's topology data, which describe all spatial information of the model; according to this information, WebGL renders the topology data into a Mesh, which represents a three-dimensional model formed from a series of grids; the model compression method divides all models into plain character-encoded and JSON-encoded models, converts all plain character-encoded model formats to obj and the JSON-encoded model format to gltf, compresses obj-format models with the gzip algorithm, and compresses gltf models with the Draco algorithm; before gzip compression is used, the client is checked for gzip support, the identifier being Accept-Encoding;
the compression ratio calculation formula is as follows:
(1 - compressed size/original size) × 100%;
data compression introduces a JSON coded model format, and uniformly converts the JSON coded model format into a gltf format for processing; the model with the format of gltf describes the whole scene, including the scene, the camera, the light source and the three-dimensional model;
the topological data of the three-dimensional model is stored in binary, and other parts find related contents for reference by using indexes of the binary data; the structural meanings of gltf are as follows: scene represents a scene entrance in the gltf, is positioned at the topmost layer after the scene is converted into a tree structure, and accesses all contents in the tree structure through the scene; the node is a single node in the scene and is used for performing rotation, translation and scaling transformation, and the node consists of other nodes and is used as a virtual node to manage other nodes; the node types are divided into mesh, camera, skin and virtual nodes;
dividing the three-dimensional model into models of pure character coding and JSON coding, respectively storing the models by using obj and gltf formats, and converting other types into one of obj and gltf formats; the model of pure character coding is compressed by adopting a gzip algorithm, and the model of JSON coding is processed by adopting a Draco lossy compression algorithm.
6. The three-dimensional scene rendering acceleration method of claim 2, characterized in that the map compression in step three comprises:
the map compression is divided into three classes: the first compresses the picture; the second compresses the converted texture; the third replaces pictures of multiple precisions by expanding a single picture; map compression first uses the first class, picture compression, to accelerate picture acquisition, then uses the second class, texture compression, and finally uses the third class to dynamically generate multi-precision pictures from a single picture; in the first class of map compression, Jpeg pictures are compressed with the Guetzli algorithm, whose multi-stage compression process comprises color-space conversion, discrete cosine transform and quantization; Guetzli generates a smaller picture file by optimizing the quantization stage at a cost in visual quality, and uses a search algorithm to balance file size against minimum loss; the search algorithm uses color transforms and discrete cosine transforms to describe color perception and visual masking; the Guetzli compression algorithm compresses a Jpeg picture only when it is loaded for the first time, and subsequent requests for the same picture are served locally from the data cache;
in the second class of map compression, lossless compression is performed on the converted texture with the Basis Universal algorithm, which converts a png-format file into a basis file; in the third class of map compression, a single map file is used to automatically generate maps of multiple precisions by algorithm, replacing manually supplied maps of multiple precisions; during rendering, maps of different precision are rendered according to the viewing distance: when the object is far away a low-precision map is rendered, and when it is near a high-precision map is rendered to enhance the visual effect; the automatic generation algorithm adopts the Mipmap algorithm, an anisotropic filtering algorithm and a bilinear interpolation algorithm;
the data compression comprises model compression and map compression; the model compression processes plain character-encoded and JSON-encoded models, in the obj and gltf formats respectively, compressing them with the gzip and Draco algorithms; the map compression is divided into three classes: compressing the picture; compressing the texture; and replacing multi-precision pictures by expanding a single picture; the picture compression adopts the Guetzli algorithm; the texture compression adopts the Basis Universal algorithm; the picture expansion adopts the Mipmap algorithm and bilinear interpolation, generating pictures of multiple precisions from one picture without preparing pictures of multiple precisions in advance.
7. A three-dimensional scene rendering acceleration system to which the three-dimensional scene rendering acceleration method according to any one of claims 1 to 6 is applied, characterized in that the three-dimensional scene rendering acceleration system includes:
the data separation module is used for separating the model data into animation data and topology data;
the data compression module is used for carrying out compression operation on the data after the data separation;
and the map compression module is used for compressing the model by adopting a multi-step compression algorithm aiming at the model map.
8. A computer device, characterized in that the computer device comprises a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to carry out the steps of the three-dimensional scene rendering acceleration method according to any one of claims 1 to 6.
9. A computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the three-dimensional scene rendering acceleration method according to any one of claims 1 to 6.
10. An information data processing terminal characterized in that the information data processing terminal is configured to implement the three-dimensional scene rendering acceleration system according to claim 7.
CN202211450581.XA 2022-11-18 2022-11-18 Three-dimensional scene rendering acceleration method, system, medium, device and terminal Pending CN115908672A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211450581.XA CN115908672A (en) 2022-11-18 2022-11-18 Three-dimensional scene rendering acceleration method, system, medium, device and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211450581.XA CN115908672A (en) 2022-11-18 2022-11-18 Three-dimensional scene rendering acceleration method, system, medium, device and terminal

Publications (1)

Publication Number Publication Date
CN115908672A true CN115908672A (en) 2023-04-04

Family

ID=86470542

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211450581.XA Pending CN115908672A (en) 2022-11-18 2022-11-18 Three-dimensional scene rendering acceleration method, system, medium, device and terminal

Country Status (1)

Country Link
CN (1) CN115908672A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116977523A (en) * 2023-07-25 2023-10-31 深圳市快速直接工业科技有限公司 STEP format rendering method at WEB terminal
CN116977523B (en) * 2023-07-25 2024-04-26 快速直接(深圳)精密制造有限公司 STEP format rendering method at WEB terminal
CN117278053A (en) * 2023-11-17 2023-12-22 南京智盟电力有限公司 GLTF-JSON format data compression method, system and device
CN117278053B (en) * 2023-11-17 2024-02-09 南京智盟电力有限公司 GLTF-JSON format data compression method, system and device

Similar Documents

Publication Publication Date Title
CN106611435B (en) Animation processing method and device
CN115908672A (en) Three-dimensional scene rendering acceleration method, system, medium, device and terminal
WO2022193941A1 (en) Image rendering method and apparatus, device, medium, and computer program product
CN110751696B (en) Method, device, equipment and medium for converting BIM (building information modeling) model data into glTF (glTF) data
US11676325B2 (en) Layered, object space, programmable and asynchronous surface property generation system
US7911467B2 (en) Method and system for displaying animation with an embedded system graphics API
US20050110790A1 (en) Techniques for representing 3D scenes using fixed point data
Behr et al. Using images and explicit binary container for efficient and incremental delivery of declarative 3d scenes on the web
US10733793B2 (en) Indexed value blending for use in image rendering
CN112489183A (en) Unity 3D-based skeletal animation rendering method and system
CN115082609A (en) Image rendering method and device, storage medium and electronic equipment
CN114356868A (en) Three-dimensional model file processing method and related equipment thereof
CN112843700B (en) Terrain image generation method and device, computer equipment and storage medium
CN114491352A (en) Model loading method and device, electronic equipment and computer readable storage medium
Terrace et al. Unsupervised conversion of 3D models for interactive metaverses
KR20220141843A (en) Super-resolution of block compressed textures for texture mapping applications
CN115088017A (en) Intra-tree geometric quantization of point clouds
Lluch et al. Interactive three-dimensional rendering on mobile computer devices
CN117372662A (en) Three-dimensional model light weight method based on complex equipment
CN114247138B (en) Image rendering method, device and equipment and storage medium
US11948338B1 (en) 3D volumetric content encoding using 2D videos and simplified 3D meshes
Stein et al. hare3d-rendering large models in the browser
Belayneh Indexed 3D Scene Layers (I3S)–An Efficient Encoding and Streaming OGC Community Standard for Massive Geospatial Content
Burgos et al. MPEG 3D graphics representation
Chávez et al. Lightweight visualization for high-quality materials on WebGL

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination