CN116433818B - Cloud CPU and GPU parallel rendering method - Google Patents
- Publication number
- CN116433818B (application CN202310285820.9A)
- Authority
- CN
- China
- Prior art keywords
- model
- sub
- rendering
- lightweight
- dimensional model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a cloud CPU and GPU parallel rendering method, which comprises the following steps: unifying the format of a three-dimensional model and lightweighting it to obtain a lightweight three-dimensional model; for each lightweight three-dimensional model in a cloud rendering server cluster, dividing the model into a plurality of sub-models by the CPU of the cluster using a slicing task scheduling method, and generating an independent GPU scheduling task for each sub-model, thereby obtaining a plurality of GPU scheduling tasks corresponding to the lightweight three-dimensional model; and, according to these GPU scheduling tasks, dynamically rendering the corresponding sub-models with a dynamic loading algorithm to obtain a rendering result of the lightweight three-dimensional model. By adopting dynamic rendering, the invention effectively improves rendering speed and reduces the consumption of computing resources; the rendering result is finally transmitted to the user terminal as streaming media, which improves the loading efficiency of the rendering result.
Description
Technical Field
The invention belongs to the technical field of data processing, and particularly relates to a cloud CPU and GPU parallel rendering method.
Background
To let mobile devices view three-dimensional models that would otherwise require a high-performance graphics workstation, rendering resources must be aggregated, rendering tasks decomposed, and rendering efficiency increased. In a cloud CPU (Central Processing Unit) and GPU (Graphics Processing Unit) parallel rendering method, the model is rendered in real time on the server side and the resulting frames are sent to the client according to its display requirements. The client receives and displays the server's rendering result in a browser. The CPU, memory and graphics-card resources occupied by the browser are unaffected by the model itself, so client resource utilization remains essentially constant. Most cloud rendering services, such as Alibaba Cloud, Shanghai net rendering and Renderbus, perform rendering on a serial CPU architecture. A serial CPU architecture processes items one by one and is suitable for ordered static-image scenes, but it cannot efficiently handle large-scale continuous dynamic or real-time graphics applications.
Disclosure of Invention
To address the defects of the prior art, the invention provides a cloud CPU and GPU parallel rendering method that solves the problems existing in the prior art.
To achieve this aim, the invention adopts the following technical scheme:
a cloud CPU and GPU parallel rendering method comprises the following steps:
unifying the format of the three-dimensional model and lightweighting it to obtain a lightweight three-dimensional model, wherein the lightweight three-dimensional model is stored in a cloud rendering server cluster;
for each lightweight three-dimensional model in the cloud rendering server cluster, dividing the model into a plurality of sub-models by the CPU of the cloud rendering server cluster using a slicing task scheduling method, and generating an independent GPU scheduling task for each sub-model, thereby obtaining a plurality of GPU scheduling tasks corresponding to the lightweight three-dimensional model;
and, according to the plurality of GPU scheduling tasks corresponding to the lightweight three-dimensional model, dynamically rendering the corresponding sub-models with a dynamic loading algorithm to obtain a rendering result of the lightweight three-dimensional model.
In one possible implementation manner, after the sub-models corresponding to the lightweight three-dimensional model have been dynamically rendered with the dynamic loading algorithm according to the corresponding GPU scheduling tasks, the method further includes: transmitting the rendering result of the lightweight three-dimensional model to the user terminal as streaming media for display.
In one possible implementation manner, the format unification and lightweighting of the three-dimensional model comprises a front-end data processing process and a cloud data processing process;
the front-end data processing process comprises the following steps:
obtaining the three-dimensional model to be uploaded, and converting it into a mesh-format file on the front-end device, thereby achieving preliminary lightweighting and a uniform format for the model to be uploaded;
uploading the mesh-format three-dimensional model to the cloud rendering server cluster through the front-end device;
the cloud data processing process comprises the following steps:
receiving the three-dimensional model to be uploaded transmitted by the front-end equipment through the cloud rendering server cluster to obtain a target three-dimensional model, and compressing the target three-dimensional model by adopting a QEM algorithm to obtain a compressed model; the compression ratio when the QEM algorithm is adopted for compression is between 1:10 and 1:50;
and converting the compressed model into a B3D-format file to obtain the lightweight model.
In one possible implementation manner, dividing the lightweight three-dimensional model into a plurality of sub-models by the CPU of the cloud rendering server cluster using a slicing task scheduling method, and generating an independent GPU scheduling task for each sub-model to obtain a plurality of GPU scheduling tasks corresponding to the lightweight three-dimensional model, includes:
cutting the lightweight three-dimensional model into blocks through a CPU of the cloud rendering server cluster to obtain a plurality of sub-models and the size of each sub-model;
assigning a unique identifier to each sub-model to obtain a unique identifier corresponding to each sub-model;
allocating GPU resources to each sub-model using a sub-model size to GPU memory ratio of 1:20, wherein the GPU resources indicate the GPU on which the sub-model runs and the amount of GPU memory it occupies;
and associating the unique identifier of the sub-model with the GPU resource to obtain the GPU scheduling task.
In one possible implementation manner, dynamically rendering the plurality of sub-models corresponding to the lightweight three-dimensional model with a dynamic loading algorithm according to the plurality of GPU scheduling tasks corresponding to the lightweight three-dimensional model, to obtain a rendering result of the lightweight three-dimensional model, includes:
obtaining the model bounding box size of each sub-model corresponding to the lightweight three-dimensional model, the association degree between each sub-model and the camera view frustum, and the distance between the center of the model bounding box corresponding to each sub-model and the camera;
determining a rendering scheme for each sub-model according to its model bounding box size, its association degree with the camera view frustum, and the distance between the center of its model bounding box and the camera, wherein the rendering scheme is either to execute the sub-model's GPU scheduling task and load the sub-model, or to unload the sub-model;
and rendering each sub-model according to the GPU scheduling task and the rendering scheme corresponding to each sub-model to obtain a rendering result of the lightweight three-dimensional model.
In one possible implementation manner, obtaining the model bounding box size of each sub-model corresponding to the lightweight three-dimensional model, the association degree between each sub-model and the camera view frustum, and the distance between the center of the model bounding box corresponding to each sub-model and the camera includes:
the model bounding box size of each sub-model corresponding to the lightweight three-dimensional model is obtained as follows:
S=H×W×L
wherein S represents the size of the model bounding box, H represents the height of the model bounding box, W represents the width of the model bounding box, and L represents the length of the model bounding box;
detecting whether the sub-model is within the camera view frustum; if so, determining that the association degree between the sub-model and the camera view frustum is a first preset value, otherwise determining that it is a second preset value; the first preset value and the second preset value are opposite numbers (one positive, one negative);
and obtaining the distance between the center of the model bounding box corresponding to each sub-model and the camera.
In one possible implementation manner, determining the rendering scheme for each sub-model according to its model bounding box size, its association degree with the camera view frustum, and the distance between the center of its model bounding box and the camera includes:
A1, judging from the association degree whether the sub-model lies within the camera view frustum; if so, determining that the rendering scheme for the sub-model is to execute its GPU scheduling task and load the sub-model, otherwise proceeding to step A2;
A2, collecting the sub-models that have not yet been assigned a rendering scheme to obtain the target sub-models;
A3, obtaining the importance degree of each target sub-model from its model bounding box size, its association degree with the camera view frustum, and the distance between the center of its model bounding box and the camera;
A4, sorting the target sub-models by importance degree from largest to smallest, determining that the rendering scheme for the top N sub-models is to execute their GPU scheduling tasks and load them, and determining that the rendering scheme for the remaining target sub-models is to unload them.
In one possible implementation manner, the importance degree of a target sub-model is obtained from its model bounding box size, its association degree with the camera view frustum, and the distance between the center of its model bounding box and the camera as follows:
where W represents the importance degree of the target sub-model, S represents the model bounding box size, ε represents the distance between the center of the model bounding box and the camera, and λ represents the association degree between the sub-model and the camera view frustum.
In one possible implementation manner, according to the GPU scheduling task and the rendering scheme corresponding to each sub-model, rendering is performed on each sub-model to obtain a rendering result of the lightweight three-dimensional model, including:
determining a sub-model to be loaded according to a rendering scheme corresponding to each sub-model;
determining a unique identifier corresponding to the sub-model to be loaded, executing a GPU scheduling task corresponding to the unique identifier, and rendering each sub-model to obtain a rendering result of the lightweight three-dimensional model.
In one possible embodiment, the method further comprises:
monitoring in real time whether the association degree between each sub-model and the camera view frustum has changed; if it has, regenerating an independent GPU scheduling task for each sub-model to obtain a plurality of GPU scheduling tasks corresponding to the lightweight three-dimensional model, and dynamically rendering the corresponding sub-models with the dynamic loading algorithm according to those GPU scheduling tasks to obtain a rendering result of the lightweight three-dimensional model; otherwise, continuing to monitor.
With the cloud CPU and GPU parallel rendering method provided by the invention, the rendering task is decomposed and then rendered asynchronously by the CPUs and GPUs of the cloud rendering server cluster, and the rendering result is fed back to the user terminal, which effectively improves rendering speed. The dynamic rendering method further improves rendering speed and reduces the consumption of computing resources, and transmitting the rendering result to the user terminal as streaming media improves the loading efficiency of the rendering result.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Fig. 1 is a flowchart of a cloud CPU and GPU parallel rendering method provided by the present invention.
Specific embodiments thereof have been shown by way of example in the drawings and will herein be described in more detail. These drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but to illustrate the concepts of the present application to those skilled in the art by reference to specific embodiments.
Description of the embodiments
The following description of the embodiments of the invention is provided to help those skilled in the art understand the invention; however, the invention is not limited to the scope of these embodiments, and any invention that makes use of the inventive concept falls within the spirit and scope of the invention as defined by the appended claims.
Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present invention provides a cloud CPU and GPU parallel rendering method, including:
s1, carrying out format unification and light weight on the three-dimensional model to obtain a light-weight three-dimensional model, wherein the light-weight three-dimensional model is stored in a cloud rendering server cluster.
The format unification and lightweighting of the three-dimensional model can be completed entirely on the front-end device, entirely on the cloud rendering server cluster, or split between the two. For example, the front-end device can perform a preliminary lightweighting of the three-dimensional model to compress its size and ease network transmission, and the preliminarily lightweighted model is then further lightweighted on the cloud rendering server cluster to obtain the lightweight three-dimensional model.
Two indicators measure how smooth and immersive browsing a three-dimensional model feels. The first is the dynamic characteristic: natural motion requires roughly 30 frames of graphics to be generated and displayed per second. The second is interaction delay: the system should respond to a user's interaction with newly generated graphics within 0.1 second. Both indicators depend on the speed of graphics generation, which is therefore an important bottleneck of three-dimensional browsing. Graphics generation speed depends on the hardware and software architecture used for graphics processing; modern graphics workstations benefit from the rapidly improving performance of CPUs and dedicated graphics processors, which solves part of the problem in hardware, while quickly generating high-quality graphics in software is the remaining research question. The Web online automatic analysis adopts a front-end plus back-end mode: the front end preprocesses the model locally, mainly converting it by category into a triangular-mesh "mesh" format and filtering out unnecessary model data to protect data security, and the data is then packed and uploaded to the cloud for secondary optimization and packaging into the custom B3D format.
S2, for each lightweight three-dimensional model in the cloud rendering server cluster, the lightweight three-dimensional model is divided into a plurality of sub-models by the CPU of the cloud rendering server cluster using a slicing task scheduling method, and an independent GPU scheduling task is generated for each sub-model, thereby obtaining a plurality of GPU scheduling tasks corresponding to the lightweight three-dimensional model.
Alternatively, when the format unification and lightweighting of the three-dimensional model are completed entirely on the front-end device, the slicing task scheduling method may also be executed on the front-end device. However, the front-end device cannot observe the utilization of the cloud rendering server cluster and may therefore schedule tasks poorly, so in this embodiment the GPU scheduling tasks corresponding to the lightweight three-dimensional model are preferably generated in the cloud rendering server cluster, where they better match the cluster's running state.
The uploaded model file is converted into a standard spatial three-dimensional entity model and stored in a cloud spatial relational database. Automatic division, automatic step-by-step rendering and parallel aggregation of the model are realized through a combined CPU and GPU parallel/serial pipeline. During three-dimensional browsing, objects are dynamically loaded and unloaded according to the observer's viewing angle, so that objects inside the visible range are loaded and objects outside it are unloaded; this meets the real-time drawing requirement of complex three-dimensional scenes and improves the smoothness and immersion of browsing the three-dimensional model.
S3, according to the plurality of GPU scheduling tasks corresponding to the lightweight three-dimensional model, the plurality of sub-models corresponding to the lightweight three-dimensional model are dynamically rendered with a dynamic loading algorithm to obtain a rendering result of the lightweight three-dimensional model.
After the initial rendering, the user may rotate the lightweight three-dimensional model, which changes the relationship between the sub-models and the camera view frustum. Therefore, after the rendering result of the lightweight three-dimensional model has been obtained, whether this relationship has changed must be monitored in real time, and when it changes the rendering result must be regenerated so that the user can continue to browse the three-dimensional model normally.
In one possible implementation manner, after the sub-models corresponding to the lightweight three-dimensional model have been dynamically rendered with the dynamic loading algorithm according to the corresponding GPU scheduling tasks, the method further includes: transmitting the rendering result of the lightweight three-dimensional model to the user terminal as streaming media for display.
In one possible implementation, the format unification and lightweighting of the three-dimensional model comprises a front-end data processing process and a cloud data processing process.
The front-end data processing process comprises the following steps:
The three-dimensional model to be uploaded is obtained and converted into a mesh-format file on the front-end device, thereby achieving preliminary lightweighting and a uniform format for the model to be uploaded.
The mesh-format three-dimensional model is then uploaded to the cloud rendering server cluster through the front-end device.
The cloud data processing process comprises the following steps:
The three-dimensional model to be uploaded, transmitted by the front-end device, is received by the cloud rendering server cluster to obtain the target three-dimensional model, and the target three-dimensional model is compressed with the QEM (Quadric Error Metrics) algorithm to obtain a compressed model; the compression ratio used for QEM compression is between 1:10 and 1:50.
The compressed model is converted into a B3D-format file to obtain the lightweight model.
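A minimal sketch of the cloud-side compression step, assuming the QEM simplification is performed with Open3D's quadric-decimation routine; serialization to the custom B3D format is omitted, and the ratio handling simply mirrors the 1:10 to 1:50 range stated above.

```python
import open3d as o3d

def compress_model(mesh_path: str, ratio: float = 0.05) -> o3d.geometry.TriangleMesh:
    """Compress an uploaded mesh with QEM (quadric error metric) decimation.

    ratio is the target triangle ratio; 0.02-0.1 corresponds to the
    1:50 to 1:10 compression range described above.
    """
    assert 1 / 50 <= ratio <= 1 / 10, "compression ratio must lie between 1:50 and 1:10"
    mesh = o3d.io.read_triangle_mesh(mesh_path)
    target_triangles = max(1, int(len(mesh.triangles) * ratio))
    # Open3D's quadric decimation implements Garland-Heckbert QEM simplification.
    return mesh.simplify_quadric_decimation(target_number_of_triangles=target_triangles)
```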
In one possible implementation manner, dividing the lightweight three-dimensional model into a plurality of sub-models by the CPU of the cloud rendering server cluster using a slicing task scheduling method, and generating an independent GPU scheduling task for each sub-model to obtain a plurality of GPU scheduling tasks corresponding to the lightweight three-dimensional model, includes:
and cutting the lightweight three-dimensional model into blocks through a CPU of the cloud rendering server cluster to obtain a plurality of sub-models and the size of each sub-model.
And assigning a unique identifier to each sub-model to obtain a unique identifier corresponding to each sub-model.
GPU resources are allocated to each sub-model using a sub-model size to GPU memory ratio of 1:20; the GPU resources indicate the GPU on which the sub-model runs and the amount of GPU memory it occupies.
And associating the unique identifier of the sub-model with the GPU resource to obtain the GPU scheduling task.
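A sketch of how the CPU-side slicing result might be turned into GPU scheduling tasks. The block-cutting step is assumed to have already produced the sub-model sizes; the GPU pool bookkeeping and the greedy placement policy are illustrative assumptions, while the 1:20 size-to-memory ratio follows the description above.

```python
import uuid
from dataclasses import dataclass

@dataclass
class GpuTask:
    sub_model_id: str      # unique identifier of the sub-model
    gpu_index: int         # GPU chosen to run this sub-model
    gpu_memory_bytes: int  # GPU memory reserved for rendering it

def build_gpu_tasks(sub_model_sizes, gpu_free_memory):
    """Create one independent GPU scheduling task per sub-model.

    sub_model_sizes: sub-model sizes in bytes, from block cutting.
    gpu_free_memory: mutable list of free bytes per GPU in the cluster.
    """
    tasks = []
    for size in sub_model_sizes:
        required = size * 20                      # size : GPU memory = 1 : 20
        # Greedy placement: pick the GPU with the most free memory.
        gpu = max(range(len(gpu_free_memory)), key=lambda i: gpu_free_memory[i])
        if gpu_free_memory[gpu] < required:
            raise RuntimeError("no GPU in the cluster has enough free memory")
        gpu_free_memory[gpu] -= required
        tasks.append(GpuTask(sub_model_id=str(uuid.uuid4()),
                             gpu_index=gpu,
                             gpu_memory_bytes=required))
    return tasks
```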
In one possible implementation manner, dynamically rendering the plurality of sub-models corresponding to the lightweight three-dimensional model with a dynamic loading algorithm according to the plurality of GPU scheduling tasks corresponding to the lightweight three-dimensional model, to obtain a rendering result of the lightweight three-dimensional model, includes:
and obtaining the model bounding box size of each sub-model corresponding to the lightweight three-dimensional model, the association degree between each sub-model and the camera view cone and the distance between the center of the model bounding box corresponding to each sub-model and the camera.
Determining a rendering scheme of each sub-model according to the model bounding box size of each sub-model, the association degree between each sub-model and the camera view cone and the distance between the center of the model bounding box corresponding to each sub-model and the camera, wherein the rendering scheme comprises the step of executing the GPU scheduling task of the sub-model to load or unload the sub-model.
And rendering each sub-model according to the GPU scheduling task and the rendering scheme corresponding to each sub-model to obtain a rendering result of the lightweight three-dimensional model.
At present, opening and displaying a massive model takes considerable time because of hard-disk speed and I/O limitations, and streaming loading technology is used to solve this problem. Different levels of detail are created for different parts of the model, so that the coarse macroscopic model is loaded first and further detail is loaded as the scene subsequently moves, achieving second-level display of massive models.
The model is divided according to bounding-box size: the larger, enclosing parts of the model are loaded preferentially by jumping the loading queue during display, and the related small nodes are shown gradually after most of the model's bulk has been displayed. At the same time, in combination with the viewing position, model nodes are automatically scheduled for dynamic unloading and loading, relieving graphics-card pressure and achieving smooth loading of massive models.
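A sketch of the size-first loading queue described above; modelling the "queue jumping" behaviour with a max-heap keyed on bounding-box volume is an assumption, not the patented data structure.

```python
import heapq

class LoadQueue:
    """Max-heap of sub-models keyed on bounding-box volume, so large
    (macroscopic) nodes are loaded before small detail nodes."""

    def __init__(self):
        self._heap = []

    def push(self, sub_model_id: str, bbox_volume: float) -> None:
        # heapq is a min-heap, so negate the volume to get largest-first order.
        heapq.heappush(self._heap, (-bbox_volume, sub_model_id))

    def pop_next(self):
        """Return the next sub-model to load, or None when the queue is empty."""
        if not self._heap:
            return None
        _, sub_model_id = heapq.heappop(self._heap)
        return sub_model_id
```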
In one possible implementation manner, obtaining the model bounding box size of each sub-model corresponding to the lightweight three-dimensional model, the association degree between each sub-model and the camera view frustum, and the distance between the center of the model bounding box corresponding to each sub-model and the camera includes:
the model bounding box size of each sub-model corresponding to the lightweight three-dimensional model is obtained as follows:
S=H×W×L
where S represents the model bounding box size, H represents the height of the model bounding box, W represents the width of the model bounding box, and L represents the length of the model bounding box.
Whether the sub-model is within the camera view frustum is detected; if so, the association degree between the sub-model and the camera view frustum is determined to be a first preset value, otherwise it is determined to be a second preset value. The first preset value and the second preset value are opposite numbers (one positive, one negative).
And obtaining the distance between the center of the model bounding box corresponding to each sub-model and the camera.
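A minimal sketch of computing the three quantities for one sub-model, assuming an axis-aligned bounding box and delegating the frustum test to a caller-supplied helper (hypothetical); the preset values ±1 are only an example of opposite-signed constants.

```python
import numpy as np

FIRST_PRESET, SECOND_PRESET = 1.0, -1.0   # example opposite-signed preset values

def sub_model_metrics(bbox_min, bbox_max, camera_pos, frustum_contains):
    """Return (bounding-box size S, frustum association lambda, camera distance epsilon).

    bbox_min / bbox_max: opposite corners of the sub-model's axis-aligned box.
    frustum_contains: callable testing whether the box intersects the camera
    view frustum (hypothetical helper supplied by the renderer).
    """
    bbox_min = np.asarray(bbox_min, dtype=float)
    bbox_max = np.asarray(bbox_max, dtype=float)
    h, w, l = bbox_max - bbox_min                 # height, width, length of the box
    size = float(h * w * l)                       # S = H x W x L
    association = FIRST_PRESET if frustum_contains(bbox_min, bbox_max) else SECOND_PRESET
    center = (bbox_min + bbox_max) / 2.0
    distance = float(np.linalg.norm(center - np.asarray(camera_pos, dtype=float)))
    return size, association, distance
```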
In one possible implementation manner, determining the rendering scheme for each sub-model according to its model bounding box size, its association degree with the camera view frustum, and the distance between the center of its model bounding box and the camera includes:
A1, judging from the association degree whether the sub-model lies within the camera view frustum; if so, the rendering scheme for the sub-model is determined to be executing its GPU scheduling task and loading the sub-model, otherwise the process proceeds to step A2.
A2, the sub-models that have not yet been assigned a rendering scheme are collected to obtain the target sub-models.
A3, the importance degree of each target sub-model is obtained from its model bounding box size, its association degree with the camera view frustum, and the distance between the center of its model bounding box and the camera.
A4, the target sub-models are sorted by importance degree from largest to smallest; the rendering scheme for the top N sub-models is determined to be executing their GPU scheduling tasks and loading them, and the rendering scheme for the remaining target sub-models is determined to be unloading them.
The camera view frustum corresponds exactly to the user's visible range, in which only part of the three-dimensional model can be seen, so loading the sub-models inside the frustum is sufficient for the user to browse the model. However, some sub-models that will need to be loaded lie outside the frustum, so the sub-models inside the frustum are loaded first and those outside it afterwards, which improves the user's browsing efficiency.
When browsing the three-dimensional model, the user may rotate it. If only the sub-models inside the view frustum were loaded, loading the sub-models outside the frustum during rotation would take some time, so part of the sub-models outside the frustum are also loaded to form a transition region. When the user rotates the three-dimensional model, the sub-models about to enter the view frustum can be loaded according to the direction of rotation.
Optionally, the sub-models inside the view frustum can be ordered by bounding-box size from large to small and the sub-models with large bounding boxes loaded preferentially, so that the user first sees the overall shape of the lightweight model and the detail parts are loaded afterwards, improving browsing efficiency. When a sub-model leaves the view frustum, it can be unloaded to reduce GPU resource consumption. A sketch of this selection procedure is given below.
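A sketch of the scheme selection in steps A1-A4, assuming the per-sub-model quantities have already been computed and the importance function is supplied separately (see the sketch after the next paragraph); the parameter n_extra stands for N, the number of out-of-frustum sub-models kept loaded as the transition region.

```python
def choose_rendering_schemes(metrics, importance, n_extra):
    """Assign "load" or "unload" to every sub-model.

    metrics: dict sub_model_id -> (size, association, distance).
    importance: callable (size, association, distance) -> float.
    n_extra: N, how many out-of-frustum sub-models are still loaded.
    """
    schemes = {}
    outside = []
    for sid, (size, assoc, dist) in metrics.items():
        if assoc > 0:                      # A1: inside the view frustum
            schemes[sid] = "load"
        else:                              # A2: defer to the importance ranking
            outside.append((importance(size, assoc, dist), sid))
    # A3/A4: rank the remaining sub-models by importance, keep the top N loaded.
    outside.sort(reverse=True)
    for rank, (_, sid) in enumerate(outside):
        schemes[sid] = "load" if rank < n_extra else "unload"
    return schemes
```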
In one possible implementation manner, the importance degree of a target sub-model is obtained from its model bounding box size, its association degree with the camera view frustum, and the distance between the center of its model bounding box and the camera as follows:
where W represents the importance degree of the target sub-model, S represents the model bounding box size, ε represents the distance between the center of the model bounding box and the camera, and λ represents the association degree between the sub-model and the camera view frustum.
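The importance formula itself is not reproduced in this text, so the sketch below uses an assumed form, W = S·λ/ε, that merely matches the symbol definitions above (importance grows with bounding-box size and association and falls with camera distance); the actual patented expression may differ.

```python
def importance(size: float, association: float, distance: float, eps: float = 1e-6) -> float:
    """Assumed importance degree W = S * lambda / epsilon.

    This is only a guess consistent with the symbol definitions above:
    larger, in-frustum sub-models score higher, distant sub-models lower.
    """
    return size * association / max(distance, eps)
```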
In one possible implementation manner, according to the GPU scheduling task and the rendering scheme corresponding to each sub-model, rendering is performed on each sub-model to obtain a rendering result of the lightweight three-dimensional model, including:
and determining the sub-model to be loaded according to the rendering scheme corresponding to each sub-model.
Determining a unique identifier corresponding to the sub-model to be loaded, executing a GPU scheduling task corresponding to the unique identifier, and rendering each sub-model to obtain a rendering result of the lightweight three-dimensional model.
In one possible embodiment, the method further comprises:
and monitoring whether the association degree between each sub-model and the camera view cone is changed or not in real time, if so, generating an independent GPU scheduling task for each sub-model again to obtain a plurality of GPU scheduling tasks corresponding to the lightweight three-dimensional model. And according to the plurality of GPU scheduling tasks corresponding to the lightweight three-dimensional model, dynamically rendering the plurality of sub-models corresponding to the lightweight three-dimensional model by adopting a dynamic loading algorithm to obtain a rendering result of the lightweight three-dimensional model, otherwise, continuously monitoring.
Optionally, the three-dimensional models are centrally managed and provided to the foreground system as micro-services; after registration through the sharing service, a model preview function is provided.
With the cloud CPU and GPU parallel rendering method provided by the invention, the rendering task is decomposed and then rendered asynchronously by the CPUs and GPUs of the cloud rendering server cluster, and the rendering result is fed back to the user terminal, which effectively improves rendering speed. The dynamic rendering method further improves rendering speed and reduces the consumption of computing resources, and transmitting the rendering result to the user terminal as streaming media improves the loading efficiency of the rendering result.
It should be noted that any method employing the inventive concept falls within the scope of the invention. Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention that follow its general principles, including such departures from the present disclosure as come within known or customary practice in the art.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the invention is limited only by the appended claims.
Claims (8)
1. A cloud CPU and GPU parallel rendering method is characterized by comprising the following steps:
unifying the format of the three-dimensional model and lightweighting it to obtain a lightweight three-dimensional model, wherein the lightweight three-dimensional model is stored in a cloud rendering server cluster;
for each lightweight three-dimensional model in the cloud rendering server cluster, dividing the model into a plurality of sub-models by the CPU of the cloud rendering server cluster using a slicing task scheduling method, and generating an independent GPU scheduling task for each sub-model, thereby obtaining a plurality of GPU scheduling tasks corresponding to the lightweight three-dimensional model;
according to the plurality of GPU scheduling tasks corresponding to the lightweight three-dimensional model, dynamically rendering the corresponding sub-models with a dynamic loading algorithm to obtain a rendering result of the lightweight three-dimensional model;
wherein dividing the lightweight three-dimensional model into a plurality of sub-models by the CPU of the cloud rendering server cluster using a slicing task scheduling method, and generating an independent GPU scheduling task for each sub-model to obtain a plurality of GPU scheduling tasks corresponding to the lightweight three-dimensional model, comprises the following steps:
cutting the lightweight three-dimensional model into blocks through a CPU of the cloud rendering server cluster to obtain a plurality of sub-models and the size of each sub-model;
assigning a unique identifier to each sub-model to obtain a unique identifier corresponding to each sub-model;
allocating GPU resources to each sub-model using a sub-model size to GPU memory ratio of 1:20, wherein the GPU resources indicate the GPU on which the sub-model runs and the amount of GPU memory it occupies;
associating the unique identifier of the sub-model with the GPU resource to obtain a GPU scheduling task;
and wherein dynamically rendering the plurality of sub-models corresponding to the lightweight three-dimensional model with a dynamic loading algorithm according to the plurality of GPU scheduling tasks corresponding to the lightweight three-dimensional model, to obtain a rendering result of the lightweight three-dimensional model, comprises the following steps:
obtaining the model bounding box size of each sub-model corresponding to the lightweight three-dimensional model, the association degree between each sub-model and the camera view frustum, and the distance between the center of the model bounding box corresponding to each sub-model and the camera;
determining a rendering scheme for each sub-model according to its model bounding box size, its association degree with the camera view frustum, and the distance between the center of its model bounding box and the camera, wherein the rendering scheme is either to execute the sub-model's GPU scheduling task and load the sub-model, or to unload the sub-model;
and rendering each sub-model according to its GPU scheduling task and rendering scheme to obtain a rendering result of the lightweight three-dimensional model.
2. The cloud CPU and GPU parallel rendering method of claim 1, wherein after the plurality of sub-models corresponding to the lightweight three-dimensional model have been dynamically rendered with the dynamic loading algorithm according to the corresponding GPU scheduling tasks, the method further comprises: transmitting the rendering result of the lightweight three-dimensional model to the user terminal as streaming media for display.
3. The cloud CPU and GPU parallel rendering method of claim 1, wherein the format unification and lightweighting of the three-dimensional model comprises a front-end data processing process and a cloud data processing process;
the front-end data processing process comprises the following steps:
obtaining the three-dimensional model to be uploaded, and converting it into a mesh-format file on the front-end device, thereby achieving preliminary lightweighting and a uniform format for the model to be uploaded;
uploading the three-dimensional model to be uploaded in the mesh format to a cloud rendering server cluster through front-end equipment;
the cloud data processing process comprises the following steps:
receiving the three-dimensional model to be uploaded transmitted by the front-end equipment through the cloud rendering server cluster to obtain a target three-dimensional model, and compressing the target three-dimensional model by adopting a QEM algorithm to obtain a compressed model; the compression ratio when the QEM algorithm is adopted for compression is between 1:10 and 1:50;
and converting the compressed model into a B3D-format file to obtain the lightweight model.
4. The cloud CPU and GPU parallel rendering method of claim 1, wherein obtaining the model bounding box size of each sub-model corresponding to the lightweight three-dimensional model, the association degree between each sub-model and the camera view frustum, and the distance between the center of the model bounding box corresponding to each sub-model and the camera comprises:
obtaining the model bounding box size of each sub-model corresponding to the lightweight three-dimensional model as follows:
S=H×W×L
wherein S represents the model bounding box size, H represents the height of the model bounding box, W represents the width of the model bounding box, and L represents the length of the model bounding box;
detecting whether the sub-model is within the camera view frustum; if so, determining that the association degree between the sub-model and the camera view frustum is a first preset value, otherwise determining that it is a second preset value; the first preset value and the second preset value are opposite numbers (one positive, one negative);
and obtaining the distance between the center of the model bounding box corresponding to each sub-model and the camera.
5. The cloud CPU and GPU parallel rendering method of claim 4, wherein determining the rendering scheme for each sub-model according to its model bounding box size, its association degree with the camera view frustum, and the distance between the center of its model bounding box and the camera comprises:
A1, judging from the association degree whether the sub-model lies within the camera view frustum; if so, determining that the rendering scheme for the sub-model is to execute its GPU scheduling task and load the sub-model, otherwise proceeding to step A2;
A2, collecting the sub-models that have not yet been assigned a rendering scheme to obtain the target sub-models;
A3, obtaining the importance degree of each target sub-model from its model bounding box size, its association degree with the camera view frustum, and the distance between the center of its model bounding box and the camera;
A4, sorting the target sub-models by importance degree from largest to smallest, determining that the rendering scheme for the top N sub-models is to execute their GPU scheduling tasks and load them, and determining that the rendering scheme for the remaining target sub-models is to unload them.
6. The cloud CPU and GPU parallel rendering method of claim 5, wherein the importance degree of a target sub-model is obtained from its model bounding box size, its association degree with the camera view frustum, and the distance between the center of its model bounding box and the camera as follows:
wherein W represents the importance degree of the target sub-model, S represents the model bounding box size, ε represents the distance between the center of the model bounding box and the camera, and λ represents the association degree between the sub-model and the camera view frustum.
7. The cloud CPU and GPU parallel rendering method of claim 1, wherein rendering each sub-model according to the GPU scheduling task and the rendering scheme corresponding to each sub-model to obtain a rendering result of the lightweight three-dimensional model comprises:
determining a sub-model to be loaded according to a rendering scheme corresponding to each sub-model;
determining a unique identifier corresponding to the sub-model to be loaded, executing a GPU scheduling task corresponding to the unique identifier, and rendering each sub-model to obtain a rendering result of the lightweight three-dimensional model.
8. The cloud CPU and GPU parallel rendering method of any of claims 4-7, further comprising:
monitoring in real time whether the association degree between each sub-model and the camera view frustum has changed; if it has, regenerating an independent GPU scheduling task for each sub-model to obtain a plurality of GPU scheduling tasks corresponding to the lightweight three-dimensional model, and dynamically rendering the corresponding sub-models with the dynamic loading algorithm according to those GPU scheduling tasks to obtain a rendering result of the lightweight three-dimensional model; otherwise, continuing to monitor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310285820.9A CN116433818B (en) | 2023-03-22 | 2023-03-22 | Cloud CPU and GPU parallel rendering method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310285820.9A CN116433818B (en) | 2023-03-22 | 2023-03-22 | Cloud CPU and GPU parallel rendering method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116433818A (en) | 2023-07-14 |
CN116433818B (en) | 2024-04-16 |
Family
ID=87086436
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310285820.9A Active CN116433818B (en) | 2023-03-22 | 2023-03-22 | Cloud CPU and GPU parallel rendering method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116433818B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117934687A (en) * | 2024-01-25 | 2024-04-26 | 中科世通亨奇(北京)科技有限公司 | Three-dimensional model rendering optimization method, system, electronic equipment and storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9728001B2 (en) * | 2011-09-23 | 2017-08-08 | Real-Scan, Inc. | Processing and rendering of large image files |
TWI649656B (en) * | 2013-12-26 | 2019-02-01 | 日商史克威爾 艾尼克斯控股公司 | Rendering system, control method and storage medium |
- 2023-03-22: Application CN202310285820.9A filed (CN); granted as CN116433818B, status Active
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20050122046A (en) * | 2004-06-23 | 2005-12-28 | 엔에이치엔(주) | Method and system for loading of the image resource |
CN103268253A (en) * | 2012-02-24 | 2013-08-28 | 苏州蓝海彤翔系统科技有限公司 | Intelligent scheduling management method for multi-scale parallel rendering jobs |
CN104952096A (en) * | 2014-03-31 | 2015-09-30 | 中国电信股份有限公司 | CPU and GPU hybrid cloud rendering method, device and system |
CN105263050A (en) * | 2015-11-04 | 2016-01-20 | 山东大学 | Mobile terminal real-time rendering system and method based on cloud platform |
CN110751712A (en) * | 2019-10-22 | 2020-02-04 | 中设数字技术股份有限公司 | Online three-dimensional rendering technology and system based on cloud platform |
WO2021228031A1 (en) * | 2020-05-09 | 2021-11-18 | 华为技术有限公司 | Rendering method, apparatus and system |
WO2022089592A1 (en) * | 2020-10-30 | 2022-05-05 | 华为技术有限公司 | Graphics rendering method and related device thereof |
CN112270756A (en) * | 2020-11-24 | 2021-01-26 | 山东汇颐信息技术有限公司 | Data rendering method applied to BIM model file |
CN112529994A (en) * | 2020-12-29 | 2021-03-19 | 深圳图为技术有限公司 | Three-dimensional model graph rendering method, electronic device and readable storage medium thereof |
CN112933599A (en) * | 2021-04-08 | 2021-06-11 | 腾讯科技(深圳)有限公司 | Three-dimensional model rendering method, device, equipment and storage medium |
CN115797589A (en) * | 2022-11-14 | 2023-03-14 | 北京软通智慧科技有限公司 | Model rendering method and device, electronic equipment and storage medium |
CN115802076A (en) * | 2022-11-15 | 2023-03-14 | 上海禹创工程顾问有限公司 | Three-dimensional model distributed cloud rendering method and system and electronic equipment |
Non-Patent Citations (2)
Title |
---|
Visualizing dynamic geosciences phenomena using an octree-based view-dependent LOD strategy within virtual globes; Li J; Computers & Geosciences; 2011-12-31; abstract *
Cloud-based three-dimensional visualization framework for urban geology using big data technology; Song Yue; Gao Zhenji; Wang Peng; China Mining Magazine; 2020-06-15 (06); full text *
Also Published As
Publication number | Publication date |
---|---|
CN116433818A (en) | 2023-07-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
DE102018130037B4 (en) | DYNAMIC JITTER AND LATENCY TOLERANT RENDERING | |
US9582921B1 (en) | Crowd-sourced video rendering system | |
CN110751712A (en) | Online three-dimensional rendering technology and system based on cloud platform | |
EP3264370A1 (en) | Media content rendering method, user equipment, and system | |
CN116433818B (en) | Cloud CPU and GPU parallel rendering method | |
CN105263050A (en) | Mobile terminal real-time rendering system and method based on cloud platform | |
CN103021023A (en) | Three-dimensional scene construction method based on browser | |
WO2022095714A1 (en) | Image rendering processing method and apparatus, storage medium, and electronic device | |
CN111476851A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN109460297A (en) | A kind of edge cloud game caching and resource regulating method | |
CN112581578A (en) | Cloud rendering system based on software definition | |
US9327199B2 (en) | Multi-tenancy for cloud gaming servers | |
CN112316433A (en) | Game picture rendering method, device, server and storage medium | |
CN117014655A (en) | Video rendering method, device, equipment and storage medium | |
JP7472298B2 (en) | Placement of immersive media and delivery of immersive media to heterogeneous client endpoints | |
JP7448677B2 (en) | Methods and devices and computer programs for streaming immersive media | |
CN105701850A (en) | Real-time method for collaborative animation | |
Lluch et al. | Interactive three-dimensional rendering on mobile computer devices | |
CN109448092B (en) | Load balancing cluster rendering method based on dynamic task granularity | |
Liu et al. | Design and implementation of distributed rendering system | |
JP7487331B2 (en) | Method for streaming immersive media, and computer system and computer program therefor | |
US20220261946A1 (en) | Cloud-client rendering method based on adaptive virtualized rendering pipeline | |
Debattista et al. | Accelerating the Irradiance Cache through Parallel Component-Based Rendering. | |
Stein et al. | hare3d-rendering large models in the browser | |
Nam et al. | Performance Comparison of 3D File Formats on a Mobile Web Browser |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||