CN104052803A - Decentralized distributed rendering method and system

Decentralized distributed rendering method and system

Info

Publication number
CN104052803A
Authority
CN
China
Prior art keywords
server
material file
rendering
render
idle
Prior art date
Legal status
Pending
Application number
CN201410252493.8A
Other languages
Chinese (zh)
Inventor
陈远磊
都政
井革新
李健来
熊超超
靳绍巍
罗文龙
Current Assignee
Shenzhen Cloud Computing Center Co Ltd
NATIONAL SUPERCOMPUTING CENTER IN SHENZHEN (SHENZHEN CLOUD COMPUTING CENTER)
Original Assignee
Shenzhen Cloud Computing Center Co Ltd
NATIONAL SUPERCOMPUTING CENTER IN SHENZHEN (SHENZHEN CLOUD COMPUTING CENTER)
Priority date
Filing date
Publication date
Application filed by Shenzhen Cloud Computing Center Co Ltd and NATIONAL SUPERCOMPUTING CENTER IN SHENZHEN (SHENZHEN CLOUD COMPUTING CENTER)
Priority to CN201410252493.8A
Publication of CN104052803A
Legal status: Pending (current)


Abstract

The invention discloses a decentralized distributed rendering method and system. The system comprises multiple clients (100), a master server (200) and multiple rendering servers (300). Each client (100) submits a rendering task to the master server (200) and uploads the corresponding material file to the master server (200). The master server (200) divides each rendering task into multiple subtasks, monitors the working states of the multiple rendering servers (300), assigns the rendering subtasks to a fixed number of idle rendering servers (300), and provides the material file download service. After determining that subtask assignment, subtask rendering and result-file uploading have all been completed, the master server (200) reads the result files through the result-file storage paths recorded in a database (205) and returns them to the clients (100).

Description

Decentralized distributed rendering method and rendering system
Technical field
The present invention relates to the field of cloud rendering, and more particularly to a decentralized distributed rendering method and rendering system.
Background technology
Graphics professionals' ever-rising demands on rendering quality are driving the development of cloud rendering. Rendering systems based on the client/server model have gradually attracted attention and have begun to be promoted and applied in some fields. However, such rendering systems are imperfect and suffer from the following defects:
1) The master server bears a heavy load. In such a rendering system the master server acts as the only file download center, and every rendering node must download the files required for rendering, for example rendering scenes, texture maps, textures and materials, from the master server, so each rendering node is highly dependent on the master server.
2) Such a rendering system is prone to network congestion. Because every rendering node must download the required files from the master server, when the number of rendering nodes is large and the files are large, the network pressure on the master server increases, congestion easily occurs on the network between the rendering nodes and the master server, and network outages are easily caused.
3) Network resource utilization is low. Once a rendering node has finished downloading the files from the master server, the network stays idle while the node renders, so network resource utilization is low.
4) System stability is poor. Because the master server is the sole command center, its workload is heavy during peak periods of client task submission and it is prone to crashing, which can paralyse the whole rendering system.
5) System scalability is poor. Increasing the number of rendering nodes puts even more pressure on the master server; as long as the master server's performance bottleneck and the network bandwidth problem cannot be resolved, the expansion of rendering nodes is also restricted.
Summary of the invention
The technical problem to be solved by the present invention is to provide, in view of the above defects of the prior art, a decentralized distributed rendering method and rendering system.
The technical solution adopted by the present invention is to construct a decentralized distributed rendering method comprising the following steps:
S1) receiving a rendering-task material file and a rendering task request sent by a client, dividing the rendering task into M sequential subtasks, and generating rendering task assignment information;
S2) monitoring the working states of the registered rendering servers, randomly selecting T idle rendering servers (R1, R2, ..., RT) from the resulting idle rendering server list, adding the information of these T idle rendering servers (R1, R2, ..., RT) to the master server material file download queue, and assigning one rendering subtask to each of the T idle rendering servers (R1, R2, ..., RT);
S3) tracking the material file download progress of the T rendering servers (R1, R2, ..., RT); whenever any rendering server RO among the T rendering servers (R1, R2, ..., RT) is determined to have finished downloading the material file, adding the information of another idle rendering server (RT+1) from the idle rendering server list to the master server material file download queue and assigning the next rendering subtask to this additional idle rendering server (RT+1); repeating the above operations until subtask assignment, subtask rendering and result-file uploading have all been completed, then proceeding to step S4;
S4) reading the result files and returning the read result files to the client.
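The following is a minimal, illustrative Python sketch (not part of the claimed subject matter) of the scheduling loop described in steps S1 to S4. All names (RenderServer, split_into_subtasks, render, the fixed value T = 3) are hypothetical stand-ins, and downloading and rendering are reduced to stubs.

```python
import random
from dataclasses import dataclass

T = 3  # fixed number of rendering servers allowed to download from the master at once

@dataclass
class RenderServer:
    name: str
    idle: bool = True

def split_into_subtasks(total_frames, frames_per_subtask=10):
    # S1: divide the task frame by frame into M sequential, numbered subtasks
    return [list(range(s, min(s + frames_per_subtask, total_frames)))
            for s in range(0, total_frames, frames_per_subtask)]

def render(server, frames):
    # stub for rendering one subtask; returns a "result file" name
    return f"{server.name}:frames{frames[0]}-{frames[-1]}"

def schedule(servers, total_frames):
    pending = list(enumerate(split_into_subtasks(total_frames)))  # S1
    download_queue = []   # master server material file download queue
    assignment = {}       # server name -> (subtask number, frames)
    results = {}

    def admit(n):
        # S2/S3: move up to n randomly chosen idle servers into the download
        # queue, assigning each the next sequential subtask
        idle = [s for s in servers if s.idle]
        for server in random.sample(idle, min(n, len(idle), len(pending))):
            number, frames = pending.pop(0)
            server.idle = False
            download_queue.append(server)
            assignment[server.name] = (number, frames)

    admit(T)                                  # S2
    while download_queue:                     # S3
        finished = download_queue.pop(0)      # this server completed its download
        number, frames = assignment.pop(finished.name)
        results[number] = render(finished, frames)
        finished.idle = True
        admit(1)        # keep the number of concurrent master downloads at T
    return [results[k] for k in sorted(results)]   # S4

if __name__ == "__main__":
    farm = [RenderServer(f"R{i}") for i in range(1, 9)]
    print(schedule(farm, total_frames=50))
```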
In the above decentralized distributed rendering method of the present invention, the method further comprises, before step S1, the following step:
S0) performing client user registration and rendering server registration at the master server, and storing the user registration information and the rendering server registration information in the database of the master server.
In the above decentralized distributed rendering method of the present invention, the method further comprises, between step S0 and step S1, the following step:
S01) upon receiving a rendering task request from a client that includes user registration information and a material file, determining the user class according to the user registration information, and queueing the rendering task submitted by the client according to the user class together with the task submission time.
In the above decentralized distributed rendering method of the present invention, the step in step S1 of dividing the rendering task submitted by the client into M sequential rendering subtasks and generating rendering task assignment information specifically comprises:
S11) dividing the rendering task into M sequential subtasks using the frame as the division unit, numbering the M sequential rendering subtasks in order, and generating M pieces of rendering task assignment information;
S12) saving the material file into the second storage module of the master server, and saving the material file download path and the rendering task assignment information into the database.
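A minimal sketch of steps S11 and S12, assuming a task is described only by its frame range; the dictionary-based "database" and all field names are illustrative, not prescribed by the patent.

```python
def divide_rendering_task(task_id, first_frame, last_frame, frames_per_subtask):
    """S11: split [first_frame, last_frame] into M sequential subtasks, numbered in order."""
    assignment_info = []
    number = 1
    for start in range(first_frame, last_frame + 1, frames_per_subtask):
        end = min(start + frames_per_subtask - 1, last_frame)
        assignment_info.append({
            "task_id": task_id,
            "subtask_number": number,      # sequential numbering
            "frame_range": (start, end),
            "status": "unassigned",
        })
        number += 1
    return assignment_info

def register_task(db, task_id, material_path, assignment_info):
    """S12: record the material file download path and the assignment info in the database."""
    db.setdefault("material_paths", {})[task_id] = material_path
    db.setdefault("assignments", {})[task_id] = assignment_info

# example: a 100-frame task split into subtasks of 24 frames each
db = {}
info = divide_rendering_task("job-001", 1, 100, frames_per_subtask=24)
register_task(db, "job-001", "/storage2/job-001/scene.zip", info)
print(len(info), info[0])   # 5 subtasks; the first covers frames 1-24
```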
In the above decentralized distributed rendering method of the present invention, step S2 further comprises: instructing the T idle rendering servers (R1, R2, ..., RT) to obtain the relevant rendering task assignment information and the material file download path from the database, and to download the material file from the second storage module via the material file download path.
In the above decentralized distributed rendering method of the present invention, step S3 further comprises: removing from the master server material file download queue the rendering server RO, among the T rendering servers (R1, R2, ..., RT), that has finished downloading the material file from the master server, adding it to the material file download source queue, and at the same time instructing this rendering server RO, which has finished downloading the material file from the master server, to execute the rendering subtask corresponding to its rendering task assignment information, to upload the generated result file to the second storage module of the master server, and to store the result file storage path in the database;
Step S4 further comprises: looking up the result file storage path in the database, and reading the result file from the second storage module via the result file storage path.
In the above decentralized distributed rendering method of the present invention, step S3 further comprises: after removing the rendering server RO that has finished downloading the material file from the master server from the master server material file download queue, selecting another T idle rendering servers (R(O+1), R(O+2), ..., R(O+T)) from the idle rendering server list, assigning one rendering subtask to each of these additional T idle rendering servers (R(O+1), R(O+2), ..., R(O+T)), instructing these additional T idle rendering servers (R(O+1), R(O+2), ..., R(O+T)) to download the material file from the rendering server RO that has finished downloading the material file from the master server, and at the same time updating the rendering task assignment progress (an illustrative sketch of this hand-off follows below).
In the above decentralized distributed rendering method of the present invention, step S3 further comprises: upon determining that the network between the rendering server RO that has finished downloading the material file from the master server and the additional T idle rendering servers (R(O+1), R(O+2), ..., R(O+T)) is interrupted, or that the network quality does not meet a predetermined requirement, removing the rendering server RO that has finished downloading the material file from the master server from the material file download source queue;
and, upon determining that another rendering server has finished downloading the material file from the master server or from a rendering server and has been added to the material file download source queue, designating the additional T idle rendering servers (R(O+1), R(O+2), ..., R(O+T)) to download the material file from the rendering server newly added to the material file download source queue.
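Below is a hedged sketch of the hand-off just described: a server that finishes its master download joins the material file download source queue, another T idle servers are pointed at it, and a network failure triggers the fallback to a different source or back to the master. The class, its attribute names and the simple "first remaining source" fallback are assumptions for illustration only.

```python
class DownloadCoordinator:
    """Illustrative bookkeeping for the two queues described in step S3."""

    def __init__(self, T):
        self.T = T
        self.master_download_queue = []   # servers downloading from the master server
        self.source_queue = []            # servers that can now serve the material file

    def on_master_download_finished(self, server, idle_servers, assign_subtask):
        # remove the finished server from the master download queue ...
        self.master_download_queue.remove(server)
        # ... and make it a new material file download source
        self.source_queue.append(server)
        # then point another T idle servers at this new source (the patent picks
        # them at random from the idle list; the selection policy is abstracted here)
        peers = idle_servers[:self.T]
        for peer in peers:
            assign_subtask(peer)              # one rendering subtask per new downloader
            peer.download_source = server     # fetch the material file from RO
        return peers

    def on_source_unreachable(self, bad_source, downloaders):
        # network to RO interrupted or too slow: drop it from the source queue and
        # re-point its downloaders at another available source (or back to the master)
        if bad_source in self.source_queue:
            self.source_queue.remove(bad_source)
        fallback = self.source_queue[0] if self.source_queue else "master"
        for peer in downloaders:
            peer.download_source = fallback
        return fallback

class Peer:
    def __init__(self, name):
        self.name = name
        self.download_source = None

coord = DownloadCoordinator(T=2)
r1 = Peer("R1")
coord.master_download_queue.append(r1)
new_downloaders = coord.on_master_download_finished(
    r1, [Peer("R4"), Peer("R5"), Peer("R6")], assign_subtask=lambda p: None)
coord.on_source_unreachable(r1, new_downloaders)   # falls back to the master server
```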
The present invention also constructs a decentralized distributed rendering method comprising the following steps:
S1') upon receiving a rendering task submitted by a client, searching all registered rendering servers for a master rendering server whose storage space meets a preset requirement, designating this master rendering server to receive the rendering-task material file submitted by the client, adding the information of this rendering server to the material file download source queue, and at the same time storing the rendering task parameter settings and the material file storage path information in the database of the master server;
S2') dividing the rendering task into M' sequential subtasks via the master server, and generating rendering task assignment information;
S3') periodically monitoring the working states of all registered rendering servers except the master rendering server, randomly selecting T' idle rendering servers (R1', R2', ..., RT') from the resulting idle rendering server list, adding the information of these T' idle rendering servers (R1', R2', ..., RT') to the rendering server material file download queue, and assigning one rendering subtask to each of the T' idle rendering servers (R1', R2', ..., RT');
S4') tracking the material file download progress of the T' rendering servers (R1', R2', ..., RT'); whenever any rendering server RO' among the T' rendering servers (R1', R2', ..., RT') is determined to have finished downloading the material file, adding the information of another idle rendering server (RT+1') from the idle rendering server list to the rendering server material file download queue and assigning the next rendering subtask to this additional idle rendering server (RT+1'); repeating the above operations until subtask assignment, subtask rendering and result-file uploading have all been completed, then proceeding to step S5';
S5') reading the result files and returning the read result files to the client.
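A small sketch of step S1' under the assumption that each registered rendering server reports its free storage space; the dictionary fields, the size threshold and the "most free space" tie-break are illustrative assumptions.

```python
def pick_master_render_server(registered_servers, required_free_bytes):
    """S1': among all registered rendering servers, pick one whose free storage
    meets the preset requirement so it can hold the uploaded material file."""
    candidates = [s for s in registered_servers if s["free_bytes"] >= required_free_bytes]
    if not candidates:
        return None   # no suitable server; the material would stay on the master server
    # any qualifying candidate works; here we simply take the one with the most free space
    return max(candidates, key=lambda s: s["free_bytes"])

servers = [
    {"name": "R1", "free_bytes": 20 * 2**30},
    {"name": "R2", "free_bytes": 500 * 2**30},
]
master_render = pick_master_render_server(servers, required_free_bytes=100 * 2**30)
print(master_render["name"])   # R2 receives the material file and joins the source queue
```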
In the above decentralized distributed rendering method of the present invention, the step in step S2' of dividing the rendering task into M' sequential subtasks and generating rendering task assignment information specifically comprises:
S21') dividing the rendering task into M' sequential subtasks using the frame as the division unit, numbering the M' sequential subtasks in order, and generating M' pieces of rendering task assignment information;
S22') saving the material file into the third storage module of the master rendering server, and saving the material file download path and the rendering task assignment information into the database.
In the above decentralized distributed rendering method of the present invention, step S2' further comprises: instructing the T' idle rendering servers (R1', R2', ..., RT') to obtain the relevant rendering task assignment information and the material file download path from the database, and to download the material file from the third storage module via the material file download path.
In the above decentralized distributed rendering method of the present invention, step S3' further comprises: removing from the master server material file download queue the rendering server RO', among the T' rendering servers (R1', R2', ..., RT'), that has finished downloading the material file, adding it to the material file download source queue, and at the same time instructing this rendering server RO' to execute the rendering subtask corresponding to its rendering task assignment information, to upload the generated result file to the second storage module of the master server, and to store the result file storage path in the database;
Step S4' further comprises: looking up the result file storage path in the database, and reading the result file from the second storage module via the result file storage path.
In the above decentralized distributed rendering method of the present invention, step S3' further comprises: after removing the rendering server RO' that has finished downloading the material file from the rendering server material file download queue, selecting another T' idle rendering servers (R(O+1)', R(O+2)', ..., R(O+T)') from the idle rendering server list, assigning one rendering subtask to each of these additional T' idle rendering servers (R(O+1)', R(O+2)', ..., R(O+T)'), instructing these additional T' idle rendering servers (R(O+1)', R(O+2)', ..., R(O+T)') to download the material file from the rendering server RO' that has finished downloading the material file, and at the same time updating the rendering task assignment progress.
The present invention also constructs a decentralized distributed rendering system comprising multiple clients, a master server communicatively connected with the multiple clients, and multiple rendering servers communicatively connected with said master server;
each of said clients is configured to submit a rendering task request to said master server and to upload the material file required for rendering to said master server;
said master server is configured to divide the rendering task into M sequential subtasks and to generate rendering task assignment information;
said master server is further configured to periodically monitor the working states of said multiple rendering servers, to randomly select T idle rendering servers (R1, R2, ..., RT) from the resulting idle rendering server list, to add the information of these T idle rendering servers (R1, R2, ..., RT) to the master server material file download queue, and to assign one rendering subtask to each of the T idle rendering servers (R1, R2, ..., RT);
said T idle rendering servers (R1, R2, ..., RT) are configured to obtain the material file download path and the task assignment information from the database of said master server, to download the material file via the material file download path, to execute the rendering subtasks corresponding to the task assignment information, and to upload the resulting rendered files to the second storage module of said master server;
said master server is further configured to track the material file download progress of said T rendering servers (R1, R2, ..., RT) and, whenever any rendering server RO among the T rendering servers (R1, R2, ..., RT) is determined to have finished downloading the material file, to add the information of another idle rendering server RT+1 from the idle rendering server list to the master server material file download queue and to assign the next rendering subtask to this idle rendering server RT+1;
said master server is further configured, upon determining that subtask assignment, subtask rendering and result-file uploading have all been completed, to look up the result file storage path in its database, to read the result file from its second storage module via the result file storage path, and to return the read result file to the client.
The present invention also constructs a decentralized distributed rendering system comprising multiple clients, a master server communicatively connected with the multiple clients, and multiple rendering servers communicatively connected with said master server;
each of said clients is configured to submit a rendering task request to said master server;
said master server is configured, upon receiving a rendering task from said client, to search all registered rendering servers for a master rendering server whose storage space meets a preset requirement, to designate said master rendering server to receive the material file required for rendering submitted by said client, and to add the information of said master rendering server to the material file download source queue;
said master server is further configured to divide the rendering task submitted by said client into M' sequential subtasks and to generate rendering task assignment information;
said master server is further configured to periodically monitor the working states of all registered rendering servers except said master rendering server, to randomly select T' idle rendering servers (R1', R2', ..., RT') from the resulting idle rendering server list, to add the information of these T' idle rendering servers (R1', R2', ..., RT') to the rendering server material file download queue, and to assign one rendering subtask to each of the T' idle rendering servers (R1', R2', ..., RT');
said master server is further configured to track the material file download progress of the T' rendering servers (R1', R2', ..., RT') and, whenever any rendering server RO' among the T' rendering servers (R1', R2', ..., RT') is determined to have finished downloading the material file, to add the information of another idle rendering server RT+1' from the idle rendering server list to the rendering server material file download queue and to assign the next rendering subtask to this additional idle rendering server RT+1';
said master server is further configured, upon determining that subtask assignment, subtask rendering and result-file uploading have all been completed, to look up the result file storage path stored in the database, to read the result file stored in the second storage module via this result file storage path, and to return the read result file to the client.
Because the decentralized distributed rendering method and system of the present invention adopt a decentralized distributed rendering system architecture, they overcome the prior art defects in which the master server of a client/server rendering system, as the sole file access and download center, carries a heavy workload, the client/server rendering system demands high network bandwidth, and the master server is prone to crashes or network outages during peak periods of client task submission. The invention thereby reduces the workload of the master server and the dependence of each rendering server on the master server, improves system network utilization and data transfer efficiency, improves system stability and processing efficiency, shortens the server-side response time, and enhances the user experience.
Brief description of the drawings
The invention is further described below in conjunction with the drawings and embodiments, in which:
Fig. 1 is a structural schematic diagram of the decentralized distributed rendering system provided by a preferred embodiment of the present invention;
Fig. 2 is a structural block diagram of any client of the decentralized distributed rendering system shown in Fig. 1;
Fig. 3 is a structural block diagram of the master server of the decentralized distributed rendering system shown in Fig. 1;
Fig. 4 is a structural block diagram of any rendering server of the decentralized distributed rendering system shown in Fig. 1;
Fig. 5 is an architecture diagram of the decentralized distributed rendering system shown in Fig. 1;
Fig. 6 is the first part of the flow chart of the decentralized distributed rendering method provided by the preferred embodiment of the present invention;
Fig. 7 is the second part of the flow chart of the decentralized distributed rendering method provided by the preferred embodiment of the present invention.
Detailed description of the embodiments
In order to overcome the prior art defects in which the master server 200 of a client/server rendering system, as the sole file access and download center, carries a heavy workload, the client/server rendering system demands high network bandwidth, and the master server 200 is prone to crashes or network outages during peak periods of client task submission, the main innovations of the present invention are:
1) The master server 200 designates and maintains a fixed number of idle rendering servers that download the material files required for rendering from the second storage module 206 of the master server 200. That is, whenever one of this fixed number of rendering servers finishes downloading the material file from the master server 200, the master server 200 removes it from the master server material file download queue and designates an equal number of additional idle rendering servers to download the material file from the master server 200, so that the workload and network pressure of the master server 200 stay at a fixed level;
2) The present invention adopts a decentralized distributed rendering system architecture. The master server 200 offloads its file-download load onto those rendering servers, among this fixed number, that have already finished downloading the material file from the master server 200: each rendering server that has finished downloading becomes a new material file download resource provider and provides the material file download service to the fixed number of idle rendering servers assigned by the master server 200. As time goes on, the number of material file download resource providers grows exponentially, the network pressure of the decentralized distributed rendering system of the present invention is gradually dispersed, and the processing efficiency of the system progressively improves.
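A back-of-envelope illustration of innovation point 2, assuming every download "round" takes comparable time and every server that already holds the material can serve T new downloaders per round; the numbers are purely illustrative, not taken from the patent.

```python
def provider_growth(rounds, T):
    """Number of material file providers after each download round: the master
    server plus every rendering server that already holds the material file."""
    providers, downloading = 1, T      # round 0: only the master serves T downloaders
    history = [providers]
    for _ in range(rounds):
        providers += downloading       # finished downloaders become new providers
        downloading = providers * T    # each provider now feeds T more downloaders
        history.append(providers)
    return history

print(provider_growth(rounds=5, T=3))  # [1, 4, 16, 64, 256, 1024], i.e. (T+1)**round
```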
Because the present invention adopts a decentralized distributed rendering system architecture, it solves the prior art technical problems in which the master server 200 of a client/server rendering system, as the sole file access and download center, carries a heavy workload, the client/server rendering system demands high network bandwidth, and the master server 200 is prone to crashes or network outages during peak periods of client task submission. The invention thereby reduces the workload of the master server 200 and the dependence of each rendering server on the master server 200, improves system network utilization and data transfer efficiency, improves system stability and processing efficiency, shortens the server-side response time, and enhances the user experience.
In order to make the objects of the present invention clearer, the present invention is further described below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit it.
As shown in Fig. 1, the decentralized distributed rendering system of the present invention comprises multiple clients 100, a master server 200 communicatively connected with these clients 100, and multiple rendering servers 300 communicatively connected with the master server 200.
Each client 100 sends a rendering-task material file and a rendering task to the master server 200.
The master server 200 divides the rendering task submitted by the client 100, frame by frame, into M sequential subtasks (the value of M is determined by the size of the rendering task) and generates rendering task assignment information.
The master server 200 also periodically monitors the working states of the multiple rendering servers 300, generates an idle rendering server list, randomly selects T idle rendering servers (R1, R2, ..., RT) from this idle rendering server list, adds the information of these T idle rendering servers (R1, R2, ..., RT) to the master server material file download queue, and assigns one rendering subtask to each of the T idle rendering servers (R1, R2, ..., RT).
The T idle rendering servers (R1, R2, ..., RT) obtain the task assignment information from the database 205 of the master server 200, download the material file from the second storage module 206 of the master server 200 via the material file download path in the database 205, and execute the rendering subtasks corresponding to the task assignment information after finishing the material file download.
The master server 200 also tracks the material file download progress of the T rendering servers (R1, R2, ..., RT) and, whenever any rendering server RO (1 ≤ O ≤ T, O a positive integer) among the T rendering servers (R1, R2, ..., RT) is determined to have finished downloading the material file, adds another idle rendering server RT+1 from the idle rendering server list to the master server material file download queue and assigns the next rendering subtask to this idle rendering server RT+1.
The master server 200 also, upon determining that subtask assignment, subtask rendering and result-file uploading have all been completed, looks up the result file storage path in the database 205, reads the result file from the second storage module 206 via this result file storage path, and returns the read result file to the client 100.
The master server 200 also, upon receiving a rendering request from the client 100, designates among the multiple rendering servers a master rendering server whose storage space meets the requirement, has this master rendering server receive the material file submitted by the client 100, and adds this master rendering server to the rendering server material file download queue;
The master server 200 also divides the rendering task into M' sequential subtasks using the frame as the division unit, and generates rendering task assignment information.
The master server 200 also monitors in real time the working states of all rendering servers except the master rendering server, randomly selects T' idle rendering servers (R1', R2', ..., RT') from the resulting idle rendering server list, adds the information of these T' idle rendering servers (R1', R2', ..., RT') to the rendering server material file download queue, and assigns one rendering subtask to each of the T' idle rendering servers (R1', R2', ..., RT').
The master server 200 also tracks the material file download progress of the T' rendering servers (R1', R2', ..., RT') and, whenever any rendering server RO' (1 ≤ O ≤ T, O a positive integer) among the T' rendering servers (R1', R2', ..., RT') is determined to have finished downloading the material file, adds the information of another idle rendering server RT+1' from the idle rendering server list to the rendering server material file download queue and assigns the next rendering subtask to this idle rendering server RT+1'.
The master server also, upon determining that subtask assignment, subtask rendering and result-file uploading have all been completed, looks up the result file storage path in the database 205 of the master server 200, reads the result file from the second storage module 206, and returns the read result file to the client 100.
As shown in Fig. 2, each client 100 includes a first processing module 102, and a first communication module 103, an input module 101, a first storage module 105 and a display module 104 connected with the first processing module 102.
The input module 101 receives the user's input information and passes it to the first processing module 102.
The first processing module 102 determines, according to the user's input, the material files required for rendering (for example models, texture maps and textures before a game frame is shaped), passes these material files to the first communication module 103, and sends a rendering task submission instruction to the first communication module 103.
Upon receiving the rendering task submission instruction, the first communication module 103 submits the material files to the master server 200.
The first communication module 103 also receives the rendering result files delivered by the master server 200, passes the rendering result files to the first storage module 105 for storage, and transfers the rendering result files via the first processing module 102 to the display module 104 for display.
The first communication module 103 may be an existing computer network interface card, and the first processing module 102 may be an existing general-purpose computer processor.
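For orientation only, a toy sketch of the client-side flow in Fig. 2 (input, material selection, submission, receiving and displaying the result). The patent does not specify a transport protocol, so the link to the master server is reduced to a plain function call and the FakeMaster class is an invented stand-in.

```python
class FakeMaster:
    # stand-in for the master server 200; a real client would reach it over the network
    def submit(self, job_name, material_files):
        return f"task-{job_name}"

class Client:
    """Illustrative client 100: input module -> first processing module -> first communication module."""

    def __init__(self, master_server):
        self.master = master_server   # link held by the first communication module 103
        self.local_store = {}         # first storage module 105

    def submit_render_job(self, job_name, material_files):
        # first processing module 102: choose the materials needed for rendering,
        # then have the communication module submit them with the task request
        return self.master.submit(job_name, material_files)

    def receive_result(self, task_id, result_file):
        # first communication module 103: store the returned result file, then show it
        self.local_store[task_id] = result_file
        print(f"showing {result_file} on the display module 104")

client = Client(FakeMaster())
task_id = client.submit_render_job("shot42", ["scene.blend", "textures.zip"])
client.receive_result(task_id, "shot42_frames_0001-0100.zip")
```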
As shown in Fig. 3, the master server 200 of the present invention comprises a second processing module 202, and a second communication module 201, a second storage module 206, a rendering server monitoring module 204, a task assignment and management module 203 and a database 205 connected with the second processing module 202.
The database 205 stores the user registration information of each client 100 and the rendering server registration information.
The second communication module 201 receives the rendering task request, including the user registration information and the material file, submitted by a client 100, records the arrival time of the rendering task request, and passes the rendering task request, including its arrival time, to the second processing module 202.
The second processing module 202 queues the rendering task submitted by the client 100 according to the user registration information provided by the client 100 together with the arrival time of the rendering task request, thereby determining the priority level of the client's rendering task.
The second processing module 202 also looks up the highest-priority rendering task in the client rendering task queue, divides this rendering task into M sequential subtasks using the frame as the division unit, and stores the M sequential subtasks in the database 205. The second processing module 202 also generates the material file download path and saves the material file download path in the database 205.
Alternatively, when the second processing module 202 receives a rendering task request from a client 100 via the second communication module 201, it searches all registered rendering servers for one whose storage space meets the requirement, takes this rendering server as the master rendering server, designates this master rendering server to receive the material file required for rendering submitted by the client 100, and adds this master rendering server to the material file download source queue.
The second processing module 202 also periodically sends a rendering server monitoring instruction to the rendering server monitoring module 204.
Upon receiving the rendering server monitoring instruction, the rendering server monitoring module 204 sends a working-state test signal to the multiple rendering servers, collects the set of working-state information fed back by the multiple rendering servers, generates the idle rendering server list from this working-state information, and passes the idle rendering server list to the second processing module 202.
The second processing module 202 also randomly selects T idle rendering servers from the idle rendering server list, and adds the information of these T idle rendering servers (R1, R2, ..., RT) to the master server material file download queue.
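A sketch of this polling cycle, assuming each rendering server answers a status probe; the probe below is a random stub and the state names are illustrative.

```python
import random

def probe_status(server):
    # stub for the working-state test signal sent to each rendering server;
    # a real probe would query the server over the network
    return random.choice(["idle", "rendering", "downloading", "offline"])

def build_idle_list(registered_servers):
    """Monitoring module 204: poll every registered rendering server and
    return the idle rendering server list."""
    states = {name: probe_status(name) for name in registered_servers}
    return [name for name, state in states.items() if state == "idle"]

def select_idle(idle_list, T):
    """Second processing module 202: randomly pick T servers from the idle list
    for the master server material file download queue."""
    return random.sample(idle_list, min(T, len(idle_list)))

registered = [f"R{i}" for i in range(1, 21)]
idle_list = build_idle_list(registered)
print(select_idle(idle_list, T=3))
```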
The task assignment and management module 203 assigns one subtask to each of the T idle rendering servers (R1, R2, ..., RT) and updates the task assignment progress.
The task assignment and management module 203 also instructs the T idle rendering servers (R1, R2, ..., RT) to download the task assignment information from the database 205, to look up the material file download path in the database 205, and to download the material file from the second storage module 206 or from the master rendering server via this material file download path.
The second processing module 202 also, upon determining that any rendering server RO (1 ≤ O ≤ T, O a positive integer) among the T rendering servers (R1, R2, ..., RT) has finished downloading the material file, deletes the information of this rendering server that has finished its material file download from the master server / master rendering server material file download queue, and adds the information of this rendering server to the material file download source queue. The rendering server that has finished its material file download also executes the rendering subtask corresponding to the subtask it downloaded, and uploads the generated result file to the result file storage path recorded in the database 205 of the master server 200.
The second processing module 202 also randomly selects another idle rendering server from the idle rendering server list and adds the information of this idle rendering server to the master server / master rendering server material file download queue.
The task assignment and management module 203 also assigns the next rendering subtask to this idle rendering server.
The second processing module 202 also randomly selects another T idle rendering servers (R(O+1), R(O+2), ..., R(O+T)) from the idle rendering server list, and adds the information of these additional T idle rendering servers (R(O+1), R(O+2), ..., R(O+T)) to the rendering server material file download queue.
The task assignment and management module 203 also assigns one rendering subtask to each of these T idle rendering servers (R(O+1), R(O+2), ..., R(O+T)) and updates the task assignment progress.
The second communication module 201 also collects the result files rendered and uploaded by the rendering servers, and writes the result files into the second storage module 206 via the second processing module 202.
The second processing module 202 also, once the task assignment and management module 203 determines that all of the above M sequential subtasks have been assigned and that the result files generated by rendering these M sequential subtasks have all been uploaded to the second storage module 206 or to the master rendering server designated by the database 205, looks up the result file storage paths in the database 205, reads the result files from the second storage module 206 or from the master rendering server via the result file storage paths, and returns the read result files to the client 100 via the second communication module 201 of the master server 200 or via the third communication module 301 of the master rendering server.
In the present invention, the second communication module 201 may be an existing server network interface card, the second processing module 202 may be an existing server processor, and the second storage module 206 may be an existing server hard disk.
As shown in Fig. 4, each rendering server includes a third processing module 302, and a third communication module 301 and a third storage module 303 connected with the third processing module 302.
The third communication module 301 downloads the material file and the rendering task assignment information provided by the master server 200, and passes the material file and the rendering task assignment information to the third processing module 302.
The third processing module 302 writes the material file into the third storage module 303, looks up the subtask corresponding to the material file and the rendering task assignment information, and executes the rendering of the subtask.
The third processing module 302 also, when the rendering of the subtask is finished, sends the generated result file to the third communication module 301 and sends a result file upload command to the third communication module 301.
Upon receiving the result file upload command, the third communication module 301 uploads the above result file to the result file storage path designated by the database 205 of the master server 200.
The third communication module 301 may be an existing server network interface card, and the third processing module 302 may be an existing server processor or server graphics card.
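A sketch of one rendering server's cycle as described above (fetch material and assignment, render the subtask, upload the result). Downloading, rendering and uploading are reduced to stubs, and the file naming is an assumption; a real worker would invoke an actual render engine.

```python
import pathlib
import tempfile

class RenderWorker:
    """Illustrative rendering server 300: communication module 301 + processing module 302."""

    def __init__(self, name, storage_dir):
        self.name = name
        self.storage = pathlib.Path(storage_dir)   # third storage module 303

    def fetch(self, material_url, assignment):
        # third communication module 301: download the material file and the
        # assignment information (the transfer itself is stubbed out here)
        local_copy = self.storage / pathlib.Path(material_url).name
        local_copy.write_bytes(b"...material data...")
        return local_copy, assignment

    def render_subtask(self, material_path, assignment):
        # third processing module 302: render the frames named in the assignment;
        # a real implementation would call the render engine here
        first, last = assignment["frame_range"]
        result = self.storage / f"{assignment['task_id']}_frames_{first:04d}-{last:04d}.zip"
        result.write_bytes(b"...rendered frames...")
        return result

    def upload_result(self, result_path, upload):
        # third communication module 301: push the result file to the storage path
        # that the master server's database 205 designates
        return upload(result_path)

worker = RenderWorker("R7", tempfile.mkdtemp())
material, job = worker.fetch("source-queue/R3/scene.zip",
                             {"task_id": "job-001", "frame_range": (25, 48)})
result = worker.render_subtask(material, job)
worker.upload_result(result, upload=lambda p: print("uploaded", p.name))
```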
Fig. 5 is the overall architecture diagram of the decentralized distributed rendering system of the present invention. As shown in Fig. 5, a client N 100 submits to the master server 200 a rendering task request including the material files required for rendering. The master server 200 responds to the request from the client 100, randomly selects a first group of T idle rendering servers (R1, R2, ..., RT) from the idle rendering server list obtained by monitoring, adds the first T rendering servers (R1, R2, ..., RT) to the master server material file download queue, and assigns one rendering subtask to each of these idle rendering servers. The first T idle rendering servers (R1, R2, ..., RT) each download the material file and the relevant rendering task assignment information via the material file download path recorded in the database 205 of the master server 200, and execute the rendering subtasks corresponding to the rendering task assignment information. When one rendering server RO (1 ≤ O ≤ T, O a positive integer) among the first T idle rendering servers (R1, R2, ..., RT) finishes downloading the material file from the master server, the master server 200 removes this rendering server RO from the master server material file download queue and adds it to the material file download source queue, and at the same time selects a second group of T other rendering servers (RT+1, RT+2, ..., R2T) from the regularly updated idle rendering server list, assigns one rendering subtask to each of the second T rendering servers (RT+1, RT+2, ..., R2T), and instructs the second T rendering servers (RT+1, RT+2, ..., R2T) to download the material file from this rendering server RO. In addition, the master server 200 also selects another idle rendering server from the idle rendering server list, assigns a rendering subtask to this idle rendering server, and adds this idle rendering server to the master server material file download queue, so that the number of rendering servers in the master server material file download queue is kept at a fixed threshold T, thereby making full use of the performance of the master server 200 and preventing the master server 200 from falling idle. The threshold T is determined by the performance of the master server 200 and the network bandwidth, and a user may set the threshold T according to the processor performance of the master server 200 and the network bandwidth.
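The patent only states that T depends on the master server's performance and bandwidth; the sizing rule below is an assumed illustration of one way such a threshold might be chosen, not a formula from the patent.

```python
def choose_threshold(uplink_mbps, per_node_mbps, max_concurrent_io):
    """Assumed sizing rule: let T be the largest number of simultaneous material-file
    downloads that neither saturates the master server's uplink nor exceeds what its
    disks and CPU can serve concurrently."""
    bandwidth_limit = max(1, uplink_mbps // per_node_mbps)
    return max(1, min(bandwidth_limit, max_concurrent_io))

# e.g. a 1 Gbit/s uplink, roughly 200 Mbit/s per rendering node, disks good for 8 parallel reads
print(choose_threshold(uplink_mbps=1000, per_node_mbps=200, max_concurrent_io=8))  # -> 5
```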
When the master server 200 determines that a rendering server RT+1 among the second T rendering servers (RT+1, RT+2, ..., R2T) has finished downloading the material file from a rendering server, the master server 200 removes this rendering server RT+1 from the rendering server material file download queue and adds it to the material file download source queue, selects a third group of T other rendering servers from the regularly updated idle rendering server list, adds the third T rendering servers to the rendering server material file download queue, assigns one rendering subtask to each of the third T rendering servers, and instructs the third T rendering servers to download the material file from the rendering server RT+1 and to download the task assignment information from the database 205, and so on.
When the master server 200 determines that subtask assignment, subtask rendering and result-file uploading have all been completed, it reads the result files at the result file storage paths in the database 205 and returns the read result files to the client 100; the client 100 receives the result files and presents the rendering result of the material files on the display module 104.
The flow of the decentralized distributed rendering method of the present invention is described below, taking the preferred embodiment of the present invention as an example:
As shown in Figs. 6 and 7, in step S101, client user registration and rendering server registration are performed at the master server, and the user registration information and the rendering server information are stored in the database 205 of the master server 200.
In step S102, the master server 200 receives a file rendering request submitted by a user through a client 100, looks up the user registration information, determines the user priority level according to the user information, and queues the rendering task submitted by the client 100 according to the user priority level and the arrival time of the rendering request.
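A minimal sketch of this queueing in step S102, assuming priority levels are small integers where a lower number means a more important user class and ties are broken by arrival order; the class and field names are illustrative.

```python
import heapq
import itertools

class TaskQueue:
    """Step S102: order rendering tasks by user priority level, then by arrival time."""

    def __init__(self):
        self._heap = []
        self._arrival = itertools.count()   # monotonically increasing arrival index

    def submit(self, user_priority, task):
        # lower number = higher priority class; earlier arrival wins ties
        heapq.heappush(self._heap, (user_priority, next(self._arrival), task))

    def next_task(self):
        # step S103 pops the highest-priority task for splitting
        return heapq.heappop(self._heap)[2] if self._heap else None

q = TaskQueue()
q.submit(2, "freelancer job")
q.submit(1, "studio job A")
q.submit(1, "studio job B")
print(q.next_task(), q.next_task())   # studio job A, then studio job B
```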
In step S103, the master server 200 determines the rendering task with the highest priority and divides this rendering task, using the frame as the division unit, into M sequential subtasks. The master server 200 numbers the M sequential subtasks in order, generates M pieces of subtask assignment information, saves the M sequential subtasks in the database 205, and saves the subtask assignment information and the material file download path in the database 205.
Alternatively, the master server 200 searches all registered rendering servers, selects a master rendering server whose storage space meets the predetermined requirement, adds this master rendering server to the material file download source queue, and instructs this master rendering server to receive the material file required for rendering submitted by the client 100. The master server divides the rendering task and obtains M sequential subtasks. The master rendering server numbers the M sequential subtasks in order, generates M pieces of subtask assignment information, and sends the subtask assignment information and the material file download path to the database 205 for storage.
In step S104, the rendering server monitoring module 204 periodically monitors the working states of the registered rendering servers and generates the idle rendering server list from the monitoring results. The second processing module 202 randomly selects T rendering servers from this idle rendering server list, assigns one rendering subtask to each of the T rendering servers, instructs the T rendering servers to download the task assignment information from the database 205, to look up the material file download path in the database 205 and to download the material file from the second storage module 206 via the material file download path, and monitors the material file download progress of the T rendering servers.
In step S105, the master server 200 determines, from the material file download progress monitoring results, whether any rendering server among the T rendering servers (R1, R2, ..., RT) has finished downloading the material file. If the master server 200 determines that no rendering server among the T rendering servers (R1, R2, ..., RT) has yet finished downloading the material file, the flow returns to step S104 to continue monitoring the material file download progress of the rendering servers. If the master server 200 determines that one or more rendering servers RO (1 ≤ O ≤ T, O a positive integer) among the T rendering servers (R1, R2, ..., RT) have finished downloading the material file from the master server, the flow proceeds to step S106.
In step S106, the master server 200 removes the information of this rendering server RO from the master server material file download queue and adds it to the rendering server material file download queue, has this rendering server RO execute the corresponding rendering subtask, and at the same time selects another idle rendering server from the regularly updated idle rendering server list, adds this idle rendering server to the master server material file download queue, and assigns one rendering subtask to this idle rendering server.
In step S107, the master server 200 designates another T idle rendering servers {R(O+1), R(O+2), ..., R(O+T)} to download the material file from the rendering server newly added to the rendering server material file download queue; the rendering task assignment and management module 203 assigns one rendering subtask to each of these T idle rendering servers {R(O+1), R(O+2), ..., R(O+T)}, updates the task assignment progress, and monitors the material file download progress.
In step S108, the master server 200 determines whether any rendering server among the T idle rendering servers (R(O+1), R(O+2), ..., R(O+T)) has finished downloading the material file from a rendering server. If no rendering server that has finished downloading the material file from a rendering server is detected, the flow returns to step S107 to continue monitoring the material file download progress. If a rendering server that has finished downloading the material file from a rendering server is detected, the flow proceeds to step S109.
In step S109, the master server 200 adds the rendering server that has finished downloading the material file from a rendering server to the rendering server material file download queue, instructs this rendering server to execute the corresponding rendering subtask, to upload the generated result file to the second storage module 206, and to store the result file storage path in the database 205.
In step S110, the master server 200 again designates T idle rendering servers to download the material file from the rendering server newly added to the rendering server material file download queue; the task assignment and management module 203 assigns one rendering subtask to each of these T idle rendering servers and at the same time updates the task assignment progress.
In step S111, the master server 200 determines whether all M subtasks have been assigned; if not all have been assigned, the flow jumps to step S105; if all have been assigned, step S112 is executed.
In step S112, the master server 200 determines that the result files generated by rendering the M subtasks have all been uploaded to the second storage module 206, looks up the result file storage paths in the database 205, reads the result files from the second storage module 206 via the result file storage paths, and returns the read result files to the client 100 via the second communication module 201.
In summary, the present invention adopts a decentralized distributed rendering system architecture, which reduces the load and network pressure of the master server 200 and gives the system excellent scalability. A system administrator can further improve the rendering performance and processing efficiency of the system by increasing the number of rendering servers.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention; any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (15)

1. A decentralized distributed rendering method, characterized by comprising the steps of:
S1) receiving a rendering-task material file and a rendering task request sent by a client (100), dividing the rendering task into M sequential subtasks, and generating rendering task assignment information;
S2) monitoring the working states of the registered rendering servers, randomly selecting T idle rendering servers (R1, R2, ..., RT) from the resulting idle rendering server list, adding the information of the T idle rendering servers (R1, R2, ..., RT) to the master server material file download queue, and assigning one rendering subtask to each of the T idle rendering servers (R1, R2, ..., RT);
S3) tracking the material file download progress of the T rendering servers (R1, R2, ..., RT); whenever any rendering server (RO) among the T rendering servers (R1, R2, ..., RT) is determined to have finished downloading the material file, adding the information of another idle rendering server (RT+1) from the idle rendering server list to the master server material file download queue and assigning the next rendering subtask to this additional idle rendering server (RT+1); repeating the above operations until subtask assignment, subtask rendering and result-file uploading have all been completed, then proceeding to step S4;
S4) reading the result files and returning the read result files to the client (100).
2. The decentralized distributed rendering method according to claim 1, characterized in that before said step S1 the method further comprises the step of:
S0) performing client user registration and rendering server registration at the master server, and storing the user registration information and the rendering server registration information in the database (205) of the master server (200).
3. The decentralized distributed rendering method according to claim 1, characterized in that between said step S0 and said step S1 the method further comprises the step of:
S01) upon receiving a rendering task request from the client (100) that includes user registration information and a material file, determining the user class according to the user registration information, and queueing the rendering task submitted by the client (100) according to the user class together with the task submission time.
4. The decentralized distributed rendering method according to claim 1, characterized in that the step in said step S1 of dividing the rendering task submitted by the client (100) into M sequential rendering subtasks and generating rendering task assignment information specifically comprises:
S11) dividing the rendering task into M sequential subtasks using the frame as the division unit, numbering the M sequential rendering subtasks in order, and generating M pieces of rendering task assignment information;
S12) saving the material file into the second storage module (206) of the master server (200), and saving the material file download path and the rendering task assignment information into the database (205).
5. The decentralized distributed rendering method according to claim 4, characterized in that said step S2 further comprises: instructing the T idle rendering servers (R1, R2, ..., RT) to obtain the relevant rendering task assignment information and the material file download path from the database (205), and to download the material file from the second storage module (206) via the material file download path.
6. The decentralized distributed rendering method according to claim 5, characterized in that said step S3 further comprises: removing from the master server material file download queue the information of the rendering server (RO), among the T rendering servers (R1, R2, ..., RT), that has finished downloading the material file from the master server, adding it to the material file download source queue, and at the same time instructing this rendering server (RO), which has finished downloading the material file from the master server, to execute the rendering subtask corresponding to the rendering task assignment information, to upload the generated result file to the second storage module (206) of the master server (200), and to store the result file storage path in the database (205);
said step S4 further comprises: looking up the result file storage path in the database (205), and reading the result file from the second storage module (206) via the result file storage path.
7. The decentralized distributed rendering method according to claim 6, characterized in that said step S3 further comprises: after the rendering server (RO) that has completed downloading the material file from the master server has been removed from the master server material file download queue, selecting another T idle rendering servers (RO+1, RO+2, …, RO+T) from the idle rendering server list, assigning one sub-rendering task to each of the other T idle rendering servers (RO+1, RO+2, …, RO+T), instructing the other T idle rendering servers (RO+1, RO+2, …, RO+T) to download the material file from the rendering server (RO) that has completed downloading the material file from the master server, and at the same time updating the rendering task assignment progress.
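Claims 6 and 7 together describe a download source queue: a server that has finished downloading from the master is promoted to a source, and later batches of idle servers fetch the material file from it rather than from the master. The sketch below assumes in-memory queues and a round-robin choice of source, which is one possible policy rather than the one claimed.

```python
from collections import deque

source_queue = deque(["master"])      # the master server is the initial download source

def promote_to_source(server_name):
    """Called when server_name has finished downloading the material file (cf. claim 6)."""
    source_queue.append(server_name)

def assign_batch(idle_servers, t):
    """Give each of the next t idle servers a download source taken from the source queue."""
    plan = []
    for server in idle_servers[:t]:
        src = source_queue[0]
        source_queue.rotate(-1)       # spread download load across all known sources
        plan.append((server, src))
    return plan

promote_to_source("R1")               # R1 now holds a full copy of the material file
print(assign_batch(["R4", "R5", "R6"], t=3))
# [('R4', 'master'), ('R5', 'R1'), ('R6', 'master')]
```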
8. The decentralized distributed rendering method according to claim 7, characterized in that said step S3 further comprises: when it is determined that the network between the rendering server (RO) that has completed downloading the material file from the master server and the other T idle rendering servers (RO+1, RO+2, …, RO+T) is interrupted, or that the network quality does not meet a predetermined requirement, removing the rendering server (RO) that has completed downloading the material file from the master server from the material file download source queue;
When it is determined that a rendering server that has completed downloading the material file from the master server or from another rendering server has been added to the material file download source queue, designating the other T idle rendering servers (RO+1, RO+2, …, RO+T) to download the material file from the rendering server newly added to the material file download source queue.
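Claim 8 adds a fallback when the link to a download source fails or degrades. A minimal sketch, assuming a hypothetical probe function that returns a quality score and an arbitrary threshold; the claim itself only requires removing the bad source and redirecting downloads to a server newly added to the source queue.

```python
MIN_QUALITY = 0.5   # assumed quality threshold (e.g. normalised throughput), not from the patent

def reassign_on_failure(source, downloaders, source_queue, probe):
    """probe(source, server) -> quality in [0, 1], or None if the link is interrupted."""
    qualities = [probe(source, s) for s in downloaders]
    if any(q is None or q < MIN_QUALITY for q in qualities):
        if source in source_queue:
            source_queue.remove(source)              # drop the bad source from the queue
        fallback = source_queue[-1] if source_queue else "master"
        return {s: fallback for s in downloaders}    # redirect everyone to the newest source
    return {s: source for s in downloaders}          # link is fine, keep the current source

queue = ["master", "R1", "R2"]
plan = reassign_on_failure("R1", ["R5", "R6"], queue, probe=lambda src, s: 0.2)
print(plan, queue)   # {'R5': 'R2', 'R6': 'R2'} ['master', 'R2']
```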
9. A decentralized distributed rendering method, characterized by comprising the following steps:
S1') upon receiving a rendering task submitted by a client (100), searching all registered rendering servers for a primary rendering server whose storage space meets a preset requirement, designating the primary rendering server to receive the rendering task material file submitted by the client (100), adding the information of this primary rendering server to the material file download source queue, and at the same time storing the rendering task parameter settings and the material file storage path information in the database (205) of the master server (200);
S2') dividing, by the master server, the rendering task into M' serialized sub-tasks, and generating rendering task assignment information;
S3') periodically monitoring the working states of all registered rendering servers except the primary rendering server, randomly selecting T' idle rendering servers (R1', R2', …, RT') from the idle rendering server list obtained by the monitoring, adding the information of the T' idle rendering servers (R1', R2', …, RT') to the rendering server material file download queue, and assigning one sub-rendering task to each of the T' idle rendering servers (R1', R2', …, RT');
S4') tracking and monitoring the material file download progress of the T' rendering servers (R1', R2', …, RT'), and when it is determined that any rendering server (RO') among the T' rendering servers (R1', R2', …, RT') has completed downloading the material file, adding the information of another idle rendering server (RT+1') in the idle rendering server list to the rendering server material file download queue and assigning the next sub-rendering task to that idle rendering server (RT+1'); repeating the above operations until sub-task assignment, sub-task rendering, and rendering result file uploading are all completed, and then performing the next step S5';
S5') reading the result file, and returning the read result file to the client (100).
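Step S1' hinges on picking a primary rendering server with enough storage to host the material file. A minimal sketch, assuming the master already knows each server's free space; choosing the largest qualifying server is an added policy of this sketch, since the claim only requires that the preset storage requirement is met.

```python
def select_primary_server(registered, required_bytes):
    """registered: dict of server name -> free storage in bytes."""
    candidates = {name: free for name, free in registered.items()
                  if free >= required_bytes}           # servers meeting the preset requirement
    if not candidates:
        return None                                    # no qualifying server found
    return max(candidates, key=candidates.get)         # pick the roomiest one (assumed policy)

servers = {"R1": 2_000_000_000, "R2": 500_000_000, "R3": 8_000_000_000}
print(select_primary_server(servers, required_bytes=1_000_000_000))   # R3
```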
10. The decentralized distributed rendering method according to claim 9, characterized in that the step in said step S2' of dividing the rendering task into M' serialized sub-tasks and generating rendering task assignment information specifically comprises:
S21') dividing the rendering task into M' serialized sub-tasks using the frame as the division unit, numbering the M' serialized sub-tasks in sequence, and generating M' pieces of rendering task assignment information;
S22') saving the material file to the third storage module (303) of the primary rendering server, and saving the material file download path and the rendering task assignment information to the database (205).
11. The decentralized distributed rendering method according to claim 10, characterized in that said step S2' further comprises: instructing the T' idle rendering servers (R1', R2', …, RT') to obtain the relevant rendering task assignment information and the material file download path from the database (205), and to download the material file from the third storage module (303) through the material file download path.
12. The decentralized distributed rendering method according to claim 11, characterized in that said step S3' further comprises: removing, from the master server material file download queue, the information of the rendering server (RO') among the T' rendering servers (R1', R2', …, RT') that has completed downloading the material file from the master server, adding that information to the material file download source queue, and at the same time instructing the rendering server (RO') that has completed downloading the material file from the master server to execute the sub-rendering task corresponding to the rendering task assignment information, to upload the result file generated by rendering to the second storage module (206) of the master server (200), and to store the result file storage path in the database (205);
Said step S4' further comprises: looking up the result file storage path in the database (205), and reading the result file from the second storage module (206) through the result file storage path.
13. The decentralized distributed rendering method according to claim 12, characterized in that said step S3' further comprises: after the rendering server (RO') that has completed downloading the material file from the master server has been removed from the master server material file download queue, selecting another T' idle rendering servers (RO+1', RO+2', …, RO+T') from the idle rendering server list, assigning one sub-rendering task to each of the other T' idle rendering servers (RO+1', RO+2', …, RO+T'), instructing the other T' idle rendering servers (RO+1', RO+2', …, RO+T') to download the material file from the rendering server (RO') that has completed downloading the material file from the master server, and at the same time updating the rendering task assignment progress.
14. A decentralized distributed rendering system, characterized by comprising: a plurality of clients (100), a master server (200) in communication connection with the plurality of clients (100), and a plurality of rendering servers (300) in communication connection with said master server (200);
Each of said clients (100) is configured to submit a rendering task request to said master server (200) and to upload the material file required for rendering to said master server (200);
Said master server (200) is configured to divide the rendering task into M serialized sub-tasks and to generate rendering task assignment information;
Said master server (200) is further configured to periodically monitor the working states of said plurality of rendering servers, randomly select T idle rendering servers (R1, R2, …, RT) from the idle rendering server list obtained by the monitoring, add the information of the T idle rendering servers (R1, R2, …, RT) to the master server material file download queue, and assign one sub-rendering task to each of the T idle rendering servers (R1, R2, …, RT);
Said T idle rendering servers (R1, R2, …, RT) are configured to obtain the material file download path and the task assignment information from the database (205) of said master server (200), download the material file through the material file download path, execute the sub-rendering task corresponding to the task assignment information, and upload the rendering result file to the second storage module (206) of said master server (200);
Said master server (200) is further configured to track and monitor the material file download progress of said T rendering servers (R1, R2, …, RT), and when it is determined that any rendering server (RO) among the T rendering servers (R1, R2, …, RT) has completed downloading the material file, to add the information of another idle rendering server (RT+1) in the idle rendering server list to the master server material file download queue and assign the next sub-rendering task to that idle rendering server (RT+1);
Said master server (200) is further configured, when it is determined that sub-task assignment, sub-task rendering, and rendering result file uploading are all completed, to look up the result file storage path in its database (205), read the result file in its second storage module (206) through the result file storage path, and return the read result file to the client (100).
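The final responsibility of the master server in claim 14 is a completion check followed by result readback via paths stored in the database. A minimal sketch under assumed record fields (assigned/rendered/uploaded flags and a result_path), which are illustrative only:

```python
from pathlib import Path

def collect_results(subtask_records, read_file=Path.read_bytes):
    """subtask_records: list of dicts with 'assigned', 'rendered', 'uploaded' flags
    and a 'result_path' written by the rendering server after uploading its result."""
    if not all(r["assigned"] and r["rendered"] and r["uploaded"] for r in subtask_records):
        return None                                   # job still in progress, keep waiting
    return [read_file(Path(r["result_path"])) for r in subtask_records]

records = [{"assigned": True, "rendered": True, "uploaded": False,
            "result_path": "/storage/job42/frame_0001.exr"}]
print(collect_results(records))                       # None -- upload not finished yet
```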
15. A decentralized distributed rendering system, characterized by comprising: a plurality of clients (100), a master server (200) in communication connection with the plurality of clients (100), and a plurality of rendering servers in communication connection with said master server (200);
Each of said clients (100) is configured to submit a rendering task request to said master server (200);
Said master server (200) is configured, upon receiving a rendering task of said client (100), to search all registered rendering servers for a primary rendering server whose storage space meets a preset requirement, designate said primary rendering server to receive the material file required for rendering submitted by said client (100), and add the information of said primary rendering server to the material file download source queue;
Said master server is further configured to divide the rendering task submitted by said client (100) into M' serialized sub-tasks and to generate rendering task assignment information;
Said master server (200) is further configured to periodically monitor the working states of all registered rendering servers except said primary rendering server, randomly select T' idle rendering servers (R1', R2', …, RT') from the idle rendering server list obtained by the monitoring, add the information of the T' idle rendering servers (R1', R2', …, RT') to the rendering server material file download queue, and assign one sub-rendering task to each of the T' idle rendering servers (R1', R2', …, RT');
Said master server (200) is further configured to track and monitor the material file download progress of the T' rendering servers (R1', R2', …, RT'), and when it is determined that any rendering server (RO') among the T' rendering servers (R1', R2', …, RT') has completed downloading the material file, to add the information of another idle rendering server (RT+1') in the idle rendering server list to the rendering server material file download queue and assign the next sub-rendering task to that idle rendering server (RT+1');
Said master server (200) is further configured, when it is determined that sub-task assignment, sub-task rendering, and rendering result file uploading are all completed, to search for the result file storage path stored in the database (205), read the result file stored in the second storage module (206) through the result file storage path, and return the read result file to the client (100).
CN201410252493.8A 2014-06-09 2014-06-09 Decentralized distributed rendering method and system Pending CN104052803A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410252493.8A CN104052803A (en) 2014-06-09 2014-06-09 Decentralized distributed rendering method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410252493.8A CN104052803A (en) 2014-06-09 2014-06-09 Decentralized distributed rendering method and system

Publications (1)

Publication Number Publication Date
CN104052803A true CN104052803A (en) 2014-09-17

Family

ID=51505154

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410252493.8A Pending CN104052803A (en) 2014-06-09 2014-06-09 Decentralized distributed rendering method and system

Country Status (1)

Country Link
CN (1) CN104052803A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110072427A1 (en) * 2009-09-21 2011-03-24 Oracle International Corporation System and method for synchronizing transient resource usage between virtual machines in a hypervisor environment
CN102340522A (en) * 2010-07-15 2012-02-01 腾讯科技(深圳)有限公司 Data transmission method and device
CN102592315A (en) * 2011-01-12 2012-07-18 上海库达数字信息技术有限公司 3D rendering platform based on GPU cloud cluster

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104468826A (en) * 2014-12-25 2015-03-25 广东威创视讯科技股份有限公司 Distributed rendering method, device and system
CN104991827A (en) * 2015-06-26 2015-10-21 季锦诚 Method for sharing GPU resources in cloud game
CN105447903A (en) * 2015-11-17 2016-03-30 深圳市瑞云科技有限公司 Hybrid rendering method and apparatus thereof
US10672179B2 (en) 2015-12-30 2020-06-02 Wuhan United Imaging Healthcare Co., Ltd. Systems and methods for data rendering
US11544893B2 (en) 2015-12-30 2023-01-03 Wuhan United Imaging Healthcare Co., Ltd. Systems and methods for data deletion
CN105913344A (en) * 2016-04-14 2016-08-31 北京思特奇信息技术股份有限公司 Method and system for aiming at multi-tenant system configuration
CN106202927A (en) * 2016-05-31 2016-12-07 武汉联影医疗科技有限公司 The rendering intent of medical image and system
CN106157355A (en) * 2016-07-01 2016-11-23 国家超级计算深圳中心(深圳云计算中心) A kind of fluid cloud based on high-performance calculation emulation rendering system and method
CN106254489A (en) * 2016-08-16 2016-12-21 王淼 A kind of cloud rendering system without file transmission and method thereof
CN106502794B (en) * 2016-10-24 2019-10-11 深圳市彬讯科技有限公司 A kind of efficient rendering method of 3 d effect graph based on cloud rendering
CN106502794A (en) * 2016-10-24 2017-03-15 深圳市彬讯科技有限公司 A kind of efficient rendering intent of the 3 d effect graph rendered based on high in the clouds
CN109088907A (en) * 2017-06-14 2018-12-25 北京京东尚科信息技术有限公司 File delivery method and its equipment
CN108595455A (en) * 2017-12-28 2018-09-28 武汉智博创享科技股份有限公司 A kind of spatial data coordinate transformation method and device
CN108595455B (en) * 2017-12-28 2021-05-07 武汉智博创享科技股份有限公司 Spatial data coordinate conversion method and device
CN109194976A (en) * 2018-10-22 2019-01-11 网宿科技股份有限公司 Video processing, dissemination method, storage management, Content Management Platform and system
CN109615684A (en) * 2018-12-12 2019-04-12 江苏赞奇科技股份有限公司 A kind of method that decentralization renders online
CN109981801A (en) * 2019-04-30 2019-07-05 深圳微新创世科技有限公司 A kind of Distributed Online rendering method
CN109981801B (en) * 2019-04-30 2021-10-26 深圳微新创世科技有限公司 Distributed online rendering method
CN110955504A (en) * 2019-10-21 2020-04-03 量子云未来(北京)信息科技有限公司 Method, server, system and storage medium for intelligently distributing rendering tasks
CN110955504B (en) * 2019-10-21 2022-12-20 量子云未来(北京)信息科技有限公司 Method, server, system and storage medium for intelligently distributing rendering tasks
CN111028124A (en) * 2019-11-29 2020-04-17 安徽赛诚云渲网络科技有限公司 Rendering system
CN111179034A (en) * 2019-12-27 2020-05-19 珠海随变科技有限公司 Commodity pre-rendering method and device, computer equipment and storage medium
CN113852840A (en) * 2021-09-18 2021-12-28 北京百度网讯科技有限公司 Video rendering method and device, electronic equipment and storage medium
CN113852840B (en) * 2021-09-18 2023-08-22 北京百度网讯科技有限公司 Video rendering method, device, electronic equipment and storage medium
WO2023044877A1 (en) * 2021-09-26 2023-03-30 厦门雅基软件有限公司 Render pass processing method and apparatus, electronic device, and storage medium
CN114390046A (en) * 2022-01-14 2022-04-22 深圳市瑞云科技有限公司 Remote asset file extremely-fast transmission method and transmission system based on Redis database
CN115375530A (en) * 2022-07-13 2022-11-22 北京松应科技有限公司 Multi-GPU collaborative rendering method, system, device and storage medium
CN116527748A (en) * 2023-06-26 2023-08-01 亚信科技(中国)有限公司 Cloud rendering interaction method and device, electronic equipment and storage medium
CN116527748B (en) * 2023-06-26 2023-09-15 亚信科技(中国)有限公司 Cloud rendering interaction method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN104052803A (en) Decentralized distributed rendering method and system
US10862957B2 (en) Dissemination of node metrics in server clusters
US20190207869A1 (en) Intelligent Placement within a Data Center
US10713223B2 (en) Opportunistic gossip-type dissemination of node metrics in server clusters
CN109379448B (en) File distributed deployment method and device, electronic equipment and storage medium
CN104243405A (en) Request processing method, device and system
CN103797463A (en) Method and apparatus for assignment of virtual resources within a cloud environment
CN117370029A (en) Cluster resource management in a distributed computing system
CN110727738B (en) Global routing system based on data fragmentation, electronic equipment and storage medium
CN103581207A (en) Cloud terminal data storage system and data storing and sharing method based on cloud terminal data storage system
KR102567565B1 (en) Apparatus and system for managing federated learning resource, and resource efficiency method thereof
CN111459641B (en) Method and device for task scheduling and task processing across machine room
CN111935306B (en) Node scheduling method and device
CN105245500A (en) Multimedia resource sharing method and device
CN111935242B (en) Data transmission method, device, server and storage medium
CN106850720A (en) Method for upgrading software, apparatus and system
CN109962947A (en) Method for allocating tasks and device in a kind of peer-to-peer network
CN110765092A (en) Distributed search system, index distribution method, and storage medium
CN102724301B (en) Cloud database system and method and equipment for reading and writing cloud data
EP3998754A1 (en) Data distribution method, electronic device, and storage medium
CN105893135B (en) Distributed data processing method and data center
CN110286854B (en) Method, device, equipment and storage medium for group member management and group message processing
CN108667920B (en) Service flow acceleration system and method for fog computing environment
CN105917694B (en) Service in telecommunication network provides and activation
JP2007272540A (en) Data distributing method and data distributing system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20140917