CN110955504A - Method, server, system and storage medium for intelligently distributing rendering tasks - Google Patents


Info

Publication number
CN110955504A
CN110955504A
Authority
CN
China
Prior art keywords
rendering
task
rendered
node
user
Prior art date
Legal status
Granted
Application number
CN201911001408.XA
Other languages
Chinese (zh)
Other versions
CN110955504B (en)
Inventor
李甫
Current Assignee
Hefei Kelast Network Technology Co ltd
Original Assignee
Quantum Cloud Future Beijing Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Quantum Cloud Future Beijing Information Technology Co ltd
Priority to CN201911001408.XA
Publication of CN110955504A
Application granted
Publication of CN110955504B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources to service a request
    • G06F9/5027 Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F9/5038 Allocation of resources to service a request, considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G06F2209/00 Indexing scheme relating to G06F9/00
    • G06F2209/48 Indexing scheme relating to G06F9/48
    • G06F2209/484 Precedence
    • G06F2209/50 Indexing scheme relating to G06F9/50
    • G06F2209/5021 Priority

Abstract

The embodiments of the invention relate to the field of computer technology and disclose a method, server, system and storage medium for intelligently allocating rendering tasks. The method comprises the following steps: acquiring a task to be rendered uploaded by a user through a user side, together with the task uploading time; adding the task to be rendered to a task queue according to the task uploading time; sorting the rendering nodes according to pre-acquired rendering node performance; dispatching tasks to be rendered from the task queue in order to the rendering nodes in the current execution order; and receiving the rendering results fed back by the rendering nodes in the current execution order and returning them to the user side of the user. In this way, the time spent on image rendering is reduced while other work proceeds normally, greatly improving the efficiency of image rendering.

Description

Method, server, system and storage medium for intelligently distributing rendering tasks
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to a method, a server, a system and a storage medium for intelligently distributing rendering tasks.
Background
Image rendering is the process of converting a three-dimensional light-energy transfer computation into a two-dimensional image. Before rendering, three-dimensional geometric model information, three-dimensional animation definition information and material information must be prepared. If the three-dimensional animation is complex and consumes a large amount of memory, rendering takes longer and delays the normal progress of other work. For example, an animation in a film special effect contains many frames, and when rendering on a single machine each frame must finish before the next can begin, which inevitably takes a long time.
Therefore, how to reduce the time spent on image rendering while ensuring that other work proceeds normally, and thereby improve the efficiency of image rendering, is the technical problem this application urgently needs to solve.
Disclosure of Invention
Accordingly, the embodiments of the invention provide a method, a server, a system and a storage medium for intelligently allocating rendering tasks, to address the prior-art problems that complex, memory-intensive images make image rendering time-consuming and work efficiency low.
In order to achieve the above object, an embodiment of the present invention provides the following:
in a first aspect of the embodiments of the present invention, there is provided a method for intelligently allocating rendering tasks, the method being performed by a server for intelligently allocating rendering tasks and comprising:
acquiring a task to be rendered uploaded by a user through a user side and task uploading time;
adding the task to be rendered into a task queue according to the task uploading time;
sorting the rendering nodes according to pre-acquired rendering node performance;
dispatching tasks to be rendered from the task queue in order to the rendering nodes in the current execution order, so that the rendering nodes in the current execution order render the tasks according to preset rules, wherein the preset rules at least include a rendering execution start time and the user's rendering requirements;
and receiving the rendering results fed back by the rendering nodes in the current execution order and returning them to the user side of the user, wherein a rendering node is any authorized terminal device.
In one embodiment of the invention, rendering node performance is determined comprehensively from the hardware configuration of the rendering node and the node's currently idle resources.
In another embodiment of the present invention, when the task to be rendered uploaded by the user through the user side includes at least two subtasks, each subtask is allocated to a separate rendering node for rendering.
In another embodiment of the present invention, before the tasks to be rendered are dispatched from the task queue to the rendering nodes in the current execution order, the method further includes:
receiving an operation instruction in which the user selects a first execution area for the task to be rendered;
when it is determined from the operation instruction that a first rendering node in the first execution area meets the execution start time and the user's rendering requirements, allocating the task to be rendered to the first rendering node;
or when it is determined from the operation instruction that no rendering node in the first execution area meets the execution start time and/or the user's rendering requirements, automatically allocating the user a second execution area that meets the execution start time and the user's rendering requirements, and designating a second rendering node to execute the rendering task, wherein there are at least two execution areas and each execution area includes at least one rendering node.
In a further embodiment of the present invention, after the task to be rendered is allocated to the first rendering node or the second rendering node, the method further includes: determining an optimal transmission path for the task to be rendered, so as to transmit it to the first rendering node or the second rendering node in the shortest time.
In another embodiment of the present invention, each execution area includes at least one transit storage node, and the task to be rendered and the task uploading time are stored in a preset transit storage node, the preset transit storage node being the transit storage node in the execution area of the task to be rendered whose communication connection with the user side of the user transmits data in the shortest time;
determining an optimal transmission path for a task to be rendered, specifically comprising:
and determining the optimal transmission path according to the communication relationships among the preset transit storage node, the other transit storage nodes in the execution area of the task to be rendered, and the rendering node designated to execute the rendering task.
In a second aspect of the embodiments of the present invention, there is provided a server for intelligently allocating rendering tasks, comprising: a receiving unit configured to obtain a task to be rendered uploaded by a user through a user side, together with the task uploading time;
the processing unit is used for adding the task to be rendered into the task queue according to the task uploading time;
sort the rendering nodes according to pre-acquired rendering node performance;
and dispatch tasks to be rendered from the task queue in order to the rendering nodes in the current execution order, so that the rendering nodes in the current execution order render the tasks according to preset rules, wherein the preset rules at least include a rendering execution start time and the user's rendering requirements;
the receiving unit is further configured to receive the rendering results fed back by the rendering nodes in the current execution order; and the sending unit is configured to return the rendering results to the user side of the user, wherein a rendering node is any authorized terminal device.
In a third aspect of embodiments of the present invention, there is provided a system for intelligently allocating rendering tasks, comprising: the system comprises a user side, a server for intelligently distributing rendering tasks and rendering nodes;
the user side is used for uploading the task to be rendered to the server for intelligently distributing the rendering task according to the user operation instruction;
a server intelligently allocating rendering tasks for performing the method of any one of claims 1-7;
the rendering node is used for rendering the task to be rendered according to a preset rule;
and feeding back the rendering result to a server intelligently distributing the rendering task.
In a fourth aspect of the embodiments of the present invention, there is provided a computer storage medium in which one or more program instructions are embodied, the program instructions being used by a server that intelligently allocates rendering tasks to perform any of the steps of the above method for intelligently allocating rendering tasks.
The embodiments of the invention provide the following advantages: after the tasks to be rendered uploaded by users are obtained, they are ordered by uploading time, and each task is then preferentially allocated to a rendering node with high rendering performance. Rendering nodes render according to preset rules, and multiple rendering nodes execute rendering work in parallel. Each rendering node is in fact a terminal device connected to the network which, once authorized, can establish a communication connection with the server and receive rendering tasks. The specified rendering execution start time prevents rendering work from conflicting with the node's other normal work: a node receives and executes rendering tasks only when it is idle or has spare resources available for rendering. In this way, image rendering time is reduced while other work proceeds normally; any terminal device on the network may, once authorized, become a rendering node, so rendering tasks are executed in a manner similar to a rendering-node cluster, greatly improving the efficiency of image rendering.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. The drawings in the following description are merely exemplary, and those of ordinary skill in the art can derive other embodiments from them without inventive effort.
Fig. 1 is a flowchart illustrating a method for intelligently allocating rendering tasks according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a server for intelligently allocating rendering tasks according to another embodiment of the present invention;
fig. 3 is a schematic structural diagram of a system for intelligently allocating rendering tasks according to another embodiment of the present invention.
Detailed Description
The present invention is described below through particular embodiments; other advantages and effects of the invention will become readily apparent to those skilled in the art from this disclosure. The described embodiments are merely a part of the embodiments of the invention, not all of them, and are not intended to limit the invention to the particular embodiments disclosed. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the present invention.
Embodiment 1 of the present invention provides a method for intelligently allocating rendering tasks. Specifically, as shown in fig. 1, which is a schematic flow diagram of the method, the method is executed by a server that intelligently allocates rendering tasks and may include the following steps:
and step 110, acquiring a task to be rendered uploaded by a user through a user side and task uploading time.
Specifically, the user uploads the task to be rendered through his or her user side. The server for intelligently allocating rendering tasks (hereinafter, the server) receives the task to be rendered and records the task uploading time at the same time. The user side may take the form of an app or a web page; this is not specifically limited.
And step 120, adding the task to be rendered into a task queue according to the task uploading time.
To ensure fairness of execution, tasks to be rendered are added to the task queue in order of their task uploading times, and rendering nodes are then arranged to execute the rendering tasks in their order within the queue.
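As a sketch only, the queue in this step can be a priority queue keyed on uploading time, so the earliest-uploaded task is always dispatched first. The task identifiers and timestamps below are hypothetical, not from the patent:

```python
import heapq

class TaskQueue:
    """FIFO queue of render tasks, ordered by upload time (earliest first)."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker for identical upload times

    def add(self, task_id, upload_time):
        heapq.heappush(self._heap, (upload_time, self._seq, task_id))
        self._seq += 1

    def next_task(self):
        """Pop and return the id of the earliest-uploaded task."""
        return heapq.heappop(self._heap)[2]

q = TaskQueue()
q.add("task-B", upload_time=1700000200)  # uploaded later
q.add("task-A", upload_time=1700000100)  # uploaded earlier
assert q.next_task() == "task-A"  # earlier upload is dispatched first
```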
And step 130, sequencing the rendering nodes according to the pre-acquired rendering node performance.
Specifically, the performance of a rendering node can be calculated and comprehensively evaluated according to preset rules, for example from the node's hardware configuration and currently remaining idle resources. A rendering node is in fact the terminal device of some network user; after authorization, the device can establish a communication connection with the server, which then conveniently obtains its performance parameters, such as hardware configuration and currently remaining idle resources, and ranks the devices by performance accordingly. For example, the models of the device's GPU, CPU and so on are matched against a pre-established database that maps hardware models to scores. Similarly, the device's remaining memory resources can be obtained, with more remaining resources yielding a higher score. A composite score for the device is then computed, and the performance order of the devices, i.e. the order of the rendering nodes, is determined from the composite scores.
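The composite scoring just described might be sketched as follows. The score tables, the memory weight and the node fields are all illustrative assumptions, since the patent does not give concrete values:

```python
# Hypothetical hardware-score tables standing in for the pre-established database.
GPU_SCORES = {"RTX 3060": 60, "RTX 4090": 100}
CPU_SCORES = {"i5-12400": 40, "i9-13900K": 80}

def node_score(node):
    """Composite score: hardware configuration plus currently free resources."""
    hardware = GPU_SCORES.get(node["gpu"], 0) + CPU_SCORES.get(node["cpu"], 0)
    # More remaining memory gives a higher score; the weight is arbitrary here.
    return hardware + node["free_mem_gb"] * 2

def rank_nodes(nodes):
    """Order rendering nodes by composite score, best first."""
    return sorted(nodes, key=node_score, reverse=True)

nodes = [
    {"id": "n1", "gpu": "RTX 3060", "cpu": "i5-12400", "free_mem_gb": 8},
    {"id": "n2", "gpu": "RTX 4090", "cpu": "i9-13900K", "free_mem_gb": 4},
]
assert [n["id"] for n in rank_nodes(nodes)] == ["n2", "n1"]
```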
And step 140, calling the tasks to be rendered from the task queue in sequence and distributing the tasks to rendering nodes in the current execution sequence.
And 150, receiving the rendering result fed back by the rendering nodes in the current execution sequence, and feeding back the rendering result to the user side of the user.
Specifically, the above operations are performed so that the rendering nodes in the current execution order render the tasks to be rendered according to preset rules, where the preset rules at least include rendering execution start time and user rendering requirements.
In addition, during allocation, tasks can be assigned preferentially to high-performance rendering nodes, which naturally improves the efficiency of rendering work. Any terminal device in the network may serve as a rendering node, provided the user holding the device voluntarily registers it with the intelligent allocation rendering system; after authorization, the device acts as a rendering node. The device can also set its own working time, i.e. the rendering execution start time. For example, a node may wish to execute rendering tasks only between 19:00 and 07:00 the next day, reserving the remaining time for its normal work. The system then does not assign it tasks at arbitrary times; it takes the node's available window into account and assigns rendering tasks so that execution starts within that window. Alternatively, when all rendering nodes currently able to execute tasks have already been assigned work, the system may select nodes whose execution windows open within a certain upcoming period and assign rendering tasks to them, so that they begin at their rendering execution start times. If the user of a node has not set a custom execution time, the node may render whenever it has idle resources; the server can assign the task to the node first and let the node schedule the rendering itself. Finally, each rendering task carries the user's rendering requirements, and the rendering node is required to render according to them.
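The self-set execution window described above (e.g. 19:00 through 07:00 the next day) implies a time comparison that wraps past midnight, which could be sketched like this; a minimal illustration, not the patent's implementation:

```python
from datetime import time

def window_allows(start, end, now):
    """True if `now` falls inside the node's allowed rendering window.
    Windows may wrap past midnight, e.g. 19:00 through 07:00 the next day."""
    if start <= end:
        return start <= now <= end
    return now >= start or now <= end  # wrap-around window

night_start, night_end = time(19, 0), time(7, 0)
assert window_allows(night_start, night_end, time(23, 30))     # late evening: ok
assert window_allows(night_start, night_end, time(5, 0))       # early morning: ok
assert not window_allows(night_start, night_end, time(12, 0))  # midday: node busy
```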
Generally speaking, each frame of a picture is treated as a rendering task; if a user's rendering task contains multiple frames, each frame becomes a subtask. Each subtask may be allocated to its own rendering node, so that different nodes render in parallel. When rendering finishes, the results are fed back to the server, which aggregates them. Before distribution, each subtask can be given an identifying name, so that after receiving the rendering results the server can assemble them by name into the overall rendering result and return it to the user who uploaded the task. Of course, if the user already named the subtasks when uploading, no renaming is needed and aggregation follows the user-defined names. Moreover, the aggregated rendering results can be backed up to different transit storage nodes, so that data can be repaired promptly if it is lost or other accidents occur during transmission.
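The per-frame splitting, tagging and aggregation described above could be sketched as follows; the tag format and the round-robin assignment of frames to nodes are illustrative assumptions:

```python
from itertools import cycle

def split_task(task_name, num_frames, node_ids):
    """Tag each frame as a subtask and assign it a rendering node round-robin."""
    nodes = cycle(node_ids)
    return {f"{task_name}/frame-{i:04d}": next(nodes) for i in range(num_frames)}

def gather(results):
    """Reassemble subtask results into frame order using their identifying tags."""
    return [results[tag] for tag in sorted(results)]

assignment = split_task("shot42", 4, ["n1", "n2"])
assert assignment["shot42/frame-0000"] == "n1"
assert assignment["shot42/frame-0001"] == "n2"
# Results may arrive out of order; the tags restore frame order.
assert gather({"shot42/frame-0001": "img1", "shot42/frame-0000": "img0"}) == ["img0", "img1"]
```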
Optionally, to further improve rendering efficiency, rendering tasks may be allocated by execution area. For example, rendering nodes across the country are distributed among various cities, and different cities belong to different areas, divided for instance by province. If each province is treated as an execution area, then before performing step 140 the server may further: receive an operation instruction in which the user selects a first execution area for the task to be rendered.
That is, the user may autonomously select the execution area in which the rendering task is executed, mainly in order to shorten the uploading time of the rendering task. Each execution area contains a sub-server, while the server referred to above can be understood as the overall server responsible for macro-level control. The sub-server is responsible for distributing rendering tasks and controlling the corresponding instructions within its execution area.
Generally, the user is advised to select the execution area closest to his or her location; for example, a user in Liaoning province selects the execution area corresponding to Liaoning province. If Liaoning province contains multiple execution areas, the geographically nearest one may be selected. The specific arrangement can be set according to actual conditions.
The main server determines, according to the operation instruction, whether a first rendering node in the first execution area meets the execution start time and the user's rendering requirements; alternatively, the main server forwards the user's requirements to the sub-server, which determines whether a first rendering node in its own execution area meets them. If so, the task to be rendered is allocated to the first rendering node.
Otherwise, if it is determined that no rendering node in the first execution area meets the execution start time and/or the user's rendering requirements, the main server automatically allocates the user a second execution area that does, and designates a second rendering node to execute the rendering task, wherein there are at least two execution areas and each execution area includes at least one rendering node.
Further, after the task to be rendered is allocated to the first rendering node or the second rendering node, the method further includes: and determining an optimal transmission path for the task to be rendered so as to transmit the task to be rendered to at least the first rendering node or the second rendering node in the shortest time.
The transmission path likewise serves to accelerate file transfer, i.e. to deliver the task to be rendered as quickly as possible to the target rendering node (the rendering node the overall server has assigned).
It should be noted that each execution area may include at least one transit storage node, and the task to be rendered and the task uploading time are actually stored in a preset transit storage node, i.e. the transit storage node in the area whose communication connection with the user side transmits data in the shortest time. Specifically, before the rendering task is formally transmitted, the user side sends a communication connection request to the sub-server, which returns information about all transit storage nodes in the execution area (for example, the names and IP addresses needed to establish a communication connection). The user side then establishes a connection with each transit storage node in turn and sends each a test data packet to measure the transmission time, thereby identifying the node with the shortest transmission time as the preset transit storage node. The task to be rendered is transmitted to that node, and from there onward to the rendering node over further transmission paths.
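Selecting the preset transit storage node by timing a test packet to each candidate could be sketched as follows. The probe callback and node names are hypothetical; a real implementation would time actual network transfers rather than the simulated delays used here:

```python
import time

def pick_transit_node(node_names, probe):
    """Time a test transfer to each transit storage node and keep the fastest."""
    best, best_elapsed = None, float("inf")
    for name in node_names:
        start = time.monotonic()
        probe(name)  # in reality: send a test data packet and await the reply
        elapsed = time.monotonic() - start
        if elapsed < best_elapsed:
            best, best_elapsed = name, elapsed
    return best

# Simulation: pretend each node has a fixed transfer delay in seconds.
delays = {"ts-east": 0.02, "ts-west": 0.005, "ts-north": 0.03}
chosen = pick_transit_node(delays, lambda name: time.sleep(delays[name]))
assert chosen == "ts-west"  # the lowest-latency node becomes the preset node
```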
Specifically, the transmission path between any user side and a rendering node includes at least one transit storage node. In the ideal case, after the user side transmits the task to be rendered to the preset transit storage node, that node forwards it directly to the target rendering node that is to execute the rendering task. If this gives the shortest transmission time, this path is determined to be the optimal transmission path.
If the transmission must pass through other communication nodes, it is preferable to relay among several transit storage nodes and finally deliver the task from a transit storage node to the target node that is to execute the rendering task. Transfer between transit storage nodes is usually faster than transfer between rendering nodes; it can be understood as intranet transmission, which is typically more efficient.
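Choosing the optimal transmission path over such a transit-node topology amounts to a shortest-path search on measured link times. Below is a minimal Dijkstra sketch under assumed link costs; the patent does not prescribe a specific algorithm, and the node names and timings are illustrative:

```python
import heapq

def fastest_path(graph, src, dst):
    """Dijkstra over measured link times; returns (total_time, path)."""
    queue, visited = [(0.0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, t in graph.get(node, {}).items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + t, neighbour, path + [neighbour]))
    return float("inf"), []

# Illustrative link times: the transit-to-transit ("intranet") hop is fast,
# so relaying through ts2 beats the direct but slow ts1 -> render link.
graph = {
    "client": {"ts1": 1.0},
    "ts1": {"render": 9.0, "ts2": 0.5},
    "ts2": {"render": 2.0},
}
cost, path = fastest_path(graph, "client", "render")
assert path == ["client", "ts1", "ts2", "render"] and cost == 3.5
```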
It should be noted that any terminal device in the network that establishes a communication connection with the servers (including the main server and sub-servers above) may run the user side to submit tasks to be rendered, and may also serve as a rendering node executing rendering tasks after registering with the servers. Of course, completing rendering tasks is not an obligation: a node can earn a certain reward, and the user submitting a rendering task pays a certain fee. The reward may take the form of points or of digital currency; the specifics can be set according to actual conditions and are not further limited here.
According to the method for intelligently allocating rendering tasks provided above, after the tasks to be rendered uploaded by users are obtained, they are ordered by uploading time, and each task is then preferentially allocated to a rendering node with high rendering performance. Rendering nodes render according to preset rules, and multiple rendering nodes execute rendering work in parallel. Each rendering node is in fact a terminal device connected to the network which, once authorized, can establish a communication connection with the server and receive rendering tasks. The specified rendering execution start time prevents rendering work from conflicting with the node's other normal work: a node receives and executes rendering tasks only when it is idle or has spare resources available for rendering. In this way, image rendering time is reduced while other work proceeds normally; any terminal device on the network may, once authorized, become a rendering node, so rendering tasks are executed in a manner similar to a rendering-node cluster, greatly improving the efficiency of image rendering.
Corresponding to the foregoing embodiment 1, embodiment 2 of the present invention further provides a server for intelligently allocating rendering tasks. Specifically, as shown in fig. 2, the server includes: a receiving unit 201, a processing unit 202 and a sending unit 203.
The receiving unit 201 is configured to obtain a task to be rendered, which is uploaded by a user through a user side, and a task uploading time.
The processing unit 202 is configured to add the task to be rendered to the task queue according to the task uploading time, sort the rendering nodes according to pre-acquired rendering node performance, and dispatch tasks to be rendered from the task queue in order to the rendering nodes in the current execution order, so that the rendering nodes in the current execution order render the tasks according to preset rules, wherein the preset rules at least include a rendering execution start time and the user's rendering requirements.
The receiving unit 201 is further configured to receive a rendering result fed back by the rendering nodes in the current execution order;
a sending unit 203, configured to feed back the rendering result to a user side of the user, where the rendering node is any authorized terminal device.
Optionally, rendering node performance is determined comprehensively from the hardware configuration of the rendering node and the node's currently idle resources.
Further optionally, when the task to be rendered uploaded by the user through the user side includes at least two subtasks, the processing unit 202 is specifically configured to allocate each subtask to its own rendering node for rendering.
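Splitting a multi-part task so that each subtask renders in parallel on its own node could look like this minimal sketch (field names assumed, not from the patent):

```python
def assign_subtasks(subtasks, nodes):
    """Map each subtask to its own node so all subtasks render in parallel.
    Assumes at least as many authorized nodes as subtasks."""
    if len(nodes) < len(subtasks):
        raise ValueError("not enough rendering nodes for parallel rendering")
    # Best nodes take the subtasks in order.
    ranked = sorted(nodes, key=lambda n: n["performance"], reverse=True)
    return {sub: ranked[i]["name"] for i, sub in enumerate(subtasks)}
```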
Optionally, the receiving unit 201 is further configured to receive an operation instruction in which the user selects a first execution area for the task to be rendered;
the processing unit 202 is further configured to allocate the task to be rendered to a first rendering node in the first execution area when it determines from the operation instruction that this node meets the rendering start time and the user's rendering requirements;
or, when it determines from the operation instruction that no rendering node in the first execution area meets the rendering start time and/or the user's rendering requirements, to automatically allocate to the user a second execution area that does meet them and designate a second rendering node there to execute the rendering task. There are at least two execution areas, and each execution area includes at least one rendering node.
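The fallback between execution areas can be expressed as a simple search: the user's preferred area is probed first, then the remaining areas. The `free_at`/`caps` fields and the set-based capability matching are illustrative assumptions:

```python
def pick_node(regions, preferred, start_time, requirements):
    """Return (region, node name) for the first node that satisfies both the
    rendering start time and the user's rendering requirements, trying the
    user's chosen execution area before any other."""
    order = [preferred] + [r for r in regions if r != preferred]
    for region in order:
        for node in regions[region]:
            # Node must be free by the requested start time and its
            # capabilities must cover every requirement (set superset).
            if node["free_at"] <= start_time and node["caps"] >= requirements:
                return region, node["name"]
    return None   # no execution area can satisfy the request
```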
Optionally, the processing unit 202 is further configured to determine an optimal transmission path for the task to be rendered, so that the task is transmitted to the first rendering node or the second rendering node in the shortest time.
Optionally, each execution area includes at least one relay storage node. The task to be rendered and the task upload time are stored in a preset relay storage node, namely the relay storage node in the task's execution area that, once a communication connection with the user side is established, transmits the data in the shortest time;
the processing unit 202 is specifically configured to determine the optimal transmission path from the communication relationships among the preset relay storage node, the other relay storage nodes in the task's execution area, and the rendering node that executes the rendering task.
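Finding the fastest route from the preset relay storage node through other relay nodes to the rendering node is a shortest-path problem. Under the assumption that measured transmission times serve as edge weights, Dijkstra's algorithm is one way to compute it; the patent does not name a specific algorithm.

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over the relay-storage/rendering-node graph.
    graph: {node: {neighbor: transmission_time}}; returns (path, total_time)."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue   # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None
    # Walk the predecessor chain back to the source.
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[dst]
```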
The functions performed by each component of the server for intelligently allocating rendering tasks have been described in detail in the above embodiments and are not repeated here.
According to the server for intelligently allocating rendering tasks described above, once the tasks to be rendered uploaded by users have been obtained, they are ordered by upload time, and each task is then preferentially assigned to a rendering node with high rendering performance. Each rendering node renders according to preset rules, and multiple rendering nodes execute rendering work in parallel. A rendering node is in fact a terminal device connected to the network that, once authorized, can establish a communication connection with the server and receive rendering tasks. The specified rendering start time keeps rendering work from conflicting with a node's other normal work: a node receives and executes a rendering task only when it is idle or has spare resources available for rendering. Image rendering time is thus reduced without disrupting the node's other work; any terminal device on the network can become a rendering node once authorized, and tasks are rendered by what amounts to a cluster of rendering nodes, greatly improving the efficiency of image rendering.
Corresponding to the above embodiments, embodiment 3 of the present invention further provides a system for intelligently allocating rendering tasks. As shown in fig. 3, the system includes a user side 301, a server 302 for intelligently allocating rendering tasks, and a rendering node 303.
The user side 301 is used to upload a task to be rendered to the server 302 according to a user operation instruction;
the server 302 is used to perform the steps of the method for intelligently allocating rendering tasks provided by any of the above embodiments;
the rendering node 303 is used to render the task to be rendered according to the preset rules and to feed the rendering result back to the server 302.
Although fig. 3 shows a single user side 301, server 302 and rendering node 303, it merely illustrates the communication relationships among the three components of the system and does not indicate their numbers; the actual numbers are set according to practical conditions and are not limited here.
According to the system for intelligently allocating rendering tasks described above, once the tasks to be rendered uploaded by users have been obtained, they are ordered by upload time, and each task is then preferentially assigned to a rendering node with high rendering performance. Each rendering node renders according to preset rules, and multiple rendering nodes execute rendering work in parallel. A rendering node is in fact a terminal device connected to the network that, once authorized, can establish a communication connection with the server and receive rendering tasks. The specified rendering start time keeps rendering work from conflicting with a node's other normal work: a node receives and executes a rendering task only when it is idle or has spare resources available for rendering. Image rendering time is thus reduced without disrupting the node's other work; any terminal device on the network can become a rendering node once authorized, and tasks are rendered by what amounts to a cluster of rendering nodes, greatly improving the efficiency of image rendering.
Corresponding to the above embodiments, embodiments of the present invention also provide a computer storage medium containing one or more program instructions. The one or more program instructions are for the server for intelligently allocating rendering tasks to execute the method of intelligently allocating rendering tasks described above.
Although the invention has been described in detail above with reference to a general description and specific examples, it will be apparent to those skilled in the art that modifications or improvements may be made on the basis of the invention. Accordingly, such modifications and improvements fall within the scope of the claimed invention.

Claims (9)

1. A method for intelligently allocating rendering tasks, performed by a server for intelligently allocating rendering tasks, the method comprising:
acquiring a task to be rendered uploaded by a user through a user side and the task upload time;
adding the task to be rendered to a task queue according to the task upload time;
sorting the rendering nodes according to their pre-acquired performance;
calling tasks to be rendered from the task queue in order and distributing them to the rendering nodes in the current execution order, so that the rendering nodes in the current execution order render the tasks according to preset rules, wherein the preset rules include at least a rendering start time and user rendering requirements; and
receiving a rendering result fed back by the rendering nodes in the current execution order, and feeding the rendering result back to the user side of the user, wherein a rendering node is any authorized terminal device.
2. The method of claim 1, wherein the performance of a rendering node is determined jointly from the node's hardware configuration and its currently idle resources.
3. The method of claim 1, wherein, when the task to be rendered uploaded by the user through the user side includes at least two subtasks, each subtask is allocated to its own rendering node for rendering.
4. The method of any of claims 1-3, wherein, before the tasks to be rendered are called in order from the task queue and distributed to the rendering nodes in the current execution order, the method further comprises:
receiving an operation instruction in which the user selects a first execution area for the task to be rendered;
when it is determined from the operation instruction that a first rendering node in the first execution area meets the rendering start time and the user rendering requirements, allocating the task to be rendered to the first rendering node;
or,
when it is determined from the operation instruction that no rendering node in the first execution area meets the rendering start time and/or the user rendering requirements, automatically allocating to the user a second execution area that meets the rendering start time and the user rendering requirements and designating a second rendering node to execute the rendering task, wherein there are at least two execution areas and each execution area includes at least one rendering node.
5. The method of claim 4, wherein, after the task to be rendered is allocated to the first rendering node or the second rendering node, the method further comprises: determining an optimal transmission path for the task to be rendered, so that the task to be rendered is transmitted to the first rendering node or the second rendering node in the shortest time.
6. The method of claim 5, wherein each execution area includes at least one relay storage node, the task to be rendered and the task upload time are stored in a preset relay storage node, and the preset relay storage node is the relay storage node in the task's execution area that transmits data in the shortest time once a communication connection with the user side of the user is established;
the determining an optimal transmission path for the task to be rendered specifically comprises:
determining the optimal transmission path according to the communication relationships among the preset relay storage node, the other relay storage nodes in the task's execution area, and the rendering node that executes the rendering task.
7. A server for intelligently allocating rendering tasks, the server comprising:
a receiving unit, configured to acquire a task to be rendered uploaded by a user through a user side and the task upload time;
a processing unit, configured to add the task to be rendered to a task queue according to the task upload time;
sort the rendering nodes according to their pre-acquired performance; and
call tasks to be rendered from the task queue in order and distribute them to the rendering nodes in the current execution order, so that the rendering nodes in the current execution order render the tasks according to preset rules, wherein the preset rules include at least a rendering start time and user rendering requirements;
the receiving unit is further configured to receive a rendering result fed back by the rendering nodes in the current execution order;
and a sending unit, configured to feed the rendering result back to the user side of the user, wherein a rendering node is any authorized terminal device.
8. A system for intelligently allocating rendering tasks, the system comprising: the system comprises a user side, a server for intelligently distributing rendering tasks and rendering nodes;
the user side is configured to upload the task to be rendered to the server for intelligently allocating rendering tasks according to a user operation instruction;
the server for intelligently allocating rendering tasks is configured to perform the method of any one of claims 1-6;
the rendering node is configured to render the task to be rendered according to a preset rule and to feed the rendering result back to the server for intelligently allocating rendering tasks.
9. A computer storage medium containing one or more program instructions, the one or more program instructions being executed by a server for intelligently allocating rendering tasks to perform the method of any of claims 1-6.
CN201911001408.XA 2019-10-21 2019-10-21 Method, server, system and storage medium for intelligently distributing rendering tasks Active CN110955504B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911001408.XA CN110955504B (en) 2019-10-21 2019-10-21 Method, server, system and storage medium for intelligently distributing rendering tasks

Publications (2)

Publication Number Publication Date
CN110955504A true CN110955504A (en) 2020-04-03
CN110955504B CN110955504B (en) 2022-12-20

Family

ID=69975695

Country Status (1)

Country Link
CN (1) CN110955504B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112637267A (en) * 2020-11-27 2021-04-09 成都质数斯达克科技有限公司 Service processing method and device, electronic equipment and readable storage medium
CN116560844A (en) * 2023-05-18 2023-08-08 苏州高新区测绘事务所有限公司 Multi-node resource allocation method and device for cloud rendering

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7092983B1 (en) * 2000-04-19 2006-08-15 Silicon Graphics, Inc. Method and system for secure remote distributed rendering
CN102088472A (en) * 2010-11-12 2011-06-08 中国传媒大学 Wide area network-oriented decomposition support method for animation rendering task and implementation method
CN104052803A (en) * 2014-06-09 2014-09-17 国家超级计算深圳中心(深圳云计算中心) Decentralized distributed rendering method and system
CN104572305A (en) * 2015-01-26 2015-04-29 赞奇科技发展有限公司 Load-balanced cluster rendering task dispatching method
US20160296842A1 (en) * 2013-12-26 2016-10-13 Square Enix Co., Ltd. Rendering system, control method, and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230506

Address after: 230071 Comprehensive Building 3-2985, No. 55 Science Avenue, High tech Zone, Shushan District, Hefei City, Anhui Province

Patentee after: Hefei Kelast Network Technology Co.,Ltd.

Address before: 100021 6235, 6th floor, jinyayuan crossing building, YunhuiLi, Haidian District, Beijing

Patentee before: QUANTUM CLOUD FUTURE (BEIJING) INFORMATION TECHNOLOGY CO.,LTD.