CN116680086A - Scheduling management system based on offline rendering engine

Scheduling management system based on offline rendering engine

Info

Publication number
CN116680086A
Authority
CN
China
Prior art keywords
rendering
task
node
bearing capacity
rendered
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310915234.8A
Other languages
Chinese (zh)
Other versions
CN116680086B (en)
Inventor
林金怡
李韩
何玄
丁焰
李翔
姜三富
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Unicom Online Information Technology Co Ltd
China Unicom WO Music and Culture Co Ltd
Original Assignee
China Unicom Online Information Technology Co Ltd
China Unicom WO Music and Culture Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Unicom Online Information Technology Co Ltd, China Unicom WO Music and Culture Co Ltd filed Critical China Unicom Online Information Technology Co Ltd
Priority to CN202310915234.8A priority Critical patent/CN116680086B/en
Publication of CN116680086A publication Critical patent/CN116680086A/en
Application granted granted Critical
Publication of CN116680086B publication Critical patent/CN116680086B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027: Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5038: Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00: General purpose image data processing
    • G06T 1/20: Processor architectures; Processor configuration, e.g. pipelining
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00: Indexing scheme relating to G06F9/00
    • G06F 2209/48: Indexing scheme relating to G06F9/48
    • G06F 2209/484: Precedence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00: Indexing scheme relating to G06F9/00
    • G06F 2209/50: Indexing scheme relating to G06F9/50
    • G06F 2209/5021: Priority
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The application relates to the technical field of image rendering and discloses a scheduling management system based on an offline rendering engine, which comprises a task acquisition module, a task analysis module, a rendering node module and a node scheduling module. The task acquisition module is configured to acquire queue information of a rendering queue; the task analysis module is configured to analyze task demand information of the tasks to be rendered in the queue information; the rendering node module comprises a plurality of rendering nodes, each of which is configured to perform offline rendering on a task to be rendered using its rendering capability and rendering bearing capacity; the node scheduling module is configured to read node information of idle rendering nodes and/or rendering nodes with residual bearing capacity, and to distribute tasks to be rendered to the corresponding rendering nodes for offline rendering based on the task demand information. With this system, all tasks to be rendered within a time period can be efficiently distributed to the corresponding rendering nodes for offline rendering, which improves rendering efficiency.

Description

Scheduling management system based on offline rendering engine
Technical Field
The application relates to the technical field of image rendering, in particular to a scheduling management system based on an offline rendering engine.
Background
Rendering refers to the process of generating images from models with software. With the widespread use of computer graphics, both enterprise users and individual users increasingly require rendering.
Rendering generally includes real-time rendering, typically seen in 3D games, and offline rendering, typically seen in animated films. Offline rendering is particularly suitable for scenarios with a relatively large processing volume, and is therefore widely used in advertising, video, animation, and other non-real-time image processing scenarios for individuals and enterprises.
When offline rendering is performed, there are generally a plurality of tasks to be rendered waiting in a queue to be assigned rendering nodes before offline rendering is carried out. In conventional cloud rendering, rendering nodes are usually allocated to each queued task at one time based on enqueue order. This allocation manner, however, cannot make the most of the processing capability and bearing capacity of the rendering nodes, which lowers the task response and feedback efficiency of the whole rendering platform and adversely affects working efficiency and user experience.
Disclosure of Invention
The application aims to provide a scheduling management system based on an offline rendering engine, so as to solve the technical problems described in the Background section.
In order to achieve the above purpose, the present application discloses the following technical solutions:
a scheduling management system based on an offline rendering engine comprises a task acquisition module, a task analysis module, a rendering node module and a node scheduling module;
the task acquisition module is configured to acquire queue information of a rendering queue, wherein the queue information comprises at least one submitted task to be rendered;
the task analysis module is configured to analyze task demand information of the tasks to be rendered in the queue information, wherein the task demand information comprises task types of the tasks to be rendered and bearing capacity demands of the tasks to be rendered;
the rendering node module comprises a plurality of rendering nodes, and each rendering node is configured to perform offline rendering on a task to be rendered by using its rendering capability and rendering bearing capacity;
the node scheduling module is configured to read node information of idle rendering nodes and/or rendering nodes with residual bearing capacity, and allocate tasks to be rendered to corresponding rendering nodes for offline rendering based on the task demand information.
In one embodiment, the node information includes the number of nodes, the node capabilities, and the remaining bearing capacity of the nodes.
In an embodiment, the queue information further includes a queue length, and when the queue length acquired by the task acquiring module is greater than the number of nodes, the task to be rendered is allocated to one rendering node with the remaining bearing capacity meeting the requirement for offline rendering, which specifically includes:
defining one of the tasks to be rendered as a task a and its bearing capacity requirement as P, defining the residual bearing capacity of one of the rendering nodes with residual bearing capacity that is matched based on the task type of the task a as Q, and defining that rendering node as a node b; when Q ≥ P, the task a is distributed to the node b for offline rendering, otherwise the task a is distributed to at least one idle rendering node for offline rendering.
In one embodiment, the rendering node module is further configured to cause collaboration between two or more of the rendering nodes for offline rendering based on a collaboration protocol.
In one embodiment, when Q < P, if the bearing capacity of each idle rendering node is less than P, the node scheduling module obtains the bearing capacity of each idle rendering node, defined as M_1, M_2, …, M_n, and simultaneously obtains the residual bearing capacity of all rendering nodes with residual bearing capacity, defined as N_1, N_2, …, N_d. The node scheduling module calculates the sum of bearing capacity between each idle rendering node and any rendering node with residual bearing capacity according to P_sum,i = M_i + N_u, where P_sum,i denotes the sum of the bearing capacity between the idle rendering node whose bearing capacity is M_i and the rendering node whose residual bearing capacity is N_u. The node scheduling module compares the bearing capacity requirement P of the task a with each bearing capacity sum, selects the idle rendering node and the rendering node with residual bearing capacity corresponding to the bearing capacity sum closest to P as the rendering node group of the task a, and distributes the task a to that rendering node group for offline rendering, wherein the bearing capacity sum corresponding to the selected rendering node group is greater than or equal to the bearing capacity requirement P of the task a.
In one embodiment, when all of the bearing capacity sums P_sum are smaller than the bearing capacity requirement P of the task a, the node scheduling module calculates the sum of bearing capacity between two or more idle rendering nodes, compares the bearing capacity requirement P of the task a with each calculated sum, selects the idle rendering nodes and the rendering node with residual bearing capacity corresponding to the bearing capacity sum closest to P as the rendering node group of the task a, and distributes the task a to that rendering node group for offline rendering, wherein the bearing capacity sum corresponding to the selected rendering node group is greater than or equal to the bearing capacity requirement P of the task a.
In one embodiment, when all of the bearing capacity sums P_sum are smaller than the bearing capacity requirement P of the task a, the node scheduling module calculates the sum of bearing capacity between each idle rendering node and two or more rendering nodes with residual bearing capacity, compares the bearing capacity requirement P of the task a with each calculated sum, selects the idle rendering node and the rendering nodes with residual bearing capacity corresponding to the bearing capacity sum closest to P as the rendering node group of the task a, and distributes the task a to that rendering node group for offline rendering, wherein the bearing capacity sum corresponding to the selected rendering node group is greater than or equal to the bearing capacity requirement P of the task a.
In one embodiment, the node scheduling module is further configured to determine a priority of the task to be rendered, and allocate a rendering node to the task to be rendered with the highest priority in the queue preferentially.
In one embodiment, the priority determination specifically includes: identifying the priority of each task to be rendered according to whether it carries a priority identifier, and sorting the identified tasks carrying priority identifiers in order of priority from high to low.
In one embodiment, the node scheduling module is further configured to: when rendering nodes are allocated to two or more tasks to be rendered with the same priority, allocate a rendering node preferentially to the task to be rendered with the smaller task demand information.
The beneficial effects are that: the scheduling management system based on the offline rendering engine is suitable for reasonably scheduling and distributing rendering nodes for the tasks to be rendered in a queue within a selected time period. In the scheduling and distribution process, the task acquisition module acquires the queue information, the task analysis module analyzes the task demand information of the tasks to be rendered enqueued within the target time period, and the node scheduling module then schedules the tasks to be rendered to appropriate rendering nodes for offline rendering based on the queue information and the task demand information. Because the task demand information and the bearing capacity and capability of the rendering nodes are matched and used reasonably, all the tasks to be rendered within the time period can be distributed efficiently to the corresponding rendering nodes for offline rendering, which improves the rendering efficiency of the rendering platform and considerably promotes user experience and user work efficiency.
Furthermore, the scheduling management system based on the offline rendering engine provided by the application realizes rapid scheduling of the tasks to be rendered and reasonable distribution of the rendering nodes through cooperative rendering by a plurality of rendering nodes combined with the above allocation mechanism, which improves the rendering efficiency of each task to be rendered and the resource utilization of each rendering node.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings required for the embodiments or the description of the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the application, and that other drawings may be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a block diagram illustrating a scheduling management system based on an offline rendering engine according to embodiment 1 of the present application.
Reference numerals: 1. task acquisition module; 2. task analysis module; 3. rendering node module; 4. node scheduling module.
Detailed Description
The technical solutions in the embodiments of the present application will be described below clearly and completely. It is obvious that the described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of protection of the application.
In this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
Example 1
The embodiment discloses a scheduling management system based on an offline rendering engine as shown in fig. 1, which comprises a task acquisition module 1, a task analysis module 2, a rendering node module 3 and a node scheduling module 4.
Specifically, the task obtaining module 1 is configured to obtain queue information of a rendering queue, where the queue information includes at least one submitted task to be rendered.
Specifically, the task analysis module 2 is configured to analyze task demand information for the task to be rendered in the queue information, where the task demand information includes a task type of the task to be rendered and a load demand of the task to be rendered.
Specifically, the rendering node module 3 includes a plurality of rendering nodes, and each of the rendering nodes is configured to perform offline rendering on a task to be rendered by using its rendering capability and rendering bearing capacity.
Specifically, the node scheduling module 4 is configured to read node information of idle rendering nodes and/or rendering nodes with residual bearing capacity, and allocate tasks to be rendered to corresponding rendering nodes for offline rendering based on the task demand information.
In this embodiment, the node information includes the number of nodes, the node capability, and the remaining bearing capacity of the nodes. The number of nodes represents the number of idle rendering nodes, the number of rendering nodes with residual bearing capacity, or the sum of the two. The node capability represents the functional configuration of each rendering node, such as color scheme, scene arrangement, and other existing functions; the functional configuration of a rendering node covers one or more of these functions, and different rendering nodes may have the same or different functional configurations. The remaining bearing capacity of a node represents the bearing capacity of an idle rendering node or the residual bearing capacity of a rendering node that still has residual bearing capacity.
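To make the relationships among the queue information, the task demand information and the node information concrete, a minimal data-structure sketch in Python follows. The class and field names (RenderTask, RenderNode, capacity_demand, and so on) are illustrative assumptions for this sketch only and are not terms defined by the application.

from dataclasses import dataclass, field
from typing import List

@dataclass
class RenderTask:
    """One task to be rendered, as analyzed by the task analysis module."""
    task_id: str
    task_type: str           # must match a node capability, e.g. a color-scheme function
    capacity_demand: float   # bearing capacity requirement P of the task
    priority: int = 0        # higher value = higher priority; 0 = no priority identifier

@dataclass
class RenderNode:
    """One rendering node managed by the rendering node module."""
    node_id: str
    capabilities: List[str]  # functional configuration of the node
    total_capacity: float    # rendering bearing capacity of the node
    used_capacity: float = 0.0

    @property
    def remaining_capacity(self) -> float:
        return self.total_capacity - self.used_capacity

    @property
    def is_idle(self) -> bool:
        return self.used_capacity == 0.0

@dataclass
class RenderQueue:
    """Queue information read by the task acquisition module."""
    tasks: List[RenderTask] = field(default_factory=list)

    @property
    def queue_length(self) -> int:
        return len(self.tasks)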
The queue information further includes a queue length, and when the queue length acquired by the task acquiring module 1 is greater than the number of nodes, the task to be rendered is allocated to one rendering node with the residual bearing capacity meeting the requirement for offline rendering, and specifically includes:
defining one of the tasks to be rendered as a task a and its bearing capacity requirement as P, defining the residual bearing capacity of one of the rendering nodes with residual bearing capacity that is matched based on the task type of the task a as Q, and defining that rendering node as a node b; when Q ≥ P, the task a is distributed to the node b for offline rendering, otherwise the task a is distributed to at least one idle rendering node for offline rendering.
It should be noted that the task type of a task to be rendered corresponds to the node capability; for example, if the task type of the task to be rendered is single-color rendering, only a rendering node having that function can be successfully matched with the task.
Based on the above, when the queue length is greater than the number of nodes, the task a is preferentially allocated to a rendering node with residual bearing capacity for offline rendering, so that idle rendering nodes remain free to meet the demands of other tasks to be rendered. This improves the utilization of the rendering nodes and increases the number of tasks that the rendering platform can render simultaneously or in sequence, which improves rendering efficiency, promotes user work efficiency and improves user experience.
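Under the assumptions of the data-structure sketch above, the allocation rule of this example could be sketched as follows; the function name and the way capacity is booked on a node are illustrative assumptions, not part of the application.

from typing import List, Optional

def allocate_when_queue_exceeds_nodes(task: RenderTask,
                                      nodes: List[RenderNode]) -> Optional[RenderNode]:
    """Queue length > number of nodes: prefer a busy, type-matched node b whose
    residual bearing capacity Q covers the task's requirement P; otherwise fall
    back to an idle rendering node."""
    # Only nodes whose functional configuration matches the task type are candidates.
    matched = [n for n in nodes if task.task_type in n.capabilities]

    # First choice: a node b with residual bearing capacity Q >= P.
    for node_b in matched:
        if not node_b.is_idle and node_b.remaining_capacity >= task.capacity_demand:
            node_b.used_capacity += task.capacity_demand
            return node_b

    # Otherwise: an idle rendering node with sufficient bearing capacity.
    for node in matched:
        if node.is_idle and node.remaining_capacity >= task.capacity_demand:
            node.used_capacity += task.capacity_demand
            return node

    # No single node suffices; cooperation between nodes is needed (Examples 2 and 3).
    return None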
Example 2
Unlike Example 1, in this embodiment the rendering node module 3 is further configured to make two or more of the rendering nodes cooperate in offline rendering based on a collaboration protocol. It should be noted that the collaboration protocol may be any working method, communication method or interaction mechanism for cloud-to-cloud collaboration in the prior art, for example the method, device, apparatus and storage medium for digital content production in Chinese patent application No. CN202211420831.5, the multi-user collaboration method based on 3D cloud rendering, cloud rendering server and cloud rendering system in Chinese patent application No. CN202210830788.3, or the BIM model rendering method based on server-client collaboration in Chinese patent application No. CN202110013645.9. The specific collaboration protocol refers to such related technologies only as examples; this specific technology is not the technical content claimed by the present application and is therefore not detailed here, and a person of ordinary skill in the art can learn more about it through related search channels.
In this embodiment, when Q < P, if the bearing capacity of each idle rendering node is smaller than P, the node scheduling module 4 obtains the bearing capacity of each idle rendering node, defined as M_1, M_2, …, M_n, and simultaneously obtains the residual bearing capacity of all rendering nodes with residual bearing capacity, defined as N_1, N_2, …, N_d. The node scheduling module 4 calculates the sum of bearing capacity between each idle rendering node and any rendering node with residual bearing capacity according to P_sum,i = M_i + N_u, where P_sum,i denotes the sum of the bearing capacity between the idle rendering node whose bearing capacity is M_i and the rendering node whose residual bearing capacity is N_u. The node scheduling module 4 compares the bearing capacity requirement P of the task a with each bearing capacity sum, selects the idle rendering node and the rendering node with residual bearing capacity corresponding to the bearing capacity sum closest to P as the rendering node group of the task a, and distributes the task a to that rendering node group for offline rendering, wherein the bearing capacity sum corresponding to the selected rendering node group is greater than or equal to the bearing capacity requirement P of the task a. The advantage of this design is that, through cooperation between idle rendering nodes and rendering nodes with residual bearing capacity, the processing capability of each rendering node can be used effectively, so the resources of the rendering platform are optimized and the efficiency of the rendering task process is improved.
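A sketch of this pairing step, again under the assumptions of the earlier data structures, is given below; the exhaustive pairwise search is one straightforward reading of the comparison described above, not the only possible implementation.

from itertools import product
from typing import List, Optional, Tuple

def pick_pair_group(task: RenderTask,
                    idle_nodes: List[RenderNode],
                    partial_nodes: List[RenderNode]
                    ) -> Optional[Tuple[RenderNode, RenderNode]]:
    """Pair one idle node (bearing capacity M_i) with one node that has residual
    bearing capacity N_u, choosing the pair whose sum P_sum,i = M_i + N_u is
    closest to the task's requirement P while still being >= P."""
    P = task.capacity_demand
    best_pair, best_sum = None, None
    for idle, partial in product(idle_nodes, partial_nodes):
        p_sum = idle.remaining_capacity + partial.remaining_capacity  # P_sum,i = M_i + N_u
        if p_sum >= P and (best_sum is None or p_sum < best_sum):
            best_pair, best_sum = (idle, partial), p_sum
    # None means no idle/partial pair suffices; see the further fallbacks below.
    return best_pair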
Further, when all of the bearing capacity sums P_sum are smaller than the bearing capacity requirement P of the task a, the node scheduling module 4 calculates the sum of bearing capacity between two or more idle rendering nodes, compares the bearing capacity requirement P of the task a with each calculated sum, selects the idle rendering nodes and the rendering node with residual bearing capacity corresponding to the bearing capacity sum closest to P as the rendering node group of the task a, and distributes the task a to that rendering node group for offline rendering, wherein the bearing capacity sum corresponding to the selected rendering node group is greater than or equal to the bearing capacity requirement P of the task a. The advantage of this design is that the task a can still be scheduled quickly and rendering nodes can be allocated quickly when no suitable idle rendering node and no single rendering node with residual bearing capacity are available for cooperation, which improves the working efficiency of the rendering platform.
Example 3
Unlike Example 2, in this example, when all of the bearing capacity sums P_sum are smaller than the bearing capacity requirement P of the task a, the node scheduling module 4 calculates the sum of bearing capacity between each idle rendering node and two or more rendering nodes with residual bearing capacity, compares the bearing capacity requirement P of the task a with each calculated sum, selects the idle rendering node and the rendering nodes with residual bearing capacity corresponding to the bearing capacity sum closest to P as the rendering node group of the task a, and distributes the task a to that rendering node group for offline rendering, wherein the bearing capacity sum corresponding to the selected rendering node group is greater than or equal to the bearing capacity requirement P of the task a. The advantage of this design is that, through cooperation between an idle rendering node and several rendering nodes with residual bearing capacity, the processing capability of each rendering node can be used effectively, so the resources of the rendering platform are optimized as far as possible while the rendering speed is guaranteed (the idle rendering node ensures that the large-space rendering in the task a proceeds stably), and the efficiency of the rendering task process is improved.
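The fallback of this example could be sketched as below; the greedy way partially loaded nodes are added is a simplification chosen for brevity and is only one possible realization of the group selection described above.

from typing import List, Optional

def pick_idle_plus_partials(task: RenderTask,
                            idle_nodes: List[RenderNode],
                            partial_nodes: List[RenderNode]) -> Optional[List[RenderNode]]:
    """Combine one idle node with nodes that still have residual bearing capacity.
    For each idle node, partially loaded nodes are added in descending order of
    residual capacity until the task's requirement P is covered; among the
    resulting groups, the one whose total is closest to P (but not below it) is
    returned. In the situation of this example at least two partial nodes end up
    in the group, since every idle/partial pair already falls short of P."""
    P = task.capacity_demand
    ranked_partials = sorted(partial_nodes, key=lambda n: n.remaining_capacity, reverse=True)
    best_group, best_sum = None, None
    for idle in idle_nodes:
        group, total = [idle], idle.remaining_capacity
        for partial in ranked_partials:
            group.append(partial)
            total += partial.remaining_capacity
            if total >= P:
                break
        if total >= P and (best_sum is None or total < best_sum):
            best_group, best_sum = list(group), total
    return best_group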
Example 4
Unlike Example 1, in this embodiment the node scheduling module 4 is further configured to judge the priority of the tasks to be rendered and to allocate rendering nodes preferentially to the task to be rendered with the highest priority in the queue. The priority judgment specifically includes: identifying the priority of each task to be rendered according to whether it carries a priority identifier, and sorting the identified tasks carrying priority identifiers in order of priority from high to low. The advantage of this design is that tasks which need to be processed first are scheduled and allocated first by means of the priority judgment, which makes the scheduling of tasks by the rendering platform more reasonable and ensures that all tasks to be rendered are executed in an orderly manner.
Further, the node scheduling module 4 is also configured to: when rendering nodes are allocated to two or more tasks to be rendered with the same priority, allocate a rendering node preferentially to the task with the smaller task demand information; here the task demand information refers chiefly to the bearing capacity requirement of the task to be rendered. The advantage of this design is that a task with a smaller bearing capacity requirement completes faster, so scheduling and allocating such tasks first helps guarantee the overall progress of all the tasks that need preferential treatment and thereby the scheduling and allocation efficiency of all the tasks to be rendered, which considerably promotes user experience and user work efficiency.
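A compact sketch of this ordering rule, using the priority field assumed in the earlier data structures (0 meaning no priority identifier), might look like this:

from typing import List

def order_queue(tasks: List[RenderTask]) -> List[RenderTask]:
    """Serve tasks carrying a priority identifier first (higher priority first);
    among tasks of equal priority, the smaller bearing capacity demand goes
    first, so short tasks do not wait behind large ones."""
    return sorted(tasks, key=lambda t: (-t.priority, t.capacity_demand))

For example, two tasks sharing the highest priority but demanding 2 and 8 units of bearing capacity would be scheduled in that order, the smaller one first.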
Based on the above embodiments, it can be clearly seen that the scheduling management system based on the offline rendering engine of the present application is suitable for reasonably scheduling and distributing rendering nodes for the tasks to be rendered in a queue within a selected time period. In the scheduling and distribution process, the task acquisition module 1 acquires the queue information and the task analysis module 2 analyzes the task demand information of the tasks to be rendered enqueued within the target time period; the node scheduling module 4 then schedules the tasks to be rendered to suitable rendering nodes for offline rendering based on the queue information and the task demand information. Because the task demand information and the bearing capacity and capability of the rendering nodes are matched and used reasonably, all the tasks to be rendered within the time period can be distributed quickly and efficiently to the corresponding rendering nodes for offline rendering, which improves the rendering efficiency of the rendering platform and considerably promotes user experience and user work efficiency.
Furthermore, the scheduling management system based on the offline rendering engine provided by the application realizes rapid scheduling of the tasks to be rendered and reasonable distribution of the rendering nodes through cooperative rendering by a plurality of rendering nodes combined with the above allocation mechanism, which improves the rendering efficiency of each task to be rendered and the resource utilization of each rendering node.
In the embodiments provided by the present application, it is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, code, or any suitable combination thereof. For a hardware implementation, the processor may be implemented in one or more of the following units: an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, other electronic units designed to perform the functions described herein, or a combination thereof. For a software implementation, some or all of the flow of an embodiment may be accomplished by a computer program to instruct the associated hardware. When implemented, the above-described programs may be stored in or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. The computer readable media can include, but is not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
Finally, it should be noted that the foregoing description covers only preferred embodiments of the present application. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described therein or substitute equivalents for some of their features, and any modification, equivalent substitution, improvement or change made without departing from the spirit and principles of the present application remains within its scope.

Claims (10)

1. The scheduling management system based on the offline rendering engine is characterized by comprising a task acquisition module, a task analysis module, a rendering node module and a node scheduling module;
the task acquisition module is configured to acquire queue information of a rendering queue, wherein the queue information comprises at least one submitted task to be rendered;
the task analysis module is configured to analyze task demand information of the tasks to be rendered in the queue information, wherein the task demand information comprises task types of the tasks to be rendered and bearing capacity demands of the tasks to be rendered;
the rendering node module comprises a plurality of rendering nodes, and each rendering node is configured to perform offline rendering on a task to be rendered by using rendering capacity and rendering bearing capacity;
the node scheduling module is configured to read node information of idle rendering nodes and/or rendering nodes with residual bearing capacity, and allocate tasks to be rendered to corresponding rendering nodes for offline rendering based on the task demand information.
2. The offline rendering engine-based scheduling management system according to claim 1, wherein the node information includes the number of nodes, the node capabilities, and the remaining bearing capacity of the nodes.
3. The scheduling management system based on an offline rendering engine according to claim 2, wherein the queue information further includes a queue length, and when the queue length acquired by the task acquisition module is greater than the number of nodes, the task to be rendered is allocated to one rendering node with a residual bearing capacity meeting a requirement for offline rendering, and specifically includes:
defining one of the tasks to be rendered as a task a and its bearing capacity requirement as P, defining the residual bearing capacity of one of the rendering nodes with residual bearing capacity that is matched based on the task type of the task a as Q, and defining that rendering node as a node b; when Q ≥ P, the task a is distributed to the node b for offline rendering, otherwise the task a is distributed to at least one idle rendering node for offline rendering.
4. The offline rendering engine-based dispatch management system of claim 3, wherein the rendering node module is further configured to cause collaboration of offline rendering between two or more of the rendering nodes based on a collaboration protocol.
5. The offline rendering engine-based scheduling management system according to claim 4, wherein when Q < P, if the bearing capacity of each idle rendering node is smaller than P, the node scheduling module obtains the bearing capacity of each idle rendering node, defined as M_1, M_2, …, M_n, and simultaneously obtains the residual bearing capacity of all rendering nodes with residual bearing capacity, defined as N_1, N_2, …, N_d; the node scheduling module calculates the sum of bearing capacity between each idle rendering node and any rendering node with residual bearing capacity according to P_sum,i = M_i + N_u, where P_sum,i denotes the sum of the bearing capacity between the idle rendering node whose bearing capacity is M_i and the rendering node whose residual bearing capacity is N_u; the node scheduling module compares the bearing capacity requirement P of the task a with each bearing capacity sum, selects the idle rendering node and the rendering node with residual bearing capacity corresponding to the bearing capacity sum closest to P as the rendering node group of the task a, and distributes the task a to the rendering node group for offline rendering, wherein the bearing capacity sum corresponding to the selected rendering node group is greater than or equal to the bearing capacity requirement P of the task a.
6. The offline rendering engine-based scheduling management system according to claim 5, wherein when all of the bearing capacity sums P_sum are smaller than the bearing capacity requirement P of the task a, the node scheduling module calculates the sum of bearing capacity between two or more idle rendering nodes, compares the bearing capacity requirement P of the task a with each calculated sum, selects the idle rendering nodes and the rendering node with residual bearing capacity corresponding to the bearing capacity sum closest to P as the rendering node group of the task a, and distributes the task a to the rendering node group for offline rendering, wherein the bearing capacity sum corresponding to the selected rendering node group is greater than or equal to the bearing capacity requirement P of the task a.
7. The offline rendering engine-based scheduling management system according to claim 5, wherein when all of the bearing capacity sums P_sum are smaller than the bearing capacity requirement P of the task a, the node scheduling module calculates the sum of bearing capacity between each idle rendering node and two or more rendering nodes with residual bearing capacity, compares the bearing capacity requirement P of the task a with each calculated sum, selects the idle rendering node and the rendering nodes with residual bearing capacity corresponding to the bearing capacity sum closest to P as the rendering node group of the task a, and distributes the task a to the rendering node group for offline rendering, wherein the bearing capacity sum corresponding to the selected rendering node group is greater than or equal to the bearing capacity requirement P of the task a.
8. The offline rendering engine-based dispatch management system of claim 1, wherein the node dispatch module is further configured to perform priority determination on the tasks to be rendered, and to preferentially allocate rendering nodes to the task to be rendered with the highest priority in the queue.
9. The offline rendering engine-based dispatch management system of claim 8, wherein the priority determination specifically comprises: and completing priority identification of the tasks to be rendered according to whether the tasks to be rendered carry priority identifiers or not, and sequencing the identified tasks to be rendered with the priority identifiers based on the order of the priority from high to low.
10. The offline rendering engine-based dispatch management system of claim 9, wherein the node dispatch module is further configured to: when rendering nodes are allocated to two or more tasks to be rendered with the same priority, allocate a rendering node preferentially to the task to be rendered with the smaller task demand information.
CN202310915234.8A 2023-07-25 2023-07-25 Scheduling management system based on offline rendering engine Active CN116680086B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310915234.8A CN116680086B (en) 2023-07-25 2023-07-25 Scheduling management system based on offline rendering engine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310915234.8A CN116680086B (en) 2023-07-25 2023-07-25 Scheduling management system based on offline rendering engine

Publications (2)

Publication Number Publication Date
CN116680086A (en) 2023-09-01
CN116680086B CN116680086B (en) 2024-04-02

Family

ID=87791237

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310915234.8A Active CN116680086B (en) 2023-07-25 2023-07-25 Scheduling management system based on offline rendering engine

Country Status (1)

Country Link
CN (1) CN116680086B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103299347A (en) * 2011-12-31 2013-09-11 华为技术有限公司 Online rendering method and offline rendering method and relevant device based on cloud application
US20140331233A1 (en) * 2013-05-06 2014-11-06 Abbyy Infopoisk Llc Task distribution method and system
CN107274471A (en) * 2017-06-15 2017-10-20 深圳市彬讯科技有限公司 It is a kind of to dispatch system based on rendering parallel multipriority queue offline in real time
CN111949398A (en) * 2020-07-30 2020-11-17 西安万像电子科技有限公司 Resource scheduling method and device
CN112015533A (en) * 2020-08-24 2020-12-01 当家移动绿色互联网技术集团有限公司 Task scheduling method and device suitable for distributed rendering
CN113271351A (en) * 2021-05-13 2021-08-17 叶阗瑞 Cloud computing resource scheduling method, device, equipment and readable storage medium
CN113608871A (en) * 2021-08-02 2021-11-05 腾讯科技(深圳)有限公司 Service processing method and device
CN113742068A (en) * 2021-08-27 2021-12-03 深圳市商汤科技有限公司 Task scheduling method, device, equipment, storage medium and computer program product
WO2023044877A1 (en) * 2021-09-26 2023-03-30 厦门雅基软件有限公司 Render pass processing method and apparatus, electronic device, and storage medium
CN116107710A (en) * 2022-08-10 2023-05-12 北京字跳网络技术有限公司 Method, apparatus, device and medium for processing offline rendering tasks

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103299347A (en) * 2011-12-31 2013-09-11 华为技术有限公司 Online rendering method and offline rendering method and relevant device based on cloud application
US20140331233A1 (en) * 2013-05-06 2014-11-06 Abbyy Infopoisk Llc Task distribution method and system
CN107274471A (en) * 2017-06-15 2017-10-20 深圳市彬讯科技有限公司 It is a kind of to dispatch system based on rendering parallel multipriority queue offline in real time
CN111949398A (en) * 2020-07-30 2020-11-17 西安万像电子科技有限公司 Resource scheduling method and device
CN112015533A (en) * 2020-08-24 2020-12-01 当家移动绿色互联网技术集团有限公司 Task scheduling method and device suitable for distributed rendering
CN113271351A (en) * 2021-05-13 2021-08-17 叶阗瑞 Cloud computing resource scheduling method, device, equipment and readable storage medium
CN113608871A (en) * 2021-08-02 2021-11-05 腾讯科技(深圳)有限公司 Service processing method and device
WO2023011157A1 (en) * 2021-08-02 2023-02-09 腾讯科技(深圳)有限公司 Service processing method and apparatus, server, storage medium, and computer program product
CN113742068A (en) * 2021-08-27 2021-12-03 深圳市商汤科技有限公司 Task scheduling method, device, equipment, storage medium and computer program product
WO2023024410A1 (en) * 2021-08-27 2023-03-02 上海商汤智能科技有限公司 Task scheduling method and apparatus, device, storage medium, computer program product, and computer program
WO2023044877A1 (en) * 2021-09-26 2023-03-30 厦门雅基软件有限公司 Render pass processing method and apparatus, electronic device, and storage medium
CN116107710A (en) * 2022-08-10 2023-05-12 北京字跳网络技术有限公司 Method, apparatus, device and medium for processing offline rendering tasks

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KUNLUN WANG 等: "Online Task Scheduling and Resource Allocation for Intelligent NOMA-Based Industrial Internet of Things", 《IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS》, vol. 38, no. 5, 16 March 2020 (2020-03-16), pages 803 - 815, XP011786285, DOI: 10.1109/JSAC.2020.2980908 *
朱友康 (ZHU Youkang) et al.: "A Survey of Edge Computing Migration Research" (边缘计算迁移研究综述), Telecommunications Science (电信科学), vol. 35, no. 4, 23 April 2019 (2019-04-23), pages 74-94 *

Also Published As

Publication number Publication date
CN116680086B (en) 2024-04-02

Similar Documents

Publication Publication Date Title
CN109918184B (en) Picture processing system, method and related device and equipment
TWI747092B (en) Method, equipment and system for resource scheduling and central server thereof
CN110769278A (en) Distributed video transcoding method and system
CN112905326B (en) Task processing method and device
US20090282413A1 (en) Scalable Scheduling of Tasks in Heterogeneous Systems
CN111506434B (en) Task processing method and device and computer readable storage medium
CN109462769A (en) Direct broadcasting room pendant display methods, device, terminal and computer-readable medium
CN110069341A (en) What binding function configured on demand has the dispatching method of dependence task in edge calculations
CN103067468A (en) Cloud scheduling method and system thereof
CN103336719A (en) Distribution rendering system and method in P2P mode
CN112888005B (en) MEC-oriented distributed service scheduling method
CN106454402A (en) Transcoding task scheduling method and device
CN117041623A (en) Digital person live broadcasting method and device
CA2631255A1 (en) Scalable scheduling of tasks in heterogeneous systems
CN113742009B (en) Desktop cloud environment resource scheduling method, device, equipment and storage medium
CN112685162A (en) High-efficiency scheduling method, system and medium for heterogeneous computing resources of edge server
CN113326025A (en) Single cluster remote continuous release method and device
CN116680086B (en) Scheduling management system based on offline rendering engine
CN117193992A (en) Model training method, task scheduling device and computer storage medium
CN115391053B (en) Online service method and device based on CPU and GPU hybrid calculation
CN112769788A (en) Charging service data processing method and device, electronic equipment and storage medium
CN116010051A (en) Federal learning multitasking scheduling method and device
CN116107710A (en) Method, apparatus, device and medium for processing offline rendering tasks
CN113986511A (en) Task management method and related device
CN107172142B (en) A kind of data dispatching method accelerating cloud computation data center inquiry

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant