CN116560844A - Multi-node resource allocation method and device for cloud rendering - Google Patents
- Publication number: CN116560844A (application number CN202310561829.8A)
- Authority
- CN
- China
- Prior art keywords
- rendering
- server
- node
- sub
- main server
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5072—Grid computing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a multi-node resource allocation method and device for cloud rendering. The method is executed by a main server in a cloud rendering system that comprises the main server and at least one sub-server. The main server and the sub-servers are configured in the same network environment and exchange data over WebSocket, and each server is divided into at least one rendering-node path. The method comprises the following steps: receiving a rendering request sent by the user side; and determining the best-performing target rendering node according to the hardware resources of each rendering node and the user parameters, so as to execute the rendering request through the target rendering node. The invention integrates the resources of multiple servers into multi-node, multi-path cloud rendering resources and, by optimizing hardware resource allocation, achieves concurrent, multi-terminal, low-latency rendering and browsing.
Description
Technical Field
The embodiment of the invention relates to the technical field of cloud rendering, in particular to a multi-node resource allocation method and device for cloud rendering.
Background
With the wide application of digital twins, BIM and other related three-dimensional scene technologies across industries, three-dimensional scenes now carry significant application value in fields such as government administration and enterprise production. This in turn places higher demands on fast and efficient browsing of three-dimensional scenes.
Cloud rendering is the current mainstream solution for three-dimensional scene rendering. Although it delivers an excellent rendering experience, it cannot provide rendering services that draw on the resources of multiple servers and multiple paths, which has become a key technical bottleneck of current cloud rendering solutions. Single-node rendering mainly suffers from the following problems: 1. it cannot support multiple concurrent sessions to meet multi-user demand; 2. it cannot achieve load balancing, so the resources of a single server are heavily consumed and the service risks crashing.
Disclosure of Invention
To solve the problems in the prior art, the invention provides a multi-node resource allocation method and device for cloud rendering, so as to realize collaborative rendering across multi-server, multi-node resources in a digital twin scenario.
In a first aspect, the present invention provides a multi-node resource allocation method for cloud rendering, executed by a main server in a cloud rendering system. The cloud rendering system comprises the main server and at least one sub-server; the main server and the sub-servers are configured in the same network environment and exchange data with each other over WebSocket, and each of the main server and the sub-servers is divided into at least one rendering-node path. The method comprises:
s1, receiving a rendering request sent by a user side;
s2, determining a target rendering node with optimal performance according to hardware resources of each rendering node and user parameters, and executing the rendering request through the target rendering node.
Optionally, S2 specifically comprises:
scoring each rendering node with a weighted scoring algorithm according to the GPU of the server corresponding to the node, the CPU of that server, the current network of the user side, the current screen resolution of the user side, and the hard disk of that server, and taking the highest-scoring rendering node as the target rendering node.
Optionally, the weighted scoring algorithm is calculated as follows:
V=A1*G+A2*C+A3*N+A4*S+A5*D
where V is the score of a rendering node; G, C, N, S and D are the scores of, respectively, the GPU of the server corresponding to the node, the CPU of that server, the current network of the user side, the current screen resolution of the user side, and the hard disk of that server; A1, A2, A3, A4 and A5 are the weight factors of G, C, N, S and D, respectively, with A1 > A2 > A3 > A4 > A5.
Optionally, G is determined according to the overall performance of the GPU of the server corresponding to each rendering node;
C is determined according to the memory (RAM) capacity of that server;
N is determined according to the current network downlink rate of the user side;
S is determined according to the current screen resolution of the user side;
and D is determined according to the read/write rate of the hard disk of the server corresponding to each rendering node.
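The weighted selection described above can be sketched in a few lines of Python. This is a minimal illustration, not the patent's implementation; the function names are invented here, and the default weights are the example values given later in the description (0.5, 0.2, 0.15, 0.1, 0.05), which satisfy the required ordering A1 > A2 > A3 > A4 > A5.

```python
def node_score(g, c, n, s, d, weights=(0.5, 0.2, 0.15, 0.1, 0.05)):
    """Weighted score V = A1*G + A2*C + A3*N + A4*S + A5*D.

    The default weights are the example values from the description.
    """
    a1, a2, a3, a4, a5 = weights
    assert a1 > a2 > a3 > a4 > a5, "weights must satisfy A1 > A2 > A3 > A4 > A5"
    return a1 * g + a2 * c + a3 * n + a4 * s + a5 * d


def pick_target_node(nodes):
    """Return the rendering node (a dict with a 'scores' entry) whose V is highest."""
    return max(nodes, key=lambda node: node_score(**node["scores"]))
```

Because A1 dominates, a node with a flagship-tier GPU will usually win even if its other components score lower, which matches the GPU-centric nature of rendering workloads.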
In a second aspect, an embodiment of the present invention further provides a multi-node resource allocation device for cloud rendering, configured in a main server in a cloud rendering system. The cloud rendering system comprises the main server and at least one sub-server; the main server and the sub-servers are configured in the same network environment and exchange data with each other over WebSocket, and each of the main server and the sub-servers is divided into at least one rendering-node path. The device comprises:
the rendering request receiving module is used for receiving a rendering request sent by a user side;
and the target rendering node determining module is used for determining a target rendering node with optimal performance according to the hardware resources of each rendering node and the user parameters so as to execute the rendering request through the target rendering node.
The invention provides a cloud rendering system composed of multiple servers, breaking through the limitation of existing single-node cloud rendering. On this multi-node hardware basis, the best-performing target rendering node is determined according to the hardware resources of each rendering node and the user parameters, realizing a multi-node, multi-path, multi-mode, multi-concurrency cloud rendering scheme with good flexibility and scalability that offers users a better three-dimensional scene cloud rendering experience.
Drawings
Fig. 1 is a schematic diagram of a cloud rendering system according to an embodiment of the present invention;
FIG. 2 is a flowchart of a multi-node resource allocation method for cloud rendering according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an optimal resource identification result provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of service process management according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of scheduling core hardware resources in different time periods according to an embodiment of the present invention;
FIG. 6 is a graph of a multi-node cloud rendering effect provided by an embodiment of the present invention;
fig. 7 is another multi-node cloud rendering effect diagram according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Examples
The technical scheme of the invention is carried out cooperatively by hardware and the corresponding software algorithms. Fig. 1 is a schematic diagram of the cloud rendering system provided by this embodiment: the system comprises a main server and at least one sub-server, the main server and the sub-servers are configured in the same network environment and exchange data over WebSocket, and each sub-server is divided into at least one rendering-node path.
The main server is used as a core node in the resource and bears key tasks of node management, resource scheduling and allocation and service communication;
the sub-server mainly comprises the following application programs:
(1) Background subprogram: communicates with the main server;
(2) Signaling-server node application: project application management;
(3) Three-dimensional digital twin project package deployed based on Unreal Engine.
The sub-servers are used as service nodes in the cluster and mainly bear three-dimensional digital twin scene service.
By configuring these application programs, both the main server and the sub-servers in this embodiment can serve as rendering nodes, and a single server can be divided into multiple rendering-node paths. This expands the set of rendering nodes in the system and realizes the multi-node, multi-path, multi-concurrency cloud rendering scheme.
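The "one server, several rendering-node paths" idea can be modeled with a small registry kept by the main server. This is an illustrative sketch only — the class and field names are assumptions, and the patent does not specify how the main server tracks nodes internally:

```python
from dataclasses import dataclass, field


@dataclass
class RenderNode:
    """One 'path' of rendering capacity carved out of a physical server."""
    server: str        # hosting server, e.g. "main" or "sub-1"
    path_id: int       # index of this path on its server
    busy: bool = False


@dataclass
class CloudRenderingCluster:
    """Node registry kept by the main server; sub-servers report in over WebSocket."""
    nodes: list = field(default_factory=list)

    def register_server(self, name: str, paths: int) -> None:
        # A single server may be divided into several rendering-node paths;
        # each path becomes an independently schedulable rendering node.
        for i in range(paths):
            self.nodes.append(RenderNode(server=name, path_id=i))

    def idle_nodes(self) -> list:
        """Only idle paths are candidates when scoring a new rendering request."""
        return [n for n in self.nodes if not n.busy]
```

Registering a main server with two paths and a sub-server with four yields six schedulable rendering nodes, which is how the scheme scales concurrency beyond the number of physical machines.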
On the basis of the hardware, the multi-node resource allocation method for cloud rendering provided in this embodiment is executed by a main server in a cloud rendering system, and referring to fig. 2, the method specifically includes the following steps:
s1, a main server receives a rendering request sent by a user.
S2, the main server determines the best-performing target rendering node according to the hardware resources of each rendering node and the user parameters, executes the rendering request through that node, starts the three-dimensional scene on it, and provides the cloud rendering service.
Here, the hardware resources of each rendering node include the GPU, CPU and hard disk of the server corresponding to the node; the user parameters include the current network of the user side and the current screen resolution of the user side.
Specifically, this embodiment adopts a weighted scoring algorithm: each rendering node is scored according to the GPU of its server, the CPU of its server, the current network of the user side, the current screen resolution of the user side, and the hard disk of its server, and the highest-scoring rendering node is taken as the target rendering node.
The multi-node resource allocation method in this embodiment sets the weight of each factor according to how strongly that factor influences overall hardware resource performance, and computes in real time a comprehensive capability score for the currently idle hardware resources from the percentage score of each index. The formula of the weighted scoring algorithm is as follows:
V=A1*G+A2*C+A3*N+A4*S+A5*D
where V is the score of a rendering node; G, C, N, S and D are the scores of, respectively, the GPU of the server corresponding to the node, the CPU of that server, the current network of the user side, the current screen resolution of the user side, and the hard disk of that server; A1, A2, A3, A4 and A5 are the weight factors of G, C, N, S and D, respectively, with A1 > A2 > A3 > A4 > A5.
For example, the above formula may be instantiated as: V = 0.5*G + 0.2*C + 0.15*N + 0.1*S + 0.05*D.
In this embodiment, G is determined according to the overall performance of the GPU of the server corresponding to each rendering node; C according to the memory (RAM) capacity of that server; N according to the current network downlink rate of the user side; S according to the current screen resolution of the user side; and D according to the read/write rate of the hard disk of that server.
Specifically, the GPU score is graded according to the overall performance of current graphics cards, with points assigned by tier:
flagship tier: 100 points; high-end tier: 80 points; upper-mid tier: 60 points; mid tier: 40 points; entry tier: 20 points.
Memory-capacity scoring rules: 64 GB and above: 100 points; 32 GB: 80 points; 16 GB: 60 points; 8 GB: 40 points; 4 GB: 20 points.
User network downlink-rate scoring rules: 3840 kb/s and above: 100 points; 2560–3840 kb/s: 80 points; 1536–2560 kb/s: 60 points; 922–1536 kb/s: 40 points; below 922 kb/s: 20 points.
User-side screen-resolution scoring rules: 4K resolution: 100 points; 2K resolution: 80 points; 1K resolution: 60 points; below 1K resolution: 40 points.
Server hard-disk read/write-rate scoring rules: 300 MB/s and above: 100 points; 150–299 MB/s: 80 points; 70–149 MB/s: 60 points; 69 MB/s and below: 40 points.
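The tier tables above translate directly into lookup functions. The sketch below is illustrative: the function names are invented here, and the behavior at exact tier boundaries (e.g. exactly 922 kb/s) is an assumption, since the description leaves those boundaries ambiguous.

```python
def gpu_score(tier: str) -> int:
    """Graphics-card performance tier -> points, per the rules above."""
    return {"flagship": 100, "high-end": 80, "upper-mid": 60,
            "mid": 40, "entry": 20}[tier]


def memory_score(gib: int) -> int:
    """Server memory capacity (GB) -> points."""
    for threshold, points in [(64, 100), (32, 80), (16, 60), (8, 40)]:
        if gib >= threshold:
            return points
    return 20


def network_score(kbps: float) -> int:
    """User-side downlink rate (kb/s) -> points; boundary handling is assumed."""
    for threshold, points in [(3840, 100), (2560, 80), (1536, 60), (922, 40)]:
        if kbps >= threshold:
            return points
    return 20


def resolution_score(label: str) -> int:
    """User-side screen resolution label -> points; below 1K scores 40."""
    return {"4K": 100, "2K": 80, "1K": 60}.get(label, 40)


def disk_score(mb_per_s: float) -> int:
    """Server hard-disk read/write rate (MB/s) -> points; no tier below 40."""
    for threshold, points in [(300, 100), (150, 80), (70, 60)]:
        if mb_per_s >= threshold:
            return points
    return 40
```

Note that the resolution and disk tables bottom out at 40 points rather than 20, exactly as the rules state, so a slow disk penalizes a node less than a slow client network does.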
The current network rate of the user and the current screen resolution of the user side are transmitted to the main server over the WebSocket service for identification and scheduling, and a three-dimensional scene configuration matched to the user is set dynamically.
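As a rough illustration of this reporting step, the user side could push its current parameters to the main server as a small JSON payload over the WebSocket connection. The message schema below ("type", "downlink_kbps", "resolution") is an assumption for illustration only; the patent does not define a wire format.

```python
import json


def user_params_message(downlink_kbps: float, resolution: str) -> str:
    """Build the JSON payload the user side might send to the main server.

    Field names are illustrative, not taken from the patent.
    """
    return json.dumps({
        "type": "render_request",
        "downlink_kbps": downlink_kbps,  # current network downlink rate
        "resolution": resolution,        # current screen resolution label
    })


def parse_user_params(raw: str) -> dict:
    """Main-server side: recover the fields relevant to scheduling."""
    msg = json.loads(raw)
    return {"downlink_kbps": msg["downlink_kbps"],
            "resolution": msg["resolution"]}
```

The main server would feed the parsed values into the N and S scoring factors before ranking the idle rendering nodes.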
Referring further to fig. 3, if the score of sub-server 2 in the current cloud rendering system, computed with the weighted scoring formula above, is the highest, then sub-server 2 is the current optimal resource.
With continued reference to fig. 4, this embodiment adopts service process management and uses the pm2 program to manage the pixel streaming service application processes in the matching service, implementing operational requirements such as starting, stopping and load balancing of the service processes.
With continued reference to fig. 5, a schematic diagram of core hardware resource scheduling in different time periods according to this embodiment, the invention can allocate the optimal rendering node to each rendering request in different time periods. This realizes multi-node, multi-path, multi-mode, multi-concurrency cloud rendering with good flexibility and scalability, providing users with a better three-dimensional scene cloud rendering experience.
With continued reference to fig. 6 and fig. 7, both are multi-node cloud rendering effect diagrams provided by an embodiment of the present invention.
The invention can integrate the advantages of multiple server resources, form multi-node and multi-path cloud rendering resources, and realize multi-concurrence, multi-terminal and low-delay browsing rendering effects by optimizing hardware resource allocation.
Further, an embodiment of the present invention provides a multi-node resource allocation device for cloud rendering, configured in a main server in a cloud rendering system. The cloud rendering system comprises the main server and at least one sub-server; the main server and the sub-servers are configured in the same network environment and exchange data with each other over WebSocket, and each of the main server and the sub-servers is divided into at least one rendering-node path. The device comprises:
the rendering request receiving module is used for receiving a rendering request sent by a user side;
and the target rendering node determining module is used for determining a target rendering node with optimal performance according to the hardware resources of each rendering node and the user parameters so as to execute the rendering request through the target rendering node.
The target rendering node determining module is specifically configured to:
score each rendering node with a weighted scoring algorithm according to the GPU of the server corresponding to the node, the CPU of that server, the current network of the user side, the current screen resolution of the user side, and the hard disk of that server, and take the highest-scoring rendering node as the target rendering node.
Optionally, the weighted scoring algorithm is calculated as follows:
V=A1*G+A2*C+A3*N+A4*S+A5*D
where V is the score of a rendering node; G, C, N, S and D are the scores of, respectively, the GPU of the server corresponding to the node, the CPU of that server, the current network of the user side, the current screen resolution of the user side, and the hard disk of that server; A1, A2, A3, A4 and A5 are the weight factors of G, C, N, S and D, respectively, with A1 > A2 > A3 > A4 > A5.
G is determined according to the overall performance of the GPU of the server corresponding to each rendering node;
C is determined according to the memory (RAM) capacity of that server;
N is determined according to the current network downlink rate of the user side;
S is determined according to the current screen resolution of the user side;
and D is determined according to the read/write rate of the hard disk of the server corresponding to each rendering node.
The multi-node resource allocation device for cloud rendering provided by this embodiment can execute the multi-node resource allocation method for cloud rendering provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the method, which are not repeated here.
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.
Claims (5)
1. A multi-node resource allocation method for cloud rendering, characterized in that the method is executed by a main server in a cloud rendering system, the cloud rendering system comprises the main server and at least one sub-server, the main server and the sub-servers are configured in the same network environment and exchange data over WebSocket, and each of the main server and the sub-servers is divided into at least one rendering-node path, the method comprising:
s1, receiving a rendering request sent by a user side;
s2, determining a target rendering node with optimal performance according to hardware resources of each rendering node and user parameters, and executing the rendering request through the target rendering node.
2. The method according to claim 1, wherein S2 specifically comprises:
scoring each rendering node with a weighted scoring algorithm according to the GPU of the server corresponding to the node, the CPU of that server, the current network of the user side, the current screen resolution of the user side, and the hard disk of that server, and taking the highest-scoring rendering node as the target rendering node.
3. The method of claim 2, wherein the weighted scoring algorithm is calculated as follows:
V=A1*G+A2*C+A3*N+A4*S+A5*D
where V is the score of a rendering node; G, C, N, S and D are the scores of, respectively, the GPU of the server corresponding to the node, the CPU of that server, the current network of the user side, the current screen resolution of the user side, and the hard disk of that server; A1, A2, A3, A4 and A5 are the weight factors of G, C, N, S and D, respectively, with A1 > A2 > A3 > A4 > A5.
4. The method according to claim 3, wherein G is determined according to the overall performance of the GPU of the server corresponding to each rendering node;
C is determined according to the memory (RAM) capacity of that server;
N is determined according to the current network downlink rate of the user side;
S is determined according to the current screen resolution of the user side;
and D is determined according to the read/write rate of the hard disk of the server corresponding to each rendering node.
5. A multi-node resource allocation device for cloud rendering, characterized in that the device is configured in a main server in a cloud rendering system, the cloud rendering system comprises the main server and at least one sub-server, the main server and the sub-servers are configured in the same network environment and exchange data over WebSocket, and each of the main server and the sub-servers is divided into at least one rendering-node path, the device comprising:
the rendering request receiving module is used for receiving a rendering request sent by a user side;
and the target rendering node determining module is used for determining a target rendering node with optimal performance according to the hardware resources of each rendering node and the user parameters so as to execute the rendering request through the target rendering node.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310561829.8A CN116560844A (en) | 2023-05-18 | 2023-05-18 | Multi-node resource allocation method and device for cloud rendering |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116560844A true CN116560844A (en) | 2023-08-08 |
Family
ID=87489554
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310561829.8A Pending CN116560844A (en) | 2023-05-18 | 2023-05-18 | Multi-node resource allocation method and device for cloud rendering |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116560844A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110955504A (en) * | 2019-10-21 | 2020-04-03 | 量子云未来(北京)信息科技有限公司 | Method, server, system and storage medium for intelligently distributing rendering tasks |
CN111061560A (en) * | 2019-11-18 | 2020-04-24 | 北京视博云科技有限公司 | Cloud rendering resource scheduling method and device, electronic equipment and storage medium |
CN112634122A (en) * | 2020-12-01 | 2021-04-09 | 深圳提亚数字科技有限公司 | Cloud rendering method and system, computer equipment and readable storage medium |
WO2021190651A1 (en) * | 2020-03-27 | 2021-09-30 | 华为技术有限公司 | Rendering quality adjustment method and related device |
CN115098238A (en) * | 2022-07-07 | 2022-09-23 | 北京鼎成智造科技有限公司 | Application program task scheduling method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |