CN115883661A - Request dynamic scheduling method in cloud-edge collaborative cloud game scene - Google Patents
Request dynamic scheduling method in cloud-edge collaborative cloud game scene Download PDFInfo
- Publication number
- CN115883661A CN115883661A CN202211489010.7A CN202211489010A CN115883661A CN 115883661 A CN115883661 A CN 115883661A CN 202211489010 A CN202211489010 A CN 202211489010A CN 115883661 A CN115883661 A CN 115883661A
- Authority
- CN
- China
- Prior art keywords
- node
- service
- request
- requests
- cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 24
- 238000012545 processing Methods 0.000 claims abstract description 21
- 230000005540 biological transmission Effects 0.000 claims abstract description 17
- 238000013528 artificial neural network Methods 0.000 claims abstract description 11
- 239000013598 vector Substances 0.000 claims description 28
- 230000015654 memory Effects 0.000 claims description 17
- 230000006870 function Effects 0.000 claims description 16
- 150000001875 compounds Chemical class 0.000 claims description 13
- 238000004364 calculation method Methods 0.000 claims description 8
- 238000005070 sampling Methods 0.000 claims description 8
- 238000004891 communication Methods 0.000 claims description 7
- 230000002776 aggregation Effects 0.000 claims description 5
- 238000004220 aggregation Methods 0.000 claims description 5
- 230000003321 amplification Effects 0.000 claims description 5
- 238000010586 diagram Methods 0.000 claims description 5
- 238000003199 nucleic acid amplification method Methods 0.000 claims description 5
- 230000004913 activation Effects 0.000 claims description 3
- 230000009471 action Effects 0.000 description 5
- 238000007405 data analysis Methods 0.000 description 4
- 238000011161 development Methods 0.000 description 4
- 230000008569 process Effects 0.000 description 4
- 238000009877 rendering Methods 0.000 description 4
- 238000006116 polymerization reaction Methods 0.000 description 2
- 230000002787 reinforcement Effects 0.000 description 2
- 238000012549 training Methods 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000007418 data mining Methods 0.000 description 1
- 230000001934 delay Effects 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 239000000284 extract Substances 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 239000013307 optical fiber Substances 0.000 description 1
- 230000001737 promoting effect Effects 0.000 description 1
Images
Landscapes
- Multi Processors (AREA)
Abstract
The invention discloses a dynamic request scheduling method under a cloud-edge collaborative cloud game scene, which comprises the following steps: the method comprises the steps that a main node judges the type of a received service request, if the request is a BE service and forwards the request to a cloud center, the cloud center constructs a graph structure according to request information and node states of an edge cluster, a BE service scheduling decision is obtained by using a graph neural network and an A2C algorithm to maximize the total throughput of the BE service as a target, and the cloud center schedules the BE service to a working node of the target edge cluster for processing; if the service is the LC service, the number of the LC services is confirmed, different graph structures are established according to the number and the requested service types, the transmission number of the LC services is maximized and the transmission delay is minimized, distributed scheduling decisions are generated by using OR-Tools, and the main node transmits the LC services to corresponding working nodes. The invention ensures the service quality of the LC service request in the cloud game and optimizes the throughput of the BE service for a long time.
Description
Technical Field
The invention belongs to the technical field of cloud games, and particularly relates to a dynamic request scheduling method in a cloud-edge collaborative cloud game scene.
Background
In the modern day of 5G communication network high-speed development and everything interconnection, a cloud edge cooperative framework integrates the advantages and the characteristics of edge computing and cloud computing, computing resources are organized into a large number of small clusters through sinking cloud computing power, moving up terminal computing power and converging edge computing power, and the clusters bring more agile service to nearby terminal users and simultaneously reduce the flow burden of a backbone network. With the development and popularization of high-speed networks such as 5G, optical fiber and the like, cloud games become an inevitable trend in the game industry. Under the assistance of the cloud edge collaborative framework, cloud games can be developed efficiently, the AI game accelerates the game intelligentization, and more comprehensive cloud edge infrastructure and more stable game environment are provided.
According to the service quality requirement, the specific service requests in the cloud game can be divided into two types. One is a series of delay-sensitive (LC) services which are related to real-time interaction of users and have higher requirements on delay and service quality, such as game rendering, game database synchronization, game performance monitoring and the like; the other type is an offline batch processing (BE) service which can tolerate higher running delay and support failed task restarting and carries out data mining analysis in the background, such as game log collection, user portrait data analysis and the like. Generally speaking, an ideal scheme is to deploy two types of services to cloud game scenes with cloud edge cooperation so as to further improve the utilization rate of machine resources. However, due to the huge difference in characteristics between services, how to coordinate the request scheduling of services with larger difference in processing characteristics in the edge cloud game scene becomes a new challenge to be solved urgently.
Disclosure of Invention
Aiming at the problems, the invention provides a request dynamic scheduling method in a cloud-edge collaborative cloud game scene, which can fully consider the characteristics of the service when scheduling the service request in the cloud-edge collaborative cloud game scene, and solves the problem that the traditional scheduling strategy can not meet the requirements. In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
a request dynamic scheduling method under a cloud-edge collaborative cloud game scene comprises the following steps:
s1, a cloud edge cluster system comprising a cloud center and a plurality of edge clusters is constructed, wherein each edge cluster comprises a main node for receiving a service request and a plurality of working nodes for processing the service request;
s2, judging the types of all received service requests by the main node of each edge cluster, forwarding the requests to a cloud center if the requests are BE services, executing the step S6, confirming the number of LC services received by the main node of the edge cluster if the requests are LC services, and executing the step S3;
s3, judgingIf so, construct a graph structure based on node status and request information>Otherwise, randomly selecting Q from all the received LC services ed-handle,b Multiple requests constitute a first set of requests +>The remaining LC service constitutes a second request set +>Assembling/withholding a first request based on service type>And a second set of requests +>Respectively construct a graph structure>And a map structure->Wherein +>Represents the total number of pending LC services, Q, received by the master node of edge cluster b ed-handle,b The total number of the requests which can be processed by all the working nodes in the edge cluster b is represented, and k represents the service type;
s4, constructing a scheduling objective function of the LC service by taking the maximized number of the transmitted LC services and the minimized transmission time delay of the LC services as targets;
s5, the scheduling objective function obtained in the step S4 and the graph structure obtained in the step S3 are combinedAnd a map structure->The distributed scheduling decision is generated by inputting an OR-Tools solver, and the LC service is transmitted to the corresponding working node by the main node according to the distributed scheduling decision for processing;
s6, the cloud center constructs a graph structure according to the received request information of all BE services and the node states of the edge clustersMap structure->Coding to obtain a coded feature vector;
and S7, inputting the coded feature vector into a neural network, acquiring a BE service scheduling decision by using an A2C algorithm and taking the maximum total throughput of the BE service as a target as a reward function, and scheduling the BE service to a working node of a target edge cluster by the cloud center according to the BE service scheduling decision for processing.
In step S3, the graph structureWherein it is present>Is a set of nodes, ε k Is a set of edges, each node->Is taken to be->Each side(s) of i ,s j )∈ε k Is taken to be->Represent, and node
in the formula (I), the compound is shown in the specification,representing a node s i The maximum amount of CPU resources allocated to the service instance corresponding to the LC request of service type k, device for selecting or keeping>Representing a node s i CPU resource availability, based on a service instance assigned to an LC request of service type k>Representing a node s i Maximum memory allocated for the service instance corresponding to the LC request of service type k, based on the number of times the service instance is granted>Representing a node s i Memory availability allocated for the service instance corresponding to the LC request of service type k>Representing a node s i The supply and demand relation between the processed request quantity and the received LC request quantity with the service type k;
in the formula (I), the compound is shown in the specification,representing a node s i And node s j Communication delay therebetween, c i,j Representing a node s i And node s j Requesting transmission capacity therebetween.
When node s i In the case of the master node,indicates that there is a->A LC request to be distributed, when node s i When the node is a working node, the node can bear the value>An LC request, whose calculation formula is:
in the formula (I), the compound is shown in the specification,represents the amount of CPU resource required for an LC request of service type k, and->Indicating the memory required for an LC request of type k.
when s is i In the case of the master node,representing a second set of requests>At node s i The requested number of (2); when s is i When the working node is selected, the device is turned on>The values are shown as follows:
wherein λ is an amplification factor,represents the amount of CPU resource required for an LC request of service type k, and->Indicating the memory required for an LC request of type k.
The calculation formula of the amplification coefficient lambda is as follows:
in step S4, the expression of the scheduling objective function is:
in the formula (I), the compound is shown in the specification,indicating the receiving node s from the request i To requesting execution node s j E denotes a request transport stream set, h>Indicating whether the requested transport stream f has passed the edge(s) i ,s j ) Transmission, gamma f Indicating the resource requirements for the requested transport stream f,indicating acceptance by a requesting node s m Sending to requesting executing node s j F' denotes a requestTransport stream, and f ≠ f, -, ≠ f>Indicating acceptance by the requesting node s j Sending to requesting executing node s n Request set of(s) m ,s j ) Representing by a node s m To node s j Side of (d)>Representing a node s i And node s j Communication delay therebetween, c i,j Representing a node s i And node s j Requested transmission capacity of between, epsilon k Representing diagram structures>Is taken over, is>Representing diagram structures>Is taken on>Representing a node s i The supply-demand relationship between the amount of requests that can be handled and the number of LC requests of type k of service received.
In step S6, the graph structure is mapped by using a graph neural networkThe encoding to obtain the encoded feature vector comprises the following steps:
and ii, carrying out aggregation operation on the neighbor nodes to obtain the coding feature vector of each node.
The calculation formula of the feature vector is as follows:
in the formula (I), the compound is shown in the specification,graph structure representing a service type of k>Node s in i σ denotes an activation function, W denotes a weight parameter, and>map structure ^ representing service type k-1>Node s in i Is combined with the feature vector of (4), is combined with the feature vector of (4)>Graph structure representing a service type of k>Node s in j Based on the aggregated feature vector of (a), based on the sum of the feature vectors, and based on the sum of the feature vectors>Representing a node s i Is determined.
The invention has the beneficial effects that:
the request is efficiently and reasonably processed through coordination and cooperation of the agile distributed scheduling algorithm with low resource cost and expenditure and the intelligent self-adaptive centralized scheduling algorithm, a customized dynamic request scheduling strategy of a mixed scheduling architecture is designed to deal with delay sensitive services and offline batch processing services in the cloud game, and positive influence can be generated on cloud game development under cloud edge cooperation; the method ensures the service quality of delay sensitive service requests such as game rendering, game database synchronization and the like in the cloud game, and simultaneously optimizes the throughput of offline batch processing services such as game log acquisition, user portrait data analysis and the like for a long time, thereby providing a stable game environment for users, ensuring the operation of the game service and further promoting the development of the cloud game.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic diagram of a scheduling architecture in a cloud-edge collaborative cloud game scenario.
Fig. 2 is a distributed scheduling algorithm for delay-sensitive service requests.
Fig. 3 is a centralized scheduling algorithm for offline batch service requests.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art based on the embodiments of the present invention without inventive step, are within the scope of the present invention.
A request dynamic scheduling method under a cloud-edge collaborative cloud game scene comprises the following steps:
s1, as shown in FIG. 1, a cloud edge cluster system comprising a cloud center and a plurality of edge clusters is constructed, wherein each edge cluster comprises a main node for receiving a service request and a plurality of working nodes for processing the service request;
in the cloud edge cluster system, a cloud center is connected with an edge cluster and the edge cluster through a wide area network, and a main node is connected with a working node and the working node is connected with the working node through a local area network. Each master node periodically collects node state information of working nodes in the cluster, and periodically synchronizes and shares the node state information with other master nodes so as to share the node state information among the master nodes and store the node state information in a database of each node. The node state information comprises idle CPUs, idle memories, task processing conditions and the like. The main node is a decision communication node on the edge cluster, the service request of the cloud game reaching the edge cluster comprises LC service and BE service which are received by the main node as edge access points, the main node executes a distributed request scheduling strategy to the received LC service to determine a target working node, and the cloud executes a centralized request scheduling strategy to the BE service to determine the target working node. The working nodes are request processing nodes on the edge cluster, specific service instances are deployed on the working nodes, the main node forwards the requests to target working nodes of the corresponding edge cluster according to a scheduling strategy, the working nodes process the requests and return processing results to the main node, and the main node forwards the processing results to users. Furthermore, the processing of each request will consume the computational, storage, and bandwidth resources of the node.
Collective adoption of edge clustersIndicate wherein>Representing one of the set of edge clusters. For each edge cluster b by M b A node, wherein, a main node, M b 1 working node, i.e. a set of nodes that can be represented as ≧ or { (R) } for one edge cluster b>
S2, judging the types of all received service requests by the main node of each edge cluster, forwarding the requests to a cloud center if the requests are BE services, executing the step S6, confirming the number of LC services received by the main node of the edge cluster if the requests are LC services, and executing the step S3;
in this embodiment, the service request is a BE service and an LC service, and the BE service and the LC service both include services of a plurality of service types, where the service types include game rendering, game database synchronization, game log acquisition, user portrait data analysis, and the like, and the service types include game rendering, game database synchronization, game log acquisition, and user portrait data analysisRepresenting a collection of service types.
S3, judgingIf so, construct a graph structure based on node status and request information>Otherwise, randomly selecting Q from all the received LC services ed-handle,b Multiple requests constitute a first set of requests>The remaining LC service constitutes a second request set +>Assembling/withholding a first request based on service type>And a second set of requests +>Respectively construct a graph structure>And a map structure->Wherein it is present>Represents the total number of pending LC services, Q, received by the master node of edge cluster b ed-handle,b The total number of requests which can be processed by all the working nodes in the edge cluster b is represented, and k represents the service type;
as shown in fig. 2, for the service type isThe edge cluster is provided with a plurality of LC instances, and a single instance only processes service requests of a certain service type. For each service type>Building a graph structure System information about service type k for an edge cluster defined to be maintained by a master node on the edge cluster, wherein it is present>Is a set of nodes, ε k The node information reflects the occupation amount of k service type resources and the request quantity information, and the node information reflects the connection, delay condition and link request capacity among the nodes. Wherein, for any one side(s) i ,s j )∈ε k All have->And &>
For each nodeCorresponds to a set of node attributes>Wherein it is present>Representing a node s i The maximum amount of CPU resources allocated to the service instance corresponding to the LC request of service type k, device for selecting or keeping>Representing a node s i CPU resource availability, based on a service instance assigned to an LC request of service type k>Representing a node s i Maximum memory allocated for the service instance corresponding to the LC request of service type k, based on the number of times the service instance is granted>Representing a node s i Memory availability allocated for the service instance corresponding to the LC request of service type k>Representing a node s i The supply-demand relation between the amount of the processed requests and the amount of the received LC requests with the service type k reflects the node s i Whether to process the received LC request with service type k when the node s i When the master node is present, it indicates that the master node is present>For each LC request to be distributed, the master node only participates in the distribution scheduling of the request and does not participate in the processing of the request, and for the master node, the node attribute of the master node isAt this time->When node s i In the case of a working node, the service container representing the node may carry @>An LC request, whose calculation formula at this time is:
in the formula (I), the compound is shown in the specification,represents the amount of CPU resources required for an LC request of service type k, and ` the `>Indicating the memory required for an LC request of type k.
For each edge(s) i ,s j )∈ε k All correspond to a set of edge attributesWherein it is present>Representing a node s i And node s j Communication delay therebetween, c i,j Representing a node s i And node s j Requested transmission capacity in between.
For the second request setThe established pattern structure->The difference is that the node attribute is When s is i When it is a master node, in combination with a receiver>Indicating that the second request set +>At node s i The requested number of (2); when s is i When it is a working node, it is on or off>The values are shown as follows:
wherein λ is an amplification factor for ensuring the structure of the mapThe total number of requests receivable by the middle node and the second set of requestsThe calculation formula of (1) is as follows:
picture structureHas the same edge attribute as the graph structure>The edge attribute of (2) is not described herein in detail. When/is>Indicate the total number of LC services pending at that time @>Less than or equal to the number Q of requests which can be carried by the working nodes in the edge cluster ed-handle,b Therefore, the free resources can satisfy the requirements of all pending requests. When +>The total number of LC services pending at that time->The quantity of the requests Q which can be carried by the working nodes in the edge cluster is larger than ed-handle,b In this case, the free resources cannot meet the requirements of all pending requests. Therefore, all LC service requests received by the master node in each edge cluster are divided into two parts to respectively construct a graph structure for solving. First request set +>Has a number of requests of->Which is equal to Q ed-handle,b Aggregate ≦ for the first request>In other words, the free resources of the worker node can still meet the needs of all requests.
S4, constructing a scheduling objective function of the LC service by taking the maximized number of the transmitted LC services and the minimized transmission time delay of the LC services as targets;
the expression of the scheduling objective function is as follows:
in the formula (I), the compound is shown in the specification,indicating the receiving node s from the request i To requesting execution node s j In between, based on the number of requests in the group, and based on the number of requests in the group>Denotes a request transport stream set, E denotes an edge set of the corresponding graph structure, corresponding to ε k ,/>Indicating whether the requested transport stream f has passed the edge(s) i ,s j ) Transmission, gamma f Indicates a resource requirement requesting transport stream f>Indicating acceptance by a requesting node s m Sending to requesting executing node s j F' represents the requested transport stream, and f ≠ f, -, and/or>Indicating acceptance by the requesting node s j Sending to requesting executing node s n Request set of(s) m ,s j ) Represented by node s m To node s j Of (c) is performed.
The constraint condition a indicates that the sum of the resources of the requests for constraining the transmission of each link cannot exceed the upper limit of the transmission capacity of the link request, the constraint condition b indicates that the number of the requests received by each node cannot exceed the processing capacity of the node, and the constraint condition c indicates that the number of the requests sent by each node cannot exceed the sum of the number of the initially owned requests and the number of the received requests.
Requesting whether transport stream f passes an edge(s) i ,s j ) Transmission ofThe expression of (c) is:
the LC service requests arrive dynamically and randomly, and the distributed scheduling policy decides to forward immediately after receiving the LC requests to avoid additional latency delays.
S5, inputting the scheduling objective function obtained in the step S4 and the graph structure obtained in the step S3 into an OR-Tools solver to generate a distributed scheduling decision, and transmitting the LC service to the corresponding working node by the main node according to the distributed scheduling decision;
and the scheduling decision is a scheduling path, and each LC request to be processed according to the scheduling path is distributed to the corresponding target working node for processing.
S6, the cloud center constructs a graph structure according to the received request information of all BE services and the node information of the edge clusterMethod for combining Graph structures with a Graph Neural Network (GNN)>Coding to obtain a coded feature vector; />
Unlike a distributed scheduling mode for handling LC requests, the BE request is performed on the cloud center in a centralized scheduling mode. Each Worker node on the cloud edge cluster is only provided with a public container ring of BE serviceAnd (4) environmental conditions.Wherein it is present>Is a set of nodes and ε' is a set of edges.
For nodeCPU resource whose attributes comprise a node that can BE used to process a BE service>Memory resourceMaximum CPU resource->Maximum memory resource->The CPU and memory resource requirements corresponding to a request for a BE service are expressed as ≥ er>For a side->Whose attributes include the connection delay between nodes>And requested transmission capacity c of the link i,j 。
The graph structure is obtained by utilizing a graph neural networkThe method for coding to obtain the coded feature vector comprises the following steps:
for theSampling neighbor nodes, setting a fixed sampling number p for improving the calculation efficiency, and defining a neighbor indicator h(s) i ,s j ) The following formula:
further, for node s i When neighbor sampling is carried out: if it isSampling with the replacement is carried out until p neighbor nodes are selected; if/or>Then sampling without putting back is carried out until p neighbor nodes are selected. Furthermore, for node s i Is defined as ≥ the set of neighbor nodes>
ii, carrying out aggregation operation on the neighbor nodes to obtain a coding feature vector of each node;
after selecting the neighbor nodes, the nodes are subjected to aggregation operation. Define L ∈ {0,1, ·, L } to represent the polymerization number index, and set the total polymerization number L =2. Furthermore, a node s is defined i The feature vector at the first aggregation isThe attribute information of the neighbor nodes and edges of the node in the graph structure is shown, and the expression is as follows:
in the formula (I), the compound is shown in the specification,node s in graph structure representing service type k i Aggregated feature vector σ represents an activation function, W represents a weight parameter, and->Node s in graph structure representing service type k-1 i The aggregated feature vector +>Node s in graph structure representing service type k-1 j The aggregated feature vector +>Representing a node s i Is determined.
In order to deal with the state information of the system at high latitude, the centralized scheduling algorithm of the BE request introduces the graph neural network, so that the state characteristics of the system can BE better extracted, the training speed of deep reinforcement learning is increased, and the learning capability of the deep reinforcement learning is improved.
And S7, inputting the coded feature vector into a neural network, acquiring a BE service scheduling decision by using an A2C algorithm and taking the maximum total throughput of the BE service as a target as a reward function, and scheduling the BE service to a working node of a target edge cluster by the cloud center according to the BE service scheduling decision for processing.
The expression of the scheduling objective function of the BE service is as follows:
wherein φ 'represents the total throughput of BE service, q' b,t BE service completed on edge cluster b at time tThe number of the cells.
As shown in fig. 3, the neural network of the algorithm of the advertisement Actor-Critic (A2C) is composed of an action network (operator) and an evaluation network (Critic). Wherein the operator network will target a given system stateGenerating a decision action a t The critic network is used for evaluating and guiding decision-making actions of the operator network. Taking all the coded feature vectors as ^ or ^>And input into the operator network to obtain the output action a t According to action a t Can know the BE service can BE scheduled to which working node of which edge cluster, and the cloud center can schedule the BE service according to the a t Scheduling BE requests onto target clusters and calculating a reward value r t And storing the sample in a memory return visit pool. When the number of the memory return visit pool samples reaches a preset threshold value alpha, the critic network randomly extracts alpha samples from the pool for training, and updates the network parameter W e 。
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (8)
1. A request dynamic scheduling method under a cloud-edge collaborative cloud game scene is characterized by comprising the following steps:
s1, a cloud edge cluster system comprising a cloud center and a plurality of edge clusters is constructed, wherein each edge cluster comprises a main node for receiving a service request and a plurality of working nodes for processing the service request;
s2, judging the types of all received service requests by the main node of each edge cluster, forwarding the requests to a cloud center if the requests are BE services, executing the step S6, confirming the number of LC services received by the main node of the edge cluster if the requests are LC services, and executing the step S3;
s3, judgingIf so, construct a graph structure based on node status and request information>Otherwise, randomly selecting Q from all the received LC services ed-handle,b Multiple requests constitute a first set of requests +>The remaining LC services constitute a second request set +>Collecting and/or resolving first requests in dependence on service type>And a second set of requests +>Respectively construct a graph structure>And a map structure->Wherein it is present>Representing the total number of LC services to be processed, Q, received by the master node of edge cluster b ed-handle,b The total number of requests which can be processed by all the working nodes in the edge cluster b is represented, and k represents the service type;
s4, constructing a scheduling objective function of the LC service by taking the maximized number of the transmitted LC services and the minimized transmission time delay of the LC services as targets;
s5, the scheduling objective function obtained in the step S4 and the graph structure obtained in the step S3 are combinedAnd a map structure->The method comprises the steps that an OR-Tools solver is input to generate a distributed scheduling decision, and a main node transmits LC service to a corresponding working node for processing according to the distributed scheduling decision;
s6, the cloud center constructs a graph structure according to the received request information of all BE services and the node states of the edge clustersUtilizing a graph neural network to combine graph structures>Coding to obtain a coded feature vector;
and S7, inputting the coded feature vectors into a neural network, acquiring a BE service scheduling decision by using an A2C algorithm and taking the maximum total throughput of the BE service as a target as a reward function, and scheduling the BE service to a working node of a target edge cluster by the cloud center according to the BE service scheduling decision for processing.
2. The method for dynamically scheduling requests in the cloud-edge collaborative cloud game scenario according to claim 1, wherein in step S3, the graph structureWherein it is present>Is a set of nodes, ε k Is a set of edges, each node-> In conjunction with a node attribute of:>each side(s) of i ,s j )∈ε k Is taken to be->Represents and node +>
in the formula (I), the compound is shown in the specification,representing a node s i The maximum amount of CPU resources allocated to the service instance corresponding to the LC request of service type k, device for selecting or keeping>Representing a node s i The CPU resource availability allocated for the service instance corresponding to the LC request of service type k,representing a node s i Maximum memory allocated to service instance corresponding to LC request with service type k,/>Representing a node s i Memory availability allocated for the service instance corresponding to the LC request of service type k>Representing a node s i The supply and demand relation between the request quantity capable of being processed and the LC request quantity with the received service type k;
3. The method for dynamically scheduling requests in the cloud-edge collaborative cloud game scenario according to claim 2, wherein when a node s is present i In the case of the master node,indicates existence of->A LC request to be distributed, when node s i When the node is a working node, the node can bear the value>An LC request, whose formula is:
4. The method for dynamically scheduling requests in the cloud-edge collaborative cloud game scenario according to claim 2, wherein in step S3, the graph structureHas a node attribute of->The expression is as follows:
when s is i In the case of the master node,representing a second set of requests>At node s i The requested number of (2); when s is i In the case of a working node, the node is,the values are shown as follows:
6. the method for dynamically scheduling requests in the cloud-edge collaborative cloud game scenario according to claim 1, wherein in step S4, an expression of the scheduling objective function is:
in the formula (I), the compound is shown in the specification,indicating the receiving node s from the request i To requesting execution node s j In between, based on the number of requests in the group, and based on the number of requests in the group>Represents a request for a set of transport streams, <' > or>Indicating whether the requested transport stream f has passed the edge(s) i ,s j ) Transmission, gamma f Indicates a resource requirement that requests transport stream f, <' > is present>Indicating acceptance by a requesting node s m Sending to requesting executing node s j F' represents the requested transport stream, and f ≠ f, -, and/or>Indicating acceptance by a requesting node s j Sending to requesting executing node s n (ii) set of all requests,(s) m ,s j ) Representing by a node s m To node s j Is on the side of (4), (v) is greater than or equal to>Representing a node s i And node s j Communication delay therebetween, c i,j Representing a node s i And node s j Requested transmission capacity of between, epsilon k Representing diagram structures>Is taken over, is>Represents a diagram configuration->Is taken on>Representing a node s i The supply-demand relationship between the amount of requests that can be handled and the number of LC requests of type k of service received.
7. The method for dynamically scheduling request in the cloud-edge collaborative cloud game scenario according to claim 1, wherein in step S6, the graph structure is mapped to a graph by using a graph neural networkThe encoding to obtain the encoded feature vector comprises the following steps:
and ii, carrying out aggregation operation on the neighbor nodes to obtain the coding feature vector of each node.
8. The method for dynamically scheduling requests in the cloud-edge collaborative cloud game scenario according to claim 7, wherein a calculation formula of the feature vector is as follows:
in the formula (I), the compound is shown in the specification,graph structure representing a service type of k>Node s in i σ represents an activation function, W represents a weight parameter, and>map structure ^ representing service type k-1>Node s in i The feature vector of (2) is aggregated into a feature vector,graph structure representing a service type of k>Node s in j Is combined with the feature vector of (4), is combined with the feature vector of (4)>Representing a node s i Is determined. />
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211489010.7A CN115883661A (en) | 2022-11-25 | 2022-11-25 | Request dynamic scheduling method in cloud-edge collaborative cloud game scene |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211489010.7A CN115883661A (en) | 2022-11-25 | 2022-11-25 | Request dynamic scheduling method in cloud-edge collaborative cloud game scene |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115883661A true CN115883661A (en) | 2023-03-31 |
Family
ID=85763908
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211489010.7A Pending CN115883661A (en) | 2022-11-25 | 2022-11-25 | Request dynamic scheduling method in cloud-edge collaborative cloud game scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115883661A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160323374A1 (en) * | 2015-04-29 | 2016-11-03 | Microsoft Technology Licensing, Llc | Optimal Allocation of Dynamic Cloud Computing Platform Resources |
CN109829718A (en) * | 2019-01-30 | 2019-05-31 | 缀初网络技术(上海)有限公司 | A kind of block chain multi-layer framework and its operation method based on storage application scenarios |
CN113778677A (en) * | 2021-09-03 | 2021-12-10 | 天津大学 | SLA-oriented intelligent optimization method for cloud-edge cooperative resource arrangement and request scheduling |
WO2022021176A1 (en) * | 2020-07-28 | 2022-02-03 | 苏州大学 | Cloud-edge collaborative network resource smooth migration and restructuring method and system |
CN114116157A (en) * | 2021-10-21 | 2022-03-01 | 山东如意毛纺服装集团股份有限公司 | Multi-edge cluster cloud structure in edge environment and load balancing scheduling method |
-
2022
- 2022-11-25 CN CN202211489010.7A patent/CN115883661A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160323374A1 (en) * | 2015-04-29 | 2016-11-03 | Microsoft Technology Licensing, Llc | Optimal Allocation of Dynamic Cloud Computing Platform Resources |
CN109829718A (en) * | 2019-01-30 | 2019-05-31 | 缀初网络技术(上海)有限公司 | A kind of block chain multi-layer framework and its operation method based on storage application scenarios |
WO2022021176A1 (en) * | 2020-07-28 | 2022-02-03 | 苏州大学 | Cloud-edge collaborative network resource smooth migration and restructuring method and system |
CN113778677A (en) * | 2021-09-03 | 2021-12-10 | 天津大学 | SLA-oriented intelligent optimization method for cloud-edge cooperative resource arrangement and request scheduling |
CN114116157A (en) * | 2021-10-21 | 2022-03-01 | 山东如意毛纺服装集团股份有限公司 | Multi-edge cluster cloud structure in edge environment and load balancing scheduling method |
Non-Patent Citations (1)
Title |
---|
李国: "一种面向多类型服务的动态负载均衡算法", 《现代电子技术》, no. 12, 15 June 2017 (2017-06-15) * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110099384B (en) | Multi-user multi-MEC task unloading resource scheduling method based on edge-end cooperation | |
Liu et al. | Online computation offloading and traffic routing for UAV swarms in edge-cloud computing | |
Mebrek et al. | Efficient green solution for a balanced energy consumption and delay in the IoT-Fog-Cloud computing | |
Cui et al. | A blockchain-based containerized edge computing platform for the internet of vehicles | |
CN113778677B (en) | SLA-oriented intelligent optimization method for cloud-edge cooperative resource arrangement and request scheduling | |
CN111556516B (en) | Distributed wireless network task cooperative distribution method facing delay and energy efficiency sensitive service | |
CN113918240B (en) | Task unloading method and device | |
CN115175217A (en) | Resource allocation and task unloading optimization method based on multiple intelligent agents | |
CN113641504B (en) | Information interaction method for improving edge computing effect of multi-agent reinforcement learning | |
CN113037877A (en) | Optimization method for time-space data and resource scheduling under cloud edge architecture | |
CN115629865B (en) | Deep learning inference task scheduling method based on edge calculation | |
CN113553146A (en) | Cloud edge cooperative computing task merging and scheduling method | |
CN114938372B (en) | Federal learning-based micro-grid group request dynamic migration scheduling method and device | |
CN113190342B (en) | Method and system architecture for multi-application fine-grained offloading of cloud-edge collaborative networks | |
CN114205374B (en) | Transmission and calculation joint scheduling method, device and system based on information timeliness | |
CN117539619A (en) | Computing power scheduling method, system, equipment and storage medium based on cloud edge fusion | |
CN116939866A (en) | Wireless federal learning efficiency improving method based on collaborative computing and resource allocation joint optimization | |
CN111741069A (en) | Hierarchical data center resource optimization method and system based on SDN and NFV | |
CN115883661A (en) | Request dynamic scheduling method in cloud-edge collaborative cloud game scene | |
Cao et al. | Performance and stability of application placement in mobile edge computing system | |
CN116109058A (en) | Substation inspection management method and device based on deep reinforcement learning | |
CN117667327A (en) | Job scheduling method, scheduler and related equipment | |
CN115361453A (en) | Load fair unloading and transferring method for edge service network | |
CN115756772A (en) | Dynamic arrangement and task scheduling method and system for edge cloud mixed operation | |
Sun et al. | A resource allocation scheme for edge computing network in smart city based on attention mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |