CN115883661A - Request dynamic scheduling method in cloud-edge collaborative cloud game scene - Google Patents

Request dynamic scheduling method in cloud-edge collaborative cloud game scene

Info

Publication number
CN115883661A
CN115883661A
Authority
CN
China
Prior art keywords
node
service
request
requests
cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211489010.7A
Other languages
Chinese (zh)
Inventor
李星星
冯一诚
王晓飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pplabs Network Technology Shanghai Co ltd
Original Assignee
Pplabs Network Technology Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pplabs Network Technology Shanghai Co ltd filed Critical Pplabs Network Technology Shanghai Co ltd
Priority to CN202211489010.7A priority Critical patent/CN115883661A/en
Publication of CN115883661A publication Critical patent/CN115883661A/en
Pending legal-status Critical Current

Landscapes

  • Multi Processors (AREA)

Abstract

The invention discloses a dynamic request scheduling method for a cloud-edge collaborative cloud game scenario, which comprises the following steps: the master node determines the type of each received service request. If the request is a BE request, it is forwarded to the cloud center; the cloud center constructs a graph structure from the request information and the node states of the edge clusters, obtains a BE scheduling decision with a graph neural network and the A2C algorithm with the goal of maximizing the total BE throughput, and schedules the BE request to a worker node of the target edge cluster for processing. If the request is an LC request, the number of LC requests is determined, different graph structures are built according to this number and the requested service types, a distributed scheduling decision is generated with OR-Tools with the goals of maximizing the number of transmitted LC requests and minimizing their transmission delay, and the master node transmits each LC request to the corresponding worker node. The invention guarantees the quality of service of LC requests in cloud games while optimizing the long-term throughput of BE services.

Description

Request dynamic scheduling method in cloud-edge collaborative cloud game scene
Technical Field
The invention belongs to the technical field of cloud gaming, and in particular relates to a dynamic request scheduling method in a cloud-edge collaborative cloud game scenario.
Background
With the rapid development of 5G communication networks and the interconnection of everything, the cloud-edge collaborative architecture combines the advantages and characteristics of edge computing and cloud computing: by sinking cloud computing power, moving terminal computing power upward, and converging edge computing power, computing resources are organized into a large number of small clusters, which bring more agile services to nearby end users while reducing the traffic burden on the backbone network. With the development and popularization of high-speed networks such as 5G and optical fiber, cloud gaming has become an inevitable trend in the game industry. Supported by the cloud-edge collaborative architecture, cloud games can be developed efficiently, AI accelerates game intelligence, and a more complete cloud-edge infrastructure and a more stable game environment are provided.
According to quality-of-service requirements, the service requests in a cloud game can be divided into two types. One type is delay-sensitive (LC) services, which are related to real-time user interaction and have high requirements on delay and quality of service, such as game rendering, game database synchronization, and game performance monitoring. The other type is offline batch-processing (BE) services, which can tolerate higher running delay, support restarting failed tasks, and perform data mining and analysis in the background, such as game log collection and user-portrait data analysis. In general, an ideal scheme deploys both types of services in the cloud-edge collaborative cloud game scenario to further improve machine resource utilization. However, because the two types of services differ greatly in their characteristics, how to coordinate the request scheduling of services with very different processing characteristics in the edge cloud game scenario has become a new challenge that urgently needs to be solved.
Disclosure of Invention
To address these problems, the invention provides a dynamic request scheduling method in a cloud-edge collaborative cloud game scenario, which fully considers the characteristics of each service when scheduling service requests in the cloud-edge collaborative cloud game scenario and overcomes the inability of traditional scheduling strategies to meet these requirements. To solve the above technical problems, the invention adopts the following technical scheme:
A dynamic request scheduling method in a cloud-edge collaborative cloud game scenario comprises the following steps:
S1, constructing a cloud-edge cluster system comprising a cloud center and a plurality of edge clusters, wherein each edge cluster comprises a master node for receiving service requests and a plurality of worker nodes for processing service requests;
S2, determining, by the master node of each edge cluster, the type of every received service request; if a request is a BE request, forwarding it to the cloud center and executing step S6; if a request is an LC request, determining the number of LC requests received by the master node of the edge cluster and executing step S3;
s3, judging
Figure BDA0003964087780000011
If so, construct a graph structure based on node status and request information>
Figure BDA0003964087780000012
Otherwise, randomly selecting Q from all the received LC services ed-handle,b Multiple requests constitute a first set of requests +>
Figure BDA0003964087780000021
The remaining LC service constitutes a second request set +>
Figure BDA0003964087780000022
Assembling/withholding a first request based on service type>
Figure BDA0003964087780000023
And a second set of requests +>
Figure BDA0003964087780000024
Respectively construct a graph structure>
Figure BDA0003964087780000025
And a map structure->
Figure BDA0003964087780000026
Wherein +>
Figure BDA0003964087780000027
Represents the total number of pending LC services, Q, received by the master node of edge cluster b ed-handle,b The total number of the requests which can be processed by all the working nodes in the edge cluster b is represented, and k represents the service type;
s4, constructing a scheduling objective function of the LC service by taking the maximized number of the transmitted LC services and the minimized transmission time delay of the LC services as targets;
s5, the scheduling objective function obtained in the step S4 and the graph structure obtained in the step S3 are combined
Figure BDA0003964087780000028
And a map structure->
Figure BDA0003964087780000029
The distributed scheduling decision is generated by inputting an OR-Tools solver, and the LC service is transmitted to the corresponding working node by the main node according to the distributed scheduling decision for processing;
s6, the cloud center constructs a graph structure according to the received request information of all BE services and the node states of the edge clusters
Figure BDA00039640877800000210
Map structure->
Figure BDA00039640877800000211
Coding to obtain a coded feature vector;
S7, inputting the encoded feature vectors into the neural network, obtaining a BE scheduling decision with the A2C algorithm using the maximization of the total BE throughput as the reward function, and scheduling, by the cloud center, each BE request to a worker node of the target edge cluster for processing according to the BE scheduling decision.
In step S3, the graph structure G_k = (S_b, ε_k), wherein S_b is the node set and ε_k is the edge set; each node s_i ∈ S_b carries a set of node attributes, and each edge (s_i, s_j) ∈ ε_k carries a set of edge attributes.
The node attributes of node s_i are (w_{i,k}^max, w_{i,k}^ava, m_{i,k}^max, m_{i,k}^ava, q_{i,k}), wherein w_{i,k}^max represents the maximum CPU resource that node s_i allocates to the service instance corresponding to LC requests of service type k, w_{i,k}^ava represents the available CPU resource of that service instance, m_{i,k}^max represents the maximum memory allocated to that service instance, m_{i,k}^ava represents the available memory of that service instance, and q_{i,k} represents the supply-demand relation between the number of requests that node s_i can process and the number of LC requests of service type k that it has received.
The edge attributes of edge (s_i, s_j) are (d_{i,j}, c_{i,j}), wherein d_{i,j} represents the communication delay between node s_i and node s_j, and c_{i,j} represents the request transmission capacity between node s_i and node s_j.
When node s_i is the master node, q_{i,k} indicates that there are q_{i,k} LC requests waiting to be distributed; when node s_i is a worker node, q_{i,k} indicates the number of LC requests that the node can carry, and its calculation formula is: [formula omitted]
In the formula, w_k^req represents the CPU resource required by one LC request of service type k, and m_k^req represents the memory required by one LC request of service type k.
In step S3, the node attribute of the graph structure G'_k is q'_{i,k}, and its expression is: [formula omitted]
When s_i is the master node, q'_{i,k} represents the number of requests of the second request set held at node s_i; when s_i is a worker node, q'_{i,k} takes the value given by: [formula omitted]
In the formula, λ is an amplification factor, w_k^req represents the CPU resource required by one LC request of service type k, and m_k^req represents the memory required by one LC request of service type k.
The calculation formula of the amplification factor λ is: [formula omitted]
In step S4, the scheduling objective function and its constraints are: [formulas omitted]
In the formulas, f denotes a request transport stream from a request receiving node s_i to a request executing node s_j, F denotes the set of request transport streams, x_f^{i,j} indicates whether request transport stream f is transmitted over the edge (s_i, s_j), γ_f denotes the resource requirement of request transport stream f, f' denotes another request transport stream with f' ≠ f, F(s_m, s_j) denotes the set of requests received by request receiving node s_m and sent to request executing node s_j, F(s_j, s_n) denotes the set of requests received by request receiving node s_j and sent to request executing node s_n, (s_m, s_j) denotes the edge from node s_m to node s_j, d_{i,j} denotes the communication delay between node s_i and node s_j, c_{i,j} denotes the request transmission capacity between node s_i and node s_j, ε_k denotes the edge set of the graph structure G_k, ε'_k denotes the edge set of the graph structure G'_k, and q_{i,k} denotes the supply-demand relation between the number of requests that node s_i can process and the number of LC requests of service type k that it has received.
In step S6, encoding the graph structure G_BE with the graph neural network to obtain the encoded feature vectors comprises the following steps:
i. performing neighbor node sampling for each node in the graph structure G_BE;
ii. performing an aggregation operation over the sampled neighbor nodes to obtain the encoded feature vector of each node.
The calculation formula of the feature vector is: [formula omitted]
In the formula, h_i^l denotes the aggregated feature vector of node s_i in the graph structure G_BE after the l-th aggregation, σ denotes the activation function, W denotes the weight parameter, h_i^{l-1} denotes the feature vector of node s_i after the (l-1)-th aggregation, h_j^{l-1} denotes the aggregated feature vector of neighbor node s_j after the (l-1)-th aggregation, and N(s_i) denotes the set of sampled neighbor nodes of node s_i.
The beneficial effects of the invention are as follows:
Requests are processed efficiently and reasonably through the coordination of an agile distributed scheduling algorithm with low resource cost and overhead and an intelligent, adaptive centralized scheduling algorithm; a customized dynamic request scheduling strategy with a hybrid scheduling architecture is designed to handle the delay-sensitive services and the offline batch-processing services in cloud games, which can positively influence cloud game development under cloud-edge collaboration. The method guarantees the quality of service of delay-sensitive requests such as game rendering and game database synchronization in cloud games, and at the same time optimizes the long-term throughput of offline batch-processing services such as game log collection and user-portrait data analysis, thereby providing users with a stable game environment, guaranteeing the operation of game services, and further promoting the development of cloud gaming.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic diagram of the scheduling architecture in a cloud-edge collaborative cloud game scenario.
Fig. 2 illustrates the distributed scheduling algorithm for delay-sensitive (LC) service requests.
Fig. 3 illustrates the centralized scheduling algorithm for offline batch-processing (BE) service requests.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art based on the embodiments of the present invention without inventive step, are within the scope of the present invention.
A dynamic request scheduling method in a cloud-edge collaborative cloud game scenario comprises the following steps:
S1, as shown in Fig. 1, constructing a cloud-edge cluster system comprising a cloud center and a plurality of edge clusters, wherein each edge cluster comprises a master node for receiving service requests and a plurality of worker nodes for processing service requests;
in the cloud edge cluster system, a cloud center is connected with an edge cluster and the edge cluster through a wide area network, and a main node is connected with a working node and the working node is connected with the working node through a local area network. Each master node periodically collects node state information of working nodes in the cluster, and periodically synchronizes and shares the node state information with other master nodes so as to share the node state information among the master nodes and store the node state information in a database of each node. The node state information comprises idle CPUs, idle memories, task processing conditions and the like. The main node is a decision communication node on the edge cluster, the service request of the cloud game reaching the edge cluster comprises LC service and BE service which are received by the main node as edge access points, the main node executes a distributed request scheduling strategy to the received LC service to determine a target working node, and the cloud executes a centralized request scheduling strategy to the BE service to determine the target working node. The working nodes are request processing nodes on the edge cluster, specific service instances are deployed on the working nodes, the main node forwards the requests to target working nodes of the corresponding edge cluster according to a scheduling strategy, the working nodes process the requests and return processing results to the main node, and the main node forwards the processing results to users. Furthermore, the processing of each request will consume the computational, storage, and bandwidth resources of the node.
The set of edge clusters is denoted B, where b ∈ B represents one edge cluster in the set. Each edge cluster b consists of M_b nodes, namely one master node and M_b - 1 worker nodes, so the node set of edge cluster b can be represented as S_b = {s_1, s_2, ..., s_{M_b}}.
S2, determining, by the master node of each edge cluster, the type of every received service request; if a request is a BE request, forwarding it to the cloud center and executing step S6; if a request is an LC request, determining the number of LC requests received by the master node of the edge cluster and executing step S3;
In this embodiment, the service requests are BE requests and LC requests, and both comprise a plurality of service types, including game rendering, game database synchronization, game log collection, user-portrait data analysis, and the like; the set of service types is denoted K.
S3, judging whether Q_wait,b ≤ Q_ed-handle,b; if so, constructing a graph structure G_k from the node states and the request information; otherwise, randomly selecting Q_ed-handle,b requests from all received LC requests to form a first request set, letting the remaining LC requests form a second request set, and constructing, according to service type, a graph structure G_k for the first request set and a graph structure G'_k for the second request set; wherein Q_wait,b represents the total number of pending LC requests received by the master node of edge cluster b, Q_ed-handle,b represents the total number of requests that all worker nodes in edge cluster b can process, and k represents the service type;
As shown in Fig. 2, for each service type k ∈ K, the edge cluster deploys a plurality of LC service instances, and a single instance only processes service requests of one service type. For each service type k ∈ K, a graph structure G_k = (S_b, ε_k) is built, defined as the system information about service type k of the edge cluster maintained by the master node of that cluster, wherein S_b is the node set and ε_k is the edge set; the node information reflects the resource occupation and the request quantity for service type k, and the edge information reflects the connections between nodes, their delay, and the link request capacity. For any edge (s_i, s_j) ∈ ε_k, the delay d_{i,j} and the capacity c_{i,j} are both defined.
Each node s_i ∈ S_b corresponds to a set of node attributes (w_{i,k}^max, w_{i,k}^ava, m_{i,k}^max, m_{i,k}^ava, q_{i,k}), wherein w_{i,k}^max represents the maximum CPU resource that node s_i allocates to the service instance corresponding to LC requests of service type k, w_{i,k}^ava represents the available CPU resource of that service instance, m_{i,k}^max represents the maximum memory allocated to that service instance, m_{i,k}^ava represents the available memory of that service instance, and q_{i,k} represents the supply-demand relation between the number of requests that node s_i can process and the number of LC requests of service type k that it has received; q_{i,k} therefore reflects whether node s_i can process the received LC requests of service type k.
When node s_i is the master node, q_{i,k} represents the number of LC requests waiting to be distributed; the master node only participates in the distribution scheduling of requests and does not process them. When node s_i is a worker node, q_{i,k} represents the number of LC requests that the service container on the node can carry, and its calculation formula is: [formula omitted]
In the formula, w_k^req represents the CPU resource required by one LC request of service type k, and m_k^req represents the memory required by one LC request of service type k.
Each edge (s_i, s_j) ∈ ε_k corresponds to a set of edge attributes (d_{i,j}, c_{i,j}), wherein d_{i,j} represents the communication delay between node s_i and node s_j, and c_{i,j} represents the request transmission capacity between node s_i and node s_j.
For the second request set, the established graph structure G'_k differs in that its node attribute is q'_{i,k}. When s_i is the master node, q'_{i,k} represents the number of requests of the second request set held at node s_i; when s_i is a worker node, q'_{i,k} takes the value given by: [formula omitted]
Here λ is an amplification factor used to ensure that the total number of requests receivable by the nodes in the graph structure G'_k matches the number of requests in the second request set; its calculation formula is: [formula omitted]
The graph structure G'_k has the same edge attributes as the graph structure G_k, which are not repeated here. When Q_wait,b ≤ Q_ed-handle,b, the total number Q_wait,b of pending LC requests does not exceed the number Q_ed-handle,b of requests that the worker nodes in the edge cluster can carry, so the free resources can satisfy all pending requests. When Q_wait,b > Q_ed-handle,b, the free resources cannot satisfy all pending requests; therefore, all LC requests received by the master node of the edge cluster are divided into two parts, and a graph structure is constructed and solved for each part. The number of requests in the first request set equals Q_ed-handle,b, so for the first request set the free resources of the worker nodes can still satisfy all requests.
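A minimal sketch of the two-set split described above: when the pending LC requests exceed what the worker nodes can carry, Q_ed-handle,b of them are drawn at random for the first request set and the remainder forms the second request set. The function and variable names are illustrative.

```python
import random

def split_lc_requests(pending, q_ed_handle):
    """Split pending LC requests into the first and second request sets.

    If everything fits (len(pending) <= q_ed_handle), only one set is needed.
    Otherwise pick q_ed_handle requests at random for the first set; the
    remainder becomes the second set.
    """
    if len(pending) <= q_ed_handle:
        return list(pending), []
    idx = set(random.sample(range(len(pending)), q_ed_handle))
    first = [r for i, r in enumerate(pending) if i in idx]
    second = [r for i, r in enumerate(pending) if i not in idx]
    return first, second

# Usage: 7 pending request ids, workers can carry 4.
first, second = split_lc_requests(list(range(7)), q_ed_handle=4)
print(len(first), len(second))  # 4 3
```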
S4, constructing a scheduling objective function for the LC requests, with the goals of maximizing the number of transmitted LC requests and minimizing their transmission delay;
The scheduling objective function and its constraints are as follows: [formulas omitted]
In the formulas, f denotes a request transport stream from a request receiving node s_i to a request executing node s_j, F denotes the set of request transport streams, E denotes the edge set of the corresponding graph structure (i.e. ε_k for G_k and ε'_k for G'_k), x_f^{i,j} indicates whether request transport stream f is transmitted over the edge (s_i, s_j), γ_f denotes the resource requirement of request transport stream f, f' denotes another request transport stream with f' ≠ f, F(s_m, s_j) denotes the set of requests received by request receiving node s_m and sent to request executing node s_j, F(s_j, s_n) denotes the set of requests received by request receiving node s_j and sent to request executing node s_n, and (s_m, s_j) denotes the edge from node s_m to node s_j.
Constraint a requires that the total resources of the requests transmitted over each link do not exceed the request transmission capacity of that link; constraint b requires that the number of requests received by each node does not exceed the processing capacity of that node; constraint c requires that the number of requests sent by each node does not exceed the sum of the number of requests it initially holds and the number of requests it receives.
The indicator x_f^{i,j} equals 1 if request transport stream f is transmitted over the edge (s_i, s_j), and 0 otherwise.
the LC service requests arrive dynamically and randomly, and the distributed scheduling policy decides to forward immediately after receiving the LC requests to avoid additional latency delays.
S5, inputting the scheduling objective function obtained in step S4 and the graph structures obtained in step S3 into an OR-Tools solver to generate a distributed scheduling decision, and transmitting, by the master node, each LC request to the corresponding worker node according to the distributed scheduling decision;
The scheduling decision is a scheduling path, and each pending LC request is distributed along its scheduling path to the corresponding target worker node for processing.
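The distributed decision of step S5 can be illustrated with OR-Tools' linear solver as below. The model is a simplified assignment variant of the objective of step S4 (maximize the served LC requests, lightly penalize delay, respect per-worker capacity q and per-link capacity c); the delay weight and all identifiers are assumptions, and the patent's exact formulation is not reproduced.

```python
from ortools.linear_solver import pywraplp

def schedule_lc(requests, workers, delay, capacity, q, delay_weight=0.01):
    """Assign LC requests (all held by the master) to worker nodes.

    requests: list of request ids
    workers:  list of worker node ids
    delay:    {worker: master-to-worker delay d}
    capacity: {worker: link request-transmission capacity c}
    q:        {worker: number of requests the worker can still carry}
    """
    solver = pywraplp.Solver.CreateSolver("SCIP")
    x = {(r, w): solver.BoolVar(f"x_{r}_{w}") for r in requests for w in workers}

    # Each request is sent to at most one worker (it may stay unserved).
    for r in requests:
        solver.Add(solver.Sum([x[r, w] for w in workers]) <= 1)
    # Worker processing capacity and link capacity.
    for w in workers:
        solver.Add(solver.Sum([x[r, w] for r in requests]) <= min(q[w], capacity[w]))

    # Maximize served requests, lightly penalizing transmission delay.
    solver.Maximize(solver.Sum([x[r, w] * (1.0 - delay_weight * delay[w])
                                for r in requests for w in workers]))
    solver.Solve()
    return {r: w for r in requests for w in workers if x[r, w].solution_value() > 0.5}

# Usage: 3 requests, 2 workers.
plan = schedule_lc([0, 1, 2], ["w1", "w2"],
                   delay={"w1": 2.0, "w2": 5.0},
                   capacity={"w1": 2, "w2": 2},
                   q={"w1": 1, "w2": 2})
print(plan)
```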
S6, constructing, by the cloud center, a graph structure G_BE from the request information of all received BE requests and the node information of the edge clusters, and encoding G_BE with a graph neural network (GNN) to obtain encoded feature vectors;
Unlike the distributed scheduling mode used for LC requests, BE requests are scheduled by the cloud center in a centralized mode. Each worker node on the cloud-edge clusters is provided with only a common container environment for BE services. The graph structure is G_BE = (S', ε'), wherein S' is the node set and ε' is the edge set.
For a node s_i ∈ S', its attributes comprise the CPU resource of the node that can be used to process BE requests, the corresponding memory resource, the maximum CPU resource, and the maximum memory resource; the CPU and memory requirements of one BE request are denoted w_BE^req and m_BE^req. For an edge (s_i, s_j) ∈ ε', its attributes include the connection delay d_{i,j} between the nodes and the request transmission capacity c_{i,j} of the link.
The graph structure G_BE is encoded with the graph neural network to obtain the encoded feature vectors as follows:
i. Neighbor node sampling is performed for each node in the graph structure G_BE.
To sample neighbor nodes, a fixed sample size p is set to improve computational efficiency, and a neighbor indicator h(s_i, s_j) is defined, which equals 1 if node s_j is a neighbor of node s_i and 0 otherwise. When neighbor sampling is performed for node s_i: if node s_i has fewer than p neighbors, sampling with replacement is performed until p neighbor nodes are selected; otherwise, sampling without replacement is performed until p neighbor nodes are selected. The set of sampled neighbor nodes of node s_i is denoted N(s_i).
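A minimal sketch of the fixed-size neighbor sampling described above, with an assumed adjacency-dictionary representation: sampling is done with replacement when a node has fewer than p neighbors and without replacement otherwise.

```python
import random

def sample_neighbors(adjacency, node, p):
    """Return p sampled neighbors of `node`.

    adjacency: {node: list of neighbor nodes}
    Uses sampling with replacement when the node has fewer than p neighbors,
    and sampling without replacement otherwise.
    """
    neigh = adjacency[node]
    if len(neigh) < p:
        return random.choices(neigh, k=p)      # with replacement
    return random.sample(neigh, p)             # without replacement

adjacency = {"s1": ["s2", "s3"], "s2": ["s1"], "s3": ["s1"]}
print(sample_neighbors(adjacency, "s1", p=3))  # e.g. ['s2', 's3', 's2']
```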
ii. An aggregation operation is performed over the sampled neighbor nodes to obtain the encoded feature vector of each node.
After the neighbor nodes are selected, the aggregation operation is performed. Define l ∈ {0, 1, ..., L} as the aggregation round index, and set the total number of aggregation rounds to L = 2. The feature vector of node s_i at the l-th aggregation is denoted h_i^l; it summarizes the attribute information of the node and of its neighbor nodes and edges in the graph structure, and its expression is: [formula omitted]
In the formula, h_i^l denotes the aggregated feature vector of node s_i after the l-th aggregation, σ denotes the activation function, W denotes the weight parameter, h_i^{l-1} denotes the feature vector of node s_i after the (l-1)-th aggregation, h_j^{l-1} denotes the aggregated feature vector of neighbor node s_j after the (l-1)-th aggregation, and N(s_i) denotes the set of sampled neighbor nodes of node s_i.
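The aggregation update can be sketched with NumPy as below, using a GraphSAGE-style mean aggregator: each node's new vector is an activation of a learned linear map applied to the concatenation of its previous vector and the mean of its sampled neighbors' previous vectors. The mean aggregator, the ReLU activation, and the shared weight matrix across rounds are assumptions; the patent shows the exact update only as an image.

```python
import numpy as np

def aggregate_once(h_prev, neighbors, W):
    """One aggregation round (l-1 -> l) over all nodes.

    h_prev:    {node: previous feature vector, shape (d,)}
    neighbors: {node: list of sampled neighbor nodes N(s_i)}
    W:         weight matrix of shape (d_out, 2 * d)
    """
    h_new = {}
    for node, h_i in h_prev.items():
        neigh_mean = np.mean([h_prev[j] for j in neighbors[node]], axis=0)
        z = W @ np.concatenate([h_i, neigh_mean])
        h_new[node] = np.maximum(z, 0.0)        # ReLU activation
    return h_new

# Usage: 3 nodes with 4-dimensional initial attributes, L = 2 rounds.
rng = np.random.default_rng(0)
h = {n: rng.normal(size=4) for n in ["s1", "s2", "s3"]}
neighbors = {"s1": ["s2", "s3"], "s2": ["s1", "s1"], "s3": ["s1", "s2"]}
W = rng.normal(size=(4, 8))
for _ in range(2):                               # total aggregation rounds L = 2
    h = aggregate_once(h, neighbors, W)
print(h["s1"].shape)                             # (4,)
```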
To handle the high-dimensional state information of the system, the centralized scheduling algorithm for BE requests introduces the graph neural network, which better extracts the state features of the system, accelerates the training of deep reinforcement learning, and improves its learning capability.
S7, inputting the encoded feature vectors into the neural network, obtaining a BE scheduling decision with the A2C algorithm using the maximization of the total BE throughput as the reward function, and scheduling, by the cloud center, each BE request to a worker node of the target edge cluster for processing according to the BE scheduling decision.
The scheduling objective function of the BE requests is: [formula omitted]
Here φ' represents the total throughput of the BE services, and q'_{b,t} represents the number of BE requests completed on edge cluster b at time t.
As shown in Fig. 3, the neural network of the Advantage Actor-Critic (A2C) algorithm consists of an action network (actor) and an evaluation network (critic). For a given system state s_t, the actor network generates a decision action a_t, and the critic network evaluates and guides the decisions of the actor network. All encoded feature vectors are taken as the state s_t and fed into the actor network to obtain the output action a_t; action a_t determines to which worker node of which edge cluster the BE request is scheduled. The cloud center schedules the BE request onto the target cluster according to a_t, computes a reward value r_t, and stores the sample in an experience replay pool. When the number of samples in the replay pool reaches a preset threshold α, the critic network randomly draws α samples from the pool for training and updates the network parameters W_e.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (8)

1. A dynamic request scheduling method in a cloud-edge collaborative cloud game scenario, characterized by comprising the following steps:
S1, constructing a cloud-edge cluster system comprising a cloud center and a plurality of edge clusters, wherein each edge cluster comprises a master node for receiving service requests and a plurality of worker nodes for processing service requests;
S2, determining, by the master node of each edge cluster, the type of every received service request; if a request is a BE request, forwarding it to the cloud center and executing step S6; if a request is an LC request, determining the number of LC requests received by the master node of the edge cluster and executing step S3;
S3, judging whether Q_wait,b ≤ Q_ed-handle,b; if so, constructing a graph structure G_k from the node states and the request information; otherwise, randomly selecting Q_ed-handle,b requests from all received LC requests to form a first request set, letting the remaining LC requests form a second request set, and constructing, according to service type, a graph structure G_k for the first request set and a graph structure G'_k for the second request set; wherein Q_wait,b represents the total number of pending LC requests received by the master node of edge cluster b, Q_ed-handle,b represents the total number of requests that all worker nodes in edge cluster b can process, and k represents the service type;
S4, constructing a scheduling objective function for the LC requests, with the goals of maximizing the number of transmitted LC requests and minimizing their transmission delay;
S5, inputting the scheduling objective function obtained in step S4 and the graph structures G_k and G'_k obtained in step S3 into an OR-Tools solver to generate a distributed scheduling decision, and transmitting, by the master node, each LC request to the corresponding worker node for processing according to the distributed scheduling decision;
S6, constructing, by the cloud center, a graph structure G_BE from the request information of all received BE requests and the node states of the edge clusters, and encoding the graph structure G_BE with a graph neural network to obtain encoded feature vectors;
S7, inputting the encoded feature vectors into the neural network, obtaining a BE scheduling decision with the A2C algorithm using the maximization of the total BE throughput as the reward function, and scheduling, by the cloud center, each BE request to a worker node of the target edge cluster for processing according to the BE scheduling decision.
2. The method for dynamically scheduling requests in the cloud-edge collaborative cloud game scenario according to claim 1, wherein in step S3, the graph structure G_k = (S_b, ε_k), wherein S_b is the node set and ε_k is the edge set; each node s_i ∈ S_b carries a set of node attributes, and each edge (s_i, s_j) ∈ ε_k carries a set of edge attributes;
the node attributes of node s_i are (w_{i,k}^max, w_{i,k}^ava, m_{i,k}^max, m_{i,k}^ava, q_{i,k}), wherein w_{i,k}^max represents the maximum CPU resource that node s_i allocates to the service instance corresponding to LC requests of service type k, w_{i,k}^ava represents the available CPU resource of that service instance, m_{i,k}^max represents the maximum memory allocated to that service instance, m_{i,k}^ava represents the available memory of that service instance, and q_{i,k} represents the supply-demand relation between the number of requests that node s_i can process and the number of LC requests of service type k that it has received;
the edge attributes of edge (s_i, s_j) are (d_{i,j}, c_{i,j}), wherein d_{i,j} represents the communication delay between node s_i and node s_j, and c_{i,j} represents the request transmission capacity between node s_i and node s_j.
3. The method for dynamically scheduling requests in the cloud-edge collaborative cloud game scenario according to claim 2, wherein when node s_i is the master node, q_{i,k} indicates that there are q_{i,k} LC requests waiting to be distributed; when node s_i is a worker node, q_{i,k} indicates the number of LC requests that the node can carry, and its calculation formula is: [formula omitted]
In the formula, w_k^req represents the CPU resource required by one LC request of service type k, and m_k^req represents the memory required by one LC request of service type k.
4. The method for dynamically scheduling requests in the cloud-edge collaborative cloud game scenario according to claim 2, wherein in step S3, the node attribute of the graph structure G'_k is q'_{i,k}, and its expression is: [formula omitted]
When s_i is the master node, q'_{i,k} represents the number of requests of the second request set held at node s_i; when s_i is a worker node, q'_{i,k} takes the value given by: [formula omitted]
In the formula, λ is an amplification factor, w_k^req represents the CPU resource required by one LC request of service type k, and m_k^req represents the memory required by one LC request of service type k.
5. The method for dynamically scheduling requests in the cloud-edge collaborative cloud game scenario according to claim 4, wherein the calculation formula of the amplification factor λ is: [formula omitted]
6. The method for dynamically scheduling requests in the cloud-edge collaborative cloud game scenario according to claim 1, wherein in step S4, the scheduling objective function and its constraints are: [formulas omitted]
In the formulas, f denotes a request transport stream from a request receiving node s_i to a request executing node s_j, F denotes the set of request transport streams, x_f^{i,j} indicates whether request transport stream f is transmitted over the edge (s_i, s_j), γ_f denotes the resource requirement of request transport stream f, f' denotes another request transport stream with f' ≠ f, F(s_m, s_j) denotes the set of requests received by request receiving node s_m and sent to request executing node s_j, F(s_j, s_n) denotes the set of requests received by request receiving node s_j and sent to request executing node s_n, (s_m, s_j) denotes the edge from node s_m to node s_j, d_{i,j} denotes the communication delay between node s_i and node s_j, c_{i,j} denotes the request transmission capacity between node s_i and node s_j, ε_k denotes the edge set of the graph structure G_k, ε'_k denotes the edge set of the graph structure G'_k, and q_{i,k} denotes the supply-demand relation between the number of requests that node s_i can process and the number of LC requests of service type k that it has received.
7. The method for dynamically scheduling requests in the cloud-edge collaborative cloud game scenario according to claim 1, wherein in step S6, encoding the graph structure G_BE with the graph neural network to obtain the encoded feature vectors comprises the following steps:
i. performing neighbor node sampling for each node in the graph structure G_BE;
ii. performing an aggregation operation over the sampled neighbor nodes to obtain the encoded feature vector of each node.
8. The method for dynamically scheduling requests in the cloud-edge collaborative cloud game scenario according to claim 7, wherein the calculation formula of the feature vector is: [formula omitted]
In the formula, h_i^l denotes the aggregated feature vector of node s_i in the graph structure G_BE after the l-th aggregation, σ denotes the activation function, W denotes the weight parameter, h_i^{l-1} denotes the feature vector of node s_i after the (l-1)-th aggregation, h_j^{l-1} denotes the aggregated feature vector of neighbor node s_j after the (l-1)-th aggregation, and N(s_i) denotes the set of sampled neighbor nodes of node s_i.
CN202211489010.7A 2022-11-25 2022-11-25 Request dynamic scheduling method in cloud-edge collaborative cloud game scene Pending CN115883661A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211489010.7A CN115883661A (en) 2022-11-25 2022-11-25 Request dynamic scheduling method in cloud-edge collaborative cloud game scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211489010.7A CN115883661A (en) 2022-11-25 2022-11-25 Request dynamic scheduling method in cloud-edge collaborative cloud game scene

Publications (1)

Publication Number Publication Date
CN115883661A true CN115883661A (en) 2023-03-31

Family

ID=85763908

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211489010.7A Pending CN115883661A (en) 2022-11-25 2022-11-25 Request dynamic scheduling method in cloud-edge collaborative cloud game scene

Country Status (1)

Country Link
CN (1) CN115883661A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160323374A1 (en) * 2015-04-29 2016-11-03 Microsoft Technology Licensing, Llc Optimal Allocation of Dynamic Cloud Computing Platform Resources
CN109829718A (en) * 2019-01-30 2019-05-31 缀初网络技术(上海)有限公司 A kind of block chain multi-layer framework and its operation method based on storage application scenarios
WO2022021176A1 (en) * 2020-07-28 2022-02-03 苏州大学 Cloud-edge collaborative network resource smooth migration and restructuring method and system
CN113778677A (en) * 2021-09-03 2021-12-10 天津大学 SLA-oriented intelligent optimization method for cloud-edge cooperative resource arrangement and request scheduling
CN114116157A (en) * 2021-10-21 2022-03-01 山东如意毛纺服装集团股份有限公司 Multi-edge cluster cloud structure in edge environment and load balancing scheduling method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI GUO: "A dynamic load balancing algorithm for multiple types of services" (一种面向多类型服务的动态负载均衡算法), Modern Electronics Technique (现代电子技术), no. 12, 15 June 2017 (2017-06-15) *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination