CN108965024B - Virtual network function scheduling method based on prediction for 5G network slice - Google Patents


Info

Publication number
CN108965024B
CN108965024B (application CN201810863512.9A)
Authority
CN
China
Prior art keywords
queue
network
service
slice
scheduling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810863512.9A
Other languages
Chinese (zh)
Other versions
CN108965024A (en)
Inventor
唐伦
周钰
马润琳
肖娇
赵国繁
陈前斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Benxi Steel Group Information Automation Co ltd
Shenzhen Wanzhida Technology Transfer Center Co ltd
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN201810863512.9A priority Critical patent/CN108965024B/en
Publication of CN108965024A publication Critical patent/CN108965024A/en
Application granted granted Critical
Publication of CN108965024B publication Critical patent/CN108965024B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08: Configuration management of networks or network elements
    • H04L41/0803: Configuration setting
    • H04L41/0823: Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
    • H04L41/12: Discovery or management of network topologies
    • H04L41/14: Network analysis or design
    • H04L41/145: Network analysis or design involving simulating, designing, planning or modelling of a network
    • H04L41/147: Network analysis or design for predicting network behaviour

Abstract

The invention relates to a prediction-based virtual network function scheduling method for 5G network slices, belonging to the field of mobile communication. The method specifically comprises the following steps: for the service-function-chain characteristic that service traffic changes dynamically, establishing a delay-based service function chain queue model; establishing a multi-queue cache model, and determining the priority of slice requests and the lowest service rate to be provided according to the size of each slice service queue at different moments; discretizing time into a series of consecutive time windows, taking the queue information within the windows as training data set samples, and establishing a prediction-based traffic perception model; and, according to the predicted size of each slice service queue and the corresponding lowest service rate, searching for a VNF scheduling method under the resource constraint that the slice service queue cache does not overflow. The invention realizes online mapping of network slices, reduces the overall average scheduling delay of multiple network slices, and improves network service performance.

Description

Virtual network function scheduling method based on prediction for 5G network slice
Technical Field
The invention belongs to the technical field of mobile communication, and relates to a virtual network function scheduling method based on prediction for 5G network slices.
Background
Network Function Virtualization (NFV) runs on general-purpose server hardware and provides network functions in software form, giving operators strong support for flexibly deploying future network services and rapidly adjusting network structures. In particular, the related Service Function Chaining (SFC) technique differs from conventional network function implementations: it creates more flexible and dynamic network services and meets more diverse requirements. A service function chain is an ordered set of Virtual Network Functions (VNFs); traffic flows pass through multiple VNFs in sequence according to a designated policy to process network traffic on demand. With the growth in terminal devices and network applications, network traffic is increasing dramatically, and guaranteeing quality of service (QoS) has become an urgent issue for service providers, with end-to-end delay and bandwidth as two basic QoS attributes. Different VNF scheduling orders can provide the same service to the user; however, different scheduling choices change the resource configuration of the VNFs in a service function chain and thereby affect the chain's end-to-end delay. Therefore, how to combine a state monitoring mechanism with the changing network environment so as to reasonably optimize the VNF scheduling relationship, the virtual resource configuration, and the balance between resource supply and demand, reducing the virtual network function scheduling delay while guaranteeing QoS and improving resource utilization, is one of the key problems to be solved by a resource management and scheduling mechanism in 5G network slicing.
Among prior work on end-to-end delay in SFC deployment, most studies solve the resource scheduling problem only within a single scheduling period, ignoring the queue backlog caused by data accumulation when service requests vary in the time domain. Meanwhile, to counter the possible lag in the VNF scheduling and virtual resource configuration processes, a resource demand prediction mechanism can be used to monitor the network state. Research has shown that neural network techniques can predict the relationship between resource features and resource demand well, but such methods have rarely been applied to the combined problem of resource demand prediction and virtual network function scheduling. The long short-term memory network (LSTM) is one of the classic methods of deep learning: a model improved from the RNN that can be used for time-series prediction and analysis. It has a strong ability to fit data features; by automatically extracting resource-demand features from large amounts of training data, it mines the most essential characteristics of the data, so its prediction accuracy exceeds that of traditional statistical models. As an improvement over the RNN, it is also better suited to handling long-range dependence.
Based on the advantages, the invention adopts the long-short term memory network to predict the minimum demand of the service function chain on the resources in real time. Based on the prediction result, a dynamic scheduling and resource allocation scheme of a service function chain VNF is provided, and a maximum and minimum ant colony algorithm is introduced to realize dynamic deployment of a plurality of service function chains.
Disclosure of Invention
In view of this, the present invention provides a prediction-based virtual network function scheduling method for 5G network slices. The method monitors the network state through a prediction mechanism and predicts the minimum resource requirement of a service function chain from the queue information features of the network slice; based on this result, it provides a dynamic scheduling and resource allocation scheme for the service function chain's VNFs, schedules underlying resources in the time dimension without reserving underlying network resources in advance, finds the communication path on which the virtual network can obtain the maximum resources, realizes online mapping of network slices, and reduces the overall average scheduling delay of multiple network slices.
In order to achieve the purpose, the invention provides the following technical scheme:
a virtual network function scheduling method based on prediction for 5G network slices specifically comprises the following steps:
S1: in the 5G network slicing application scenario, for the service-function-chain characteristic that service traffic changes dynamically, establish a delay-based network model of the service function chain queue, a service function chain queue model, and a multi-queue delay model;
S2: establish a multi-queue cache model and, to prevent queue data loss when cache space is limited, determine the priority of slice requests and the lowest service rate to be provided according to the size of each slice service queue at different moments;
S3: discretize time into a series of consecutive time windows, take the queue information within the windows as training data set samples, and establish a prediction-based traffic perception model;
S4: according to the predicted size of each slice service queue and the corresponding lowest service rate, search for a VNF scheduling method under the resource constraint that the slice service queue cache does not overflow.
Further, in step S1, the network model of the delay-based service function chain queue is:
the virtual network topology is represented by a weighted undirected graph G ═ V, E, where V represents a set of virtual nodes and E represents a set of virtual links; b ismRepresenting the total output link bandwidth of node m, shared by the virtual links connected to that node, for a network slice SiThe set of virtual network functions handling the service request is denoted as Fi={fi1,fij,...fiJ},i∈[1,|S|]z,j∈[1,|Fi|]zWhere S denotes the set of all network slices and J denotes FiThe number of middle VNFs; for the VNFs that make up the service function chain, denoted by f, where fijRepresenting a network slice SiThe jth VNF that needs to be scheduled; order to
Figure BDA0001750315220000021
Representing the ability to perform a virtualized network function fijOf a virtual node set, wherein
Figure BDA0001750315220000031
Further, in step S1, the service function chain queue model is:
let Γ ═ 1.·, T.., T } denote the set of timeslots over which the network operates, where the duration of each timeslot T is defined as Ts(ii) a Thus in time slot t, f is performedijThe first virtual link connected with the node is allocated with bandwidth resources
Figure BDA0001750315220000032
Represents; order to
Figure BDA0001750315220000033
Represents a slice SiNode execution of f within time slot tijThe actual service rate provided; qi(t) denotes a slice S within a time slot tiThe queue length of (a), i.e. the number of packets waiting to be transmitted;
assuming that each slice leases a corresponding number of cache resources for caching a corresponding one of the traffic data, order A for each queuei(t) represents the arrival process of the data packet, assuming packet arrival process A due to the randomness of the data generated by the aperiodic application of the virtual network useri(t) compliance parameter is λiThe packet arrival processes of all users are distributed independently in different scheduling time slots, i.e. the successive arrival time intervals follow mutually independent lambdaiA negative exponential distribution of; let Mi(t) represents the packet size, assuming that the packet size follows an average of
Figure BDA0001750315220000034
Is distributed exponentially, the average processing rate of the data packets is
Figure BDA0001750315220000035
The queue length update process is therefore represented as:
Figure BDA0001750315220000036
wherein the content of the first and second substances,
Figure BDA0001750315220000037
indicating the number of data packets processed in time slot t.
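As a minimal sketch, the per-slot queue update can be simulated as follows; the arrival rate and per-slot service capacity used here are illustrative values, not parameters from the patent.

```python
import random

def queue_update(q_len, arrivals, departures):
    # Q_i(t+1) = max[Q_i(t) - D_i(t), 0] + A_i(t)
    return max(q_len - departures, 0) + arrivals

def simulate_queue(lam=4.0, capacity=5, slots=500, seed=1):
    """Evolve one slice queue with Poisson(lam) packet arrivals per slot
    and a fixed service capacity of `capacity` packets per slot."""
    rng = random.Random(seed)
    q, history = 0, []
    for _ in range(slots):
        # draw Poisson arrivals by counting exponential inter-arrival gaps
        arrivals, t = 0, rng.expovariate(lam)
        while t < 1.0:
            arrivals += 1
            t += rng.expovariate(lam)
        q = queue_update(q, arrivals, capacity)
        history.append(q)
    return history
```

Because the capacity here exceeds the mean arrival rate, the simulated queue stays stable rather than backlogging.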
Further, in step S1, the multi-queue delay model is:
the time delay comprises queuing time delay, processing time delay and transmission time delay; order to
Figure BDA0001750315220000038
Respectively represent slices SiAverage queuing delay of an arriving data packet queue before the data packet queue is processed by each node in the whole network, average processing delay on a corresponding virtual node in the whole network and average transmission delay of data packet queue transmitted on a corresponding link in the whole network; data stream of a network slice is arranged at the mostThe average difference value between the time point of the last node after processing and the time point of the network slice request arrival is defined as the average scheduling delay, is represented by tau, and satisfies the following conditions:
Figure BDA0001750315220000039
processing time delay XiProcessing latency of VNF executed by multiple virtual nodes
Figure BDA00017503152200000310
Is composed of
Figure BDA00017503152200000311
Figure BDA00017503152200000312
Since the packet size obeys an average of
Figure BDA00017503152200000313
Is distributed exponentially, so
Figure BDA00017503152200000314
Respectively obey parameters of
Figure BDA00017503152200000315
Are independent of each other, i.e. are irish distributions:
Figure BDA00017503152200000316
the average processing delay of a packet can be derived from the characteristics of the Ireland distribution as follows:
Figure BDA0001750315220000041
similarly, the average transmission delay of the data packet is:
Figure BDA0001750315220000042
the average queuing delay is:
Figure BDA0001750315220000043
wherein
Figure BDA0001750315220000044
Representing execution of f in a service function chainijThe latency distribution function of the node.
Therefore, the network slice SiThe total average scheduling delay of the data packets is as follows:
Figure BDA0001750315220000045
wherein the packet size obeys an average value of
Figure BDA0001750315220000046
The distribution of indices; wi(t) represents the execution of f in the service function chainijOf nodes, in particular Wi(t)=P(Wi1+Wij+...+WiJT is less than or equal to t). The optimization objective of the present invention is to minimize the overall average scheduling delay of the VNF of the service function chain requested by multiple network slices in the network, which is expressed as: min τ, where τ is max { τ12,...,τi}。
Further, in step S2, the multi-queue cache model is:
dynamic resource scheduling is typically related to queue buffer status (e.g., remaining buffer size and current queue length), packet arrival rate, etc. The longer the queue length of the virtual network in the system, the greater the delay of the data buffered by the virtual network, so that the delay performance of the virtual network can be directly influenced and the overflow probability of the queue buffer of the virtual network can be reduced by dynamically adjusting the scheduling of resources. In the present invention only the slice service queue overflow situation is considered, since a queue underflow means that the resources allocated to process the slice service are sufficient and will not cause data lossMiss, queue overflow means that the resources allocated to handle the slice traffic are not sufficient, resulting in a loss of bits when the queue length reaches the slice buffer ceiling.
Figure BDA0001750315220000047
Represents the maximum buffer length allowed by the ith slice queue, and the length of the queue changes dynamically with the arrival rate of the data packets, so that every T passessThe deployment of the service function chain and the allocation of resources are optimized once. If at the current TsThe length of the ith queue in the queue is larger than that corresponding to the queue at the moment
Figure BDA0001750315220000048
Indicating that there is bit overflow and bit loss occurs. Thus, the optimization problem can be described as providing an appropriate service rate to ensure that the queue length is less than
Figure BDA0001750315220000051
In order to reduce the average bit loss rate of the slice and realize effective allocation of resources, the invention calculates the lowest service rate required for preventing the overflow of the slice queue, and for any t, the increment of the length of the ith slice queue can be expressed as:
Ii(t)=Ai(t)-Di(t)
for the start of any t +1 slots, the length of the ith slice queue can be expressed as:
Qi(t+1)=Qi(t)+Ii(t)
the slice queue does not overflow and needs to satisfy:
Figure BDA0001750315220000052
the service rate can be obtained to satisfy:
Figure BDA0001750315220000053
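Rearranging the no-overflow condition gives the minimum number of departures per slot directly; a sketch in packet-count units (the clipping at zero handles the case where the buffer can absorb all arrivals):

```python
def min_service_rate(q_len, arrivals, q_max):
    """Smallest departures D_i(t) satisfying
    Q_i(t) + A_i(t) - D_i(t) <= Q_i^max, clipped at zero."""
    return max(q_len + arrivals - q_max, 0)
```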
further, in step S3, the flow sensing model based on prediction is:
the invention aims to maximize the service rate on the premise of ensuring that each slice queue does not overflow, so that the system performance reaches a relative balance between throughput and fairness, effectively improves the system throughput while ensuring the fairness, and minimizes the overall average scheduling delay of a network. Due to Ai(t) is determined by the arrival of the data packet in the time slot t and has certain randomness, so the invention adopts a prediction method based on the LSTM to predict the minimum service rate which ensures that the slice queue does not overflow in advance
Figure BDA0001750315220000054
And according to the predicted result, a deployment mode for optimizing the service function chain and a resource allocation strategy are made in advance, so that the network efficiency is improved.
The requirement of the lowest resource for preventing the queue from overflowing in the buffer is influenced by the data packet arrival rate A of the service requested by the useriAnd queue length Q at the previous timeiThe influence of (2) can be taken as a slicing feature by observing or monitoring the queue length in the current cache and the historical data of the arrival rate of the data packets of the service requested by the user. Specifically, in the virtual network G ═ (V, E), for the jth VNF of the service function chain ith, order
Figure BDA0001750315220000055
Representing the minimum resource (e.g., CPU resources, memory resources, etc. of the VNF) requirement to prevent queue overflow, the present invention only considers the use of CPU resources for the sake of simplicity. So slicing SiIs characterized by: x is the number ofi=[Ai,Qi]Wherein A isiIndicating packet arrival rate, QiIndicating the queue length at the last time; defining a discrete time window of length epsilon, dividing the discrete time window into a plurality of discrete time segmentsThe data in the time window is taken as a historical data sample, so that in the range from the historical time t-epsilon to t, the data set input by the network model is represented as:
Figure BDA0001750315220000061
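The sliding-window construction of samples can be sketched as follows; the choice of the next slot's arrival count as the training target is illustrative (the patent ultimately predicts the minimum service rate derived from these features).

```python
def window_samples(arrivals, queue_lens, eps):
    """Build length-eps windows of [A_i, Q_i] feature pairs as training
    samples for the LSTM; each target is the next slot's arrival count
    (an illustrative choice, not the patent's exact label)."""
    samples, targets = [], []
    for t in range(eps, len(arrivals)):
        samples.append([[arrivals[k], queue_lens[k]] for k in range(t - eps, t)])
        targets.append(arrivals[t])
    return samples, targets
```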
the samples of each sample set are different, and after sample data is preprocessed, an LSTM model is constructed for forward calculation, including state calculation and output calculation; and then reverse training weights are performed to improve the performance of the prediction.
Further, the forward computation in the prediction-based traffic perception model specifically refers to: computing the state of each slice through an iterative process using the sigmoid activation function σ(·) with the weights W associated with each slice; the result of the state computation is used in the output computation to determine the resource demand prediction value. The steps are:
(1) observing the arrival rate of data packets of a user request service and recording the queue length of a certain amount of data packets after being processed;
(2) calculating the hidden layer state and the long-term unit state of the network layer by using the obtained slice state;
(3) the results of the last two steps are used to determine a predicted resource requirement value.
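The state computation in steps (2)-(3) can be sketched for a single-unit cell using the standard LSTM gate equations; the weight values below are illustrative, not trained values from the invention.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, W):
    """One forward step of a scalar LSTM cell.

    W maps each gate name ('f', 'i', 'g', 'o') to a tuple (w_x, w_h, b)."""
    f = sigmoid(W['f'][0] * x + W['f'][1] * h_prev + W['f'][2])    # forget gate
    i = sigmoid(W['i'][0] * x + W['i'][1] * h_prev + W['i'][2])    # input gate
    g = math.tanh(W['g'][0] * x + W['g'][1] * h_prev + W['g'][2])  # candidate state
    o = sigmoid(W['o'][0] * x + W['o'][1] * h_prev + W['o'][2])    # output gate
    c = f * c_prev + i * g    # new long-term (cell) state
    h = o * math.tanh(c)      # new hidden state
    return h, c
```

Iterating this step over the ε slots of a window yields the hidden state from which the resource demand prediction is read out.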
To achieve accurate prediction of VNF resource requirements, the weight function requires iterative training, which uses data such as the neural network's input x and target output ξ. The goal of training is to minimize a penalizing quadratic cost function:
G_W = (1/2) Σ_t (ξ̂(t) − ξ(t))² + β′ P(W)
where the first term of the penalizing quadratic cost function is the standard error term, ξ̂ being the predicted value and ξ the true value; the second term P(W) is a penalty function with constant coefficient β′. The goal of training is to find the best weights W (which characterize the fitted data) that minimize the cost function; the training algorithm is based on a gradient descent optimization algorithm.
Further, the backward training in the prediction-based flow perception model specifically includes the following steps:
(1) At iteration number k = 0, initialize the weights W and compute each neuron's output value in the forward direction, i.e., the values of the five vectors f_t, i_t, c_t, o_t, h_t, which denote the forget gate, input gate, cell state, output gate, and hidden layer, respectively.
(2) Compute each neuron's error term δ in the backward direction. The back-propagation of the LSTM error term has two directions: one backward along time, computing the error term at each moment starting from the current moment t; the other propagating the error term to the layer above.
(3) Compute the gradient of each weight from the corresponding error terms using the back-propagation-through-time (BPTT) algorithm. The weight update is:
W ← W − η ∂G_W/∂W
where η denotes the learning rate and G_W the penalizing quadratic cost function.
Further, in step S4, the method for scheduling the service function chain VNFs is: model the problem with an ant colony algorithm and solve for the optimal VNF scheduling path, thereby realizing the deployment of the service function chains. On the premise of meeting the minimum resource requirement given by the predicted minimum service rate D_i^min(t) of step S3, which ensures that the slice queue does not overflow, the optimal service function chain deployment path is found through the maximum and minimum ant colony algorithm to obtain the maximum resource allocation scheme, so as to minimize the overall VNF scheduling delay; the overall scheduling delay is calculated with the multi-queue delay model described in step S1.
Further, the method for deploying the multiple service function chains based on the maximum and minimum ant colony algorithm comprises the following specific steps:
(1) initializing parameters such as ant scale, pheromone factors, heuristic function importance degree factors, pheromone volatilization factors, pheromone constants, maximum iteration times and the like;
(2) updating a virtual node set which is accessed by the VNFs of the service function chain in the tabu table;
(3) determining a node set which can be selected by the next VNF according to the tabu table;
On the premise that a virtual node can process the VNF, the next node for processing the VNF module is determined by roulette selection according to the state transition probability; thus, while some ants follow the VNF scheduling strategy with higher pheromone, a new local optimum can still be found through the randomized scheduling strategy. The state transition probability is defined as:
p_ck = [τ_ck]^α [η_k]^β / Σ_{u ∈ allowed} [τ_cu]^α [η_u]^β
where c denotes the node processing f_i(j−1), k denotes a candidate next node capable of executing f_ij, allowed denotes the set of nodes that can execute f_ij, and τ_ck is the pheromone concentration on edge (c, k); α denotes the pheromone factor, whose value reflects how strongly the pheromone concentration influences the ants' movement during the search; β is the importance factor of the heuristic function, reflecting the heuristic's relative importance in the state transition probability (the larger β, the more the ants' state transitions follow the greedy rule); and η_k denotes the heuristic function.
(4) after all VNFs in one ant complete the scheduling strategy according to the scheduling strategy, distributing the computing resources and the output link bandwidth resources on the virtual machine to the corresponding VNFs and the corresponding links in a proportional fair manner, and meanwhile, carrying out weighted calculation on the distributed resources in order to ensure the continuity of processing of data packets at each node to finally obtain corresponding resources distributed by each VNF and each link;
(5) the specific updating process of the pheromone is as follows: 1) reducing the concentration of all pheromones by p%; 2) for each ant in each iteration process, the sum of the resources correspondingly distributed by the path selection is converted into the variation amount of the pheromone, so that the pheromone is updated; because different paths of each ant are selected to cause different allocated resources, the updated pheromone concentration matrix has different results on corresponding different nodes.
(6) Update parameters such as the pheromone volatilization coefficient, the pheromone factor, the heuristic-function importance factor, and the pheromone concentration, clamping the pheromone concentration within its maximum and minimum bounds; repeat the above steps, and after multiple iterations the optimal scheduling solution for the service function chains' VNFs is found.
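Steps (3), (5), and (6) can be sketched as follows: roulette selection over the state transition probabilities, then an evaporate-deposit-clamp pheromone update in the max-min style. Node labels and parameter values are illustrative, and the per-node pheromone table is a simplification of the per-edge concentrations described above.

```python
import random

def transition_probs(tau, eta, alpha, beta, allowed):
    """p(c->k) proportional to tau[k]^alpha * eta[k]^beta over allowed nodes."""
    weights = {k: (tau[k] ** alpha) * (eta[k] ** beta) for k in allowed}
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

def roulette_pick(probs, rng):
    """Roulette-wheel selection of the next node (step (3))."""
    r, acc = rng.random(), 0.0
    for node, p in probs.items():
        acc += p
        if r <= acc:
            return node
    return node  # guard against floating-point round-off

def update_pheromones(tau, deposits, rho, tau_min, tau_max):
    """Steps (5)-(6): evaporate every trail by fraction rho, add each ant's
    deposit (proportional to the resources its path obtained), then clamp
    into [tau_min, tau_max] to avoid stagnation."""
    out = {}
    for node, t in tau.items():
        t = (1.0 - rho) * t + deposits.get(node, 0.0)
        out[node] = min(max(t, tau_min), tau_max)
    return out
```

The clamp into [tau_min, tau_max] is what distinguishes the max-min variant: it prevents any one path's pheromone from dominating, preserving exploration across iterations.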
The invention has the following beneficial effects: for the queue backlog produced by data accumulation under time-varying service requests in the 5G network slicing scenario, a delay-based service function chain queue model and cache model are established, rather than stopping at the resource scheduling problem of a single scheduling period; on this basis, a traffic perception model based on the LSTM neural network is established to predict the slice service's future minimum resource demand. According to the prediction result, a dynamic service function chain VNF scheduling and resource allocation scheme is provided, and a dynamic deployment model of multiple service function chains based on the maximum and minimum ant colony algorithm is established. The prediction method of the invention not only predicts well but also dynamically schedules virtual network resources over time, conforms better to actual network conditions, optimizes the delay of the slice service, and improves network service performance.
Drawings
In order to make the object, technical scheme and beneficial effect of the invention more clear, the invention provides the following drawings for explanation:
FIG. 1 is a diagram illustrating an example scenario in which embodiments of the present invention may be used;
FIG. 2 is a flow chart of virtual network function scheduling in the present invention;
FIG. 3 is a diagram of a model of a queuing system in accordance with the present invention;
FIG. 4 is a schematic diagram of a resource demand prediction model based on an LSTM neural network according to the present invention;
FIG. 5 is a diagram of the structure of an LSTM neuron in accordance with the present invention;
fig. 6 is a flow chart of deployment of multiple service function chains based on the maximum and minimum ant colony algorithm in the present invention.
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
FIG. 1 is a diagram illustrating an example scenario in which embodiments of the present invention may be applied. As shown in fig. 1, different types of slices represent different service types; from left to right are the virtual network users, the virtual network management platform, the virtual network function scheduling layer, and the physical resource pool. In the system, the cloud servers in the physical resource pool provide various types of physical network resources, including computing resources, cache resources, link bandwidth resources, and the like, and the virtual network management platform schedules the virtual network function modules and flexibly allocates physical network resources according to the virtual network users' service states, QoS requirements, and so on. To allocate physical resources more effectively and utilize them efficiently, the virtual network management platform designed by the invention comprises a service request unit, a load analysis module, a resource management entity, a network state monitoring entity, a virtual network scheduler, and the like. The service request unit caches the newly arrived data packets of each slice user; the load analysis module analyzes each slice's cache load characteristics, predicts the load state of the next period, and determines the resources that the lowest service rate must provide to keep the cache queue from overflowing; the virtual network scheduler determines each service function chain's deployment scheme according to the load analysis module's evaluation result; and the resource management entity allocates the optimal amount of physical resources obtained by each virtual network function module after the service function chain is deployed, thereby guaranteeing each network slice's QoS requirements.
The network state monitoring entity is used for observing the real-time state of each physical resource.
The invention aims to predict the lowest resource demand of the service function chain in the next period by monitoring the cache load state and the data packet arrival rate of the user request slicing service in real time, and based on the result, the virtual network scheduler and the resource management entity realize the service provision according to the formulated scheduling and resource allocation scheme of the VNF.
Fig. 2 is a flowchart of scheduling based on virtual network functions in the present invention, where the size and number of data packets requested by a network slice service are random, and the lowest resource demand of the next cycle service function chain is predicted according to the traffic characteristics and the cache load characteristics of the network slice, so as to implement virtual network function scheduling and dynamic resource allocation in the time dimension. As shown in fig. 2, the steps are as follows:
step 201: generating a fully connected virtual network topology, specifying the types of virtual network function modules that each virtual node can process, the different types of network slices, and the service function chains that realize each slice service;
step 202: establishing a service function chain queue model and a cache model based on time delay;
step 203: collecting historical data packet arrival data of the network slices and historical buffer queue length (namely buffer load);
step 204: predicting the minimum requirement of service function chain resources by adopting an LSTM-based neural network model for the collected data, wherein a gradient descent optimization algorithm is adopted in the training method;
step 205: judging whether the penalized quadratic cost function exceeds a threshold; if so, returning to step 203; otherwise, proceeding to step 206;
step 206: executing a plurality of service function chain deployment operations based on a maximum and minimum ant colony algorithm to realize VNF scheduling and dynamic resource allocation;
step 207: calculating the overall scheduling time delay of the network slices after resource allocation based on the multi-queue time delay model established in the step 202; returning to the step 203 to predict the resource requirement of the next period;
FIG. 3 is a diagram of the queue system model in the present invention. The queue length of slice S_i in time slot t is denoted Q_i(t); this parameter also gives the number of packets waiting to be transmitted. Assuming that each slice leases a corresponding amount of cache resource for buffering its service data, the packet arrival process of each queue is denoted A_i(t). Because of the randomness of the data generated by the aperiodic applications of the virtual network users, the queue length changes dynamically with the packet arrival rate, so every T_s the deployment of the service function chains and the allocation of resources are re-optimized dynamically according to the queue lengths. That is, by dynamically adjusting the service rate D_i(t), the service rate is maximized under the premise that the current queue buffer does not overflow, so that the system reaches a relative balance between throughput and fairness: system throughput is effectively improved while fairness is guaranteed, and the overall average scheduling delay of the network is minimized.
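The slot-by-slot queue dynamics described above can be sketched as follows (an illustrative Python sketch; the function and parameter names are our own, not from the patent):

```python
def step_queue(q_len, arrivals, service_rate, slot_len=1.0, max_buf=1000):
    """One scheduling slot of a slice queue Q_i(t): up to
    service_rate * slot_len packets are served, then newly arrived
    packets are admitted, capped at the leased buffer size max_buf."""
    served = min(q_len, int(service_rate * slot_len))
    q_next = q_len - served + arrivals
    return min(q_next, max_buf)  # packets beyond the buffer are dropped
```

Raising `service_rate` as `q_len` approaches `max_buf` is exactly the dynamic adjustment of D_i(t) that the text describes.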
FIG. 4 is a schematic diagram of the LSTM-neural-network-based resource-demand prediction model in the present invention. The minimum resource demand that prevents the buffer queue from overflowing is determined by the packet arrival rate A_i of the service requested by the user and the queue length Q_i at the previous instant. The load analysis module therefore first takes the monitored historical data of the current buffer queue length and of the packet arrival rate of the requested service as the slice features, i.e. the feature of each slice i is expressed as x_i = [A_i, Q_i]. To represent the historical slice states and the historical packet arrival rates, a discrete time window of length ε is defined and the data within it are taken as a historical data sample, namely X = [x(t−ε), ..., x(t)], where the samples of each sample set are different. After the sample data are preprocessed, the LSTM network constructed by
f_t = σ(W_f x_t + U_f h_{t−1} + b_f), i_t = σ(W_i x_t + U_i h_{t−1} + b_i),
g_t = tanh(W_g x_t + U_g h_{t−1} + b_g), o_t = σ(W_o x_t + U_o h_{t−1} + b_o),
c_t = f_t ⊙ c_{t−1} + i_t ⊙ g_t, h_t = o_t ⊙ tanh(c_t)
performs the prediction of the minimum resource demand of the service function chain, where σ and ⊙ denote the sigmoid activation function and the element-wise product, respectively. The prediction process comprises two steps, forward computation and backward training, and the training algorithm is gradient-descent optimization.
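A toy scalar version of these LSTM recurrences might look as follows (the gate weight names wf/uf/bf etc. are conventional LSTM notation assumed here, not taken from the patent):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_cell(x_t, h_prev, c_prev, w):
    """One LSTM step: gates f_t (forget), i_t (input), g_t (candidate),
    o_t (output), then the cell-state and hidden-state updates."""
    f_t = sigmoid(w["wf"] * x_t + w["uf"] * h_prev + w["bf"])
    i_t = sigmoid(w["wi"] * x_t + w["ui"] * h_prev + w["bi"])
    g_t = math.tanh(w["wg"] * x_t + w["ug"] * h_prev + w["bg"])
    o_t = sigmoid(w["wo"] * x_t + w["uo"] * h_prev + w["bo"])
    c_t = f_t * c_prev + i_t * g_t          # c_t = f_t ⊙ c_{t-1} + i_t ⊙ g_t
    h_t = o_t * math.tanh(c_t)              # h_t = o_t ⊙ tanh(c_t)
    return h_t, c_t
```

Feeding the window X = [x(t−ε), ..., x(t)] through this cell step by step, carrying (h, c) forward, yields the hidden state from which the output layer reads the predicted resource demand.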
Fig. 5 is a diagram of the LSTM neuron structure in the present invention. The model uses a state h (hidden layer) to represent the input features of each slice load, c is the cell state storing the long-term state, and x denotes the input of the neural network, i.e. the historical data samples. It can be seen that at time t the neuron has three inputs: the current network input x_t, the neuron output h_{t−1} of the previous instant, and the cell state c_{t−1} of the previous instant; and two outputs: the current neuron output h_t and the current cell state c_t.
Fig. 6 is a flowchart of deployment of multiple service function chains based on the maximum and minimum ant colony algorithm in the present invention, as shown in fig. 6, the steps are as follows:
step 601: inputting a prediction result of the minimum requirement of the VNF resources of the service function chain;
step 602: initializing parameters such as ant scale, pheromone factors, heuristic function importance degree factors, pheromone volatilization factors, pheromone constants, maximum iteration times and the like;
step 603: calculating the state transition probability for each ant and scheduling the VNF by roulette-wheel selection;
step 604: calculating the calculation resources distributed to the VNF and the bandwidth resources of the link according to a proportional fair mode on the deployment result of the service function chain of each ant;
step 605: updating the pheromone matrix in two steps: 1) the concentration of all pheromones decays by p%; 2) for each ant in each iteration, the sum of the resources allocated along its selected path is converted into the pheromone increment, thereby updating the pheromone;
step 606: judging whether the algorithm is trapped in a local optimum; if so, continuing to step 607; if not, returning to step 603 for the next iteration;
step 607: and updating the pheromone concentration volatilization coefficient rho, the pheromone factor alpha, the heuristic function importance degree factor beta and the pheromone concentration tau, and returning to 603 for next iteration.
Finally, it is noted that the above-mentioned preferred embodiments illustrate rather than limit the invention, and that, although the invention has been described in detail with reference to the above-mentioned preferred embodiments, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the scope of the invention as defined by the appended claims.

Claims (4)

1. A virtual network function scheduling method based on prediction for 5G network slices is characterized by comprising the following steps:
s1: under the application scene of a 5G network slice, aiming at the service function chain characteristics of dynamic change of service flow, establishing a network model of a service function chain queue based on time delay, a service function chain queue model and a multi-queue time delay model;
s2: establishing a multi-queue cache model, and determining the priority of a slice request and the lowest service rate to be provided according to the size of a slice service queue at different moments in order to prevent queue data from being lost when the cache space is limited;
s3: dispersing time into a series of continuous time windows, taking queue information in the time windows as training data set samples, and establishing a flow perception model based on prediction;
s4: according to the predicted size of each slice service queue and the corresponding lowest service rate, searching a scheduling method of the VNF under the constraint of the resource which meets the condition that the cache of the slice service queue does not overflow;
in step S1, the network model of the service function chain queue based on the delay includes:
the virtual network topology is represented by a weighted undirected graph G ═ V, E, where V represents a set of virtual nodes and E represents a set of virtual links; b ismRepresenting the total output link bandwidth of node m, shared by the virtual links connected to that node, for a network slice SiThe set of virtual network functions handling the service request is denoted as Fi={fi1,fij,...fiJ},i∈[1,|S|]z,j∈[1,|Fi|]zWhere S denotes the set of all network slices and J denotes FiThe number of VNFs in (C) is represented by f for the VNFs constituting the service function chain, wherein f isijRepresenting a network slice SiThe jth VNF that needs to be scheduled; order to
Figure FDA0003023940060000011
Representing the ability to perform a virtualized network function fijOf a virtual node set, wherein
Figure FDA0003023940060000012
In step S1, the service function chain queue model is:
let Γ = {1, ..., t, ..., T} denote the set of time slots over which the network operates, where the duration of each time slot t is defined as T_s; thus in time slot t, the bandwidth resource allocated to the l-th virtual link connected to the node executing f_ij is denoted b_ij^l(t); let D_ij(t) represent the actual service rate provided for slice S_i by the node executing f_ij within time slot t; Q_i(t) denotes the queue length of slice S_i within time slot t, i.e. the number of packets waiting to be transmitted;
assuming that each slice leases a corresponding amount of cache resource for buffering its service data, for each queue let A_i(t) represent the packet arrival process; owing to the randomness of the data generated by the aperiodic applications of the virtual network users, the packet arrival process A_i(t) is assumed to follow a Poisson distribution with parameter λ_i, and the packet arrival processes of all users are independently and identically distributed across the different scheduling slots, i.e. the successive inter-arrival intervals follow mutually independent negative exponential distributions with parameter λ_i; let M_i(t) represent the packet size, assumed to follow an exponential distribution with mean 1/μ_i, so that the average packet processing rate is μ_i D_ij(t); the queue-length update process is therefore represented as:
Q_i(t+1) = max{Q_i(t) − U_i(t), 0} + A_i(t),
where U_i(t) indicates the number of packets processed in time slot t;
in step S1, the delay model of the multiple queues is:
the delay comprises queuing delay, processing delay and transmission delay; let τ_i^que, τ_i^pro and τ_i^tra respectively represent, for slice S_i, the average queuing delay of an arriving packet queue before it is processed at each node across the whole network, the average processing delay at the corresponding virtual nodes across the whole network, and the average transmission delay of the packet queue over the corresponding links across the whole network; defining the average difference between the time at which the data stream of a network slice finishes processing on the last node and the time at which the slice request arrives as the average scheduling delay, denoted τ_i, it satisfies:
τ_i = τ_i^que + τ_i^pro + τ_i^tra;
the total average scheduling delay of the packets of network slice S_i is obtained from these components, where the packet size follows an exponential distribution with mean 1/μ_i and W_i(t) represents the waiting-time distribution function of the node executing f_ij in the service function chain; the optimization objective is therefore to minimize the overall average scheduling delay of the VNFs of the service function chains for the multiple network slice requests within the network, expressed as: min τ, where τ = max{τ_1, τ_2, ..., τ_i};
In step S2, the multi-queue cache model is: calculate the lowest service rate required to prevent the slice queue from overflowing, which satisfies:
R_i(t) ≥ (Q_i(t) + A_i(t) − Q_i^max) / T_s,
where R_i(t) represents the lowest service rate that service i should provide, and Q_i^max represents the maximum buffer length allowed for the i-th slice queue;
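This overflow-prevention rate reduces to a one-line computation (an illustrative sketch assuming R_i is in packets per second and the slot length is T_s; names are our own):

```python
def min_service_rate(q_len, expected_arrivals, max_buf, slot_len):
    """Lowest service rate R_i(t) so that the current backlog plus the
    packets expected to arrive within the slot does not exceed the
    leased buffer length max_buf."""
    excess = q_len + expected_arrivals - max_buf
    return max(excess / slot_len, 0.0)  # no extra rate needed if no excess
```

The load analysis module would feed the LSTM's predicted arrivals in as `expected_arrivals`, so the rate is provisioned one period ahead of the overflow.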
in step S3, the prediction-based traffic-awareness model is:
using the LSTM-based prediction method, the minimum service rate that guarantees the slice queue does not overflow is predicted in advance; according to the prediction result, an optimized service-function-chain deployment scheme and resource-allocation strategy are formulated ahead of time, thereby improving network efficiency; the feature of slice S_i is: x_i = [A_i, Q_i], where A_i indicates the packet arrival rate and Q_i indicates the queue length at the previous instant; a discrete time window of length ε is defined, and the data within the time window are taken as historical data samples, so the data set input to the network model over the historical range from t−ε to t is represented as:
X = [x(t−ε), ..., x(t)];
the samples of each sample set are different, and after the sample data are preprocessed, the LSTM model is constructed for forward computation, comprising state computation and output computation; backward training of the weights is then carried out to improve the prediction performance;
in step S4, the scheduling method of the service function chain VNFs is: model the problem with an ant colony algorithm to solve for the optimal VNF scheduling path, thereby realizing the deployment of the service function chains; on the premise of meeting the minimum resource requirement given by the minimum overflow-preventing service rate predicted in step S3, the optimal service-function-chain deployment path is searched by the max-min ant colony algorithm to obtain the optimal resource-allocation scheme, so as to minimize the overall VNF scheduling delay; the overall scheduling delay is calculated with the multi-queue delay model described in step S1.
2. The prediction-based virtual network function scheduling method for 5G network slices according to claim 1, wherein the forward computation in the prediction-based traffic-awareness model specifically refers to: performing a repeated iterative process using the sigmoid activation function σ with the weights W associated with each slice to compute each slice state; the result of the state computation is used for the output computation, thereby determining the predicted resource demand value; the method specifically comprises the following steps:
(1) observing the arrival rate of data packets of a user request service and recording the queue length of a certain amount of data packets after being processed;
(2) calculating the hidden layer state and the long-term unit state of the network layer by using the obtained slice state;
(3) the results of the last two steps are used to determine a predicted resource requirement value.
3. The prediction-based virtual network function scheduling method for 5G network slices according to claim 1, wherein the backward training in the prediction-based traffic-awareness model specifically comprises the following steps:
(1) when the iteration number k = 0, initialize the weights W and compute the output value of each neuron in the forward direction, namely the values of the five vectors f_t, i_t, c_t, o_t, h_t, which respectively represent the forget gate, the input gate, the cell state, the output gate and the hidden layer;
(2) compute the error term δ of each neuron in the backward direction; the back-propagation of the LSTM error term proceeds in two directions: one backward along time, i.e. computing the error term of every instant starting from the current instant t; the other propagating the error term to the layer above;
(3) compute the gradient of each weight from the corresponding error term using the back-propagation-through-time (BPTT) algorithm; the weight update is given by:
W ← W − η ∂G_w/∂W,
where η indicates the learning rate and G_w represents the penalized quadratic cost function, whose expression is:
G_w = (1/2) Σ_t (ŷ_t − y_t)² + β′ P(ŷ_t, y_t),
where the first term of the penalized quadratic cost function is the standard error term, ŷ_t is the predicted value and y_t is the true value; the second term P is the penalty function and β′ is a constant term; the goal of the training is to find the optimal weights W such that the cost function is minimized.
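A single gradient-descent step on such a penalized quadratic cost could be sketched as follows; the patent does not spell out the penalty term, so the `max(y - pred, 0)` under-prediction penalty below is purely an illustrative assumption, and the gradient is taken numerically for brevity:

```python
def grad_step(w, xs, ys, predict, lr=0.01, beta=0.1, eps=1e-6):
    """One descent update of a scalar weight w for the cost
    0.5 * sum (pred - y)^2 + beta * sum max(y - pred, 0)
    (illustrative penalty punishing under-prediction, i.e. the
    case where the predicted service rate would let the queue overflow)."""
    def cost(wv):
        c = 0.0
        for x, y in zip(xs, ys):
            p = predict(wv, x)
            c += 0.5 * (p - y) ** 2 + beta * max(y - p, 0.0)
        return c
    g = (cost(w + eps) - cost(w - eps)) / (2 * eps)  # central difference
    return w - lr * g
```

Penalizing under-prediction more than over-prediction is a natural design choice here, since under-provisioning the service rate risks buffer overflow while over-provisioning only wastes resources.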
4. The prediction-based virtual network function scheduling method for 5G network slices according to claim 1, wherein the multiple service-function-chain deployment method based on the max-min ant colony algorithm comprises the following specific steps:
(1) initialize the ant population size, the pheromone factor, the heuristic-function importance factor, the pheromone volatilization factor, the pheromone constant and the maximum number of iterations;
(2) update, in the tabu list, the set of virtual nodes already visited by the VNFs of the service function chain;
(3) determine from the tabu list the set of nodes selectable by the next VNF, and, on the premise that the virtual node can process the VNF, determine the next node for processing the VNF module by roulette-wheel selection according to the state transition probability, defined as:
p_ck = [τ_ck]^α [η_k]^β / Σ_{u ∈ V_{f_ij}} [τ_cu]^α [η_u]^β,
where c represents the node on which f_{ij−1} is deployed, k denotes the next node, V_{f_ij} indicates the set of nodes able to execute f_ij, α represents the pheromone factor, whose value reflects how strongly the ants' movement during the search is influenced by the pheromone concentration; β is the importance factor of the heuristic function, reflecting the relative importance of the heuristic function in the state transition probability: the larger β, the more the ants decide the state transition by the greedy rule; η_k represents the heuristic function;
(4) after all VNFs of one ant have completed scheduling according to the scheduling strategy, allocate the computing resources on the virtual machines and the output link bandwidth resources to the corresponding VNFs and links in a proportionally fair manner; meanwhile, to guarantee the continuity of packet processing at each node, weight the allocated resources to obtain the final resources allocated to each VNF and each link;
(5) the specific pheromone update process is: 1) reduce the concentration of all pheromones by p%; 2) for each ant in each iteration, convert the sum of the resources allocated along its selected path into the pheromone increment, thereby updating the pheromone;
(6) update the pheromone volatilization coefficient, the pheromone factor, the heuristic-function importance factor and the pheromone concentration within the max-min interval; repeat the above steps and, after multiple iterations, find the optimal VNF scheduling solution of the service function chains.
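The roulette-wheel choice of step (3), with p(k) ∝ τ_ck^α · η_k^β over the candidate nodes, might be sketched as follows (illustrative names; the `rng` parameter is our own addition for reproducibility):

```python
import random

def choose_next_node(current, candidates, tau, eta, alpha=1.0, beta=2.0, rng=random):
    """Roulette-wheel selection of the next virtual node k for a VNF:
    each candidate is weighted by tau[(current, k)]**alpha * eta[k]**beta,
    then one is drawn with probability proportional to its weight."""
    weights = [tau[(current, k)] ** alpha * eta[k] ** beta for k in candidates]
    total = sum(weights)
    r = rng.random() * total       # spin the wheel
    acc = 0.0
    for k, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return k
    return candidates[-1]          # guard against floating-point round-off
```

With β large, nodes with the best heuristic value dominate the weights, which is the greedy-rule behavior the claim describes.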
CN201810863512.9A 2018-08-01 2018-08-01 Virtual network function scheduling method based on prediction for 5G network slice Active CN108965024B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810863512.9A CN108965024B (en) 2018-08-01 2018-08-01 Virtual network function scheduling method based on prediction for 5G network slice


Publications (2)

Publication Number Publication Date
CN108965024A CN108965024A (en) 2018-12-07
CN108965024B true CN108965024B (en) 2021-08-13

Family

ID=64466590

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810863512.9A Active CN108965024B (en) 2018-08-01 2018-08-01 Virtual network function scheduling method based on prediction for 5G network slice

Country Status (1)

Country Link
CN (1) CN108965024B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11824736B2 (en) 2019-05-24 2023-11-21 Telefonaktiebolaget Lm Ericsson (Publ) First entity, second entity, third entity, and methods performed thereby for providing a service in a communications network

Families Citing this family (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109361431B (en) * 2018-12-13 2020-10-27 中国科学院计算技术研究所 Slice scheduling method and system
CN109547263B (en) * 2018-12-15 2021-08-20 华南理工大学 Network-on-chip optimization method based on approximate calculation
CN109889366B (en) * 2019-01-04 2020-06-16 烽火通信科技股份有限公司 Network traffic increment counting and analyzing method and system
CN109862442B (en) * 2019-02-22 2022-05-17 伟乐视讯科技股份有限公司 Input stream processing method and processing device based on IP transmission
CN109905330B (en) * 2019-03-22 2020-10-16 西南交通大学 Dynamic weighted fair queue train network scheduling method based on queue length
CN110222379A (en) * 2019-05-17 2019-09-10 井冈山大学 Manufacture the optimization method and system of network service quality
CN110381541B (en) * 2019-05-28 2023-12-26 中国电力科学研究院有限公司 Smart grid slice distribution method and device based on reinforcement learning
CN112311578B (en) * 2019-07-31 2023-04-07 中国移动通信集团浙江有限公司 VNF scheduling method and device based on deep reinforcement learning
CN110618879B (en) * 2019-08-15 2021-07-30 北京三快在线科技有限公司 Message processing method and device, electronic equipment and computer readable medium
CN110545204B (en) * 2019-08-30 2022-09-23 海南电网有限责任公司 Resource allocation method and server based on external penalty function and fruit fly optimization
CN110557294A (en) * 2019-09-25 2019-12-10 南昌航空大学 PSN (packet switched network) time slicing method based on network change degree
CN110519783B (en) * 2019-09-26 2021-11-16 东华大学 5G network slice resource allocation method based on reinforcement learning
CN110971451B (en) * 2019-11-13 2022-07-26 国网河北省电力有限公司雄安新区供电公司 NFV resource allocation method
CN112929187B (en) * 2019-12-05 2023-04-07 中国电信股份有限公司 Network slice management method, device and system
CN113259145B (en) * 2020-02-13 2022-06-03 中国移动通信集团浙江有限公司 End-to-end networking method and device for network slicing and network slicing equipment
CN111314235B (en) * 2020-02-19 2023-08-11 广东技术师范大学 Network delay optimization method based on virtual network function resource demand prediction
CN113543160B (en) * 2020-04-14 2024-03-12 中国移动通信集团浙江有限公司 5G slice resource allocation method, device, computing equipment and computer storage medium
CN111526070B (en) * 2020-04-29 2022-06-03 重庆邮电大学 Service function chain fault detection method based on prediction
CN111538570B (en) * 2020-05-12 2024-03-19 广东电网有限责任公司电力调度控制中心 Energy-saving and QoS guarantee-oriented VNF deployment method and device
CN113766523B (en) * 2020-06-02 2023-08-01 中国移动通信集团河南有限公司 Method and device for predicting network resource utilization rate of serving cell and electronic equipment
CN111669787B (en) * 2020-06-05 2024-02-23 国网上海市电力公司 Resource allocation method and device based on time delay sensitive network slice
CN111629443B (en) * 2020-06-10 2022-07-26 中南大学 Optimization method and system for dynamic spectrum slicing frame in super 5G Internet of vehicles
CN112306719B (en) * 2020-11-23 2022-05-31 中国科学院计算机网络信息中心 Task scheduling method and device
CN112543119B (en) * 2020-11-27 2022-02-18 西安交通大学 Service function chain reliability deployment method based on deep reinforcement learning
CN112653580B (en) * 2020-12-16 2022-11-08 国网河南省电力公司信息通信公司 Virtual network resource allocation method based on active detection under network slice
CN112737823A (en) * 2020-12-22 2021-04-30 国网北京市电力公司 Resource slice allocation method and device and computer equipment
CN112732409B (en) * 2021-01-21 2022-07-22 上海交通大学 Method and device for enabling zero-time-consumption network flow load balancing under VNF architecture
CN112887142B (en) * 2021-01-26 2022-01-21 贵州大学 Method for generating web application slice in virtualized wireless access edge cloud
CN112954742B (en) * 2021-02-08 2023-03-24 中国科学院计算技术研究所 Resource allocation method for mobile communication network slice
CN115134288B (en) * 2021-03-10 2023-08-15 中国移动通信集团广东有限公司 Communication network route scheduling method and system
CN113207048B (en) * 2021-03-15 2022-08-05 广东工业大学 Neural network prediction-based uplink bandwidth allocation method in 50G-PON (Passive optical network)
CN113098707B (en) * 2021-03-16 2022-05-03 重庆邮电大学 Virtual network function demand prediction method in edge network
CN115209431B (en) * 2021-04-13 2023-10-27 中移(成都)信息通信科技有限公司 Triggering method, device, equipment and computer storage medium
CN115242630B (en) * 2021-04-23 2023-10-27 中国移动通信集团四川有限公司 5G network slice arrangement method and device and electronic equipment
US11575582B2 (en) 2021-05-18 2023-02-07 International Business Machines Corporation Service chain based network slicing
CN113411207B (en) * 2021-05-28 2022-09-20 中国人民解放军战略支援部队信息工程大学 Service function circulation arrangement basic platform and method of intelligent network service function chain
CN113676909A (en) * 2021-07-20 2021-11-19 东北大学 Virtual network function universal scheduling method under 5G/B5G environment
CN113411223B (en) * 2021-08-03 2022-03-11 上海交通大学 Industrial software defined network slicing method based on edge cooperation
CN113810939B (en) * 2021-08-17 2023-07-18 中国人民解放军战略支援部队信息工程大学 User-noninductive network slice dynamic mapping device and method
CN113784395B (en) * 2021-08-26 2023-08-15 南京邮电大学 5G network slice resource allocation method and system
CN113923129B (en) * 2021-09-08 2023-08-29 中国人民解放军战略支援部队信息工程大学 VNF demand prediction method and system based on data driving
CN114070708B (en) * 2021-11-18 2023-08-29 重庆邮电大学 Virtual network function resource consumption prediction method based on flow characteristic extraction
CN114268548A (en) * 2021-12-24 2022-04-01 国网河南省电力公司信息通信公司 Network slice resource arranging and mapping method based on 5G
CN114500560B (en) * 2022-01-06 2024-04-26 浙江鼎峰科技股份有限公司 Edge node service deployment and load balancing method for minimizing network delay
CN114585024B (en) * 2022-02-10 2023-03-31 电子科技大学 Slice access control method of 5G/B5G network
CN114390489A (en) * 2022-03-04 2022-04-22 重庆邮电大学 Service deployment method for end-to-end network slice
CN115080263B (en) * 2022-05-12 2023-10-27 吉林省吉林祥云信息技术有限公司 Batch processing scale selection method in real-time GPU service
CN117082009B (en) * 2023-10-16 2024-02-27 天翼安全科技有限公司 Cloud resource management method and management system based on software definition security
CN117082008B (en) * 2023-10-17 2023-12-15 深圳云天畅想信息科技有限公司 Virtual elastic network data transmission scheduling method, computer device and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108063830A (en) * 2018-01-26 2018-05-22 重庆邮电大学 A kind of network section dynamic resource allocation method based on MDP

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108063830A (en) * 2018-01-26 2018-05-22 重庆邮电大学 A kind of network section dynamic resource allocation method based on MDP

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Reliability-based Online Mapping Algorithm for 5G Network Slicing; Tang Lun et al.; Journal of Electronics & Information Technology; 2018-06-06; full text *


Also Published As

Publication number Publication date
CN108965024A (en) 2018-12-07

Similar Documents

Publication Publication Date Title
CN108965024B (en) Virtual network function scheduling method based on prediction for 5G network slice
CN110275758B (en) Intelligent migration method for virtual network function
CN111953758B (en) Edge network computing unloading and task migration method and device
CN102724103B (en) Proxy server, hierarchical network system and distributed workload management method
CN112039965B (en) Multitask unloading method and system in time-sensitive network
Guo et al. Dynamic performance optimization for cloud computing using M/M/m queueing system
CN106973413B (en) Self-adaptive QoS control method for wireless sensor network
KR100233091B1 (en) Atm traffic control apparatus and method
CN112486690A (en) Edge computing resource allocation method suitable for industrial Internet of things
CN111338807B (en) QoE (quality of experience) perception service enhancement method for edge artificial intelligence application
WO2023124947A1 (en) Task processing method and apparatus, and related device
Lin et al. A model-based approach to streamlining distributed training for asynchronous SGD
CN114780247B (en) Flow application scheduling method and system with flow rate and resource sensing
CN114564312A (en) Cloud edge-side cooperative computing method based on adaptive deep neural network
Wu et al. Burst-level congestion control using hindsight optimization
Cheng et al. Proscale: Proactive autoscaling for microservice with time-varying workload at the edge
CN111131447A (en) Load balancing method based on intermediate node task allocation
CN111740925A (en) Deep reinforcement learning-based flow scheduling method
CN114706631A (en) Unloading decision method and system in mobile edge calculation based on deep Q learning
Manavi et al. Resource allocation in cloud computing using genetic algorithm and neural network
CN116915869A (en) Cloud edge cooperation-based time delay sensitive intelligent service quick response method
Stavrinides et al. Multi-criteria scheduling of complex workloads on distributed resources
CN114693141A (en) Transformer substation inspection method based on end edge cooperation
Azarnova et al. Estimation of time characteristics of systems with network topology and stochastic processes of functioning
CN115118783A (en) Task unloading method based on heterogeneous communication technology ultra-reliable low-delay reinforcement learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20231106

Address after: 117000 No. 130, Guangyu Road, Pingshan District, Benxi City, Liaoning Province

Patentee after: BENXI STEEL (GROUP) INFORMATION AUTOMATION CO.,LTD.

Address before: 1003, Building A, Zhiyun Industrial Park, No. 13 Huaxing Road, Henglang Community, Dalang Street, Longhua District, Shenzhen City, Guangdong Province, 518000

Patentee before: Shenzhen Wanzhida Technology Transfer Center Co.,Ltd.

Effective date of registration: 20231106

Address after: 1003, Building A, Zhiyun Industrial Park, No. 13 Huaxing Road, Henglang Community, Dalang Street, Longhua District, Shenzhen City, Guangdong Province, 518000

Patentee after: Shenzhen Wanzhida Technology Transfer Center Co.,Ltd.

Address before: 400065 Chongqing Nan'an District huangjuezhen pass Chongwen Road No. 2

Patentee before: CHONGQING University OF POSTS AND TELECOMMUNICATIONS