Summary of the invention
In view of this, the purpose of the present invention is to provide a prediction-based virtual network function scheduling method for 5G network slices. The method monitors the network state through a prediction mechanism, predicts the minimum resource requirements of the service function chains from the queuing features of the network slices, and, based on the prediction results, proposes a dynamic service function chain VNF scheduling and resource allocation scheme. The underlying resources are scheduled on the time dimension without reserving resources in the underlying network; the communication path that can obtain the maximum resources is found in the virtual network, the online mapping of network slices is realized, and the overall average scheduling delay of multiple network slices is reduced.
To achieve the above objective, the present invention provides the following technical solution:
A prediction-based virtual network function scheduling method for 5G network slices, comprising the following steps:
S1: under the application scenario of 5G network slicing, for service function chains whose service traffic changes dynamically, establish a delay-based network model of the service function queue chain, a service function queue chain model, and a delay model of the multiple queues;
S2: establish a multi-queue cache model; when the cache space is limited, to prevent the loss of queued data, determine at each moment the priority of the slice requests and the minimum service rate that should be provided, according to the slice service queue lengths;
S3: discretize time into a series of consecutive time windows, take the queuing information within a time window as a training data set sample, and establish a prediction-based traffic awareness model;
S4: according to the predicted service queue length of each slice and the corresponding minimum service rate, find the optimal service function chain VNF scheduling method under the resource constraint that the slice service queue caches do not overflow.
Further, in step S1, the network model of the delay-based service function queue chain is as follows:
The virtual network topology is represented by a weighted undirected graph G = (V, E), where V denotes the set of virtual nodes and E denotes the set of virtual links. $B_m$ denotes the total output link bandwidth of node m, which is shared by the virtual links connected to that node. For a network slice $S_i$, the set of virtual network functions that process its service requests is denoted $F_i = \{f_{i1}, \ldots, f_{ij}, \ldots, f_{iJ}\}$, $i \in [1, |S|]_{\mathbb{Z}}$, $j \in [1, |F_i|]_{\mathbb{Z}}$, where S denotes the set of all network slices and J denotes the number of VNFs in $F_i$. A VNF composing a service function chain is denoted f, where $f_{ij}$ denotes the j-th VNF that network slice $S_i$ needs to schedule. Let $V(f_{ij})$ denote the set of virtual nodes capable of executing the virtualized network function $f_{ij}$, where $V(f_{ij}) \subseteq V$.
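For illustration only, a minimal Python sketch of how this topology and the VNF placement sets might be represented; it is not part of the claimed method, and every name in it is hypothetical:

```python
from dataclasses import dataclass, field

# Minimal sketch of the model of step S1; all names are hypothetical.
@dataclass
class VirtualNetwork:
    nodes: set            # V: set of virtual node ids
    links: dict           # E: (m, n) -> link bandwidth weight
    out_bandwidth: dict   # B_m: total output link bandwidth of node m

@dataclass
class Slice:
    sfc: list                                       # F_i = [f_i1, ..., f_iJ]
    candidates: dict = field(default_factory=dict)  # f_ij -> V(f_ij)

G = VirtualNetwork(
    nodes={"m1", "m2", "m3"},
    links={("m1", "m2"): 10.0, ("m2", "m3"): 8.0},
    out_bandwidth={"m1": 10.0, "m2": 18.0, "m3": 8.0},
)
s1 = Slice(sfc=["fw", "nat", "dpi"],
           candidates={"fw": {"m1", "m2"}, "nat": {"m2"}, "dpi": {"m2", "m3"}})
```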
Further, in step S1, the service function queue chain model is as follows:
Let Γ = {1, ..., t, ..., T} denote the set of time slots of the network operation, where the duration of each time slot t is defined as $T_s$. In time slot t, the bandwidth resource allocated to the l-th virtual link connected to the node executing $f_{ij}$ is denoted $b_{ij}^{l}(t)$. Let $D_i(t)$ denote the service rate actually provided to slice $S_i$ in time slot t by the nodes executing $f_{ij}$; $Q_i(t)$ denotes the queue length of slice $S_i$ in time slot t, i.e., the number of data packets waiting to be transmitted.
It is assumed that each slice rents a corresponding amount of cache resources to buffer its service data. For each queue, let $A_i(t)$ denote the arrival process of data packets. Owing to the randomness of the data generated by the aperiodic applications of the virtual network users, the packet arrival process $A_i(t)$ is assumed to obey a Poisson distribution with parameter $\lambda_i$; the packet arrival processes of all users are independent across scheduling time slots, i.e., successive inter-arrival intervals are mutually independent and obey an exponential distribution with parameter $\lambda_i$. Let $M_i(t)$ denote the packet size and assume that the packet size obeys an exponential distribution with mean $1/\mu_i$; the average processing rate of data packets is then $\mu_i$. The queue length therefore updates as:

$$Q_i(t+1) = \max\{Q_i(t) - D_i(t),\, 0\} + A_i(t)$$

where $D_i(t)$ denotes the number of data packets processed in time slot t.
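As a minimal simulation sketch of this queue renewal process, assuming Poisson arrivals with rate λ_i and treating the per-slot service amount as given (names are illustrative, not from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_queue(lam, service, T):
    """Simulate Q_i(t+1) = max(Q_i(t) - D_i(t), 0) + A_i(t) over T slots."""
    q = 0
    history = []
    for t in range(T):
        arrivals = rng.poisson(lam)   # A_i(t) ~ Poisson(lambda_i)
        q = max(q - service[t], 0) + arrivals
        history.append(q)
    return history

print(simulate_queue(lam=5.0, service=[6] * 50, T=50)[-5:])
```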
Further, in step S1, the delay model of the multiple queues is as follows:
The delay includes queuing delay, processing delay and propagation delay. Let $\bar{W}_i$, $\bar{X}_i$ and $\bar{Y}_i$ respectively denote the average queuing delay of the packet queue arriving for slice $S_i$ before processing at each node in the whole network, the average processing delay on the corresponding virtual nodes of the whole network, and the average propagation delay on the corresponding links of the whole network. The average difference between the time point at which the data flow of a network slice has been processed on the last node and the time point at which the network slice request arrives is defined as the average scheduling delay, denoted $\tau_i$, and it satisfies:

$$\tau_i = \bar{W}_i + \bar{X}_i + \bar{Y}_i$$

The processing delay $X_i$ is composed of the processing delays $X_{ij}$ of the multiple virtual nodes executing the VNFs, $X_i = \sum_{j=1}^{J} X_{ij}$. Because the packet size obeys an exponential distribution with mean $1/\mu_i$, the $X_{ij}$ respectively obey mutually independent exponential distributions with parameters $\mu_{ij}$, so their sum $X_i$ follows an Erlang distribution. From the properties of the Erlang distribution, the average processing delay of a data packet is:

$$\bar{X}_i = \sum_{j=1}^{J} \frac{1}{\mu_{ij}}$$

Similarly, the average propagation delay $\bar{Y}_i$ of a data packet is the sum of the average transmission delays on the corresponding virtual links. The average queuing delay $\bar{W}_i$ is derived from the waiting-time distribution

$$W_i(t) = P\left(W_{i1} + \ldots + W_{ij} + \ldots + W_{iJ} \le t\right)$$

which denotes the distribution function of the waiting times at the nodes executing the VNFs $f_{ij}$ of the service function chain.
The overall average scheduling delay of the data packets of network slice $S_i$ is therefore $\tau_i$ as given above. The optimization objective of the present invention is to minimize the overall average scheduling delay of the service function chain VNFs of the multiple network slice requests in the network, expressed as $\min \tau$, where $\tau = \max\{\tau_1, \tau_2, \ldots, \tau_{|S|}\}$.
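For illustration, a small numeric sketch of the delay composition above, with made-up per-node rates $\mu_{ij}$ and per-link transmission delays; the queuing term is taken as given rather than derived from $W_i(t)$:

```python
# Hypothetical example values; not taken from the patent.
mu = [2.0e3, 1.5e3, 3.0e3]      # mu_ij: per-node processing rates (packets/s)
link_delay = [0.8e-3, 1.2e-3]   # per-link average transmission delays (s)
W_bar = 2.5e-3                  # average queuing delay (s), assumed given

X_bar = sum(1.0 / m for m in mu)   # Erlang mean: sum of exponential means
Y_bar = sum(link_delay)
tau_i = W_bar + X_bar + Y_bar      # tau_i = W + X + Y
print(f"average scheduling delay tau_i = {tau_i * 1e3:.3f} ms")
```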
Further, in step S2, the multi-queue cache model is as follows:
Dynamic resource scheduling is generally related to the queue buffer status (e.g., remaining cache size and current queue length), the packet arrival rate, and so on. The longer the queue of a virtual network in the system, the larger the delay of its cached data; dynamically adjusting the resource scheduling therefore directly affects the delay performance and reduces the overflow probability of the queue cache of the virtual network. The present invention only considers the overflow of slice service queues, because a queue underflow means that the resources allocated to processing the slice service are sufficient and no data loss is caused, whereas a queue overflow means that the allocated resources are insufficient, and bits will be lost once the queue length reaches the cache upper limit of the slice. Let $Q_i^{\max}$ denote the maximum cache length allowed for the i-th slice queue. In the present invention, because the queue length changes dynamically with the variation of the packet arrival rate, the deployment of the service function chains and the allocation of resources are re-optimized every $T_s$. If the length of the i-th queue in the current $T_s$ exceeds the corresponding $Q_i^{\max}$, bits overflow and are lost. The optimization problem can thus be described as providing an appropriate service rate to ensure that the queue length remains below $Q_i^{\max}$.
To reduce the average bit loss rate of a slice and realize effective resource allocation, the present invention computes the minimum service rate required to prevent the slice queue from overflowing. At any time slot t, the increment of the i-th slice queue length can be expressed as:

$$I_i(t) = A_i(t) - D_i(t)$$

At the start of any time slot t+1, the length of the i-th slice queue can be expressed as:

$$Q_i(t+1) = Q_i(t) + I_i(t)$$

For the slice queue not to overflow, the following must hold:

$$Q_i(t) + A_i(t) - D_i(t) \le Q_i^{\max}$$

from which the required service rate is obtained:

$$D_i(t) \ge \max\{Q_i(t) + A_i(t) - Q_i^{\max},\, 0\} \triangleq D_i^{\min}(t)$$
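A minimal sketch of this minimum-service-rate computation, directly following the two update equations above (function names are illustrative):

```python
def min_service_rate(q_now, arrivals, q_max):
    """D_i^min(t): smallest service amount keeping Q_i(t+1) <= Q_i^max."""
    return max(q_now + arrivals - q_max, 0)

def step(q_now, arrivals, service):
    """Q_i(t+1) = Q_i(t) + I_i(t), with I_i(t) = A_i(t) - D_i(t)."""
    return q_now + arrivals - service

q, q_max = 40, 100
a = 75
d_min = min_service_rate(q, a, q_max)   # here: 40 + 75 - 100 = 15
assert step(q, a, d_min) <= q_max
```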
Further, in step S3, the prediction-based traffic awareness model is as follows:
The packet arrival rate is determined by the service type and the users' requests for the service, while the service rate depends on the deployment of the service function chains and the resource allocation strategy; the arrival process and the service process are mutually independent. The purpose of the present invention is to maximize the service rate under the premise that no slice queue overflows, so that the system performance reaches a relative balance between throughput and fairness, effectively improving the system throughput while guaranteeing fairness and minimizing the overall average scheduling delay of the network. Since $A_i(t)$, the packet arrivals in time slot t, has a certain randomness, the present invention uses an LSTM-based prediction method to predict in advance the minimum service rate $D_i^{\min}(t)$ that guarantees that the slice queues do not overflow, and formulates in advance, according to the prediction results, the optimized deployment of the service function chains and the resource allocation strategy, thereby improving network efficiency.
Since the minimum resource demand that prevents queue overflow in the cache is influenced by the packet arrival rate $A_i$ of the service requested by the users and the queue length $Q_i$ of the previous moment, the observed or monitored current cache queue length and the historical data of the packet arrival rate of the requested service can be used as slice features. Specifically, in the virtual network G = (V, E), for the j-th VNF of service function chain i, let $r_{ij}$ denote the minimum resource demand (such as the CPU resources and storage resources of the VNF) that prevents queue overflow; to simplify the problem, the present invention only considers the use of CPU resources. The feature of slice $S_i$ is then represented as $x_i = [A_i, Q_i]$, where $A_i$ denotes the packet arrival rate and $Q_i$ denotes the queue length of the previous moment. A discrete time window of length ε is defined, and the data within the time window is taken as one historical data sample; therefore, over the historical range from t − ε to t, the input data set of the network model is represented as:

$$X = \{x(t-\varepsilon), \ldots, x(t)\}$$

The samples of each sample set are different. After the sample data is preprocessed, an LSTM model is constructed to perform forward calculation, comprising state calculation and output calculation; reverse training of the weights is then carried out to improve the prediction performance.
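A minimal sketch of how the sliding-window samples $x = \{x(t-\varepsilon), \ldots, x(t)\}$ with features $x_i = [A_i, Q_i]$ might be assembled (illustrative only; the synthetic data stands in for monitored traces):

```python
import numpy as np

def make_windows(arrival_rates, queue_lengths, eps):
    """Stack [A_i, Q_i] features into windows of length eps+1."""
    feats = np.stack([arrival_rates, queue_lengths], axis=-1)  # shape (T, 2)
    return np.stack([feats[t - eps: t + 1]
                     for t in range(eps, len(feats))])         # (N, eps+1, 2)

A = np.random.poisson(5.0, size=200).astype(float)
Q = np.random.randint(0, 100, size=200).astype(float)
X = make_windows(A, Q, eps=10)
print(X.shape)   # (190, 11, 2): samples x window x features
```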
Further, the forward calculation in the prediction-based traffic awareness model specifically refers to: using a sigmoid activation function σ(W) associated with each slice, the state of each slice is computed through an iterative process, and the results of the state calculation are used for the output calculation, thereby determining the predicted resource demand value. It specifically includes the following steps:
(1) observe the packet arrival rate of the services requested by the users, and record the queue length after a certain number of data packets have been processed;
(2) use the obtained slice states to successively calculate the hidden layer state of the network and the long-term cell state;
(3) determine the predicted resource demand value from the results of the previous two steps.
To realize accurate prediction of the VNF resource demand, the weighting function needs to be trained repeatedly. This process requires data such as the input x and the target output ξ of the neural network; the training objective is to minimize the penalized quadratic cost function:

$$G_W = \frac{1}{2} \sum \left(\hat{\xi} - \xi\right)^2 + \beta' \lVert W \rVert^2$$

where the first term of the penalized quadratic cost function is the standard error term, $\hat{\xi}$ is the predicted value and ξ is the true value; the second term is the penalty term and β′ is a constant. The goal of training is to find the optimal weights W (fitting the features of the data) that minimize the cost function; the training algorithm is based on a gradient optimization algorithm.
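A sketch of the penalized quadratic cost above in NumPy; the L2 form of the penalty term is an assumption, since the patent only names it a penalty with constant β′:

```python
import numpy as np

def penalized_cost(pred, target, weights, beta):
    """G_W = 0.5 * sum((pred - target)^2) + beta * ||W||^2 (assumed L2 penalty)."""
    error = 0.5 * np.sum((pred - target) ** 2)
    penalty = beta * sum(np.sum(w ** 2) for w in weights)
    return error + penalty

pred = np.array([0.8, 1.1])
target = np.array([1.0, 1.0])
W = [np.array([[0.2, -0.1], [0.05, 0.3]])]
print(penalized_cost(pred, target, W, beta=1e-3))
```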
Further, the reverse training in the prediction-based traffic awareness model specifically includes the following steps:
(1) at iteration number κ = 0, initialize the weights W and forward-calculate the output value of each neuron, i.e., the values of the five vectors $f_t, i_t, c_t, o_t, h_t$, which respectively denote the forget gate, the input gate, the cell state, the output gate and the hidden layer;
(2) back-calculate the error term δ of each neuron; the backpropagation of the LSTM error terms involves two directions: one is backpropagation along the time axis, i.e., starting from the current moment t, the error term of each previous moment is calculated; the other is propagating the error term to the upper layer;
(3) according to the corresponding error terms, calculate the gradient of each weight using the back-propagation-through-time algorithm (BPTT); the weights are updated as follows:

$$W^{\kappa+1} = W^{\kappa} - \eta \frac{\partial G_W}{\partial W}$$

where η denotes the learning rate and $G_W$ denotes the penalized quadratic cost function.
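The update rule itself is a plain gradient step; a one-loop sketch under the same illustrative names, where the fixed gradient array merely stands in for the BPTT-computed gradients:

```python
import numpy as np

def gradient_step(weights, grads, lr):
    """W^(k+1) = W^k - eta * dG_W/dW, applied to every weight matrix."""
    return [w - lr * g for w, g in zip(weights, grads)]

W = [np.array([[0.2, -0.1], [0.05, 0.3]])]
grads = [np.array([[0.01, 0.00], [-0.02, 0.03]])]  # placeholder for BPTT gradients
for k in range(100):
    W = gradient_step(W, grads, lr=0.05)
```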
Further, in step S4, the scheduling method of the service function chain VNFs refers to: modeling and solving the optimal path of VNF scheduling with an ant colony algorithm so as to realize the deployment of the service function chains. The problem is based on the minimum service rate $D_i^{\min}(t)$, obtained by the prediction described in step S3, that guarantees that the slice queues do not overflow; under the premise of meeting the minimum resource demand, the optimal service function chain deployment path is found through a max-min ant colony algorithm to obtain the resource allocation scheme with the maximum resources, thereby minimizing the overall VNF scheduling delay. The overall scheduling delay is calculated by the delay model of the multiple queues described in step S1.
Further, the specific steps of the multiple service function chain deployment method based on the max-min ant colony algorithm are as follows:
(1) initialize parameters such as the ant colony size, the pheromone factor, the heuristic function importance factor, the pheromone volatilization factor, the pheromone constant and the maximum number of iterations;
(2) update, in the tabu list, the set of virtual nodes already visited by the service function chain VNFs;
(3) determine, according to the tabu list, the set of nodes that the next VNF may select; under the premise that a virtual node is able to process the VNF, the next node to process the VNF module is determined by roulette-wheel selection according to the state transition probability; in this way, while a portion of the ants is guaranteed to follow the VNF scheduling strategy with higher pheromone, new local optimal solutions can still be found through the random scheduling strategy (see the sketch after this list); the state transition probability is defined as:

$$p_{ck}(t) = \frac{[\tau_{ck}(t)]^{\alpha}\, [\eta_{k}(t)]^{\beta}}{\sum_{u \in V(f_{ij})} [\tau_{cu}(t)]^{\alpha}\, [\eta_{u}(t)]^{\beta}}, \quad k \in V(f_{ij})$$

where c denotes the virtual node of $f_{i,j-1}$, k denotes the next node, $V(f_{ij})$ denotes the set of virtual nodes capable of executing $f_{ij}$, and α denotes the pheromone factor, whose value reflects the degree to which the pheromone concentration influences the ants' behavior during the search; β is the heuristic function importance factor, which reflects the relative importance of the heuristic function in the state transition probability (the larger β is, the closer the ants' state transitions are to a greedy rule); $\eta_k$ represents the heuristic function;
(4) after all VNFs in an ant's path have completed scheduling according to the above scheduling strategy, allocate the virtual machine computing resources and the output link bandwidth resources to the corresponding VNFs and links in a proportionally fair manner; meanwhile, to guarantee the continuity of packet processing at each node, the allocated resources are weighted, finally obtaining the respective resources allocated to each VNF and link;
(5) update the pheromone; the specific update process is: 1) reduce all pheromone concentrations by p%; 2) for each ant of each iterative process, realize the pheromone update by converting the sum of the resources allocated along its selected path into the pheromone increment; because the different path selections of the ants cause the allocated resources to differ, the updated pheromone concentration matrix differs on the corresponding nodes;
(6) update parameters such as the pheromone concentration volatilization coefficient, the pheromone factor, the heuristic function importance factor and the pheromone concentration within the max-min interval; repeat the above steps, and after completing multiple iterations, find the optimal scheduling solution of the service function chain VNFs.
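As announced in step (3), a minimal sketch of the roulette-wheel node selection and of step (5)'s pheromone update, with hypothetical names and a resource-based deposit as described above:

```python
import random

def choose_next_node(c, candidates, pheromone, heuristic, alpha, beta):
    """Roulette-wheel selection by p_ck ~ tau_ck^alpha * eta_k^beta."""
    weights = [pheromone[(c, k)] ** alpha * heuristic[k] ** beta
               for k in candidates]
    return random.choices(list(candidates), weights=weights, k=1)[0]

def update_pheromone(pheromone, ant_paths, p):
    """1) evaporate all trails by p%; 2) deposit each ant's allocated-resource sum."""
    for edge in pheromone:
        pheromone[edge] *= (1.0 - p / 100.0)
    for path, resources in ant_paths:   # resources: sum allocated along the path
        for edge in path:
            pheromone[edge] += resources
    return pheromone

pheromone = {("m1", "m2"): 1.0, ("m1", "m3"): 1.0}
heuristic = {"m2": 0.8, "m3": 0.5}
nxt = choose_next_node("m1", ["m2", "m3"], pheromone, heuristic, alpha=1.0, beta=2.0)
update_pheromone(pheromone, [([("m1", nxt)], 3.0)], p=10)
```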
The beneficial effects of the present invention are: aiming at the queue backlog problem caused by data accumulation under time-varying service requests in the 5G network slicing scenario, the present invention establishes a delay-based service function queue chain model and a cache model, rather than remaining at resource scheduling solved over a single scheduling period; on this basis, a traffic awareness model based on an LSTM neural network is established to predict the future minimum resource requirements of the sliced services. According to the prediction results, a dynamic service function chain VNF scheduling and resource allocation scheme is proposed, and a dynamic deployment model of multiple service function chains based on the max-min ant colony algorithm is established. The prediction method of the present invention not only has good prediction performance, but also dynamically schedules the virtual network resources over the time series, better matching actual network conditions, optimizing the delay of the sliced services and improving the performance of the network services.
Specific embodiment
A preferred embodiment of the present invention will be described in detail below in conjunction with the accompanying drawings.
Fig. 1 is a schematic diagram of a scenario example to which the embodiment of the present invention can be applied. As shown in Fig. 1, different types of slices represent different types of services; from left to right are the wireless virtual network users, the virtual network management platform, the virtual network function scheduling layer and the physical resource pool. In this system, the cloud servers in the physical resource pool provide multiple types of physical network resources, including computing resources, cache resources and link bandwidth resources; the virtual network management platform realizes the scheduling of the virtual network function modules and the flexible allocation of the physical network resources according to the service state, QoS demands, etc. of the virtual network users. In order to allocate physical resources more efficiently and realize efficient utilization of physical resources, the virtual network management platform designed by the present invention is composed of a service request unit, a load analysis module, a resource management entity, a network status monitoring entity and a virtual network scheduler. The service request unit caches the newly arriving data packets of each slice user; the load analysis module analyzes the cache load characteristics of each slice, and predicts the load state of the next period and the resources that must be provided for the minimum service rate that prevents queue overflow; the virtual network scheduler determines the deployment scheme of each service function chain according to the assessment results of the load analysis module; the resource management entity allocates the optimal amount of physical resources to each virtual network function module after the service function chain completes deployment, so as to ensure the QoS demand of each network slice; the role of the network status monitoring entity is to observe the real-time status of each physical resource.
The objective of the present invention is exactly this: by monitoring the cache load state and the arrival rate of the data packets of the slice services requested by users in real time, the minimum resource requirement of the service function chains in the next period is predicted; based on this result, the virtual network scheduler and the resource management entity realize the provision of the services according to the formulated service function chain VNF scheduling and resource allocation scheme.
Fig. 2 is the virtual network function scheduling flow chart of the present invention. The sizes and numbers of the data packets of the network slice service requests are all random; according to the traffic characteristics and cache load characteristics of the network slices, the minimum resource requirement of the service function chains in the next period is predicted, and dynamic allocation of virtual network function scheduling and resources is realized on the time dimension. As shown in Fig. 2, the steps are as follows:
Step 201: generate the fully connected virtual network topology, the types of virtual network function modules that the virtual nodes can process, the different types of network slices, and the service function chain composition that realizes the slice services;
Step 202: establish the delay-based service function queue chain model and the cache model;
Step 203: collect the historical packet arrival data and the historical cache queue lengths (cache load) of the network slices;
Step 204: for the collected data, predict the minimum resource requirement of the service function chains using the LSTM-based neural network model, where the training method uses a gradient optimization algorithm;
Step 205: judge whether the penalized quadratic cost function exceeds the threshold; if so, return to step 203; otherwise, continue to step 206;
Step 206: execute the multiple service function chain deployment operation based on the max-min ant colony algorithm to realize VNF scheduling and dynamic resource allocation;
Step 207: calculate the overall scheduling delay of the network slices after resource allocation based on the multi-queue delay model established in step 202; return to step 203 to predict the resource requirement of the next period. A compact sketch of this loop is given after these steps.
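The sketch below only shows the control flow of steps 203-207; every function is a hypothetical stub standing in for the corresponding module (the real ones are the LSTM predictor of step 204 and the max-min ACO deployment of step 206):

```python
import random

# Hypothetical stand-ins for the modules of Fig. 2.
def collect_slice_history():        return [random.random() for _ in range(10)]
def train_and_eval_cost(samples):   return sum(samples) / len(samples)
def predict_min_resources(samples): return max(samples)
def aco_deploy(demand):             return {"path": ["m1", "m2"], "demand": demand}
def total_delay(deployment):        return 1e-3 * len(deployment["path"])

def scheduling_loop(threshold, periods):
    """Steps 203-207: collect, train/judge, deploy, evaluate, repeat."""
    for _ in range(periods):
        samples = collect_slice_history()     # step 203
        cost = train_and_eval_cost(samples)   # step 204
        if cost > threshold:                  # step 205: gather more data first
            continue
        demand = predict_min_resources(samples)
        deployment = aco_deploy(demand)       # step 206
        print(total_delay(deployment))        # step 207

scheduling_loop(threshold=0.7, periods=5)
```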
Fig. 3 is the queue system model diagram of the present invention. The queue length of slice $S_i$ in time slot t is denoted $Q_i(t)$; this parameter also represents the number of data packets waiting to be transmitted. It is assumed that each slice rents a corresponding amount of cache resources to buffer its service data. For each queue, let $A_i(t)$ denote the arrival process of data packets. Because the aperiodic applications of the virtual network users generate data randomly, the queue length changes dynamically with the variation of the packet arrival rate; therefore, every $T_s$, the deployment of the service function chains and the allocation of resources are dynamically optimized according to the queue lengths. By dynamically adjusting the service rate $D_i(t)$, the service rate is maximized under the premise that the current queue cache does not overflow, so that the system performance reaches a relative balance between throughput and fairness, effectively improving the system throughput while guaranteeing fairness and minimizing the overall average scheduling delay of the network.
Fig. 4 is a schematic diagram of the resource demand prediction model based on the LSTM neural network in the present invention. In this prediction model, since the minimum resource demand that prevents queue overflow in the cache is influenced by the packet arrival rate $A_i$ of the service requested by the users and the queue length $Q_i$ of the previous moment, the load analysis module first takes the monitored current cache queue length and the historical data of the packet arrival rate of the requested service as slice features, i.e., the feature of each slice i can be expressed as $x_i = [A_i, Q_i]$. To represent the historical slice states and the historical packet arrival rates, a discrete time window of length ε is defined, and the data within the time window is taken as one historical data sample, expressed as $x = \{x(t-\varepsilon), \ldots, x(t)\}$; the samples of each sample set are different. After the sample data is preprocessed, the LSTM network is constructed through

$$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f), \quad i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i),$$
$$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o), \quad g_t = \tanh(W_g \cdot [h_{t-1}, x_t] + b_g),$$
$$c_t = f_t \odot c_{t-1} + i_t \odot g_t, \quad h_t = o_t \odot \tanh(c_t)$$

to predict the minimum resource demand of the service function chains, where σ and ⊙ respectively denote the sigmoid activation function and the element-wise product. The prediction process comprises two steps, forward calculation and reverse training, and the training algorithm uses a gradient optimization algorithm.
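A minimal NumPy sketch of one forward pass of the LSTM cell equations above; the weight shapes, hidden size and random initialization are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_h = 2, 8   # x_t = [A_i, Q_i]; hidden size chosen arbitrarily

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One weight matrix per gate, acting on the concatenation [h_{t-1}, x_t].
Wf, Wi, Wo, Wg = (rng.normal(0, 0.1, (n_h, n_h + n_in)) for _ in range(4))
bf = bi = bo = bg = np.zeros(n_h)

def lstm_cell(x_t, h_prev, c_prev):
    z = np.concatenate([h_prev, x_t])
    f = sigmoid(Wf @ z + bf)    # forget gate
    i = sigmoid(Wi @ z + bi)    # input gate
    o = sigmoid(Wo @ z + bo)    # output gate
    g = np.tanh(Wg @ z + bg)    # candidate cell state
    c = f * c_prev + i * g      # c_t = f_t (.) c_{t-1} + i_t (.) g_t
    h = o * np.tanh(c)          # h_t = o_t (.) tanh(c_t)
    return h, c

h, c = np.zeros(n_h), np.zeros(n_h)
for x_t in np.array([[5.0, 40.0], [6.0, 42.0]]):   # two [A_i, Q_i] steps
    h, c = lstm_cell(x_t, h, c)
print(h.shape, c.shape)
```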
Fig. 5 is the LSTM neuron structure diagram of the present invention. The network uses a state h (the hidden layer) to represent the input features of each slice load; c is the cell state, used to save the long-term state; x denotes the input of the neural network, i.e., the historical data samples. It can be seen that at moment t there are three neuron inputs: the input value $x_t$ of the network at the current moment, the output value $h_{t-1}$ of the neuron at the previous moment, and the cell state $c_{t-1}$ of the previous moment; and there are two neuron outputs: the output value $h_t$ of the neuron at the current moment and the cell state $c_t$ at the current moment.
Fig. 6 is the deployment flow chart of multiple service function chains based on the max-min ant colony algorithm in the present invention. As shown in Fig. 6, the steps are as follows:
Step 601: input the prediction results of the minimum resource requirements of the service function chain VNFs;
Step 602: initialize parameters such as the ant colony size, the pheromone factor, the heuristic function importance factor, the pheromone volatilization factor, the pheromone constant and the maximum number of iterations;
Step 603: calculate the state transition probability for each ant and schedule the VNFs by roulette-wheel selection;
Step 604: for the service function chain deployment result of each ant, calculate and allocate, in a proportionally fair manner, the computing resources and the link bandwidth resources distributed to the VNFs;
Step 605: update the pheromone matrix, which comprises two sub-steps: 1) reduce all pheromone concentrations by p%; 2) for each ant of each iterative process, realize the pheromone update by converting the sum of the resources allocated along its selected path into the pheromone increment;
Step 606: judge whether a local optimal solution has been reached; if so, continue to step 607; if not, return to step 603 for the next iteration;
Step 607: update the pheromone concentration volatilization coefficient ρ, the pheromone factor α, the heuristic function importance factor β and the pheromone concentration τ, and return to step 603 for the next iteration.
Finally, it is noted that the above preferred embodiments are only used to illustrate the technical solution of the present invention and not to limit it. Although the present invention has been described in detail through the above preferred embodiments, those skilled in the art should understand that various changes may be made to it in form and in detail without departing from the scope defined by the claims of the present invention.