CN112199153A - Virtual network function VNF instance deployment method and device - Google Patents

Virtual network function VNF instance deployment method and device

Info

Publication number
CN112199153A
CN112199153A (application CN202011027090.5A)
Authority
CN
China
Prior art keywords: type, vnf, network, vnf instance, adjusted
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011027090.5A
Other languages
Chinese (zh)
Inventor
徐思雅
杨会峰
郭少勇
孙辰军
方蓬勃
童日明
刘玮
李逸民
王智慧
吕鹏鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Beijing University of Posts and Telecommunications
Information and Telecommunication Branch of State Grid Hebei Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
Beijing University of Posts and Telecommunications
Information and Telecommunication Branch of State Grid Hebei Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Beijing University of Posts and Telecommunications, Information and Telecommunication Branch of State Grid Hebei Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN202011027090.5A priority Critical patent/CN112199153A/en
Publication of CN112199153A publication Critical patent/CN112199153A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 — Arrangements for executing specific programs
    • G06F 9/455 — Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 — Hypervisors; Virtual machine monitors
    • G06F 9/45558 — Hypervisor-specific management and integration aspects

Abstract

The embodiment of the invention provides a Virtual Network Function (VNF) instance deployment method and device. The scheme is as follows: for each service chain, acquiring a first traffic rate for the historical time slots before the current time slot and a second traffic rate for the current time slot; predicting a third traffic rate of each service chain in the next time slot from the first traffic rate and the second traffic rate; determining, based on the third traffic rate and the number of VNF instances of each type, the number of VNF instances of each type to be adjusted in the next time slot; and, for each type of VNF instance whose number to be adjusted indicates an increase, deploying that number of VNF instances of the type in the NFV network. The technical scheme provided by the embodiment of the invention effectively avoids the long deployment delay incurred when VNF instances are deployed only after the actual traffic arrives, thereby effectively improving the service quality of the NFV network when the actual traffic arrives.

Description

Virtual network function VNF instance deployment method and device
Technical Field
The invention relates to the technical field of the Internet, and in particular to a Virtual Network Function (VNF) instance deployment method and device.
Background
Network Function Virtualization (NFV) is a technology for implementing network functions in software on industry-standard servers. NFV can efficiently deploy and manage various network functions. Compared with traditional middleboxes that rely on dedicated hardware, NFV runs on commodity servers built from standard hardware, and network functions, i.e., Virtualized Network Function (VNF) instances, can be flexibly scaled as user traffic changes. For example, as traffic in the NFV network increases, the number of VNF instances may be increased to meet service needs. Conversely, when traffic in the NFV network is low, some VNF instances may be released to save operational costs.
However, at present, dynamic scaling of network functions is usually passive: VNF instances in the NFV network are deployed according to the size of the current traffic only after the traffic has actually arrived. In particular, deploying a new VNF instance in the NFV network requires transmitting a Virtual Machine (VM) image, which takes a certain amount of time and degrades service quality during deployment.
Disclosure of Invention
The embodiment of the invention aims to provide a Virtual Network Function (VNF) instance deployment method and device so as to improve the service quality of an NFV network when actual traffic arrives. The specific technical scheme is as follows:
an embodiment of the invention provides a VNF instance deployment method, which includes the following steps:
acquiring, for each service chain in the NFV network, a first traffic rate for a first number of historical time slots before the current time slot and a second traffic rate for the current time slot;
predicting a third traffic rate of each service chain in the next time slot according to the first traffic rate and the second traffic rate;
determining, based on the third traffic rate of each service chain and the number of VNF instances of each type in each service chain, the number to be adjusted of each type of VNF instance in the NFV network in the next time slot;
for each type of VNF instance in the NFV network, in a case that the number to be adjusted for that type indicates that the number of instances of that type is to be increased in the next time slot, deploying the number to be adjusted of VNF instances of that type in the NFV network.
An embodiment of the present invention further provides a VNF instance deployment apparatus, where the apparatus includes:
an obtaining module, configured to obtain, for each service chain in the NFV network, a first traffic rate for a first number of historical time slots before the current time slot and a second traffic rate for the current time slot;
a prediction module, configured to predict a third traffic rate of each service chain in the next time slot according to the first traffic rate and the second traffic rate;
a determining module, configured to determine, based on the third traffic rate of each service chain and the number of VNF instances of each type in each service chain, the number to be adjusted of each type of VNF instance in the NFV network in the next time slot;
a deployment module, configured to, for each type of VNF instance in the NFV network, deploy the number to be adjusted of VNF instances of that type in the NFV network when the number to be adjusted for that type indicates that the number of instances of that type is to be increased in the next time slot.
An embodiment of the invention also provides an electronic device, including a processor, a communication interface, a memory and a communication bus, where the processor, the communication interface and the memory communicate with each other through the communication bus;
the memory is configured to store a computer program;
the processor is configured to implement the steps of any of the above VNF instance deployment methods when executing the program stored in the memory.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the VNF instance deployment method described in any one of the above are implemented.
Embodiments of the present invention further provide a computer program product including instructions, which when run on a computer, cause the computer to perform any of the above VNF instance deployment methods.
The embodiment of the invention has the following beneficial effects:
the VNF instance deployment method and apparatus provided in the embodiments of the present invention may predict, according to a first traffic rate of a first number of historical timeslots before a current timeslot of each service chain in the NFV network and a second traffic rate of the current timeslot, a third traffic rate of each service chain in a next timeslot, and determine, based on the third traffic rate of each service chain and the number of each type of VNF instance in each service chain, a to-be-adjusted number of each type of VNF instance in the NFV network at the next timeslot, so that the to-be-adjusted number of each type of VNF instance indicates that the number of the type of VNF instance is increased at the next timeslot, and the VNF instance is deployed in the NFV network in advance, which may effectively avoid a problem that deployment of the VNF instance consumes a long time when actual traffic reaches, thereby effectively improving service quality of the NFV network when actual traffic reaches.
Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a first flowchart of a VNF instance deployment method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a hidden unit in a GRU neural network according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of a GRU neural network training method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of preset GRU neural network training according to an embodiment of the present invention;
FIG. 5-a is a first schematic diagram of a comparison of neural network performance provided by an embodiment of the present invention;
FIG. 5-b is a second schematic diagram of a comparison of neural network performance provided by an embodiment of the present invention;
fig. 6 is a second flowchart of a VNF instance deployment method according to an embodiment of the present invention;
fig. 7 is a third flowchart illustrating a VNF instance deployment method according to an embodiment of the present invention;
fig. 8 is a fourth flowchart illustrating a VNF instance deployment method according to an embodiment of the present invention;
fig. 9 is a fifth flowchart of a VNF instance deployment method according to an embodiment of the present invention;
figure 10-a is a schematic diagram of VNF instance deployment costs for a service chain according to an embodiment of the present invention;
fig. 10-b is a schematic diagram of the total cost for NFV network deployment provided by an embodiment of the present invention;
FIG. 10-c is a schematic diagram of the total cost of operation of a system provided by an embodiment of the present invention;
fig. 11-a is a schematic diagram of the actual traffic rate of a service chain in 720 time slots according to an embodiment of the present invention;
fig. 11-b is a first schematic diagram of traffic rates of a service chain within 720 time slots, which are obtained based on threshold prediction according to an embodiment of the present invention;
fig. 11-c is a first schematic diagram of traffic rates of a service chain within 720 time slots, which are predicted based on a GRU neural network according to an embodiment of the present invention;
fig. 11-d is a second schematic diagram of traffic rates of a service chain within 720 time slots, which are obtained based on threshold prediction according to an embodiment of the present invention;
fig. 11-e is a second schematic diagram of traffic rates of a service chain within 720 time slots, which are predicted based on a GRU neural network according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of a VNF instance deployment apparatus according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to solve the problems that deploying a VNF instance in an existing NFV network is time-consuming and degrades the service quality of the NFV network, an embodiment of the present invention provides a VNF instance deployment method. The method may be applied to any electronic device.
As shown in fig. 1, fig. 1 is a first flowchart of a VNF instance deployment method according to an embodiment of the present invention. The method comprises the following steps.
Step S101, a first traffic rate of a first number of historical time slots before a current time slot of each service chain in the NFV network and a second traffic rate of the current time slot are obtained.
Step S102, according to the first flow rate and the second flow rate, predicting a third flow rate of each service chain in the next time slot.
Step S103, determining the number of VNF instances of each type in the NFV network to be adjusted in the next timeslot based on the third flow rate of each service chain and the number of VNF instances of each type in each service chain.
Step S104, for each type of VNF instance in the NFV network, in a case that the to-be-adjusted number of the type of VNF instance indicates that the number of the type of VNF instance is increased in the next timeslot, deploying the to-be-adjusted number of the type of VNF instances in the NFV network.
According to the method provided by the embodiment of the invention, the third traffic rate of each service chain in the next time slot is predicted from the first traffic rate of a first number of historical time slots before the current time slot and the second traffic rate of the current time slot, and the number to be adjusted of each type of VNF instance in the NFV network in the next time slot is determined based on the third traffic rate of each service chain and the number of VNF instances of each type in each service chain. Whenever the number to be adjusted for a type indicates an increase in the next time slot, the corresponding VNF instances are deployed in the NFV network in advance, which effectively avoids the long deployment delay incurred when VNF instances are deployed only after the actual traffic arrives, thereby effectively improving the service quality of the NFV network when the actual traffic arrives.
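The four steps above can be sketched as a short, self-contained Python example. Everything here is a hedged illustration: the per-instance capacities, the chain definitions, the rate values and the helper names (`instances_needed`, `scale_up`) are hypothetical placeholders, not values or identifiers from the embodiment.

```python
import math

# Hypothetical per-instance processing capacity (Mbps) for each VNF type.
CAPACITY = {"NAT": 100.0, "Firewall": 80.0}

def instances_needed(predicted_rates, chains):
    """Step S103 sketch: per-type instance counts needed for the predicted rates.

    predicted_rates: {chain_id: predicted Mbps for the next slot}
    chains: {chain_id: ordered list of VNF types the chain traverses}
    """
    needed = {vnf_type: 0 for vnf_type in CAPACITY}
    for chain_id, rate in predicted_rates.items():
        for vnf_type in chains[chain_id]:
            needed[vnf_type] += math.ceil(rate / CAPACITY[vnf_type])
    return needed

def scale_up(needed, deployed):
    """Step S104 sketch: deploy in advance only where the adjustment is an increase."""
    return {t: max(needed[t] - deployed.get(t, 0), 0) for t in needed}

chains = {"c1": ["NAT", "Firewall"], "c2": ["Firewall"]}
predicted = {"c1": 250.0, "c2": 60.0}            # assumed step S102 output
needed = instances_needed(predicted, chains)      # step S103
to_deploy = scale_up(needed, {"NAT": 3, "Firewall": 3})  # step S104
```

With these made-up numbers, chain c1 needs 3 NAT and 4 Firewall instances and chain c2 needs 1 Firewall instance, so only the Firewall type shows an increase and is deployed ahead of the next slot.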
In this embodiment of the present invention, the network nodes in the NFV network may include servers, traffic forwarding devices and the like. Different types of VNF instances may be deployed on the servers, and each type of VNF instance provides a corresponding network function service for data flows. The ordered sequence of VNF instances through which a data stream passes is called a Service Chain (also known as a Service Function Chain, SFC).
The NFV network includes, but is not limited to, VNF instances such as Network Address Translation (NAT), Intrusion Detection System (IDS), Firewall, and Intrusion Prevention System (IPS). The type of VNF instance is not specifically limited here.
The following examples illustrate the present invention. For ease of description, an electronic device is taken as the execution subject in the following description; this does not constitute any limitation.
For step S101, a first traffic rate of a first number of historical time slots before a current time slot of each service chain in the NFV network and a second traffic rate of the current time slot are obtained.
In this embodiment of the present invention, the NFV network includes a plurality of service chains, and each service chain may include a plurality of network nodes, on each of which different types of VNF instances may be deployed. Taking the first network node on each service chain as the source node and the last network node as the destination node, the traffic rate may be expressed as the transmission rate of the data stream from the source node to the destination node, in megabits per second (Mbps), i.e., the number of millions of bits transmitted per second.
The electronic device can determine, in real time, a traffic rate corresponding to each service chain in the NFV network. When the first traffic rate and the second traffic rate are obtained, the electronic device may respectively obtain traffic rates corresponding to a first number of historical time slots before a time slot (i.e., a current time slot) of a current time point as the first traffic rate, and obtain traffic rates of the time slot of the current time point as the second traffic rate.
The first flow rate and the second flow rate may be an average value of flow rates corresponding to respective time points in each time slot, or may be a flow rate corresponding to a certain time point in each time slot. In addition, the current time slot and the historical time slot may be divided in units of seconds, for example, one time slot every 30 seconds, or may be divided in units of minutes, for example, one time slot every 5 minutes.
In an optional embodiment, when the electronic device obtains the first traffic rate and the second traffic rate, each time slot may be divided according to a current time point, for example, the electronic device may determine the current time slot and the historical time slot by taking the current time point as a starting time point of the current time slot.
In an alternative embodiment, the size of each time slot may be the same or different, depending on the network environment, the number of data streams, and other factors. Each time slot is not particularly limited herein.
The first number may be set according to user requirements, for example, in order to make the predicted third flow data more accurate, the first number may be set to be larger when the first number is set. That is, more first flow rates are obtained, and the third flow rate is predicted. Here, the first number is not particularly limited.
For step S102, a third flow rate of each service chain in the next time slot is predicted according to the first flow rate and the second flow rate.
The next timeslot is the timeslot next to the timeslot where the current time point is located (i.e., the current timeslot).
In an optional embodiment, for each service chain, the electronic device may take the first traffic rate and the second traffic rate of the service chain as inputs, and predict the third traffic rate of the service chain in the next time slot using a pre-trained Gated Recurrent Unit (GRU) neural network, where the GRU neural network is trained based on fourth traffic rates of a second number of historical time slots.
The GRU neural network is a variant of the Long Short-Term Memory (LSTM) neural network and has a simpler structure. Specifically, the LSTM neural network introduces three gate functions, namely an input gate, a forget gate and an output gate, to control the input value, the memory value and the output value. The GRU neural network has only two gate functions: an update gate and a reset gate.
In an embodiment of the present invention, the GRU neural network may include an input layer, a hidden layer and an output layer, where the hidden layer may include a plurality of hidden units. For ease of understanding, fig. 2 shows a schematic structural diagram of a hidden unit in the GRU neural network according to an embodiment of the present invention. The hidden unit shown in fig. 2 includes an update gate z_t and a reset gate r_t; 201, 203 and 205 denote multiplication operations, 202 denotes a summation operation, 204 denotes subtraction from 1, 206 and 207 denote sigmoid operations (i.e., σ), and 208 denotes the hyperbolic tangent operation (i.e., tanh). The update gate z_t controls the extent to which the state information of the previous time step, which may be expressed as the location information of the VNF instances in the NFV network at the previous time step, is carried into the current state; a larger value of z_t means that more of the previous state is carried in. The reset gate r_t controls how much of the previous state is written into the current candidate state h̃_t; the smaller r_t is, the less of the previous state is written in.
The GRU neural network may include a plurality of hidden units as shown in fig. 2. The hidden unit shown in fig. 2 is the t-th hidden unit, h_{t-1} is the information output by the (t-1)-th hidden unit, h_t is the information output by the t-th hidden unit (i.e., the hidden unit shown in fig. 2), and x_t is the t-th input to the GRU neural network, i.e., the t-th traffic rate.
Based on the GRU neural network provided by the embodiment of the invention, the influence of historical input and output on the current output can be fully considered by adopting a multi-input single-output structure, so that more characteristics in the input first flow rate and second flow rate can be learned, and the predicted third flow rate is more accurate.
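A single hidden unit with these two gates can be sketched in NumPy as follows. This is a minimal illustration only: the weight shapes and random initial values are assumptions, and the state update follows the document's convention that a larger update gate z_t carries more of the previous state into the current state (some texts use the opposite convention).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x_t, h_prev, W, U, b):
    """One hidden unit as in fig. 2; W, U, b each hold three parameter
    groups, for the update gate, the reset gate and the candidate state."""
    z_t = sigmoid(W[0] @ x_t + U[0] @ h_prev + b[0])              # update gate
    r_t = sigmoid(W[1] @ x_t + U[1] @ h_prev + b[1])              # reset gate
    h_tilde = np.tanh(W[2] @ x_t + U[2] @ (r_t * h_prev) + b[2])  # candidate state
    # Larger z_t keeps more of the previous state (document's convention).
    return z_t * h_prev + (1.0 - z_t) * h_tilde                   # new state h_t

rng = np.random.default_rng(0)
n_in, n_h = 1, 4                        # one traffic rate in, four hidden units
W = rng.normal(size=(3, n_h, n_in))     # input weights
U = rng.normal(size=(3, n_h, n_h))      # recurrent weights
b = np.zeros((3, n_h))

h = np.zeros(n_h)
for rate in [10.0, 12.0, 11.5]:         # a short traffic-rate sequence (made up)
    h = gru_cell(np.array([rate]), h, W, U, b)
```

Feeding the rate sequence one slot at a time through the same unit is what gives the multi-input, single-output behaviour described above.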
In an alternative embodiment, as shown in fig. 3, fig. 3 is a schematic flow chart of a GRU neural network training method according to an embodiment of the present invention. The method comprises the following steps.
Step S301, a preset training set is obtained.
The predetermined training set includes a fourth traffic rate for a second number of historical time slots. For the acquisition of the predetermined training set, reference may be made to the above-mentioned acquisition manner of the first traffic rate, which is not specifically described herein.
The second number of historical time slots is a second number of consecutive historical time slots. The second number may be greater than the first number. Here, the second number is not particularly limited.
Step S302, inputting a fourth number of fourth traffic rates from the preset training set into a preset GRU neural network to predict the traffic rate of a target time slot, where the target time slot is the time slot immediately after the last of the input fourth number of historical time slots.
In an optional embodiment, the electronic device may select the fourth number of fourth traffic rates using a preset time slot window, and input the selected rates into the preset GRU neural network to predict the traffic rate of the target time slot, where the fourth number is less than the second number.
For ease of understanding, fig. 4 shows a schematic diagram of preset GRU neural network training according to an embodiment of the invention. Curve 401 is formed by the second number of fourth traffic rates obtained in step S301.
Assuming the fourth number is 2, the electronic device may slide the preset time slot window along curve 401 and input the fourth traffic rates of the two historical time slots covered by the window into the preset GRU neural network to predict the traffic rate of the target time slot. For example, the fourth traffic rates corresponding to time slots 402 and 403 are input into the preset GRU neural network to predict the traffic rate of time slot 404.
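The sliding of the preset time slot window along the rate curve can be sketched as the construction of (input window, target) training pairs. The rate values below are invented for illustration; only the windowing mechanics follow the text.

```python
def sliding_windows(rates, window):
    """Split a slot-rate sequence into (input window, target) pairs.

    Each window of `window` consecutive slot rates is paired with the rate
    of the slot immediately after it (the "target time slot").
    """
    pairs = []
    for i in range(len(rates) - window):
        pairs.append((rates[i:i + window], rates[i + window]))
    return pairs

rates = [10.0, 12.0, 11.0, 13.0, 14.0]    # fourth traffic rates (assumed values)
pairs = sliding_windows(rates, window=2)  # fourth number = 2, as in fig. 4
```

Here the first pair feeds the rates of two consecutive slots (the analogue of slots 402 and 403) to predict the rate of the following slot (the analogue of slot 404).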
Step S303, calculating a loss value of the preset GRU neural network according to the predicted flow rate of the target time slot and a fourth flow rate of the target time slot in the preset training set.
Still taking fig. 4 as an example, the electronic device may calculate an error value between the predicted traffic rate of the timeslot 404 and the fourth traffic rate of the timeslot 404 on the curve 401, so as to obtain a loss value of the predetermined GRU neural network.
In the embodiment of the present invention, the loss value may be calculated using a loss function such as mean square error or mean absolute error. The calculation of the loss value is not described in detail here.
And step S304, when the loss value is not less than the preset loss value threshold value, adjusting the parameters of the preset GRU neural network, and returning to execute the step S302.
In this step, when the loss value is not less than the preset loss value threshold, the electronic device may determine that the preset GRU neural network has not converged. In this case, the electronic device may adjust the parameters of the preset GRU neural network and return to step S302, i.e., inputting a fourth number of fourth traffic rates from the preset training set into the preset GRU neural network to predict the traffic rate of the target time slot.
In the embodiment of the present invention, the parameters of the preset GRU neural network include, but are not limited to, biases and weights. The parameters may be adjusted by gradient descent, back-propagation, or the like. The parameter adjustment process is not described in detail here.
Step S305, when the loss value is smaller than the preset loss value threshold, determining the current preset GRU neural network as the trained GRU neural network.
In this step, the electronic device may determine that the predetermined GRU neural network converges when the loss value is less than a predetermined loss value threshold. At this time, the electronic device may determine the current preset GRU neural network as the trained GRU neural network.
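Steps S302 to S305 amount to a standard train-until-convergence loop. The toy model below is a hedged illustration of that loop only, not the GRU itself: a single weight w, fitted by gradient descent on a mean-square-error loss, predicts the target-slot rate from the last rate in each window, and the synthetic data are constructed so the true relation is exactly 1.1x.

```python
# Toy data: the target-slot rate is exactly 1.1x the last observed rate.
data = [([10.0, 20.0], 22.0), ([20.0, 30.0], 33.0), ([30.0, 40.0], 44.0)]

w = 0.0                                    # single trainable parameter
threshold, lr = 1e-6, 0.0005               # preset loss threshold, step size
loss = float("inf")
for _ in range(10000):                     # steps S302-S305 as a loop
    loss, grad = 0.0, 0.0
    for window, target in data:
        pred = w * window[-1]              # S302: predict the target-slot rate
        loss += (pred - target) ** 2       # S303: squared-error loss
        grad += 2.0 * (pred - target) * window[-1]
    loss /= len(data)
    if loss < threshold:                   # S305: converged, keep current model
        break
    w -= lr * grad / len(data)             # S304: adjust parameters, try again
```

The loop exits as soon as the loss drops below the preset threshold, which is exactly the convergence test of steps S304/S305; with these numbers w settles very close to 1.1.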
In the above embodiment, the third flow rate of the next time slot is predicted by the pre-trained GRU neural network. In addition, the electronic device may employ other deep learning networks, such as the LSTM neural network described above. Here, the deep learning network used for predicting the third flow rate is not particularly limited.
In an optional embodiment, in order to improve the accuracy of the third traffic rate prediction, the electronic device may train multiple deep learning networks, select the one with the minimum cost according to the cost incurred by the NFV network under under-provisioning or over-provisioning, and then use that network to predict the third traffic rate of each service chain in the time slot following the current time slot.
In an alternative embodiment, the electronic device may calculate the cost incurred by under-provisioning or over-provisioning using the following formula:

C_provision(t) = ζ_c(t) · Σ_{i=1}^{I} max(λ_i(t) − α_i(t), 0) + ζ_o(t) · Σ_{k=1}^{K} max(P_k(t) − P̂_k(t), 0)

where t is the t-th time slot; C_provision(t) is the total cost of the t-th time slot; I is the number of service chains included in the NFV network and i indexes the i-th service chain; λ_i(t) is the traffic rate actually arriving on the i-th service chain in time slot t; α_i(t) is the predicted traffic rate of the i-th service chain in time slot t; ζ_c(t) is the unit cost per gigabit (Gbit) of traffic that waits for network function service or is dropped in the NFV network; K is the number of types of VNF instances included in the NFV network and k indexes the k-th type of VNF instance; P_k(t) is the performance parameter predicted to be required by type-k VNF instances in time slot t; P̂_k(t) is the performance parameter actually required by type-k VNF instances in time slot t; and ζ_o(t) is the cost per unit of VNF instance performance parameter.
The performance parameter represents the processing capacity of a VNF instance: the larger the performance parameter, the stronger the processing capacity; the smaller the performance parameter, the weaker the processing capacity.
In the embodiment of the present invention, when λ_i(t) > α_i(t), the NFV network is under-provisioned, that is, the processing capacity provided by the NFV network cannot meet the actual demand. In this case, VNF instances in the NFV network make data flows wait or drop them due to insufficient resources when providing network function services, incurring a cost loss at the unit cost ζ_c(t). When P_k(t) > P̂_k(t), the NFV network is over-provisioned, that is, the processing capacity provided by the NFV network exceeds the actual demand, and some resources in the NFV network are wasted, incurring a cost loss at the unit cost ζ_o(t).
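Under the assumption that under-provisioning is penalized in proportion to the per-chain traffic shortfall and over-provisioning in proportion to the per-type excess performance parameter, the per-slot provisioning cost can be sketched as follows. All numeric values and the function name are illustrative only.

```python
def provision_cost(actual, predicted, perf_pred, perf_actual, zeta_c, zeta_o):
    """Per-slot cost sketch: under-provisioning (traffic waits or is dropped)
    plus over-provisioning (idle VNF capacity).

    actual, predicted:       per-service-chain traffic rates for the slot
    perf_pred, perf_actual:  per-VNF-type performance parameters for the slot
    zeta_c, zeta_o:          unit costs for the two penalty terms
    """
    under = sum(max(a - p, 0.0) for a, p in zip(actual, predicted))
    over = sum(max(pp - pa, 0.0) for pp, pa in zip(perf_pred, perf_actual))
    return zeta_c * under + zeta_o * over

cost = provision_cost(
    actual=[120.0, 80.0], predicted=[100.0, 90.0],   # chain rates (made up, Mbps)
    perf_pred=[5.0, 3.0], perf_actual=[4.0, 3.5],    # per-type performance params
    zeta_c=2.0, zeta_o=1.0)                          # unit costs (made up)
```

With these numbers the first chain is under-provisioned by 20 Mbps and the first VNF type is over-provisioned by 1 performance unit; a candidate prediction network would be preferred when it keeps this total low across slots.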
In another alternative embodiment, the electronic device may further select a neural network model from the plurality of neural network models to predict the third flow rate of each service chain in the time slot next to the current time slot according to other parameters, such as root mean square error, iteration time, and the like.
For ease of understanding, the GRU neural network and the LSTM neural network were trained using the same preset training set, as shown in figs. 5-a and 5-b. Fig. 5-a is a first schematic diagram of neural network performance comparison according to an embodiment of the present invention, and fig. 5-b is a second such diagram. In figs. 5-a and 5-b, graph 501 represents the GRU neural network and graph 502 represents the LSTM neural network.
Fig. 5-a shows the root mean square error of the GRU and LSTM neural networks as a function of the preset time slot window, for window sizes of 5, 10, 15, 20, 25 and 30. By comparison, the error trends of the two networks are basically consistent; however, for the same preset time slot window size, the root mean square error of the GRU neural network is significantly smaller than that of the LSTM neural network. That is, the GRU neural network predicts the third traffic rate significantly more accurately than the LSTM neural network. Therefore, considering prediction accuracy, the GRU neural network may be chosen as the neural network model for predicting the third traffic rate of each service chain in the time slot following the current time slot.
In an alternative embodiment, based on the schematic diagram shown in fig. 5-a, the root mean square error is the smallest when the size of the predetermined timeslot window is 20. That is, when the size of the predetermined timeslot window is 20, the prediction accuracy for the third flow rate is relatively high. Therefore, the size of the preset slot window may be set to 20.
In the schematic diagram shown in fig. 5-b, the number of iterations of both the GRU neural network and the LSTM neural network is fixed at 200 as an example, and the iteration time of both networks shows an increasing trend as the preset time slot window increases. However, the growth trend of the iteration time of the GRU neural network is significantly smaller than that of the LSTM neural network, and the gap widens as the preset time slot window increases. That is, the iteration efficiency of the GRU neural network is significantly higher than that of the LSTM neural network. Thus, taking into account the prediction efficiencies of the GRU neural network and the LSTM neural network, the GRU neural network may be determined as the neural network model that predicts the third flow rate for each service chain at the time slot next to the current time slot.
In an optional embodiment, according to the method shown in fig. 1, an embodiment of the present invention further provides a VNF instance deployment method. As shown in fig. 6, fig. 6 is a second flowchart of a VNF instance deployment method according to an embodiment of the present invention. The method comprises the following steps.
Step S601, obtain a first traffic rate of a first number of historical time slots before a current time slot of each service chain in the NFV network, and a second traffic rate of the current time slot.
Step S602, predict a third flow rate of each service chain in the next time slot according to the first flow rate and the second flow rate.
The above steps S601 to S602 are the same as the above steps S101 to S102.
Step S603, calculate a first performance parameter required by each type VNF instance of the NFV network in the next timeslot according to the third flow rate of each service chain and the number of each type VNF instance in each service chain.
In an alternative embodiment, the electronic device may calculate the first performance parameter required by the NFV network for each type of VNF instance at the next timeslot using the following formula.
P_k(t+1) = Σ_{i: t+1 ∈ [t_i, t_i+τ_i]} n_{i,k} · f_k(λ_i(t+1))

Wherein, P_k(t+1) is the first performance parameter required by the k-type VNF instances in the t+1 slot. [t_i, t_i+τ_i] is the time range of the ith service chain, and τ_i is the life cycle of the ith service chain, so the sum runs over the service chains that are active in the t+1 slot. The function f_k(λ_i(t+1)) maps the traffic rate of the ith service chain in the t+1 slot to the performance parameter required by a k-type VNF instance, and n_{i,k} is the number of k-type VNF instances in the ith service chain.

In the embodiment of the present invention, the functional relationship between the traffic rate of the k-type VNF instance in the t+1 timeslot and the performance parameter of the k-type VNF instance may be linear, that is, the function f_k(λ_i(t+1)) is a linear function. For example, the function f_k(λ_i(t+1)) may be a direct proportional function. Here, the function f_k(λ_i(t+1)) is not particularly limited.
Step S604, calculating the number of VNF instances of each type in the NFV network to be adjusted in the next timeslot according to the first performance parameter required by the NFV network in the next timeslot for each VNF instance of the type and the second performance parameter of each VNF instance of the type.
In an alternative embodiment, the electronic device may calculate the number of VNF instances of each type in the NFV network to be adjusted at the next timeslot using the following formula.
Δn_k(t+1) = Round[(P_k(t+1) − P_k(t)) / C_k]

Wherein, Δn_k(t+1) is the number of k-type VNF instances to be adjusted by the NFV network at the t+1 slot, Round[·] is the rounding operation, P_k(t) is the performance parameter of the k-type VNF instances in the t slot, and C_k is the second performance parameter of one k-type VNF instance.
In the embodiment of the invention, when P_k(t+1) > P_k(t), the difference P_k(t+1) − P_k(t) is a positive number, so the number to be adjusted is positive, indicating that the number of k-type VNF instances is increased in the next slot of the current slot. When P_k(t+1) < P_k(t), the difference P_k(t+1) − P_k(t) is a negative number, so the number to be adjusted is negative, indicating that the number of k-type VNF instances is reduced in the next slot of the current slot. When P_k(t+1) = P_k(t), the difference is 0, so the number to be adjusted is 0, indicating that the number of k-type VNF instances is unchanged in the next slot of the current slot.
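The adjustment count of step S604 can be sketched as below. This is a minimal illustration in which the "rounding operation" is read as a ceiling, so that provisioned capacity never falls short of the required performance — that reading, and the function name, are assumptions.

```python
import math

def instances_to_adjust(p_next, p_now, cap_per_instance):
    """Number of k-type VNF instances to adjust at slot t+1.
    p_next: required performance P_k(t+1); p_now: current P_k(t);
    cap_per_instance: second performance parameter C_k of one instance.
    Positive result: deploy more instances; negative: remove; zero: unchanged."""
    return math.ceil((p_next - p_now) / cap_per_instance)
```

With a ceiling, a fractional surplus demand still provisions one extra instance, while a fractional drop removes one instance fewer than the raw ratio, erring on the side of capacity.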
The above-mentioned steps S603 to S604 are a refinement of the above-mentioned step S103.
Through the above steps S603 and S604, the electronic device may accurately determine the data to be adjusted of each type of VNF instance required in the NFV network at the next timeslot, so that the NFV network is deployed based on the data to be adjusted, and the accuracy of VNF instance deployment is improved.
Step S605, for each type of VNF instance in the NFV network, in a case that the to-be-adjusted number of the type of VNF instance indicates that the number of the type of VNF instance is increased in the next timeslot, deploying the to-be-adjusted number of the type of VNF instances in the NFV network.
Step S605 is the same as step S104.
In an optional embodiment, according to the method shown in fig. 1, an embodiment of the present invention further provides a VNF instance deployment method. As shown in fig. 7, fig. 7 is a third flowchart of a VNF instance deployment method according to an embodiment of the present invention. The method comprises the following steps.
Step S701 is to obtain a first traffic rate of a first number of historical time slots before a current time slot of each service chain in the NFV network, and a second traffic rate of the current time slot.
Step S702 is to predict a third traffic rate of each service chain in the next timeslot according to the first traffic rate and the second traffic rate.
Step S703 is to determine the number of VNF instances of each type in the NFV network to be adjusted in the next timeslot based on the third flow rate of each service chain and the number of VNF instances of each type in each service chain.
Step S704, for each type of VNF instance in the NFV network, if the to-be-adjusted number of VNF instances of the type indicates that the number of VNF instances of the type is increased in the next timeslot, calculating a minimum cost for deployment of each type of VNF instance according to a preset weight value of each type of VNF instance and a resource utilization rate of each node in the NFV network.
In an optional embodiment, for each type VNF instance in the NFV network, the electronic device may compare the determined number of VNF instances of the type to be adjusted in the next timeslot with a preset number threshold, that is, 0, so as to determine whether the number of VNF instances of the type to be adjusted in the next timeslot is greater than the preset number threshold. When the number of VNF instances of the type to be adjusted in the next timeslot is greater than the preset number threshold, the electronic device may determine that the number to be adjusted is a positive value, that is, the number of VNF instances of the type to be adjusted in the next timeslot indicates that the number of VNF instances of the type is increased in the next timeslot. When the number of to-be-adjusted instances of the type VNF in the next timeslot is less than the preset number threshold, the electronic device may determine that the number of to-be-adjusted instances is a negative value, that is, the number of to-be-adjusted instances of the type VNF in the next timeslot indicates that the number of VNF instances of the type VNF is decreased in the next timeslot.
In an alternative embodiment, the electronic device may calculate the minimum cost for each type of VNF instance deployment using the following formula.
cost_k(t+1) = φ_k · U(t+1), if U(t+1) ≤ U_0; cost_k(t+1) = φ_k · U_0 · e^(U(t+1) − U_0), if U(t+1) > U_0

Wherein, cost_k(t+1) is the minimum cost of k-type VNF instance deployment at the t+1 slot, and φ_k is the preset unit cost of resource-utilization traffic for the k-type VNF instance. The resource utilization is U(t+1) = C_v(t+1)/C_v, where C_v(t+1) is the performance parameter of the NFV network at the t+1 slot, C_v is the preset performance parameter of the NFV network, and U_0 is the preset resource utilization threshold.
In the embodiment of the present invention, when the resource utilization rate is lower than the preset utilization rate threshold, the minimum cost of each type of VNF instance deployment is in a linear relationship with the resource utilization rate. When the resource utilization rate is higher than the preset utilization rate threshold, the minimum cost of each type of VNF instance deployment is in an exponential relationship with the resource utilization rate; that is, the larger the resource utilization rate, the higher the deployment cost of the VNF instance, because highly utilized server resources are more expensive.
Step S705, determining a quadruple in the markov decision process according to the minimum cost for deployment of each type of VNF instance.
The quadruple in the markov decision process includes { S, a, R, P }, where S, a, R, P respectively represent a state space, a motion space, a reward function, and a transition probability.
The state space S may be represented as the deployment positions corresponding to all newly deployed VNF instances in the NFV network, and specifically may be represented as a one-dimensional vector S = {loc(k_m) | loc(k_m) ∈ [1, |V|], k ∈ K}, where loc(k_m) is the deployment location of the mth k-type VNF instance.
The action space A may be represented as the states corresponding to the position-change actions available when each VNF instance in the NFV network is deployed, and specifically may be represented as a one-dimensional vector A = {a(k_m) | k ∈ K}, where a(k_m) is, for example, the state corresponding to moving the mth k-type VNF instance from one server to another server.
The reward function R may be expressed as the instantaneous reward that the NFV network obtains by changing the state from s to s' with a certain position change action. The reward function R may be expressed as:
R(s,s′,a)=Cost(s′)-Cost(s)
a is an action, and may be specifically expressed as an action of adjusting the position of the VNF instance. Cost (s ') is the Cost of operating the NFV network in state s'. Cost(s) is the running cost of the NFV network at state s.
The transition probability P can be learned in a model-free manner. The method of determining the transition probability is not specifically described here.
In an alternative embodiment, the electronic device may calculate the above operating cost using the following formula.
C_running(t) = Σ_{v=1}^{V} Σ_{k∈K} c_k^run · n_{k,v}(t)

Wherein, C_running(t) is the operating cost of the NFV network at the t slot, V is the number of network nodes included in the NFV network, v is the vth network node, n_{k,v}(t) is the number of k-type VNF instances deployed on the vth network node at the t slot, and c_k^run is the running cost of one k-type VNF instance.
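As a rough sketch, the running cost sums a per-instance cost over all nodes and VNF types. The names below, and the per-instance unit cost itself, are assumptions for illustration only.

```python
def running_cost(deployed, unit_cost):
    """deployed[v][k]: number of k-type VNF instances on network node v at slot t.
    unit_cost[k]: assumed per-instance running cost of a k-type VNF.
    Returns the total operating cost C_running(t) of the NFV network."""
    return sum(unit_cost[k] * n for node in deployed for k, n in node.items())
```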
Step S706, for each type of VNF instance, determining, by using a pre-trained A3C network, deployment positions of a number of VNF instances of the type to be adjusted in the NFV network according to the quadruple and the number of VNF instances of the type to be adjusted in the next time slot, where the A3C network is obtained by training the sample quadruple in the sample markov decision process.
In the embodiment of the present invention, the Asynchronous Advantage Actor-Critic (A3C) network includes two deep neural networks, namely, an actor network and a critic network. The actor network and the critic network each take the state space S in the quadruple as input. The output of the actor network is the probability distribution over the actions that can be taken in the current state, and the electronic device may determine the action with the highest probability as the next action, that is, the deployment position of the to-be-adjusted VNF instances of the type in the NFV network in the next time slot. The output of the critic network is a real number that represents the reward expected under the current policy, that is, for each type of VNF instance, the adjustment of the deployment position in the NFV network from the input state s to the state s' at the next time slot.
In the embodiment of the present invention, the training of the A3C network does not require preset sample data: the Markov quadruple defines the actions, states, and reward values, training is completed by step-by-step exploration, and the transition probability is adjusted according to the change of the reward value, so as to obtain an optimal transition probability matrix. The whole training system is composed of a global agent and a plurality of parallel local agents. After each local agent interacts with the environment for a certain amount of data, the gradient of the neural network loss function in that agent's thread is calculated, and the neural network of the global agent is updated using the calculated gradient. That is, the parallel local agents can independently use their accumulated gradients to update the network parameters of the global neural network. At intervals, the neural network of each local agent updates its own parameters to the network parameters of the public neural network (that is, the network parameters of the global agent's neural network), so as to guide the subsequent environment interaction and complete the training process of the A3C network. Here, the training process of the A3C network is not specifically described.
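The actor's action selection and the critic's contribution can be sketched in plain Python. This is illustrative only: real A3C actors and critics are neural networks, and the names here are hypothetical. The actor head yields logits over position-change actions, which are softmaxed so the highest-probability action is chosen, while the critic's value estimates feed a one-step advantage that scales the policy-gradient update.

```python
import math

def select_action(logits):
    """Softmax over the actor's logits; return the most probable action index
    and the full probability distribution."""
    m = max(logits)                            # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    return probs.index(max(probs)), probs

def advantage(reward, v_s, v_next, gamma=0.99):
    """One-step advantage R + gamma*V(s') - V(s) from the critic's value estimates."""
    return reward + gamma * v_next - v_s
```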
Step S707, according to the deployment position of the VNF instance of the type in the NFV network, deploying a number of VNF instances of the type to be adjusted in the NFV network.
The above-mentioned steps S704 to S707 are a refinement of the above-mentioned step S104.
With the steps S704-S707, the electronic device fully considers the deployment cost of each type of VNF instance when training the A3C network, so that the VNF instance deployment cost is reduced while the VNF instance in the NFV network is deployed.
In an optional embodiment, according to the method shown in fig. 1, an embodiment of the present invention further provides a VNF instance deployment method. As shown in fig. 8, fig. 8 is a fourth flowchart of a VNF instance deployment method according to an embodiment of the present invention. The method comprises the following steps.
Step S801 is to obtain a first traffic rate of a first number of historical time slots before a current time slot of each service chain in the NFV network, and a second traffic rate of the current time slot.
Step S802, predict a third flow rate of each service chain in the next time slot according to the first flow rate and the second flow rate.
Step S803, determining the number of VNF instances of each type in the NFV network to be adjusted in the next timeslot based on the third traffic rate of each service chain and the number of VNF instances of each type in each service chain.
Step S804, for each type of VNF instance in the NFV network, in a case that the to-be-adjusted number of the type of VNF instance indicates that the number of the type of VNF instance is increased in the next timeslot, deploying the to-be-adjusted number of the type of VNF instances in the NFV network.
The above steps S801 to S804 are the same as the above steps S101 to S104.
Step S805, for each type of VNF instance in the NFV network, if the to-be-adjusted number of VNF instances of the type indicates that the number of VNF instances of the type is reduced in the next timeslot, based on the to-be-adjusted number of VNF instances of the type, the state of the to-be-adjusted number of VNF instances of the type in the NFV network is adjusted to a first state when the next timeslot arrives, where the first state is used to indicate that the VNF instances are on a network node in the NFV network but do not provide network function services.
In the embodiment of the present invention, for each VNF instance, a current state of the VNF instance in the NFV network may be one of a first state, a second state, and a third state. Wherein the first state (which may also be denoted as IDLE state) is used to indicate that the VNF instance is on a network node in the NFV network but does not provide network function services. I.e. the VNF instance is still in a network node of the NFV network, but the VNF instance does not provide network function services for traffic in the NFV network. The second state (which may also be denoted as a DELETED state) is used to indicate that the VNF instance has been removed from a network node of the NFV network. The third state (which may also be denoted as ACTIVE state) is used to indicate that the VNF instance is on a network node in the NFV network and provides network function services.
For the VNF instance in the first state, the electronic device may enable the VNF instance to provide a network function service for traffic in the NFV network by adjusting a current state of the VNF instance, that is, adjusting the current state of the VNF instance from the first state to a third state. For the VNF instance in the second state, the electronic device may cause the VNF instance to provide network function services for traffic in the NFV network by relocating the VNF instance on a network node in the NFV network.
Through the step S805, the electronic device may redeploy the VNF instances in the NFV network when the traffic of the next timeslot actually arrives, so as to reduce the resource consumption caused by redundant VNF instances.
In an optional embodiment, in the step S805, based on the number to be adjusted of the VNF instances of the type, the state of the number to be adjusted of the VNF instances of the type in the NFV network is adjusted to a first state when the next timeslot arrives, which may specifically be:
the electronic device adjusts, based on the to-be-adjusted number of VNF instances of the type, a state of the to-be-adjusted number of VNF instances of the type with a minimum utilization rate in the NFV network to a first state when a next timeslot arrives.
In the embodiment of the present invention, by adjusting the states of the VNF instances of the type to be adjusted with the smallest utilization rate to the first state, the possibility that the VNF instance with a higher utilization rate is frequently deployed can be effectively reduced.
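Selecting which instances to move to the first state can be sketched as below (names are hypothetical; utilization values are taken per instance):

```python
def least_utilized(instances, n_idle):
    """instances: list of (instance_id, utilization) pairs for one VNF type.
    Returns the ids of the n_idle least-utilized instances — the candidates
    to move to the first (IDLE) state when the next slot arrives."""
    ranked = sorted(instances, key=lambda pair: pair[1])
    return [iid for iid, _ in ranked[:n_idle]]
```

Idling the least-utilized instances first leaves the heavily used ones in place, which is exactly how the text argues frequent redeployment of busy instances is avoided.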
In an alternative embodiment, according to the method shown in fig. 8, an embodiment of the present invention provides a VNF instance deployment method. As shown in fig. 9, fig. 9 is a fifth flowchart of a VNF instance deployment method according to an embodiment of the present invention. The method comprises the following steps.
Step S901 obtains a first traffic rate of a first number of historical time slots before a current time slot of each service chain in the NFV network, and a second traffic rate of the current time slot.
Step S902, predict a third flow rate of each service chain in the next time slot according to the first flow rate and the second flow rate.
Step S903, determining the number of VNF instances of each type in the NFV network to be adjusted in the next timeslot based on the third flow rate of each service chain and the number of VNF instances of each type in each service chain.
Step S904, for each type VNF instance in the NFV network, in a case that the to-be-adjusted number of the type VNF instance indicates that the number of the type VNF instance is increased in the next timeslot, deploying the to-be-adjusted number of the type VNF instances in the NFV network.
Step S905, for each type of VNF instance in the NFV network, if the to-be-adjusted number of the type of VNF instance indicates that the number of the type of VNF instance is reduced in the next timeslot, based on the to-be-adjusted number of the type of VNF instance, the state of the to-be-adjusted number of the type of VNF instance in the NFV network is adjusted to a first state when the next timeslot arrives, where the first state is used to indicate that the VNF instance is on a network node in the NFV network but does not provide a network function service.
The above-described steps S901 to S905 are the same as the above-described steps S801 to S805.
Step S906, recording the VNF instance whose current state is the first state into a preset queue.
In this embodiment of the present invention, the electronic device may construct a preset queue for a VNF instance in a first state in the NFV network. When the current state of a certain VNF instance in the NFV network changes to the first state, the electronic device may record the VNF instance in a preset queue.
In an optional embodiment, the preset queue may include at least an identifier of the VNF instance and time information. The time information may be a state change time of the VNF instance, or may be a time recorded to a preset queue by the VNF instance. In addition, the preset queue may further include a type of each VNF instance.
In an optional embodiment, for the preset queue, in order to facilitate distinguishing and searching each type of VNF instance, a corresponding preset queue may exist for each type of VNF instance.
In the embodiment of the present invention, the preset queue is not particularly limited.
In step S907, for each VNF instance in the preset queue, in a case that the existence duration of the VNF instance is greater than a preset duration threshold, removing the VNF instance from the network node where the VNF instance is located, and adjusting the current state of the VNF instance to the second state.
In this step, the electronic device may detect in real time whether the existence duration of each VNF instance in the preset queue is greater than the preset duration threshold. When the existence duration of a certain VNF instance in the preset queue is greater than the preset duration threshold, the electronic device may remove the VNF instance from the network node where the VNF instance is located, and adjust the current state of the VNF instance to the second state. At this time, the VNF instance is no longer included in the preset queue.
By detecting the existence duration of each VNF instance in the first state, the electronic device may promptly remove VNF instances that have remained in the first state for a long time, thereby keeping NFV resource consumption low.
In an embodiment of the invention, the resource consumption rate of the VNF instance in the first state is smaller than the resource consumption rate of the VNF instance in the third state. I.e. the VNF instance in the first state consumes relatively little resources.
In an optional embodiment, based on the preset queue, the deploying the number of VNF instances of the type to be adjusted in the step S904 in the NFV network may specifically include the following two cases.
In case one, if the number of VNF instances of the type in the preset queue is not less than the number of VNF instances of the type to be adjusted, the number of VNF instances of the type to be adjusted in the preset queue is deployed in the NFV network.
In case two, if the number of the type VNF instances in the preset queue is smaller than the number of the type VNF instances to be adjusted, the type VNF instances in the preset queue and a third number of the type VNF instances are deployed in the NFV network, where the third number is a difference between the number of the type VNF instances to be adjusted and the number of the type VNF instances in the preset queue.
The third number of VNF instances of the type may be VNF instances in a third state.
VNF instances are frequently created or deleted due to traffic fluctuations, which results in an increase in the amount of computation and in the deployment cost. Therefore, in the embodiment of the present invention, by adjusting surplus VNF instances from the third state to the first state, and only removing them after the preset duration threshold expires (hereinafter referred to as a buffering mechanism), the efficiency of VNF instance deployment may be improved, and the computation amount of frequent creation or deletion of VNF instances and the deployment cost of VNF instances may be effectively reduced.
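A minimal sketch of such a buffering mechanism follows (class and method names are illustrative, not from the patent): idle instances are parked per type together with the slot at which they entered the first state; deployment first reuses parked instances, and instances idle longer than the duration threshold are expired to the second (DELETED) state.

```python
import collections

class IdleBuffer:
    """Buffering mechanism sketch: reuse IDLE VNF instances before creating
    new ones; expire instances idle for more than max_age slots."""
    def __init__(self, max_age):
        self.max_age = max_age
        # per-type FIFO of (instance_id, slot when parked)
        self.queues = collections.defaultdict(collections.deque)

    def park(self, vnf_type, instance_id, slot):
        """Record an instance that just entered the first (IDLE) state."""
        self.queues[vnf_type].append((instance_id, slot))

    def expire(self, slot):
        """Remove instances idle longer than max_age; they move to DELETED."""
        removed = []
        for q in self.queues.values():
            while q and slot - q[0][1] > self.max_age:
                removed.append(q.popleft()[0])
        return removed

    def acquire(self, vnf_type, needed):
        """Reuse up to `needed` idle instances of the type; return the reused
        ids and the count that must still be newly created."""
        q = self.queues[vnf_type]
        reused = [q.popleft()[0] for _ in range(min(needed, len(q)))]
        return reused, needed - len(reused)
```

The two branches of `acquire` correspond to case one (enough idle instances in the queue) and case two (queue short by a third number) above.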
For ease of understanding, the following description is made in conjunction with FIGS. 10-a, 10-b, and 10-c. Figure 10-a is a schematic diagram of VNF instance deployment costs on a service chain according to an embodiment of the present invention. Fig. 10-b is a schematic diagram of the total cost for NFV network deployment provided by the embodiment of the present invention. Fig. 10-c is a schematic diagram of the total cost of operation of a system provided by an embodiment of the present invention.
In the schematic diagram shown in fig. 10-a, a curve 1001 is the deployment cost of a VNF instance on a service chain under the above-mentioned buffering mechanism, and a curve 1002 is the deployment cost of a VNF instance on the same service chain without the above-mentioned buffering mechanism. By comparison, the deployment cost of curve 1001 is significantly lower than that of curve 1002.
In the schematic shown in FIG. 10-b, graph 1003 is the total cost of deployment with a buffering mechanism and graph 1004 is the total cost of deployment without a buffering mechanism. By contrast, the total deployment cost of the deployment method with the buffer mechanism is obviously lower than that of the deployment method without the buffer mechanism, and the gap between the total deployment cost with the buffer mechanism and the total deployment cost without the buffer mechanism is larger and larger as the number of service chains is increased.
In the schematic diagram shown in fig. 10-c, a graph 1005 is a total operating cost of the system in 720 slots for deploying VNF instances in the NFV network by using a Greedy Algorithm (GATP), a graph 1006 is a total operating cost of the system in 720 slots for deploying VNF instances in the NFV network by using a deep Q learning (DQNTP) algorithm, and a graph 1007 is a total operating cost of the system in 720 slots for deploying VNF instances provided by the embodiment of the present invention. By contrast, the total running cost of all three algorithms increases as the number of service chains increases. However, the total running cost of the algorithm provided by the embodiment of the present invention is the lowest and is always lower than the other two algorithms (i.e., the GATP algorithm and the DQNTP algorithm). For example, when there are 150 service chains in the NFV network, the total cost of operation for graph 1007 is 7.4% and 22.2% lower than the total cost of operation for graph 1005 and graph 1006, respectively. Therefore, the VNF instance deployment method provided by the embodiment of the invention is better in operation total cost.
In the embodiment of the present invention, the performance of the trained GRU neural network and A3C network may affect the deployment cost of the VNF instances. Therefore, when training the above GRU neural network and A3C network, in addition to the cost C_provision(t) generated by the above under-supply or over-supply and the running cost C_running(t), the minimum operating cost of the NFV network may also be considered. The training mode is not specifically described here.
In an alternative embodiment, the electronic device may calculate the minimum operating cost C of the NFV network by using the following formula.
C = Minimise[ Σ_{t∈Γ} (C_running(t) + C_deployment(t) + C_provision(t)) ]

Wherein, Minimise[·] indicates reducing the bracketed quantity to its minimum value, and Γ is the total number of time slots. C_deployment(t) is the deployment cost of the VNF instances on all network nodes in the NFV network at the t slot:

C_deployment(t) = Σ_{v∈V} Σ_{k∈K} d_k · n_{k,v}^new(t)

where d_k is the deployment cost of one k-type VNF instance, and n_{k,v}^new(t), the number of k-type VNF instances newly deployed on the network node v at the t slot, may be specifically expressed as:

n_{k,v}^new(t) = max(0, n_{k,v}(t) − n_{k,v}(t−1))

max is the max operation, n_{k,v}(t) is the number of k-type VNF instances on the network node v at the t slot, and n_{k,v}(t−1) is the number of k-type VNF instances on the network node v at the t−1 slot.
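Under assumed names, the deployment-cost terms above can be sketched as:

```python
def newly_deployed(n_now, n_prev):
    """max(0, n(t) - n(t-1)): only increases in the instance count incur
    deployment cost; removals cost nothing here."""
    return max(0, n_now - n_prev)

def deployment_cost(counts_t, counts_prev, unit_deploy_cost):
    """counts_t[v][k], counts_prev[v][k]: instance counts per node and type at
    slots t and t-1; unit_deploy_cost[k]: assumed cost of deploying one
    k-type instance. Returns C_deployment(t)."""
    total = 0.0
    for node_now, node_prev in zip(counts_t, counts_prev):
        for k, n in node_now.items():
            total += unit_deploy_cost[k] * newly_deployed(n, node_prev.get(k, 0))
    return total
```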
The minimum operating cost C described above satisfies the following conditions:

Σ_{k∈K} c_k · n_{k,v}(t) ≤ C_v, for each network node v

wherein c_k is the performance parameter of a k-type VNF instance and C_v is the total performance parameter of the network node v. This condition limits the total performance parameter of the VNF instances deployed at the t slot on each network node of the NFV network to be no greater than the total performance parameter of that node. A further condition limits each VNF instance to being deployed on only one network node, and another condition represents the quantity relation between the k-type VNF instances deployed in the NFV network at the t slot and the k-type VNF instances running at the t slot.
In order to further embody that the GRU neural network is adopted to improve the accuracy of the third flow rate and ensure the service quality of the NFV network in the above embodiments, the following description is made with reference to fig. 11-a to fig. 11-e. Fig. 11-a is a schematic diagram of an actual traffic rate of a service chain in 720 time slots according to an embodiment of the present invention. Fig. 11-b is a first schematic diagram of traffic rates of a service chain within 720 time slots, which are obtained based on threshold prediction according to an embodiment of the present invention. Fig. 11-c is a first schematic diagram of traffic rates of a service chain within 720 time slots, which are predicted based on a GRU neural network according to an embodiment of the present invention. Fig. 11-d is a second schematic diagram of traffic rates of a service chain within 720 time slots, which are obtained based on threshold prediction according to an embodiment of the present invention. Fig. 11-e is a second schematic diagram of traffic rates of a service chain within 720 time slots, which are predicted based on a GRU neural network according to an embodiment of the present invention. Wherein FIG. 11-d is a combination of FIG. 11-a and FIG. 11-b, and FIG. 11-e is a combination of FIG. 11-a and FIG. 11-c.
As can be seen from a comparison of fig. 11-d and 11-e, both of the approaches shown in fig. 11-d and 11-e can deploy more VNF instances as arriving traffic increases and reduce the number of VNF instances as arriving traffic decreases, thereby tracking the arriving traffic. Taking the time slot 91 in fig. 11-d as an example, the traffic rate of the service chain is 494.5420Mbps, and the number of VNF instances required is 10. Because deploying a VNF instance incurs a delay of 30 seconds, the threshold-based VNF scaling mechanism has a certain hysteresis: during those 30 seconds, the traffic of the timeslot 91 is still served by the VNF instances that were deployed in the NFV network at the timeslot 90, and only 9 VNF instances were deployed at the timeslot 90. Therefore, 19.1% of the traffic in those 30 seconds cannot be served by the network functions in time, which degrades the service quality of the NFV network. However, in fig. 11-e, the number of VNF instances to be deployed for the timeslot 91 is predicted at the timeslot 90, that is, 10 VNF instances are deployed in the NFV network, and at this time the traffic rate of the network function service that can be provided is 500Mbps, so the VNF instances in the NFV network can already meet the requirement of the timeslot 91 before the timeslot 91 arrives.
Based on the same inventive concept, corresponding to the VNF instance deployment method provided by the above embodiments, an embodiment of the present invention further provides a VNF instance deployment apparatus. Fig. 12 is a schematic structural diagram of a VNF instance deployment apparatus according to an embodiment of the present invention. The apparatus includes the following modules.
An obtaining module 1201, configured to obtain a first traffic rate of a first number of historical time slots before a current time slot of each service chain in the NFV network, and a second traffic rate of the current time slot;
a predicting module 1202, configured to predict, according to the first traffic rate and the second traffic rate, a third traffic rate of each service chain in a next timeslot;
a determining module 1203, configured to determine, based on the third traffic rate of each service chain and the number of each type VNF instance in each service chain, a number to be adjusted of each type VNF instance in the NFV network at a next timeslot;
a deployment module 1204, configured to deploy, for each type of VNF instance in the NFV network, the number of VNF instances of the type to be adjusted in the NFV network when the number of VNF instances of the type to be adjusted indicates that the number of VNF instances of the type is increased in the next timeslot.
Optionally, the predicting module 1202 may be specifically configured to predict, for each service chain, a third flow rate of the service chain in a next time slot by using a pre-trained GRU neural network and taking the first flow rate of the service chain and the second flow rate of the service chain as inputs; wherein the GRU neural network is trained based on a fourth traffic rate for a second number of historical time slots.
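To illustrate how a GRU maps a window of past rates to a next-slot estimate, the following minimal single-unit GRU runs the standard gate equations over the first and second traffic rates. The weights are placeholders for illustration; in the described method the network would be trained on the fourth traffic rates of a second number of historical time slots.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

class TinyGRU:
    """Minimal single-unit GRU with a linear readout (illustrative only)."""

    def __init__(self, wz=0.5, uz=0.5, wr=0.5, ur=0.5, wh=1.0, uh=0.5, out=1.0):
        self.wz, self.uz = wz, uz      # update-gate weights
        self.wr, self.ur = wr, ur      # reset-gate weights
        self.wh, self.uh = wh, uh      # candidate-state weights
        self.out = out                 # linear readout weight

    def predict(self, rates):
        """Feed normalized traffic rates of past slots; return next-slot estimate."""
        h = 0.0
        for x in rates:
            z = sigmoid(self.wz * x + self.uz * h)              # update gate
            r = sigmoid(self.wr * x + self.ur * h)              # reset gate
            h_tilde = math.tanh(self.wh * x + self.uh * r * h)  # candidate state
            h = (1.0 - z) * h + z * h_tilde                     # new hidden state
        return self.out * h

gru = TinyGRU()
history = [0.42, 0.45, 0.48, 0.47, 0.49]   # first traffic rates (normalized)
current = 0.50                              # second traffic rate
third_rate = gru.predict(history + [current])
print(round(third_rate, 4))
```

A production implementation would use a framework GRU layer with trained weights and a denormalization step; the recurrence above is the same in structure.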
Optionally, the determining module 1203 may be specifically configured to calculate, according to the third traffic rate of each service chain and the number of each type of VNF instance in each service chain, a first performance parameter required by each type of VNF instance of the NFV network in the next time slot;
and to calculate, according to the first performance parameter required by each type of VNF instance of the NFV network in the next time slot and the second performance parameter of each type of VNF instance, the number to be adjusted of each type of VNF instance in the NFV network at the next time slot.
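The two calculations can be sketched as follows. The chain compositions, rates, and per-instance capacities are hypothetical stand-ins for the first and second performance parameters.

```python
import math

# Hypothetical inputs (names and numbers are illustrative, not from the patent):
predicted_rate = {"chain1": 120.0, "chain2": 80.0}   # third traffic rate, Mbps
chain_vnfs = {"chain1": ["firewall", "nat"], "chain2": ["firewall"]}
capacity_mbps = {"firewall": 50.0, "nat": 100.0}     # second performance parameter
current_count = {"firewall": 3, "nat": 2}            # instances deployed now

# First performance parameter: aggregate load each VNF type must carry next slot.
load = {t: 0.0 for t in capacity_mbps}
for chain, rate in predicted_rate.items():
    for vnf_type in chain_vnfs[chain]:
        load[vnf_type] += rate

# Number to be adjusted: required instances minus what is already deployed
# (positive -> scale out in advance, negative -> scale in).
to_adjust = {
    t: math.ceil(load[t] / capacity_mbps[t]) - current_count[t]
    for t in capacity_mbps
}
print(to_adjust)
```

Here the firewall type carries 200 Mbps and needs 4 instances (one more than deployed), while the NAT type's 2 instances already suffice.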
Optionally, the deployment module 1204 may be specifically configured to calculate a minimum cost for deployment of each type of VNF instance according to a preset weight value of each type of VNF instance and a resource utilization rate of each node in the NFV network;
determining a quadruple in a Markov decision process according to the minimum cost of each type of VNF instance deployment;
for each type of VNF instance, determining the deployment positions of the number to be adjusted of VNF instances of the type in the NFV network by using a pre-trained A3C network according to the quadruple and the number to be adjusted of VNF instances of the type in the next time slot; wherein the A3C network is trained on sample quadruples of a sample Markov decision process;
and deploying the number of the VNF instances of the type to be adjusted in the NFV network according to the deployment positions of the VNF instances of the type in the NFV network.
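The cost term that drives placement can be illustrated with a greedy stand-in for the trained A3C agent: each instance goes to the feasible node with the lowest weighted utilization cost. Node names, capacities, and the weight are hypothetical; the actual method selects actions with the A3C policy over the Markov quadruple rather than this greedy rule.

```python
def place_instances(nodes, weight, per_instance_load, k):
    """Place k instances of one VNF type, one node at a time.

    nodes: dict node -> current resource utilization in [0, 1]
    weight: preset weight value of this VNF type
    per_instance_load: utilization one instance adds to its host node
    Returns the list of chosen nodes (a node may be chosen more than once).
    """
    util = dict(nodes)
    placement = []
    for _ in range(k):
        # Deployment cost of a candidate node: type weight x node utilization.
        feasible = {n: u for n, u in util.items() if u + per_instance_load <= 1.0}
        if not feasible:
            break  # no node can host another instance
        best = min(feasible, key=lambda n: weight * feasible[n])
        placement.append(best)
        util[best] += per_instance_load
    return placement

chosen = place_instances({"n1": 0.6, "n2": 0.2, "n3": 0.4}, weight=1.0,
                         per_instance_load=0.3, k=2)
print(chosen)
```

The two instances land on the least-loaded nodes n2 and n3, and the running utilization update prevents piling every instance onto one node.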
Optionally, the VNF instance deployment apparatus may further include:
and an adjusting module, configured to, for each type of VNF instance in the NFV network, if the to-be-adjusted number of the type of VNF instance indicates that the number of the type of VNF instance is reduced at a next timeslot, adjust, based on the to-be-adjusted number of the type of VNF instance, a state of the to-be-adjusted number of the type of VNF instance in the NFV network to a first state when the next timeslot arrives, where the first state is used to indicate that the VNF instance is on a network node in the NFV network but does not provide a network function service.
Optionally, the adjusting module may be specifically configured to adjust, based on the number to be adjusted of the VNF instances of the type, a state of the VNF instances of the type with a minimum utilization rate in the NFV network to a first state when a next timeslot arrives.
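Selecting the minimum-utilization instances for the first state can be sketched as follows; instance names and utilizations are hypothetical.

```python
def to_standby(instances, k):
    """Pick the k instances with the lowest utilization to switch to the
    first state (kept on their nodes, no longer serving traffic)."""
    ranked = sorted(instances, key=instances.get)  # ascending by utilization
    return ranked[:k]

utilization = {"vnf-a": 0.72, "vnf-b": 0.15, "vnf-c": 0.40, "vnf-d": 0.08}
standby = to_standby(utilization, k=2)
print(standby)
```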
Optionally, the VNF instance deployment apparatus may further include:
the recording module is used for recording the VNF instances with the current state as the first state into a preset queue;
a removing module, configured to, for each VNF instance in the preset queue, remove the VNF instance from the network node where the VNF instance is located when an existing time length of the VNF instance is greater than a preset time length threshold, and adjust a current state of the VNF instance to a second state.
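The preset queue and its duration threshold can be sketched with a FIFO of (instance, enqueue-time) pairs; the 30-second TTL is a hypothetical value for the preset duration threshold.

```python
from collections import deque

TTL = 30.0  # preset duration threshold in seconds (hypothetical value)

def purge(queue, now):
    """Remove instances that have idled in the first state longer than TTL.

    queue holds (instance_id, enqueue_time) pairs, oldest first; expired
    instances are torn down from their node (second state) and returned.
    """
    removed = []
    while queue and now - queue[0][1] > TTL:
        instance_id, _ = queue.popleft()
        removed.append(instance_id)  # state transitions to the second state
    return removed

q = deque([("vnf-1", 100.0), ("vnf-2", 125.0)])
expired = purge(q, now=140.0)   # vnf-1 idle 40 s > 30 s; vnf-2 idle only 15 s
print(expired, list(q))
```

Because the queue is FIFO, the oldest standby instance is always at the head, so the purge can stop at the first non-expired entry.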
Optionally, the deployment module 1204 may be specifically configured to deploy, to the NFV network, a number of VNF instances of the type to be adjusted in the preset queue if the number of the VNF instances of the type in the preset queue is not less than the number of the VNF instances of the type to be adjusted;
if the number of the VNF instances of the type in the preset queue is smaller than the number of the VNF instances of the type to be adjusted, deploying the VNF instances of the type in the preset queue and a third number of the VNF instances of the type in the NFV network, where the third number is a difference value between the number of the VNF instances of the type to be adjusted and the number of the VNF instances of the type in the preset queue.
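The split between reusing queued standby instances and instantiating new ones (the "third number") reduces to a small computation:

```python
def plan_scale_out(in_queue: int, needed: int):
    """Return (reused_from_queue, newly_created) for a scale-out of `needed`
    instances of one type, given `in_queue` standby instances of that type.
    newly_created is the third number: needed minus what the queue supplies."""
    reused = min(in_queue, needed)
    return reused, needed - reused

print(plan_scale_out(in_queue=5, needed=3))   # -> (3, 0): queue alone suffices
print(plan_scale_out(in_queue=2, needed=6))   # -> (2, 4): third number = 6 - 2
```

Reactivating a standby instance skips the deployment delay, which is why the queue is drained before any new instance is created.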
With the apparatus provided by the embodiment of the present invention, the third traffic rate of each service chain in the next time slot is predicted from the first traffic rates of a first number of historical time slots before the current time slot and the second traffic rate of the current time slot, and the number to be adjusted of each type of VNF instance in the NFV network at the next time slot is determined based on the third traffic rate of each service chain and the number of each type of VNF instance in each service chain. When the number to be adjusted of a type of VNF instance indicates that the number of VNF instances of that type should increase in the next time slot, the VNF instances are deployed in the NFV network in advance. This effectively avoids the long deployment delay that would otherwise occur when the actual traffic arrives, thereby effectively improving the service quality of the NFV network.
Based on the same inventive concept, according to the VNF instance deployment method provided by the above-described embodiment of the present invention, an embodiment of the present invention further provides an electronic device, as shown in fig. 13, including a processor 1301, a communication interface 1302, a memory 1303 and a communication bus 1304, where the processor 1301, the communication interface 1302, and the memory 1303 complete communication with each other through the communication bus 1304;
a memory 1303 for storing a computer program;
the processor 1301 is configured to implement the following steps when executing the program stored in the memory 1303:
acquiring a first traffic rate of a first number of historical time slots before a current time slot of each service chain in the NFV network and a second traffic rate of the current time slot;
predicting a third flow rate of each service chain in the next time slot according to the first flow rate and the second flow rate;
determining the number to be adjusted of each type of VNF instance in the NFV network at the next time slot based on the third flow rate of each service chain and the number of each type of VNF instance in each service chain;
for each type of VNF instance in the NFV network, deploying the to-be-adjusted number of VNF instances of the type in the NFV network if the to-be-adjusted number of VNF instances of the type indicates that the number of VNF instances of the type is increased at a next timeslot.
With the electronic device provided by the embodiment of the present invention, the third traffic rate of each service chain in the next time slot is predicted from the first traffic rates of a first number of historical time slots before the current time slot and the second traffic rate of the current time slot, and the number to be adjusted of each type of VNF instance in the NFV network at the next time slot is determined based on the third traffic rate of each service chain and the number of each type of VNF instance in each service chain. When the number to be adjusted of a type of VNF instance indicates that the number of VNF instances of that type should increase in the next time slot, the VNF instances are deployed in the NFV network in advance. This effectively avoids the long deployment delay that would otherwise occur when the actual traffic arrives, thereby effectively improving the service quality of the NFV network.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but also Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components.
Based on the same inventive concept, according to the VNF instance deployment method provided in the above-described embodiment of the present invention, an embodiment of the present invention further provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of any of the above-described VNF instance deployment methods.
Based on the same inventive concept, according to the VNF instance deployment method provided in the above-described embodiment of the present invention, an embodiment of the present invention further provides a computer program product including instructions, which, when run on a computer, causes the computer to execute any one of the VNF instance deployment methods in the above-described embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another via a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)).
It is noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element defined by the phrase "comprising a ..." does not, without further limitation, exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for embodiments such as the apparatus, the electronic device, the computer-readable storage medium, and the computer program product, since they are substantially similar to the method embodiments, the description is simple, and for relevant points, reference may be made to part of the description of the method embodiments.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A method for Virtual Network Function (VNF) instance deployment, the method comprising:
acquiring a first traffic rate of a first number of historical time slots before a current time slot of each service chain in a Network Function Virtualization (NFV) network and a second traffic rate of the current time slot;
predicting a third flow rate of each service chain in a next time slot according to the first flow rate and the second flow rate;
determining the number to be adjusted of each type of VNF instance in the NFV network at a next timeslot based on the third flow rate of each service chain and the number of each type of VNF instance in each service chain;
for each type of VNF instance in the NFV network, if the number to be adjusted of VNF instances of the type indicates that the number of VNF instances of the type is increased in the next time slot, deploying the number to be adjusted of VNF instances of the type in the NFV network.
2. The method of claim 1, wherein the step of predicting the third traffic rate for each service chain in the next time slot according to the first traffic rate and the second traffic rate comprises:
for each service chain, taking a first traffic rate of the service chain and a second traffic rate of the service chain as input, and predicting a third traffic rate of the service chain in the next time slot by using a pre-trained gated recurrent unit (GRU) neural network; wherein the GRU neural network is trained based on a fourth traffic rate for a second number of historical time slots.
3. The method of claim 1, wherein the step of determining the to-be-adjusted number of each type of VNF instance in the NFV network at a next timeslot based on the third traffic rate of each service chain and the number of each type of VNF instance in each service chain comprises:
calculating a first performance parameter required by each type VNF instance of the NFV network at a next time slot according to the third flow rate of each service chain and the number of each type VNF instance in each service chain;
and calculating the quantity to be adjusted of each type of VNF instance in the NFV network at the next time slot according to the first performance parameter required by each type of VNF instance at the next time slot of the NFV network and the second performance parameter of each type of VNF instance.
4. The method according to claim 1, wherein the step of deploying the number of VNF instances of the type to be adjusted in the NFV network comprises:
calculating the minimum cost of each type of VNF instance deployment according to the preset weight value of each type of VNF instance and the resource utilization rate of each node in the NFV network;
determining a quadruple in a Markov decision process according to the minimum cost of each type of VNF instance deployment;
for each type of VNF instance, determining the deployment positions of the number to be adjusted of VNF instances of the type in the NFV network by using a pre-trained asynchronous advantage actor-critic (A3C) network according to the quadruple and the number to be adjusted of VNF instances of the type at the next time slot; wherein the A3C network is trained on sample quadruples of a sample Markov decision process;
and deploying the number of VNF instances of the type to be adjusted in the NFV network according to the deployment positions of the VNF instances of the type in the NFV network.
5. The method of claim 1, further comprising:
for each type of VNF instance in the NFV network, if the to-be-adjusted number of VNF instances of that type indicates that the number of VNF instances of that type is reduced in the next time slot, adjusting, based on the to-be-adjusted number of VNF instances of that type, a state of the to-be-adjusted number of VNF instances of that type in the NFV network at arrival of the next time slot to a first state, the first state being used to indicate that the VNF instances are on a network node in the NFV network but do not provide network function services.
6. The method according to claim 5, wherein the step of adjusting the state of the VNF instances of the type to be adjusted in the NFV network to the first state when the next time slot arrives based on the number of VNF instances of the type to be adjusted comprises:
and adjusting the state of the VNF instances of the type with the minimum utilization rate to a first state when the next time slot arrives based on the number of VNF instances of the type to be adjusted.
7. The method of claim 5, further comprising:
recording the VNF instances with the current state being the first state into a preset queue;
for each VNF instance in the preset queue, in the case that the existence duration of the VNF instance is greater than a preset duration threshold, removing the VNF instance from the network node where the VNF instance is located, and adjusting the current state of the VNF instance to a second state.
8. The method according to claim 7, wherein the step of deploying the number of VNF instances of the type to be adjusted in the NFV network comprises:
if the number of the type of VNF instances in the preset queue is not less than the number to be adjusted of the type of VNF instances, deploying the number of the type of VNF instances to be adjusted in the preset queue to the NFV network;
if the number of the type of VNF instances in the preset queue is smaller than the number of the VNF instances of the type to be adjusted, deploying the type of VNF instances in the preset queue and a third number of the type of VNF instances into the NFV network, where the third number is a difference between the number of the VNF instances of the type to be adjusted and the number of the VNF instances of the type in the preset queue.
9. An apparatus for Virtual Network Function (VNF) instance deployment, the apparatus comprising:
an obtaining module, configured to obtain a first traffic rate of a first number of historical time slots before a current time slot of each service chain in a Network Function Virtualization (NFV) network, and a second traffic rate of the current time slot;
a prediction module, configured to predict a third traffic rate of each service chain in a next timeslot according to the first traffic rate and the second traffic rate;
a determining module, configured to determine, based on the third flow rate of each service chain and the number of VNF instances of each type in each service chain, a number to be adjusted for each VNF instance of each type in the NFV network at a next timeslot;
a deployment module, configured to, for each type of VNF instance in the NFV network, deploy the number to be adjusted of VNF instances of the type in the NFV network when the number to be adjusted of VNF instances of the type indicates that the number of VNF instances of the type is increased in the next timeslot.
10. The apparatus according to claim 9, wherein the predicting module is specifically configured to, for each service chain, take a first traffic rate of the service chain and a second traffic rate of the service chain as input and predict a third traffic rate of the service chain in a next time slot by using a pre-trained gated recurrent unit (GRU) neural network; wherein the GRU neural network is trained based on a fourth traffic rate for a second number of historical time slots.
CN202011027090.5A 2020-09-25 2020-09-25 Virtual network function VNF instance deployment method and device Pending CN112199153A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011027090.5A CN112199153A (en) 2020-09-25 2020-09-25 Virtual network function VNF instance deployment method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011027090.5A CN112199153A (en) 2020-09-25 2020-09-25 Virtual network function VNF instance deployment method and device

Publications (1)

Publication Number Publication Date
CN112199153A true CN112199153A (en) 2021-01-08

Family

ID=74007554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011027090.5A Pending CN112199153A (en) 2020-09-25 2020-09-25 Virtual network function VNF instance deployment method and device

Country Status (1)

Country Link
CN (1) CN112199153A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110225277A1 (en) * 2010-03-11 2011-09-15 International Business Machines Corporation Placement of virtual machines based on server cost and network cost
WO2018000240A1 (en) * 2016-06-29 2018-01-04 Orange Method and system for the optimisation of deployment of virtual network functions in a communications network that uses software defined networking
CN108616377A (en) * 2016-12-13 2018-10-02 中国电信股份有限公司 Business chain virtual machine control method and system
CN109995583A (en) * 2019-03-15 2019-07-09 清华大学深圳研究生院 A kind of scalable appearance method and system of NFV cloud platform dynamic of delay guaranteed
CN111464335A (en) * 2020-03-10 2020-07-28 北京邮电大学 Intelligent service customization method and system for endogenous trusted network


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
RIMING TONG ET AL.: "VNF Dynamic Scaling and Deployment Algorithm Based on Traffic Prediction", 《2020 INTERNATIONAL WIRELESS COMMUNICATIONS AND MOBILE COMPUTING(IWCMC)》 *
王东升等: "电力5G业务切片全生命周期研究", 《电力信息与通信技术》 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114401194A (en) * 2021-12-29 2022-04-26 山东省计算中心(国家超级计算济南中心) Dynamic expansion method and platform supporting network function virtualization and computer
CN114401194B (en) * 2021-12-29 2023-08-01 山东省计算中心(国家超级计算济南中心) Dynamic expansion method, platform and computer supporting network function virtualization

Similar Documents

Publication Publication Date Title
US11233710B2 (en) System and method for applying machine learning algorithms to compute health scores for workload scheduling
CN112000459B (en) Method for expanding and shrinking capacity of service and related equipment
CN112416554B (en) Task migration method and device, electronic equipment and storage medium
CN109324875B (en) Data center server power consumption management and optimization method based on reinforcement learning
US7941387B2 (en) Method and system for predicting resource usage of reusable stream processing elements
Barati et al. A hybrid heuristic-based tuned support vector regression model for cloud load prediction
CN113778691B (en) Task migration decision method, device and system
Yao et al. Perturbation analysis and optimization of multiclass multiobjective stochastic flow models
KR102027303B1 (en) Migration System and Method by Fuzzy Value Rebalance in Distributed Cloud Environment
Cai et al. SARM: service function chain active reconfiguration mechanism based on load and demand prediction
Tuli et al. Start: Straggler prediction and mitigation for cloud computing environments using encoder lstm networks
Roy et al. Online reinforcement learning of optimal threshold policies for Markov decision processes
CN113992527A (en) Network service function chain online migration method and system
US11620207B2 (en) Power efficient machine learning in cloud-backed mobile systems
US20230053575A1 (en) Partitioning and placement of models
Hammami et al. On-policy vs. off-policy deep reinforcement learning for resource allocation in open radio access network
CN112199153A (en) Virtual network function VNF instance deployment method and device
Shayesteh et al. Automated concept drift handling for fault prediction in edge clouds using reinforcement learning
KR20230089509A (en) Bidirectional Long Short-Term Memory based web application workload prediction method and apparatus
US20200267091A1 (en) Joint Control Of Communication And Computation Resources Of A Computerized System
KR20200042221A (en) Apparatus and Method for managing power of processor in a mobile terminal device
US20230060623A1 (en) Network improvement with reinforcement learning
Wang et al. Model-based scheduling for stream processing systems
Martins et al. A prediction-based multisensor heuristic for the internet of things
Wang et al. Uncertainty-aware weighted fair queueing for routers based on deep reinforcement learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210108
