CN113918277A - Data center-oriented service function chain optimization arrangement method and system - Google Patents

Data center-oriented service function chain optimization arrangement method and system

Info

Publication number
CN113918277A
CN113918277A (application CN202111101331.0A)
Authority
CN
China
Prior art keywords
vnf
sfc
optimization
server
service
Prior art date
Legal status
Pending
Application number
CN202111101331.0A
Other languages
Chinese (zh)
Inventor
黄骅
吴玉静
曹斌
范菁
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT
Priority to CN202111101331.0A
Publication of CN113918277A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0803 Configuration setting
    • H04L 41/0823 Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
    • H04L 41/0833 Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability for reduction of network energy consumption
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0896 Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/12 Discovery or management of network topologies
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/14 Network analysis or design
    • H04L 41/145 Network analysis or design involving simulating, designing, planning or modelling of a network
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45595 Network integration; Enabling network access in virtual machine instances
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

A data-center-oriented service function chain optimization orchestration method treats the resources required by virtual network functions as variable parameters, analyzes the relationship among request arrival rate, computing resources and processing delay based on queuing theory and elastic resource allocation, and provides a two-stage heuristic orchestration strategy with energy consumption and network delay as optimization indices. In the first stage, a set of mappings between VNFs and servers is obtained with a greedy deployment strategy; in the second stage, on the basis of the solution set of the previous stage, a nonlinear constrained optimization problem is solved to obtain the optimal configuration of computing resources. The invention also discloses a system for implementing the data-center-oriented service function chain optimization orchestration method. The invention effectively reduces server energy consumption, link bandwidth occupancy and service delay, and improves the deployment success rate. Given the network topology and the SFC data, the invention completes the optimized resource orchestration of the SFCs, and the orchestration result can be obtained within controllable running time.

Description

Data center-oriented service function chain optimization arrangement method and system
Technical Field
The invention relates to the field of network service function deployment in network communication, and in particular to a data-center-oriented service function chain optimization orchestration method.
Background Art
With the rapid emergence of new Internet services, the traditional network architecture has found it difficult to support the ever-increasing performance requirements of next-generation networks. Network Function Virtualization (NFV) is one of the key technologies for implementing the 5G service architecture. NFV implements the various network elements of existing networks in software as Virtual Network Functions (VNFs). In a network environment built on VNFs, services are mainly carried by Service Function Chains (SFCs); a service function chain is an ordered sequence of VNFs through which traffic passes. SFC orchestration refers to deploying the VNFs of a given, known set of SFCs in a reasonable manner, under certain constraints, so as to optimize service performance or economy. Conventional SFC orchestration methods usually treat the resources required by a VNF as fixed system parameters, ignoring the relationship between resources and VNF performance, and adopt the following kinds of algorithms aimed at minimizing virtual machine resource consumption or bandwidth resource consumption:
The first method designs a joint optimization algorithm for VNF deployment and path selection with the goal of minimizing the usage of computing and bandwidth resources. The algorithm first finds an SFC scheduling path that meets the bandwidth resource requirement, and then determines the deployment position of each VNF.
The second method designs an SFC orchestration algorithm, based on the Viterbi algorithm, that reduces resource overhead while guaranteeing QoS. To guarantee the QoS requirements of network services, nodes on high-delay links are migrated so as to reduce transmission delay and meet the service quality requirement.
The third method uses a weighted service chain placement algorithm based on service priority and a request scheduling algorithm based on combinatorial optimization.
The fourth method is a greedy-strategy-based service function chain deployment method that selects the service path with the minimum deployment cost by exhaustively enumerating all paths that meet the connectivity and policy requirements.
The above methods mostly assume that the resource requirements and processing rate of a VNF are predetermined. In actual application scenarios, however, the processing rate of a VNF is affected by factors such as resource allocation and request arrival rate, which in turn affect the service performance of the SFC. Some existing work introduces this dynamic aspect and establishes a queuing-theory model and an elastic resource allocation model separately, but the two models are not connected. Aiming at the dynamic resource allocation problem, the invention takes data-center service function chain orchestration as the scenario and, based on queuing theory, further considers the relationship among service arrival rate, resource allocation and VNF processing capacity, establishing a service function chain orchestration optimization model with the reduction of energy consumption and network delay as the optimization targets. Meanwhile, to ensure that a satisfactory solution can be obtained within limited time, the invention adopts a heuristic approach and designs a two-stage heuristic algorithm.
Disclosure of Invention
The invention provides a data-center-oriented service function chain optimization orchestration method and system, aiming to overcome the defects of the prior art.
Considering the shortcomings of existing research, the invention combines queuing theory with an elastic resource allocation mechanism, establishes a data-center-oriented service function chain orchestration model with energy consumption and network delay as the optimization targets, and provides a two-stage heuristic orchestration method. In the first stage, a set of mappings between VNFs and servers is obtained with a greedy deployment strategy; in the second stage, on the basis of the solution set of the previous stage, a nonlinear constrained optimization problem is further solved to obtain the optimal configuration of computing resources.
A data-center-oriented service function chain optimization orchestration method, characterized in that: a service function chain optimization orchestration model based on elastic resource allocation is provided by analyzing the relationship among the request arrival rate, the computing resources and the processing delay; on this basis, with energy consumption and network delay as optimization indices, a two-stage algorithm is designed: first a feasible solution set of the problem is obtained through a greedy deployment algorithm, and then a nonlinear constrained optimization problem is solved to obtain the optimal resource allocation; through the service function chain orchestration algorithm based on elastic resource allocation, the optimized configuration of resources is realized and the service quality is improved; the method specifically comprises the following steps:
1. constructing an arrangement model;
and constructing a service function chain optimization arrangement model based on elastic resource allocation, and providing a foundation for realizing an optimization method.
The invention is based on the fat-tree network topology, which is widely adopted in data centers. From top to bottom, the topology consists of a core layer, an aggregation layer, an edge layer and a server layer. A k-ary fat tree contains k pods, and each pod contains k/2 edge switches and k/2 aggregation switches. The number of core switches is (k/2)², the total number of aggregation-layer and edge-layer switches is k², the total number of servers supported by the network is k³/4, and all VNFs can only be deployed at the server layer.
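As a quick sanity check of these counts, the following minimal sketch (not part of the patent text; the function name is illustrative) computes the number of pods, switches and servers of a k-ary fat tree:

```python
# Minimal sketch: element counts of a k-ary fat tree, where k is the switch port count.
def fat_tree_counts(k: int) -> dict:
    """Return the number of pods, switches and servers in a k-ary fat tree."""
    assert k % 2 == 0, "a fat tree requires an even k"
    return {
        "pods": k,
        "edge_switches": k * (k // 2),         # k/2 edge switches per pod
        "aggregation_switches": k * (k // 2),  # k/2 aggregation switches per pod
        "core_switches": (k // 2) ** 2,
        "servers": k ** 3 // 4,                # each edge switch hosts k/2 servers
    }

if __name__ == "__main__":
    print(fat_tree_counts(4))
    # {'pods': 4, 'edge_switches': 8, 'aggregation_switches': 8,
    #  'core_switches': 4, 'servers': 16}
```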
1.1, constructing a dynamic resource allocation model;
on the basis of researching the influence of the resource allocation on the VNF performance and the end-to-end delay, a dynamic resource allocation model is constructed.
The underlying network is abstracted as an undirected weighted graph G ═ N, E, where N is the set of network nodes and E is the set of links. The VNF is mostly deployed in a container, so the VNF's ability to process requests is affected by the amount of computing resources allocated to the VNF.
An M/M/1 queuing model is used to model VNF service. Let $\kappa_i$ denote the request arrival rate of service function chain $s_i$ and $\mu_i^j$ the service rate of the j-th VNF $f_i^j$ of $s_i$. From queuing theory, the average processing time $t_i^j$ of $f_i^j$ is related to $\kappa_i$ and $\mu_i^j$ by

$t_i^j = \dfrac{1}{\mu_i^j - \kappa_i}$  (1)

In practical application scenarios the service rate $\mu_i^j$ depends strongly on the allocated computing resources (CPU resources and memory resources are treated uniformly as computing resources), and an elastic resource allocation mode is adopted. Let $c_i^j$ be the amount of computing resources allocated to VNF $f_i^j$, and assume a piecewise-linear relationship between the service rate $\mu_i^j$ and $c_i^j$:

$\mu_i^j = a_i^j c_i^j + b_i^j, \quad c_{i,\min}^j \le c_i^j \le c_{i,\max}^j$  (2)

The parameters $a_i^j$ and $b_i^j$ are calculated by the following formulas:

$a_i^j = \dfrac{\mu_{i,\max}^j - \mu_{i,\min}^j}{c_{i,\max}^j - c_{i,\min}^j}$  (3)

$b_i^j = \mu_{i,\min}^j - a_i^j c_{i,\min}^j$  (4)

where $[c_{i,\min}^j, c_{i,\max}^j]$ is the value range of $c_i^j$, and $\mu_{i,\min}^j$ and $\mu_{i,\max}^j$ are the service rates corresponding to the resource allocations $c_{i,\min}^j$ and $c_{i,\max}^j$, respectively.

By formula (2), within the value range $[c_{i,\min}^j, c_{i,\max}^j]$ the service rate of the VNF rises linearly as the allocated resources increase; $c_{i,\min}^j$ and $c_{i,\max}^j$ are the minimum and maximum resource allocations of VNF $f_i^j$. If $c_i^j < c_{i,\min}^j$, then $f_i^j$ cannot be deployed; when $c_i^j \ge c_{i,\max}^j$, the service rate of $f_i^j$ reaches its maximum, i.e., the processing capacity reaches its upper limit.

For simplicity of description, the allocated resources are expressed in multiples of a resource allocation base unit; the average processing time $t_i^j$ of VNF $f_i^j$ is then obtained by substituting (2) into (1), as shown in formula (5):

$t_i^j = \dfrac{1}{a_i^j c_i^j + b_i^j - \kappa_i}$  (5)

The constraints to be satisfied are

$c_{i,\min}^j \le c_i^j \le c_{i,\max}^j$  (6)

$\mu_i^j > \kappa_i$  (7)

Constraint (6) controls the value range of $c_i^j$; constraint (7) requires the service rate to be greater than the arrival rate, and if constraint (7) is not satisfied, service requests back up.
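The following minimal sketch illustrates formulas (1)-(7) as reconstructed above: the service rate grows piecewise-linearly with the allocated computing resources, and the average processing time follows the M/M/1 formula. The parameter names (c_min, c_max, mu_min, mu_max) are illustrative, not taken from the patent.

```python
def service_rate(c: float, c_min: float, c_max: float,
                 mu_min: float, mu_max: float) -> float:
    """Piecewise-linear service rate mu(c) of a VNF, formulas (2)-(4)."""
    if c < c_min:
        raise ValueError("allocation below c_min: the VNF cannot be deployed")
    if c >= c_max:
        return mu_max                            # processing capacity saturates
    a = (mu_max - mu_min) / (c_max - c_min)      # formula (3)
    b = mu_min - a * c_min                       # formula (4)
    return a * c + b                             # formula (2)

def processing_delay(c: float, kappa: float, **rate_params) -> float:
    """Average processing time t = 1 / (mu - kappa), formulas (1) and (5)."""
    mu = service_rate(c, **rate_params)
    if mu <= kappa:                              # constraint (7): requests back up
        raise ValueError("service rate must exceed the request arrival rate")
    return 1.0 / (mu - kappa)

if __name__ == "__main__":
    params = dict(c_min=1.0, c_max=4.0, mu_min=50.0, mu_max=200.0)
    print(processing_delay(c=2.0, kappa=60.0, **params))  # 1/(100-60) = 0.025
```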
1.2 constructing system constraints;
system constraints need to be considered in the SFC arranging process and are divided into computing resource constraints, broadband resource constraints and SFC end-to-end time delay constraints; first, when a VNF is deployed on a physical host, it needs to occupy a certain amount of computing resources and cannot exceed the upper limit of resources that the host can provide, so there are
Figure BDA0003271042230000043
Wherein N is the number of servers of network G, RnRepresenting resource constraints numbered N servers, M the number of service function chains SFC, N the number of network nodes siRepresents SFC, M of number iiRepresents siThe number of VNFs contained in (a).
Variable 0-1
Figure BDA0003271042230000044
Is defined as follows, if s isiWhen the jth VNF of (a) is deployed in network node n,
Figure BDA0003271042230000045
take 1, otherwise take 0. lm,nM, N ∈ N representing a physical link between network nodes m and N, for any physical link lm,nThe sum of the bandwidth resources occupied by all the SFCs mapped on the link cannot exceed the maximum available bandwidth of the link, so there is the following constraint
Figure BDA0003271042230000046
Where ρ isiAs SFC chains siThe bandwidth resources that are required to be occupied,
Figure BDA0003271042230000047
is a link lm,nIs a variable of 0 to 1
Figure BDA0003271042230000048
Is defined as follows if the logical link is
Figure BDA0003271042230000049
Through am,n
Figure BDA00032710422300000410
Taking 1, otherwise taking 0,
Figure BDA00032710422300000411
representative service chain siAnd (3) a logical link between VNF u and u + 1.
For any VNF in all SFCs, mapping can only be done to one server, so the following constraint holds
Figure BDA00032710422300000412
Wherein the content of the first and second substances,
Figure BDA00032710422300000413
indicates whether to change siIs deployed on network node n, when s isiWhen the optical fiber is deployed on the n,
Figure BDA00032710422300000414
take 1, otherwise take 0.
Second, if all virtual links are available
Figure BDA00032710422300000415
Routing is unique during transmission, i.e. for arbitrary
Figure BDA00032710422300000416
There are the following constraints
Figure BDA0003271042230000051
Finally, each link s needs to be guaranteediEnd-to-end delay requirements; assuming end-to-end delay per SFC by transmission delay
Figure BDA0003271042230000052
And processing time delay
Figure BDA0003271042230000053
Two parts, wherein dm,nIs a link lm,nThe transmission delay of (2). The sum of the two parts is less than or equal to the delay threshold of the service flow, namely the constraint (12) is satisfied
Figure BDA0003271042230000054
Wherein DiIs SFCsiThe total delay threshold of (a) is,
Figure BDA0003271042230000055
reference is made to the formulae (3) and (4).
Due to the fact that all SFCs are deployed in the same machine room based on the SFC deployment scene of the data center, transmission delay in the constraint (12) can be ignored, and the formula (12) is simplified into
Figure BDA0003271042230000056
1.3, selecting an optimization index;
The optimization indices are the delay overhead C and the energy consumption overhead E, both to be minimized. The total delay overhead is

$C = \sum_{i=1}^{M} \sum_{j=1}^{M_i} t_i^j$  (14)

The system energy consumption overhead E consists of two parts, the power-on energy consumption $E_{base}$ and the runtime energy consumption $E_{alloc}$. $E_{base}$ is the energy a booted server consumes when no VNF is deployed on it; it is clearly proportional to the number of powered-on hosts. A 0-1 variable $h_n$ is defined to satisfy

$h_n = \begin{cases} 1, & \text{if at least one VNF is deployed on server } n \\ 0, & \text{otherwise} \end{cases}$  (15)

By (15), for any server n, $h_n = 1$ if a VNF is deployed on it, and $h_n = 0$ otherwise. The total power-on energy consumption is

$E_{base} = \gamma \sum_{n \in N} h_n$  (16)

where $\gamma$ is a coefficient and $\sum_{n \in N} h_n$ is the number of physical servers on which VNFs are deployed. $E_{alloc}$ is defined as the runtime energy consumed by the VNFs on each host n, which is proportional to their computing resource consumption:

$E_{alloc} = T \sum_{n \in N} \sum_{i=1}^{M} \sum_{j=1}^{M_i} x_{i,j}^{n} \, c_i^j$  (17)

where T is a coefficient, so the total energy consumption overhead is $E = E_{base} + E_{alloc}$.

In summary, the SFC orchestration optimization problem is expressed as

min (E + C)  (18)
s.t. (6)-(15)
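The following minimal sketch evaluates the objective (18) for a given deployment, using the energy and delay terms (14)-(17) as reconstructed above; the data structures and coefficient values are illustrative assumptions only.

```python
def total_cost(placement, alloc, delay, gamma: float, T: float) -> float:
    """placement[i][j] = server hosting VNF j of SFC i,
    alloc[i][j]        = computing resources allocated to that VNF,
    delay[i][j]        = its average processing time t_i^j."""
    used_servers = {n for chain in placement for n in chain}
    e_base = gamma * len(used_servers)                       # formula (16)
    e_alloc = T * sum(c for chain in alloc for c in chain)   # formula (17)
    c_delay = sum(t for chain in delay for t in chain)       # formula (14)
    return e_base + e_alloc + c_delay                        # objective (18)

if __name__ == "__main__":
    placement = [[0, 1], [1]]          # two SFCs mapped onto servers 0 and 1
    alloc = [[2.0, 1.5], [3.0]]
    delay = [[0.02, 0.03], [0.01]]
    print(total_cost(placement, alloc, delay, gamma=10.0, T=1.0))  # 26.56
```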
2. Realizing SFC optimal arrangement;
the goal of this stage is to deploy SFCs after a given set of SFCs and achieve the optimization goal of orchestration, i.e., reduce energy consumption and network latency.
2.1 realizing node and link mapping by adopting a greedy-based strategy;
the first step of SFC orchestration is to map VNF nodes and links, and in the mapping process, the system constraints analyzed in step 1.2 need to be satisfied, including resource constraints, bandwidth constraints, and end-to-end network delay constraints. A greedy mapping strategy is adopted, the optimization aim is that the hop count in the SFC deployment process is minimum, namely, the transmission link is shortest, and the strategy further considers the transmission delay on the premise that the SFC can be successfully deployed.
The greedy strategy selects a current optimal value in the solution of each step, so the solution obtained in the step can be regarded as a local optimal solution, the greedy algorithm needs to decompose the total problem into sub-problems, and the sub-problems in the text are the selection of a server and a physical link by deploying a VNF.
Firstly, generating a hop count matrix hoss according to an applied network topology, wherein the hop count matrix hoss is a square matrix with the length being the number of servers, stores hop counts among the servers, namely distance, and takes one hop through a switch; starting mapping work on the basis; the SFC sets need to be processed, so the work should be performed in the order of SFC sequential deployment and VNF sequential deployment in one SFC.
Starting to search from the server with the number 1 when deployment work is started, if the computing resources which can be provided by the server meet the resource requirement of the VNF and the residual bandwidth resources of the link directly connected with the VNF meet the bandwidth requirement of the SFC, successfully deploying the VNF on the server with the number 1, and if not, starting to search other servers until deployment is successful, or ending the deployment work of the SFC if all the servers are not successfully searched; the next VNF after successful deployment starts searching resources from the current server; when the deployment fails, the VNF searches for the servers in the order of the servers in the hops matrix, and the server with the smaller hop count is preferably selected. The search order when deploying a VNF is the current server, the servers connected to the same edge switch, the servers in the same Pod, the servers in other pods.
In the fat tree type topology, if two servers are connected to the same edge switch, only one link exists between the two servers, and when the two servers are located at the same Pod or different pods, two links exist between the two servers, and the hop counts of the two links are the same. After the server to be deployed is determined, whether a link between the two servers meets bandwidth constraints is judged, and if so, deployment is finished; if not, continuing to select the next server until finding the server meeting the constraint condition, and if not, then the SFC cannot be deployed.
2.2 optimizing resource allocation;
After the VNF-to-server mapping has been obtained with the heuristic mapping algorithm of the previous stage, the variables $x_{i,j}^{n}$ and $y_{i,u}^{m,n}$ are known and the power-on energy consumption $E_{base}$ is a fixed value. Taking the resource allocations $c_i^j$ as the decision variables, problem (18) can therefore be further reduced to the following constrained nonlinear optimization problem:

$\min_{c} \; T \sum_{n \in N} \sum_{i=1}^{M} \sum_{j=1}^{M_i} x_{i,j}^{n} c_i^j + \sum_{i=1}^{M} \sum_{j=1}^{M_i} \dfrac{1}{a_i^j c_i^j + b_i^j - \kappa_i}$  (19)

subject to the service-rate, delay and server-capacity constraints below.

The constraints of problem (19) require that, within the value range $[c_{i,\min}^j, c_{i,\max}^j]$, the service rate $\mu_i^j$ can always satisfy $\mu_i^j > \kappa_i$. The physical meaning of this condition is that, by adjusting the resource allocation, the service rate $\mu_i^j$ of VNF $f_i^j$ can always be made greater than the request arrival rate $\kappa_i$ of the dynamic service chain $s_i$; if this condition cannot be met, the dynamic service chain $s_i$ cannot be deployed. The constraint $\sum_{j=1}^{M_i} t_i^j \le D_i$ guarantees the delay constraint, and $\sum_{i}\sum_{j} x_{i,j}^{n} c_i^j \le R_n$ guarantees that the allocated computing resources do not exceed the resource upper limit of any server.

The constrained nonlinear optimization problem is solved with the KKT conditions, which generalize the Lagrange multiplier method. The augmented Lagrangian function of optimization problem (19) is given by formula (20) (equation image not reproduced here), in which the multipliers associated with the constraints, together with $\gamma_n$, are the KKT multipliers. According to the KKT conditions, solving the above optimization problem requires, in addition to the constraints of (19), the stationarity and complementary-slackness conditions of formula (21) (equation image not reproduced here) to hold.

Solving the KKT system (21) yields the amount of resources allocated to each VNF, which enables the orchestration method to achieve high resource utilization and low delay. The optimized resource allocation flow is shown in fig. 4.
The system for implementing the above data-center-oriented service function chain optimization orchestration method comprises a model construction module and an SFC orchestration module connected in sequence, wherein:
1. The model construction module constructs a service function chain optimization arrangement model based on elastic resource allocation, and provides a foundation for the realization of an optimization method.
And 2, the SFC arranging module deploys the SFCs after the SFC set is given, and achieves the optimization goal of arranging, namely reducing energy consumption and network delay.
The invention provides a service function chain optimization arrangement model based on elastic resource allocation by analyzing the relation between the request arrival rate, the computing resources and the processing delay. On the basis, energy consumption and network delay are used as optimization indexes, a two-stage algorithm is designed, firstly, a feasible solution set of the problem is obtained through a greedy deployment algorithm, and then, the nonlinear constraint optimization problem is solved to obtain optimal resource allocation. Through a service function chain arrangement algorithm based on resource elastic allocation, the optimal allocation of resources can be realized, and the service quality is improved.
The invention has the advantages that: the energy consumption of the server, the occupancy rate of the link bandwidth and the service delay are effectively reduced, the deployment success rate is improved, and the arrangement result can be obtained within controllable operation time.
Drawings
FIG. 1 shows the fat-tree network topology used by the present invention.
Fig. 2 is a graph of service rate versus resource allocation in accordance with the present invention.
FIG. 3 is a greedy deployment flow diagram based on the fat tree type network topology of the present invention.
FIG. 4 is a flow chart of resource optimization configuration of the present invention.
Detailed Description
The technical solution of the invention is further explained below with reference to the accompanying drawings.
The invention relates to a service function chain optimization arrangement method facing a data center, which comprises the following two steps:
1. constructing an arrangement model;
and constructing a service function chain optimization arrangement model based on elastic resource allocation, and providing a foundation for realizing an optimization method.
The invention is based on the fat-tree network topology, which is widely adopted in data centers; the fat-tree network topology is shown in fig. 1. From top to bottom, the topology consists of a core layer, an aggregation layer, an edge layer and a server layer. A k-ary fat tree contains k pods, and each pod contains k/2 edge switches and k/2 aggregation switches. The number of core switches is (k/2)², the total number of aggregation-layer and edge-layer switches is k², the total number of servers supported by the network is k³/4, and all VNFs can only be deployed at the server layer.
1.1, constructing a dynamic resource allocation model;
on the basis of researching the influence of the resource allocation on the VNF performance and the end-to-end delay, a dynamic resource allocation model is constructed.
The underlying network is abstracted as an undirected weighted graph G ═ N, E, where N is the set of network nodes and E is the set of links. The VNF is mostly deployed in a container, so the VNF's ability to process requests is affected by the amount of computing resources allocated to the VNF.
An M/M/1 queuing model is used to model VNF service. Let $\kappa_i$ denote the request arrival rate of service function chain $s_i$ and $\mu_i^j$ the service rate of the j-th VNF $f_i^j$ of $s_i$. From queuing theory, the average processing time $t_i^j$ of $f_i^j$ is related to $\kappa_i$ and $\mu_i^j$ by

$t_i^j = \dfrac{1}{\mu_i^j - \kappa_i}$  (1)

In practical application scenarios the service rate $\mu_i^j$ depends strongly on the allocated computing resources (CPU resources and memory resources are treated uniformly as computing resources), and an elastic resource allocation mode is adopted. Let $c_i^j$ be the amount of computing resources allocated to VNF $f_i^j$, and assume a piecewise-linear relationship between the service rate $\mu_i^j$ and $c_i^j$:

$\mu_i^j = a_i^j c_i^j + b_i^j, \quad c_{i,\min}^j \le c_i^j \le c_{i,\max}^j$  (2)

The relationship between service rate and resource allocation is shown in fig. 2. The parameters $a_i^j$ and $b_i^j$ are calculated by the following formulas:

$a_i^j = \dfrac{\mu_{i,\max}^j - \mu_{i,\min}^j}{c_{i,\max}^j - c_{i,\min}^j}$  (3)

$b_i^j = \mu_{i,\min}^j - a_i^j c_{i,\min}^j$  (4)

where $[c_{i,\min}^j, c_{i,\max}^j]$ is the value range of $c_i^j$, and $\mu_{i,\min}^j$ and $\mu_{i,\max}^j$ are the service rates corresponding to the resource allocations $c_{i,\min}^j$ and $c_{i,\max}^j$, respectively.

By formula (2), within the value range $[c_{i,\min}^j, c_{i,\max}^j]$ the service rate of the VNF rises linearly as the allocated resources increase; $c_{i,\min}^j$ and $c_{i,\max}^j$ are the minimum and maximum resource allocations of VNF $f_i^j$. If $c_i^j < c_{i,\min}^j$, then $f_i^j$ cannot be deployed; when $c_i^j \ge c_{i,\max}^j$, the service rate of $f_i^j$ reaches its maximum, i.e., the processing capacity reaches its upper limit.

For simplicity of description, the allocated resources are expressed in multiples of a resource allocation base unit; the average processing time $t_i^j$ of VNF $f_i^j$ is then obtained by substituting (2) into (1), as shown in formula (5):

$t_i^j = \dfrac{1}{a_i^j c_i^j + b_i^j - \kappa_i}$  (5)

The constraints to be satisfied are

$c_{i,\min}^j \le c_i^j \le c_{i,\max}^j$  (6)

$\mu_i^j > \kappa_i$  (7)

Constraint (6) controls the value range of $c_i^j$; constraint (7) requires the service rate to be greater than the arrival rate, and if constraint (7) is not satisfied, service requests back up.
1.2 constructing system constraints;
system constraints need to be considered in the SFC arranging process and are divided into computing resource constraints, broadband resource constraints and SFC end-to-end time delay constraints; first, when a VNF is deployed on a physical host, it needs to occupy a certain amount of computing resources and cannot exceed the upper limit of resources that the host can provide, so there are
Figure BDA0003271042230000111
Wherein N is the number of servers of network G, RnRepresenting resource constraints numbered N servers, M the number of service function chains SFC, N the number of network nodes siRepresents SFC, M of number iiRepresents siThe number of VNFs contained in (a).
Variable 0-1
Figure BDA0003271042230000112
Is defined as follows, if s isiWhen the jth VNF of (a) is deployed in network node n,
Figure BDA0003271042230000113
take 1, otherwise take 0. lm,nM, N ∈ N representing a physical link between network nodes m and N, for any physical link lm,nThe sum of the bandwidth resources occupied by all the SFCs mapped on the link cannot exceed the maximum available bandwidth of the link, so there is the following constraint
Figure BDA0003271042230000114
Where ρ isiAs SFC chains siThe bandwidth resources that are required to be occupied,
Figure BDA0003271042230000115
is a link lm,nIs a variable of 0 to 1
Figure BDA0003271042230000116
Is defined as follows if the logical link is
Figure BDA0003271042230000117
Through am,n
Figure BDA0003271042230000118
Taking 1, otherwise taking 0,
Figure BDA0003271042230000119
representative service chain siAnd (3) a logical link between VNF u and u + 1.
For any VNF in all SFCs, mapping can only be done to one server, so the following constraint holds
Figure BDA00032710422300001110
Wherein the content of the first and second substances,
Figure BDA00032710422300001111
indicates whether to change siIs deployed on network node n, when s isiWhen the optical fiber is deployed on the n,
Figure BDA00032710422300001112
take 1, otherwise take 0.
Second, if all virtual links are available
Figure BDA00032710422300001113
Routing is unique during transmission, i.e. for arbitrary
Figure BDA00032710422300001114
There are the following constraints
Figure BDA00032710422300001115
Finally, each link s needs to be guaranteediEnd-to-end delay requirements; assuming end-to-end delay per SFC by transmission delay
Figure BDA0003271042230000121
And processing time delay
Figure BDA0003271042230000122
Two parts, wherein dm,nIs a link lm,nThe transmission delay of (2). The sum of the two parts is less than or equal to the delay threshold of the service flow, namely the constraint (12) is satisfied
Figure BDA0003271042230000123
Wherein DiIs SFC siThe total delay threshold of (a) is,
Figure BDA0003271042230000124
reference is made to the formulae (3) and (4).
Due to the fact that all SFCs are deployed in the same machine room based on the SFC deployment scene of the data center, transmission delay in the constraint (12) can be ignored, and the formula (12) is simplified into
Figure BDA0003271042230000125
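As an illustration of constraints (8) and (9), the minimal sketch below checks whether a candidate placement and routing are feasible with respect to server capacity and link bandwidth; the container layout and function name are assumptions for illustration, since the patent states the constraints only as inequalities.

```python
from collections import defaultdict

def feasible(placement, alloc, routes, rho, R, B) -> bool:
    """placement[i][j] = server of VNF j of SFC i; alloc[i][j] = its resources;
    routes[i] = list of physical links (m, n) used by SFC i; rho[i] = its
    bandwidth demand; R[n] = capacity of server n; B[(m, n)] = link bandwidth."""
    used_cpu = defaultdict(float)
    used_bw = defaultdict(float)
    for i, chain in enumerate(placement):
        for j, server in enumerate(chain):
            used_cpu[server] += alloc[i][j]      # left-hand side of (8)
        for link in routes[i]:
            used_bw[link] += rho[i]              # left-hand side of (9)
    return (all(used_cpu[n] <= R[n] for n in used_cpu) and
            all(used_bw[l] <= B[l] for l in used_bw))
```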
1.3, selecting an optimization index;
the optimization indexes are that the minimum transmission delay cost C and the energy consumption cost E are adopted, and the total delay cost is
Figure BDA0003271042230000126
Further considering the system energy consumption expense E, the energy consumption E of starting up isbaseAnd run-time energy consumption EallocTwo-part construction, EbaseEnergy consumption when a VNF is not deployed for server boot-up, obviously EbaseIn direct proportion to the number of the started hosts, a 0-1 variable h is definednSatisfy the requirement of
Figure BDA0003271042230000127
As is apparent from equation (15), for any server n, if VNF is deployed, then hn1, otherwise 0; total power-on energy consumption
Figure BDA0003271042230000128
Wherein gamma is a coefficient of the number of the atoms,
Figure BDA0003271042230000129
representing the number of primary servers on which the VNF is deployed; eallocDefining the runtime energy consumption of the host n for the energy consumption occupied by the VNF, which is proportional to the computing resource consumption
Figure BDA00032710422300001210
Is composed of
Figure BDA00032710422300001211
Where T is a coefficient, so the total energy consumption overhead is
Figure BDA0003271042230000131
In summary, the SFC layout optimization problem is specifically expressed as
min(E+C) (18)
s.t(6)-(15)
2. Realizing SFC optimal arrangement;
the goal of this stage is to deploy SFCs after a given set of SFCs and achieve the optimization goal of orchestration, i.e., reduce energy consumption and network latency.
2.1 realizing node and link mapping by adopting a greedy-based strategy;
the first step of SFC orchestration is to map VNF nodes and links, and in the mapping process, the system constraints analyzed in step 1.2 need to be satisfied, including resource constraints, bandwidth constraints, and end-to-end network delay constraints. A greedy mapping strategy is adopted, the optimization aim is that the hop count in the SFC deployment process is minimum, namely, the transmission link is shortest, and the strategy further considers the transmission delay on the premise that the SFC can be successfully deployed.
The greedy strategy selects a current optimal value in the solution of each step, so the solution obtained in the step can be regarded as a local optimal solution, the greedy algorithm needs to decompose the total problem into sub-problems, and the sub-problems in the text are the selection of a server and a physical link by deploying a VNF.
Firstly, generating a hop count matrix hoss according to an applied network topology, wherein the hop count matrix hoss is a square matrix with the length being the number of servers, stores hop counts among the servers, namely distance, and takes one hop through a switch; starting mapping work on the basis; the SFC sets need to be processed, so the work should be performed in the order of SFC sequential deployment and VNF sequential deployment in one SFC.
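A minimal sketch of building the hops matrix by breadth-first search over the data-center topology graph is given below. The graph encoding (adjacency lists of switches and servers) and the function name are assumptions for illustration; the patent only states that the matrix stores inter-server hop counts, one hop per traversed switch, and that the topology is assumed connected.

```python
from collections import deque

def hops_matrix(adj: dict, servers: list) -> dict:
    """adj: node -> list of neighbouring nodes (switches and servers).
    Returns hops[(s, t)] = number of switches on a shortest s-t path."""
    hops = {}
    for src in servers:
        dist = {src: 0}
        queue = deque([src])
        while queue:                      # BFS from each server
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for dst in servers:
            if dst != src:
                # edges on the path minus 1 = number of intermediate switches
                hops[(src, dst)] = dist[dst] - 1
    return hops
```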
Deployment starts the search from the server numbered 1: if the computing resources the server can provide meet the resource requirement of the VNF and the remaining bandwidth of its directly connected link meets the bandwidth requirement of the SFC, the VNF is successfully deployed on server 1; otherwise other servers are searched until deployment succeeds, and if no server succeeds, the deployment of this SFC ends in failure. After a successful deployment, the next VNF starts its resource search from the current server; when that fails, the VNF searches servers in the order given by the hops matrix, preferring servers with a smaller hop count. The search order when deploying a VNF is therefore: the current server, the servers connected to the same edge switch, the servers in the same pod, and the servers in other pods.

In the fat-tree topology, if two servers are connected to the same edge switch there is only one link between them; when the two servers are in the same pod or in different pods, there are two links between them with the same hop count. As shown in fig. 1, server 21 and server 22 are connected to the same edge switch, so there is only one link between them, 21->13->22, with hop count 1. Server 25 and server 28 are located in the same pod; the two links between them are 25->15->7->16->28 and 25->15->8->16->28, with hop count 2. Server 29 and server 34 are located in different pods; the two links between them are 29->17->10->4->12->19->34 and 29->17->10->3->12->19->34, with hop count 2.

After the target server is determined, it is checked whether a link between the two servers satisfies the bandwidth constraint; if so, the deployment is complete; if not, the next server is selected until a server satisfying the constraints is found, and if none exists, the SFC cannot be deployed.

A greedy deployment flow is shown in fig. 3.
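A minimal sketch of the greedy first-stage placement described in this step is given below: VNFs of an SFC are deployed in order, servers are tried in increasing hop count from the server hosting the previous VNF, and a server is accepted only if it has enough computing resources and the connecting link has enough bandwidth. The helper names (cpu_free, link_bw_ok, reserve) are assumptions, not identifiers from the patent.

```python
def place_sfc(sfc, servers, hops, cpu_free, link_bw_ok, reserve):
    """sfc: list of VNF resource demands; returns one server index per VNF,
    or None if the SFC cannot be deployed."""
    placement = []
    current = servers[0]                       # search starts at server number 1
    for demand in sfc:
        # candidate order: current server first, then servers by hop count
        order = sorted(servers, key=lambda n: 0 if n == current
                       else hops.get((current, n), float("inf")))
        chosen = None
        for n in order:
            if cpu_free(n) >= demand and link_bw_ok(current, n):
                chosen = n
                break
        if chosen is None:                     # no feasible server: deployment fails
            return None
        reserve(chosen, demand, current)       # commit resources and bandwidth
        placement.append(chosen)
        current = chosen                       # next VNF searches from here
    return placement
```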
2.2 optimizing resource allocation;
After the VNF-to-server mapping has been obtained with the heuristic mapping algorithm of the previous stage, the variables $x_{i,j}^{n}$ and $y_{i,u}^{m,n}$ are known and the power-on energy consumption $E_{base}$ is a fixed value. Taking the resource allocations $c_i^j$ as the decision variables, problem (18) can therefore be further reduced to the following constrained nonlinear optimization problem:

$\min_{c} \; T \sum_{n \in N} \sum_{i=1}^{M} \sum_{j=1}^{M_i} x_{i,j}^{n} c_i^j + \sum_{i=1}^{M} \sum_{j=1}^{M_i} \dfrac{1}{a_i^j c_i^j + b_i^j - \kappa_i}$  (19)

subject to the service-rate, delay and server-capacity constraints below.

The constraints of problem (19) require that, within the value range $[c_{i,\min}^j, c_{i,\max}^j]$, the service rate $\mu_i^j$ can always satisfy $\mu_i^j > \kappa_i$. The physical meaning of this condition is that, by adjusting the resource allocation, the service rate $\mu_i^j$ of VNF $f_i^j$ can always be made greater than the request arrival rate $\kappa_i$ of the dynamic service chain $s_i$; if this condition cannot be met, the dynamic service chain $s_i$ cannot be deployed. The constraint $\sum_{j=1}^{M_i} t_i^j \le D_i$ guarantees the delay constraint, and the last constraint, $\sum_{i}\sum_{j} x_{i,j}^{n} c_i^j \le R_n$, ensures that the allocated computing resources do not exceed the resource upper limit of any server.

The invention solves this constrained nonlinear optimization problem with the KKT conditions, which generalize the Lagrange multiplier method. The augmented Lagrangian function of optimization problem (19) is given by formula (20) (equation image not reproduced here), in which the multipliers associated with the constraints, together with $\gamma_n$, are the KKT multipliers. According to the KKT conditions, solving the above optimization problem requires, in addition to the constraints of (19), the stationarity and complementary-slackness conditions of formula (21) (equation image not reproduced here) to hold.

Solving the KKT system (21) yields the amount of resources allocated to each VNF, which enables the orchestration method to achieve high resource utilization and low delay. The optimized resource allocation flow is shown in fig. 4.
2.3 description of the algorithm;
the part arranges the contents in step 2.1 and step 2.2 into the following algorithm flow, and the method provided by the invention is explained in detail through the specific flow.
Figure BDA0003271042230000154
Figure BDA0003271042230000161
The problem the invention aims to solve is the design of a service function chain orchestration strategy based on queuing theory and elastic resource allocation. By analyzing the relationship among the request arrival rate, the computing resources and the processing delay, the invention provides a two-stage heuristic algorithm that effectively reduces energy consumption and network delay.
The system for implementing the above data-center-oriented service function chain optimization orchestration method comprises a model construction module and an SFC orchestration module connected in sequence, wherein:
1. The model construction module constructs a service function chain optimization arrangement model based on elastic resource allocation, provides a foundation for the realization of an optimization method, and specifically comprises the technical content of the step 1 of the invention.
And 2, after the SFC set is given, the SFC arrangement module deploys the SFCs and achieves the optimization goal of arrangement, namely, the energy consumption and the network delay are reduced, and the technical content of the step 2 of the invention is specifically included.
The invention effectively reduces server energy consumption, link bandwidth occupancy and service delay, improves the deployment success rate, and, thanks to the heuristic approach, ensures that the orchestration result can be obtained within controllable running time. The function finally realized by the invention is to complete the optimized resource orchestration of the SFCs given the network topology and the SFC data.

Claims (2)

1. A data-center-oriented service function chain optimization orchestration method, characterized in that: a service function chain optimization orchestration model based on elastic resource allocation is provided by analyzing the relationship among the request arrival rate, the computing resources and the processing delay; on this basis, with energy consumption and network delay as optimization indices, a two-stage algorithm is designed: first a feasible solution set of the problem is obtained through a greedy deployment algorithm, and then a nonlinear constrained optimization problem is solved to obtain the optimal resource allocation; through the service function chain orchestration algorithm based on elastic resource allocation, the optimized configuration of resources is realized and the service quality is improved; the method specifically comprises the following steps:
1. constructing an arrangement model;
constructing a service function chain optimization arrangement model based on elastic resource allocation, and providing a foundation for realizing an optimization method;
based on the fat-tree network topology, which, from top to bottom, consists of a core layer, an aggregation layer, an edge layer and a server layer; a k-ary fat-tree architecture contains k pods, each pod containing k/2 edge switches and k/2 aggregation switches; the number of core switches is (k/2)², the total number of aggregation-layer and edge-layer switches is k², the total number of servers supported by the network is k³/4, and all VNFs can only be deployed at the server layer;
1.1, constructing a dynamic resource allocation model;
on the basis of researching the influence of the resource allocation on the VNF performance and the end-to-end delay, a dynamic resource allocation model is constructed;
abstracting an underlying network into an undirected weighted graph G (N, E), wherein N is a network node set, and E is a link set; the VNF is mostly deployed in a container, so the VNF processing request capability is affected by the amount of computing resources allocated to the VNF;
an M/M/1 queuing model is used to model VNF service; let $\kappa_i$ denote the request arrival rate of service function chain $s_i$ and $\mu_i^j$ the service rate of the j-th VNF $f_i^j$ of $s_i$; from queuing theory, the average processing time $t_i^j$ of $f_i^j$ is related to $\kappa_i$ and $\mu_i^j$ by

$t_i^j = \dfrac{1}{\mu_i^j - \kappa_i}$  (1)

in practical application scenarios the service rate $\mu_i^j$ depends strongly on the allocated computing resources (CPU resources and memory resources are treated uniformly as computing resources), and an elastic resource allocation mode is adopted; let $c_i^j$ be the amount of computing resources allocated to VNF $f_i^j$, and assume a piecewise-linear relationship between the service rate $\mu_i^j$ and $c_i^j$:

$\mu_i^j = a_i^j c_i^j + b_i^j, \quad c_{i,\min}^j \le c_i^j \le c_{i,\max}^j$  (2)

the parameters $a_i^j$ and $b_i^j$ are calculated by the following formulas:

$a_i^j = \dfrac{\mu_{i,\max}^j - \mu_{i,\min}^j}{c_{i,\max}^j - c_{i,\min}^j}$  (3)

$b_i^j = \mu_{i,\min}^j - a_i^j c_{i,\min}^j$  (4)

where $[c_{i,\min}^j, c_{i,\max}^j]$ is the value range of $c_i^j$, and $\mu_{i,\min}^j$ and $\mu_{i,\max}^j$ are the service rates corresponding to the resource allocations $c_{i,\min}^j$ and $c_{i,\max}^j$, respectively;

by formula (2), within the value range $[c_{i,\min}^j, c_{i,\max}^j]$ the service rate of the VNF rises linearly as the allocated resources increase; $c_{i,\min}^j$ and $c_{i,\max}^j$ are the minimum and maximum resource allocations of VNF $f_i^j$; if $c_i^j < c_{i,\min}^j$, then $f_i^j$ cannot be deployed; when $c_i^j \ge c_{i,\max}^j$, the service rate of $f_i^j$ reaches its maximum, i.e., the processing capacity reaches its upper limit;

for simplicity of description, the allocated resources are expressed in multiples of a resource allocation base unit; the average processing time $t_i^j$ of VNF $f_i^j$ is then obtained by substituting (2) into (1), as shown in formula (5):

$t_i^j = \dfrac{1}{a_i^j c_i^j + b_i^j - \kappa_i}$  (5)

the constraints to be satisfied are

$c_{i,\min}^j \le c_i^j \le c_{i,\max}^j$  (6)

$\mu_i^j > \kappa_i$  (7)

constraint (6) controls the value range of $c_i^j$; constraint (7) requires the service rate to be greater than the arrival rate, and if constraint (7) is not satisfied, service requests back up;
1.2 constructing system constraints;
system constraints need to be considered in the SFC orchestration process; they fall into computing resource constraints, bandwidth resource constraints and SFC end-to-end delay constraints; first, a VNF deployed on a physical host occupies a certain amount of computing resources, which cannot exceed the upper limit of resources the host can provide, so

$\sum_{i=1}^{M} \sum_{j=1}^{M_i} x_{i,j}^{n} \, c_i^j \le R_n, \quad \forall n \in N$  (8)

where $R_n$ is the resource capacity of the server numbered n, M is the number of service function chains (SFCs), $s_i$ is the SFC numbered i, and $M_i$ is the number of VNFs contained in $s_i$; the 0-1 variable $x_{i,j}^{n}$ is defined as follows: if the j-th VNF of $s_i$ is deployed on network node n, $x_{i,j}^{n}$ takes 1, otherwise 0; $l_{m,n}$, with $m, n \in N$, denotes the physical link between network nodes m and n; for any physical link $l_{m,n}$, the sum of the bandwidth resources occupied by all SFCs mapped onto the link cannot exceed the maximum available bandwidth of the link, so the following constraint holds:

$\sum_{i=1}^{M} \sum_{u=1}^{M_i - 1} y_{i,u}^{m,n} \, \rho_i \le B_{m,n}, \quad \forall l_{m,n} \in E$  (9)

where $\rho_i$ is the bandwidth resource required by SFC $s_i$ and $B_{m,n}$ is the maximum available bandwidth of link $l_{m,n}$; the 0-1 variable $y_{i,u}^{m,n}$ is defined as follows: if the logical link $e_i^{u,u+1}$ passes through $l_{m,n}$, $y_{i,u}^{m,n}$ takes 1, otherwise 0; $e_i^{u,u+1}$ denotes the logical link between VNF u and VNF u+1 of service chain $s_i$;

any VNF of any SFC can be mapped to only one server, so the following constraint holds:

$\sum_{n \in N} x_{i,j}^{n} = 1, \quad \forall i, j$  (10)

where the auxiliary 0-1 indicator of whether $s_i$ is deployed on network node n takes 1 when $s_i$ is deployed on n, and 0 otherwise;

second, the routing of every virtual link $e_i^{u,u+1}$ must be unique during transmission, i.e., for any $e_i^{u,u+1}$ the routing-uniqueness constraint (11) holds (formula (11); the equation image is not reproduced here);

finally, the end-to-end delay requirement of each chain $s_i$ must be guaranteed; the end-to-end delay of each SFC is assumed to consist of a transmission delay part and a processing delay part, where $d_{m,n}$ is the transmission delay of link $l_{m,n}$; the sum of the two parts must be less than or equal to the delay threshold of the service flow, i.e., constraint (12) is satisfied:

$\sum_{u=1}^{M_i - 1} \sum_{l_{m,n} \in E} y_{i,u}^{m,n} d_{m,n} + \sum_{j=1}^{M_i} t_i^j \le D_i$  (12)

where $D_i$ is the total delay threshold of SFC $s_i$, and the processing delay $t_i^j$ is computed with reference to formulas (3) and (4);

since in the data-center SFC deployment scenario all SFCs are deployed in the same machine room, the transmission delay in constraint (12) can be ignored, and (12) simplifies to

$\sum_{j=1}^{M_i} t_i^j \le D_i$  (13)
1.3, selecting an optimization index;
the optimization indices are the minimized delay overhead C and energy consumption overhead E; the total delay overhead is

$C = \sum_{i=1}^{M} \sum_{j=1}^{M_i} t_i^j$  (14)

the system energy consumption overhead E consists of two parts, the power-on energy consumption $E_{base}$ and the runtime energy consumption $E_{alloc}$; $E_{base}$ is the energy a booted server consumes when no VNF is deployed on it; it is clearly proportional to the number of powered-on hosts; a 0-1 variable $h_n$ is defined to satisfy

$h_n = \begin{cases} 1, & \text{if at least one VNF is deployed on server } n \\ 0, & \text{otherwise} \end{cases}$  (15)

by (15), for any server n, $h_n = 1$ if a VNF is deployed on it, and $h_n = 0$ otherwise; the total power-on energy consumption is

$E_{base} = \gamma \sum_{n \in N} h_n$  (16)

where $\gamma$ is a coefficient and $\sum_{n \in N} h_n$ is the number of physical servers on which VNFs are deployed; $E_{alloc}$ is defined as the runtime energy consumed by the VNFs on each host n, which is proportional to their computing resource consumption:

$E_{alloc} = T \sum_{n \in N} \sum_{i=1}^{M} \sum_{j=1}^{M_i} x_{i,j}^{n} \, c_i^j$  (17)

where T is a coefficient, so the total energy consumption overhead is $E = E_{base} + E_{alloc}$;

in summary, the SFC orchestration optimization problem is expressed as

min (E + C)  (18)
s.t. (6)-(15)
2. Realizing SFC optimal arrangement;
the goal of this stage is to deploy SFCs after a given set of SFCs and achieve the optimization goal of orchestration, i.e., reduce energy consumption and network latency;
2.1 realizing node and link mapping by adopting a greedy-based strategy;
the first step of SFC orchestration is to map VNF nodes and links; in the mapping process, the system constraints analyzed in step 1.2 need to be satisfied, including the resource constraints, bandwidth constraints, and end-to-end network delay constraints. A greedy mapping strategy is adopted whose optimization aim is to minimize the hop count in the SFC deployment process, i.e., to make the transmission links as short as possible, so the strategy also accounts for transmission delay on the premise that the SFC can be successfully deployed;
the greedy strategy selects the currently best value at each step, so the solution obtained in each step can be regarded as a local optimum; the greedy algorithm decomposes the overall problem into sub-problems, and the sub-problems here are the selection of a server and a physical link each time a VNF is deployed;
firstly, a hop-count matrix hops is generated from the network topology of the application; it is a square matrix whose dimension equals the number of servers and it stores the hop count, i.e., the distance, between every pair of servers, where each switch traversed counts as one hop. Mapping then proceeds on this basis; since a whole set of SFCs has to be processed, the work is carried out SFC by SFC, and within one SFC the VNFs are deployed in order;
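A possible construction of such a hop-count matrix for a k-ary fat tree is sketched below; it only assumes the hop-counting rule stated above (each traversed switch counts as one hop) and the usual fat-tree layout of k/2 servers per edge switch and k/2 edge switches per Pod, so the helper name and the concrete counts are illustrative:

```python
import numpy as np

def fat_tree_hops(k):
    """Hop-count matrix for a k-ary fat tree.

    Hops = number of switches traversed: 0 for the same server,
    1 via a shared edge switch, 3 inside the same Pod, 5 across Pods.
    """
    servers_per_edge = k // 2
    servers_per_pod = servers_per_edge * (k // 2)
    num_servers = k * servers_per_pod

    hops = np.full((num_servers, num_servers), 5, dtype=int)  # default: different Pods
    for a in range(num_servers):
        for b in range(num_servers):
            if a == b:
                hops[a, b] = 0
            elif a // servers_per_edge == b // servers_per_edge:
                hops[a, b] = 1      # same edge switch
            elif a // servers_per_pod == b // servers_per_pod:
                hops[a, b] = 3      # same Pod, different edge switches
    return hops
```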
when deployment starts, the search begins from server No. 1: if the computing resources the server can provide meet the resource requirement of the VNF and the remaining bandwidth of the links directly connected to it meets the bandwidth requirement of the SFC, the VNF is successfully deployed on server No. 1; otherwise, the other servers are searched until deployment succeeds, and if no server succeeds, the deployment of this SFC ends. The next VNF first looks for resources on the current server; when that fails, servers are searched in the order given by the hops matrix, preferring servers with a smaller hop count. The search order when deploying a VNF is therefore the current server, servers connected to the same edge switch, servers in the same Pod, and servers in other Pods;
in a fat-tree topology, if two servers are attached to the same edge switch there is only one link between them, whereas when they are located in the same Pod or in different Pods there are two links between them with the same hop count; after the target server is determined, it is checked whether a link between the two servers satisfies the bandwidth constraint, and if so, the deployment of this VNF is finished; if not, the next candidate server is selected until one satisfying the constraints is found, and if none is found, the SFC cannot be deployed;
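Combining the search order and the bandwidth check described above, a greedy deployment of one SFC might be sketched as follows; the data structures (per-server spare CPU, per-pair spare bandwidth) and the tie-breaking between equal-hop servers are simplifying assumptions rather than the exact claimed procedure:

```python
def deploy_sfc(vnf_cpu, bw_demand, hops, cpu_free, bw_free, start=0):
    """Greedily map the VNFs of one SFC onto servers.

    vnf_cpu   : list of CPU demands, one per VNF in chain order
    bw_demand : bandwidth required by this SFC on every virtual link
    hops      : hop-count matrix between servers
    cpu_free  : list of spare CPU per server (updated on success)
    bw_free   : dict (a, b) -> spare bandwidth between servers a and b
    Returns the list of chosen servers, or None if the SFC cannot be deployed.
    """
    placement = []
    current = start
    for cpu in vnf_cpu:
        # Candidates in increasing hop distance from the current server:
        # itself, same edge switch, same Pod, other Pods.
        chosen = None
        for n in sorted(range(len(cpu_free)), key=lambda n: hops[current][n]):
            if cpu_free[n] < cpu:
                continue
            if placement and n != placement[-1]:
                # The connecting physical link must have enough spare bandwidth.
                if bw_free.get((placement[-1], n), 0) < bw_demand:
                    continue
            chosen = n
            break
        if chosen is None:
            return None                      # this SFC cannot be deployed
        cpu_free[chosen] -= cpu
        if placement and chosen != placement[-1]:
            bw_free[(placement[-1], chosen)] -= bw_demand
        placement.append(chosen)
        current = chosen
    return placement
```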
2.2 optimizing resource configuration;
after the mapping relation from the VNFs to the virtual machines has been obtained with the heuristic mapping algorithm of the previous stage, the node-mapping and link-mapping variables are known, so the start-up deployment energy consumption $E_{\text{base}}$ is a determined value; a resource-allocation variable is then defined for each VNF, and problem (18) is further simplified into the constrained nonlinear optimization problem (19).
The constraints of problem (19) require that, within the value range of the resource-allocation variables, the service-rate condition can always be satisfied. The physical meaning of this condition is that, by adjusting the resource allocation, the service rate of every VNF can always be made greater than the request arrival rate of the dynamic service chain $s_i$; if this condition cannot be met, the dynamic service chain $s_i$ cannot be deployed. A further constraint guarantees the delay constraint, and another ensures that the computing resources allocated on any server do not exceed that server's resource upper limit;
the constrained nonlinear optimization problem is solved by selecting the KKT condition generalized by the Lagrange multiplier method, and the augmented Lagrange function of the optimization problem (19) is
Figure FDA00032710422200000414
Wherein the content of the first and second substances,
Figure FDA00032710422200000415
and gammanIs a KKT multiplier; according to the KKT condition, the optimization problem is required to be solved, and the following conditions are required to be met:
Figure FDA0003271042220000051
when the original problem is a convex optimization problem, the KKT condition is a sufficient necessary condition for obtaining a group of optimal solutions, namely the solutions enable the objective function to be a global minimum; the final solution to be obtained is the number of resources allocated to each VNF, which enables the orchestration method to achieve high resource utilization and low latency.
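As a numerical alternative to expanding the KKT system by hand, this kind of constrained resource-allocation problem can also be handed to a general-purpose solver. The sketch below uses scipy.optimize.minimize with SLSQP on a toy single-chain instance; the linear energy objective and the M/M/1-style processing-delay term 1/(c - lambda) are illustrative modeling assumptions, not the exact formulas of problem (19):

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance: one SFC with 3 VNFs (all numbers are illustrative).
arrival_rate = 2.0      # request arrival rate of the service chain
delay_budget = 1.5      # total processing-delay threshold D_i
server_cap = 10.0       # per-VNF upper bound from its server's spare resources
T = 0.8                 # runtime-energy coefficient
n_vnf = 3

def energy(c):
    # Runtime energy, proportional to the allocated computing resources.
    return T * np.sum(c)

def delay_slack(c):
    # Non-negative when the summed M/M/1 processing delays stay within the budget.
    return delay_budget - np.sum(1.0 / (c - arrival_rate))

x0 = np.full(n_vnf, arrival_rate + 2.0)   # feasible start: service rate above arrival rate
res = minimize(
    energy, x0, method="SLSQP",
    bounds=[(arrival_rate + 1e-3, server_cap)] * n_vnf,  # mu > lambda and capacity limits
    constraints=[{"type": "ineq", "fun": delay_slack}],
)
print(res.x)   # resources to allocate to each VNF
```

On this symmetric toy instance the solver returns roughly equal allocations of about 4.0 per VNF, the point at which the delay budget is exactly met with the least total resources.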
2. The system for implementing the data center-oriented service function chain optimization orchestration method according to claim 1, wherein: the system comprises a model construction module and an SFC orchestration module which are connected in sequence; wherein
the model construction module constructs a service function chain optimization orchestration model based on elastic resource allocation and provides the basis for realizing the optimization method;
given an SFC set, the SFC orchestration module deploys the SFCs and achieves the orchestration optimization goals, i.e., reducing energy consumption and network delay.
CN202111101331.0A 2021-09-18 2021-09-18 Data center-oriented service function chain optimization arrangement method and system Pending CN113918277A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111101331.0A CN113918277A (en) 2021-09-18 2021-09-18 Data center-oriented service function chain optimization arrangement method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111101331.0A CN113918277A (en) 2021-09-18 2021-09-18 Data center-oriented service function chain optimization arrangement method and system

Publications (1)

Publication Number Publication Date
CN113918277A true CN113918277A (en) 2022-01-11

Family

ID=79235356

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111101331.0A Pending CN113918277A (en) 2021-09-18 2021-09-18 Data center-oriented service function chain optimization arrangement method and system

Country Status (1)

Country Link
CN (1) CN113918277A (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114466059A (en) * 2022-01-20 2022-05-10 天津大学 Method for providing reliable service function chain for mobile edge computing system
CN114124713A (en) * 2022-01-26 2022-03-01 北京航空航天大学 Service function chain arrangement method for operation level function parallel and self-adaptive resource allocation
CN114650234A (en) * 2022-03-14 2022-06-21 中天宽带技术有限公司 Data processing method and device and server
CN114650234B (en) * 2022-03-14 2023-10-27 中天宽带技术有限公司 Data processing method, device and server
CN114827284A (en) * 2022-04-21 2022-07-29 中国电子技术标准化研究院 Service function chain arrangement method and device in industrial Internet of things and federal learning system
CN114827284B (en) * 2022-04-21 2023-10-03 中国电子技术标准化研究院 Service function chain arrangement method and device in industrial Internet of things and federal learning system
CN115118748A (en) * 2022-06-21 2022-09-27 上海交通大学 Intelligent manufacturing scene micro-service deployment scheme and resource redistribution method
CN115118748B (en) * 2022-06-21 2023-09-26 上海交通大学 Intelligent manufacturing scene micro-service deployment scheme and resource redistribution method
CN115913952A (en) * 2022-11-01 2023-04-04 南京航空航天大学 Efficient parallelization and deployment method of multi-target service function chain based on CPU + DPU platform
US11936758B1 (en) 2022-11-01 2024-03-19 Nanjing University Of Aeronautics And Astronautics Efficient parallelization and deployment method of multi-objective service function chain based on CPU + DPU platform
CN115865706A (en) * 2022-11-28 2023-03-28 国网重庆市电力公司电力科学研究院 5G network capability opening-based power automatic business arrangement method
CN115955402B (en) * 2023-03-14 2023-08-01 中移动信息技术有限公司 Service function chain determining method, device, equipment, medium and product
CN115955402A (en) * 2023-03-14 2023-04-11 中移动信息技术有限公司 Service function chain determining method, device, equipment, medium and product
CN116401055A (en) * 2023-04-07 2023-07-07 天津大学 Resource efficiency optimization-oriented server non-perception computing workflow arrangement method
CN116401055B (en) * 2023-04-07 2023-10-03 天津大学 Resource efficiency optimization-oriented server non-perception computing workflow arrangement method
CN116545876B (en) * 2023-06-28 2024-01-19 广东技术师范大学 SFC cross-domain deployment optimization method and device based on VNF migration
CN116545876A (en) * 2023-06-28 2023-08-04 广东技术师范大学 SFC cross-domain deployment optimization method and device based on VNF migration

Similar Documents

Publication Publication Date Title
CN113918277A (en) Data center-oriented service function chain optimization arrangement method and system
CN108260169B (en) QoS guarantee-based dynamic service function chain deployment method
CN114338504B (en) Micro-service deployment and routing method based on network edge system
CN108322333B (en) Virtual network function placement method based on genetic algorithm
CN107682203B (en) Security function deployment method based on service chain
CN112738820A (en) Dynamic deployment method and device of service function chain and computer equipment
WO2023024219A1 (en) Joint optimization method and system for delay and spectrum occupancy in cloud-edge collaborative network
CN110087250B (en) Network slice arranging scheme and method based on multi-objective joint optimization model
WO2023039965A1 (en) Cloud-edge computing network computational resource balancing and scheduling method for traffic grooming, and system
Fu et al. Performance optimization for blockchain-enabled distributed network function virtualization management and orchestration
CN112104491B (en) Service-oriented network virtualization resource management method
CN105282038A (en) Distributed asterism networking optimization method based on stability analysis and used in mobile satellite network
CN107147530B (en) Virtual network reconfiguration method based on resource conservation
CN107196806B (en) Topological proximity matching virtual network mapping method based on sub-graph radiation
CN105530199B (en) Method for mapping resource and device based on SDN multi-area optical network virtualization technology
CN114071582A (en) Service chain deployment method and device for cloud-edge collaborative Internet of things
CN110191155B (en) Parallel job scheduling method, system and storage medium for fat tree interconnection network
CN112953761A (en) Virtual-real resource mapping method for virtual network construction in multi-hop network
WO2020134133A1 (en) Resource allocation method, substation, and computer-readable storage medium
CN107360031B (en) Virtual network mapping method based on optimized overhead-to-revenue ratio
CN105553882A (en) Method for scheduling SDN data plane resources
CN110535705B (en) Service function chain construction method capable of adapting to user time delay requirement
CN113490279B (en) Network slice configuration method and device
CN103618674A (en) A united packet scheduling and channel allocation routing method based on an adaptive service model
KR101800320B1 (en) Network on chip system based on bus protocol, design method for the same and computer readable recording medium in which program of the design method is recorded

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination