CN113918277A - Data center-oriented service function chain optimization arrangement method and system - Google Patents
- Publication number
- CN113918277A (application CN202111101331.0A)
- Authority
- CN
- China
- Prior art keywords
- vnf
- sfc
- optimization
- server
- service
- Prior art date
- Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F9/5027—Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
- H04L41/0833—Configuration setting for reduction of network energy consumption
- H04L41/0896—Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
- H04L41/12—Discovery or management of network topologies
- H04L41/145—Network analysis or design involving simulating, designing, planning or modelling of a network
- G06F2009/45595—Network integration; enabling network access in virtual machine instances
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
A service function chain optimization orchestration method for data centers treats the resources required by virtual network functions as variable parameters, analyzes the relationship among request arrival rate, computing resources and processing delay based on queuing theory and elastic resource allocation, and provides a two-stage heuristic orchestration strategy with energy consumption and network delay as optimization indexes. In the first stage, a greedy deployment strategy yields a set of mappings between VNFs and servers; in the second stage, starting from the solution set of the previous stage, a nonlinear constrained optimization problem is solved to obtain the optimal configuration of computing resources. The invention also discloses a system implementing the data-center-oriented service function chain optimization orchestration method. The invention effectively reduces server energy consumption, link bandwidth occupancy and service delay, and improves the deployment success rate. Given the network topology and the SFC data, the invention completes the optimized orchestration of SFC resources, and the orchestration result can be obtained within a controllable running time.
Description
Technical Field
The invention relates to the field of network service function deployment in network communication, and in particular to a service function chain optimization orchestration method for data centers.
Background Art
With the rapid emergence of new internet services, conventional network architectures have struggled to support the ever-increasing performance requirements of next-generation networks. Network Function Virtualization (NFV) is one of the key technologies for implementing a 5G service architecture. NFV implements the various network elements of the existing network in software as Virtual Network Functions (VNFs). In a network environment built on VNFs, services are mainly carried by Service Function Chains (SFCs); a service function chain is an ordered sequence of VNFs through which traffic passes. SFC orchestration refers to deploying the VNFs of a given set of known SFCs in a reasonable manner, subject to certain constraints, to optimize service performance or economy. Conventional SFC orchestration methods usually treat the resources required by a VNF as fixed system parameters, ignoring the relationship between resources and VNF performance, and adopt the following kinds of algorithms to minimize virtual machine resource consumption or bandwidth resource consumption:
In the first method, a joint optimization algorithm for VNF deployment and path selection is designed with the aim of minimizing the usage of computing and bandwidth resources. The algorithm first finds an SFC scheduling path that meets the bandwidth resource requirement, and then determines the deployment position of each VNF.
In the second method, an SFC orchestration algorithm that reduces resource overhead while guaranteeing QoS is designed based on the Viterbi algorithm. To guarantee the QoS requirements of network services, nodes on high-delay links are migrated, reducing transmission delay and satisfying the quality-of-service requirement.
In the third method, a weighted service-chain placement algorithm based on service priority is combined with a request-scheduling algorithm based on combinatorial optimization.
In the fourth method, a service function chain deployment method based on a greedy strategy enumerates all paths that satisfy the connectivity and policy requirements, and selects the service path with the minimum deployment cost.
The above methods mostly assume that the resource requirements and processing rate of each VNF are predetermined. In practical application scenarios, however, the processing rate of a VNF is affected by factors such as resource allocation and request arrival rate, which in turn affect the service performance of the SFC. Existing methods that add such dynamics establish a queuing-theory model and an elastic resource allocation model separately, but the two models are not connected. Addressing the problem of dynamic resource allocation, the invention takes service function chain orchestration in a data center as its scenario, uses queuing theory to further consider the relationship among service arrival rate, resource allocation and VNF processing capacity, and establishes a service function chain orchestration optimization model whose objectives are reduced energy consumption and network delay. Meanwhile, to ensure that a satisfactory solution can be obtained within limited time, the invention adopts a heuristic approach and designs a two-stage heuristic algorithm.
Disclosure of Invention
The invention provides a data-center-oriented service function chain optimization orchestration method and system that overcome the defects of the prior art.
Considering the shortcomings of existing research, the invention combines queuing theory with an elastic resource allocation mechanism, establishes a data-center-oriented service function chain orchestration model with energy consumption and network delay as optimization targets, and provides a two-stage heuristic orchestration method. In the first stage, a greedy deployment strategy yields a set of mappings between VNFs and servers; in the second stage, starting from the solution set of the previous stage, a nonlinear constrained optimization problem is solved to obtain the optimal configuration of computing resources.
A data-center-oriented service function chain optimization orchestration method, characterized in that: by analyzing the relationship among request arrival rate, computing resources and processing delay, a service function chain orchestration optimization model based on elastic resource allocation is provided. On this basis, with energy consumption and network delay as optimization indexes, a two-stage algorithm is designed: first, a feasible solution set of the problem is obtained through a greedy deployment algorithm; then a nonlinear constrained optimization problem is solved to obtain the optimal resource allocation. The orchestration algorithm based on elastic resource allocation realizes the optimized configuration of resources and improves service quality. The method specifically comprises the following steps:
1. Constructing the orchestration model;
A service function chain orchestration optimization model based on elastic resource allocation is constructed, providing the foundation for the optimization method.
The invention is based on the fat-tree network topology, which is widely adopted in data centers. The whole topology is divided, from top to bottom, into a core layer, an aggregation layer, an edge layer and a server layer. A k-ary fat tree comprises k pods, each containing k/2 edge switches and k/2 aggregation switches. The number of core switches is (k/2)^2, the number of aggregation-layer and edge-layer switches is k^2/2 each, and the total number of servers supported by the network is k^3/4. All VNFs can only be deployed at the server layer.
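As a sanity check on these counts, the following sketch (illustrative only, not part of the patent) computes the element counts of a k-ary fat tree:

```python
def fat_tree_counts(k):
    """Element counts of a k-ary fat tree (k even): k pods, each with
    k/2 edge and k/2 aggregation switches; servers form the bottom layer."""
    assert k % 2 == 0, "fat-tree arity must be even"
    return {
        "pods": k,
        "core": (k // 2) ** 2,        # (k/2)^2 core switches
        "aggregation": k * (k // 2),  # k pods x k/2 aggregation switches
        "edge": k * (k // 2),         # k pods x k/2 edge switches
        "servers": k ** 3 // 4,       # k/2 servers per edge switch
    }
```

For k = 4 this gives 4 core switches, 8 aggregation switches, 8 edge switches and 16 servers.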
1.1 Constructing the dynamic resource allocation model;
On the basis of studying the influence of resource allocation on VNF performance and end-to-end delay, a dynamic resource allocation model is constructed.
The underlying network is abstracted as an undirected weighted graph G = (N, E), where N is the set of network nodes and E is the set of links. VNFs are mostly deployed in containers, so a VNF's ability to process requests is affected by the amount of computing resources allocated to it.
An M/M/1 queuing model is adopted to model the VNF service. Let $\kappa_i$ be the request arrival rate of service function chain $s_i$, and let $\mu_{i,j}$ be the service rate of VNF $f_{i,j}$, the $j$-th VNF of $s_i$. From queuing theory, the average processing time $t_{i,j}$ of $f_{i,j}$ is related to $\kappa_i$ and $\mu_{i,j}$ as follows:

$$t_{i,j} = \frac{1}{\mu_{i,j} - \kappa_i} \qquad (1)$$
In practical application scenarios, the service rate $\mu_{i,j}$ depends strongly on the allocation of computing resources (CPU and memory resources are treated uniformly as computing resources). With elastic resource allocation, let $c_{i,j}$ be the amount of computing resources allocated to VNF $f_{i,j}$, and assume a piecewise-linear relationship between $\mu_{i,j}$ and $c_{i,j}$:

$$\mu_{i,j} = \alpha_{i,j}\, c_{i,j} + \beta_{i,j}, \quad c_{i,j} \in [c_{i,j}^{\min},\, c_{i,j}^{\max}] \qquad (2)$$

where the parameters $\alpha_{i,j}$ and $\beta_{i,j}$ are respectively calculated as

$$\alpha_{i,j} = \frac{\mu_{i,j}^{\max} - \mu_{i,j}^{\min}}{c_{i,j}^{\max} - c_{i,j}^{\min}} \qquad (3)$$

$$\beta_{i,j} = \mu_{i,j}^{\min} - \alpha_{i,j}\, c_{i,j}^{\min} \qquad (4)$$

Here $[c_{i,j}^{\min}, c_{i,j}^{\max}]$ is the value range of $c_{i,j}$, and $\mu_{i,j}^{\min}$ and $\mu_{i,j}^{\max}$ are the service rates when the allocated resources are $c_{i,j}^{\min}$ and $c_{i,j}^{\max}$, respectively.

By formula (2), within the value range $[c_{i,j}^{\min}, c_{i,j}^{\max}]$ the service rate of the VNF rises linearly as the allocated resources increase; $c_{i,j}^{\min}$ and $c_{i,j}^{\max}$ are the minimum and maximum resource allocations of VNF $f_{i,j}$. If $c_{i,j} < c_{i,j}^{\min}$, then $f_{i,j}$ cannot be deployed; when $c_{i,j} \ge c_{i,j}^{\max}$, the service rate of $f_{i,j}$ reaches its maximum, i.e., the processing capacity reaches its upper limit.

For simplicity of description, write $c_{i,j} = \theta_{i,j}\, c_{\mathrm{base}}$, where $c_{\mathrm{base}}$ is the resource allocation base unit. The average processing time of VNF $f_{i,j}$ is then given by formula (5):

$$t_{i,j} = \frac{1}{\alpha_{i,j}\,\theta_{i,j}\, c_{\mathrm{base}} + \beta_{i,j} - \kappa_i} \qquad (5)$$
The constraints to be satisfied are as follows:

$$c_{i,j}^{\min} \le \theta_{i,j}\, c_{\mathrm{base}} \le c_{i,j}^{\max} \qquad (6)$$

$$\alpha_{i,j}\,\theta_{i,j}\, c_{\mathrm{base}} + \beta_{i,j} > \kappa_i \qquad (7)$$

Constraint (6) controls the value range of $\theta_{i,j}$; constraint (7) requires the service rate to be greater than the arrival rate. If constraint (7) is not satisfied, service requests back up.
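A minimal sketch of the elastic-allocation model of formulas (1)-(4) follows (illustrative, not part of the patent; function and variable names are assumptions):

```python
def service_rate(c, c_min, c_max, mu_min, mu_max):
    """Piecewise-linear service rate, formula (2): linear in [c_min, c_max],
    capped at mu_max above; below c_min the VNF cannot be deployed."""
    if c < c_min:
        return None                                   # VNF not deployable
    alpha = (mu_max - mu_min) / (c_max - c_min)       # formula (3)
    beta = mu_min - alpha * c_min                     # formula (4)
    return min(alpha * c + beta, mu_max)

def processing_delay(mu, kappa):
    """M/M/1 average processing time, formula (1); needs mu > kappa, (7)."""
    if mu <= kappa:
        raise ValueError("service rate must exceed arrival rate (constraint 7)")
    return 1.0 / (mu - kappa)
```

For example, with a range of [1, 3] resource units mapping to service rates [10, 20], an allocation of 2 units gives a rate of 15, and with arrival rate 5 the mean processing time is 1/(15 - 5) = 0.1.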
1.2 constructing system constraints;
System constraints must be considered during SFC orchestration; they comprise computing resource constraints, bandwidth resource constraints and SFC end-to-end delay constraints. First, a VNF deployed on a physical host occupies a certain amount of computing resources and cannot exceed the upper limit of resources the host can provide, so

$$\sum_{i=1}^{M}\sum_{j=1}^{M_i} x_{i,j}^{n}\, c_{i,j} \le R_n, \quad n = 1, \dots, N \qquad (8)$$

where N is the number of servers of network G, $R_n$ is the resource capacity of the server numbered n, M is the number of service function chains, $s_i$ is the SFC numbered i, and $M_i$ is the number of VNFs contained in $s_i$.
The 0-1 variable $x_{i,j}^{n}$ is defined as follows: if the $j$-th VNF of $s_i$ is deployed on network node n, $x_{i,j}^{n}$ takes 1, otherwise 0. Let $l_{m,n}$ ($m, n \in N$) denote the physical link between network nodes m and n. For any physical link $l_{m,n}$, the sum of the bandwidth resources occupied by all SFCs mapped onto the link cannot exceed the maximum available bandwidth of the link, so the following constraint holds:

$$\sum_{i=1}^{M}\sum_{u=1}^{M_i-1} \rho_i\, y_{i,u}^{m,n} \le B_{m,n}, \quad \forall\, l_{m,n} \in E \qquad (9)$$

where $\rho_i$ is the bandwidth resource required by SFC $s_i$, $B_{m,n}$ is the maximum available bandwidth of link $l_{m,n}$, and the 0-1 variable $y_{i,u}^{m,n}$ is defined as follows: if the logical link $e_i^{u,u+1}$ passes through $l_{m,n}$, $y_{i,u}^{m,n}$ takes 1, otherwise 0, where $e_i^{u,u+1}$ denotes the logical link between VNFs u and u+1 of service chain $s_i$.
Any VNF of any SFC can only be mapped to a single server, so the following constraint holds:

$$\sum_{n=1}^{N} x_{i,j}^{n} = 1, \quad \forall i \in \{1,\dots,M\},\; j \in \{1,\dots,M_i\} \qquad (10)$$

i.e., for every VNF of $s_i$, exactly one network node n carries it; the corresponding indicator takes 1 when the VNF is deployed on n, otherwise 0.
Second, every virtual link must follow a unique route during transmission; that is, for any logical link $e_i^{u,u+1}$, exactly one physical path in the network carries it, which is expressed as constraint (11).
Finally, the end-to-end delay requirement of each chain $s_i$ must be guaranteed. Assume the end-to-end delay of each SFC consists of two parts, transmission delay and processing delay, where $d_{m,n}$ is the transmission delay of link $l_{m,n}$. The sum of the two parts must not exceed the delay threshold of the service flow, i.e., constraint (12) must hold:

$$\sum_{u=1}^{M_i-1} \sum_{l_{m,n} \in E} y_{i,u}^{m,n}\, d_{m,n} + \sum_{j=1}^{M_i} t_{i,j} \le D_i \qquad (12)$$

where $D_i$ is the total delay threshold of SFC $s_i$, and $t_{i,j}$ is computed with reference to formulas (3) and (4).
Because this SFC deployment scenario is based on a data center, all SFCs are deployed in the same machine room, so the transmission delay in constraint (12) can be neglected and formula (12) simplifies to

$$\sum_{j=1}^{M_i} t_{i,j} \le D_i \qquad (13)$$
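The capacity and delay constraints above can be checked for a candidate placement. The sketch below (data structures are assumptions, not the patent's) verifies constraints (8) and (13) for a mapping in which each VNF already sits on exactly one server, so constraint (10) holds by construction:

```python
def placement_feasible(placement, c, R, t, D):
    """Check computing-resource constraint (8) and simplified delay
    constraint (13) for a candidate SFC placement.

    placement[i][j]: server hosting the j-th VNF of chain s_i
    c[i][j]:         computing resources allocated to that VNF
    R[n]:            resource capacity of server n
    t[i][j]:         processing delay of that VNF
    D[i]:            total delay threshold of s_i
    """
    used = [0.0] * len(R)
    for chain, res in zip(placement, c):
        for n, amount in zip(chain, res):
            used[n] += amount
    if any(u > cap for u, cap in zip(used, R)):
        return False                                       # violates (8)
    return all(sum(ti) <= Di for ti, Di in zip(t, D))      # checks (13)
```

A placement is rejected either when a server's aggregate allocation exceeds its capacity or when a chain's summed processing delays exceed its threshold.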
1.3 Selecting optimization indexes;
The optimization indexes adopted are the minimization of the delay cost C and the energy-consumption cost E. The total delay cost is

$$C = \sum_{i=1}^{M} \sum_{j=1}^{M_i} t_{i,j} \qquad (14)$$
Considering further the system energy cost E, it consists of two parts: boot-up energy $E_{\mathrm{base}}$ and runtime energy $E_{\mathrm{alloc}}$. $E_{\mathrm{base}}$ is the energy consumed by a booted server on which no VNF is deployed; it is clearly proportional to the number of powered-on hosts. Define the 0-1 variable $h_n$ satisfying

$$h_n = \begin{cases} 1, & \text{if some VNF is deployed on server } n,\\ 0, & \text{otherwise.} \end{cases} \qquad (15)$$

As equation (15) shows, for any server n, $h_n = 1$ if a VNF is deployed on it, and 0 otherwise. The total boot-up energy is $E_{\mathrm{base}} = \gamma \sum_{n=1}^{N} h_n$, where $\gamma$ is a coefficient and $\sum_{n} h_n$ is the number of servers on which VNFs are deployed. $E_{\mathrm{alloc}}$ is defined as the energy occupied by the VNFs at runtime, proportional to the computing resources consumed; the runtime energy of host n is

$$E_{\mathrm{alloc}}^{n} = T \sum_{i=1}^{M} \sum_{j=1}^{M_i} x_{i,j}^{n}\, c_{i,j} \qquad (16)$$

where T is a coefficient, so the total energy cost is

$$E = \gamma \sum_{n=1}^{N} h_n + \sum_{n=1}^{N} E_{\mathrm{alloc}}^{n} \qquad (17)$$
In summary, the SFC orchestration optimization problem is expressed as

$$\min\; (E + C) \qquad (18)$$
$$\text{s.t.}\; (6)\text{--}(15)$$
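The objective (18) can be evaluated for a given placement and allocation. The following sketch (coefficient names follow the text; the data layout is an assumption) computes E + C:

```python
def total_cost(placement, c, t, gamma, T):
    """Objective (18): energy cost E (formulas 15-17) plus delay cost C (14).

    placement[i][j]: server index of the j-th VNF of chain s_i
    c[i][j]: allocated computing resources; t[i][j]: processing delay
    gamma:   boot-up energy per active server (E_base coefficient)
    T:       runtime energy per unit of allocated resources (E_alloc)
    """
    active = {n for chain in placement for n in chain}    # servers with h_n = 1
    e_base = gamma * len(active)                          # boot-up energy
    e_alloc = T * sum(sum(row) for row in c)              # runtime energy (16)
    delay_cost = sum(sum(row) for row in t)               # total delay (14)
    return e_base + e_alloc + delay_cost
```

For one chain on two servers with gamma = 10, T = 1, 2 + 2 resource units and delays 0.1 + 0.1, the objective is 20 + 4 + 0.2 = 24.2.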
2. Realizing optimal SFC orchestration;
The goal of this stage is, given a set of SFCs, to deploy them and achieve the orchestration objectives, i.e., reduced energy consumption and network delay.
2.1 Node and link mapping with a greedy strategy;
The first step of SFC orchestration is to map VNF nodes and links. The mapping must satisfy the system constraints analyzed in step 1.2, including resource constraints, bandwidth constraints and end-to-end network delay constraints. A greedy mapping strategy is adopted whose optimization goal is the minimum hop count during SFC deployment, i.e., the shortest transmission links; on the premise that the SFC can be successfully deployed, the strategy thereby also accounts for transmission delay.
The greedy strategy selects the currently best value at each step, so the solution obtained at each step is a local optimum. The greedy algorithm decomposes the overall problem into subproblems; here, each subproblem is the selection of a server and a physical link for deploying one VNF.
First, a hop-count matrix hops is generated from the network topology: a square matrix whose dimension is the number of servers, storing the hop count (i.e., distance) between servers, where traversing one switch counts as one hop. Mapping then proceeds on this basis. Since a set of SFCs must be processed, the work is performed in order: the SFCs are deployed sequentially, and within each SFC the VNFs are deployed sequentially.
Deployment begins its search from the server numbered 1. If the computing resources that server can provide meet the VNF's resource requirement, and the remaining bandwidth of the directly connected link meets the SFC's bandwidth requirement, the VNF is successfully deployed on server 1; otherwise other servers are searched until deployment succeeds, and if no server succeeds, deployment of that SFC ends in failure. After a successful deployment, the next VNF starts its search from the current server. When placement on the current server fails, the VNF searches servers in increasing order of hop count in the hops matrix, preferring servers with smaller hop counts. The search order when deploying a VNF is therefore: the current server, then the servers connected to the same edge switch, then the servers in the same pod, then the servers in other pods.
In the fat-tree topology, if two servers are connected to the same edge switch, only one link exists between them; when the two servers are in the same pod or in different pods, two links with the same hop count exist between them. After the server to deploy on is determined, it is checked whether a link between the two servers satisfies the bandwidth constraint; if so, deployment completes. If not, the next candidate server is tried until one satisfying the constraints is found; if none is found, the SFC cannot be deployed.
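The stage-one greedy mapping described above can be sketched as follows. This is a simplified illustration with one aggregated bandwidth budget per server pair; names and data structures are assumptions, and the patent's actual flow is the one shown in fig. 3:

```python
def greedy_place_chain(demands, rho, free_cpu, free_bw, hops):
    """Greedy stage-one mapping: place each VNF on the nearest server
    (by hop count from the previous VNF's server) that satisfies the
    resource demand and the chain's bandwidth demand rho.

    demands:       resource demand of each VNF in the chain, in order
    free_cpu[n]:   remaining computing resources on server n
    free_bw[m][n]: remaining bandwidth between servers m and n (symmetric)
    hops[m][n]:    hop-count matrix derived from the fat-tree topology
    Returns the list of chosen servers, or None if the chain is rejected.
    """
    placement, current = [], 0            # search starts from the first server
    for demand in demands:
        for n in sorted(range(len(free_cpu)), key=lambda s: hops[current][s]):
            same = not placement or placement[-1] == n
            if free_cpu[n] >= demand and (same or free_bw[placement[-1]][n] >= rho):
                free_cpu[n] -= demand
                if not same:              # reserve bandwidth on the inter-server link
                    free_bw[placement[-1]][n] -= rho
                    free_bw[n][placement[-1]] -= rho
                placement.append(n)
                current = n
                break
        else:
            return None                   # no feasible server: chain rejected
    return placement
```

Sorting candidate servers by hop count from the current server reproduces the search order current server, same edge switch, same pod, other pods.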
2.2 optimizing resource allocation;
After the mapping relation from VNFs to servers is obtained with the heuristic mapping algorithm of the previous stage, the placement variables $x_{i,j}^{n}$ and $y_{i,u}^{m,n}$ are known, so the boot-up energy $E_{\mathrm{base}}$ is a fixed value. With the resource variables $\theta_{i,j}$ as the remaining unknowns, problem (18) reduces to the following constrained nonlinear optimization problem:

$$\min_{\theta}\; T\, c_{\mathrm{base}} \sum_{i=1}^{M}\sum_{j=1}^{M_i} \theta_{i,j} \;+\; \sum_{i=1}^{M}\sum_{j=1}^{M_i} \frac{1}{\alpha_{i,j}\,\theta_{i,j}\, c_{\mathrm{base}} + \beta_{i,j} - \kappa_i} \qquad (19)$$
$$\text{s.t.}\; (6),\, (7),\, (8),\, (13)$$

The constraints in problem (19) require that, within the value range of $\theta_{i,j}$, the service rate $\mu_{i,j}$ of VNF $f_{i,j}$ can always be made greater than the request arrival rate $\kappa_i$ of the dynamic service chain $s_i$ by adjusting the resource allocation; if this condition cannot be met, chain $s_i$ cannot be deployed. The delay constraint (13) guarantees each chain's delay requirement, and constraint (8) guarantees that the allocated computing resources do not exceed the resource upper limit of any server.
The constrained nonlinear optimization problem is solved via the KKT conditions, which generalize the Lagrange multiplier method. The augmented Lagrangian of optimization problem (19) introduces KKT multipliers $\lambda$ and $\gamma_n$ for the constraints (formula (20)). By the KKT conditions, a solution of the optimization problem must satisfy stationarity, primal feasibility, dual feasibility and complementary slackness with respect to these multipliers (conditions (21)).
solving the optimization problem (21) yields the number of resources allocated per VNF, which enables the orchestration method to achieve high resource utilization and low latency. The optimized resource allocation flow is shown in fig. 4.
The system for implementing the data-center-oriented service function chain optimization orchestration method comprises a model construction module and an SFC orchestration module connected in sequence:
1. The model construction module constructs a service function chain orchestration optimization model based on elastic resource allocation, providing the foundation for the optimization method.
2. The SFC orchestration module, given the SFC set, deploys the SFCs and achieves the orchestration objectives, i.e., reduced energy consumption and network delay.
By analyzing the relationship among request arrival rate, computing resources and processing delay, the invention provides a service function chain orchestration optimization model based on elastic resource allocation. On this basis, with energy consumption and network delay as optimization indexes, a two-stage algorithm is designed: a feasible solution set of the problem is first obtained through a greedy deployment algorithm, after which the nonlinear constrained optimization problem is solved to obtain the optimal resource allocation. Through the orchestration algorithm based on elastic resource allocation, the optimized configuration of resources is realized and service quality is improved.
The invention has the advantages that: the energy consumption of the server, the occupancy rate of the link bandwidth and the service delay are effectively reduced, the deployment success rate is improved, and the arrangement result can be obtained within controllable operation time.
Drawings
FIG. 1 is the fat-tree network topology of the present invention.
Fig. 2 is a graph of service rate versus resource allocation in accordance with the present invention.
FIG. 3 is a greedy deployment flow diagram based on the fat tree type network topology of the present invention.
FIG. 4 is a flow chart of resource optimization configuration of the present invention.
Detailed Description
The technical scheme of the invention is further explained by combining the attached drawings.
The invention relates to a service function chain optimization arrangement method facing a data center, which comprises the following two steps:
1. constructing an arrangement model;
and constructing a service function chain optimization arrangement model based on elastic resource allocation, and providing a foundation for realizing an optimization method.
The invention is based on a fat tree (fat tree) network topology, which is widely adopted in data centers. The fat tree type network topology is shown in fig. 1, and the whole topology is divided into an edge layer, an aggregation layer, a core layer and a server layer from top to bottom. For the k-way tree architecture, the k-way tree architecture comprises k pod, and the number of the edge switches and the aggregation switches in each pod is k/2. The number of core switches is (k/2)2The number of aggregation layer and edge layer switches is k2The total number of servers supported by the network is k3And/4, and all VNFs can only be deployed at the server layer.
1.1, constructing a dynamic resource allocation model;
on the basis of researching the influence of the resource allocation on the VNF performance and the end-to-end delay, a dynamic resource allocation model is constructed.
The underlying network is abstracted as an undirected weighted graph G ═ N, E, where N is the set of network nodes and E is the set of links. The VNF is mostly deployed in a container, so the VNF's ability to process requests is affected by the amount of computing resources allocated to the VNF.
Adopting M/M/1 queuing model to simulate VNF service, let kappaiFor serving function chains siThe arrival rate of the requests of (a),is VNFThe service rate of (a), as known from queuing theory,average processing time ofAnd kappai、There is a relationship betweenComprises the following steps:
in a practical application scenario, service rateHas great relation with the allocation of computing resources, wherein CPU resources and memory resources are uniformly processed into computing resources, and an elastic resource allocation mode is adopted to ensure thatIs VNFThe amount of computing resources allocated and assuming service ratesAndthere is a piecewise linear relationship between:
service rate and resource allocation relationship as shown in FIG. 2, parametersAndare respectively calculated by the following formula
Wherein the content of the first and second substances,is composed ofThe value range of (A) is,andthe allocation for taking resources is respectivelyAndtime corresponding service rate.
By the formula (2), in the value rangeIn the method, the service rate of the VNF rises linearly with the increase of the allocated resources;andare respectively VNFMinimum and maximum values of resource allocation ifThenCan not be deployed whenWhen the temperature of the water is higher than the set temperature,the service rate of (2) reaches a maximum value, i.e., the processing capacity reaches an upper limit.
For simplicity of description, noteWhereinAllocating a base number for the resource; at this time VNFAverage processing timeThe calculation formula is shown in formula (5)
The constraints to be satisfied are as follows
Constraint (6) controlsConstraint (7) indicates that the service rate is greater than the arrival rate, and if constraint (7) is not satisfied, backlog of service requests results.
1.2 constructing system constraints;
system constraints need to be considered in the SFC arranging process and are divided into computing resource constraints, broadband resource constraints and SFC end-to-end time delay constraints; first, when a VNF is deployed on a physical host, it needs to occupy a certain amount of computing resources and cannot exceed the upper limit of resources that the host can provide, so there are
Wherein N is the number of servers of network G, RnRepresenting resource constraints numbered N servers, M the number of service function chains SFC, N the number of network nodes siRepresents SFC, M of number iiRepresents siThe number of VNFs contained in (a).
Variable 0-1Is defined as follows, if s isiWhen the jth VNF of (a) is deployed in network node n,take 1, otherwise take 0. lm,nM, N ∈ N representing a physical link between network nodes m and N, for any physical link lm,nThe sum of the bandwidth resources occupied by all the SFCs mapped on the link cannot exceed the maximum available bandwidth of the link, so there is the following constraint
Where ρ isiAs SFC chains siThe bandwidth resources that are required to be occupied,is a link lm,nIs a variable of 0 to 1Is defined as follows if the logical link isThrough am,n,Taking 1, otherwise taking 0,representative service chain siAnd (3) a logical link between VNF u and u + 1.
For any VNF in all SFCs, mapping can only be done to one server, so the following constraint holds
Wherein the content of the first and second substances,indicates whether to change siIs deployed on network node n, when s isiWhen the optical fiber is deployed on the n,take 1, otherwise take 0.
Second, if all virtual links are availableRouting is unique during transmission, i.e. for arbitraryThere are the following constraints
Finally, each link s needs to be guaranteediEnd-to-end delay requirements; assuming end-to-end delay per SFC by transmission delayAnd processing time delayTwo parts, wherein dm,nIs a link lm,nThe transmission delay of (2). The sum of the two parts is less than or equal to the delay threshold of the service flow, namely the constraint (12) is satisfied
Wherein DiIs SFC siThe total delay threshold of (a) is,reference is made to the formulae (3) and (4).
Due to the fact that all SFCs are deployed in the same machine room based on the SFC deployment scene of the data center, transmission delay in the constraint (12) can be ignored, and the formula (12) is simplified into
1.3, selecting an optimization index;
the optimization indexes are that the minimum transmission delay cost C and the energy consumption cost E are adopted, and the total delay cost is
Considering the system energy consumption overhead E further, it consists of two parts: the boot-up energy consumption E_base and the runtime energy consumption E_alloc. E_base is the energy a powered-on server consumes even when no VNF is deployed on it; E_base is obviously proportional to the number of powered-on hosts. A 0-1 variable h_n is defined to satisfy
As equation (15) shows, for any server n, h_n = 1 if a VNF is deployed on it and 0 otherwise. The total boot-up energy consumption is E_base = γ Σ_n h_n, where γ is a coefficient and Σ_n h_n represents the number of physical servers on which VNFs are deployed. E_alloc, the runtime energy consumption of host n, is defined as the energy consumed by the deployed VNFs and is proportional to the computing resources they consume:
Where T is a coefficient, so the total energy consumption overhead is
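The two-part energy model above can be sketched in a few lines of Python; the function and variable names (`total_energy`, `placement`, `cpu_alloc`) are illustrative, not from the patent:

```python
def total_energy(placement, cpu_alloc, gamma=1.0, T=1.0):
    """Total energy overhead E = E_base + E_alloc.

    placement : dict server -> list of VNFs deployed on it
    cpu_alloc : dict vnf -> computing resources allocated to it
    """
    powered_on = [n for n, vnfs in placement.items() if vnfs]   # servers with h_n = 1
    e_base = gamma * len(powered_on)                            # E_base = gamma * sum(h_n)
    e_alloc = T * sum(cpu_alloc[v] for vnfs in placement.values() for v in vnfs)
    return e_base + e_alloc
```

With γ = T = 1, one powered-on server hosting VNFs that consume 2 and 3 resource units costs 1 + 5 = 6 energy units.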
In summary, the SFC layout optimization problem is specifically expressed as
min(E+C) (18)
s.t. (6)-(15)
2. Realizing SFC optimal arrangement;
the goal of this stage is to deploy SFCs after a given set of SFCs and achieve the optimization goal of orchestration, i.e., reduce energy consumption and network latency.
2.1 realizing node and link mapping by adopting a greedy-based strategy;
the first step of SFC orchestration is to map VNF nodes and links, and in the mapping process, the system constraints analyzed in step 1.2 need to be satisfied, including resource constraints, bandwidth constraints, and end-to-end network delay constraints. A greedy mapping strategy is adopted, the optimization aim is that the hop count in the SFC deployment process is minimum, namely, the transmission link is shortest, and the strategy further considers the transmission delay on the premise that the SFC can be successfully deployed.
The greedy strategy selects the current optimal value at each step, so each step's solution can be regarded as a locally optimal one. The greedy algorithm decomposes the overall problem into sub-problems; here, each sub-problem is the selection of a server and a physical link when deploying one VNF.
First, a hop-count matrix hops is generated from the applied network topology; it is a square matrix whose order equals the number of servers and stores the hop counts (i.e., distances) between servers, where passing through one switch counts as one hop. Mapping then proceeds on this basis. Since a set of SFCs must be processed, the work is carried out in order: SFCs are deployed sequentially, and within one SFC the VNFs are deployed sequentially.
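Generating the hop-count matrix for a k-ary fat tree can be sketched as follows, assuming the convention implied by the examples below (1 hop via a shared edge switch, 2 via an aggregation switch in the same Pod, 3 via the core); the function name and server indexing are assumptions:

```python
def hop_matrix(k):
    """Hop-count matrix between the k^3/4 servers of a k-ary fat tree."""
    n = k ** 3 // 4
    per_edge = k // 2          # servers attached to one edge switch
    per_pod = (k // 2) ** 2    # servers in one Pod
    hops = [[0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            if u == v:
                hops[u][v] = 0
            elif u // per_edge == v // per_edge:
                hops[u][v] = 1             # same edge switch
            elif u // per_pod == v // per_pod:
                hops[u][v] = 2             # same Pod, via aggregation
            else:
                hops[u][v] = 3             # different Pods, via core
    return hops
```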
When deployment starts, the search begins from the server numbered 1. If the computing resources that server can provide meet the VNF's resource demand and the remaining bandwidth of its directly connected link meets the SFC's bandwidth demand, the VNF is deployed on server 1; otherwise other servers are searched until deployment succeeds, or, if no server succeeds, the deployment of this SFC ends. After a successful deployment, the next VNF starts its resource search from the current server. When a deployment attempt fails, the VNF searches servers in the order given by the hops matrix, preferring servers with smaller hop counts. The search order when deploying a VNF is thus: the current server, servers connected to the same edge switch, servers in the same Pod, then servers in other Pods.
In the fat-tree topology, if two servers are connected to the same edge switch, only one link exists between them; when the two servers are located in the same Pod or in different Pods, two links exist between them, and the hop counts of the two links are the same. As shown in fig. 2, server 21 and server 22 are connected to the same edge switch, so only one link exists between 21 and 22, namely 21->13->22, with hop count 1. Server 25 and server 28 are located in the same Pod; the two links between them are 25->15->7->16->28 and 25->15->8->16->28, with hop count 2. Server 29 and server 34 are located in different Pods; the two links between them are 29->17->10->4->12->19->34 and 29->17->10->3->12->19->34, with hop count 3.
After the candidate server is determined, whether the link between the two servers satisfies the bandwidth constraint is checked; if so, the deployment finishes. If not, the next server is selected, continuing until a server satisfying the constraints is found; if none exists, the SFC cannot be deployed.
A greedy deployment flow is shown in fig. 3.
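The greedy flow described above (search outward from the current server, check CPU and bandwidth, abort the SFC if no server fits) can be sketched as follows; all names are illustrative, and the bandwidth check is abstracted into a callback:

```python
def deploy_sfc(sfc, cpu, bw_ok, hops):
    """Greedy VNF placement sketch.

    sfc   : list of CPU demands, one per VNF, in chain order
    cpu   : mutable list of remaining CPU per server
    bw_ok : callable (a, b) -> bool standing in for the bandwidth check
            on the physical path between servers a and b
    hops  : precomputed hop-count matrix between servers
    """
    placement, current = [], 0
    for need in sfc:
        # visit servers nearest-first, as ordered by the hops matrix
        for s in sorted(range(len(cpu)), key=lambda s: hops[current][s]):
            if cpu[s] >= need and bw_ok(current, s):
                cpu[s] -= need          # reserve the computing resources
                placement.append(s)
                current = s             # the next VNF searches from here
                break
        else:
            return None                 # no feasible server: SFC cannot be deployed
    return placement
```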
2.2 optimizing resource allocation;
After the heuristic mapping algorithm of the previous stage yields the VNF-to-virtual-machine mapping, the variables x_{i,j}^n and y_{i,u}^{m,n} are known, so the boot-up deployment energy consumption E_base is a determined value. Defining the variable c_{i,j}, equation (19) can be further evolved into the following constrained nonlinear optimization problem
The first constraint in problem (20) requires that, within the value range of c_{i,j}, the condition μ_{i,j} > κ_i can always be satisfied. Its physical meaning is that, by adjusting the resource allocation, the service rate μ_{i,j} of VNF v_{i,j} can always be made greater than the request arrival rate κ_i of the dynamic service chain s_i. If this condition cannot be met, the dynamic service chain s_i cannot be deployed. The second constraint guarantees the delay bound. The last constraint ensures that the allocated computing resources do not exceed the resource upper limit of any server.
The invention adopts the Lagrange multiplier method to solve this constrained nonlinear optimization problem; the augmented Lagrangian function of optimization problem (20) is
where λ and γ_n are the KKT multipliers. According to the KKT conditions, solving the above optimization problem requires, in addition to the constraints of problem (20), that the following conditions be satisfied:
Solving optimization problem (21) yields the amount of resources allocated to each VNF, which enables the orchestration method to achieve high resource utilization and low latency. The optimized resource allocation flow is shown in fig. 4.
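As a hedged illustration of this Lagrangian step, consider the simplified problem of minimizing the total allocated resources under the aggregate delay bound, assuming the linearized service-rate model μ_j = a_j·c_j introduced above and ignoring the per-server bounds. Stationarity of the Lagrangian collapses the KKT system to a one-dimensional search over the delay multiplier:

```python
import math

def allocate(kappa, a, D):
    """Sketch: minimise sum(c_j) s.t. sum_j 1/(a_j*c_j - kappa) <= D.

    Stationarity gives a_j*c_j - kappa = sqrt(lam * a_j), so we bisect
    (geometrically) on the multiplier lam until the M/M/1 delay bound
    is met with equality. Names and the mu = a*c model are assumptions.
    """
    def delay(lam):
        return sum(1.0 / math.sqrt(lam * aj) for aj in a)

    lo, hi = 1e-12, 1e12
    for _ in range(200):                 # delay(lam) is decreasing in lam
        mid = math.sqrt(lo * hi)
        if delay(mid) > D:
            lo = mid
        else:
            hi = mid
    lam = hi
    return [(kappa + math.sqrt(lam * aj)) / aj for aj in a]
```

The returned allocations saturate the delay constraint, which is exactly where the true optimum lies when the objective is increasing in every c_j.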
2.3 description of the algorithm;
the part arranges the contents in step 2.1 and step 2.2 into the following algorithm flow, and the method provided by the invention is explained in detail through the specific flow.
The invention aims to solve the problem of designing a service function chain orchestration strategy based on queuing theory and elastic resource allocation. By analyzing the relationship among the request arrival rate, the computing resources, and the processing delay, the invention provides a two-stage heuristic algorithm that effectively reduces energy consumption and network delay.
The system implementing the data-center-oriented service function chain optimization orchestration method comprises a model construction module and an SFC orchestration module connected in sequence:
1. The model construction module constructs a service function chain optimization arrangement model based on elastic resource allocation, provides a foundation for the realization of an optimization method, and specifically comprises the technical content of the step 1 of the invention.
2. After the SFC set is given, the SFC orchestration module deploys the SFCs and achieves the orchestration optimization goal, namely reducing energy consumption and network delay; it specifically includes the technical content of step 2 of the invention.
The invention effectively reduces server energy consumption, link bandwidth occupancy, and service delay, and improves the deployment success rate; the heuristic approach ensures that an orchestration result can be obtained within controllable running time. The function finally realized by the invention is to complete the optimized resource orchestration of SFCs given the network topology and SFC data.
Claims (2)
1. A data-center-oriented service function chain optimization orchestration method, characterized in that: a service function chain optimization orchestration model based on elastic resource allocation is built by analyzing the relationship among the request arrival rate, the computing resources, and the processing delay; on this basis, with energy consumption and network delay as optimization indexes, a two-stage algorithm is designed: first a feasible solution set of the problem is obtained by a greedy deployment algorithm, and then the nonlinear constrained optimization problem is solved to obtain the optimal resource allocation; the optimized configuration of resources is realized through a service function chain orchestration algorithm based on elastic resource allocation, improving the quality of service; the method specifically comprises the following steps:
1. constructing an arrangement model;
constructing a service function chain optimization arrangement model based on elastic resource allocation, and providing a foundation for realizing an optimization method;
based on a fat-tree network topology, which is divided from top to bottom into a core layer, an aggregation (convergence) layer, an edge layer, and a server layer; a k-ary fat-tree architecture comprises k Pods, each containing k/2 edge switches and k/2 aggregation switches; the number of core switches is (k/2)^2, the total number of aggregation-layer and edge-layer switches is k^2, and the total number of servers supported by the network is k^3/4; all VNFs can only be deployed in the server layer;
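The fat-tree element counts stated above can be checked with a short illustrative helper (the function name is an assumption, not part of the claimed method):

```python
def fat_tree_sizes(k):
    """Element counts for a k-ary fat tree (k even), per the text above."""
    return {
        "edge": k * (k // 2),       # k Pods, k/2 edge switches each
        "agg": k * (k // 2),        # k Pods, k/2 aggregation switches each
        "core": (k // 2) ** 2,      # (k/2)^2 core switches
        "servers": k ** 3 // 4,     # k^3/4 servers in total
    }
```

For k = 4 this gives 8 edge switches, 8 aggregation switches, 4 core switches, and 16 servers.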
1.1, constructing a dynamic resource allocation model;
on the basis of researching the influence of the resource allocation on the VNF performance and the end-to-end delay, a dynamic resource allocation model is constructed;
the underlying network is abstracted as an undirected weighted graph G(N, E), where N is the set of network nodes and E the set of links; VNFs are mostly deployed in containers, so a VNF's request-processing capability is affected by the amount of computing resources allocated to it;
an M/M/1 queuing model is adopted to model the VNF service; let κ_i be the request arrival rate of service function chain s_i and μ_{i,j} the service rate of VNF v_{i,j}; from queuing theory, the average processing time d_{i,j} is related to κ_i and μ_{i,j} as follows:
in practical application scenarios the service rate μ_{i,j} depends strongly on the allocation of computing resources, where CPU resources and memory resources are treated uniformly as computing resources; an elastic resource allocation mode is adopted: let c_{i,j} be the amount of computing resources allocated to VNF v_{i,j}, and assume a piecewise-linear relationship between the service rate μ_{i,j} and c_{i,j}:
where [c_{i,j}^{min}, c_{i,j}^{max}] is the value range of c_{i,j}, and μ_{i,j}^{min} and μ_{i,j}^{max} are the service rates corresponding to resource allocations c_{i,j}^{min} and c_{i,j}^{max}, respectively;
by equation (2), within the value range the service rate of the VNF rises linearly as the allocated resources increase; c_{i,j}^{min} and c_{i,j}^{max} are the minimum and maximum resource allocations of VNF v_{i,j}; if c_{i,j} < c_{i,j}^{min}, then v_{i,j} cannot be deployed; when c_{i,j} ≥ c_{i,j}^{max}, the service rate of v_{i,j} reaches its maximum, i.e., its processing capacity reaches the upper limit;
for simplicity of description, write μ_{i,j} = a_{i,j} · c_{i,j}, where a_{i,j} is the resource-allocation base; the average processing time d_{i,j} of VNF v_{i,j} is then given by equation (5)
The constraints to be satisfied are as follows
constraint (6) restricts the value range of c_{i,j}; constraint (7) requires that the service rate be greater than the arrival rate, since otherwise service requests would back up;
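The piecewise-linear service rate of equation (2) and the M/M/1 processing time of equations (3)-(5) can be sketched as follows; the symbol-to-parameter mapping is an assumption, and `None` marks the undeployable case c < c_min:

```python
def service_rate(c, c_min, c_max, mu_min, mu_max):
    """Piecewise-linear service rate mu(c): undeployable below c_min,
    linear between c_min and c_max, saturated at mu_max above c_max."""
    if c < c_min:
        return None                   # the VNF cannot be deployed
    if c >= c_max:
        return mu_max                 # processing capacity at its upper limit
    return mu_min + (mu_max - mu_min) * (c - c_min) / (c_max - c_min)

def avg_processing_time(c, kappa, **kw):
    """M/M/1 mean processing time 1/(mu - kappa); needs mu > kappa (eq. 7)."""
    mu = service_rate(c, **kw)
    assert mu is not None and mu > kappa, "service rate must exceed arrival rate"
    return 1.0 / (mu - kappa)
```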
1.2 constructing system constraints;
system constraints need to be considered during SFC orchestration; they fall into computing resource constraints, bandwidth resource constraints, and SFC end-to-end delay constraints; first, a VNF deployed on a physical host occupies a certain amount of computing resources and cannot exceed the resource upper limit the host can provide, so
where N is the number of network nodes (servers) of network G, R_n denotes the resource capacity of the server numbered n, M is the number of service function chains (SFCs), s_i denotes the SFC numbered i, and M_i denotes the number of VNFs contained in s_i; the 0-1 variable x_{i,j}^n is defined as follows: if the j-th VNF of s_i is deployed on network node n, x_{i,j}^n takes 1, otherwise 0; l_{m,n} (m, n ∈ N) denotes the physical link between network nodes m and n; for any physical link l_{m,n}, the sum of the bandwidth resources occupied by all SFCs mapped onto it cannot exceed the link's maximum available bandwidth, so the following constraint holds
where ρ_i is the bandwidth resource occupied by SFC s_i, B_{m,n} is the maximum available bandwidth of link l_{m,n}, and the 0-1 variable y_{i,u}^{m,n} is defined as follows: if the logical link e_{i,u} traverses l_{m,n}, y_{i,u}^{m,n} takes 1, otherwise 0, where e_{i,u} denotes the logical link of service chain s_i between VNFs u and u+1.
For any VNF in all SFCs, mapping can only be done to one server, so the following constraint holds
where z_i^n indicates whether s_i is deployed on network node n: when s_i is deployed on n, z_i^n takes 1, otherwise 0.
Second, the routing of every virtual link e_{i,u} during transmission must be unique, i.e., for any e_{i,u} the following constraint holds
Finally, the end-to-end delay requirement of each chain s_i must be guaranteed; the end-to-end delay of each SFC is assumed to consist of two parts, the transmission delay and the processing delay, where d_{m,n} is the transmission delay of link l_{m,n}; the sum of the two parts must be less than or equal to the delay threshold of the service flow, i.e., constraint (12) must be satisfied
where D_i is the total delay threshold of SFC s_i; for the processing delay, refer to equations (3) and (4).
because the data-center SFC deployment scenario places all SFCs in the same machine room, the transmission delay in constraint (12) can be neglected, and equation (12) simplifies to
1.3, selecting an optimization index;
the optimization indexes are minimized transmission delay overhead C and energy consumption overhead E, and the total delay overhead is as follows:
considering the system energy consumption overhead E further, it consists of two parts: the boot-up energy consumption E_base and the runtime energy consumption E_alloc; E_base is the energy a powered-on server consumes even when no VNF is deployed on it and is obviously proportional to the number of powered-on hosts; a 0-1 variable h_n is defined to satisfy
As equation (15) shows, for any server n, h_n = 1 if a VNF is deployed on it and 0 otherwise; the total boot-up energy consumption is E_base = γ Σ_n h_n, where γ is a coefficient and Σ_n h_n represents the number of physical servers on which VNFs are deployed; E_alloc, the runtime energy consumption of host n, is defined as the energy consumed by the deployed VNFs and is proportional to the computing resources they consume:
Where T is a coefficient, so the total energy consumption overhead is
In summary, the SFC layout optimization problem is specifically expressed as
min(E+C) (18)
s.t. (6)-(15)
2. Realizing SFC optimal arrangement;
the goal of this stage is to deploy SFCs after a given set of SFCs and achieve the optimization goal of orchestration, i.e., reduce energy consumption and network latency;
2.1 realizing node and link mapping by adopting a greedy-based strategy;
the first step of SFC orchestration is to map VNF nodes and links; the mapping process must satisfy the system constraints analyzed in step 1.2, including resource constraints, bandwidth constraints, and end-to-end network delay constraints; a greedy mapping strategy is adopted whose optimization goal is to minimize the hop count during SFC deployment, i.e., to make the transmission link shortest, so that the strategy additionally accounts for transmission delay under the premise that the SFC can be deployed successfully;
the greedy strategy selects the current optimal value at each step, so each step's solution can be regarded as a locally optimal one; the greedy algorithm decomposes the overall problem into several sub-problems, where each sub-problem is the selection of a server and a physical link when deploying one VNF;
first, a hop-count matrix hops is generated from the applied network topology; it is a square matrix whose order equals the number of servers and stores the hop counts (i.e., distances) between servers, where passing through one switch counts as one hop; mapping then proceeds on this basis; since a set of SFCs must be processed, the work is carried out in the order of sequential SFC deployment and sequential VNF deployment within one SFC;
when deployment starts, the search begins from the server numbered 1; if the computing resources that server can provide meet the VNF's resource demand and the remaining bandwidth of its directly connected link meets the SFC's bandwidth demand, the VNF is deployed on server 1; otherwise other servers are searched until deployment succeeds, or, if no server succeeds, the deployment of this SFC ends; after a successful deployment, the next VNF starts its resource search from the current server; when a deployment attempt fails, the VNF searches servers in the order given by the hops matrix, preferring servers with smaller hop counts; the search order when deploying a VNF is: the current server, servers connected to the same edge switch, servers in the same Pod, then servers in other Pods;
in the fat-tree topology, if two servers are connected to the same edge switch, only one link exists between them; when the two servers are located in the same Pod or in different Pods, two links exist between them, with equal hop counts; after the candidate server is determined, whether the link between the two servers satisfies the bandwidth constraint is checked; if so, the deployment finishes; if not, the next server is selected until a server satisfying the constraints is found, and if none exists, the SFC cannot be deployed;
2.2 optimizing resource allocation;
after the heuristic mapping algorithm of the previous stage yields the VNF-to-virtual-machine mapping, the variables x_{i,j}^n and y_{i,u}^{m,n} are known, so the boot-up deployment energy consumption E_base is a determined value; defining the variable c_{i,j}, problem (18) is further simplified to the following constrained nonlinear optimization problem
the first constraint in problem (19) requires that, within the value range of c_{i,j}, the condition μ_{i,j} > κ_i can always be satisfied; its physical meaning is that, by adjusting the resource allocation, the service rate μ_{i,j} of VNF v_{i,j} can always be made greater than the request arrival rate κ_i of the dynamic service chain s_i; if this condition cannot be met, the dynamic service chain s_i cannot be deployed; the second constraint guarantees the delay bound; the last constraint ensures that the allocated computing resources do not exceed the resource upper limit of any server;
the constrained nonlinear optimization problem is solved via the KKT conditions, which generalize the Lagrange multiplier method; the augmented Lagrangian function of optimization problem (19) is
where λ and γ_n are the KKT multipliers; according to the KKT conditions, solving the optimization problem requires that the following conditions be met:
when the original problem is a convex optimization problem, the KKT conditions are necessary and sufficient for a set of optimal solutions, i.e., solutions that make the objective function attain its global minimum; the final solution obtained is the amount of resources allocated to each VNF, which enables the orchestration method to achieve high resource utilization and low latency.
2. The system for implementing the data center-oriented service function chain optimization orchestration method according to claim 1, wherein: the system comprises a model construction module and an SFC arrangement module which are connected in sequence; wherein
The model construction module constructs a service function chain optimization arrangement model based on elastic resource allocation, and provides a basis for the realization of an optimization method;
after the SFC set is given, the SFC arranging module deploys the SFCs and achieves the optimization goal of arranging, namely, reducing energy consumption and network delay.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111101331.0A CN113918277A (en) | 2021-09-18 | 2021-09-18 | Data center-oriented service function chain optimization arrangement method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113918277A true CN113918277A (en) | 2022-01-11 |
Family
ID=79235356
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111101331.0A Pending CN113918277A (en) | 2021-09-18 | 2021-09-18 | Data center-oriented service function chain optimization arrangement method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113918277A (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114466059A (en) * | 2022-01-20 | 2022-05-10 | 天津大学 | Method for providing reliable service function chain for mobile edge computing system |
CN114124713A (en) * | 2022-01-26 | 2022-03-01 | 北京航空航天大学 | Service function chain arrangement method for operation level function parallel and self-adaptive resource allocation |
CN114650234A (en) * | 2022-03-14 | 2022-06-21 | 中天宽带技术有限公司 | Data processing method and device and server |
CN114650234B (en) * | 2022-03-14 | 2023-10-27 | 中天宽带技术有限公司 | Data processing method, device and server |
CN114827284A (en) * | 2022-04-21 | 2022-07-29 | 中国电子技术标准化研究院 | Service function chain arrangement method and device in industrial Internet of things and federal learning system |
CN114827284B (en) * | 2022-04-21 | 2023-10-03 | 中国电子技术标准化研究院 | Service function chain arrangement method and device in industrial Internet of things and federal learning system |
CN115118748A (en) * | 2022-06-21 | 2022-09-27 | 上海交通大学 | Intelligent manufacturing scene micro-service deployment scheme and resource redistribution method |
CN115118748B (en) * | 2022-06-21 | 2023-09-26 | 上海交通大学 | Intelligent manufacturing scene micro-service deployment scheme and resource redistribution method |
CN115913952A (en) * | 2022-11-01 | 2023-04-04 | 南京航空航天大学 | Efficient parallelization and deployment method of multi-target service function chain based on CPU + DPU platform |
US11936758B1 (en) | 2022-11-01 | 2024-03-19 | Nanjing University Of Aeronautics And Astronautics | Efficient parallelization and deployment method of multi-objective service function chain based on CPU + DPU platform |
CN115865706A (en) * | 2022-11-28 | 2023-03-28 | 国网重庆市电力公司电力科学研究院 | 5G network capability opening-based power automatic business arrangement method |
CN115955402B (en) * | 2023-03-14 | 2023-08-01 | 中移动信息技术有限公司 | Service function chain determining method, device, equipment, medium and product |
CN115955402A (en) * | 2023-03-14 | 2023-04-11 | 中移动信息技术有限公司 | Service function chain determining method, device, equipment, medium and product |
CN116401055A (en) * | 2023-04-07 | 2023-07-07 | 天津大学 | Resource efficiency optimization-oriented server non-perception computing workflow arrangement method |
CN116401055B (en) * | 2023-04-07 | 2023-10-03 | 天津大学 | Resource efficiency optimization-oriented server non-perception computing workflow arrangement method |
CN116545876B (en) * | 2023-06-28 | 2024-01-19 | 广东技术师范大学 | SFC cross-domain deployment optimization method and device based on VNF migration |
CN116545876A (en) * | 2023-06-28 | 2023-08-04 | 广东技术师范大学 | SFC cross-domain deployment optimization method and device based on VNF migration |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||