CN112732409A - Method and device for enabling zero-time-consumption network flow load balancing under VNF architecture - Google Patents

Method and device for enabling zero-time-consumption network flow load balancing under VNF architecture

Info

Publication number
CN112732409A
CN112732409A (application CN202110081628.9A)
Authority
CN
China
Prior art keywords
vnf
packet receiving
judgment
load
queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110081628.9A
Other languages
Chinese (zh)
Other versions
CN112732409B (en)
Inventor
李健
张沪滨
管海兵
殷豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN202110081628.9A priority Critical patent/CN112732409B/en
Publication of CN112732409A publication Critical patent/CN112732409A/en
Application granted granted Critical
Publication of CN112732409B publication Critical patent/CN112732409B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/505 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • G06F 2009/4557 Distribution of virtual machine instances; Migration and load balancing
    • G06F 2009/45595 Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a method and a device for enabling zero-time-consumption network flow load balancing under a VNF architecture, relating to the field of communications technology. The method maintains a plurality of packet receiving queues for each VNF and decouples the queues from the VNFs through customized packet receiving APIs, so that ownership of a queue can be transferred between VNFs. The network policy maintains both the mapping between each packet receiving queue and the VNF to which it belongs and the mapping between network flow five-tuples and packet receiving queues, keeping the packet receiving buffers held by same-function VNFs on the same host as load-balanced as possible. When elastic scaling of a VNF occurs, ownership of part of the packet receiving buffers held by the original VNF is transferred directly to the newly created VNF, so that the new VNF immediately processes packets of existing flows and load balancing is achieved at almost zero time cost. The method and the device also support a cross-node VNF packet receiving process based on an RDMA-like protocol, enhancing the flexibility of the NFV platform in handling heavy-load tasks.

Description

Method and device for enabling zero-time-consumption network flow load balancing under VNF architecture
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a method and an apparatus for enabling zero-time-consumption network flow load balancing in a VNF architecture.
Background
NFV (Network Functions Virtualization) is a technology that builds virtual machines on inexpensive general-purpose servers to replace conventional NFs (Network Functions) such as dedicated hardware switches, hardware routers, and hardware firewalls. A software NF deployed on an NFV platform is called a VNF (Virtualized Network Function). A cloud operator provides services through a single VNF or a chain of multiple VNFs, which greatly shortens the development cycle, reduces deployment cost, lowers the difficulty of automated management, and enhances the flexibility of the whole platform.
To utilize computing resources efficiently, an NFV platform needs VNF deployment to change dynamically with the traffic load density, i.e., elastic scaling of VNFs. In general, VNF elastic scaling covers the following scenarios: capacity expansion, in which the number of VNFs grows with the traffic or the configuration of a single VNF is strengthened with the traffic; and capacity reduction, in which the number of VNFs or the configuration of a single VNF shrinks as the traffic decreases. Among these, changing the number of VNFs is the most common form of elastic scaling on an NFV platform. For VNFs with the same function, the load depends on the number of network flows served and the activity of those flows, so the traffic load balancing problem of the VNFs can be approximated by the load balancing problem of the network flows they serve. When VNF elastic scaling occurs, how to enable load balancing, and how to enable it as fast as possible, directly affects the scaling effect and the pressure resistance of the NFV platform, and thus has great optimization value.
Existing work on VNF elastic scaling mostly focuses on when to expand capacity, i.e., how to predict the arrival of a large number of user requests and adjust the number of VNFs in advance; far less research addresses how to perform efficient load balancing. A common approach to network flow load balancing is: for an existing network flow, its destination VNF is left unchanged; for a newly added network flow, the destination VNF is recalculated, e.g., by taking the hash of the five-tuple modulo the number of VNFs, so that it may land on a newly created VNF node. As an optimization, an existing flow that has not received packets for a long time can be moved to a new node by updating the network policy. The drawbacks are: long-lived flows with frequent packets cannot be transferred effectively. In the extreme case where new flows are sparse, load balancing is hard to achieve, with or without the optimization, no matter how many VNFs are added. Moreover, when ownership of a network flow is moved between VNFs, the network policy has to be modified, and this modification invalidates the "five-tuple → target VNF" mapping cache of the packet forwarding component; a packet forwarding component that is already saturated must then spend additional resources recalculating target VNFs from the complex network policy, further harming forwarding efficiency. Note that expanding the resources of a single VNF is not a universal answer to the pressure problem described above.
On the one hand, resource expansion constrains the implementation of the VNF: for example, a single-process VNF based on Docker (an application container engine) scales well by increasing the number of VNFs, but can hardly exploit multiple CPUs through resource expansion alone. On the other hand, resource expansion is limited by the total resources of the current host.
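The hash-modulo scheme criticised above can be sketched as follows (a hypothetical illustration, not code from the patent; all names are invented): new flows pick a target VNF by hashing the five-tuple modulo the VNF count, while existing flows keep their recorded destination, so scaling out never moves old traffic.

```python
# Prior-art sketch: hash(five-tuple) mod N for new flows only.
import hashlib

def target_vnf(five_tuple, n_vnfs, existing):
    if five_tuple in existing:                    # old flow: destination frozen
        return existing[five_tuple]
    h = int(hashlib.sha256(five_tuple.encode()).hexdigest(), 16)
    vnf = h % n_vnfs                              # new flow: modulo of the hash
    existing[five_tuple] = vnf
    return vnf

existing = {}
before = target_vnf("10.0.0.1:80->10.0.0.2:5000/tcp", 2, existing)
after  = target_vnf("10.0.0.1:80->10.0.0.2:5000/tcp", 4, existing)  # scaled out
assert before == after     # the existing flow never moves to a new VNF
```

This makes the drawback concrete: once a flow is in `existing`, increasing `n_vnfs` has no effect on it.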
Therefore, those skilled in the art are devoted to developing a method and apparatus for enabling zero-time network flow load balancing in a VNF architecture.
Disclosure of Invention
In view of the above drawbacks of the prior art, the technical problem to be solved by the present invention is to provide a method and a device that achieve zero-time-consumption traffic load balancing when VNF elastic scaling occurs, so as to optimize the VNF elastic scaling effect and improve the pressure resistance of the NFV platform.
To achieve the above object, the present invention provides a method for enabling zero-time-consuming network flow load balancing under a VNF architecture, comprising the following steps:
Step A: divide the VNFs into several VNF sets by function, so that all VNFs in each set have the same function and are load-balanced with each other; meanwhile, create a number of packet receiving queues in shared memory, allocate several packet receiving queues to each existing VNF, and maintain the load condition of each packet receiving queue.
Step B: for each VNF set, when a new network flow arrives, perform a first judgment: whether the set contains a first VNF that is located on the current host and has a packet receiving queue that has not reached the rated load. If the first judgment is positive, bind the new network flow to that under-loaded packet receiving queue of the first VNF; if negative, select from the set a ninth VNF whose packet receiving queues have the lightest total load, allocate a new packet receiving queue to it, and bind the new network flow to the newly allocated queue.
Step C: perform a second judgment and a third judgment for each VNF set. The second judgment: whether there is a second VNF having at least one packet receiving queue whose actual load is far above the rated threshold; if so, allocate a new packet receiving queue to the second VNF and rebind part of the network flows of the overloaded original queue to the new queue. The third judgment: whether a single VNF holds several third packet receiving queues whose actual loads are far below the rated threshold; if so, divide those queues into one queue to be kept and several queues to be released, bind the network flows of all of them to the kept queue, and release the others.
Step D: perform a fourth judgment on each VNF set: whether the set contains at least one heavy-load VNF whose number of packet receiving queues far exceeds a second rated threshold; if so, expand the capacity of the VNF set containing the heavy-load VNF.
Step F: perform a fifth judgment and a sixth judgment on each VNF set. The fifth judgment: whether the set contains multiple light-load VNFs whose numbers of packet receiving queues are far below the second rated threshold; the sixth judgment: whether local resources are sufficient. If both are positive, reduce the capacity of the VNF set containing the light-load VNFs.
Step G: perform a seventh judgment and an eighth judgment on each VNF set. The seventh judgment: whether a remote VNF located on a remote host exists; the eighth judgment: whether the physical resources of the current host are sufficient. If both are positive, create a new, locally located eighth VNF, reassign the packet receiving queues owned by the remote VNF to the local eighth VNF, and then release the remote VNF.
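Steps A and B above can be sketched as follows (an illustrative model, not an implementation from the patent; `RATED_LOAD`, the class names, and the function names are all hypothetical):

```python
# Sketch of steps A-B: each VNF owns several receive queues; a new network
# flow is bound to a queue that has not yet reached its rated load, otherwise
# the lightest-loaded VNF in the set is given a fresh queue.

RATED_LOAD = 3  # hypothetical rated load (flows per queue)

class RxQueue:
    def __init__(self, qid):
        self.qid = qid
        self.flows = []          # five-tuples bound to this queue

    def load(self):
        return len(self.flows)

class VNF:
    def __init__(self, name):
        self.name = name
        self.queues = []         # receive queues currently owned by this VNF

    def total_load(self):
        return sum(q.load() for q in self.queues)

def bind_new_flow(vnf_set, five_tuple, next_qid):
    """Step B: prefer an under-loaded queue; otherwise grow the lightest VNF."""
    for vnf in vnf_set:
        for q in vnf.queues:
            if q.load() < RATED_LOAD:
                q.flows.append(five_tuple)
                return vnf, q
    # No under-loaded queue exists: give the VNF with the lightest total load
    # a fresh queue and bind the new flow to it.
    lightest = min(vnf_set, key=lambda v: v.total_load())
    q = RxQueue(next_qid)
    q.flows.append(five_tuple)
    lightest.queues.append(q)
    return lightest, q
```

Because queues are objects independent of the `VNF` holding them, moving a queue between VNFs later (steps D-G) is just a list operation, which is the point of the decoupling.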
Further, the capacity expansion in the step D includes local capacity expansion and non-local capacity expansion;
further, the local capacity expansion includes determining a local resource condition of the current host, and if the local resource is sufficient, newly building a fourth VNF locally, and reallocating a part of the packet receiving queue of the heavy-load VNF to the fourth VNF, so that the load balancing is finished.
Further, the non-local capacity expansion includes judging the local resource condition of the current host; if the local resources are insufficient, step E is executed.
Further, the step E includes selecting a suitable idle host, creating a remote fifth VNF on the idle host, establishing remote direct data access communication between the current host and the idle host, and transferring a part of the packet receiving queue of the heavy-load VNF to the fifth VNF, so that the fifth VNF completes packet receiving through remote memory access, and load balancing ends.
Further, the capacity reduction in the step F includes, for the plurality of light-load VNFs, binding all packet receiving queues owned by the plurality of light-load VNFs to one of the light-load VNFs, and releasing the other light-load VNFs.
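The capacity-reduction rule of step F might look like the following sketch (hypothetical names; queue ownership is modelled as a plain dict):

```python
# Sketch of step F: all queues owned by several light-load VNFs are handed to
# one of them, and the remaining VNFs are released.

def shrink(light_vnfs):
    """light_vnfs: dict vnf_name -> list of queue ids. Returns (keeper, released)."""
    names = sorted(light_vnfs)
    keeper, released = names[0], names[1:]
    for name in released:
        light_vnfs[keeper].extend(light_vnfs[name])  # transfer queue ownership
        light_vnfs[name] = []                        # this VNF can be released
    return keeper, released
```

Only the ownership table changes; the flows bound inside each queue are untouched, so no per-flow rebinding is needed during scale-in.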
The invention also provides a device for enabling zero-time-consumption network flow load balancing under the VNF architecture, comprising a capacity expansion module, a capacity reduction module, a management component, a packet forwarding component, and an API component;
the capacity expansion module is configured to perform a local capacity expansion operation and a non-local capacity expansion operation;
the capacity reduction module is configured to perform a capacity reduction operation.
Further, the management component provides notifications to the API component regarding the unloading and filling of packet receiving queues.
Further, the API component provides a report to the management component regarding the receive packet queue load condition in the current VNF.
Further, the packet forwarding component queries a packet receiving queue included in the VNF and a load condition of the packet receiving queue.
Compared with the prior art, the invention has the beneficial technical effects that:
(1) the invention completes load balance through the transfer of the home right of the packet receiving queue when VNF elastic expansion and contraction occur, so that the existing or newly added network flow can be served by the new VNF.
(2) Transferring a packet receiving queue involves only the queue's ownership, not the "{sending VNF, five-tuple} → packet receiving queue" mapping; it is therefore a zero-time-consumption operation, so load balancing takes effect immediately and the pressure on existing VNFs is relieved efficiently.
(3) The invention maintains the "{sending VNF, five-tuple} → packet receiving queue" mapping in the network policy; the packet forwarding component computes this mapping from the network policy when it forwards a corresponding data packet for the first time and caches it for subsequent use. When VNF elastic scaling occurs, only the ownership of the packet receiving queue is transferred and the mapping itself is unchanged, so the cache maintained by the packet forwarding component never needs to be invalidated. The packet forwarding process therefore keeps running efficiently and introduces no new bottleneck when the network is already heavily loaded.
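A toy model of the idea behind effects (2) and (3): the forwarding cache maps {sending VNF, five-tuple} to a queue, while queue-to-VNF ownership lives in a separate table, so an ownership transfer never touches the cache (all identifiers here are illustrative, not from the patent):

```python
# The forwarding cache and the ownership table are independent structures.
forward_cache = {("VNF1", "f1"): "q2.3"}   # {send VNF, 5-tuple} -> queue
queue_owner   = {"q2.3": "VNF2"}           # queue -> owning VNF

def transfer_queue(queue_id, new_owner):
    queue_owner[queue_id] = new_owner      # the whole "zero-time" transfer

transfer_queue("q2.3", "VNF4")             # elastic scale-out: VNF4 takes q2.3
assert forward_cache[("VNF1", "f1")] == "q2.3"  # cache entry still valid
assert queue_owner["q2.3"] == "VNF4"
```

In the hash-modulo prior art the cache key effectively points at a VNF, so any transfer invalidates it; here the indirection through the queue id is what makes the transfer free.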
(4) The invention uses an RDMA (remote direct memory access)-like mechanism to let a remote VNF access local packet receiving queues efficiently, breaking through the limit of the local host's total resources and achieving load balancing across physical nodes. Meanwhile, remotely accessed packet receiving queues do not take part in subsequent load balancing, the number of remote accesses is bounded, and the remote VNF is replaced by a local VNF once the current host's load becomes light, so that packet receiving proceeds more efficiently.
The conception, the specific structure and the technical effects of the present invention will be further described with reference to the accompanying drawings to fully understand the objects, the features and the effects of the present invention.
Drawings
FIG. 1 is a schematic diagram illustrating local capacity expansion of a VNF according to a preferred embodiment of the present invention;
fig. 2 is a schematic diagram of non-local capacity expansion of a VNF across nodes according to a preferred embodiment of the present invention.
Detailed Description
The technical contents of the preferred embodiments of the present invention will be more clearly and easily understood by referring to the drawings attached to the specification. The present invention may be embodied in many different forms of embodiments and the scope of the invention is not limited to the embodiments set forth herein.
As shown in fig. 1, the apparatus for enabling zero-time-consuming network flow load balancing under a VNF architecture of the present invention includes a management component, a packet forwarding component, and a packet receiving API component. The three components execute respective tasks and interact based on platform resources and the built memory data structure thereof, and jointly maintain the operation of the internal system of the host machine. The connections and arrows in the figure represent the interaction between the components and the resources, and the functions of the three components are first explained below.
A management component: the component maintains the affiliation of each packet receiving queue and the VNF; determining whether a packet receiving queue needs to be added to the current VNF or not by combining the load condition of each packet receiving queue reported by the API component; meanwhile, the component determines and initiates the VNF scaling management according to the network topology condition and the load condition of the VNF with the same function.
API layer component: provides the packet receiving API. Each VNF has an API layer component and performs its packet receiving operation through the API it provides. Preferably, the packet receiving behavior traverses the packet receiving queues held by the current VNF; each queue stores pointers to data packets located in the shared memory area.
A packet forwarding component: providing packet forwarding functionality between VNFs. Specifically, data packets in the VNF packet sending queue are read, the quintuple of the data packets is obtained, a target packet receiving queue is obtained based on the quintuple, and the pointer is removed from the VNF packet sending queue and is recorded into the corresponding packet receiving queue. Preferably, the packet forwarding component maintains a mapping of "{ send packet VNF, quintuple } → receive packet queue". Preferably, when forwarding the corresponding data packet for the first time, the packet forwarding process needs to calculate the target packet receiving queue by combining the network policy and the attribution condition of the packet receiving queue-VNF, and cache the relationship into the table, and when forwarding the data packet of this type subsequently, the cache is hit without complex calculation.
Three components maintain a pre-agreed shared buffer to achieve:
the management component notifies the API component about unloading and filling of the queue;
the API component reports the load condition of each queue of the current VNF to the management component;
and the packet forwarding component queries the queue condition and the load condition of the target VNF.
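These three shared-buffer interactions can be modelled minimally as follows (a sketch with invented names, not the actual shared-memory layout):

```python
# Minimal model of the shared-buffer contract between the three components:
# the management component posts queue assignment notices, the API layer
# reports per-queue load, and the packet forwarding component reads both to
# pick a target queue for a VNF.

shared = {"notices": [], "loads": {}, "ownership": {}}

def mgmt_assign(vnf, qid):                       # management component
    shared["ownership"][qid] = vnf
    shared["notices"].append(("fill", vnf, qid))

def api_report_load(qid, pkts_per_sec):          # API layer component
    shared["loads"][qid] = pkts_per_sec

def fwd_pick_queue(vnf):                         # packet forwarding component
    owned = [q for q, v in shared["ownership"].items() if v == vnf]
    return min(owned, key=lambda q: shared["loads"].get(q, 0))

mgmt_assign("VNF2", "q2.1"); mgmt_assign("VNF2", "q2.2")
api_report_load("q2.1", 120.0); api_report_load("q2.2", 30.0)
assert fwd_pick_queue("VNF2") == "q2.2"          # least-loaded queue wins
```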
In fig. 1, only 5 VNFs are involved for simplicity. Where VNF1 is the overall entry of the host, then its data packets are forwarded to VNF2 and VNF3 as needed, and VNF2 and VNF3 are a group of VNFs with the same function, and load-balanced with each other, and the data packets sent out from these two VNFs are summarized to VNF5, and leave the current host after processing of VNF 5.
The device for enabling zero-time-consumption network flow load balancing under the VNF architecture further comprises a capacity expansion module and a capacity reduction module. The capacity expansion module is configured to perform a local capacity expansion operation and a non-local capacity expansion operation. The capacity reduction module is configured to perform a capacity reduction operation.
The operation flow of the present invention in the local capacity expansion scenario is described in detail below with reference to fig. 1.
Step D01: in an initial state, three sets of VNFs, namely { VNF1}, { VNF2, VNF3}, and { VNF5} are in the current host, wherein VNF2 and VNF3 have the same functions and are load balanced with each other. The management component creates a plurality of packet receiving queues in the shared memory, allocates the plurality of packet receiving queues to each existing VNF, and maintains the load condition of each packet receiving queue.
Step D02: when a new network flow arrives, the management component reviews the set of VNFs that can handle the network flow. If a VNF located on the current host exists in the set and the VNF has a receive queue that has not reached the rated load, the management component binds the network flow with the receive queue of the VNF. Otherwise, the management component selects the VNF with the lightest total load of the packet receiving queue from the VNF set, allocates a new packet receiving queue to the VNF, and binds the network flow with the new packet receiving queue. The mapping relationships established by the management component are enforced by the packet forwarding component. In the figure, taking VNF2 as an example of having a packet receiving queue 2.1 that has not yet reached the rated load, the management component establishes a mapping relationship of "{ VNF1, quintuple } → 2.1 packet receiving queue of VNF 2", and when the packet forwarding component traverses from the packet sending queue of VNF1 to the corresponding data packet, the packet forwarding component delivers the corresponding data packet to the 2.1 packet receiving queue of VNF 2. Then, the VNF2 traverses the packet receiving queues 2.1 to 2.3 owned by the VNF through a packet receiving API provided by the API layer, and completes receiving the network stream data packet.
Step D03: the management component performs load balancing between the receive packet queues of the same VNF. For a single VNF, if the load of a certain packet receiving queue is far higher than a rated threshold value, a new packet receiving queue is allocated to the VNF, and part of network flows in the original packet receiving queue are bound to the new packet receiving queue again. On the other hand, if a plurality of packet receiving queues with loads far lower than the rated threshold exist, the related network flows of the queues are bound to one queue, and other queues are released, so that the combination of different packet receiving queues is completed.
Step D04: the management component performs local capacity expansion of the VNF set as needed. In the VNF set, if the number of the packet receiving queues of a certain heavy-load VNF far exceeds a rated threshold, the VNF set is expanded. Specifically, the physical resource condition of the current host is determined, if the physical resources are sufficient, a local new VNF is established, and a part of the packet receiving queue of the heavy-load VNF is reallocated to the new local new VNF, and the load balancing is finished. In the figure, a { VNF2, VNF3} set is expanded, a VNF4 is newly created, and an original VNF2 packet receiving queue 2.3 and an original VNF3 packet receiving queue 3.1 are handed over to a VNF4 for processing. In the load balancing process, the API layer component of each VNF learns the occurrence of capacity expansion through the interaction with the shared memory information of the management component, so that the traversed packet receiving queue set is updated.
Preferably, the creation of the receive queue by the management component in step D01 is performed in the form of pre-allocation of a resource pool, where the initial number of receive queues allocated by each VNF is 1.
Preferably, in step D02, the packet forwarding process acquires the load condition and the corresponding rated value of each packet receiving queue of the VNF from the shared memory. If some packet receiving queue's load has reached the rated value, which suggests that other queues will reach it as well, the packet forwarding process applies to the management component, through the shared memory, for an additional packet receiving queue for the current VNF. For performance, the packet forwarding process does not wait for the VNF to finish assembling the new queue, but binds the current data flow to the least-loaded packet receiving queue of the current VNF. The rated value is set by the management component according to the network topology when the VNF of the corresponding function is started.
Preferably, the load condition of the packet receiving queue in step D03 is determined according to the current received dense condition. When executing packet receiving, the API layer increases the packet receiving count of the current VNF packet receiving queue maintained in the shared memory by 1, the management component periodically refers to the shared memory, obtains the packet receiving count of the corresponding VNF queue within a period time, divides the packet receiving count by the period time to obtain the packet receiving count within a unit time, and uses the packet receiving count as a load value of the corresponding packet receiving queue, and resets the packet receiving count of the corresponding packet receiving queue to 0 for subsequent statistics.
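The counting scheme described above reduces to a few lines (illustrative only; the shared-memory counter is modelled as a dict):

```python
# Sketch of the preferred load metric: the API layer bumps a per-queue packet
# counter; the management component periodically divides the count by the
# period length to get packets per unit time, then resets the counter.

counters = {"q2.1": 0}

def on_packet_received(qid):
    counters[qid] += 1          # API layer: one increment per received packet

def sample_load(qid, period_seconds):
    count = counters[qid]
    counters[qid] = 0           # reset for the next statistics window
    return count / period_seconds   # packets per unit time = load value

for _ in range(50):
    on_packet_received("q2.1")
assert sample_load("q2.1", 5.0) == 10.0   # 50 packets over 5 s -> 10 pkt/s
assert counters["q2.1"] == 0
```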
Preferably, the threshold value in step D04 is set by the management component according to the network topology, and is allowed to change dynamically. For example, the threshold value is 3, which means that each VNF holds at most three receive queues, and when all receive queues reach the rated load, the VNF scaling is started.
Preferably, when VNF4 is created in step D04, its packet sending queue is created as well, and the "{sending VNF, five-tuple} → packet receiving queue" mappings that the packet forwarding component maintains for the sending queues of the original VNF2 and VNF3 are copied for use by VNF4; for example, "{VNF2, five-tuple f1} → VNF5 packet receiving queue 5.1" is copied into "{VNF4, five-tuple f1} → VNF5 packet receiving queue 5.1". For a newly added network flow sent from VNF4, the destination packet receiving queue must still be computed from the network policy and the target VNF load condition. Note that the mappings "{VNF1, five-tuple} → packet receiving queues 2.1-3.3" need not be changed. In summary, the invention fully preserves the validity of the packet forwarding component's local forwarding-table cache and avoids the extra recalculation overhead caused by cache invalidation during VNF elastic scaling.
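The cache-copy step in D04 can be sketched as follows (entry names such as `f1` and `q5.1` mirror the example in the text; the function itself is hypothetical):

```python
# Sketch: when VNF4 is cloned from VNF2/VNF3, every send-side forwarding entry
# of the originals is duplicated under VNF4, so packets VNF4 emits hit the
# forwarding cache immediately instead of triggering a policy recalculation.

fwd = {("VNF2", "f1"): "q5.1", ("VNF3", "f2"): "q5.2"}

def clone_entries(fwd, originals, new_vnf):
    for (src, five_tuple), rx_queue in list(fwd.items()):
        if src in originals:
            fwd[(new_vnf, five_tuple)] = rx_queue
    return fwd

clone_entries(fwd, {"VNF2", "VNF3"}, "VNF4")
assert fwd[("VNF4", "f1")] == "q5.1"   # copied from VNF2's entry
assert fwd[("VNF4", "f2")] == "q5.2"   # copied from VNF3's entry
```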
Fig. 2 shows an example of a flow of cross-node load balancing in a VNF capacity expansion scenario according to the present invention. For simplicity, only VNF1, VNF2, VNF3 and VNF5 are provided on the host machine 1. The VNF1 is an overall entry of the host 1, and the packets therein are forwarded to the VNF2 and the VNF3 with the same function in a load-balanced manner, and are respectively processed and then forwarded to the VNF5 of the host 2. Wherein, VNF2 holds packet receiving queues 2.1-2.3, and VNF3 holds packet receiving queues 3.1-3.3.
The following describes in detail the operation flow of the present invention in a cross-node non-local capacity expansion scenario with reference to fig. 2.
Step E01-step E03 occur on host 1 and correspond to step D01-step D03 of fig. 1.
Step E04: the management component on host 1 selects a suitable idle host 2 according to the overall load condition of the current NFV platform and creates a remote VNF4 on host 2. Meanwhile, host 1 establishes RDMA (remote direct memory access) communication with host 2, hands the packet receiving queue 2.3 of the original VNF2 and the packet receiving queue 3.1 of the original VNF3 over to the remote VNF4, and lets VNF4 complete packet reception through remote memory access via the API layer; load balancing is then finished.
Preferably, step E04 preferentially selects a host that holds the next VNF in the service chain of the expansion objects (i.e., VNF2 and VNF3 in the present embodiment), here host 2, and establishes the new VNF there, such as VNF4 of fig. 2. This minimizes cross-node transmission.
Preferably, the RDMA-like link in step E04 is established between the management component of host 1 and the packet receiving API of VNF4. After the memory access permission handshake of the RDMA-like protocol completes, VNF4, through the API layer, traverses the packet receiving queue on host 1 in a one-sided RDMA manner to obtain data packet pointers, calculates each packet's location on host 1 from the pointer value, reads the packet content via one-sided RDMA into a shared buffer on host 2, and then puts a host-2 pointer to the corresponding packet into a packet sending queue of the host for subsequent processing.
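The one-sided receive path above can be sketched as follows. This is a mock for illustration only: `rdma_read` stands in for a real one-sided RDMA read verb (which would not involve host 1's CPU), and the base address, slot size, and buffer layout are all assumptions, not details from the patent.

```python
REMOTE_BASE = 0x1000   # assumed base address of host 1's packet pool
PKT_SIZE = 64          # assumed fixed packet slot size

# Mock of host 1's memory, addressed by packet slot.
remote_memory = {0x1000: b"pkt-A", 0x1040: b"pkt-B"}

def rdma_read(addr):
    # Placeholder for a one-sided RDMA read of host 1's memory.
    return remote_memory[addr]

def receive_remote(queue_ptrs, local_buffer, send_queue):
    for ptr in queue_ptrs:                    # 1) traverse remote queue pointers
        addr = REMOTE_BASE + ptr * PKT_SIZE   # 2) compute packet location on host 1
        pkt = rdma_read(addr)                 # 3) read packet content via RDMA
        local_buffer.append(pkt)              # 4) place it in host 2's shared buffer
        send_queue.append(len(local_buffer) - 1)  # 5) enqueue a host-2 pointer

buf, sq = [], []
receive_remote([0, 1], buf, sq)
assert buf == [b"pkt-A", b"pkt-B"] and sq == [0, 1]
```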
Preferably, in step E04 the management component of each node records the relationship between VNFs with remote interaction and their packet receiving queues, including the source VNF of each such queue, and returns the queue to its source when the load becomes light. This controls and reduces the number of remote memory accesses: remote interaction is used only when necessary, and once resources are idle the interaction mode is switched back to the more efficient local memory access.
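The bookkeeping just described can be sketched as below. All names (`remote_bindings`, `bind_remote`, `revert_if_idle`) are illustrative; the patent only requires that the source VNF of each remotely served queue be recorded so the queue can return to local access when resources free up.

```python
remote_bindings = {}   # queue id -> (remote VNF, source VNF)

def bind_remote(queue_id, remote_vnf, source_vnf):
    # Record that this queue is now served by a remote VNF.
    remote_bindings[queue_id] = (remote_vnf, source_vnf)

def revert_if_idle(queue_id, local_resources_free):
    # Switch back to the cheaper local memory access once resources allow,
    # returning the queue to the VNF it originally came from.
    if local_resources_free and queue_id in remote_bindings:
        _, source_vnf = remote_bindings.pop(queue_id)
        return source_vnf
    return None

bind_remote("2.3", "VNF4", "VNF2")
assert revert_if_idle("2.3", local_resources_free=False) is None  # stay remote
assert revert_if_idle("2.3", local_resources_free=True) == "VNF2"  # go local
```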
The operation flow of the present invention in the VNF capacity reduction scenario is further given below.
Step F01: for each VNF set, if the management component detects that the set contains multiple lightly loaded VNFs whose number of packet receiving queues is well below the rated threshold, and that local resources are sufficient, the set is scaled down. Specifically, the packet receiving queues owned by the lightly loaded VNFs are bound to one of them, and the other VNFs are released. Taking fig. 1 as an example, as the network load changes and the packet receiving queues of each VNF are split and merged, if the management component detects that VNF2 has 2 packet receiving queues while VNF3 and VNF4 each hold only a single one, VNF3 and VNF4 are merged: for example, the packet receiving queue of VNF4 is reassigned to VNF3 and VNF4 is released.
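The merge of step F01 can be sketched as follows. This is an illustrative model only: VNFs are represented as dictionary keys mapped to their queue lists, and the choice of survivor (here simply the first listed light VNF) is a simplification of the preference rule given later.

```python
def merge_light_vnfs(vnf_queues, light_vnfs):
    # vnf_queues: VNF name -> list of its packet receiving queues.
    # Keep the first light VNF as the survivor, rebind the queues of the
    # remaining light VNFs onto it, and release (delete) those VNFs.
    survivor, *victims = light_vnfs
    for v in victims:
        vnf_queues[survivor].extend(vnf_queues.pop(v))
    return survivor

# Mirror of the fig. 1 example: VNF3 and VNF4 each hold a single queue.
queues = {"VNF2": ["2.1", "2.2"], "VNF3": ["3.1"], "VNF4": ["4.1"]}
merge_light_vnfs(queues, ["VNF3", "VNF4"])
assert queues == {"VNF2": ["2.1", "2.2"], "VNF3": ["3.1", "4.1"]}
```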
Step F02: for each VNF set, if the management component detects that the set contains a remote VNF located on a remote host and the physical resources of the current host are sufficient, a local VNF is newly created, the packet receiving queues owned by the remote VNF are reassigned to it, and the remote VNF is then released. Taking fig. 2 as an example, as the network load changes, if the management component on host 1 detects that host 1's physical resources are sufficient, a new VNF is built on host 1, the queues 2.3 and 3.1 owned by the remote VNF4 are reassigned to it, and VNF4 is then released.
Preferably, step F01 is performed periodically by the management component on the current host. The management component preferentially selects a VNF which is located on the current host machine and has the lightest total load of the packet receiving queues in the VNF set, redistributes the packet receiving queues of the VNF and then releases the VNF.
Preferably, in step F02, if a remote VNF is successfully converted into a local VNF, step F01 is executed once so that the returned VNF immediately participates in local capacity reduction. For example, if the VNF set contains a remote VNF and a local VNF whose numbers of packet receiving queues are both much smaller than the rated threshold, the remote VNF is first converted into a local VNF by step F02, and the two local VNFs are then merged by step F01, completing the capacity reduction.
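The combined shrink path (F02 followed by F01) can be sketched as below. All names are illustrative assumptions; the point shown is only the ordering: localize the remote VNF's queues first, then merge the resulting light local VNFs.

```python
def localize_remote(vnf_queues, remote_vnf, new_local):
    # Step F02: a new local VNF takes over the remote VNF's queues,
    # after which the remote VNF is released.
    vnf_queues[new_local] = vnf_queues.pop(remote_vnf)

def merge_into(vnf_queues, survivor, victim):
    # Step F01: rebind the victim's queues onto the survivor and release it.
    vnf_queues[survivor].extend(vnf_queues.pop(victim))

# One light local VNF plus one light remote VNF (marked "@host2").
queues = {"VNF2": ["2.1"], "VNF4@host2": ["2.3"]}
localize_remote(queues, "VNF4@host2", "VNF6")   # F02: remote -> local
merge_into(queues, "VNF2", "VNF6")              # F01: merge light locals
assert queues == {"VNF2": ["2.1", "2.3"]}
```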
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (10)

1. A method for enabling zero-time-consuming network flow load balancing under a VNF architecture is characterized by comprising the following steps:
step A: dividing a plurality of VNFs into a plurality of VNF sets according to different VNF functions, enabling all the VNFs included in each VNF set to have the same function, and enabling the VNF sets to be in load balance with each other; meanwhile, a plurality of packet receiving queues are created in a shared memory, a plurality of packet receiving queues are distributed for each conventional VNF, and the load condition of each packet receiving queue is maintained;
and B: for each VNF set, when a new network flow arrives, performing first judgment, wherein the first judgment comprises judging whether a first VNF which is positioned at a current host and has a packet receiving queue which does not reach a rated load is included in the VNF set; if the first judgment result is yes, binding the new network flow with the packet receiving queue which does not reach the rated load in the first VNF; if the result of the first judgment is negative, selecting a ninth VNF including a packet receiving queue with the lightest total load from the VNF set, allocating a new packet receiving queue to the ninth VNF, and binding the new network flow with the packet receiving queue with the lightest total load;
and C: performing a second determination and a third determination for each of the set of VNFs; the second judgment is to judge whether there is a second VNF having at least one packet receiving queue whose actual load is much higher than a rated threshold, and if the second judgment result is yes, a new packet receiving queue is allocated to the second VNF, and part of network flows in an original packet receiving queue in the second VNF are bound to the new packet receiving queue again; the third judgment is to judge whether a plurality of third packet receiving queues with actual loads far lower than a rated threshold exist in each VNF, if the third judgment result is yes, the plurality of third packet receiving queues with actual loads far lower than the rated threshold are divided into one packet receiving queue to be bound and a plurality of packet receiving queues to be released, the packet receiving queues to be bound are bound with the network flows related to the plurality of third packet receiving queues with actual loads far lower than the rated threshold, and the plurality of packet receiving queues to be released are released;
Step D: performing a fourth judgment on each VNF set, wherein the fourth judgment is to judge whether at least one heavy-load VNF, whose number of packet receiving queues far exceeds a second rated threshold, exists in the VNF set; if the result of the fourth judgment is yes, expanding the capacity of the VNF set including the heavy-load VNF;
step F: performing a fifth judgment and a sixth judgment on each VNF set, where the fifth judgment is to judge whether multiple light-load VNFs exist in each VNF set, and the number of packet receiving queues of the light-load VNFs is far lower than a second rated threshold; the sixth judgment is to judge whether the local resources are sufficient; if the result of the fifth judgment is yes and the result of the sixth judgment is yes, performing capacity reduction on the VNF set including the light-load VNF;
step G: and performing seventh judgment and eighth judgment on each VNF set, wherein the seventh judgment is used for judging whether a remote VNF located on a remote host exists, the eighth judgment is used for judging whether physical resources of the current host are sufficient, if the result of the seventh judgment is yes and the result of the eighth judgment is yes, a new eighth VNF located locally is created, a packet receiving queue owned by the remote VNF is redistributed to the local eighth VNF, and then the remote VNF is released.
2. The method for enabling zero-time-consuming network flow load balancing under the VNF architecture of claim 1, wherein the expansion in the step D includes local expansion and non-local expansion.
3. The method of claim 2, wherein the local capacity expansion comprises determining the local resource condition of the current host, and if the local resources are sufficient, building a fourth VNF locally and reallocating part of the packet receiving queues of the heavy-load VNF to the fourth VNF, whereupon the load balancing is finished.
4. The method according to claim 2, wherein the non-local capacity expansion comprises determining a local resource condition of the current host, and if the local resource is insufficient, performing step E.
5. The method according to claim 1, wherein the step E comprises selecting an appropriate idle host, creating a remote fifth VNF on the idle host, establishing remote direct data access communication between the current host and the idle host, and transferring part of the packet receiving queues of the heavy-load VNF to the fifth VNF, so that the fifth VNF completes packet reception through remote memory access, whereupon the load balancing is finished.
6. The method for enabling zero-time-consuming network flow load balancing under a VNF architecture of claim 1, wherein the capacity reduction in the step F includes, for the plurality of lightly-loaded VNFs, binding all of the packet receiving queues owned by the plurality of lightly-loaded VNFs to one of the lightly-loaded VNFs, and releasing the other lightly-loaded VNFs.
7. A device for enabling zero-time-consumption network flow load balancing under a VNF architecture is characterized by comprising a capacity expansion module, a capacity reduction module, a management component, a packet forwarding component and an API component;
the capacity expansion module is configured to perform a local capacity expansion operation and a non-local capacity expansion operation;
the capacity reduction module is configured to perform a capacity reduction operation.
8. The apparatus for enabling zero-time-consuming network flow load balancing under the VNF architecture of claim 7, wherein the management component provides notifications to the API component regarding packet receiving queue offloading and filling.
9. The apparatus for enabling zero-time consuming network flow load balancing under a VNF architecture of claim 7, wherein the API component provides a report to the management component regarding the receive packet queue load condition in a current VNF.
10. The apparatus for enabling zero-time-consuming network flow load balancing under the VNF architecture of claim 7, wherein the packet forwarding component queries a packet receiving queue included in the VNF and a load condition of the packet receiving queue.
CN202110081628.9A 2021-01-21 2021-01-21 Method and device for enabling zero-time-consumption network flow load balancing under VNF architecture Active CN112732409B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110081628.9A CN112732409B (en) 2021-01-21 2021-01-21 Method and device for enabling zero-time-consumption network flow load balancing under VNF architecture

Publications (2)

Publication Number Publication Date
CN112732409A true CN112732409A (en) 2021-04-30
CN112732409B CN112732409B (en) 2022-07-22

Family

ID=75594588

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110081628.9A Active CN112732409B (en) 2021-01-21 2021-01-21 Method and device for enabling zero-time-consumption network flow load balancing under VNF architecture

Country Status (1)

Country Link
CN (1) CN112732409B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108306912A (en) * 2017-01-12 2018-07-20 中兴通讯股份有限公司 Virtual network function management method and its device, network function virtualization system
CN108965024A (en) * 2018-08-01 2018-12-07 重庆邮电大学 A kind of virtual network function dispatching method of the 5G network slice based on prediction
CN109189552A (en) * 2018-08-17 2019-01-11 烽火通信科技股份有限公司 Virtual network function dilatation and capacity reduction method and system
CN109995583A (en) * 2019-03-15 2019-07-09 清华大学深圳研究生院 A kind of scalable appearance method and system of NFV cloud platform dynamic of delay guaranteed
CN111611051A (en) * 2020-04-28 2020-09-01 上海交通大学 Method for accelerating first distribution of data packets on NFV platform

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A.H. GHORAB: "Joint VNF Load Balancing and Service Auto-Scaling in NFV with Multimedia Case Study", 《2020 25TH INTERNATIONAL COMPUTER CONFERENCE, COMPUTER SOCIETY OF IRAN (CSICC)》 *
BO YI: "Design and Implementation of Network-Aware VNF Migration Mechanism", 《IEEE》 *
CHAOQUN YOU: "Efficient Load Balancing for the VNF Deployment with Placement Constraints", 《ICC 2019 - 2019 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC)》 *
FRANCISCO CARPIO: "VNF Placement with Replication for Load Balancing in NFV Networks", 《IEEE》 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant