WO2022198524A1 - Service instance deployment method, and inter-node load balancing method and system - Google Patents


Info

Publication number
WO2022198524A1
Authority
WO
WIPO (PCT)
Prior art keywords
container
type
load balancer
hardware resource
node
Prior art date
Application number
PCT/CN2021/082824
Other languages
English (en)
Chinese (zh)
Inventor
张龙斌
马毓博
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to CN202180096165.1A (published as CN117043748A)
Priority to PCT/CN2021/082824 (published as WO2022198524A1)
Publication of WO2022198524A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines

Definitions

  • the present application relates to the technical field of network function virtualization (NFV), and in particular, to a service instance deployment method, and a load balancing method and system between nodes.
  • FIG. 1 shows a schematic diagram of heterogeneous deployment targeting different hardware systems.
  • the solution for NFV to support containerized heterogeneous deployment generally uses virtualised network function descriptor (VNFD) templates and topology and orchestration specification for cloud applications (tosca) templates to define the capabilities of nodes, container sets (pods), and containers of different flavors. A flavor describes the basic dimensions of a server to be created, that is, how much CPU, RAM, and disk space are allocated to a server built with that flavor (a schematic flavor record is sketched after this list).
  • Host group (host aggregate, HA) layer: servers of different systems (e.g., ARM, X86) are divided into different HAs;
  • Node layer (a node is also known as a virtual machine (VM)): node types of different flavors are defined, each belonging to the corresponding HA;
  • Pod layer: for nodes of different flavors, different pod types are defined and dependencies are built;
  • Container layer: for different pod types, different container types (ARM, X86) and the corresponding container images are defined.
  • for example, X86 servers are divided into X86_HA, and ARM servers are divided into ARM_HA;
  • in the VNFD template, two sets of node templates are defined for the different systems, X86_C32_VM and ARM_C32_VM, and assigned to different HAs, where C32 denotes 32 cores;
  • in the tosca template, two pod templates are defined for the different systems, X86_C32_POD and ARM_C32_POD; the X86_C32_Container template and the corresponding X86 image are included in X86_C32_POD, and the ARM_C32_Container template and the corresponding ARM image are included in ARM_C32_POD;
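  • as a schematic illustration of the flavor concept referenced above, a flavor can be thought of as a small record of compute dimensions. The following sketch is purely illustrative; the field names and values are assumptions, not VNFD/tosca syntax:

```python
# Hypothetical sketch of a C32 flavor: the basic dimensions of a server to be
# created. Field names and values are illustrative assumptions only.
c32_flavor = {
    "name": "C32",       # C32 denotes 32 cores, as in the templates above
    "vcpus": 32,
    "ram_mb": 65536,     # assumed value; the text does not specify RAM
    "disk_gb": 100,      # assumed value; the text does not specify disk
}
```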
  • the embodiments of the present application provide a service instance deployment method and system, so as to reduce the number of templates that need to be managed during heterogeneous deployment.
  • a first aspect provides a service instance deployment method, the method including: a virtual network function manager receives a service instance deployment request from a network management device, where the service instance deployment request includes the node type requested to be instantiated; the virtual network function manager obtains corresponding hardware resources from the host group corresponding to the node type according to the service instance deployment request; the virtual network function manager sends a container deployment request, which includes the type of the hardware resource, to the container manager; and the container manager obtains the container image corresponding to the type of the hardware resource according to the container deployment request and the correspondence between at least one type of container image and hardware resource types, deploys a container set on the node, and deploys the container image on the container set.
  • the method is applied to a service instance deployment system, which adopts a unified container set template and a unified container template, and establishes a correspondence between at least one type of container image and the type of hardware resource in the image database.
  • in this way, the VNFM can carry the type of hardware resource when sending a container deployment request to the container manager, and the container manager can obtain the container image corresponding to that type of hardware resource, so as to realize the deployment of service instances under heterogeneous deployment. Because a unified container set template and a unified container template are adopted, the number of templates is reduced.
  • the method further includes: uploading, by the network management device, at least one type of container image to the image database; and establishing, by the image database, the correspondence between the at least one type of container image and hardware resource types.
  • the container image includes any one of the following types: an X86 container image and an ARM container image.
  • the type of the hardware resource is the system to which the hardware resource belongs; or the type of the hardware resource is the model of the hardware resource under the system to which the hardware resource belongs.
  • in a second aspect, a service instance deployment system is provided, and the system has the function of implementing the above first aspect.
  • the system includes modules, units, or means corresponding to the steps involved in executing the above first aspect; the functions, units, or means may be implemented by software, by hardware, or by hardware executing corresponding software.
  • the virtual network function manager is configured to receive a service instance deployment request from the network management device, where the service instance deployment request includes the node type requested to be instantiated; the virtual network function manager is further configured to obtain, according to the service instance deployment request, the corresponding hardware resources from the host group corresponding to the node type; the virtual network function manager is further configured to send a container deployment request, which includes the type of the hardware resource, to the container manager; and the container manager is configured to obtain the container image corresponding to the type of the hardware resource according to the container deployment request and the correspondence between at least one type of container image and hardware resource types, to deploy a container set on the node, and to deploy the container image on the container set.
  • the network management device is configured to upload at least one type of container image to the image database; and the image database is configured to establish the correspondence between the at least one type of container image and hardware resource types.
  • the system includes a processor, and the processor executes program instructions to perform the method in any possible implementation of the first aspect above.
  • the system may further include one or more memories coupled with the processor; the memories may store the computer programs or instructions necessary to implement the functions involved in the first aspect above.
  • the processor may execute the computer programs or instructions stored in the memory, and when the computer programs or instructions are executed, the system is caused to implement the method in any possible implementation of the first aspect above.
  • the system includes a processor that can be coupled with a memory.
  • the memory may store the computer programs or instructions necessary to implement the functions involved in the first aspect above.
  • the processor may execute the computer programs or instructions stored in the memory, and when the computer programs or instructions are executed, the system is caused to implement the method in any possible implementation of the first aspect above.
  • the system includes a processor and an interface circuit, wherein the processor is configured to communicate with other devices through the interface circuit and execute the method in any possible implementation of the first aspect above.
  • the processor may be implemented by hardware or software. When implemented by hardware, the processor may be a logic circuit, an integrated circuit, or the like; when implemented by software, the processor may be a general-purpose processor implemented by reading software code stored in a memory.
  • there may be one or more of the above processors, and there may be one or more memories.
  • the memory may be integrated with the processor, or the memory may be provided separately from the processor. In a specific implementation process, the memory and the processor may be integrated on the same chip, or may be separately provided on different chips.
  • the embodiment of the present application does not limit the type of the memory or the manner in which the memory and the processor are arranged.
  • a third aspect provides a method for load balancing between nodes, the method comprising: a load balancer server receives the service processing capabilities of a first container and a second container in at least one node reported by a load balancer client, where the service processing capabilities include the hardware resource type and container capability of the first container, and the hardware resource type and container capability of the second container; the load balancer server determines the number of tokens of the first container and the number of tokens of the second container according to the configured weight corresponding to each hardware resource type, the service processing capability of the first container, and the service processing capability of the second container; and the load balancer server distributes, according to the number of tokens of the first container, traffic corresponding to that number to the first container, and distributes, according to the number of tokens of the second container, traffic corresponding to that number to the second container.
  • in the above method, the service processing capability of the containers on a node is collected by the load balancer client and reported to the load balancer server; the load balancer server allocates the corresponding number of tokens to each container and then distributes the corresponding traffic to the container according to the allocated number of tokens, which balances the load between nodes and improves the utilization of hardware resources.
  • the service processing capability further includes load information, and the method further includes: the load balancer server receives updated load information on the at least one node reported by the load balancer client; and the load balancer server adjusts the number of tokens corresponding to the updated load information according to the weight corresponding to each hardware resource type and the service processing capability.
  • the load balancer server distributes traffic according to the determined number of tokens of each container, but after a period of operation, the load (specifically, the CPU load, etc.) on the node (specifically, on each container of the node) will change.
  • the load balancer client periodically collects the load information on the node and reports it to the load balancer server. After receiving the load information reported by the load balancer client, the load balancer server adjusts the number of tokens allocated to the first container, thereby realizing load balancing.
  • the type of the hardware resource is the system to which the hardware resource belongs; or the type of the hardware resource is the model to which the hardware resource belongs under the system to which the hardware resource belongs.
  • the method further includes: the network management device configures the load balancer server with a weight corresponding to each type of hardware resource.
  • the method further includes: the VNFM receives a capacity expansion request from the network management device, where the capacity expansion request includes the type of the first node and/or the second node where the at least one third container for which expansion is requested is located, and the type of the container set in which the at least one third container is located; the container manager deploys the container image of the at least one third container on the container set of the first node and/or the second node; the load balancer client reports the service processing capability of the at least one third container to the load balancer server; the load balancer server determines, according to the weight corresponding to each hardware resource type and the service processing capability of the at least one third container, the number of tokens corresponding to that service processing capability; and the load balancer server distributes traffic corresponding to that number of tokens to the at least one third container.
  • in the above method, after capacity expansion, the load balancer server allocates the corresponding number of tokens to each container according to its service processing capability, and distributes traffic to the existing containers and the newly expanded containers according to the number of tokens, which effectively balances the load between nodes and improves the utilization of hardware resources.
  • the method further includes: the VNFM receives a capacity reduction request from the network management device, where the capacity reduction request includes the type of the third node where the at least one fourth container requested to be scaled in is located, the type of the container set in which the at least one fourth container is located, and the number of containers of the at least one fourth container; the VNFM sends a pre-scaling notification to the load balancer server, the pre-scaling notification including the type of the third node and the type of the container set; the load balancer server selects the at least one fourth container to be recycled from the container set in the third node and sends the pre-scaling notification to the at least one fourth container; after receiving the pre-scaling response sent by the at least one fourth container, the load balancer server reclaims the tokens distributed to the at least one fourth container and notifies the VNFM that pre-scaling is complete; and the VNFM sends a container set recycling notification to the load balancer server, where the container set recycling notification is used to notify that the container set is to be recycled.
  • in the above method, after capacity reduction, the load balancer server allocates the corresponding number of tokens to the remaining containers according to their service processing capabilities, and distributes traffic to the remaining containers according to the number of tokens, which effectively balances the load between nodes and improves the utilization of hardware resources.
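  • to make the scale-in sequence concrete, the following is a minimal sketch of how a load balancer server might reclaim tokens before container sets are recycled; all class, field, and method names are assumptions made for illustration, and the pre-scaling request/response exchange is collapsed into a single step:

```python
# Minimal sketch (names are assumptions) of token reclamation during scale-in.
class LoadBalancerServer:
    def __init__(self):
        self.tokens = {}  # container id -> number of tokens currently allocated

    def handle_pre_scaling(self, containers, pod_type, count):
        """Select `count` containers of `pod_type` and reclaim their tokens."""
        victims = [c for c in containers if c["pod_type"] == pod_type][:count]
        for c in victims:
            # The described flow sends a pre-scaling notification and waits for
            # each container's response; that exchange is simplified away here.
            self.tokens.pop(c["id"], None)  # stop distributing traffic to it
        return victims  # the VNFM then sends the container set recycling notice
```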
  • a load balancing system between nodes is provided, and the system has the function of implementing the third aspect.
  • the system includes modules, units, or means corresponding to the steps involved in the third aspect; the functions, units, or means may be implemented by software, by hardware, or by hardware executing corresponding software.
  • the system includes a load balancer server and a load balancer client, wherein: the load balancer server is configured to receive the service processing capabilities of a first container and a second container in at least one node reported by the load balancer client, where the service processing capabilities include the hardware resource type and container capability of the first container, and the hardware resource type and container capability of the second container; the load balancer server is further configured to determine the number of tokens of the first container and the number of tokens of the second container according to the configured weight corresponding to each hardware resource type, the service processing capability of the first container, and the service processing capability of the second container; and the load balancer server is further configured to distribute, according to the number of tokens of the first container, traffic corresponding to that number to the first container, and to distribute, according to the number of tokens of the second container, traffic corresponding to that number to the second container.
  • the service processing capability further includes load information; the load balancer server is configured to receive updated load information on the at least one node reported by the load balancer client; and the load balancer server is configured to adjust the number of tokens corresponding to the updated load information according to the weight corresponding to each hardware resource type and the service processing capability.
  • the type of the hardware resource is the system to which the hardware resource belongs; or the type of the hardware resource is the model of the hardware resource under the system to which the hardware resource belongs.
  • the system further includes a network management device; the network management device is configured to configure the load balancer server with a weight corresponding to each type of hardware resource.
  • the system further includes a VNFM and a container manager;
  • the VNFM is configured to receive a capacity expansion request from the network management device, where the capacity expansion request includes the type of the first node and/or the second node where the at least one third container for which expansion is requested is located, and the type of the container set in which the at least one third container is located;
  • the container manager is configured to deploy the container image of the at least one third container on the container set of the first node and/or the second node;
  • the load balancer client is configured to report the service processing capability of the at least one third container to the load balancer server;
  • the load balancer server is further configured to determine, according to the weight corresponding to each hardware resource type and the service processing capability of the at least one third container, the number of tokens corresponding to that service processing capability;
  • the load balancer server is further configured to distribute traffic corresponding to that number of tokens to the at least one third container.
  • the VNFM is further configured to receive a capacity reduction request from the network management device, where the capacity reduction request includes the type of the third node where the at least one fourth container requested to be scaled in is located, the type of the container set in which the at least one fourth container is located, and the number of containers of the at least one fourth container;
  • the VNFM is further configured to send a pre-scaling notification to the load balancer server, where the pre-scaling notification includes the type of the third node and the type of the container set;
  • the load balancer server is further configured to select the at least one fourth container to be recycled from the container set in the third node, and to send the pre-scaling notification to the at least one fourth container;
  • the load balancer server is further configured to, after receiving the pre-scaling response sent by the at least one fourth container, reclaim the tokens distributed to the at least one fourth container.
  • the system includes a processor, and may also include a transceiver, where the transceiver is configured to transmit and receive signals, and the processor executes program instructions to perform the method in any possible implementation of the third aspect.
  • the system may further include one or more memories coupled with the processor; the memories may store the computer programs or instructions necessary to implement the functions involved in the third aspect.
  • the processor may execute the computer programs or instructions stored in the memory, and when the computer programs or instructions are executed, the system is caused to implement the method in any possible implementation of the third aspect.
  • the system includes a processor that can be coupled with a memory.
  • the memory may store the computer programs or instructions necessary to implement the functions involved in the third aspect above.
  • the processor may execute the computer programs or instructions stored in the memory, and when the computer programs or instructions are executed, the system is caused to implement the method in any possible implementation of the third aspect above.
  • the system includes a processor and an interface circuit, wherein the processor is configured to communicate with other devices through the interface circuit and execute the method in any possible implementation of the third aspect above.
  • the processor may be implemented by hardware or software. When implemented by hardware, the processor may be a logic circuit, an integrated circuit, or the like; when implemented by software, the processor may be a general-purpose processor implemented by reading software code stored in a memory.
  • there may be one or more of the above processors, and there may be one or more memories.
  • the memory may be integrated with the processor, or the memory may be provided separately from the processor. In a specific implementation process, the memory and the processor may be integrated on the same chip, or may be separately provided on different chips.
  • the embodiment of the present application does not limit the type of the memory or the manner in which the memory and the processor are arranged.
  • a computer-readable storage medium is provided, and instructions are stored in the computer-readable storage medium, which, when executed on a computer, cause the computer to perform the methods described in the above aspects.
  • a computer program product comprising instructions which, when run on a computer, cause the computer to perform the methods of the above aspects.
  • FIG. 1 is a schematic diagram of the currently adopted heterogeneous deployment architecture.
  • FIG. 2 is a schematic diagram of an NFV architecture provided by an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a service instance deployment system provided by an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a service instance deployment method provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a heterogeneous deployment architecture provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a load balancing system between nodes according to an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of a load balancing method between nodes according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of load balancing between nodes under a heterogeneous deployment according to an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of another load balancing method between nodes provided by an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of another method for load balancing between nodes provided by an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of a service instance deployment/load balancing system between nodes provided by an embodiment of the present application.
  • the NFV architecture can be used to implement various networks, such as a local area network (LAN), an internet protocol (IP) network, or an evolved packet core (EPC) network.
  • the NFV architecture may include an NFV management and orchestration system (NFV management and orchestration, NFV-MANO) 110, an NFV infrastructure (NFV infrastructure, NFVI) 150, multiple virtualized network functions (virtualized network function, VNF) 140, multiple element management (element management, EM) 130, and one or more operation support systems/business support systems (operation support system/business support system, OSS/BSS) 120.
  • the NFV-MANO 110 may include an NFV orchestrator (NFV orchestrator, NFVO) 111, one or more VNFMs 112, and one or more VIMs 113.
  • NFVO 111: mainly responsible for the life cycle management of virtualized services, as well as the allocation and scheduling of virtual infrastructure and virtual resources in the NFVI.
  • the NFVO 111 may communicate with one or more VNFMs 112 to perform resource-related requests, send configuration information to the VNFMs 112, and collect status information of the VNFs 140.
  • the NFVO 111 may also communicate with the VIM 113 to perform resource allocation and/or reservation and to exchange virtualized hardware resource configuration and status information.
  • VNFM 112: mainly responsible for the life cycle management of one or more VNFs, such as instantiating, updating, querying, elastically scaling, and terminating the VNF 140.
  • VNFM 112 can communicate with VNF 140 to complete VNF lifecycle management and exchange configuration and status information.
  • VIM 113: mainly responsible for controlling and managing the interaction of the VNF 140 with the computing hardware 1521, storage hardware 1522, network hardware 1523, virtual computing 1511 (e.g., virtual machines (VMs)), virtual storage 1512, and virtual network 1513.
  • the VIM 113 performs resource management functions, including managing infrastructure resources, allocation (eg, adding resources to virtual containers), and operational functions (eg, collecting NFVI fault information).
  • VNFM 112 may communicate with VIM 113 to request resource allocations, exchange virtualized hardware resource configuration and status information.
  • NFVI 150 may include a hardware resource layer composed of computing hardware 1521, storage hardware 1522, network hardware 1523, a virtualization layer, and a virtual resource layer composed of virtual computing 1511, virtual storage 1512, and virtual network 1513.
  • the computing hardware 1521 in the hardware resource layer can be a dedicated processor or a general-purpose processor used to provide processing and computing functions, such as a central processing unit (CPU); the storage hardware 1522 is used to provide storage capabilities, such as disk or network attached storage (NAS); the network hardware 1523 may be a switch, router, and/or other network device.
  • the virtualization layer in the NFVI 150 is used to abstract the hardware resources of the hardware resource layer, decouple the VNF 140 from the physical layer to which the hardware resources belong, and provide virtual resources to the VNF.
  • the virtual resource layer may include virtual computing 1511 , virtual storage 1512 , and virtual networking 1513 .
  • the virtual computing 1511 and the virtual storage 1512 may be provided to the VNF 140 in the form of virtual machines or other virtual containers, for example, one or more virtual machines form a VNF 140 .
  • the virtualization layer forms a virtual network 1513 by abstracting the network hardware 1523 .
  • the virtual network 1513 is used to implement communication between multiple virtual machines or between multiple virtual containers of other types carrying VNFs.
  • EM 130: a system used to configure and manage equipment in traditional telecommunication systems; in the NFV architecture, EM 130 can also be used to configure and manage VNFs, and to initiate life cycle management operations, such as instantiation of new VNFs, toward the VNFM 112.
  • OSS/BSS 120: supports various end-to-end telecommunication services.
  • the management functions supported by the OSS include network configuration, service provisioning, fault management, etc.; the BSS processes orders, payments, revenue, etc., and supports product management, order management, revenue management, and customer management.
  • VNF 140: corresponds to a physical network function (PNF) in a traditional non-virtualized network, such as a virtualized evolved packet core (EPC) node (e.g., a mobility management entity (MME), a serving gateway (SGW), or a public data network gateway (PGW)).
  • an embodiment of the present application further provides a service instance deployment system.
  • the system 200 includes a network management device 21, a VNFM 22, a container manager 23, and an image database 24.
  • the container manager 23 may be the above-mentioned CaaS manager, and is responsible for the deployment of container images.
  • the image database 24, also known as an image repository, is used to store various container images.
  • the VNFM22 is configured to receive a service instance deployment request from the network management device, and the service instance deployment request includes the node type requested to be instantiated;
  • VNFM22 is further configured to acquire corresponding hardware resources from the host group corresponding to the node type according to the service instance deployment request;
  • the VNFM22 is further configured to send a container deployment request to the container manager 23, where the container deployment request includes the type of the hardware resource;
  • the container manager 23 is configured to acquire the container image corresponding to the type of the hardware resource according to the container deployment request and the correspondence between at least one type of container image and hardware resource types, to deploy a container set on the node, and to deploy the container image on the container set.
  • the network management device 21 is configured to upload at least one type of container image to the image database
  • the image database 24 is used to establish the correspondence between the at least one type of container image and the type of hardware resource.
  • the service instance deployment method includes the following steps:
  • the network management device uploads at least one type of container image to the image database.
  • the image database may be located on a certain server, and the network management device uploads at least one type of container image to the image database.
  • the container image includes any of the following types: X86 container image, ARM container image.
  • the container image may specifically be a container image package; for example, the ARM container image package is app_arm.zip, and the X86 container image package is app_x86.zip.
  • This step is an optional step (represented by a dotted line in the figure), that is, this step may not be performed, and the image database may pre-store at least one type of container image.
  • the image database stores at least one type of container image, and establishes a correspondence between the at least one type of container image and the type of hardware resource.
  • the type of the hardware resource may be the system to which the hardware resource belongs, for example, the X86 system or the ARM system.
  • the type of the hardware resource may also be the model of the hardware resource under the system to which the hardware resource belongs, for example, C32, C16 under the X86 system; another example, C32, C16 under the ARM system.
  • after receiving the at least one type of container image uploaded by the network management device, the image database establishes a correspondence between the at least one type of container image and hardware resource types. For example, when the image database receives an X86 container image, it establishes the correspondence between that container image and the X86 system; when it receives an ARM container image, it establishes the correspondence between that container image and the ARM system. For another example, when the image database receives an X86 C32 container image, it establishes the correspondence between that container image and the X86 C32 model; when it receives an X86 C16 container image, it establishes the correspondence between that container image and the X86 C16 model.
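  • a minimal sketch of the correspondence the image database maintains, assuming a simple in-memory mapping (function and key names are illustrative, not taken from the application):

```python
# Illustrative in-memory image database: maps a hardware resource type (a
# system, or a system+model pair) to the matching container image package.
image_db = {}

def register_image(hardware_type, image_package):
    """Establish the correspondence when an image is uploaded."""
    image_db[hardware_type] = image_package

register_image("X86", "app_x86.zip")               # X86 container image package
register_image("ARM", "app_arm.zip")               # ARM container image package
register_image(("X86", "C32"), "app_x86_c32.zip")  # assumed model-level package
```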
  • FIG. 5 is a schematic diagram of a heterogeneous deployment architecture provided by an embodiment of the present application, in which the VNFD template is the same as the VNFD template in the heterogeneous deployment architecture shown in FIG. 1; in the tosca template, the X86_C32_POD and ARM_C32_POD of FIG. 1 are merged into COMM_C32_POD, and the X86_C32_Container and ARM_C32_Container of FIG. 1 are merged into COMM_C32_Container.
  • from the pod level upwards, the template describes the container template deployed under each pod type and the corresponding container image.
  • the network management device defines the VNFD template and the tosca template according to the rules shown in FIG. 5, and, in the COMM_C32_POD template, establishes the correspondence between the type of hardware resource and the container image through key-value pairs (a schematic rendering is sketched below, after the optional-step note). For example, a correspondence is established between the system of the hardware resource (ARM or X86) and the container image: the container image package corresponding to ARM is app_arm.zip, and the container image package corresponding to X86 is app_x86.zip. For another example, a correspondence is established between the container image and the model of the hardware resource under the system to which it belongs.
  • This step is an optional step (represented by a dotted line in the figure), that is, this step may not be performed, and the image database may pre-establish a correspondence between at least one type of container image and the type of hardware resources.
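  • the key-value correspondence inside the unified pod template might look like the following schematic rendering (actual tosca syntax is not reproduced; the structure is an assumption based on the description above):

```python
# Schematic rendering of the unified COMM_C32_POD template: one container
# template plus a key-value map from hardware resource type to image package.
comm_c32_pod = {
    "container_template": "COMM_C32_Container",
    "image_by_hardware_type": {
        "ARM": "app_arm.zip",   # container image package corresponding to ARM
        "X86": "app_x86.zip",   # container image package corresponding to X86
    },
}
```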
  • the network management device defines the association between the node type and the host group in the VNFD template in the VNFM.
  • the network management device can define the association between the node type and the host group in the VNFD template in the VNFM, that is, define the association between the node type and the hardware resources. For example, node 1 to node 3 are associated with the X86 host group, and node 4 to node 6 are associated with the ARM host group.
  • This step is an optional step (indicated by a dotted line in the figure), that is, this step may not be performed; in that case, the VNFM has defined the association between the node type and the host group in the VNFD template in advance.
  • the following process describes how a service instance is deployed, taking the deployment of two types of service instances, an X86 service instance and an ARM service instance, as examples:
  • the network management device sends a request for deploying an X86 service instance to the VNFM. Accordingly, the VNFM receives the request to deploy the X86 service instance.
  • the request includes that the node type requested to be instantiated is an X86 node, for example, it may be X86_C32_VM as shown in FIG. 5 .
  • S105'. The VNFM obtains hardware resources from the X86 host group according to the VNFD template definition.
  • since in step S103 the VNFM has defined the association between the node type and the host group in the VNFD template, the VNFM determines, according to the VNFD template definition, that the host group corresponding to the X86 node is the X86 host group, and obtains hardware resources from the X86 host group.
  • the X86 host group sends a response to the VNFM to notify the VNFM that the node deployment is feasible.
  • the VNFM sends a container deployment request to the container manager. Accordingly, the container manager receives the container deployment request.
  • the container deployment request includes the type of the hardware resource, here, the type of the hardware resource is X86.
  • the container deployment request is also known as node management; that is, the VNFM calls the node management interface of the container manager. Since the container deployment request carries the type of the hardware resource (or the type of the node), the container manager can perceive the type of the node.
  • the VNFM notifies the container manager to deploy the stack (called "stack pull"), that is, to deploy the container set on the aforementioned nodes.
  • the container manager selects an X86 container image in the image database according to the container deployment request and the correspondence between at least one type of container image and the type of hardware resources.
  • the container manager obtains the container image corresponding to the type of the hardware resource from the image database.
  • the image database stores the correspondence between at least one type of container image and hardware resource types, so that, according to the type of hardware resource in the container deployment request, the X86 container image corresponding to the X86 hardware resource can be selected and returned to the container manager.
  • the image database may also send the above correspondence to the container manager in advance, and the container manager then obtains the corresponding container image from the image database according to the container deployment request and the correspondence between at least one type of container image and hardware resource types.
  • the container manager deploys the container image on the above container set.
  • after acquiring the above X86 container image, the container manager deploys the X86 container image in the determined container set, and the deployment of the X86 service instance is thus completed.
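  • putting the above steps together, the X86 deployment flow reduces to the following hedged end-to-end sketch; all names and the host-group naming convention are assumptions made for illustration:

```python
# Hedged sketch of the deployment flow: the VNFM resolves the host group from
# the node type, and the container manager selects the image by hardware type.
VNFD_HOST_GROUP = {"X86_C32_VM": "X86_HA", "ARM_C32_VM": "ARM_HA"}
IMAGE_DB = {"X86": "app_x86.zip", "ARM": "app_arm.zip"}

def deploy_service_instance(node_type):
    host_group = VNFD_HOST_GROUP[node_type]   # VNFM resolves the host group
    hardware_type = host_group.split("_")[0]  # assumed naming convention
    # The container deployment request carries the hardware resource type, so
    # the image corresponding to that type can be selected from the database.
    image = IMAGE_DB[hardware_type]
    print(f"deploy container set on {host_group}, image {image}")

deploy_service_instance("X86_C32_VM")
# -> deploy container set on X86_HA, image app_x86.zip
```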
  • the network management device sends a request for deploying an ARM service instance to the VNFM. Accordingly, the VNFM receives the request for deploying the ARM service instance.
  • the request includes the ARM node type requested to be instantiated, for example, it can be ARM_C32_VM as shown in Figure 5.
  • VNFM obtains hardware resources from the ARM host group according to the VNFD template definition.
  • since in step S103 the VNFM has defined the association between the node type and the host group in the VNFD template, the VNFM determines, according to the VNFD template definition, that the host group corresponding to the ARM node is the ARM host group, and obtains hardware resources from the ARM host group.
  • the ARM host group sends a response to the VNFM to notify the VNFM that the node deployment is feasible.
  • the VNFM sends a container deployment request to the container manager.
  • the container manager receives the container deployment request.
  • the container deployment request includes the type of the hardware resource, and here, the type of the hardware resource is ARM.
  • the VNFM calls the container manager's interface for node management. Since the container deployment request carries the type of hardware resource (or the type of node), the container manager can perceive the type of the node.
  • the VNFM notifies the container manager to expand on the original stack, that is, to deploy the ARM container image on the original container set.
  • the container manager selects the ARM container image in the image database according to the container deployment request and the correspondence between at least one type of container image and the type of hardware resource.
  • the container manager obtains the container image corresponding to the type of the hardware resource from the image database.
  • the image database stores the correspondence between at least one type of container image and hardware resource types, so that, according to the type of hardware resource in the container deployment request, the ARM container image corresponding to the ARM hardware resource can be selected and returned to the container manager.
  • the container manager deploys the container image on the container set.
  • after acquiring the above ARM container image, the container manager deploys the ARM container image in the determined container set, and the deployment of the ARM service instance is thus completed.
  • in summary, the service instance deployment method provided by the embodiments of the present application is applied in a service instance deployment system. The system adopts a unified container set template and a unified container template, and establishes, in an image database, a correspondence between at least one type of container image and hardware resource types. In this way, the VNFM can carry the type of hardware resource when sending a container deployment request to the container manager, and the container manager can obtain the container image corresponding to that type, realizing service instance deployment under heterogeneous deployment. Because a unified container set template and a unified container template are adopted, the number of templates is reduced.
  • however, containers deployed on different types of nodes cannot directly perceive the service processing capabilities of those nodes, so there is no way to fully utilize the service processing capability of the hardware; the load between nodes is uneven, resulting in a waste of resources. For example, the processing capability of ARM_C32_VM is significantly higher than that of ARM_C16_VM, but the same container image deployed on them cannot sense this difference at run time and adjust dynamically, resulting in inconsistent load.
  • an embodiment of the present application provides a load balancing system between nodes.
  • the system 300 includes a network management device 31, a VNFM 32, a container manager 33, a load balancer server 34, and a load balancer client 35, wherein:
  • the load balancer server 34 is configured to receive the service processing capabilities of the first container and the second container in at least one node reported by the load balancer client 35, where the service processing capabilities include the hardware resource type and container capability of the first container, and the hardware resource type and container capability of the second container;
  • the load balancer server 34 is further configured to determine the number of tokens of the first container and the number of tokens of the second container according to the configured weight corresponding to each hardware resource type, the service processing capability of the first container, and the service processing capability of the second container;
  • the load balancer server 34 is further configured to distribute, according to the number of tokens of the first container, traffic corresponding to that number to the first container, and to distribute, according to the number of tokens of the second container, traffic corresponding to that number to the second container.
  • the service processing capability further includes load information
  • the load balancer server 34 is further configured to receive updated load information on the at least one node reported by the load balancer client;
  • the load balancer server 34 is further configured to adjust the number of tokens corresponding to the updated load information according to the weight corresponding to each type of hardware resource and the service processing capability.
  • the type of the hardware resource is the system to which the hardware resource belongs; or the type of the hardware resource is the model of the hardware resource under the system to which the hardware resource belongs.
  • the network management device 31 is configured to configure the weight corresponding to each type of hardware resource.
  • the virtual network function manager 32 is configured to receive a capacity expansion request from the network management device, where the capacity expansion request includes the type of the first node and/or the second node where the at least one third container for which expansion is requested is located, and the type of the container set in which the at least one third container is located;
  • the container manager 33 is configured to deploy the container image of the at least one third container on the container set of the first node and/or the second node;
  • the load balancer client 35 is configured to report the service processing capability of the at least one third container to the load balancer server;
  • the load balancer server 34 is further configured to determine, according to the weight corresponding to each hardware resource type and the service processing capability of the at least one third container, the number of tokens corresponding to the service processing capability of the at least one third container;
  • the load balancer server 34 is further configured to distribute traffic corresponding to the number of tokens to the at least one third container according to the number of tokens corresponding to the service processing capability of the at least one third container.
  • the virtual network function manager 32 is further configured to receive a capacity reduction request from the network management device, where the capacity reduction request includes the type of the third node where the at least one fourth container requested to be scaled in is located, the type of the container set in which the at least one fourth container is located, and the number of containers of the at least one fourth container;
  • the virtual network function manager 32 is further configured to send a pre-scaling notification to the load balancer server, where the pre-scaling notification includes the type of the third node and the type of the container set;
  • the load balancer server 34 is further configured to select the at least one fourth container to be recycled from the container set in the third node, and send the pre-scaling notification to the at least one fourth container;
  • the load balancer server 34 is further configured to, after receiving the pre-scaling response sent by the at least one fourth container, reclaim the tokens distributed to the at least one fourth container and notify the virtual network function manager that pre-scaling is complete;
  • the virtual network function manager 32 is further configured to send a container set recycling notification to the load balancer server, where the container set recycling notification is used to notify recycling of the container set.
  • an embodiment of the present application further provides a load balancing method between nodes, and the method may include the following steps:
  • the network management device configures the load balancer server with a weight corresponding to at least one type of hardware resource.
  • the network management device may configure the load balancer server with a weight corresponding to at least one type of hardware resource according to measured data (a schematic lookup table is sketched below, after the optional-step note). For example, assuming that the hardware capability of X86 is stronger than that of ARM, the weight corresponding to hardware resources of the X86 system can be configured to be 1, and the weight corresponding to hardware resources of the ARM system to be 0.9. For another example, if the hardware capability of X86 C32 is stronger than that of X86 C16, the weight corresponding to the hardware resources of X86 C32 can be configured to be 1, and the weight corresponding to the hardware resources of X86 C16 to be 0.5.
  • This step is optional (represented by a dotted line in the figure), that is, this step may not be performed, and the load balancer server may pre-store the weight corresponding to at least one type of hardware resource.
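  • sketched in code, the configured weights are simply a lookup table keyed by hardware resource type, using the example values above (the representation itself is an assumption):

```python
# Weights configured by the network management device, keyed by the hardware
# resource type (a system, or a system+model pair), per the examples above.
WEIGHTS = {
    "X86": 1.0,           # X86 assumed stronger than ARM in the example
    "ARM": 0.9,
    ("X86", "C32"): 1.0,  # model-level weights under the X86 system
    ("X86", "C16"): 0.5,
}
```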
  • the load balancer server receives the service processing capabilities of the first container and the second container in at least one node reported by the load balancer client, where the service processing capabilities include the hardware resource type and container capability of the first container, and the hardware resource type and container capability of the second container.
  • the load balancer client is embedded in the service container process.
  • the load balancer client can obtain the type of hardware resource (hardware model/system) of the node where the container is located and the service processing capability of the container, and report them to the load balancer server.
  • the service processing capability of the container includes the type of the container's hardware resources and the container capability.
  • the types of hardware resources of the containers may be the same, but the container capabilities are different. For example, it is assumed that the systems to which the hardware resources of the first container and the second container belong are both ARM systems, the first container has 2 threads, the second container has 1 thread, and each thread is associated with a processing core. Then the container capacity of the first container is stronger than that of the second container.
  • alternatively, both the hardware resource types and the capabilities of the containers may differ. For example, the system to which the hardware resources of the first container belong is the X86 system and the first container has two threads, while the system to which the hardware resources of the second container belong is the ARM system and the second container has one thread.
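  • a reported service processing capability might be represented as follows (a hypothetical rendering; the field names are assumptions):

```python
# Hypothetical service processing capability reports sent by the load balancer
# client for the two containers in the example above.
report_first = {"container": "c1", "hardware_type": "X86", "threads": 2}
report_second = {"container": "c2", "hardware_type": "ARM", "threads": 1}
```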
  • the load balancer server manages containers in one or more nodes, and the load balancer server receives the service processing capabilities of one or more containers on one or more nodes.
  • the first container and the second container may belong to the same node, or may belong to different nodes.
  • the load balancer server determines the number of tokens of the first container and the number of tokens of the second container according to the configured weight corresponding to each hardware resource type, the service processing capability of the first container, and the service processing capability of the second container.
  • the load balancer server obtains the weight corresponding to each hardware resource type configured by the network management device, as well as the service processing capability of the first container and the service processing capability of the second container, and can determine from these the number of tokens of the first container and the number of tokens of the second container. The number of tokens corresponds to the amount of traffic that can be distributed to the container.
  • number of tokens = weight corresponding to the hardware resource type × container capability.
  • for example, if the weight corresponding to hardware resources of the X86 system is 1 and the weight corresponding to hardware resources of the ARM system is 0.9, the hardware resources of the first container belong to the X86 system, the hardware resources of the second container belong to the ARM system, the container capability of the first container is 10 threads, and the container capability of the second container is also 10 threads, it can be determined that the number of tokens of the first container is 10 and the number of tokens of the second container is 9.
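  • the calculation in this example can be sketched as follows (a minimal illustration; the rounding behavior is an assumption, since the text only gives whole-number results):

```python
# Tokens = weight(hardware resource type) x container capability, using the
# numbers from the example above.
WEIGHTS = {"X86": 1.0, "ARM": 0.9}

def tokens(hardware_type, capability_threads):
    return round(WEIGHTS[hardware_type] * capability_threads)

assert tokens("X86", 10) == 10  # first container: X86, 10 threads -> 10 tokens
assert tokens("ARM", 10) == 9   # second container: ARM, 10 threads -> 9 tokens
```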
  • the load balancer server distributes traffic corresponding to the number of tokens of the first container to the first container according to the number of tokens of the first container.
  • the load balancer server distributes the traffic corresponding to the token number of the first container to the first container according to the determined token number of the first container.
  • the load balancer server distributes traffic corresponding to the number of tokens of the second container to the second container according to the number of tokens of the second container.
  • the load balancer server distributes traffic corresponding to the number of tokens of the second container to the second container according to the number of tokens of the second container that has been determined.
  • continuing the example, the number of tokens of the first container is 10 and the number of tokens of the second container is 9, so the ratio of the traffic distributed to the first container to that distributed to the second container is 10:9.
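  • distribution then follows the token ratio; a toy sketch, assuming a simple proportional split of a request batch:

```python
# Toy traffic split in the 10:9 token ratio from the example above.
token_counts = {"container_1": 10, "container_2": 9}
total = sum(token_counts.values())

def split_batch(requests=190):
    """Each container receives traffic proportional to its token count."""
    return {c: requests * n // total for c, n in token_counts.items()}

print(split_batch())  # {'container_1': 100, 'container_2': 90}
```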
  • the load balancer server receives the updated load information on at least one node reported by the load balancer client.
  • the load balancer server distributes traffic according to the determined number of tokens of the container, but after a period of operation, the load (specifically, CPU load, etc.) on the node (specifically, each container of the node) will change.
  • the load balancer client periodically collects the load information on the node and reports it to the load balancer server.
  • the load balancer server adjusts the number of tokens according to the weight corresponding to each hardware resource type, the service processing capability, and the updated load information.
  • for example, continuing the above steps: the container capability of the first container is 10 threads, so the load balancer server allocates 10 tokens to the first container and distributes traffic corresponding to 10 tokens to it. However, the traffic corresponding to these 10 tokens exceeds the CPU load that the first container can bear; therefore, after receiving the load information reported by the load balancer client, the load balancer server adjusts the number of tokens allocated to the first container, for example, to 8.
  • similarly, the second container has been allocated 9 tokens and the load balancer server distributes traffic corresponding to those 9 tokens to the second container. If the traffic corresponding to these 9 tokens is far below what the CPU load of the second container can bear, the load balancer server increases the number of tokens allocated to the second container, for example, to 12.
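  • a sketch of this feedback adjustment follows; the proportional rule and the target_load value are assumptions chosen so that the numbers match the example above, since the embodiment only states that tokens are adjusted from the reported load:

```python
# Rescale a container's tokens toward a target CPU utilisation, using the
# utilisation (0..1) reported by the load balancer client.
def adjust_tokens(current_tokens: int, cpu_load: float,
                  target_load: float = 0.75) -> int:
    if cpu_load <= 0.0:
        return current_tokens  # no meaningful load report; keep the allocation
    return max(1, round(current_tokens * target_load / cpu_load))

print(adjust_tokens(10, cpu_load=0.95))  # overloaded first container   -> 8
print(adjust_tokens(9, cpu_load=0.55))   # underloaded second container -> 12
```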
  • in this way, the service processing capability of each container on a node is collected by the load balancer client and reported to the load balancer server; the load balancer server allocates a corresponding number of tokens to the container according to the weight corresponding to the container's type of hardware resource and the container's service processing capability, and then distributes the corresponding traffic to the container according to the allocated number of tokens, which balances the load between nodes and improves the utilization of hardware resources.
  • a schematic diagram of load balancing between nodes under heterogeneous deployment, illustrated by an embodiment of the present application, shows the load balancing process between container 1 and container 2, where the type of hardware resource corresponding to container 1 is the ARM system and the type of hardware resource corresponding to container 2 is the X86 system; that is, container 1 and container 2 form a heterogeneous deployment.
  • the network management device configures the load balancer server with weights corresponding to various hardware resources.
  • the load balancer clients where container 1 and container 2 are located respectively report the service processing capability of each container to the load balancer server; the service processing capability includes the type of the container's hardware resources and the container capability.
  • in step S303, the load balancer server allocates 3 tokens to container 1 according to the type of hardware resource and the container capability of container 1.
  • the load balancer server allocates 4 tokens to container 2 according to the type of hardware resources and container capability of container 2.
  • where the number of tokens = weight corresponding to the type of the container's hardware resource × container capability.
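  • the report/allocation exchange of steps S302 to S304 can be sketched as below; the JSON field names and helper functions are assumptions, since the embodiment only requires that the client reports the hardware resource type and the container capability and that the server derives a token count from them:

```python
import json

# Client side: serialise the service processing capability report.
def make_capability_report(container_id: str, hardware_type: str,
                           capability: int) -> str:
    return json.dumps({"container_id": container_id,
                       "hardware_type": hardware_type,
                       "capability": capability})

# Server side: tokens = weight(hardware resource type) * container capability.
def allocate_tokens(report: str, weights: dict[str, float]) -> int:
    msg = json.loads(report)
    return round(weights[msg["hardware_type"]] * msg["capability"])

weights = {"arm": 0.9, "x86": 1.0}
print(allocate_tokens(make_capability_report("container1", "arm", 3), weights))  # -> 3
print(allocate_tokens(make_capability_report("container2", "x86", 4), weights))  # -> 4
```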
  • FIG. 9 is a schematic flowchart of still another load balancing method between nodes provided by an embodiment of the present application.
  • the method may be based on the process of FIG. 7 or may be an independent process.
  • the method includes the following steps:
  • the network management device configures the load balancer server with a weight corresponding to the type of at least one hardware resource.
  • for details, refer to step S201 in the embodiment shown in FIG. 7.
  • the network management device sends a capacity expansion request to the VNFM. Accordingly, the VNFM receives the capacity expansion request.
  • one or more third containers may be expanded in the first node and/or the second node according to service requirements.
  • the capacity expansion request includes the type of the first node and/or the second node where the third container that is requested to be expanded is located, and the type of the container set where the third container that is requested to be expanded is located.
  • the container manager deploys a container image of at least one third container on the container set of the first node and/or the second node. Taking the deployment of one third container as an example here, the process of deploying multiple third containers is the same as deploying one third container. Specifically, steps S403 to S405 are included.
  • the VNFM sends a container deployment request to the container manager. Accordingly, the container manager receives the container deployment request.
  • the container deployment request is used to request to deploy the third container in the first node and/or the second node.
  • the container deployment request may include the type of hardware resource.
  • the VNFM sends a container set expansion request to the container manager. Accordingly, the container manager receives the container set expansion request.
  • the third container is specifically located in a container set in the first node and/or the second node. Therefore, the VNFM may also send a container set expansion request to the container manager.
  • the container manager deploys the container image of the third container on the container set.
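  • the lookup the container manager might perform is sketched below; the image database layout, the registry paths, and the function name are assumptions, as the embodiment only requires that the selected container image correspond to the hardware resource type carried in the container deployment request:

```python
# Select a container image by (container type, hardware resource type) from a
# hypothetical image database; the registry paths are placeholders.
IMAGE_DB = {
    ("service_a", "x86"): "registry.example.com/service_a:x86_64",
    ("service_a", "arm"): "registry.example.com/service_a:arm64",
}

def select_image(container_type: str, hardware_type: str) -> str:
    try:
        return IMAGE_DB[(container_type, hardware_type)]
    except KeyError:
        raise LookupError(f"no image for {container_type!r} "
                          f"on {hardware_type!r}") from None

print(select_image("service_a", "arm"))  # -> registry.example.com/service_a:arm64
```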
  • the load balancer client where the third container is located reports the service processing capability of the third container to the load balancer server.
  • the load balancer server receives the service processing capability of the third container.
  • after the third container is deployed, the load balancer client where the third container is located can obtain the service processing capability of the third container, which includes the type of hardware resource corresponding to the third container and the container capability, and then reports this service processing capability to the load balancer server.
  • the load balancer server determines the number of tokens corresponding to the service processing capability of the third container according to the weight corresponding to each type of hardware resource and the service processing capability of the third container.
  • the load balancer server allocates the number of tokens corresponding to the service processing capability of the third container to the third container.
  • This step is optional (represented by a dotted line in the figure), and the load balancer server may not perform this step.
  • the third container sends a notification to the VNFM to notify that the expansion is completed.
  • the load balancer server distributes traffic corresponding to the number of tokens to the first container according to the number of tokens corresponding to the service processing capability of the first container.
  • after the load balancer server re-determines the number of tokens corresponding to the service processing capability of each container according to the service processing capabilities of the newly added third container and of the existing containers, it can, when traffic arrives, distribute traffic corresponding to the re-determined number of tokens of the first container to the first container.
  • the load balancer server distributes traffic corresponding to the number of tokens to the third container according to the number of tokens corresponding to the service processing capability of the third container.
  • since the load balancer server has determined the number of tokens corresponding to the service processing capability of the third container, when traffic arrives it can distribute traffic corresponding to that number of tokens to the third container.
  • in this way, during capacity expansion the load balancer server allocates a corresponding number of tokens to each container according to the container's service processing capability, and then distributes the corresponding traffic to the container according to the number of tokens.
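  • a sketch of this scale-out bookkeeping, under the assumption that the server keeps a simple token table (the function and variable names are illustrative):

```python
# When a newly expanded container registers, compute its tokens from its
# reported capability and merge them into the existing token table.
def on_container_registered(tokens: dict[str, int], name: str,
                            hardware_type: str, capability: int,
                            weights: dict[str, float]) -> dict[str, int]:
    updated = dict(tokens)  # copy so the caller's table is untouched
    updated[name] = round(weights[hardware_type] * capability)
    return updated

weights = {"x86": 1.0, "arm": 0.9}
tokens = {"first": 10, "second": 9}
tokens = on_container_registered(tokens, "third", "arm", 10, weights)
print(tokens)  # {'first': 10, 'second': 9, 'third': 9}
```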
  • FIG. 10 is a schematic flowchart of still another load balancing method between nodes provided by an embodiment of the present application.
  • the method may be based on the process of FIG. 7 or may be an independent process.
  • the method includes the following steps:
  • the network management device sends a capacity reduction request to the VNFM. Accordingly, the VNFM receives the capacity reduction request.
  • the network management device may request to scale in one or more fourth containers.
  • the fourth container requested to be scaled in may be located on the third node.
  • the capacity reduction request includes the type of the third node where the fourth container requested to be scaled in is located, the type of the container set where that fourth container is located, and the number of fourth containers requested to be scaled in.
  • the type of the third node refers to the type of the host group associated with the third node, that is, the type of hardware resources.
  • the type of the container set can also be the same as the type of the node.
  • the VNFM sends a pre-scaling notification to the load balancer server.
  • the load balancer server receives the pre-scaling notification.
  • the pre-scaling notification is used to notify the load balancer server that the fourth container on the third node is to be scaled in, so that the load balancer server does not distribute traffic to the fourth container after receiving the pre-scaling notification.
  • the pre-scaling notification includes the type of the third node and the type of the container set.
  • the load balancer server selects a fourth container to be recycled from the container set in the third node.
  • the load balancer server recycles a corresponding number of fourth containers in the container set in the corresponding third node according to the pre-scaling notification; specifically, it recycles a corresponding number of service instances of the fourth container and cleans up the resources of those service instances.
  • the load balancer server sends a pre-scaling notification to the fourth container. Accordingly, the fourth container receives the pre-scaling notification.
  • the pre-scaling notification is used to notify the fourth container that the container is to be recycled.
  • the fourth container sends a pre-scaling response to the load balancer server. Accordingly, the load balancer server receives the pre-scaling response.
  • the pre-scaling response is used to notify that the pre-scaling notification has been successfully received and that the fourth container has handed over the service running on the fourth container.
  • the load balancer server sends a token recycling notification to the fourth container. Accordingly, the fourth container receives the token recycling notification.
  • after receiving the pre-scaling response from the fourth container, the load balancer server sends a token recycling notification to the fourth container.
  • the token recycling notification is used to notify the fourth container that the tokens distributed to it are to be recycled.
  • the fourth container sends a token recycling response to the load balancer server. Accordingly, the load balancer server receives the token recycling response.
  • the token recycling response is used to notify that the token recycling notification sent by the load balancer server has been successfully received.
  • the load balancer server sends a pre-scaling response to the VNFM. Accordingly, the VNFM receives the pre-scaling response.
  • the pre-scaling response is used to notify the VNFM that the pre-scaling is complete.
  • the VNFM sends a container set recycling notification to the load balancer server.
  • the load balancer server receives the container set recycling notification.
  • the container set recycling notification is used to notify the load balancer server to recycle the container set.
  • the load balancer server sends a container set recycling response to the VNFM.
  • This step is optional (indicated by a dotted line in the figure).
  • the load balancer server may not send a container set recycling response to the VNFM; in that case, the VNFM considers by default that the container set recycling response has been received.
  • the load balancer server distributes the corresponding traffic to the first container according to the number of tokens of the first container.
  • in this way, after capacity reduction the load balancer server allocates a corresponding number of tokens to the remaining containers according to the service processing capabilities of the remaining containers and distributes traffic to the remaining containers according to those numbers of tokens, which effectively balances the load between nodes and improves the utilization of hardware resources.
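  • the corresponding scale-in bookkeeping can be sketched as below; modelling the handshake as a single call is an assumption, since in the embodiment the pre-scaling and token recycling exchanges happen over explicit notifications and responses:

```python
# Once the fourth container has handed over its work and acknowledged token
# recycling, drop it from the token table so no further traffic reaches it.
def on_tokens_recycled(tokens: dict[str, int], victim: str) -> dict[str, int]:
    remaining = dict(tokens)
    remaining.pop(victim, None)  # recycled tokens are simply withdrawn
    return remaining

tokens = {"first": 10, "second": 9, "fourth": 9}
tokens = on_tokens_recycled(tokens, "fourth")
print(tokens)  # {'first': 10, 'second': 9}: traffic is again split 10:9
```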
  • FIG. 11 is a schematic structural diagram of a service instance deployment/load balancing system between nodes, which is also provided.
  • the service instance deployment/load balancing system between nodes is used to execute the above service instance deployment/load balancing method between nodes.
  • Part or all of the above methods may be implemented by hardware, and may also be implemented by software or firmware.
  • the service instance deployment/load balancing system between nodes may be a chip or an integrated circuit during specific implementation.
  • the service instance deployment/load balancing system 400 between nodes provided in FIG. 11 can be used for this implementation.
  • the service instance deployment/load balancing system 400 between nodes may include:
  • a memory 43 and a processor 44 (there may be one or more of each; one processor is taken as an example in FIG. 11), and may also include an input device 41 and an output device 42.
  • the input device 41, the output device 42, the memory 43, and the processor 44 may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 11.
  • the processor may be a central processing unit (CPU), a network processor (NP), or a WLAN device.
  • the processor may further include a hardware chip.
  • the above-mentioned hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD) or a combination thereof.
  • the above-mentioned PLD can be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), a general-purpose array logic (generic array logic, GAL) or any combination thereof.
  • the memory may include a volatile memory, such as a random-access memory (RAM); the memory may also include a non-volatile memory, such as a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory may also include a combination of the above types of memory.
  • the processor 44 is configured to invoke the program instructions stored in the memory 43 to execute the method steps in FIG. 4 , FIG. 7 , FIG. 9 or FIG. 10 .
  • one or more embodiments of the present disclosure may be provided as a method, a system, or a computer program product. Accordingly, one or more embodiments of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present disclosure may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.
  • An embodiment of the present application further provides a chip system, including at least one processor and an interface, where the at least one processor is coupled to a memory through the interface; when the at least one processor executes a computer program or instructions in the memory, the method of any of the above method embodiments is performed.
  • the chip system may be composed of chips, or may include chips and other discrete devices, which are not specifically limited in this embodiment of the present application.
  • Embodiments of the present application further provide a computer-readable storage medium, where a computer program can be stored on the storage medium; when the program is executed by a processor, the steps of the method for deploying a service instance/load balancing between nodes described in any embodiment of the present disclosure can be implemented.
  • Embodiments of the present application also provide a computer program product containing instructions, which, when run on a computer, cause the computer to execute the steps of the method for deploying a service instance/load balancing between nodes described in any embodiment of the present disclosure.
  • At least one (item) of a, b, or c can represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where each of a, b, and c may be singular or plural.
  • words such as “first” and “second” are used to distinguish the same or similar items with basically the same function and effect. Those skilled in the art can understand that the words “first”, “second” and the like do not limit the quantity and execution order, and the words “first”, “second” and the like are not necessarily different.
  • words such as “exemplary” or “for example” are used to represent examples, illustrations or illustrations. Any embodiments or designs described in the embodiments of the present application as “exemplary” or “such as” should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as “exemplary” or “such as” is intended to present the related concepts in a specific manner to facilitate understanding.
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the division of units is only a logical function division; in actual implementation there may be other division methods. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling, or direct coupling, or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • Units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • the above-mentioned embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • when software is used, the embodiments may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • when the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are generated in whole or in part.
  • the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • the computer instructions may be stored in or transmitted over a computer-readable storage medium.
  • the computer instructions can be sent from one website, computer, server, or data center to another by wire (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (for example, by infrared, radio, or microwave).
  • the computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that integrates one or more available media.
  • the available media may be a read-only memory (ROM), a random access memory (RAM), a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape, or a magnetic disk), an optical medium (for example, a digital versatile disc (DVD)), or a semiconductor medium (for example, a solid state disk (SSD)), and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Embodiments of the present application relate to a service instance deployment method, applied to a service instance deployment system that uses a unified container pod template and a unified container template and that establishes, in an image database, a correspondence between at least one type of container image and types of hardware resources. During the deployment of a service instance, by carrying, in a service instance deployment request, the type of node requested to be instantiated, a VNFM can carry the type of a hardware resource when sending a container deployment request to a container manager, and the container manager can obtain, from the image database, a container image corresponding to the type of the hardware resource, so as to implement the deployment of a service instance in heterogeneous deployments. Because a unified container pod template and a unified container template are used, the number of templates is reduced.
PCT/CN2021/082824 2021-03-24 2021-03-24 Service instance deployment method, and load balancing method and system between nodes WO2022198524A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202180096165.1A 2021-03-24 2021-03-24 Service instance deployment method, and load balancing method and system between nodes
PCT/CN2021/082824 2021-03-24 2021-03-24 Service instance deployment method, and load balancing method and system between nodes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/082824 2021-03-24 2021-03-24 Service instance deployment method, and load balancing method and system between nodes

Publications (1)

Publication Number Publication Date
WO2022198524A1 (fr)

Family

ID=83395048

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/082824 2021-03-24 2021-03-24 Service instance deployment method, and load balancing method and system between nodes

Country Status (2)

Country Link
CN (1) CN117043748A (fr)
WO (1) WO2022198524A1 (fr)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104219127A (zh) * 2014-08-30 2014-12-17 华为技术有限公司 Method and device for creating a virtual network instance
CN106533935A (zh) * 2015-09-14 2017-03-22 华为技术有限公司 Method and apparatus for obtaining service chain information in a cloud computing system
US20190036956A1 (en) * 2017-07-25 2019-01-31 Nicira, Inc. Context engine model
CN112425129A (zh) * 2018-07-18 2021-02-26 华为技术有限公司 Method and system for cluster rate limiting in a cloud computing system
CN111221618A (zh) * 2018-11-23 2020-06-02 华为技术有限公司 Method and apparatus for deploying a containerized virtualized network function

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117539643A (zh) * 2024-01-09 2024-02-09 上海晨钦信息科技服务有限公司 Credit card clearing and settlement platform, batch task processing method, and server
CN117539643B (zh) 2024-01-09 2024-03-29 上海晨钦信息科技服务有限公司 Credit card clearing and settlement platform, batch task processing method, and server

Also Published As

Publication number Publication date
CN117043748A (zh) 2023-11-10

Similar Documents

Publication Publication Date Title
JP7074880B2 (ja) ネットワーク・スライスの展開方法および装置
US12020055B2 (en) VNF service instantiation method and apparatus
WO2020186911A1 (fr) Procédé et dispositif de gestion de ressources pour fonction de réseau virtualisé, vnf, conteneurisée
EP3471345B1 (fr) Procédé d'attribution de ressources fondé sur un sla et nfvo
KR101932872B1 (ko) 네트워크 기능들 가상화의 관리 및 오케스트레이션을 위한 방법, 디바이스, 및 프로그램
US20190230004A1 (en) Network slice management method and management unit
AU2015419073B2 (en) Life cycle management method and device for network service
US10924966B2 (en) Management method, management unit, and system
WO2019179301A1 (fr) Procédé et dispositif permettant de gérer une ressource virtuelle
WO2018201856A1 (fr) Système et procédé pour centre de données à auto-organisation
US11871280B2 (en) VNF instantiation method, NFVO, VIM, VNFM, and system
CN109995552B (zh) Vnf服务实例化方法及装置
US11301284B2 (en) Method for managing VNF instantiation and device
US20210326306A1 (en) Method and apparatus for deploying virtualised network function
JP2020028049A (ja) 通信制御装置、通信制御システム、通信制御方法および通信制御プログラム
WO2022198524A1 (fr) Procédé de déploiement d'instance de service, et procédé et système d'équilibrage de charge entre nœuds
CN112015515A (zh) 一种虚拟网络功能的实例化方法及装置
WO2018014351A1 (fr) Procédé et appareil de configuration de ressources
WO2024104311A1 (fr) Procédé de déploiement d'une fonction de réseau virtualisée et dispositif de communication
WO2023066224A1 (fr) Procédé et appareil de déploiement d'un service de conteneur
JP7450072B2 (ja) 仮想化ネットワーク・サービス配備方法及び装置
CN113098705B (zh) 网络业务的生命周期管理的授权方法及装置
JP2024502038A (ja) スケーリング方法および装置
CN115617446A (zh) 一种虚拟化网络功能的资源调度方法以及相关设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21932152

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202180096165.1

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21932152

Country of ref document: EP

Kind code of ref document: A1