CN113254165A - Load flow distribution method and device for virtual machine and container, and computer equipment - Google Patents


Info

Publication number
CN113254165A
Authority
CN
China
Prior art keywords
node
target
load
service
virtual machine
Prior art date
Legal status
Granted
Application number
CN202110778851.9A
Other languages
Chinese (zh)
Other versions
CN113254165B (en)
Inventor
陈硕实
Current Assignee
Inaco Technology Beijing Co ltd
Original Assignee
Inaco Technology Beijing Co ltd
Priority date
Filing date
Publication date
Application filed by Inaco Technology Beijing Co ltd
Priority to CN202110778851.9A
Publication of CN113254165A
Application granted
Publication of CN113254165B
Legal status: Active

Classifications

    • G06F 9/45558 — Hypervisor-specific management and integration aspects (under G06F 9/455 Emulation; interpretation; software simulation, e.g. virtualisation or emulation of application or operating system execution engines)
    • G06F 9/5077 — Logical partitioning of resources; management or configuration of virtualized resources (under G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU])
    • G06F 2009/45562 — Creating, deleting, cloning virtual machine instances
    • G06F 2009/4557 — Distribution of virtual machine instances; migration and load balancing
    • G06F 2009/45587 — Isolation or security of virtual machine instances
    • G06F 2009/45595 — Network integration; enabling network access in virtual machine instances


Abstract

The application discloses a load traffic distribution method and apparatus for virtual machines and containers, and a computer device, relating to the field of computer technology. The method includes the following steps: binding a Kong gateway to the back end of a load balancer to receive traffic service requests forwarded by the load balancer; if the Kong gateway receives a traffic service request for a target service, extracting, from the load nodes bound at the back end, the target virtual machine node and/or target container node matched with the target website domain name corresponding to the request, where the load nodes are obtained by Kong Ingress first finding the target mixed-running service matched with the target website domain name based on a first correspondence, then looking up the second correspondence of that mixed-running service, and are configured as a load at the back end of the Kong gateway; and distributing the load traffic of the target service to the target virtual machine node and/or target container node through the Kong gateway based on a dynamically adjusted preset traffic-distribution weight rule. The method and apparatus are suitable for balanced scheduling of load traffic between virtual machines and containers.

Description

Load flow distribution method and device for virtual machine and container, and computer equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for distributing load traffic of a virtual machine and a container, and a computer device.
Background
With the popularization of container technology, more and more developers choose to deploy applications into containers to run. For the large number of applications that existed before container technology became popular, safely migrating them from virtual machines to a container environment while minimizing the impact on online users is a difficult problem for developers.
One commonly adopted method is to mount the node where the Kong gateway resides together with the virtual machine nodes to the back end of a general-purpose load balancer and, by configuring weights, distribute a small portion of the traffic to the applications running in containers, achieving a gray-scale (canary) test; another is to set the Service type to LoadBalancer so that the cloud-controller-manager provided by the cloud service provider binds the service's backend container group directly to the cloud SLB.
However, the above manner of implementing scheduling directly with a load balancer requires a separate load-balancing device to be configured for each application, on which virtual machine nodes must also be bound, making service configuration complex and in turn increasing the cost of manual management.
Disclosure of Invention
In view of this, the present application provides a method, an apparatus, and a computer device for load traffic distribution of virtual machines and containers, which can be used to solve the technical problems of complex service configuration and high manual-management cost caused by the current manner of implementing scheduling with a load balancer.
According to an aspect of the present application, there is provided a load traffic distribution method for virtual machines and containers, the method including:
binding the Kong gateway to the back end of the load balancer to receive the traffic service requests forwarded by the load balancer;
if the Kong gateway receives a traffic service request for a target service, extracting, from the load nodes bound at the back end, a target virtual machine node and/or a target container node matched with the target website domain name corresponding to the traffic service request, wherein the load nodes are obtained by Kong Ingress finding the target mixed-running service matched with the target website domain name based on the first correspondence and then looking up the second correspondence of the target mixed-running service, and are configured as a load at the back end of the Kong gateway;
and distributing the load traffic of the target service to the target virtual machine node and/or the target container node by utilizing the Kong gateway based on a dynamically adjusted preset traffic distribution weight rule.
According to another aspect of the present application, there is provided a load traffic distribution apparatus of a virtual machine and a container, the apparatus including:
a receiving module, configured to bind the Kong gateway to the back end of the load balancer so as to receive the traffic service requests forwarded by the load balancer;
an extraction module, configured to: if the Kong gateway receives a traffic service request for a target service, extract, from the load nodes bound at the back end, the target virtual machine node and/or target container node matched with the target website domain name corresponding to the traffic service request, wherein the load nodes are obtained by Kong Ingress finding the target mixed-running service matched with the target website domain name based on the first correspondence and then looking up the second correspondence of the target mixed-running service, and are configured as a load at the back end of the Kong gateway;
and the distribution module is used for distributing the load traffic of the target service to the target virtual machine node and/or the target container node by utilizing the Kong gateway based on a dynamically adjusted preset traffic distribution weight rule.
According to yet another aspect of the present application, there is provided a non-transitory readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described load traffic distribution method for virtual machines and containers.
According to yet another aspect of the present application, there is provided a computer device including a non-volatile readable storage medium, a processor, and a computer program stored on the non-volatile readable storage medium and executable on the processor, the processor implementing the above-described load traffic distribution method for virtual machines and containers when executing the program.
By means of the above technical solution, compared with the current manner of implementing scheduling with a load balancer, the load traffic distribution method for virtual machines and containers of the present application can bind the Kong gateway to the back end of the load balancer to receive the traffic service requests forwarded by the load balancer; when the Kong gateway receives a traffic service request for a target service, the target virtual machine node and/or target container node matched with the target website domain name corresponding to the request can be extracted from the load nodes bound at the back end. After Kong Ingress finds the target mixed-running service matched with the target website domain name based on the first correspondence, the load nodes are found based on the second correspondence of that mixed-running service and configured as a load at the back end of the Kong gateway; the Kong gateway can then distribute the load traffic of the target service to the target virtual machine node and/or target container node using a dynamically adjusted preset traffic-distribution weight rule, realizing intelligent distribution of load traffic. With this solution, virtual machine nodes and container nodes can be abstracted into resources, and balanced scheduling of the load is achieved by the Kong gateway; the load-balancing device then only needs to detect the liveness of the Kong gateway nodes, while health checks of all back-end services are completed by the Kong gateway. Virtual machine nodes and container nodes are thereby placed on equal footing, and a load weight can be set independently for each node by dynamically adjusting the Kong gateway, so load traffic can be migrated safely from the virtual machine to the container environment without affecting online users.
The foregoing description is only an overview of the technical solutions of the present application, and the present application can be implemented according to the content of the description in order to make the technical means of the present application more clearly understood, and the following detailed description of the present application is given in order to make the above and other objects, features, and advantages of the present application more clearly understandable.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 illustrates a flowchart of a load traffic distribution method for virtual machines and containers according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart illustrating a load traffic distribution method for virtual machines and containers according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram illustrating a virtual machine and container hybrid load balancer according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram illustrating a load traffic distribution apparatus for virtual machines and containers according to an embodiment of the present disclosure;
fig. 5 shows a schematic structural diagram of another load flow distribution device for virtual machines and containers according to an embodiment of the present application.
Detailed Description
The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
To address the technical problems of complex service configuration and high manual-management cost caused by the current manner of implementing scheduling with a load balancer, the present application provides a load traffic distribution method for virtual machines and containers. As shown in fig. 1, the method includes the following steps:
101. binding the Kong gateway to the back end of the load balancer to receive the traffic service request forwarded by the load balancer.
The Kong gateway is a highly available and easily extensible API gateway project built on OpenResty (Nginx plus Lua modules) and open-sourced by Mashape. It provides an easy-to-use RESTful API for operating and configuring the API management system, can scale horizontally across multiple Kong servers, and distributes requests evenly to each back-end server through front-end load-balancing configuration, so as to handle large batches of network requests. A load corresponds to the website application accessed when a certain website is visited: the application is deployed on many servers forming a cluster but presented externally as a single application, with only a few servers exposed to the outside. Access to the back-end servers is usually made through these few front-end servers, which form a load-balancing cluster whose role is to distribute traffic evenly to the potentially tens of thousands of load nodes arranged at the back end. In the present application, load balancing can be implemented with the Kong gateway by binding it to the back end of the cloud service provider's load balancer. Specifically, all service requests received by the load balancer bound to the Kong gateway are forwarded to the Kong gateway; the Kong gateway then distributes the load traffic evenly, based on a dynamically adjusted preset traffic-distribution weight rule, to the back-end servers corresponding to the service, and those back-end servers respond to and execute the traffic service requests.
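This division of labor — the front load balancer merely forwards to live Kong nodes, while the Kong gateway picks a weighted back end — can be sketched in Python. All names, node addresses, and weights below are invented for illustration and are not part of Kong's actual API:

```python
import random


class KongGateway:
    """Toy stand-in for a Kong gateway node: it receives every request the
    front load balancer forwards and picks a back-end server by weight."""

    def __init__(self, backends):
        # backends: list of (server_name, weight) pairs
        self.backends = backends

    def dispatch(self, request):
        names = [name for name, _ in self.backends]
        weights = [w for _, w in self.backends]
        # weighted random choice stands in for the preset
        # traffic-distribution weight rule
        return random.choices(names, weights=weights, k=1)[0]


class CloudLoadBalancer:
    """The cloud provider's load balancer only tracks which Kong nodes are
    alive and forwards every request to one of them unchanged."""

    def __init__(self, kong_nodes):
        self.kong_nodes = kong_nodes  # list of (gateway, alive_flag)

    def forward(self, request):
        alive = [gw for gw, ok in self.kong_nodes if ok]
        return random.choice(alive).dispatch(request)
```

Note that the load balancer never sees the virtual machine or container nodes at all; only the Kong gateways are mounted on it.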
Before container technology appeared, loads were generally deployed on virtual machine nodes, load balancing was implemented, and external access was then provided. Once containers appeared, given their advantages over virtual machines in start-up speed and running performance, more and more developers chose to deploy applications into containers. The present application aims to solve the problem of how to safely migrate the large number of applications that existed before container technology became popular from virtual machines to a container environment, while minimizing the impact on online users, simplifying service configuration, and reducing the cost of manual management. Compared with the current manner of mounting the node where the Kong gateway resides together with the virtual machine nodes to the back end of a general-purpose load balancer and distributing a small portion of traffic to the applications running in containers by configuring weights, the present method abstracts virtual machine nodes and container nodes into resources and binds the Kong gateway to the back end of the load balancer; the load balancer then only needs to detect the liveness of the Kong gateway nodes, while load balancing and health checks for all back-end services are completed by the Kong gateway.
Based on the received traffic service request for a target service, the Kong gateway can extract the virtual machine node and/or container node matched with the target website domain name corresponding to the request from the load nodes bound at the back end. Virtual machine nodes and container nodes are thus placed on equal footing, and during the gray-scale test a load weight can be set independently for each node by dynamically adjusting the Kong gateway, so that load traffic can be migrated safely from the virtual machine to the container environment without affecting online users.
In the present application, the Kong gateway is bound directly to the back end of the load balancer, replacing the prior art in which virtual machine nodes and container nodes are bound directly to the load balancer. This overcomes the following defects of the prior art: 1. a load-balancing device must be configured for each service; 2. multiple virtual machine nodes must be bound on the load-balancing device, which is complex and further raises the cost of manual management; 3. the load-balancing device must health-check the mounted nodes so that a back-end node can be removed automatically when it fails and can no longer provide service, yet virtual machine nodes and Kong gateway nodes need different health-check rules, which existing load balancing cannot provide.
The execution subject of the present application may be a virtual machine and container hybrid load balancer. As shown in fig. 3, the hybrid load balancer may include the Kong gateway, a Kong Ingress, a mixed-running Service, and load-node Endpoints. A first correspondence between a preset website domain name and the mixed-running Service is created based on the Kong Ingress, and a second correspondence is created between the mixed-running Service and the load-node Endpoints; the load nodes may specifically include virtual machine nodes and container nodes. The Kong gateway can monitor the creation, change, and deletion of the Service, and can also monitor the first correspondence through the Kong Ingress; that is, the Kong gateway monitors both correspondences at the same time. Through the first and second correspondences, the Kong gateway can achieve load balancing from the website domain name all the way to the load-node Endpoints.
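The two correspondences can be pictured as a pair of lookup tables chained together. The sketch below reuses the example domain names from this application, but the table names and node addresses are invented for illustration:

```python
# First correspondence (maintained via Kong Ingress):
# preset website domain name -> mixed-running Service.
ingress_rules = {
    "www.orderqq.com": "order-service",
    "www.userqq.com": "user-service",
}

# Second correspondence: mixed-running Service -> load-node Endpoints,
# a mix of virtual machine nodes and container nodes.
service_endpoints = {
    "order-service": [
        {"addr": "10.0.0.11", "kind": "vm"},
        {"addr": "172.16.0.5", "kind": "container"},
    ],
    "user-service": [
        {"addr": "10.0.0.12", "kind": "vm"},
    ],
}


def resolve(domain):
    """Chain the two correspondences: domain -> Service -> Endpoints.
    This is the lookup the gateway effects by watching both maps."""
    service = ingress_rules[domain]
    return service_endpoints[service]
```

By watching both tables at once, the gateway in effect associates the website domain name directly with the back-end load nodes.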
102. If the Kong gateway receives the traffic service request of the target service, extracting the target virtual machine node and/or the target container node matched with the target website domain name corresponding to the traffic service request from the load nodes bound at the back end.
The load nodes are obtained by first finding, based on the first correspondence, the target mixed-running service matched with the target website domain name, then looking up the second correspondence of that mixed-running service, and are configured as a load at the back end of the Kong gateway.
For this embodiment, the load balancer of the cloud service provider bound by the Kong gateway forwards the corresponding traffic service request to the Kong gateway; after receiving the traffic service request of the target service, the Kong gateway may further, based on the Kong Ingress, look up in the first correspondence the target mixed-running service matched with the target service, and then determine, based on the second correspondence, the target virtual machine node and/or target container node matched with that mixed-running service. These nodes can be used to bear the traffic requests of the target service.
103. And distributing the load traffic of the target service to the target virtual machine node and/or the target container node by utilizing the Kong gateway based on the dynamically adjusted preset traffic distribution weight rule.
For this embodiment, in order to avoid affecting the service of online users when the load is migrated from the virtual machine to the container environment, the load traffic allocated to the container can be dynamically ramped from a small amount to a large amount through the preset traffic-distribution weight rule, with a gray-scale test of the service response state performed along the way, thereby ensuring the safety and stability of the transition of load traffic from the virtual machine to the container.
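One way to realize a dynamically adjusted weight rule deterministically is a largest-remainder split: as the container weight is ramped up, the share of requests it receives grows in proportion. The function below is a sketch of that idea, not the algorithm Kong actually uses, and the ramp values are invented:

```python
def split_traffic(requests, weights):
    """Split a batch of requests among nodes in proportion to their
    weights, using the largest-remainder method so every request
    is assigned exactly once."""
    total = sum(weights.values())
    shares = {node: requests * w // total for node, w in weights.items()}
    # hand leftover requests to the nodes with the largest remainders
    leftover = requests - sum(shares.values())
    by_remainder = sorted(weights,
                          key=lambda n: requests * weights[n] % total,
                          reverse=True)
    for node in by_remainder[:leftover]:
        shares[node] += 1
    return shares


# Gray-scale ramp: the container weight is raised step by step while
# the service response state is observed at each step.
ramp = [{"vm": 95, "container": 5},
        {"vm": 50, "container": 50},
        {"vm": 0, "container": 100}]
```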
With the load traffic distribution method for virtual machines and containers of this embodiment, the Kong gateway can be bound to the back end of the load balancer to receive the traffic service requests forwarded by the load balancer; when the Kong gateway receives a traffic service request for a target service, the target virtual machine node and/or target container node matched with the target website domain name corresponding to the request can be extracted from the load nodes bound at the back end. After Kong Ingress finds the target mixed-running service matched with the target website domain name based on the first correspondence, the load nodes are found based on the second correspondence of that mixed-running service and configured as a load at the back end of the Kong gateway; the Kong gateway can then distribute the load traffic of the target service to the target virtual machine node and/or target container node using a dynamically adjusted preset traffic-distribution weight rule, realizing intelligent distribution of load traffic. With this solution, virtual machine nodes and container nodes can be abstracted into resources and balanced scheduling of the load achieved through the Kong gateway; the load-balancing device only needs to detect the liveness of the Kong gateway nodes, and health checks of all back-end services are completed through the Kong gateway. Virtual machine nodes and container nodes are placed on equal footing, a load weight can be set for each node independently by dynamic adjustment, and load traffic can be migrated safely from the virtual machine to the container environment without affecting online users.
Further, as a refinement and extension of the specific implementation of the foregoing embodiment, to fully illustrate the implementation process of this embodiment, another load traffic distribution method for virtual machines and containers is provided. As shown in fig. 2, the method includes:
201. binding the Kong gateway to the back end of the load balancer to receive the traffic service request forwarded by the load balancer.
For this embodiment, after the Kong gateway is bound to the back end of the load balancer, in order to enable the Kong gateway to achieve balanced scheduling of load traffic, the steps of the embodiment may further include: configuring a corresponding mixed-running service for each preset website domain name, and creating, based on Kong Ingress, a first correspondence between the preset website domain names and the mixed-running services, the first correspondence being used to find the mixed-running service matched with a preset website domain name; creating, based on preset labels matched with the preset website domain name, a second correspondence between the mixed-running service and the executable virtual machine nodes and executable container nodes, the second correspondence being used to find, via the mixed-running service, the executable virtual machine nodes and executable container nodes that execute the traffic service under the preset website domain name; and binding the executable virtual machine nodes and executable container nodes, in the form of a mixed-running service node cluster, to the load corresponding to the traffic service at the back end of the Kong gateway by using the Kong Ingress Controller, so that the Kong gateway performs balanced scheduling of the traffic load over them. That is, a first correspondence is established between the website domain name and the mixed-running Service by means of Kong Ingress, a second correspondence is established between the Service and the back-end load nodes, and the Kong gateway, by monitoring both correspondences simultaneously, in effect associates the website domain name with the back-end load nodes.
When the Kong gateway monitors the first correspondence, the corresponding mixed-running Service can be determined from the website domain name in the traffic service request: for example, the mixed-running service corresponding to the order service can be found from the website domain name for orders, and the mixed-running service corresponding to the user service from the website domain name for users. For the user, if entering the website domain name "www.orderqq.com" reaches the order service and entering "www.userqq.com" reaches the user service, it is the first correspondence that makes this possible.
A preset website domain name (also called a network domain) is the name of a computer or group of computers on the Internet, composed of a string of names separated by dots, and is used to locate and identify the computer (sometimes its geographical position as well) during data transmission. Domain names arose to remedy the defects of IP addresses, which are inconvenient to memorize and cannot convey the name or nature of the organization at an address; common top-level domains include com, net, cn, edu, top, and xyz. For example, wikipedia.org is a domain name corresponding to the IP address 208.80.152.2. The Domain Name System (DNS) works like an automatic telephone directory: instead of dialing a telephone number (the IP address), we can "dial" the name directly, and the domain name system translates the human-friendly name (e.g., www.wikipedia.org) into the machine-recognized IP address (e.g., 208.80.152.2). The IP address is the numeric identifier of an Internet host used for routing and addressing; since it is hard for people to remember, the character-based identifier of the domain name came into being. The mixed-running Service is a Service that can support mixed bearing by virtual machines and containers. The preset label is a character label in a preset format determined according to the preset website domain name, for example, for a preset website domain name of www.
For this embodiment, the preset website domain names are domain names with traffic service requirements; each preset website domain name is configured with a corresponding mixed-running Service, and each mixed-running Service is matched with a group of virtual machine nodes and/or container nodes used to bear the load traffic of the traffic service under that domain name. The virtual machine nodes and container nodes can each be marked with the corresponding preset label based on their IP addresses, and a policy scheme deployed accordingly: specifically, the traffic service request carries the preset label of the load nodes that are to bear the load traffic, and the corresponding load nodes can be found through that label, i.e., both the virtual machines and the containers can be reached. All virtual machine nodes and container nodes executing the traffic service under a preset website domain name can then be bound, in the form of a mixed-running service node cluster, to the load corresponding to the traffic service at the back end of the Kong gateway by using the Kong Ingress Controller; the Kong gateway extracts from the cluster the target virtual machine nodes and target container nodes used to execute the target service corresponding to the preset website domain name and distributes the load traffic of the target service to them, achieving balanced scheduling of the load traffic.
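Selecting the mixed-running service node cluster by preset label amounts to a simple filter over the node inventory. Because the application's own example of the label format is truncated, the `svc:` labels below are purely hypothetical:

```python
def select_cluster(nodes, label):
    """Collect every virtual machine or container node carrying the
    preset label, forming the mixed-running service node cluster
    for one preset website domain name."""
    return [node for node in nodes if label in node["labels"]]


# Hypothetical node inventory: each node is tagged, based on its IP
# address, with the preset label of the service it can bear.
nodes = [
    {"ip": "10.0.0.11", "kind": "vm",        "labels": {"svc:order"}},
    {"ip": "172.16.0.5", "kind": "container", "labels": {"svc:order"}},
    {"ip": "10.0.0.12", "kind": "vm",        "labels": {"svc:user"}},
]
```

Under this scheme a single cluster naturally mixes virtual machine and container nodes, which is what lets the gateway treat them on equal footing.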
202. If the Kong gateway receives a traffic service request for the target service, determine the target tag identifier matching the target service.
For this embodiment, in a specific application scenario, after the Kong gateway receives the traffic service request of the target service, the website domain name corresponding to the target service may be further retrieved, and the target tag identifier corresponding to the website domain name may be determined according to the generation rule corresponding to the tag identifier.
203. Extract, from the back-end-bound load nodes, the mixed-running service node cluster matching the target tag identifier, where the mixed-running service node cluster includes the executable virtual machine nodes and executable container nodes configured with the target tag identifier.
In this embodiment, each website domain name corresponds to a different mixed-running Service, and each Service corresponds to a different group of load nodes, that is, a mixed-running service node cluster. After the target tag identifier corresponding to the website domain name is determined in step 202, the corresponding target mixed-running service can be retrieved through Kong Ingress based on the first correspondence, and the virtual machine nodes and/or container nodes for bearing the target service's load traffic determined from the target mixed-running service and the second correspondence. Specifically, the mixed-running service node cluster containing those nodes can be retrieved from the load nodes bound at the back end of the Kong gateway; all virtual machine nodes and container nodes configured with the target tag identifier are stored in this cluster. Furthermore, the Kong gateway can perform health monitoring on the virtual machine nodes and container nodes in the cluster and screen out all executable virtual machine nodes and executable container nodes that are in a healthy state.
The load nodes here are the nodes obtained and configured as the Kong gateway's back-end load based on the first and second correspondences in step 201. Specifically, after Kong Ingress finds the target mixed-running service matching the target website domain name based on the first correspondence, the second correspondence belonging to that target mixed-running service is found and configured as the back-end load of the Kong gateway.
For example, if the traffic service request corresponds to the "treasure" application, the corresponding label may be "APP: treasure", with the label attached according to the IP-address attribute. The back-end nodes bound behind the Kong gateway can then be searched for the mixed-running service node cluster matching the label "APP: treasure", and all load nodes marked "APP: treasure" that are in a healthy state serve as the load nodes under the "treasure" application, used to bear its load traffic.
204. Determine the executable virtual machine nodes and executable container nodes in the mixed-running service node cluster as the target virtual machine nodes and target container nodes for responding to the traffic service request.

In this embodiment, all executable virtual machine nodes and executable container nodes in the mixed-running service node cluster may be determined as the target virtual machine nodes and target container nodes for responding to the traffic service request; alternatively, based on the load size of the traffic service, only some of the executable virtual machine nodes and executable container nodes in the cluster may be determined as the target nodes.
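Steps 203 and 204 amount to a label match over the back-end nodes followed by a health screen. A minimal sketch, with an illustrative node-record shape and a hypothetical `healthy` flag standing in for the Kong gateway's health monitoring:

```python
# Hypothetical back-end-bound load nodes; the field names are assumptions.
BACKEND_NODES = [
    {"ip": "10.0.0.11", "kind": "vm",        "label": "APP:treasure", "healthy": True},
    {"ip": "10.0.0.12", "kind": "vm",        "label": "APP:treasure", "healthy": False},
    {"ip": "10.0.1.21", "kind": "container", "label": "APP:treasure", "healthy": True},
    {"ip": "10.0.2.31", "kind": "container", "label": "APP:other",    "healthy": True},
]

def extract_cluster(nodes, target_label):
    """Step 203: the mixed-running service node cluster is every node
    configured with the target tag identifier."""
    return [n for n in nodes if n["label"] == target_label]

def executable_nodes(cluster):
    """Step 204: health screening keeps only nodes in a healthy state
    as target virtual machine / container nodes."""
    return [n for n in cluster if n["healthy"]]

cluster = extract_cluster(BACKEND_NODES, "APP:treasure")
targets = executable_nodes(cluster)
```

The unhealthy VM stays in the label-matched cluster but is excluded from the target set, mirroring the description of screening out executable nodes.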
205. Distribute the load traffic of the target service to the target virtual machine node and/or the target container node by using the Kong gateway, based on a dynamically adjusted preset traffic distribution weight rule.
In this embodiment, in order to avoid affecting online users when the load is migrated from the virtual machine to the container environment, the load traffic of the target service may be shifted gradually, from a small amount to a large amount, under a preset traffic distribution weight rule, ensuring the security and stability of the transition of load traffic from virtual machine to container. Correspondingly, this step may specifically include: configuring a first traffic distribution weight for the target virtual machine node and a second traffic distribution weight for the target container node based on a first preset traffic distribution weight rule; distributing the load traffic of the target service to the target virtual machine node and/or the target container node according to the first and second traffic distribution weights; if the load traffic in the target container node configured with the second traffic distribution weight is judged to be operating normally, increasing the second traffic distribution weight based on a second preset traffic distribution weight rule and distributing load traffic to the target container node according to the increased weight; and repeating the process of increasing the second traffic distribution weight based on the second preset traffic distribution weight rule until all the load traffic of the target service is distributed to the target container node according to the second traffic distribution weight.
For example, for 1,000,000 units of load traffic, based on the first preset traffic distribution weight rule, most of the load traffic, for example 99%, can first be injected into the virtual host and a small part, for example 1%, injected into the container. A grey-release test can then be performed on the roughly 10,000 users' worth of load traffic in the container; because this traffic is small, it causes no service impact on online users. If at this point the network in the container is unavailable, or the container itself has a problem and cannot provide service normally, a corresponding adjustment can be made in time. When it is determined that the network problem in the container has been solved, or the grey-test result shows that the service in the container is executing normally, the traffic distribution weight of the container can be increased step by step under cyclic grey testing according to the second preset traffic distribution weight rule, for example from 10% to 20% to 50% and so on up to 100%; when the container's traffic distribution weight reaches 100%, the secure migration of load traffic from the virtual machine to the container environment is complete.
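The grey-release ramp above can be sketched as a weight schedule that advances only while each stage's test passes. The schedule values and function names below are illustrative assumptions; in a real deployment the weights would be set on the Kong gateway's upstream targets rather than computed in process:

```python
RAMP = [1, 10, 20, 50, 100]   # container-side weight (%) at each grey stage

def split_traffic(total: int, container_pct: int) -> tuple[int, int]:
    """Split load traffic between the VM side and the container side
    according to the current weights (integer division for simplicity)."""
    to_container = total * container_pct // 100
    return total - to_container, to_container

def migrate(stage_ok) -> int:
    """Advance through the ramp while each stage's grey test passes; return
    the container weight finally reached (100 means migration complete).
    `stage_ok` is a hypothetical callback reporting the grey-test result."""
    reached = 0
    for pct in RAMP:
        if not stage_ok(pct):   # hold the weight instead of increasing it
            break
        reached = pct
    return reached
```

With the 99%/1% starting split of the example, `split_traffic(1_000_000, 1)` sends 10,000 units to the container; a failing grey test at any stage stops the ramp at the last healthy weight.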
In a specific application scenario, the executable virtual machine nodes and executable container nodes in the mixed-running service node cluster are configured, created, and updated by using Kubernetes. A specific implementation process may be as follows: monitor, by using Kubernetes, whether there is a change task for an executable virtual machine node and/or executable container node; if the change task exists, judge whether the executable virtual machine node and/or executable container node to be changed has a mounted mixed-running service; and if the mounted mixed-running service exists, update the executable virtual machine nodes and/or executable container nodes in the mixed-running service node cluster corresponding to that mixed-running service according to the change task.
The mixed-running Service is a load-balancing mode of k8s, but in the present application the Kong gateway does not use that mode; the load-balancing mode used by the Kong gateway monitors the creation, change, and deletion of Kong Ingress, and can connect the website domain name with the Service. Meanwhile, it can also monitor the creation, change, deletion, and so on of the Service itself. Specifically, when a mixed-running Service is created, changed, or deleted, the load nodes corresponding to that Service change as well: if a mixed-running Service is created, the corresponding back-end load nodes are newly added; when a mixed-running Service is deleted, all load nodes corresponding to it are deleted; and when a mixed-running Service is changed (for example, when capacity is expanded), the load nodes corresponding to it increase accordingly. Correspondingly, Kubernetes can be used to monitor whether there is a change task for an executable virtual machine node and/or executable container node; when a change task exists, whether the node to be changed has a mounted mixed-running service can be further judged, and if it does, the executable virtual machine nodes and/or executable container nodes in the corresponding mixed-running service node cluster are synchronously updated according to the change task.
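The create/change/delete behaviour above can be sketched as an event handler that keeps the back-end mapping in sync. The event shape (`type`, `service`, `nodes`) is an illustrative assumption modelled loosely on Kubernetes watch events, not the real watch API:

```python
def apply_event(cluster: dict, event: dict) -> None:
    """Update the Service -> load-node mapping on a change task, mirroring
    the monitoring behaviour described above (illustrative event format)."""
    svc, kind = event["service"], event["type"]
    if kind == "ADDED":
        cluster[svc] = list(event["nodes"])      # new Service: add its nodes
    elif kind == "DELETED":
        cluster.pop(svc, None)                   # all nodes of the Service removed
    elif kind == "MODIFIED":                     # e.g. scale-out: node set replaced
        cluster[svc] = list(event["nodes"])
```

A real controller would consume these events from a Kubernetes informer and push the resulting node set to the gateway; the sketch only captures the bookkeeping.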
In a specific application scenario, the load balancing device only needs to detect the survival state of the Kong gateway nodes; all health checks of the back-end services are completed by the Kong gateway. Correspondingly, this embodiment specifically includes the following steps: detecting the survival state of the Kong gateway nodes with the load balancer; and performing health-state detection on the back-end-bound virtual machine nodes and container nodes with the Kong gateway nodes, so as to remove failed virtual machine nodes and/or container nodes.
A Kong gateway node is a virtual machine or container group running the Kong gateway service. When forwarding the traffic service request to the Kong gateway, the load balancing device can monitor the Kong gateway's load balancing and scheduling process; when it judges from this process that a Kong gateway is not in a surviving state, it can return the traffic service request to the upstream virtual host or container group and forward the request to other Kong gateways that execute the target service and are in a surviving state, thereby realizing load-balanced scheduling.
Correspondingly, when the Kong gateway performs health detection on the load nodes, it can check each node's response state while injecting load traffic into it; if a load node fails to respond, or does not respond completely, within a preset time period, the node can be judged abnormal, and the failed virtual machine nodes and/or container nodes can then be removed from the Kong gateway.
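The removal rule above (no response, or an incomplete response, within the preset window) can be sketched as a pruning pass. The response-record fields are hypothetical; a real Kong deployment reports target health through its health checks and Admin API rather than records like these:

```python
def prune_failed(nodes, responses, timeout_s=5.0):
    """Keep only load nodes whose health probe responded completely within
    the preset time window; everything else is judged abnormal and removed.
    `responses` maps node IP -> hypothetical probe record."""
    healthy = []
    for node in nodes:
        r = responses.get(node["ip"])          # no record = no response at all
        if r is not None and r["complete"] and r["elapsed_s"] <= timeout_s:
            healthy.append(node)
    return healthy
```

A node with an incomplete response is dropped just like a silent one, matching the "not responded or not responded completely" condition in the text.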
By the above load traffic distribution method for virtual machines and containers, the Kong gateway can be bound to the back end of the load balancer to receive the traffic service requests forwarded by the load balancer; when the Kong gateway receives a traffic service request for a target service, the target virtual machine node and/or target container node matching the target website domain name corresponding to the request can be extracted from the back-end-bound load nodes. After Kong Ingress finds the target mixed-running service matching the target website domain name based on the first correspondence, the load nodes are found based on the second correspondence of that target mixed-running service and configured as the load at the back end of the Kong gateway. Furthermore, the load traffic of the target service can be distributed to the target virtual machine node and/or target container node through the Kong gateway under a dynamically adjusted preset traffic distribution weight rule, realizing intelligent distribution of the load traffic. With this technical scheme, virtual machine nodes and container nodes can be abstracted into resources, and balanced scheduling of the load is achieved by the Kong gateway; the load balancing device then only needs to detect the survival state of the Kong gateway nodes, while all health checks of the back-end services are completed by the Kong gateway. The virtual host nodes and container nodes are placed on an equal footing, and a load weight can be set independently for each node through dynamic adjustment at the Kong gateway, so that load traffic can be migrated safely from the virtual machine to the container environment without affecting online users.
Further, as a specific implementation of the method shown in fig. 1 and fig. 2, an embodiment of the present application provides a load traffic distribution apparatus for virtual machines and containers, as shown in fig. 4, the apparatus includes: a receiving module 31, an extracting module 32, and a distributing module 33;
a receiving module 31, configured to bind the Kong gateway to a back end of the load balancer to receive a traffic service request forwarded by the load balancer;
an extracting module 32, configured to: if the Kong gateway receives a traffic service request for a target service, extract, from the back-end-bound load nodes, the target virtual machine node and/or target container node matching the target website domain name corresponding to the request, where the load nodes are obtained by Kong Ingress finding the target mixed-running service matching the target website domain name based on the first correspondence, finding the second correspondence based on the target mixed-running service, and configuring it as the load at the back end of the Kong gateway;
and the allocating module 33 may be configured to allocate the load traffic of the target service to the target virtual machine node and/or the target container node by using a Kong gateway based on the dynamically adjusted preset traffic allocation weight rule.
In a specific application scenario, as shown in fig. 5, the apparatus further includes: a first creating module 34, a second creating module 35, a binding module 36;
the first creating module 34 is configured to configure corresponding mixed running services for preset website domain names respectively, and create a first corresponding relationship between the preset website domain names and the mixed running services based on Kong ingress, where the first corresponding relationship is used to find the mixed running services matched with the preset website domain names;
the second creating module 35 may be configured to create a second corresponding relationship between the running-mix service and the executable virtual machine node and between the running-mix service and the executable container node based on the preset tag matched with the preset website domain name, where the second corresponding relationship is used to search the executable virtual machine node and the executable container node that execute the traffic service under the preset website domain name based on the running-mix service;
and a binding module 36, configured to utilize the Kong Ingress Controller to bind the executable virtual machine node and the executable container node, in the form of a mixed-running service node cluster, to the load corresponding to the traffic service at the back end of the Kong gateway, so as to utilize the Kong gateway to perform traffic load balancing scheduling on the executable virtual machine node and the executable container node.
Correspondingly, in order to extract the target virtual machine node and/or target container node matching the target website domain name corresponding to the traffic service request from the back-end-bound load nodes, the extraction module 32 may be specifically configured to: determine the target tag identifier matching the target service; extract, from the back-end-bound load nodes, the mixed-running service node cluster matching the target tag identifier, where the cluster includes the executable virtual machine nodes and executable container nodes configured with the target tag identifier; and determine the executable virtual machine nodes and executable container nodes in the mixed-running service node cluster as the target virtual machine nodes and target container nodes for responding to the traffic service request.
In a specific application scenario, in order to allocate, by using a Kong gateway, a load traffic of a target service to a target virtual machine node and/or a target container node based on a dynamically adjusted preset traffic allocation weight rule, the allocation module 33 is specifically configured to allocate, based on a first preset traffic allocation weight rule, a first traffic allocation weight and a second traffic allocation weight to the target virtual machine node and the target container node, respectively; distributing the load traffic of the target service to the target virtual machine node and/or the target container node according to the first traffic distribution weight and the second traffic distribution weight; if the load flow in the target container node configured with the second flow distribution weight is judged to operate normally, increasing the second flow distribution weight based on a second preset flow distribution weight rule, and distributing the load flow to the target container node according to the increased second flow distribution weight; and sequentially executing the process of increasing the second traffic distribution weight based on the second preset traffic distribution weight rule until the load traffic of the target service is completely distributed to the target container node according to the second traffic distribution weight.
In a specific application scenario, as shown in fig. 5, the apparatus further includes: a configuration creation update module 37;
a configuration creation update module 37, operable to perform the configuration, creation, and updating of executable virtual machine nodes and/or executable container nodes within the mixed-running service node cluster by using Kubernetes.
Correspondingly, the configuration creation and update module 37 is specifically configured to: monitor, by using Kubernetes, whether there is a change task for an executable virtual machine node and/or executable container node; if the change task exists, judge whether the executable virtual machine node and/or executable container node to be changed has a mounted mixed-running service; and if the mounted mixed-running service exists, update the executable virtual machine nodes and/or executable container nodes in the mixed-running service node cluster corresponding to that mixed-running service according to the change task.
In a specific application scenario, in order to implement detection on the survival state of the Kong gateway node and the health states of the virtual machine node and the container node, as shown in fig. 5, the apparatus further includes: a first detection module 38, a second detection module 39;
a first detection module 38, operable to detect a Kong gateway node survival status with a load balancer;
and a second detection module 39, configured to perform health status detection on the backend bound virtual machine nodes and container nodes by using the Kong gateway node, so as to remove the failed virtual machine node and/or container node.
It should be noted that other corresponding descriptions of the functional units related to the load flow allocation apparatus for virtual machines and containers provided in this embodiment may refer to the corresponding descriptions in fig. 1 to fig. 2, and are not described herein again.
Based on the methods shown in fig. 1 to 2, correspondingly, the present embodiment further provides a non-volatile storage medium, on which computer readable instructions are stored, and when the computer readable instructions are executed by a processor, the method for load traffic distribution of virtual machines and containers shown in fig. 1 to 2 is implemented.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.), and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method of the embodiments of the present application.
Based on the method shown in fig. 1 to fig. 2 and the virtual device embodiments shown in fig. 4 and fig. 5, in order to achieve the above object, the present embodiment further provides a computer device, where the computer device includes a storage medium and a processor; a nonvolatile storage medium for storing a computer program; a processor for executing a computer program to implement the load flow allocation method of the virtual machine and the container as shown in fig. 1 to 2.
Optionally, the computer device may further include a user interface, a network interface, a camera, Radio Frequency (RF) circuitry, a sensor, audio circuitry, a WI-FI module, and so forth. The user interface may include a Display screen (Display), an input unit such as a keypad (Keyboard), etc., and the optional user interface may also include a USB interface, a card reader interface, etc. The network interface may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), etc.
It will be understood by those skilled in the art that the present embodiment provides a computer device structure that is not limited to the physical device, and may include more or less components, or some components in combination, or a different arrangement of components.
The nonvolatile storage medium can also comprise an operating system and a network communication module. The operating system is a program that manages the hardware and software resources of the computer device described above, supporting the operation of information handling programs and other software and/or programs. The network communication module is used for realizing communication among components in the nonvolatile storage medium and communication with other hardware and software in the information processing entity device.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus a necessary general hardware platform, and can also be implemented by hardware.
By applying the technical scheme of the present application, compared with the prior art, the Kong gateway can be bound to the back end of the load balancer to receive the traffic service requests forwarded by the load balancer; when the Kong gateway receives a traffic service request for a target service, the target virtual machine node and/or target container node matching the target website domain name corresponding to the request can be extracted from the back-end-bound load nodes. After Kong Ingress finds the target mixed-running service matching the target website domain name based on the first correspondence, the load nodes are found based on the second correspondence of that target mixed-running service and configured as the load at the back end of the Kong gateway. Furthermore, the load traffic of the target service can be distributed to the target virtual machine node and/or target container node through the Kong gateway under a dynamically adjusted preset traffic distribution weight rule, realizing intelligent distribution of the load traffic. With this technical scheme, virtual machine nodes and container nodes can be abstracted into resources, and balanced scheduling of the load is achieved by the Kong gateway; the load balancing device then only needs to detect the survival state of the Kong gateway nodes, while all health checks of the back-end services are completed by the Kong gateway. The virtual host nodes and container nodes are placed on an equal footing, and a load weight can be set independently for each node through dynamic adjustment at the Kong gateway, so that load traffic can be migrated safely from the virtual machine to the container environment without affecting online users.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario and that the blocks or flow diagrams in the figures are not necessarily required to practice the present application. Those skilled in the art will appreciate that the modules in the devices in the implementation scenario may be distributed in the devices in the implementation scenario according to the description of the implementation scenario, or may be located in one or more devices different from the present implementation scenario with corresponding changes. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above application serial numbers are for description purposes only and do not represent the superiority or inferiority of the implementation scenarios. The above disclosure is only a few specific implementation scenarios of the present application, but the present application is not limited thereto, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present application.

Claims (10)

1. A load flow distribution method of a virtual machine and a container is characterized by comprising the following steps:
binding the Kong gateway to the rear end of the load balancer to receive the traffic service request forwarded by the load balancer;
if the Kong gateway receives a traffic service request of a target service, extracting a target virtual machine node and/or a target container node matched with a target website domain name corresponding to the traffic service request from a load node bound at the back end, wherein the load node is obtained by searching, after Kong Ingress searches for the target mixed-running service matched with the target website domain name based on the first corresponding relationship, a second corresponding relationship corresponding to the target mixed-running service, and is configured to a load at the back end of the Kong gateway;
and distributing the load traffic of the target service to the target virtual machine node and/or the target container node by utilizing the Kong gateway based on a dynamically adjusted preset traffic distribution weight rule.
2. The method of claim 1, further comprising:
respectively configuring corresponding mixed running services aiming at preset website domain names, and creating a first corresponding relation between the preset website domain names and the mixed running services based on Kong ingress, wherein the first corresponding relation is used for searching the mixed running services matched with the preset website domain names;
creating a second corresponding relation between the mixed running service and an executable virtual machine node and an executable container node based on a preset label matched with the preset website domain name, wherein the second corresponding relation is used for searching the executable virtual machine node and the executable container node for executing the flow service under the preset website domain name based on the mixed running service;
and binding the executable virtual machine node and the executable container node to a load corresponding to the traffic service at the back end of the Kong gateway in the form of a mixed-running service node cluster by using a Kong Ingress Controller, so as to perform traffic load balancing scheduling on the executable virtual machine node and the executable container node by using the Kong gateway.
3. The method according to claim 1, wherein if the Kong gateway receives a traffic service request of a target service, extracting a target virtual machine node and/or a target container node that matches a target website domain name corresponding to the traffic service request from a backend-bound load node, specifically comprising:
determining a target tag identification matched with the target service;
extracting a mixed running service node cluster matched with the target label identification from load nodes bound at the back end, wherein the mixed running service node cluster comprises an executable virtual machine node and an executable container node which are configured with the target label identification;
and respectively determining an executable virtual machine node and an executable container node in the mixed running service node cluster as a target virtual machine node and a target container node for responding to the flow service execution request.
4. The method according to claim 1, wherein the allocating, by the Kong gateway, the load traffic of the target service to the target virtual machine node and/or the target container node based on a dynamically adjusted preset traffic allocation weight rule specifically comprises:
respectively configuring a first traffic distribution weight and a second traffic distribution weight for the target virtual machine node and the target container node based on a first preset traffic distribution weight rule;
distributing the load traffic of the target service to the target virtual machine node and/or the target container node according to the first traffic distribution weight and the second traffic distribution weight;
if the load flow in the target container node configured with the second flow distribution weight is judged to operate normally, increasing the second flow distribution weight based on a second preset flow distribution weight rule, and distributing the load flow to the target container node according to the increased second flow distribution weight;
and sequentially executing the process of increasing the second traffic distribution weight based on a second preset traffic distribution weight rule until all the load traffic of the target service is distributed to the target container node according to the second traffic distribution weight.
5. The method of claim 3, further comprising:
performing the configuration, creation, and updating of executable virtual machine nodes and/or executable container nodes within a mixed-running service node cluster by using Kubernetes.
6. The method according to claim 5, wherein the performing, by using Kubernetes, the update of the executable virtual machine nodes and/or the executable container nodes in the mixed-running service node cluster specifically comprises:
monitoring, by using Kubernetes, whether a change task of an executable virtual machine node and/or an executable container node exists;
if the change task exists, judging whether the executable virtual machine node and/or the executable container node to be changed has mounted mixed running service;
and if the mounted mixed running service exists, updating the executable virtual machine nodes and/or the executable container nodes in the mixed running service node cluster corresponding to the mixed running service according to the change task.
7. The method according to any one of claims 1 to 6, further comprising:
detecting a survival status of the Kong gateway node by using the load balancer;
and performing health state detection, by using the Kong gateway node, on the virtual machine nodes and the container nodes bound at the back end, so as to remove any faulty virtual machine nodes and/or container nodes.
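The two-level liveness scheme of claim 7 (the load balancer probes the Kong gateway nodes; each surviving gateway probes its bound back-end nodes) can be sketched as follows. Both probe callables are assumed stand-ins for real HTTP health checks:

```python
def route_request(gateways, backends, gateway_alive, node_healthy):
    """Two-level detection from claim 7: the load balancer first keeps only
    surviving Kong gateway nodes, then the gateway keeps only healthy
    back-end VM/container nodes, removing faulty ones from rotation.
    `gateway_alive` and `node_healthy` are assumed probe functions."""
    live_gateways = [g for g in gateways if gateway_alive(g)]    # LB-level survival check
    healthy_backends = [n for n in backends if node_healthy(n)]  # Kong-level health check
    if not live_gateways or not healthy_backends:
        return None                                              # nothing to route to
    return live_gateways[0], healthy_backends                    # forward via a live gateway

# Hypothetical probe results: one gateway down, one container node faulty.
print(route_request(["kong-a", "kong-b"],
                    ["vm-1", "c-1", "c-2"],
                    lambda g: g == "kong-b",
                    lambda n: n != "c-1"))   # -> ('kong-b', ['vm-1', 'c-2'])
```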
8. A load flow distribution apparatus for virtual machines and containers, comprising:
a receiving module, configured to bind the Kong gateway to the back end of the load balancer so as to receive traffic service requests forwarded by the load balancer;
an extraction module, configured to extract, if the Kong gateway receives a traffic service request of a target service, a target virtual machine node and/or a target container node that are matched with a target website domain name corresponding to the traffic service request from a load node bound at a back end, where the load node is obtained by obtaining a target mixed running service that is matched with the target website domain name based on a first corresponding relationship by Kong ingress, obtaining a second corresponding relationship based on the target mixed running service, and configuring the second corresponding relationship to a load at the back end of the Kong gateway;
and a distribution module, configured to distribute the load traffic of the target service to the target virtual machine node and/or the target container node by using the Kong gateway based on a dynamically adjusted preset traffic distribution weight rule.
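In practice, the distribution module's weighted split maps naturally onto Kong's Admin API, which registers back-end nodes as weighted targets of an upstream (`POST /upstreams/{name}/targets` with `target` and `weight` fields). The sketch below only builds the request payloads, with illustrative node addresses and weights; no HTTP request is sent:

```python
def kong_target_payloads(upstream, vm_nodes, container_nodes,
                         vm_weight, container_weight):
    """Build (path, body) pairs one would POST to Kong's Admin API to
    register VM and container nodes as weighted targets of one upstream,
    mirroring the distribution module of claim 8.  All node addresses and
    weights are illustrative assumptions."""
    path = f"/upstreams/{upstream}/targets"
    payloads = []
    for host in vm_nodes:                 # first traffic distribution weight
        payloads.append((path, {"target": host, "weight": vm_weight}))
    for host in container_nodes:          # second traffic distribution weight
        payloads.append((path, {"target": host, "weight": container_weight}))
    return payloads

# Hypothetical 80/20 split between one VM node and one container node.
for path, body in kong_target_payloads("shop-upstream",
                                       ["10.0.0.5:8080"], ["10.0.1.7:8080"],
                                       80, 20):
    print(path, body)
```

Adjusting the weights later (as in the progressive shift of claims 3 and 4) would mean re-posting targets with new `weight` values, since Kong balances traffic across an upstream's targets proportionally to weight.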
9. A non-volatile readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the load traffic distribution method for virtual machines and containers according to any one of claims 1 to 7.
10. A computer device comprising a non-volatile readable storage medium, a processor, and a computer program stored on the non-volatile readable storage medium and executable on the processor, wherein the processor, when executing the program, implements the load traffic distribution method for virtual machines and containers according to any one of claims 1 to 7.
CN202110778851.9A 2021-07-09 2021-07-09 Load flow distribution method and device for virtual machine and container, and computer equipment Active CN113254165B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110778851.9A CN113254165B (en) 2021-07-09 2021-07-09 Load flow distribution method and device for virtual machine and container, and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110778851.9A CN113254165B (en) 2021-07-09 2021-07-09 Load flow distribution method and device for virtual machine and container, and computer equipment

Publications (2)

Publication Number Publication Date
CN113254165A true CN113254165A (en) 2021-08-13
CN113254165B CN113254165B (en) 2021-10-08

Family

ID=77191118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110778851.9A Active CN113254165B (en) 2021-07-09 2021-07-09 Load flow distribution method and device for virtual machine and container, and computer equipment

Country Status (1)

Country Link
CN (1) CN113254165B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114938375A (en) * 2022-05-16 2022-08-23 聚好看科技股份有限公司 Container group updating equipment and container group updating method
CN115002218A (en) * 2022-05-26 2022-09-02 平安银行股份有限公司 Traffic distribution method, traffic distribution device, computer equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8958293B1 (en) * 2011-12-06 2015-02-17 Google Inc. Transparent load-balancing for cloud computing services
CN105610632A (en) * 2016-02-14 2016-05-25 华为技术有限公司 Virtual network device and related method
CN105634956A (en) * 2015-12-31 2016-06-01 华为技术有限公司 Message forwarding method, device and system
CN110532101A (en) * 2019-09-03 2019-12-03 中国联合网络通信集团有限公司 The deployment system and method for micro services cluster
US20200403922A1 (en) * 2019-06-24 2020-12-24 Vmware, Inc. Load balancing of l2vpn traffic over multiple ipsec vpn tunnels

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8958293B1 (en) * 2011-12-06 2015-02-17 Google Inc. Transparent load-balancing for cloud computing services
CN105634956A (en) * 2015-12-31 2016-06-01 华为技术有限公司 Message forwarding method, device and system
CN105610632A (en) * 2016-02-14 2016-05-25 华为技术有限公司 Virtual network device and related method
CN110896371A (en) * 2016-02-14 2020-03-20 华为技术有限公司 Virtual network equipment and related method
US20200403922A1 (en) * 2019-06-24 2020-12-24 Vmware, Inc. Load balancing of l2vpn traffic over multiple ipsec vpn tunnels
CN110532101A (en) * 2019-09-03 2019-12-03 中国联合网络通信集团有限公司 The deployment system and method for micro services cluster

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114938375A (en) * 2022-05-16 2022-08-23 聚好看科技股份有限公司 Container group updating equipment and container group updating method
CN114938375B (en) * 2022-05-16 2023-06-02 聚好看科技股份有限公司 Container group updating equipment and container group updating method
CN115002218A (en) * 2022-05-26 2022-09-02 平安银行股份有限公司 Traffic distribution method, traffic distribution device, computer equipment and storage medium
CN115002218B (en) * 2022-05-26 2023-08-04 平安银行股份有限公司 Traffic distribution method, traffic distribution device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN113254165B (en) 2021-10-08

Similar Documents

Publication Publication Date Title
US10691445B2 (en) Isolating a portion of an online computing service for testing
CN107302604B (en) Kubernetes-based PaaS platform domain name configuration method and device and electronic equipment
CN113254165B (en) Load flow distribution method and device for virtual machine and container, and computer equipment
CN106301829A (en) A kind of method and apparatus of Network dilatation
CN111182089B (en) Container cluster system, method and device for accessing big data assembly and server
CN109981493B (en) Method and device for configuring virtual machine network
US11036535B2 (en) Data storage method and apparatus
CN111432045B (en) Method, device and equipment for testing server scheduling algorithm of domain name system
CN109995552B (en) VNF service instantiation method and device
CN109151025B (en) Load balancing method and device based on URL, computer storage medium and equipment
CN111327647A (en) Method and device for providing service to outside by container and electronic equipment
US20210144515A1 (en) Systems and methods for multi-access edge computing node selection
US20190223051A1 (en) Load balancing method and related device
CN112333289A (en) Reverse proxy access method, device, electronic equipment and storage medium
CN107172214A (en) A kind of service node with load balancing finds method and device
CN108737591A (en) A kind of method and device of service configuration
US10749982B2 (en) Multiple geography service routing
WO2016095644A1 (en) High availability solution method and device for database
CN106254411A (en) For providing the system of service, server system and method
CN110830492B (en) Method and system for mutually scheduling edge applications based on CoreDNS registration service
CN108347465B (en) Method and device for selecting network data center
US20200313981A1 (en) Method and device for processing a network service instantiation request
CN114356456A (en) Service processing method, device, storage medium and electronic equipment
CN114584545A (en) Data management method, device, system, storage medium and electronic equipment
CN112889247B (en) VNF service instantiation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant