CN115834708A - Load balancing method, device, equipment and computer readable storage medium


Info

Publication number
CN115834708A
Authority
CN
China
Prior art keywords
service
grid
information
grid agent
mapping relation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211476539.5A
Other languages
Chinese (zh)
Inventor
王绍坤
黄明亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yusur Technology Co ltd
Original Assignee
Yusur Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yusur Technology Co ltd
Priority to CN202211476539.5A
Publication of CN115834708A
Legal status: Pending

Landscapes

  • Computer And Data Communications (AREA)

Abstract

The present disclosure relates to a load balancing method, apparatus, device, and computer-readable storage medium. The method includes: acquiring service information of each application service on a cloud server node; determining a mapping relationship between each service and its corresponding service grid agent instance according to the service information; and sending received traffic packets to the corresponding service grid agent instances according to the mapping relationship, the traffic packets being processed by those instances. Because the mapping relationship between services and their service grid agent instances is obtained from information provided by the cloud-native grid control plane, received traffic packets are directed in an orderly manner to different grid instances, load balancing is achieved across the service grid agent instances, traffic overload and idle resources on individual instances are effectively avoided, and service stability is improved.

Description

Load balancing method, device, equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a load balancing method, apparatus, device, and computer-readable storage medium.
Background
A service mesh (Service Mesh) is an infrastructure layer in cloud services that focuses on the reliable delivery of service requests among microservices and, on that basis, provides traffic-control-layer operations such as service discovery, load balancing, request routing, and rule configuration.
At present, a cloud-native service grid is typically formed by multi-instance agents on a DPU together with the application services on the cloud server node: multiple service grid agent instances are deployed on the DPU and fully connected to all services on the cloud server node, so that the concurrent traffic of all services can be carried. However, because services and traffic are distributed in an unordered manner, load balance across the service grid agent instances cannot be guaranteed, and service experience suffers.
Disclosure of Invention
In order to solve this technical problem, the present disclosure provides a load balancing method, apparatus, device, and computer-readable storage medium, so as to ensure load balancing across service grid agent instances and improve service stability.
In a first aspect, the disclosed embodiments provide a load balancing method applied to a data processor including a plurality of containerized service grid agent instances, the method including:
acquiring service information of each application service on a cloud server node;
determining a mapping relationship between each service and its corresponding service grid agent instance according to the service information;
and sending a received traffic packet to the corresponding service grid agent instance according to the mapping relationship, the traffic packet being processed by that service grid agent instance.
In some embodiments, the data processor further comprises a session planner and an ingress direction traffic navigator;
after the service information of each application service on the cloud server node is acquired, the method further comprises the following steps:
registering information of the plurality of containerized service grid agent instances in the session planner.
In some embodiments, the service information includes domain information of the service and/or five-tuple information of the service.
In some embodiments, determining a mapping relationship between the service and its corresponding service grid proxy instance based on the service information comprises:
according to the service information, the session planner determines the mapping relationship between each application service on the server node and the service grid agent instances, and generates a mapping relationship table;
and sending the mapping relationship table to the corresponding service grid agent instances and the ingress direction traffic navigator.
In some embodiments, after sending the mapping relationship table to the corresponding service grid proxy instance, the method further comprises:
and the service grid agent instance establishes a service link with the corresponding service on the cloud server node according to the mapping relation table.
In some embodiments, sending the received traffic packet to the corresponding service grid agent instance according to the mapping relationship includes:
according to the mapping relationship table, the ingress direction traffic navigator identifies a received traffic packet and determines the service grid agent instance corresponding to the traffic packet;
and sending the traffic packet to that service grid agent instance, the traffic packet being processed by that service grid agent instance.
In some embodiments, the received traffic packet is sent to a corresponding service grid agent instance according to the mapping relationship, and after the service grid agent instance processes the traffic packet, the method further includes:
and accelerating the processed flow message and then sending out the flow message.
In a second aspect, an embodiment of the present disclosure provides a load balancing apparatus, including:
the acquisition module is used for acquiring service information of each application service on the cloud server node;
the determining module is used for determining a mapping relationship between each service and its corresponding service grid agent instance according to the service information;
and the sending module is used for sending a received traffic packet to the corresponding service grid agent instance according to the mapping relationship, the traffic packet being processed by that service grid agent instance.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of the first aspect.
In a fourth aspect, the present disclosure provides a computer-readable storage medium, on which a computer program is stored, the computer program being executed by a processor to implement the method of the first aspect.
In a fifth aspect, the disclosed embodiments also provide a computer program product, which includes a computer program or instructions that, when executed by a processor, implement the load balancing method described above.
According to the load balancing method, apparatus, device, and computer-readable storage medium provided by the embodiments of the present disclosure, the mapping relationship between each service and its corresponding service grid agent instance is obtained from information provided by the cloud-native grid control plane, and received traffic packets are directed in an orderly manner to different grid instances, so that load balancing is achieved across the plurality of service grid agent instances, traffic overload and idle resources on individual service grid agent instances are effectively avoided, and service stability is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; it is obvious to those skilled in the art that other drawings can be derived from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a cloud-native services grid;
fig. 2 is a flowchart of a load balancing method provided in an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of an application scenario provided by the embodiment of the present disclosure;
fig. 4 is a flowchart of a load balancing method according to another embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a load balancing apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, aspects of the present disclosure will be further described below. It should be noted that the embodiments and features of the embodiments of the present disclosure may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced in other ways than those described herein; it is to be understood that the embodiments disclosed in the specification are only a few embodiments of the present disclosure, and not all embodiments.
A data processing unit (DPU), also known as a dedicated data processor, is a new generation of data-centric, I/O-intensive computing chip that uses software-defined technology routes to support virtualization of the infrastructure resource layer. It offers many advantages, such as improving computing system efficiency, reducing the total cost of ownership of the overall system, improving data processing efficiency, and reducing the performance loss of other computing chips. A service mesh (Service Mesh) is an infrastructure layer in cloud services that focuses on the reliable delivery of service requests among microservices and, on that basis, provides traffic-control-layer operations such as service discovery, load balancing, request routing, and rule configuration.
FIG. 1 is a schematic diagram of a cloud-native services grid. At present, a cloud-native service grid as shown in fig. 1 is typically formed by multi-instance agents on a DPU together with the application services on a cloud server node: multiple service grid agent instances are deployed on the DPU and full connections are established with all services on the cloud server node (that is, a connection is established between every service and every grid agent instance), to ensure that the concurrent traffic of all services can be carried. In this manner, however, the distribution of services and traffic is unordered, so balanced traffic load across the agent instances cannot be guaranteed. With an unbalanced load, some agent instances carry too much traffic, which increases service latency or jitter amplitude and results in poor service experience or an inability to provide stable service, while other agent instances carry little traffic and provide redundant service capacity, so memory and processor resources on the DPU sit idle or are wasted.
To address this problem, embodiments of the present disclosure provide a load balancing method, which is described below with reference to specific embodiments.
Fig. 2 is a flowchart of a load balancing method according to an embodiment of the present disclosure. The method can be applied to the application scenario shown in fig. 3, which includes a cloud server node and a DPU; the DPU runs on the cloud server node and provides it with a high-bandwidth, low-latency heterogeneous network computing acceleration engine, and an operating system is deployed on the SoC (system on chip) of the DPU. A session planner (Session Scheduler) and an ingress direction traffic navigator are deployed on the DPU; in a possible implementation, both the session planner and the ingress direction traffic navigator are implemented based on container technology.
The following describes the load balancing method shown in fig. 2 with reference to the application scenario shown in fig. 3; the method includes the following specific steps:
s201, acquiring service information of each application service on the cloud server node.
Kubernetes, k8s or "kube" for short, is an open-source Linux container automation operation and maintenance platform that eliminates many of the manual operations involved in deploying and scaling containerized applications. If multiple hosts are combined into a cluster to run Linux containers, Kubernetes can manage the cluster simply and efficiently. A service grid control plane Agent (such as an Istio control plane Agent) is deployed on each cloud server node based on Kubernetes, and is used to synchronize with the service grid control plane and to acquire the full set of service application node information, CRD configuration information, and service grid configuration information managed by Kubernetes.
The session planner acquires the dynamic information and user-defined (CRD, Custom Resource Definition) information of each application service from the grid control plane Agent on the cloud server node. Optionally, the service information includes domain information of the service and/or five-tuple information of the service.
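By way of a minimal, non-limiting sketch, the service information described above could be modeled as follows; the Go types and field names are assumptions introduced for illustration only, not an API defined by this disclosure:

```go
// Illustrative sketch only: hypothetical Go types for the service
// information the session planner collects from the grid control
// plane Agent. All names are assumptions.
package meshlb

// FiveTuple identifies a service flow by the classic five-tuple.
type FiveTuple struct {
	SrcIP, DstIP     string
	SrcPort, DstPort uint16
	Protocol         string // e.g. "TCP" or "UDP"
}

// ServiceInfo is a per-application-service record combining dynamic
// information with user-defined (CRD) configuration.
type ServiceInfo struct {
	Name      string
	Domain    string            // domain information of the service
	Endpoints []FiveTuple       // five-tuple information of the service
	CRDConfig map[string]string // user-defined (CRD) configuration
}
```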
S202, determining a mapping relationship between each service and its corresponding service grid agent instance according to the service information.
According to the domain information or the five-tuple information, the session planner can assign, to a given service, one or more service grid agent instances that provide the service grid service for it, forming a one-to-one or one-to-many mapping relationship between each service on the cloud server node and the grid agent instance(s) on the DPU that serve it.
S203, sending a received traffic packet to the corresponding service grid agent instance according to the mapping relationship, the traffic packet being processed by that service grid agent instance.
The ingress direction traffic navigator identifies the application service required by a received traffic packet and, according to the one-to-one or one-to-many mapping relationship between services on the cloud server node and the grid agent instances providing the service grid service on the DPU, sends the packet to the service grid agent instance corresponding to the required application service; that instance then serves the request, forwards the traffic, and provides the grid services.
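A minimal sketch of this dispatch step is given below; keying the table by destination address and selecting among mapped instances round-robin are assumptions of the sketch, since the disclosure only requires that each packet reach an instance mapped to its service:

```go
// Illustrative dispatch sketch: look up the grid agent instances
// mapped to a packet's destination service and pick one. The
// "ip:port" key and round-robin selection are assumptions.
package meshlb

import "sync/atomic"

// Instance describes one containerized service grid agent instance.
type Instance struct {
	ID   string
	Addr string
}

// Navigator is a minimal stand-in for the ingress direction traffic
// navigator holding the mapping relationship table.
type Navigator struct {
	table   map[string][]*Instance // destination "ip:port" -> instances
	counter atomic.Uint64
}

// Route returns the instance that should process the packet, or nil
// if the destination is absent from the mapping relationship table.
func (n *Navigator) Route(dstIPPort string) *Instance {
	insts := n.table[dstIPPort]
	if len(insts) == 0 {
		return nil
	}
	return insts[n.counter.Add(1)%uint64(len(insts))]
}
```

The same structure naturally supports both one-to-one and one-to-many mappings, since each key can carry one or several instances.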
In this embodiment, service information of each application service on the cloud server node is acquired; a mapping relationship between each service and its corresponding service grid agent instance is determined according to the service information; and received traffic packets are sent to the corresponding service grid agent instances according to the mapping relationship and processed there. Because the mapping relationship between services and their service grid agent instances is obtained from information provided by the cloud-native grid control plane, received traffic packets are directed in an orderly manner to different grid instances, load balancing is achieved across the plurality of service grid agent instances, traffic overload and idle resources on individual service grid agent instances are effectively avoided, and service stability is improved.
Fig. 4 is a flowchart of a load balancing method according to another embodiment of the present disclosure. As shown in fig. 4, the method includes the following steps:
s401, acquiring service information of each application service on the cloud server node.
A service grid control plane Agent (such as an Istio control plane Agent) is deployed on each cloud server node based on Kubernetes, and is used to synchronize with the service grid control plane and to acquire the full set of service application node information, CRD configuration information, and service grid configuration information managed by Kubernetes. In a possible implementation, application service containers have already been deployed on the cloud server node based on Kubernetes, together with some implementation of the service grid control plane (such as Istio or Linkerd), in order to synchronize with the service grid control plane and obtain this information.
The session planner acquires the dynamic information and the CRD configuration information of each application service from the grid control plane Agent on the cloud server node. Optionally, the service information includes domain information of the service and/or five-tuple information of the service.
S402, registering the information of the containerized service grid agent instances into the session planner.
The plurality of service grid agent instances deployed on the DPU SoC based on Docker Compose each have their own configuration information, such as an ID and an IP address, used to distinguish the different service grid agent instances. Before the service grid service is provided, the information of each grid agent instance needs to be registered with the session planner, so that the session planner can assign a corresponding service grid agent instance to each service according to the information of each service on the cloud server node.
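A minimal sketch of such registration, assuming an in-memory registry inside the session planner (the registry and method names are assumptions for illustration):

```go
// Illustrative registration sketch: the session planner records each
// containerized grid agent instance (ID, IP address) so that it can
// later assign instances to services.
package meshlb

import "sync"

// SessionPlanner is a minimal stand-in for the session planner.
type SessionPlanner struct {
	mu        sync.Mutex
	instances map[string]*Instance // instance ID -> instance
}

// Register stores a service grid agent instance's information.
func (p *SessionPlanner) Register(inst *Instance) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.instances == nil {
		p.instances = make(map[string]*Instance)
	}
	p.instances[inst.ID] = inst
}
```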
In some embodiments, a corresponding number of service grid agent instances may be deployed on the DPU SoC based on Docker Compose according to the deployment scale of the application services on the cloud server node; the specific deployment manner is not limited in the embodiments of the present disclosure.
S403, according to the service information, the session planner determines the mapping relationship between each application service on the server node and the service grid agent instances, and generates a mapping relationship table.
According to the domain information or the five-tuple information, the session planner can assign, to a given service, one or more service grid agent instances that provide the service grid service for it, form a one-to-one or one-to-many mapping relationship between each service on the cloud server node and the grid agent instance(s) on the DPU that serve it, and generate the corresponding mapping relationship table.
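The following non-limiting sketch shows one way such a one-to-many mapping relationship table could be generated; the modulo spreading policy is an assumption for illustration, as the disclosure does not fix an assignment policy:

```go
// Illustrative sketch of building the mapping relationship table.
// Spreading consecutive services across the instance list by modulo
// assignment is an assumed policy, chosen so that no single instance
// concentrates all of the traffic.
package meshlb

// BuildMappingTable maps each service name to `perService` grid
// agent instances drawn from the registered instance list.
func BuildMappingTable(services []ServiceInfo, insts []*Instance, perService int) map[string][]*Instance {
	table := make(map[string][]*Instance, len(services))
	if len(insts) == 0 || perService <= 0 {
		return table
	}
	if perService > len(insts) {
		perService = len(insts)
	}
	for i, svc := range services {
		for j := 0; j < perService; j++ {
			table[svc.Name] = append(table[svc.Name], insts[(i+j)%len(insts)])
		}
	}
	return table
}
```

A consistent-hashing scheme would serve equally well here; the only property the method relies on is a deterministic service-to-instance mapping that both the session planner and the ingress direction traffic navigator share.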
S404, sending the mapping relationship table to the corresponding service grid agent instances and the ingress direction traffic navigator.
Meanwhile, the session planner issues to each service grid agent instance the configuration information for the grid services that the instance provides.
S405, the service grid agent instance establishes service links with the corresponding services on the cloud server node according to the mapping relationship table.
After receiving the mapping relationship table, the service grid agent instance establishes service links with the corresponding services on the corresponding cloud server node according to the table and the configuration information for the grid services it provides. This is equivalent to "distributing" the services on the cloud server node across different service grid agent instances, which provide traffic forwarding and grid services for service requests.
S406, according to the mapping relationship table, the ingress direction traffic navigator identifies a received traffic packet and determines the service grid agent instance corresponding to the traffic packet.
S407, sending the traffic packet to that service grid agent instance, the traffic packet being processed by that service grid agent instance.
S408, accelerating the processed traffic packet and then sending it out.
The DPU can offload subsequent traffic forwarding to hardware for acceleration, which significantly increases operating speed compared with forwarding in software. The processed traffic packet is accelerated by the DPU and then sent to the target node, completing the servicing of that packet.
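Conceptually, the interaction between the software path and hardware offload might look like the following sketch; the offload-table interface is entirely hypothetical, since the actual interface is DPU-specific and not defined by this disclosure:

```go
// Illustrative fast-path sketch: once the software path has resolved
// a flow, install it into a purely hypothetical hardware offload
// table so that subsequent packets of the flow bypass software
// forwarding. No real DPU SDK API is implied.
package meshlb

// OffloadTable abstracts a hypothetical hardware flow table.
type OffloadTable interface {
	Install(flow FiveTuple, next *Instance) error
	Lookup(flow FiveTuple) (*Instance, bool)
}

// Forward returns the instance handling the flow, preferring the
// hardware table and falling back to the software mapping lookup.
func Forward(t OffloadTable, flow FiveTuple, nav *Navigator, dst string) *Instance {
	if inst, ok := t.Lookup(flow); ok {
		return inst // hardware hit: flow already offloaded
	}
	inst := nav.Route(dst) // software path via the mapping table
	if inst != nil {
		_ = t.Install(flow, inst) // best-effort hardware offload
	}
	return inst
}
```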
In this embodiment, service information of each application service on the cloud server node is acquired; the information of the plurality of containerized service grid agent instances is registered with the session planner; according to the service information, the session planner determines the mapping relationship between each application service on the server node and the service grid agent instances and generates a mapping relationship table; the mapping relationship table is sent to the corresponding service grid agent instances and the ingress direction traffic navigator; each service grid agent instance establishes service links with the corresponding services on the cloud server node according to the mapping relationship table; according to the table, the ingress direction traffic navigator identifies received traffic packets and determines the service grid agent instance corresponding to each packet; each packet is sent to that instance and processed by it; and the processed packet is accelerated and then sent out. Received traffic packets are thus directed in an orderly manner to different grid instances, load balancing is achieved across the plurality of service grid agent instances, traffic overload and idle resources on individual service grid agent instances are effectively avoided, and service stability is improved.
Fig. 5 is a schematic structural diagram of a load balancing apparatus according to an embodiment of the present disclosure. The load balancing apparatus may be the data processor described in the above embodiments, or a component or assembly in the data processor. The load balancing apparatus provided in the embodiment of the present disclosure can execute the processing procedure provided in the embodiments of the load balancing method. As shown in fig. 5, the load balancing apparatus 50 includes: an acquisition module 51, a determination module 52, and a sending module 53. The acquisition module 51 is configured to acquire service information of each application service on a cloud server node; the determination module 52 is configured to determine, according to the service information, a mapping relationship between each service and its corresponding service grid agent instance; the sending module 53 is configured to send a received traffic packet to the corresponding service grid agent instance according to the mapping relationship, the traffic packet being processed by that service grid agent instance.
Optionally, the data processor further includes a session planner and an ingress direction traffic navigator, and the load balancing apparatus 50 further includes a registration module 54, configured to register the information of the plurality of containerized service grid agent instances with the session planner.
Optionally, the service information includes domain information of the service and/or five-tuple information of the service.
Optionally, the determination module 52 includes a generating unit 521 and a sending unit 522; the generating unit 521 is configured to have the session planner determine, according to the service information, the mapping relationship between each application service on the server node and the service grid agent instances, and generate a mapping relationship table; the sending unit 522 is configured to send the mapping relationship table to the corresponding service grid agent instances and the ingress direction traffic navigator.
Optionally, the determination module 52 further includes a connecting unit 523, configured to establish a service link with the corresponding service on the cloud server node according to the mapping relationship table.
Optionally, the sending unit 522 is further configured to have the ingress direction traffic navigator identify a received traffic packet according to the mapping relationship table and determine the service grid agent instance corresponding to the traffic packet, and to send the traffic packet to that service grid agent instance, which processes it.
Optionally, the load balancing apparatus 50 further includes an acceleration module 55, configured to accelerate the processed traffic packet and then send it out.
The load balancing apparatus in the embodiment shown in fig. 5 may be used to implement the technical solution of the above method embodiment, and the implementation principle and the technical effect are similar, which are not described herein again.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device may be the device described in the above embodiments. The electronic device provided in the embodiment of the present disclosure can execute the processing procedure provided in the embodiments of the load balancing method. As shown in fig. 6, the electronic device 60 includes: a memory 61, a processor 62, a computer program, and a communication interface 63; the computer program is stored in the memory 61 and configured to be executed by the processor 62 to implement the load balancing method described above.
In addition, the embodiment of the present disclosure also provides a computer readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the load balancing method described in the foregoing embodiment.
Furthermore, the embodiments of the present disclosure also provide a computer program product, which includes a computer program or instructions that, when executed by a processor, implement the load balancing method described above.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description is merely illustrative of the preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to the particular combination of features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features with similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The previous description is only for the purpose of describing particular embodiments of the present disclosure, so as to enable those skilled in the art to understand or implement the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of load balancing, applied to a data processor comprising a plurality of containerized service grid proxy instances, the method comprising:
acquiring service information of each application service on a cloud server node;
determining a mapping relationship between each service and its corresponding service grid agent instance according to the service information;
and sending a received traffic packet to the corresponding service grid agent instance according to the mapping relationship, the traffic packet being processed by that service grid agent instance.
2. The method of claim 1, wherein the data processor further comprises a session planner and an ingress direction traffic navigator;
after the service information of each application service on the cloud server node is acquired, the method further comprises the following steps:
registering information of the plurality of containerized service grid agent instances in the session planner.
3. The method according to claim 1, wherein the service information comprises domain information of the service and/or five-tuple information of the service.
4. The method of claim 2, wherein determining a mapping relationship between the service and its corresponding service grid proxy instance based on the service information comprises:
according to the service information, the session planner determines the mapping relationship between each application service on the server node and the service grid agent instances, and generates a mapping relationship table;
and sending the mapping relationship table to the corresponding service grid agent instances and the ingress direction traffic navigator.
5. The method of claim 4, wherein after sending the mapping relationship table to the corresponding service grid proxy instance, the method further comprises:
the service grid agent instance establishes a service link with the corresponding service on the cloud server node according to the mapping relationship table.
6. The method of claim 4, wherein sending the received traffic packets to the corresponding service grid agent instances according to the mapping relationship comprises:
according to the mapping relationship table, the ingress direction traffic navigator identifies a received traffic packet and determines the service grid agent instance corresponding to the traffic packet;
and sending the traffic packet to that service grid agent instance, the traffic packet being processed by that service grid agent instance.
7. The method of claim 1, wherein the received traffic packet is sent to the corresponding service grid proxy instance according to the mapping relationship, and after the traffic packet is processed by the service grid proxy instance, the method further comprises:
accelerating the processed traffic packet and then sending it out.
8. A load balancing apparatus, comprising:
the acquisition module is used for acquiring service information of each application service on the cloud server node;
the determining module is used for determining a mapping relationship between each service and its corresponding service grid agent instance according to the service information;
and the sending module is used for sending a received traffic packet to the corresponding service grid agent instance according to the mapping relationship, the traffic packet being processed by that service grid agent instance.
9. An electronic device, comprising:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN202211476539.5A 2022-11-23 2022-11-23 Load balancing method, device, equipment and computer readable storage medium Pending CN115834708A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211476539.5A CN115834708A (en) 2022-11-23 2022-11-23 Load balancing method, device, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211476539.5A CN115834708A (en) 2022-11-23 2022-11-23 Load balancing method, device, equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN115834708A (en) 2023-03-21

Family

ID=85530757

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211476539.5A Pending CN115834708A (en) 2022-11-23 2022-11-23 Load balancing method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN115834708A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115883655A (en) * 2022-12-07 2023-03-31 中科驭数(北京)科技有限公司 Service request processing method and device, electronic equipment and storage medium
CN117061338A (en) * 2023-08-16 2023-11-14 中科驭数(北京)科技有限公司 Service grid data processing method, device and system based on multiple network cards
CN117170816A (en) * 2023-09-19 2023-12-05 中科驭数(北京)科技有限公司 DPU-based containerized data acquisition method, system and deployment method
CN117061338B (en) * 2023-08-16 2024-06-07 中科驭数(北京)科技有限公司 Service grid data processing method, device and system based on multiple network cards

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114244898A (en) * 2021-11-16 2022-03-25 阿里巴巴(中国)有限公司 Service grid-based workload preheating method and device
CN114637549A (en) * 2022-02-22 2022-06-17 阿里巴巴(中国)有限公司 Data processing method, system and storage medium for service grid-based application
CN114615268A (en) * 2022-03-28 2022-06-10 阿里巴巴(中国)有限公司 Service network, monitoring node, container node and equipment based on Kubernetes cluster
CN114726863A (en) * 2022-04-27 2022-07-08 阿里云计算有限公司 Method, device, system and storage medium for load balancing

Similar Documents

Publication Publication Date Title
WO2020228469A1 (en) Method, apparatus and system for selecting mobile edge computing node
WO2020228505A1 (en) Method, device, and system for selecting mobile edge computing node
EP3391210B1 (en) Scalable tenant networks
CN109561171B (en) Configuration method and device of virtual private cloud service
US9749145B2 (en) Interoperability for distributed overlay virtual environment
EP2907028B1 (en) Virtual machine multicast/broadcast in virtual network
US9880870B1 (en) Live migration of virtual machines using packet duplication
EP2499787B1 (en) Smart client routing
US11928514B2 (en) Systems and methods providing serverless DNS integration
US11095716B2 (en) Data replication for a virtual networking system
CN108141469B (en) Data plane manipulation in a load balancer
WO2023030417A1 (en) Packet processing method and device, storage medium, and computer program product
JP2022532731A (en) Avoiding congestion in slice-based networks
Liu et al. CFN-dyncast: Load Balancing the Edges via the Network
CN115834708A (en) Load balancing method, device, equipment and computer readable storage medium
CN112968965A (en) Metadata service method, server and storage medium for NFV network node
CN109067573B (en) Traffic scheduling method and device
CN113783963B (en) Data transmission method, server node, gateway equipment and network system
CN116232884A (en) Proxy instance management method, device, electronic equipment and storage medium
CN114024971A (en) Service data processing method, Kubernetes cluster and medium
US11108652B2 (en) Server assisted network discovery (SAND)
CN115883655B (en) Service request processing method and device, electronic equipment and storage medium
Byun et al. A real-time message delivery method of publish/subscribe model in distributed cloud environment
CN115883655A (en) Service request processing method and device, electronic equipment and storage medium
Teivo Evaluation of low latency communication methods in a Kubernetes cluster

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination