CN114024971B - Service data processing method, kubernetes cluster and medium - Google Patents

Service data processing method, kubernetes cluster and medium

Info

Publication number
CN114024971B
CN114024971B (granted publication of application CN202111226347.4A)
Authority
CN
China
Prior art keywords
service
node
service data
load
address
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111226347.4A
Other languages
Chinese (zh)
Other versions
CN114024971A (en)
Inventor
Li Ruihan (李瑞寒)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou Yunhai Information Technology Co Ltd
Original Assignee
Zhengzhou Yunhai Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou Yunhai Information Technology Co Ltd
Priority: CN202111226347.4A
Publication of application: CN114024971A
Application granted; publication of granted patent: CN114024971B
Legal status: Active


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/125: Avoiding congestion; recovering from congestion by balancing the load, e.g. traffic engineering
    • H04L 67/1004: Server selection for load balancing
    • H04L 67/1008: Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H04L 67/1023: Server selection for load balancing based on a hash applied to IP addresses or costs
    • H04L 67/1044: Group management mechanisms in peer-to-peer [P2P] networks
    • Y02D 30/50: Reducing energy consumption in wire-line communication networks, e.g. low power modes or reduced link rate

Abstract

The invention provides a service data processing method applied to a Kubernetes cluster, comprising the following steps: when a control node receives service data from a client, it determines the service network to which the service data belongs and forwards the service data to the traffic entry device corresponding to that service network; the traffic entry device determines the load node corresponding to the service data and forwards the service data to that load node, so that a container application in the load node processes the service data. Through forwarding by multiple traffic entry devices, the invention allows multiple service networks to access container applications in the same Kubernetes cluster, removes the need to redeploy the Kubernetes cluster when a service network is added, and effectively improves the utilization of cluster resources. The invention also provides a Kubernetes cluster and a computer-readable storage medium with the same beneficial effects.

Description

Service data processing method, kubernetes cluster and medium
Technical Field
The present invention relates to the field of servers, and in particular, to a service data processing method, a Kubernetes cluster, and a computer readable storage medium.
Background
A Kubernetes cluster is a high-performance cluster system capable of deploying container applications efficiently. In the related art, a container application deployed in a Kubernetes cluster can serve only one service network; when a new service network needs to use the container application, an additional Kubernetes cluster must be deployed, which easily wastes cluster resources.
Disclosure of Invention
The invention aims to provide a service data processing method, a Kubernetes cluster, and a computer-readable storage medium that allow multiple service networks to access container applications in the same Kubernetes cluster through forwarding by multiple traffic entry devices, remove the need to redeploy the Kubernetes cluster when a service network is added, and effectively improve the utilization of cluster resources.
To solve the above technical problems, the invention provides a service data processing method applied to a Kubernetes cluster, comprising the following steps:
when a control node receives service data from a client, it determines the service network to which the service data belongs and forwards the service data to the traffic entry device corresponding to that service network;
the traffic entry device determines the load node corresponding to the service data and forwards the service data to that load node, so that a container application in the load node processes the service data.
Optionally, the step in which the traffic entry device determines the load node corresponding to the service data and forwards the service data to the load node includes:
the traffic entry device using a load balancing service to find an idle load node corresponding to the service data and forwarding the service data to that idle load node.
Optionally, the load balancing service is HAProxy.
Optionally, the traffic entry device includes a plurality of working nodes, and before the control node receives the service data of the client, the method further includes:
the traffic entry device configuring a keepalived service and a virtual IP address, so that the keepalived service detects the working state of each working node in real time and maps the virtual IP address to any target working node that is working normally;
storing the mapping relationship between the virtual IP address and the target working node on the control node, and providing the virtual IP address to the target clients of the corresponding service network, so that the target clients send service data using the virtual IP address;
correspondingly, determining the service network to which the service data belongs and forwarding the service data to the traffic entry device corresponding to that service network includes:
the control node extracting the target IP address of the service data and finding the target working node corresponding to that address according to the mapping relationship;
and forwarding the service data to the target working node.
Optionally, before the traffic entry device configures the keepalived service and the virtual IP address, the method further includes:
the control node of the cluster acquiring the number of service networks;
grouping the load nodes according to that number, setting each group of load nodes as a traffic entry device, and setting the load nodes in each traffic entry device as working nodes.
The invention also provides a Kubernetes cluster, comprising a control node, a traffic entry device, and a load node, wherein:
the control node is configured to determine, when service data of a client is received, the service network to which the service data belongs, and to forward the service data to the traffic entry device corresponding to that service network;
the traffic entry device is configured to determine the load node corresponding to the service data and to forward the service data to that load node;
the load node is configured to process the service data using a container application.
Optionally,
the traffic entry device is further configured to use a load balancing service to find an idle load node corresponding to the service data and to forward the service data to that idle load node.
Optionally, the load balancing service is HAProxy.
Optionally, the traffic entry device comprises a plurality of working nodes, wherein
the traffic entry device is further configured to configure a keepalived service and a virtual IP address, so that the keepalived service detects the working state of each working node in real time and maps the virtual IP address to any target working node that is working normally; to store the mapping relationship between the virtual IP address and the target working node on the control node; and to provide the virtual IP address to the target clients of the corresponding service network, so that the target clients send service data using the virtual IP address;
correspondingly, the control node is further configured to extract the target IP address of the service data, find the target working node corresponding to that address according to the mapping relationship, and forward the service data to that working node.
The invention also provides a computer-readable storage medium storing computer-executable instructions which, when loaded and executed by a processor, implement the service data processing method described above.
The invention provides a service data processing method applied to a Kubernetes cluster, comprising the following steps: when a control node receives service data from a client, it determines the service network to which the service data belongs and forwards the service data to the traffic entry device corresponding to that service network; the traffic entry device determines the load node corresponding to the service data and forwards the service data to that load node, so that a container application in the load node processes the service data.
As can be seen, when the control node receives service data from a client, it first determines the service network from which the data came and the traffic entry device corresponding to that network, and forwards the service data to that device; the traffic entry device then forwards the service data to the corresponding load node so that the container application in the load node can process it. In other words, through forwarding by multiple traffic entry devices, the invention allows multiple service networks to access container applications in the same Kubernetes cluster, removes the need to redeploy the Kubernetes cluster when a service network is added, and effectively improves the utilization of cluster resources. The invention also provides a Kubernetes cluster and a computer-readable storage medium with the same beneficial effects.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a service data processing method according to an embodiment of the present invention;
fig. 2 is a block diagram of a Kubernetes cluster according to an embodiment of the present invention;
fig. 3 is a block diagram of another Kubernetes cluster according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
A Kubernetes cluster is a high-performance cluster system capable of deploying container applications efficiently. In the related art, a container application deployed in a Kubernetes cluster can serve only one service network; when a new service network needs to use the container application, an additional Kubernetes cluster must be deployed, which easily wastes cluster resources. In view of this, an embodiment of the invention provides a service data processing method that allows multiple service networks to access container applications in the same Kubernetes cluster through forwarding by multiple traffic entry devices, without redeploying the Kubernetes cluster when a service network is added, thereby effectively improving the utilization of cluster resources. Referring to fig. 1, fig. 1 is a flowchart of a service data processing method according to an embodiment of the present invention. The method is applied to a Kubernetes cluster and may include:
s101, when receiving service data of a client, a control node determines a service network to which the service data belongs and forwards the service data to a flow entry device corresponding to the service network.
In the embodiment of the invention, each service network is provided with a corresponding traffic entry device, which is responsible for forwarding service data to the corresponding container application, thereby giving clients access to that application. The embodiment does not limit the number of service networks and traffic entry devices; since service networks and traffic entry devices correspond one to one, their numbers are equal and can be set according to actual application requirements. In other words, when a new service network appears, only a corresponding traffic entry device needs to be added, and the Kubernetes cluster does not need to be redeployed, which effectively improves the utilization of Kubernetes cluster resources and avoids waste.
Further, the control node is the node in the Kubernetes cluster responsible for running cluster components and for overall control and management of the cluster's containers; when transmitting service data, a control node would ordinarily send it directly to the corresponding load node (i.e., the node where the container application resides). In the embodiment of the invention, after receiving the service data, the control node instead first determines the service network to which the data belongs and transmits the data to the traffic entry device corresponding to that network. Because clients of the same service network generally share the same network segment, the service network to which the service data belongs can be determined from the network segment of the source IP address in the service data. Alternatively, to simplify client access, the IP address of the traffic entry device corresponding to a service network may be provided directly to the clients of that network, so that the control node can determine the corresponding service network and traffic entry device from the target IP address in the service data.
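The patent gives no code, but the segment-based determination just described can be sketched as a lookup of the source IP against per-network CIDR ranges. The CIDR blocks and device names below are hypothetical, invented purely for illustration:

```python
import ipaddress

# Hypothetical mapping: service-network CIDR -> its traffic entry device.
SERVICE_NETWORKS = {
    ipaddress.ip_network("10.1.0.0/16"): "traffic-entry-1",
    ipaddress.ip_network("10.2.0.0/16"): "traffic-entry-2",
}

def classify_by_source_ip(source_ip):
    """Return the traffic entry device whose service network contains source_ip,
    or None if the source belongs to no known service network."""
    addr = ipaddress.ip_address(source_ip)
    for network, entry_device in SERVICE_NETWORKS.items():
        if addr in network:
            return entry_device
    return None
```

A control node following this scheme would forward a packet from 10.1.3.4 to traffic-entry-1, and could reject or drop traffic from unknown segments.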
Further, the embodiment of the invention does not limit the specific structure of the traffic entry device; the device may consist of one working node or of several. A working node receives service data, determines the load node corresponding to that data, and forwards the data to the corresponding load node by route forwarding. Because the traffic entry device is a critical link in the data path between client and container application, it needs sufficient redundancy, and so will typically consist of several working nodes. The embodiment does not limit the number of working nodes per traffic entry device: different devices may have different numbers of nodes, more nodes yield a more capable device, and the specific number can be set according to actual application requirements.
Further, the embodiment does not limit the IP address that the traffic entry device provides to clients of the target service network: it may be the IP address of a particular working node, or a virtual IP address. A virtual IP address can be mapped to any working node that is working normally; when the currently mapped working node fails, the virtual IP address is automatically switched to another healthy working node. If the working nodes run stably, the traffic entry device can use a working node's own IP address as the address through which target clients access the container application; if a working node may go down or otherwise fail, the traffic entry device should instead provide the virtual IP address to the target clients.
In the embodiment of the invention, to prevent a client from losing access to the container application when a working node goes down or otherwise fails, a virtual IP address can be configured in the traffic entry device. Specifically, a keepalived service may be deployed in the traffic entry device; keepalived is a lightweight high-availability solution for Linux. The service detects the working state of each working node in real time and maps the virtual IP address to any node that is working normally; if the node currently mapped to the virtual IP address fails, keepalived remaps the virtual IP address to another healthy working node, ensuring the high availability of the traffic entry device. Note that this only sketches how keepalived operates; for the specific operating process, refer to the related keepalived documentation. Further, once keepalived has determined the mapping between the virtual IP address and a working node, the traffic entry device can save that mapping on the control node, so that the control node can forward traffic according to it.
In one possible case, the traffic entry device comprises a plurality of working nodes, and before the control node receives the service data of the client, the method may further include:
Step 11: the traffic entry device configuring a keepalived service and a virtual IP address, so that the keepalived service detects the working state of each working node in real time and maps the virtual IP address to any target working node that is working normally;
Step 12: storing the mapping relationship between the virtual IP address and the target working node on the control node, and providing the virtual IP address to the target clients of the corresponding service network, so that the target clients send service data using the virtual IP address;
correspondingly, determining the service network to which the service data belongs and forwarding the service data to the traffic entry device corresponding to that network may include:
Step 21: the control node extracting the target IP address of the service data and finding the target working node corresponding to that address according to the mapping relationship;
Step 22: forwarding the service data to the target working node.
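Steps 21 and 22 reduce to a table lookup: the control node keeps the virtual-IP-to-working-node mapping that each traffic entry device saved, and resolves the target IP of incoming service data against it. A minimal sketch, with the addresses and node names invented for illustration:

```python
# Hypothetical mapping saved on the control node by the traffic entry devices:
# virtual IP -> currently healthy target working node.
vip_to_worker = {
    "10.1.0.100": "worker-1",
    "10.2.0.100": "worker-4",
}

def forward(service_data):
    """Steps 21-22: extract the target IP of the service data and return the
    working node the control node should forward it to."""
    target_ip = service_data["target_ip"]
    worker = vip_to_worker.get(target_ip)
    if worker is None:
        raise LookupError("no working node mapped for " + target_ip)
    return worker
```

When keepalived remaps a virtual IP to another node, the traffic entry device would update this table on the control node, so later lookups follow the failover automatically.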
Further, it should be noted that the embodiment of the invention does not limit what the working node is: it may be a separately provisioned node dedicated to forwarding service data, or it may be a load node. If each node's role must remain independent and uniform, a dedicated forwarding node can be used; if cluster resources must be saved, a load node can serve as the working node. In the embodiment of the invention, to save cluster resources and prevent the traffic entry device from occupying a large amount of server resources, load nodes are used directly as working nodes. In other words, a load node can run the container application and forward service data at the same time, which effectively improves the utilization of cluster resources.
In one possible case, before the traffic entry device configures the keepalived service and the virtual IP address, the method may further include:
Step 31: the control node of the cluster acquiring the number of service networks;
Step 32: grouping the load nodes according to that number, setting each group of load nodes as a traffic entry device, and setting the load nodes in each traffic entry device as working nodes.
It is understood that the number of load nodes in each group may differ and can be set according to actual application requirements. Likewise, either all load nodes or only those with lower load rates may be grouped, again according to actual application requirements.
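Steps 31 and 32 amount to partitioning the load nodes into one group per service network. The patent does not prescribe a grouping policy, so the round-robin split below is only one possible sketch; as noted above, real groups need not be equal in size:

```python
def group_load_nodes(load_nodes, n_networks):
    """Split load nodes into one group (one traffic entry device) per service
    network, assigning nodes round-robin. Returns a list of n_networks groups."""
    groups = [[] for _ in range(n_networks)]
    for i, node in enumerate(load_nodes):
        groups[i % n_networks].append(node)
    return groups
```

With five load nodes and two service networks this yields groups of three and two nodes, illustrating that group sizes may differ.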
S102, the traffic entry device determines the load node corresponding to the service data and forwards the service data to that load node, so that a container application in the load node processes the service data.
After receiving the service data, the traffic entry device can send it to the corresponding load node according to a preset routing relationship. The embodiment of the invention does not limit the specific way in which the traffic entry device forwards service data to the load node; refer to the related clustering technology. Furthermore, to improve the working efficiency of the load nodes and avoid overloading any single one, a load balancing service can be added in the traffic entry device, and the idle load node corresponding to the service data can be queried through that service. The embodiment does not limit the specific working process of the load balancing service (refer to the related load balancing technology), nor the specific service used, which may be any load balancing service applicable to Kubernetes. In one possible case, since HAProxy, an application proxy providing high availability and load balancing, is a common load balancing service for Kubernetes clusters, HAProxy can serve as the load balancing service in the traffic entry device. Further, if the traffic entry device includes several working nodes, one or more of them may be configured with the load balancing service and designated as load balancers, to satisfy load balancing under different load scenarios. Of course, if the working node that receives the traffic is not a load balancer, it must forward the traffic to a load balancer for load balancing.
In one possible case, the traffic entry device determining the load node corresponding to the service data and forwarding the service data to that load node may include:
Step 41: the traffic entry device using the load balancing service to find the idle load node corresponding to the service data, and forwarding the service data to that idle load node.
In one possible case, the load balancing service is HAProxy.
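The patent does not define "idle", but one common interpretation, and one of the balancing algorithms HAProxy actually offers (`balance leastconn`), is to pick the backend with the fewest active connections. The sketch below is a toy model of that policy, not HAProxy itself:

```python
def pick_idle_node(active_connections):
    """Least-connections selection: given a dict mapping load-node name to its
    current number of active connections, return the least-loaded node."""
    if not active_connections:
        raise ValueError("no load nodes available")
    return min(active_connections, key=active_connections.get)
```

In a real deployment the connection counts would come from the load balancer's own bookkeeping rather than being passed in by hand.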
Based on the above embodiment, when the control node receives service data from a client, it first determines the service network from which the data came and the traffic entry device corresponding to that network, and forwards the data to that device; because each service network has its own traffic entry device, multiple service networks can access the same Kubernetes cluster simply by adding a corresponding traffic entry device per network. The traffic entry device then forwards the service data to the corresponding load node so that the container application there can process it. In other words, through forwarding by multiple traffic entry devices, the invention allows multiple service networks to access container applications in the same Kubernetes cluster, removes the need to redeploy the Kubernetes cluster when a service network is added, and effectively improves the utilization of cluster resources.
The service data processing method is described below with reference to a specific structural block diagram. Referring to fig. 2, fig. 2 is a block diagram of a Kubernetes cluster according to an embodiment of the present invention; the method is implemented as follows:
1. Establish load balancing for multiple network segments.
If the service network is expanded to N networks, or N mutually isolated service networks exist, the working nodes must be divided into N groups, and one or more nodes in each group selected to deploy a load balancer (HAProxy 1 to HAProxy N). When service traffic arrives, each load balancer handles the traffic of its own network segment and forwards it to the corresponding workload.
2. Implement a unified traffic entry point.
The IP address used to access the service application should be fixed and should, as far as possible, not be the service IP of an individual working node; a virtual service IP address therefore needs to be established among the working nodes, so that the application can be accessed through a fixed IP address and port. If the service network is expanded to N networks, or N mutually isolated service networks exist, N distinct, fixed IP addresses must be chosen for the N service networks as the service virtual IP addresses of the respective network segments (virtual IP address 1 to virtual IP address N), and fault drift of these service virtual IP addresses is achieved by deploying N keepalived instances (keepalived 1 to keepalived N) across the N groups of working nodes.
As shown in fig. 2, the Kubernetes-based container cluster is divided into control nodes (master; control nodes 1-3 in the figure) and working nodes (working nodes 1-N in the figure). To expand the cluster with a new service network, every node of the cluster is fitted with a physical network card for the expansion network and configured with an expansion-network IP address. The load balancing nodes of the already configured service networks are then isolated, one or more of the remaining nodes are selected, the load balancing and keepalived services are deployed on them, and the virtual IP address of the expansion network is configured.
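The fault drift of a service virtual IP address described above can be modeled as a virtual IP that always rests on the first healthy node of its group. Real keepalived achieves this through VRRP elections and health checks; the class below is only a toy illustration of the resulting failover order, with invented names:

```python
class VirtualIP:
    """Toy model of keepalived fault drift: a virtual IP address follows the
    first working node of its group that is still healthy."""

    def __init__(self, vip, workers):
        self.vip = vip
        self.workers = workers                    # failover priority order
        self.health = {w: True for w in workers}  # all nodes start healthy

    def holder(self):
        """Return the node currently holding the virtual IP, or None if the
        whole group is down."""
        for w in self.workers:
            if self.health[w]:
                return w
        return None

    def fail(self, worker):
        """Simulate a health check marking a node as down; the virtual IP
        drifts to the next healthy node on the following holder() call."""
        self.health[worker] = False
```

After `fail("worker-1")`, the address reported by `holder()` moves to worker-2, mirroring the automatic switch-over the patent relies on.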
The following describes the Kubernetes cluster and computer-readable storage medium provided by the embodiments of the present invention; they correspond to the service data processing method described above, and the two descriptions may be cross-referenced.
Referring to fig. 3, fig. 3 is a block diagram of another Kubernetes cluster according to an embodiment of the present invention, where the cluster may include: a control node 301, a flow entry device 302, and a load node 303, wherein,
a control node 301, configured to determine a service network to which service data belongs when service data of a client is received, and forward the service data to a traffic entry device 302 corresponding to the service network;
a traffic ingress device 302, configured to determine a load node 303 corresponding to the traffic data, and forward the traffic data to the load node 303;
load node 303 for processing traffic data with the container application.
Optionally, the traffic entry device 302 is further configured to find an idle load node 303 corresponding to the traffic data by using the load balancing service, and forward the traffic data to the idle load node 303.
Optionally, the load balancing service is Haproxy.
Optionally, the traffic entry device 302 comprises a plurality of working nodes, wherein,
the traffic entry device 302 is further configured to configure a keepalived service and a virtual IP address, so that the keepalived service detects the working state of the working nodes in real time and maps the virtual IP address to any target working node that is working normally; to store the mapping relationship between the virtual IP address and the target working node on the control node 301; and to provide the virtual IP address to the target client of the corresponding service network, so that the target client can send service data using the virtual IP address;
correspondingly, the control node 301 is further configured to extract the target IP address of the service data, find the target working node corresponding to the target IP address according to the mapping relationship, and forward the service data to that target working node.
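The control node's lookup step can be sketched as a simple table lookup: the virtual-IP-to-worker mapping is stored on the control node, and incoming service data is routed by its target (destination) IP address. The addresses, worker names, and packet field names below are assumptions for illustration only.

```python
# Hedged sketch of the control node's routing lookup. Each service network
# has its own virtual IP (maintained by keepalived), mapped to whichever
# working node is currently healthy.
vip_to_worker = {
    "10.0.1.100": "worker-2",  # assumed VIP of service network 1
    "10.0.2.100": "worker-5",  # assumed VIP of service network 2
}

def route_service_data(packet, mapping):
    target_ip = packet["dst_ip"]      # extract the target IP of the service data
    worker = mapping.get(target_ip)   # mapping stored on the control node
    if worker is None:
        raise KeyError(f"no traffic entry device registered for {target_ip}")
    return worker                     # forward the service data to this node

worker = route_service_data({"dst_ip": "10.0.1.100"}, vip_to_worker)
```

Because keepalived remaps the virtual IP when a working node fails, only the mapping table changes; clients keep sending to the same virtual IP.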
Optionally, the control node 301 may be further configured to obtain the number of service networks, group the load nodes 303 according to that number, set each group of load nodes 303 as one traffic entry device 302, and set the load nodes 303 within each traffic entry device 302 as its working nodes.
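The grouping step can be sketched as splitting the load nodes into one group per service network. Round-robin assignment is an assumption here; the patent only requires that the nodes be grouped according to the number of service networks.

```python
# Sketch (assumed round-robin grouping) of partitioning load nodes into one
# group per service network; each group then acts as one traffic entry
# device, and its members become that device's working nodes.
def group_load_nodes(load_nodes, network_count):
    groups = [[] for _ in range(network_count)]
    for i, node in enumerate(load_nodes):
        groups[i % network_count].append(node)
    return groups

groups = group_load_nodes(["n1", "n2", "n3", "n4"], network_count=2)
```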
The embodiment of the invention also provides a computer readable storage medium, and a computer program is stored on the computer readable storage medium, and when the computer program is executed by a processor, the steps of the service data processing method of any embodiment are realized.
Since the embodiments of the computer-readable storage medium correspond to the embodiments of the service data processing method, reference may be made to the description of the method embodiments for details of the storage-medium embodiments, which are not repeated here.
In this description, the embodiments are described in a progressive manner, each embodiment focusing on its differences from the others, so that identical or similar parts among the embodiments may be cross-referenced. Since the device disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief; for the relevant details, refer to the description of the method.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative elements and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The service data processing method, the Kubernetes cluster and the computer readable storage medium provided by the invention are described in detail above. The principles and embodiments of the present invention have been described herein with reference to specific examples, the description of which is intended only to facilitate an understanding of the method of the present invention and its core ideas. It should be noted that it will be apparent to those skilled in the art that various modifications and adaptations of the invention can be made without departing from the principles of the invention and these modifications and adaptations are intended to be within the scope of the invention as defined in the following claims.

Claims (7)

1. A method for processing service data, applied to Kubernetes clusters, the method comprising:
when service data of a client is received, a control node determines the service network to which the service data belongs and forwards the service data to a traffic entry device corresponding to the service network; the number of service networks is at least two;
the traffic entry device determines a load node corresponding to the service data and forwards the service data to the load node, so that a container application in the load node processes the service data; the plurality of service networks share the container application;
wherein the traffic entry device comprises a plurality of working nodes, and before the control node receives the service data of the client, the method further comprises:
the traffic entry device configuring a keepalived service and a virtual IP address, so that the keepalived service detects the working state of the working nodes in real time and maps the virtual IP address to any target working node that works normally;
storing the mapping relationship between the virtual IP address and the target working node on the control node, and providing the virtual IP address to a target client of the corresponding service network, so that the target client can send the service data using the virtual IP address;
correspondingly, the determining the service network to which the service data belongs and forwarding the service data to the traffic entry device corresponding to the service network comprises:
the control node extracting a target IP address of the service data and searching for the target working node corresponding to the target IP address according to the mapping relationship;
forwarding the service data to the target working node;
wherein before the traffic entry device configures the keepalived service and the virtual IP address, the method further comprises:
the control node obtaining the number of the service networks;
grouping the load nodes according to the number, setting each group of load nodes as one of the traffic entry devices, and setting the load nodes in each traffic entry device as the working nodes.
2. The service data processing method according to claim 1, wherein the traffic entry device determining the load node corresponding to the service data and forwarding the service data to the load node comprises:
the traffic entry device searching, by using a load balancing service, for an idle load node corresponding to the service data, and forwarding the service data to the idle load node.
3. The service data processing method according to claim 2, wherein the load balancing service is Haproxy.
4. A Kubernetes cluster, comprising: a control node, a traffic entry device, and a load node, wherein,
the control node is configured to determine, when service data of a client is received, the service network to which the service data belongs, and to forward the service data to a traffic entry device corresponding to the service network; the number of service networks is at least two;
the traffic entry device is configured to determine a load node corresponding to the service data and forward the service data to the load node;
the load node is configured to process the service data by using a container application; the plurality of service networks share the container application;
wherein the traffic entry device comprises a plurality of working nodes, wherein,
the traffic entry device is further configured to configure a keepalived service and a virtual IP address, so that the keepalived service detects the working state of the working nodes in real time and maps the virtual IP address to any target working node that works normally; to store the mapping relationship between the virtual IP address and the target working node on the control node; and to provide the virtual IP address to a target client of the corresponding service network, so that the target client can send the service data using the virtual IP address;
correspondingly, the control node is further configured to extract a target IP address of the service data, search for the target working node corresponding to the target IP address according to the mapping relationship, and forward the service data to the target working node;
the control node is further configured to obtain the number of the service networks, group the load nodes according to the number, set each group of load nodes as one of the traffic entry devices, and set the load nodes in each traffic entry device as the working nodes.
5. The Kubernetes cluster of claim 4, wherein
the traffic entry device is further configured to search for an idle load node corresponding to the service data by using a load balancing service, and forward the service data to the idle load node.
6. The Kubernetes cluster of claim 5, wherein the load balancing service is Haproxy.
7. A computer readable storage medium having stored therein computer executable instructions which when loaded and executed by a processor implement the business data processing method of any of claims 1 to 3.
CN202111226347.4A 2021-10-21 2021-10-21 Service data processing method, kubernetes cluster and medium Active CN114024971B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111226347.4A CN114024971B (en) 2021-10-21 2021-10-21 Service data processing method, kubernetes cluster and medium


Publications (2)

Publication Number Publication Date
CN114024971A CN114024971A (en) 2022-02-08
CN114024971B (en) 2024-02-13

Family

ID=80057059

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111226347.4A Active CN114024971B (en) 2021-10-21 2021-10-21 Service data processing method, kubernetes cluster and medium

Country Status (1)

Country Link
CN (1) CN114024971B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114745185A (en) * 2022-04-18 2022-07-12 阿里巴巴(中国)有限公司 Cluster access method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9444744B1 (en) * 2015-04-04 2016-09-13 Cisco Technology, Inc. Line-rate selective load balancing of permitted network traffic
CN110209492A (en) * 2019-03-21 2019-09-06 腾讯科技(深圳)有限公司 A kind of data processing method and device
CN111638957A (en) * 2020-06-01 2020-09-08 山东汇贸电子口岸有限公司 Method for realizing cluster sharing type public cloud load balance
CN111935312A (en) * 2020-09-21 2020-11-13 深圳蜂巢互联(南京)科技研究院有限公司 Industrial Internet container cloud platform and flow access control method thereof
CN112445623A (en) * 2020-12-14 2021-03-05 招商局金融科技有限公司 Multi-cluster management method and device, electronic equipment and storage medium
CN112492022A (en) * 2020-11-25 2021-03-12 上海中通吉网络技术有限公司 Cluster, method, system and storage medium for improving database availability
CN112905305A (en) * 2021-03-03 2021-06-04 国网电力科学研究院有限公司 VPP-based cluster type virtualized data forwarding method, device and system


Also Published As

Publication number Publication date
CN114024971A (en) 2022-02-08

Similar Documents

Publication Publication Date Title
EP3367638B1 (en) Load balancing method, device and system
CN111464592B (en) Load balancing method, device, equipment and storage medium based on micro-service
CN110113441B (en) Computer equipment, system and method for realizing load balance
CN105993161B (en) Element, method, system and computer readable storage device for resolving an address
US11570239B2 (en) Distributed resilient load-balancing for multipath transport protocols
CN111641719B (en) Intranet type load balancing implementation method based on Openstack and storage medium
CN104579732A (en) Method, device and system for managing virtualized network function network elements
EP3353952A1 (en) Managing groups of servers
CN113572831B (en) Communication method, computer equipment and medium between Kubernetes clusters
CN112042170B (en) DHCP implementation on nodes for virtual machines
CN106713378B (en) Method and system for providing service by multiple application servers
CN109688006B (en) High-performance weblog message distribution method supporting dynamic detection of target cluster
CN111193773A (en) Load balancing method, device, equipment and storage medium
CN112187958A (en) Method and device for registering, discovering and forwarding microservice
CN107689878A (en) TCP length connection SiteServer LBSs based on name scheduling
CN115296848B (en) Multi-local area network environment-based fort system and fort access method
CN111182022A (en) Data transmission method and device, storage medium and electronic device
CN111970362A (en) Vehicle networking gateway clustering method and system based on LVS
CN114024971B (en) Service data processing method, kubernetes cluster and medium
CN115086330A (en) Cross-cluster load balancing system
CN112491984B (en) Container editing engine cluster management system based on virtual network bridge
CN112968965B (en) Metadata service method, server and storage medium for NFV network node
US6631421B1 (en) Recursive partitioning of networks
CN113839862A (en) Method, system, terminal and storage medium for synchronizing ARP information between MCLAG neighbors
CN112655185A (en) Apparatus, method and storage medium for service distribution in software defined network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant