CN114024971A - Service data processing method, Kubernetes cluster and medium - Google Patents

Service data processing method, Kubernetes cluster and medium

Info

Publication number
CN114024971A
CN114024971A (application CN202111226347.4A)
Authority
CN
China
Prior art keywords
service
service data
node
load
address
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111226347.4A
Other languages
Chinese (zh)
Other versions
CN114024971B (en)
Inventor
李瑞寒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou Yunhai Information Technology Co Ltd
Original Assignee
Zhengzhou Yunhai Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou Yunhai Information Technology Co Ltd
Priority to CN202111226347.4A
Publication of CN114024971A
Application granted
Publication of CN114024971B
Current legal status: Active

Classifications

    • H04L 47/125: Traffic control in data switching networks; avoiding or recovering from congestion by balancing the load, e.g. traffic engineering
    • H04L 67/1004: Protocols for distributed applications accessing replicated servers; server selection for load balancing
    • H04L 67/1008: Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H04L 67/1023: Server selection for load balancing based on a hash applied to IP addresses or costs
    • H04L 67/1044: Peer-to-peer [P2P] networks; group management mechanisms
    • Y02D 30/50: Reducing energy consumption in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a service data processing method applied to a Kubernetes cluster, comprising the following steps: when receiving service data from a client, a control node determines the service network to which the service data belongs and forwards the service data to the traffic ingress device corresponding to that service network; the traffic ingress device then determines the load node corresponding to the service data and forwards the service data to that load node, so that the container application on the load node processes the service data. By forwarding through multiple traffic ingress devices, the invention allows multiple service networks to access the container applications in the same Kubernetes cluster, and the cluster does not need to be redeployed when a new service network is added, which effectively improves the utilization of cluster resources. The invention also provides a Kubernetes cluster and a computer-readable storage medium with the same beneficial effects.

Description

Service data processing method, Kubernetes cluster and medium
Technical Field
The invention relates to the field of servers, in particular to a service data processing method, a Kubernetes cluster and a computer readable storage medium.
Background
A Kubernetes cluster is a high-performance cluster system that can deploy container applications efficiently. In the related art, the container applications deployed in a Kubernetes cluster can only serve a single service network; when a new service network also needs to use those container applications, another Kubernetes cluster has to be deployed, which easily wastes cluster resources.
Disclosure of Invention
The invention aims to provide a service data processing method, a Kubernetes cluster, and a computer-readable storage medium that allow multiple service networks to access the container applications in the same Kubernetes cluster through forwarding by multiple traffic ingress devices, avoid redeploying the Kubernetes cluster when a new service network is added, and thereby effectively improve the utilization of cluster resources.
In order to solve the above technical problem, the present invention provides a service data processing method, which is applied to a Kubernetes cluster, and the method includes:
when receiving service data from a client, a control node determines the service network to which the service data belongs and forwards the service data to the traffic ingress device corresponding to that service network;
and the traffic ingress device determines the load node corresponding to the service data and forwards the service data to that load node, so that a container application in the load node processes the service data.
Optionally, the determining, by the traffic ingress device, a load node corresponding to the service data, and forwarding the service data to the load node includes:
and the traffic ingress device searches for an idle load node corresponding to the service data by using a load balancing service and forwards the service data to the idle load node.
Optionally, the load balancing service is HAProxy.
Optionally, the traffic ingress device includes a plurality of working nodes, and before the control node receives the service data of the client, the method further includes:
configuring, by the traffic ingress device, a Keepalived service and a virtual IP address, so that the Keepalived service detects the working state of the working nodes in real time and maps the virtual IP address to any normally working target working node;
storing the mapping relation between the virtual IP address and the target working node to the control node, and providing the virtual IP address to a target client of the corresponding service network, so that the target client sends the service data using the virtual IP address;
correspondingly, the determining the service network to which the service data belongs and forwarding the service data to the traffic ingress device corresponding to the service network includes:
the control node extracts the target IP address of the service data and searches for the target working node corresponding to the target IP address according to the mapping relation;
and forwarding the service data to the target working node.
Optionally, before the traffic ingress device configures the Keepalived service and the virtual IP address, the method further includes:
acquiring, by the cluster control node, the number of service networks;
and grouping the load nodes according to that number, setting each group of load nodes as one traffic ingress device, and setting the load nodes in each traffic ingress device as working nodes.
The invention also provides a Kubernetes cluster, comprising: a control node, a traffic ingress device and a load node, wherein,
the control node is used for determining, when receiving service data of a client, the service network to which the service data belongs, and forwarding the service data to the traffic ingress device corresponding to that service network;
the traffic ingress device is configured to determine a load node corresponding to the service data, and forward the service data to the load node;
and the load node is used for processing the service data by using the container application.
Optionally,
the traffic ingress device is further configured to search for an idle load node corresponding to the service data by using a load balancing service, and forward the service data to the idle load node.
Optionally, the load balancing service is HAProxy.
Optionally, the traffic ingress device comprises a plurality of working nodes, wherein,
the traffic ingress device is further used for configuring a Keepalived service and a virtual IP address, so that the Keepalived service detects the working state of the working nodes in real time and maps the virtual IP address to any normally working target working node; storing the mapping relation between the virtual IP address and the target working node to the control node; and providing the virtual IP address to a target client of the corresponding service network, so that the target client sends the service data using the virtual IP address;
correspondingly, the control node is further configured to extract a target IP address of the service data, and search a target working node corresponding to the target IP address according to the mapping relationship; and forwarding the service data to the target working node.
The present invention also provides a computer-readable storage medium, in which computer-executable instructions are stored, and when the computer-executable instructions are loaded and executed by a processor, the service data processing method as described above is implemented.
The invention provides a service data processing method applied to a Kubernetes cluster, comprising the following steps: when receiving service data from a client, a control node determines the service network to which the service data belongs and forwards the service data to the traffic ingress device corresponding to that service network; and the traffic ingress device determines the load node corresponding to the service data and forwards the service data to that load node, so that a container application in the load node processes the service data.
It can be seen that, when receiving service data from a client, the control node in the invention first determines the service network from which the service data comes and the traffic ingress device corresponding to that service network, and forwards the service data to that traffic ingress device. Since each service network has its own traffic ingress device, the invention achieves the effect of multiple service networks accessing the same Kubernetes cluster simply by adding a corresponding traffic ingress device for each service network. In addition, after receiving the service data, the traffic ingress device forwards it to the corresponding load node so that the container application in the load node can process it. In other words, through forwarding by multiple traffic ingress devices, the invention allows multiple service networks to access the container applications in the same Kubernetes cluster, and the Kubernetes cluster does not need to be redeployed when a new service network is added, which effectively improves the utilization of cluster resources. The invention also provides a Kubernetes cluster and a computer-readable storage medium with the same beneficial effects.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a flowchart of a service data processing method according to an embodiment of the present invention;
fig. 2 is a structural block diagram of a Kubernetes cluster according to an embodiment of the present invention;
fig. 3 is a block diagram of another Kubernetes cluster according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A Kubernetes cluster is a high-performance cluster system that can deploy container applications efficiently. In the related art, the container applications deployed in a Kubernetes cluster can only serve a single service network; when a new service network also needs to use those container applications, another Kubernetes cluster has to be deployed, which easily wastes cluster resources. In view of this, embodiments of the present invention provide a service data processing method that allows multiple service networks to access the container applications in the same Kubernetes cluster through forwarding by multiple traffic ingress devices, without redeploying the Kubernetes cluster when a new service network is added, so that the utilization of cluster resources can be effectively improved. Referring to fig. 1, fig. 1 is a flowchart of a service data processing method according to an embodiment of the present invention; the method is applied to a Kubernetes cluster and may include:
s101, when receiving the service data of the client, the control node determines a service network to which the service data belongs and forwards the service data to a flow inlet device corresponding to the service network.
Each service network in the embodiment of the invention is additionally provided with a corresponding flow inlet device which is responsible for forwarding the service data to the corresponding container application so as to realize the access of the client to the container application. The embodiment of the invention does not limit the number of the service networks and the flow inlet devices, and the service networks correspond to the flow inlet devices one by one, so that the number of the service networks and the number of the flow inlet devices are the same, and the service networks and the flow inlet devices can be set according to actual application requirements. In other words, in the embodiment of the present invention, if a newly added service network occurs, only a corresponding traffic entry device needs to be newly added, and a kubernets cluster does not need to be redeployed, so that the utilization rate of kubernets cluster resources can be effectively improved, and resource waste is avoided.
Further, the control node is the node in the Kubernetes cluster responsible for running cluster components and controlling and managing containers across the whole cluster; when transmitting service data, the control node would normally send the service data directly to the corresponding load node (i.e., the node where the container application is located). In the embodiment of the present invention, after receiving the service data, the control node instead first determines the service network to which the service data belongs and transmits the service data to the traffic ingress device corresponding to that service network. Because clients of the same service network usually share the same network-segment information, the service network to which the service data belongs can be determined from the network segment of the source IP address in the service data. Of course, to make client access easier, the IP address of the traffic ingress device corresponding to the service network may also be provided directly to the clients in that service network, and the control node can then determine the corresponding service network and traffic ingress device from the target IP address in the service data.
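As a concrete illustration of the segment-based decision described above, the following minimal Python sketch (not part of the patent; the segments, addresses, and helper name are assumptions) matches a client's source IP against configured service-network segments to pick the corresponding traffic ingress device.

    # Illustrative sketch only: one way a control node could decide which traffic
    # ingress device should receive a request, by matching the client's source IP
    # against known service-network segments.
    import ipaddress

    # Hypothetical configuration: service network segment -> ingress device address
    SEGMENT_TO_INGRESS = {
        ipaddress.ip_network("10.10.0.0/16"): "192.168.1.100",  # service network 1
        ipaddress.ip_network("10.20.0.0/16"): "192.168.2.100",  # service network 2
    }

    def ingress_for(source_ip: str) -> str:
        """Return the ingress device address for the service network of source_ip."""
        addr = ipaddress.ip_address(source_ip)
        for segment, ingress in SEGMENT_TO_INGRESS.items():
            if addr in segment:
                return ingress
        raise LookupError(f"no service network configured for {source_ip}")

    # Example: a client in 10.20.0.0/16 is forwarded to the second ingress device.
    assert ingress_for("10.20.3.7") == "192.168.2.100"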
Further, the embodiment of the present invention does not limit the specific structure of the traffic ingress device: it may consist of one working node or of a plurality of working nodes, where a working node receives the service data, determines the load node corresponding to the service data, and forwards the service data to that load node by route forwarding. In the embodiment of the present invention, because the traffic ingress device is an important link in the data path between the client and the container application and needs to provide sufficient redundancy, the traffic ingress device may consist of a plurality of working nodes. The embodiment of the invention does not limit the number of working nodes in each traffic ingress device; the number may differ between devices, and a larger number gives the traffic ingress device stronger performance, so the specific number can be set according to actual application requirements. Furthermore, the embodiment of the present invention does not limit the IP address that the traffic ingress device provides to the clients of the target service network: it may be the IP address of a particular working node, or it may be a virtual IP address, where the virtual IP address can be mapped to any normally working node and is automatically switched to another normally working node when the originally mapped node becomes abnormal. If the working nodes can run stably, the traffic ingress device may use the IP address of a working node as the address through which the target clients access the container application; if a working node may go down or encounter other abnormal conditions, the traffic ingress device may instead provide the virtual IP address to the target clients. In the embodiment of the present invention, in order to avoid the situation where clients cannot access the container application because a working node goes down or fails in some other way, a virtual IP address may be configured in the traffic ingress device. Specifically, a Keepalived service may be deployed in the traffic ingress device; Keepalived is a lightweight high-availability solution for Linux. The service detects the working condition of each working node in real time and maps the virtual IP address to any normally working node; if the working node to which the virtual IP address was originally mapped becomes abnormal, the Keepalived service remaps the virtual IP address to another normally working node, ensuring the high availability of the traffic ingress device. It should be noted that the above only briefly describes how the Keepalived service operates; for the specific working process, refer to the related art of the Keepalived service. Further, it can be understood that, after the Keepalived service determines the mapping relation between the virtual IP address and a working node, the traffic ingress device may store that mapping relation to the control node, so that the control node forwards traffic according to it.
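The fault-drift behaviour attributed to Keepalived above can be pictured with the following hypothetical sketch; a real deployment relies on Keepalived's VRRP election rather than application code, and the node names and health check here are assumptions.

    # Hypothetical sketch of the failover idea described above; real deployments
    # would use Keepalived/VRRP, not this code.
    from typing import Callable, Optional

    def remap_virtual_ip(worker_nodes: list[str],
                         current_owner: Optional[str],
                         is_healthy: Callable[[str], bool]) -> Optional[str]:
        """Keep the virtual IP on the current owner while it is healthy;
        otherwise drift it to any other healthy working node."""
        if current_owner and is_healthy(current_owner):
            return current_owner
        for node in worker_nodes:
            if is_healthy(node):
                return node  # the virtual IP "drifts" to this node
        return None  # no healthy node: the virtual IP is unreachable

    # Example: worker1 fails, so the virtual IP drifts to worker2.
    healthy = {"worker1": False, "worker2": True, "worker3": True}
    owner = remap_virtual_ip(["worker1", "worker2", "worker3"], "worker1", healthy.get)
    assert owner == "worker2"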
In a possible case, the traffic ingress device includes a plurality of working nodes, and before the control node receives the service data of the client, the method may further include:
Step 11: the traffic ingress device configures a Keepalived service and a virtual IP address, so that the Keepalived service detects the working state of the working nodes in real time and maps the virtual IP address to any normally working target working node;
Step 12: storing the mapping relation between the virtual IP address and the target working node to the control node, and providing the virtual IP address to the target clients of the corresponding service network, so that the target clients send service data using the virtual IP address;
correspondingly, determining the service network to which the service data belongs and forwarding the service data to the traffic ingress device corresponding to the service network may include:
Step 21: the control node extracts the target IP address of the service data and searches for the target working node corresponding to the target IP address according to the mapping relation;
Step 22: forwarding the service data to the target working node. (A brief sketch of this lookup-and-forward step is given below.)
Further, it should be noted that the embodiment of the present invention does not limit what a working node specifically is: a working node may be a separately configured node dedicated to forwarding service data, or it may be a load node. If the nodes should stay independent and each node's work should be kept simple, the working node can be a separately configured node dedicated to forwarding service data; if cluster resources need to be saved, the working nodes can also be load nodes. In the embodiment of the invention, in order to save cluster resources and prevent the traffic ingress device from occupying a large amount of server resources, load nodes can be used directly as working nodes. In other words, a load node can run the container application and forward service data at the same time, which effectively improves the utilization of cluster resources.
In one possible case, before the traffic ingress device configures the Keepalived service and the virtual IP address, the method may further include:
Step 31: the cluster control node acquires the number of service networks;
Step 32: grouping the load nodes according to that number, setting each group of load nodes as a traffic ingress device, and setting the load nodes in each traffic ingress device as working nodes.
It can be understood that the number of load nodes in each group may differ and can be set according to actual application requirements. Likewise, the grouping can cover all load nodes or only the load nodes with lower load rates, again according to actual application requirements. A simple grouping sketch follows.
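The sketch below assumes round-robin assignment of load nodes to groups; the patent text does not prescribe a particular grouping rule, so the rule and names here are illustrative.

    # Illustrative sketch of steps 31-32: group the load nodes by the number of
    # service networks; each group becomes one traffic ingress device and its
    # members act as working nodes.
    def group_load_nodes(load_nodes: list[str], num_networks: int) -> list[list[str]]:
        """Split load_nodes into num_networks groups, one per service network.
        Group sizes may differ, matching the note above."""
        groups: list[list[str]] = [[] for _ in range(num_networks)]
        for index, node in enumerate(load_nodes):
            groups[index % num_networks].append(node)
        return groups

    # Example: six load nodes and two service networks give two ingress groups.
    ingress_groups = group_load_nodes([f"node{i}" for i in range(1, 7)], 2)
    # ingress_groups == [["node1", "node3", "node5"], ["node2", "node4", "node6"]]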
S102, the traffic ingress device determines the load node corresponding to the service data and forwards the service data to that load node, so that the container application in the load node processes the service data.
After receiving the service data, the traffic ingress device may send it to the corresponding load node according to a preset routing relation. The embodiment of the present invention does not limit the specific way in which the traffic ingress device forwards the service data to the load node; refer to the related cluster technologies. Further, to improve the working efficiency of the load nodes and avoid overloading any single load node, a load balancing service may be added to the traffic ingress device, and the idle load node corresponding to the service data is found through that load balancing service. The embodiment of the invention does not limit the specific working process of the load balancing service, for which the related art of load balancing can be consulted; nor does it limit the specific load balancing service, which can be any load balancing service applicable to Kubernetes. In one possible case, since HAProxy is a load balancing service commonly used with Kubernetes clusters and is an application proxy that provides high availability and load balancing, HAProxy can be used as the load balancing service in the traffic ingress devices. Further, if the traffic ingress device includes a plurality of working nodes, the load balancing service may be configured on one or more of those working nodes, which are then set as load balancers, to satisfy load balancing in different load scenarios. Of course, if the working node that receives the service traffic is not a load balancer, it needs to forward the traffic to a load balancer for load balancing.
In one possible case, the traffic ingress device determining the load node corresponding to the service data and forwarding the service data to that load node may include:
Step 41: the traffic ingress device searches for an idle load node corresponding to the service data by using the load balancing service, and forwards the service data to the idle load node.
In one possible scenario, the load balancing service is HAProxy. A sketch of one possible idle-node selection policy follows.
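One possible interpretation of "idle load node" is the node with the fewest outstanding requests; the sketch below assumes that policy, whereas a real deployment would rely on HAProxy's built-in balancing algorithms rather than hand-written code.

    # Sketch of one possible "idle load node" policy, assuming a least-connections
    # style selection; node names and request counts are illustrative.
    def pick_idle_load_node(active_requests: dict[str, int]) -> str:
        """Return the load node currently handling the fewest requests."""
        return min(active_requests, key=active_requests.get)

    # Example: pod-b is the least loaded, so the service data is forwarded there.
    assert pick_idle_load_node({"pod-a": 12, "pod-b": 3, "pod-c": 7}) == "pod-b"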
Based on the above embodiment, when receiving service data from a client, the control node in the present invention first determines the service network from which the service data comes and the traffic ingress device corresponding to that service network, and forwards the service data to that traffic ingress device. Since each service network has its own traffic ingress device, the present invention achieves the effect of multiple service networks accessing the same Kubernetes cluster by adding a corresponding traffic ingress device for each service network. In addition, after receiving the service data, the traffic ingress device forwards it to the corresponding load node so that the container application in the load node can process it. In other words, through forwarding by multiple traffic ingress devices, the present invention allows multiple service networks to access the container applications in the same Kubernetes cluster, and the Kubernetes cluster does not need to be redeployed when a new service network is added, which effectively improves the utilization of cluster resources.
The above service data processing method is described below with reference to a specific structural block diagram. Referring to fig. 2, fig. 2 is a block diagram of a Kubernetes cluster according to an embodiment of the present invention, where the method is implemented as follows:
1. Establishing load balancing for multiple network segments.
If the service network is expanded to N networks, or N network isolations are applied to the service network, the working nodes need to be divided into N groups, and one or more nodes are selected from each group to deploy a load balancer (HAProxy 1 to HAProxy N); when service traffic arrives, each load balancer is responsible for the traffic of its own network segment and forwards it to the corresponding workload.
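To make the per-segment load balancer idea concrete, the following sketch renders a simplified HAProxy frontend/backend pair for one segment; the segment names, addresses, ports, and backend endpoints are assumptions, and a production configuration would contain more directives.

    # Illustrative sketch: render a minimal HAProxy frontend/backend pair for one
    # service-network segment (all names and addresses are assumed examples).
    def render_haproxy_stanza(index: int, bind_ip: str, port: int,
                              workloads: list[str]) -> str:
        lines = [
            f"frontend segment{index}_in",
            f"    bind {bind_ip}:{port}",
            f"    default_backend segment{index}_workloads",
            f"backend segment{index}_workloads",
            "    balance roundrobin",
        ]
        lines += [f"    server wl{i} {addr} check" for i, addr in enumerate(workloads, 1)]
        return "\n".join(lines)

    # Example: load balancer for segment 2, forwarding to two workload endpoints.
    print(render_haproxy_stanza(2, "192.168.2.100", 80,
                                ["10.244.1.5:8080", "10.244.2.9:8080"]))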
2. Implementing a unified traffic entry.
The IP address used to access the service application should be fixed and, as far as possible, should not be the service IP of a working node, so a virtual service IP address needs to be established on the working nodes. The application can then be accessed in the fixed "IP address + port" manner. If the service network is expanded to N networks, or N network isolations are applied, N different fixed IP addresses need to be selected for the N service networks as the service virtual IP addresses of the respective network segments (virtual IP address 1 to virtual IP address N), and fault drift of these service virtual IP addresses is achieved by deploying N Keepalived instances (Keepalived 1 to Keepalived N) in the N groups of working nodes.
As shown in fig. 2, a Kubernetes-based container cluster is divided into control nodes (master, comprising control nodes 1 to 3) and working nodes (comprising working nodes 1 to N). If the cluster's service network is to be expanded, the physical network card to be expanded is added to all nodes of the cluster and an extension-network IP address is configured. The load balancing nodes already configured for the existing service network are then isolated, one or more of the remaining nodes are selected, the load balancers and Keepalived services are deployed to those nodes, and the virtual IP addresses of the extension network are configured.
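Configuring the virtual IP addresses of the extension network with Keepalived could, for illustration, be driven by a small generator like the one below; the interface name, router id, priorities, and address are assumptions, and a real Keepalived configuration includes further settings.

    # Illustrative sketch: render a simplified Keepalived vrrp_instance block for
    # one extension-network segment (all values shown are assumed examples).
    def render_keepalived_instance(index: int, interface: str, virtual_ip: str,
                                   state: str = "BACKUP", priority: int = 100) -> str:
        return "\n".join([
            f"vrrp_instance VI_{index} {{",
            f"    state {state}",
            f"    interface {interface}",
            f"    virtual_router_id {50 + index}",
            f"    priority {priority}",
            "    virtual_ipaddress {",
            f"        {virtual_ip}",
            "    }",
            "}",
        ])

    # Example: the node in group 1 that should normally own virtual IP address 1.
    print(render_keepalived_instance(1, "eth1", "192.168.1.100/24",
                                     state="MASTER", priority=150))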
In the following, the Kubernetes cluster and the computer-readable storage medium provided by the embodiments of the present invention are introduced; the Kubernetes cluster and computer-readable storage medium described below and the service data processing method described above may be referred to correspondingly.
Referring to fig. 3, fig. 3 is a block diagram of another Kubernetes cluster according to an embodiment of the present invention, where the cluster may include: a control node 301, a traffic ingress device 302, and a load node 303, wherein,
the control node 301 is configured to, when receiving service data of a client, determine a service network to which the service data belongs, and forward the service data to a traffic ingress device 302 corresponding to the service network;
a traffic ingress device 302, configured to determine a load node 303 corresponding to the service data, and forward the service data to the load node 303;
and the load node 303 is used for processing the service data by using the container application.
Optionally, the traffic ingress device 302 is further configured to search, by using the load balancing service, for an idle load node 303 corresponding to the service data, and forward the service data to the idle load node 303.
Optionally, the load balancing service is HAProxy.
Optionally, the traffic ingress device 302 comprises a plurality of working nodes, wherein,
the traffic ingress device 302 is further configured to configure a Keepalived service and a virtual IP address, so that the Keepalived service detects the working state of the working nodes in real time and maps the virtual IP address to any normally working target working node; store the mapping relation between the virtual IP address and the target working node to the control node 301; and provide the virtual IP address to the target clients of the corresponding service network, so that the target clients send service data using the virtual IP address;
correspondingly, the control node 301 is further configured to extract a target IP address of the service data, and search a target working node corresponding to the target IP address according to the mapping relationship; and forwarding the service data to the target working node.
Optionally, the cluster control node 301 may also be configured to obtain the number of service networks; the load nodes 303 are grouped according to the number, and each group of load nodes 303 is set as a traffic ingress device 302, and the load nodes 303 in each traffic ingress device 302 are set as working nodes.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the service data processing method according to any of the above embodiments are implemented.
Since the embodiment of the computer-readable storage medium portion corresponds to the embodiment of the service data processing method portion, please refer to the description of the embodiment of the service data processing method portion for the embodiment of the computer-readable storage medium portion, which is not repeated here.
The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The service data processing method, Kubernetes cluster, and computer-readable storage medium provided by the present invention are described in detail above. The principles and embodiments of the present invention are explained herein using specific examples, which are presented only to assist in understanding the method and its core concepts. It should be noted that those skilled in the art can make various improvements and modifications to the present invention without departing from its principle, and those improvements and modifications also fall within the scope of the claims of the present invention.

Claims (10)

1. A service data processing method is applied to a Kubernetes cluster, and the method comprises the following steps:
when receiving service data of a client, a control node determines a service network to which the service data belongs and forwards the service data to a traffic ingress device corresponding to the service network;
and the traffic ingress device determines a load node corresponding to the service data and forwards the service data to the load node, so that a container application in the load node processes the service data.
2. The method according to claim 1, wherein the determining, by the traffic ingress device, a load node corresponding to the service data, and forwarding the service data to the load node comprises:
and the traffic ingress device searches for an idle load node corresponding to the service data by using a load balancing service and forwards the service data to the idle load node.
3. The method of claim 2, wherein the load balancing service is HAProxy.
4. The service data processing method according to any one of claims 1 to 3, wherein the traffic ingress device comprises a plurality of working nodes, and before the control node receives the service data of the client, the method further comprises:
the traffic ingress device is configured with a Keepalived service and a virtual IP address, so that the Keepalived service detects the working state of the working nodes in real time and maps the virtual IP address to any normally working target working node;
storing the mapping relation between the virtual IP address and the target working node to the control node, and providing the virtual IP address to a target client of a corresponding service network so that the target client sends the service data by using the virtual IP address;
correspondingly, the determining the service network to which the service data belongs and forwarding the service data to the traffic ingress device corresponding to the service network includes:
the control node extracts a target IP address of the service data and searches a target working node corresponding to the target IP address according to the mapping relation;
and forwarding the service data to the target working node.
5. The method according to claim 4, further comprising, before configuring Keepalived service and virtual IP address for the traffic ingress device:
the cluster control node acquires the number of the service networks;
and grouping the load nodes according to the number, setting each group of load nodes as the traffic ingress devices, and setting the load nodes in each traffic ingress device as the working nodes.
6. A Kubernetes cluster, comprising: a control node, a traffic ingress device and a load node, wherein,
the control node is used for determining, when receiving service data of a client, the service network to which the service data belongs, and forwarding the service data to the traffic ingress device corresponding to the service network;
the traffic ingress device is configured to determine a load node corresponding to the service data, and forward the service data to the load node;
and the load node is used for processing the service data by using the container application.
7. The Kubernetes cluster according to claim 6, wherein
the traffic ingress device is further configured to search for an idle load node corresponding to the service data by using a load balancing service, and forward the service data to the idle load node.
8. The Kubernetes cluster of claim 7, wherein the load balancing service is HAProxy.
9. The Kubernetes cluster according to any one of claims 6 to 8, wherein the traffic ingress device comprises a plurality of working nodes, wherein,
the traffic ingress device is further configured to configure a Keepalived service and a virtual IP address, so that the Keepalived service detects the working state of the working nodes in real time and maps the virtual IP address to any normally working target working node; store the mapping relation between the virtual IP address and the target working node to the control node; and provide the virtual IP address to the target clients of the corresponding service network, so that the target clients send the service data using the virtual IP address;
correspondingly, the control node is further configured to extract a target IP address of the service data, and search a target working node corresponding to the target IP address according to the mapping relationship; and forwarding the service data to the target working node.
10. A computer-readable storage medium, having stored thereon computer-executable instructions, which, when loaded and executed by a processor, implement the business data processing method of any one of claims 1 to 5.
CN202111226347.4A 2021-10-21 2021-10-21 Service data processing method, kubernetes cluster and medium Active CN114024971B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111226347.4A CN114024971B (en) 2021-10-21 2021-10-21 Service data processing method, kubernetes cluster and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111226347.4A CN114024971B (en) 2021-10-21 2021-10-21 Service data processing method, kubernetes cluster and medium

Publications (2)

Publication Number Publication Date
CN114024971A true CN114024971A (en) 2022-02-08
CN114024971B CN114024971B (en) 2024-02-13

Family

ID=80057059

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111226347.4A Active CN114024971B (en) 2021-10-21 2021-10-21 Service data processing method, kubernetes cluster and medium

Country Status (1)

Country Link
CN (1) CN114024971B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9444744B1 (en) * 2015-04-04 2016-09-13 Cisco Technology, Inc. Line-rate selective load balancing of permitted network traffic
CN110209492A (en) * 2019-03-21 2019-09-06 腾讯科技(深圳)有限公司 A kind of data processing method and device
CN111638957A (en) * 2020-06-01 2020-09-08 山东汇贸电子口岸有限公司 Method for realizing cluster sharing type public cloud load balance
CN111935312A (en) * 2020-09-21 2020-11-13 深圳蜂巢互联(南京)科技研究院有限公司 Industrial Internet container cloud platform and flow access control method thereof
CN112492022A (en) * 2020-11-25 2021-03-12 上海中通吉网络技术有限公司 Cluster, method, system and storage medium for improving database availability
CN112445623A (en) * 2020-12-14 2021-03-05 招商局金融科技有限公司 Multi-cluster management method and device, electronic equipment and storage medium
CN112905305A (en) * 2021-03-03 2021-06-04 国网电力科学研究院有限公司 VPP-based cluster type virtualized data forwarding method, device and system

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114745185A (en) * 2022-04-18 2022-07-12 阿里巴巴(中国)有限公司 Cluster access method and device

Also Published As

Publication number Publication date
CN114024971B (en) 2024-02-13


Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant