CN112995054B - Flow distribution method and device, electronic equipment and computer readable medium - Google Patents


Info

Publication number
CN112995054B
CN112995054B
Authority
CN
China
Prior art keywords
virtual server
priority
virtual
server
protocol
Prior art date
Legal status
Active
Application number
CN202110237195.1A
Other languages
Chinese (zh)
Other versions
CN112995054A (en
Inventor
于文超
Current Assignee
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN202110237195.1A
Publication of CN112995054A
Application granted
Publication of CN112995054B
Legal status: Active

Classifications

    • H: Electricity
    • H04: Electric communication technique
    • H04L: Transmission of digital information, e.g. telegraphic communication
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/12: Avoiding congestion; Recovering from congestion
    • H04L 47/125: Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/06: Management of faults, events, alarms or notifications
    • H04L 41/0654: Management of faults, events, alarms or notifications using network fault recovery
    • H04L 41/0663: Performing the actions predefined by failover planning, e.g. switching to standby network elements

Abstract

The application provides a traffic distribution method and apparatus, an electronic device, and a computer-readable medium. The method includes: when a first virtual server among a plurality of virtual servers is determined to have failed, determining a first redundancy protocol and a first protocol address corresponding to the first virtual server, where each virtual server is bound to one protocol address, each protocol address has an associated redundancy protocol, and the redundancy protocol indicates the receive-priority order of the plurality of virtual servers for traffic sent to that protocol address; and determining a second virtual server according to the receive-priority order indicated by the first redundancy protocol, and forwarding the traffic sent to the first protocol address to the second virtual server, where the priority of the second virtual server is lower than that of the first virtual server. The method and apparatus improve the balance of traffic distribution.

Description

Flow distribution method and device, electronic equipment and computer readable medium
Technical Field
The present application relates to the field of traffic distribution, and in particular, to a traffic distribution method, apparatus, electronic device, and computer readable medium.
Background
The Virtual Router Redundancy Protocol (VRRP) is a fault-tolerance protocol that prevents single points of failure among network devices. Several network devices running VRRP form a virtual server group containing multiple virtual servers, all of which use the same VRRP protocol.
In the normal state, only one virtual server is the Master and the rest are Backups. The same IP address can be configured on multiple virtual servers simultaneously, but only the Master processes packets addressed to that IP. When the Master fails, one Backup becomes the new Master and the traffic for the IP address is transferred to it, so that the network is not interrupted; the new Master then announces, via a gratuitous ARP message, that all devices communicating with it should send packets destined for the virtual IP address to the new Master.
Generally, VRRP is used to build a master/backup cluster for a service, but in this master/backup mode only one virtual server (the Master) carries traffic, which wastes the capacity of the remaining servers. Even if more virtual servers are added, only one carries traffic, which makes capacity expansion inconvenient.
Disclosure of Invention
In order to solve the above technical problem, or at least partially solve the problem of wasted traffic resources, the present application provides a traffic distribution method and apparatus, an electronic device, and a computer-readable medium.
In a first aspect, the present application provides a traffic distribution method, where the method includes:
when it is determined that a first virtual server among a plurality of virtual servers has failed, determining a first redundancy protocol and a first protocol address corresponding to the first virtual server, where each virtual server is bound to one protocol address, each protocol address has an associated redundancy protocol, and the redundancy protocol indicates the receive-priority order of the plurality of virtual servers for traffic sent to that protocol address;
and determining a second virtual server according to the receive-priority order indicated by the first redundancy protocol, and forwarding the traffic sent to the first protocol address to the second virtual server, where the priority of the second virtual server is lower than that of the first virtual server.
Optionally, the plurality of virtual servers form a directed ring, and, following the direction of the ring, the receive priorities indicated by each redundancy protocol for traffic sent to its protocol address decrease in turn.
Optionally, selecting, according to the receive-priority order indicated by the first redundancy protocol, the virtual server currently carrying the least traffic as the second virtual server includes:
determining whether a third virtual server exists among the plurality of virtual servers, where the third virtual server failed earlier than the first virtual server and its priority is at the level immediately below that of the first virtual server;
if the third virtual server exists, determining a fourth virtual server and a fifth virtual server according to the receive-priority order indicated by the first redundancy protocol, where the fourth virtual server already carries the traffic transferred from the protocol address bound to the third virtual server and its priority is one level above that of the fifth virtual server;
and exchanging the priorities of the fourth and fifth virtual servers, and taking the fifth virtual server as the second virtual server.
Optionally, selecting, according to the receive-priority order indicated by the first redundancy protocol, the virtual server currently carrying the least traffic as the second virtual server includes:
determining, according to the receive-priority order indicated by the first redundancy protocol, whether the adjacent virtual server at the next priority level after the first virtual server has failed;
and when the adjacent virtual server has not failed, determining that it is among the virtual servers receiving the least traffic, and taking it as the second virtual server.
Optionally, after the priorities of the fourth and fifth virtual servers are exchanged, the method further includes:
restoring the original priority order of the fourth and fifth virtual servers when it is detected that the failure of the first virtual server has been cleared.
In a second aspect, the present application provides a traffic distribution apparatus, the apparatus comprising:
a determining module, configured to determine, when it is determined that a first virtual server among a plurality of virtual servers has failed, a first redundancy protocol and a first protocol address corresponding to the first virtual server, where each virtual server is bound to one protocol address, each protocol address has an associated redundancy protocol, and the redundancy protocol indicates the receive-priority order of the plurality of virtual servers for traffic sent to that protocol address;
and a forwarding module, configured to determine a second virtual server according to the receive-priority order indicated by the first redundancy protocol and to forward the traffic sent to the first protocol address to the second virtual server, where the priority of the second virtual server is lower than that of the first virtual server.
Optionally, the forwarding module includes:
a first determining unit, configured to determine whether a third virtual server exists among the plurality of virtual servers, where the third virtual server failed earlier than the first virtual server and its priority is at the level immediately below that of the first virtual server;
a second determining unit, configured to determine, when the third virtual server exists, a fourth virtual server and a fifth virtual server according to the receive-priority order indicated by the first redundancy protocol, where the fourth virtual server already carries the traffic transferred from the protocol address bound to the third virtual server and its priority is one level above that of the fifth virtual server;
and an exchanging unit, configured to exchange the priorities of the fourth and fifth virtual servers and to take the fifth virtual server as the second virtual server.
Optionally, the forwarding module includes:
a third determining unit, configured to determine, according to a receiving priority order indicated by the first redundancy protocol, whether an adjacent virtual server corresponding to a next priority of the first virtual server fails;
a fourth determination unit configured to determine the adjacent virtual server as the second virtual server when the adjacent virtual server does not fail.
In a third aspect, the present application provides an electronic device including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with one another via the communication bus;
a memory for storing a computer program;
a processor for implementing any of the method steps described herein when executing a program stored in the memory.
In a fourth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs any of the method steps.
Compared with the prior art, the technical solution provided by the embodiments of the present application has the following advantages. Each virtual server is bound to one protocol address and traffic is distributed evenly across the protocol addresses, so every virtual server carries traffic, the resources of each virtual server are fully used, and the balance of traffic distribution is improved. In addition, when the first virtual server fails, the traffic of the first protocol address is transferred to the second virtual server according to the priority order, so that the traffic of each protocol address remains balanced even when a virtual server fails.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a schematic flow distribution diagram provided in an embodiment of the present application;
FIG. 2 is a block diagram of a flow distribution system;
fig. 3 is a flowchart of a traffic distribution method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a receive priority sequence;
FIG. 5 is a schematic diagram of traffic distribution when a discontinuous virtual server fails;
FIG. 6 is a schematic diagram illustrating traffic distribution when a continuous virtual server fails according to the prior art;
FIG. 7 is a schematic diagram of traffic distribution when consecutive virtual servers fail;
fig. 8 is a schematic structural diagram of a flow distribution device according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The implementation of the embodiments of the present application involves several functions, such as monitoring, priority correction, and priority assignment. One device may be configured for each function, and these devices may be independent entities or virtual devices divided by function and hosted on the same entity. In the case of independent entities, each entity may be a server, for example a monitoring server or a priority correction server; in the case of virtual devices divided by function, they may be configured on the same physical server, which then provides multiple functions. The description below treats the functions as separate entities.
An embodiment of the present application provides a traffic distribution system comprising a plurality of virtual servers, a monitoring server, and a configuration server, all interconnected. Each virtual server is bound to one protocol address, traffic is distributed evenly across the protocol addresses, and each virtual server receives the traffic sent to its bound protocol address. Each protocol address has an associated redundancy protocol, which indicates the receive-priority order of the plurality of virtual servers for traffic sent to that address. That is, each protocol address is associated with one redundancy protocol, each redundancy protocol is associated with the plurality of virtual servers, and a different priority is set for each server: the traffic of a protocol address is sent to the server with the highest priority, which is the Master server for that address, while the other virtual servers are Backup servers for it.
When the monitoring server determines that a first virtual server among the plurality of virtual servers has failed, it determines the first redundancy protocol and first protocol address corresponding to that server; the first virtual server may be any of the plurality of virtual servers, and the first redundancy protocol indicates the receive-priority order of the plurality of virtual servers. The configuration server then determines, following the receive-priority order indicated by the first redundancy protocol from high to low, a second virtual server whose priority is lower than that of the first virtual server, and forwards the traffic sent to the first protocol address to the second virtual server.
Fig. 1 is a schematic diagram of traffic distribution. As shown in fig. 1, user traffic is bound to the protocol addresses (vip1, vip2, ..., vipn) through DNS (Domain Name System) or load balancing, and each virtual server is bound to one protocol address: for example, virtual server-1 is bound to vip1, virtual server-2 to vip2, ..., and virtual server-n to vipn. Virtual server-1 is then the Master server of vip1, and virtual server-2 through virtual server-n are Backup servers for vip1. The receive-priority order indicated by the redundancy protocol associated with vip1 is virtual server-1, virtual server-2, ..., virtual server-n; the order associated with vip2 is virtual server-2, virtual server-3, ..., virtual server-n, virtual server-1; and so on, so that the virtual servers form a directed ring, as shown in fig. 4.
Illustratively, if virtual server-1 fails, the traffic of vip1 is transferred, following the receive-priority order indicated by the redundancy protocol associated with vip1, to virtual server-2, whose priority is lower than that of virtual server-1. If virtual server-2 fails, the traffic of vip2 is transferred, following the receive-priority order indicated by the redundancy protocol associated with vip2, to virtual server-3, whose priority is lower than that of virtual server-2, and so on.
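As an illustrative sketch (the function names and 0-based numbering are assumptions, not code from the patent), the directed-ring receive-priority order and failover just described can be modeled as:

```python
# Illustrative sketch of the directed-ring receive-priority order: vip i's
# order starts at server i and wraps around the ring, and a VIP's traffic
# falls to the first non-failed server along that order.

def ring_order(n, vip_index):
    """Server indices in decreasing receive priority for vip_index (0-based)."""
    return [(vip_index + offset) % n for offset in range(n)]

def failover_target(n, vip_index, failed):
    """The first non-failed server along the ring takes over the VIP's traffic."""
    for server in ring_order(n, vip_index):
        if server not in failed:
            return server
    return None  # every server has failed

ring_order(5, 1)                   # [1, 2, 3, 4, 0]: vip2's order is S2..S5, S1
failover_target(5, 0, failed={0})  # 1: S1 is down, so S2 takes over vip1
```

With this ring structure, every server is Master for exactly one VIP in the normal state, which is what keeps the per-address traffic balanced.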
In the present application, each virtual server is bound to one protocol address and traffic is distributed evenly across the protocol addresses, so every virtual server carries traffic, the resources of each virtual server are fully used, and the balance of traffic distribution is improved. In addition, when the first virtual server fails, the traffic of the first protocol address is transferred to the second virtual server according to the priority order, keeping the traffic of each protocol address balanced even when a virtual server fails. If the virtual server group is expanded, traffic can be distributed over a correspondingly wider set of servers.
Fig. 2 is a schematic diagram of a framework of a traffic distribution system, which further includes a priority correction server and a priority distribution server, as shown in fig. 2. In the system, a monitoring server, a priority correction server, a priority distribution server, a configuration server and a virtual server group are sequentially connected, wherein the virtual server group comprises a plurality of virtual servers. The functions of each server may be executed by each server individually, or may be executed by a central server, which is not limited in this application.
When the monitoring server detects that the first virtual server has failed, it sends the monitoring result to the priority correction server. The priority correction server selects, from the other virtual servers in the group, a second virtual server that is currently receiving the least traffic, and forwards the traffic sent to the first protocol address to it. In this way, the traffic of the first protocol address is transferred when the first virtual server fails, avoiding wasted traffic, and because the traffic moves to the server receiving the least traffic, the distribution remains balanced after the transfer. The monitoring server can also monitor traffic-switching information and record failed virtual servers in real time.
According to the receive-priority order indicated by the first redundancy protocol, the virtual server at the next priority level after the first virtual server is the adjacent virtual server. The priority correction server checks whether the adjacent virtual server has failed; if it has not, the traffic of the first protocol address is transferred to it, so that the traffic moves to the nearest adjacent server.
If the priority correction server determines that the adjacent virtual server has failed, that adjacent server is a third virtual server whose failure occurred earlier than that of the first virtual server. Before the first virtual server failed, the traffic of the protocol address corresponding to the third virtual server was transferred to a fourth virtual server, whose priority is at the level immediately below that of the third virtual server; the server at the level below the fourth is a fifth virtual server. The fourth virtual server carries both the traffic of its own protocol address and the transferred traffic of the third virtual server's protocol address, so its load is greater than that of the fifth virtual server. The priority correction server therefore exchanges the priorities of the fourth and fifth virtual servers, so that the receive-priority order indicated by the first redundancy protocol becomes ..., the first virtual server, the third virtual server, the fifth virtual server, the fourth virtual server. Since the first and third virtual servers have both failed, the traffic of the first protocol address is transferred to the fifth virtual server, i.e., to the server that is receiving the least traffic and is closest to the failed server.
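The correction rule for consecutive failures can be sketched as follows. This is a reconstruction under stated assumptions (server labels and the list representation are illustrative, not from the patent):

```python
def choose_takeover(order, failed):
    """order: server ids for one VIP in decreasing receive priority, with
    order[0] being the VIP's just-failed Master. Returns the takeover server
    and the possibly adjusted order. Sketch only; names are illustrative."""
    neighbor = order[1]            # the "adjacent" server at the next priority
    if neighbor not in failed:
        return neighbor, order     # simple case: the adjacent server takes over
    # The neighbor (the "third server") failed earlier, so order[2] (the
    # "fourth server") already carries that neighbor's transferred traffic.
    # Swap it with order[3] (the "fifth server") so the newly transferred
    # traffic lands on the less-loaded fifth server instead.
    adjusted = order[:]
    adjusted[2], adjusted[3] = adjusted[3], adjusted[2]
    return adjusted[2], adjusted
```

For example, with order S1, S2, S3, S4, S5 and both S1 and S2 down, the swap sends vip1's traffic to S4 rather than piling it onto S3.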
After receiving the correction result from the priority correction server, the priority distribution server readjusts the receive-priority order indicated by the redundancy protocol and sends the adjustment result to the configuration server. The configuration server regenerates the configuration information according to the adjustment result and sends it to the plurality of virtual servers, thereby determining the new Master server for the redundancy protocol.
Illustratively, the redundancy protocol is a VRRP (Virtual Router Redundancy Protocol) instance, and the VRRP implementation is keepalived. Optionally, the configuration server regenerates the VRRP configuration file according to the adjustment result, and a loading process then sends the file to the plurality of virtual servers, completing the configuration of the VRRP instances and ensuring normal operation of the VRRP protocol.
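A minimal sketch of regenerating a keepalived configuration block after a priority adjustment: the keywords (vrrp_instance, state, interface, virtual_router_id, priority, virtual_ipaddress) are standard keepalived directives, but the instance name, interface, router id, and addresses below are illustrative assumptions, not values from the patent.

```python
# Render one keepalived vrrp_instance block from an adjusted priority; the
# configuration server would regenerate such blocks and push them to the
# virtual servers.
def vrrp_instance_conf(name, state, interface, vrid, priority, vip):
    return (
        f"vrrp_instance {name} {{\n"
        f"    state {state}\n"
        f"    interface {interface}\n"
        f"    virtual_router_id {vrid}\n"
        f"    priority {priority}\n"
        f"    virtual_ipaddress {{\n"
        f"        {vip}\n"
        f"    }}\n"
        f"}}\n"
    )

conf = vrrp_instance_conf("VI_1", "MASTER", "eth0", 51, 255, "192.168.1.100")
```

After a priority swap, only the `priority` value changes; keepalived re-elects the Master from the announced priorities.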
As shown in fig. 2, the traffic distribution system further includes a service publishing server connected to the plurality of virtual servers. When it receives a service request from a terminal, the service publishing server selects one protocol address from the plurality of protocol addresses and feeds it back to the terminal, so that the terminal uses the virtual server bound to the selected address.
The multiple redundancy protocols are associated with multiple protocol addresses, which can be used directly or indirectly by a user, and the user only needs to select one of them. However, requiring the user to select a protocol address on every service request is inconvenient, and user-driven selection cannot guarantee balanced access across the protocol addresses. The service publishing server solves the problem of giving users consistent, convenient access; it can be implemented in at least the following three ways.
Method 1: DNS. If the service is accessible through a domain name, all of the protocol addresses can be associated with the service's domain name via DNS, and when a user requests the domain name, the DNS responds with one of the protocol addresses in a round-robin fashion.
DNS is a distributed network directory service mainly used to map between domain names and IP addresses. The DNS service is managed hierarchically: it can be configured and queried by region and by operator, and caching ensures concurrent query performance. Within a dispatch zone, if a recorded service domain name is associated with multiple IP addresses, the DNS answers each domain-name query by selecting one of the associated addresses under a round-robin policy. The advantage of load balancing through DNS is that service traffic can be distributed flexibly by region and operator.
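A toy sketch of the round-robin answering just described; the class and the domain and addresses are illustrative assumptions, not part of any real DNS implementation:

```python
from itertools import cycle

# Each query for the service domain is answered with the next address in
# rotation, spreading users across the protocol addresses.
class RoundRobinDNS:
    def __init__(self, domain, addresses):
        self._records = {domain: cycle(addresses)}

    def resolve(self, domain):
        return next(self._records[domain])

dns = RoundRobinDNS("service.example.com", ["vip1", "vip2", "vip3"])
dns.resolve("service.example.com")  # "vip1", then "vip2", "vip3", "vip1", ...
```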
Method 2: server-side load balancing. Load balancing is performed at the server side: the multiple protocol addresses serve as backends of a load balancer, and a single virtual load-balancing address is provided for users to access. Common techniques of this kind include layer-4 LVS load balancing and layer-7 Nginx load balancing.
Layer-4 load balancing works at the fourth layer, the transport layer, of the seven-layer OSI (Open Systems Interconnection) model, distributing requests based on IP address and port number; a typical example is LVS (Linux Virtual Server). Layer-7 load balancing works at the seventh layer, the application layer, balancing load based on application-layer information in the request, such as layer-7 protocols (HTTP, RADIUS, DNS), the URL, or the browser type. It offers richer functionality and makes the network more intelligent; a typical example is Nginx.
Method 3: client-side load balancing. The multiple protocol addresses are registered in a service center; when a user requests the service, the client obtains one protocol address from the service center and then accesses that address. Common service-center technologies include Consul and ZooKeeper.
Client-side load balancing (a service discovery mechanism) is widely used in the field of microservices to keep services available and consistent as application instances are dynamically scaled, fail, or are upgraded. At the heart of service discovery is a service registry: a database containing the network addresses of all available service instances. After a user initiates a request, an available application instance is obtained by querying the service registry to handle it. Client-side load balancing can be built on this mechanism: no server-side load balancer is needed, the user program queries the service registry directly for the list of available RealServers, and then selects one RealServer to send the request to. This approach is simpler, more flexible, and more direct.
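A minimal sketch of this client-side pattern, with a plain dict standing in for a registry such as Consul or ZooKeeper; the service name and addresses are illustrative assumptions:

```python
import random

# The client queries the registry for live instances and picks a RealServer
# itself; no server-side load balancer sits in the request path.
registry = {"flow-service": ["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"]}

def pick_instance(service):
    instances = registry[service]    # query the registry for available instances
    return random.choice(instances)  # client-side selection

addr = pick_instance("flow-service")
```

Real registries add health checks and change notifications on top of this lookup, but the selection step stays on the client.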
The embodiment of the application provides a traffic distribution method, which can be applied to a server and used for carrying out balanced distribution of traffic.
The traffic distribution method provided in the embodiments of the present application is described in detail below with reference to a specific implementation, as shown in fig. 3. The specific steps are as follows:
step 301: when determining that a first virtual server in the plurality of virtual servers fails, determining a first redundancy protocol and a first protocol address corresponding to the first virtual server.
Wherein each virtual server is bound to a protocol address having an associated redundancy protocol for indicating a priority order of receipt of traffic sent to the protocol address by the plurality of virtual servers.
To distribute user traffic across the multiple virtual servers as evenly as possible, each virtual server is bound to one protocol address, and each protocol address has an associated redundancy protocol. That is, a VRRP instance (the redundancy protocol) runs on each virtual server; each VRRP instance is associated with the plurality of virtual servers, with a different priority set for each. The virtual server with the highest priority is the Master server of the VRRP instance, the other virtual servers are its Backup servers, and the instance's VIP (protocol address) is configured on the Master. When the first virtual server fails, the first redundancy protocol and first protocol address corresponding to it are determined.
Step 302: and determining a second virtual server according to the receiving priority order indicated by the first redundancy protocol, and forwarding the traffic sent to the first protocol address to the second virtual server.
Wherein the priority of the second virtual server is lower than the priority of the first virtual server.
When a Master server fails, a lower-priority Backup virtual server takes over from it and becomes the new Master: the failed server's VIP is switched to the new Master, and the traffic of the failed server's protocol address is forwarded there. The priority of the new Master is lower than that of the failed server; specifically, the two differ by at least one priority level.
In the application, when the virtual server fails, the flow of the failed virtual server can be automatically migrated to other non-failed virtual servers, so that waste of flow resources is avoided.
As an alternative embodiment, the determining the second virtual server according to the receiving priority order indicated by the first redundancy protocol includes: determining the current bearer traffic of all the virtual servers; and selecting the virtual server with the minimum carrying flow in the current carrying flows as a second virtual server according to the receiving priority sequence indicated by the first redundancy protocol.
When the first virtual server fails, the current bearer traffic of the remaining virtual servers needs to be determined, since the bearer traffic of different virtual servers may differ. To ensure that traffic remains evenly distributed after the traffic of the first virtual server is transferred, the virtual server with the smallest current bearer traffic can be selected as the second virtual server, and the traffic of the first protocol address is then transferred to it.
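The selection step above can be sketched in a few lines of Python. This is a minimal illustration, not code from the patent; the server names, the load dictionary, and the `pick_second_server` helper are all hypothetical:

```python
# Illustrative sketch of the selection step: among the servers that have not
# failed, pick the one with the smallest current bearer traffic as the
# failover target. Names and data layout are hypothetical.

def pick_second_server(servers, failed, load):
    """Return the healthy server currently carrying the least traffic."""
    healthy = [s for s in servers if s not in failed]
    return min(healthy, key=lambda s: load[s])

servers = ["S1", "S2", "S3", "S4", "S5"]
# S1 has failed; among the rest, S3 carries the least traffic and takes over.
target = pick_second_server(servers, {"S1"}, {"S2": 30, "S3": 10, "S4": 20, "S5": 25})
print(target)  # S3
```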
Fig. 4 is a schematic diagram of a receiving priority sequence. As shown in fig. 4, assume the virtual server group includes 5 virtual servers, numbered S1, S2, S3, S4, and S5. In the normal state, 5 VRRP instances correspond to 5 virtual IPs; that is, vip1, vip2, vip3, vip4, and vip5 are configured on the 5 machines respectively. For example, the Master server of the vip1 VRRP instance is S1, so the vip1 VRRP instance on S1 is assigned the highest priority, and the vip1 VRRP instance priorities of the remaining virtual servers decrease in sequence along the ring direction. The priority calculation formula of the kth node is as follows:
P_k = P_max - (k - 1) × (P_max - P_min) / (N - 1)

where N is the total number of nodes, P_k is the priority of the kth node, and P_max and P_min are the VRRP highest priority and lowest priority, respectively.
Assuming the highest VRRP priority is 255 and the lowest is 15, the priorities of the VRRP instances corresponding to vip1 on nodes S1, S2, S3, S4, and S5 are 255, 195, 135, 75, and 15 respectively, and the priorities of the VRRP instances corresponding to vip2 on nodes S2, S3, S4, S5, and S1 are 255, 195, 135, 75, and 15 respectively. The VRRP priorities of the other 3 vips follow by analogy from the formula, and the resulting allocation of virtual server priorities is shown in Table 1.
Table 1

        S1    S2    S3    S4    S5
vip1   255   195   135    75    15
vip2    15   255   195   135    75
vip3    75    15   255   195   135
vip4   135    75    15   255   195
vip5   195   135    75    15   255
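The formula and its rotation around the ring can be sketched in Python. This is an illustrative reconstruction of Table 1, not code from the patent; the `priority_matrix` helper and the list layout are assumptions:

```python
# Illustrative reconstruction of Table 1 from the formula
# P_k = P_max - (k - 1) * (P_max - P_min) / (N - 1), with each vip's
# priority sequence rotated so that vip i's Master is server i.

def priority_matrix(n, p_max=255, p_min=15):
    step = (p_max - p_min) // (n - 1)            # priority decrement per hop
    base = [p_max - k * step for k in range(n)]  # priorities along the ring
    # row v = vip (v+1); column s = server S(s+1); rotate base by v positions
    return [[base[(s - v) % n] for s in range(n)] for v in range(n)]

for vip, row in enumerate(priority_matrix(5), start=1):
    print(f"vip{vip}", row)
# vip1 [255, 195, 135, 75, 15]
# vip2 [15, 255, 195, 135, 75]
# ...
```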
The virtual servers indicated by the redundancy protocol form a directed ring, and the receiving priority for traffic sent to the protocol address decreases in sequence along the direction of the ring. With this priority allocation, when one virtual server fails, the traffic of the failed virtual server is switched to the next normal virtual server along the direction of the directed ring.
As an optional implementation, selecting, according to the receiving priority order indicated by the first redundancy protocol, the virtual server with the smallest bearer traffic as the second virtual server includes: determining, according to the receiving priority order indicated by the first redundancy protocol, whether the adjacent virtual server corresponding to the next priority of the first virtual server has failed; and, when the adjacent virtual server has not failed, determining that the adjacent virtual server is one of the virtual servers with the smallest bearer traffic and taking it as the second virtual server.
Under this priority allocation, all traffic is transferred to the nearest virtual server. When the first virtual server fails, if the adjacent virtual server corresponding to its next priority has not failed, that adjacent virtual server has not received traffic from any other failed virtual server and is therefore one of the virtual servers with the smallest bearer traffic; the traffic of the first virtual server is transferred to it, so that traffic is transferred to the nearest available server.
As shown in fig. 5, assume that S1 and S4 fail. The next-priority server of S1, namely S2, has not failed, and the next-priority server of S4, namely S5, has not failed; therefore the traffic of S1 is switched to S2 and the traffic of S4 is switched to S5.
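The nearest-neighbor failover along the directed ring can be illustrated with a short sketch. The identifiers and the `failover_target` helper are hypothetical, not taken from the patent:

```python
# Illustrative sketch of nearest-neighbor failover on the directed ring: a
# failed server's vip moves to the next server in ring direction that has
# not itself failed.

def failover_target(servers, failed, who):
    """Return the next healthy server after `who` along the ring."""
    n = len(servers)
    start = servers.index(who)
    for step in range(1, n):
        candidate = servers[(start + step) % n]
        if candidate not in failed:
            return candidate
    return None  # every other server has failed

servers = ["S1", "S2", "S3", "S4", "S5"]
failed = {"S1", "S4"}
print(failover_target(servers, failed, "S1"))  # S2: S1's traffic moves here
print(failover_target(servers, failed, "S4"))  # S5: S4's traffic moves here
```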
As an optional implementation, selecting, according to the receiving priority order indicated by the first redundancy protocol, the virtual server with the smallest bearer traffic as the second virtual server includes: determining whether a third virtual server exists among the plurality of virtual servers, where the third virtual server failed earlier than the first virtual server and the priority of the third virtual server is at the level immediately below the priority of the first virtual server; when the third virtual server exists, determining a fourth virtual server and a fifth virtual server according to the receiving priority order indicated by the first redundancy protocol, where the fourth virtual server already bears the traffic transferred from the protocol address bound to the third virtual server and the priority of the fourth virtual server is at the level immediately above the priority of the fifth virtual server; and exchanging the priorities of the fourth virtual server and the fifth virtual server, and taking the fifth virtual server as the second virtual server.
Before the first virtual server fails, if the third virtual server at the next priority level of the first virtual server has already failed, the traffic of the third virtual server has been transferred to the fourth virtual server. If the traffic of the first virtual server were also transferred to the fourth virtual server according to the priority allocation of Table 1, the fourth virtual server would bear the traffic of three servers: the first, the third, and itself, and the traffic distribution among the virtual servers in the directed ring would be unbalanced. Therefore, the priorities of the fourth and fifth virtual servers are exchanged, so that the fourth virtual server bears the traffic of two servers (the third and itself) and the fifth virtual server bears the traffic of two servers (the first and itself), ensuring balanced traffic distribution. With this method, even when several consecutive virtual servers fail, the load of the virtual servers can be distributed as evenly as possible.
As shown in fig. 6, when both S4 and S5 fail, the traffic of both vip4 and vip5 is shifted to S1, so that S1 carries 3 times the traffic of S2 and S3, and the traffic distribution is unbalanced.
As shown in fig. 7, S5 fails before S4, and the traffic of S5 is transferred to S1. To avoid the traffic of S4 also being transferred to S1 when S4 subsequently fails, the vip4 priorities of S1 and S2 need to be exchanged, so that the traffic is switched to S2 when S4 fails, thereby balancing the bearer traffic of the virtual servers.
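The correction amounts to swapping two entries in the vip4 priority row. A minimal sketch, assuming a dictionary layout and helper name that are not part of the patent:

```python
# Illustrative sketch of the correction: after S5 fails and its traffic
# moves to S1, the vip4 priorities of S1 and S2 are exchanged so that a
# later failure of S4 sends its traffic to S2 instead of S1.

def swap_priorities(prio, vip, a, b):
    """Exchange the priorities of servers a and b for the given vip."""
    prio[vip][a], prio[vip][b] = prio[vip][b], prio[vip][a]

# vip4 row of Table 1 before the correction
prio = {"vip4": {"S1": 135, "S2": 75, "S3": 15, "S4": 255, "S5": 195}}
swap_priorities(prio, "vip4", "S1", "S2")
print(prio["vip4"])  # S1=75, S2=135: the vip4 row of Table 2
```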
The priority allocation after correcting the priority order is shown in Table 2, where the vip4 priorities of S1 and S2 are the values after the exchange.
Table 2

        S1    S2    S3    S4    S5
vip1   255   195   135    75    15
vip2    15   255   195   135    75
vip3    75    15   255   195   135
vip4    75   135    15   255   195
vip5   195   135    75    15   255
VRRP priority allocation is the key to realizing multi-active mutual backup within the virtual server group. It solves the problem of the traditional active-standby mode, in which only a single virtual server works while the other virtual servers serve as cold backups; it improves the utilization of server resources and achieves high availability through multi-level mutual backup among all virtual servers. A VRRP priority allocation and correction algorithm based on the directed ring is also provided, so that the service traffic of the available virtual servers remains basically balanced both in the normal state and in failure states.
As an optional implementation manner, after the priorities of the fourth virtual server and the fifth virtual server are exchanged, the method further includes: and under the condition that the fault of the first virtual server is eliminated, restoring the priority ranking sequence of the fourth virtual server and the fifth virtual server.
Based on the same technical concept, an embodiment of the present application further provides a flow distribution device, as shown in fig. 8, the device includes:
a determining module 801, configured to determine, when it is determined that a first virtual server in the multiple virtual servers fails, a first redundancy protocol and a first protocol address corresponding to the first virtual server, where each virtual server is bound to one protocol address, and the protocol address has an associated redundancy protocol, and the redundancy protocol is used to indicate a receiving priority order of the multiple virtual servers on traffic sent to the protocol address;
a forwarding module 802, configured to determine a second virtual server according to a receiving priority order indicated by the first redundancy protocol, and forward the traffic sent to the first protocol address to the second virtual server, where the priority of the second virtual server is lower than the priority of the first virtual server.
Optionally, the apparatus is for:
the virtual servers form a directed ring, and the receiving priority of the traffic sent to the protocol address by each virtual server indicated by the redundancy protocol is reduced in sequence according to the direction of the directed ring.
Optionally, the forwarding module 802 includes:
a first determining unit, configured to determine whether a third virtual server exists among the plurality of virtual servers, where the third virtual server failed earlier than the first virtual server and the priority of the third virtual server is at the level immediately below that of the first virtual server;
a second determining unit, configured to determine, when the third virtual server exists, a fourth virtual server and a fifth virtual server according to the receiving priority order indicated by the first redundancy protocol, where the fourth virtual server already bears the traffic transferred from the protocol address bound to the third virtual server, and the priority of the fourth virtual server is at the level immediately above that of the fifth virtual server;
and the exchanging unit is used for exchanging the priority of the fourth virtual server and the priority of the fifth virtual server, and taking the fifth virtual server as the second virtual server.
Optionally, the forwarding module 802 includes:
a third determining unit, configured to determine, according to a receiving priority order indicated by the first redundancy protocol, whether an adjacent virtual server corresponding to a next priority of the first virtual server fails;
and a fourth determining unit configured to determine that the adjacent virtual server is one of the virtual servers with the smallest bearer traffic and to use the adjacent virtual server as the second virtual server, when the adjacent virtual server does not fail.
Optionally, the apparatus further comprises:
and the recovery module is used for recovering the priority ranking sequence of the fourth virtual server and the fifth virtual server under the condition that the fault elimination of the first virtual server is monitored.
Based on the same technical concept, an embodiment of the present invention further provides an electronic device, as shown in fig. 9, including a processor 901, a communication interface 902, a memory 903, and a communication bus 904, where the processor 901, the communication interface 902, and the memory 903 communicate with one another through the communication bus 904,
a memory 903 for storing computer programs;
the processor 901 is configured to implement the above steps when executing the program stored in the memory 903.
The communication bus mentioned in the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The memory may include a Random Access Memory (RAM) or a Non-volatile memory (NVM), such as at least one disk memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In a further embodiment provided by the present invention, there is also provided a computer readable storage medium having stored therein a computer program which, when executed by a processor, implements the steps of any of the methods described above.
In a further embodiment provided by the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the methods of the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that incorporates one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional like elements in the process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of traffic distribution, the method comprising:
under the condition that a first virtual server in a plurality of virtual servers is determined to be in fault, determining a first redundancy protocol and a first protocol address corresponding to the first virtual server, wherein each virtual server is bound with one protocol address, the protocol address has an associated redundancy protocol, and the redundancy protocol is used for indicating the receiving priority sequence of the plurality of virtual servers on the traffic sent to the protocol address;
and determining a second virtual server with the minimum bearer traffic in the current bearer traffic according to the receiving priority order indicated by the first redundancy protocol, and forwarding the traffic sent to the first protocol address to the second virtual server, wherein the priority of the second virtual server is lower than that of the first virtual server.
2. The method of claim 1, wherein the plurality of virtual servers form a directed ring, and wherein the receiving priority of the traffic sent to the protocol address by each virtual server indicated by the redundancy protocol decreases in order of the direction of the directed ring.
3. The method of claim 2, wherein the determining the second virtual server with the smallest bearer traffic in the current bearer traffic according to the receiving priority order indicated by the first redundancy protocol comprises:
determining whether a third virtual server exists in the plurality of virtual servers, wherein the third virtual server fails earlier than the first virtual server, and the priority of the third virtual server is next to the priority of the first virtual server;
under the condition that the third virtual server exists, determining a fourth virtual server and a fifth virtual server according to the receiving priority sequence indicated by the first redundancy protocol, wherein the fourth virtual server already receives the flow transferred by the protocol address bound by the third virtual server, and the priority of the fourth virtual server is positioned at the upper level of the priority of the fifth virtual server;
and exchanging the priority of the fourth virtual server and the priority of the fifth virtual server, and taking the fifth virtual server as the second virtual server.
4. The method of claim 2, wherein the determining the second virtual server with the smallest bearer traffic in the current bearer traffic according to the receiving priority order indicated by the first redundancy protocol comprises:
determining whether an adjacent virtual server corresponding to the next priority of the first virtual server fails according to the receiving priority sequence indicated by the first redundancy protocol;
determining the neighboring virtual server as the second virtual server if the neighboring virtual server does not fail.
5. The method of claim 3, wherein after prioritizing the fourth virtual server and the fifth virtual server, the method further comprises:
and under the condition that the failure of the first virtual server is monitored to be eliminated, restoring the priority ranking sequence of the fourth virtual server and the fifth virtual server.
6. A flow distribution apparatus, comprising:
the system comprises a determining module, a determining module and a processing module, wherein the determining module is used for determining a first redundancy protocol and a first protocol address corresponding to a first virtual server in a plurality of virtual servers under the condition that the first virtual server is determined to be in fault, each virtual server is bound with one protocol address, the protocol address has an associated redundancy protocol, and the redundancy protocol is used for indicating the receiving priority sequence of the plurality of virtual servers on the flow sent to the protocol address;
and the forwarding module is configured to determine a second virtual server with the smallest bearer traffic in the current bearer traffic according to the receiving priority order indicated by the first redundancy protocol, and forward the traffic sent to the first protocol address to the second virtual server, where the priority of the second virtual server is lower than the priority of the first virtual server.
7. The apparatus of claim 6, wherein the forwarding module comprises:
a first determining unit, configured to determine whether a third virtual server exists in the plurality of virtual servers, where a failure time of the third virtual server is earlier than that of the first virtual server, and a priority of the third virtual server is located at a next level to that of the first virtual server;
a second determining unit, configured to determine, in the presence of the third virtual server, a fourth virtual server and a fifth virtual server according to a receiving priority order indicated by the first redundancy protocol, where the fourth virtual server already supports traffic transferred by a protocol address bound to the third virtual server, and the priority of the fourth virtual server is located at a previous stage of the priority of the fifth virtual server;
and the exchanging unit is used for exchanging the priorities of the fourth virtual server and the fifth virtual server, and taking the fifth virtual server as the second virtual server.
8. The apparatus of claim 6, wherein the forwarding module comprises:
a third determining unit, configured to determine, according to a receiving priority order indicated by the first redundancy protocol, whether an adjacent virtual server corresponding to a next priority of the first virtual server fails;
a fourth determination unit configured to determine the adjacent virtual server as the second virtual server when the adjacent virtual server does not fail.
9. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any one of claims 1 to 5 when executing a program stored in the memory.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of the claims 1-5.
CN202110237195.1A 2021-03-03 2021-03-03 Flow distribution method and device, electronic equipment and computer readable medium Active CN112995054B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110237195.1A CN112995054B (en) 2021-03-03 2021-03-03 Flow distribution method and device, electronic equipment and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110237195.1A CN112995054B (en) 2021-03-03 2021-03-03 Flow distribution method and device, electronic equipment and computer readable medium

Publications (2)

Publication Number Publication Date
CN112995054A CN112995054A (en) 2021-06-18
CN112995054B true CN112995054B (en) 2023-01-20

Family

ID=76352483

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110237195.1A Active CN112995054B (en) 2021-03-03 2021-03-03 Flow distribution method and device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN112995054B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114079636A (en) * 2021-10-25 2022-02-22 深信服科技股份有限公司 Flow processing method, switch, soft load equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102132255A (en) * 2008-05-29 2011-07-20 思杰系统有限公司 Systems and methods for load balancing via a plurality of virtual servers upon failover using metrics from a backup virtual server
CN102934412A (en) * 2010-06-18 2013-02-13 诺基亚西门子通信公司 Server cluster
CN104954182A (en) * 2012-07-27 2015-09-30 北京奇虎科技有限公司 Method and device for configuring virtual server cluster

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018133764A (en) * 2017-02-17 2018-08-23 株式会社リコー Redundant configuration system, changeover method, information processing system, and program
CN107846454A (en) * 2017-10-25 2018-03-27 暴风集团股份有限公司 A kind of resource regulating method, device and CDN system


Also Published As

Publication number Publication date
CN112995054A (en) 2021-06-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant