CN113206754B - Method and device for realizing load sharing


Info

Publication number
CN113206754B
CN113206754B
Authority: CN (China)
Prior art keywords: backup, array, backup group, group, load sharing
Legal status: Active
Application number: CN202110336937.6A
Other languages: Chinese (zh)
Other versions: CN113206754A (en)
Inventor: 吴东坡
Current Assignee: New H3C Security Technologies Co Ltd
Original Assignee: New H3C Security Technologies Co Ltd
Application filed by New H3C Security Technologies Co Ltd
Priority claimed from application CN202110336937.6A
Publication of CN113206754A
Application granted
Publication of CN113206754B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/0826 Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability, for reduction of network costs
    • H04L 41/0663 Performing the actions predefined by failover planning, e.g. switching to standby network elements
    • H04L 41/0836 Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability, to enhance reliability, e.g. reduce downtime

Abstract

The application provides a method and a device for realizing load sharing. The method is applied to a first BRAS and comprises the following steps: when a first backup group is abnormal, selecting a second backup group from a first number of backup groups according to a load sharing algorithm, so that the second backup group carries the service traffic of the first backup group, where the second backup group is a backup group, among the first number of backup groups, other than the first backup group; and when all of the first number of backup groups are abnormal, or the number of abnormal backup groups among the first number of backup groups exceeds a preset number, downgrading the first BRAS to a standby frame, so that a second BRAS is upgraded to the main frame and a second number of backup groups carry the service traffic of the first number of backup groups.

Description

Method and device for realizing load sharing
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method and an apparatus for implementing load sharing.
Background
In an operator network, when multiple Broadband Remote Access Servers (BRAS) are equipped with Carrier-Grade NAT (CGN) service boards, main and standby CGN service boards can be configured across the main and standby BRAS devices to implement CGN backup between the two frames. The inter-frame backup mechanism keeps the data on the CGN service boards of the main and standby BRAS devices consistent, so that a main/standby switchover is triggered not only when the main CGN service board fails, but also when the main BRAS device or a link fails. Services thus keep running normally, and users neither perceive the fault nor feel its impact.
As the number of users grows, binding a single CGN service board to one Network Address Translation (NAT) instance can no longer satisfy the NAT services of a large number of users. With load sharing among CGN service boards, multiple CGN service boards can be bound under one NAT instance to increase the NAT bandwidth available to users. Expanding the number of CGN boards in the load sharing group also lets a single BRAS device carry the NAT services of more users, saving BRAS deployment cost.
Hot standby between two frames keeps the network running reliably, while load sharing among CGN service boards lets a single BRAS device carry the NAT services of more users; deploying dual-frame backup on top of load sharing is therefore a common scheme in operator networking.
As shown in fig. 1, fig. 1 is a diagram of a conventional dual-frame hot standby deployment networking. In fig. 1, BRAS1 is the main frame, BRAS2 is the standby frame, and the same NAT instance 1 is configured in both frames. The backup groups bound under NAT instance 1 in the main and standby frames are mapped one to one, and each pair of backup groups requires a Virtual Router Redundancy Protocol (VRRP) detection link to decide which member of the pair is the master.
When the main CGN service board of backup group 1 in the BRAS1 main frame becomes abnormal, the VRRP link decision for backup group 1 upgrades the standby CGN service board of backup group 1 in the BRAS2 standby frame to the main CGN service board. Afterwards, access-side traffic that was directed to backup group 1 in the BRAS1 main frame must be steered through the protection tunnel established between BRAS1 and BRAS2 to the newly upgraded CGN service board of backup group 1 in the BRAS2 standby frame, which executes the NAT service.
This dual-frame hot standby deployment has the following defects: 1) the BRAS main frame and standby frame must be configured with symmetrical, one-to-one mapped backup groups, and each pair of backup groups needs its own VRRP detection link to decide which member is the master, so many VRRP connections must be established, raising the deployment cost;
2) when a VRRP detection link becomes abnormal, the CGN service boards in a backup group of both the main and standby frames may each be elevated to the main CGN service board (a dual-master condition), disrupting normal service processing;
3) when a backup group is abnormal, service traffic must be transparently transmitted across frames through the protection tunnel. Carrying a large volume of service traffic requires the protection tunnel to provide greater bandwidth or to be load shared itself, which again raises the deployment cost.
Disclosure of Invention
In view of this, the present application provides a method and an apparatus for implementing load sharing, so as to solve the high deployment cost of the existing dual-frame hot standby deployment networking and the dual-master condition that disrupts normal service processing.
In a first aspect, the present application provides a method for implementing load sharing, where the method is applied to a first BRAS, the first BRAS is in a dual-frame hot standby networking and is the main frame, a first service instance having a first number of backup groups is configured in the first BRAS, the dual-frame hot standby networking further includes a second BRAS, the second BRAS is the standby frame, and the second BRAS is configured with the first service instance having a second number of backup groups; the method comprises the following steps:
when a first backup group is abnormal, selecting a second backup group from the first number of backup groups according to a load sharing algorithm, so that the second backup group carries the service traffic of the first backup group, where the second backup group is a backup group, among the first number of backup groups, other than the first backup group;
when all of the first number of backup groups are abnormal, or the number of abnormal backup groups among the first number of backup groups exceeds a preset number, downgrading the first BRAS to the standby frame, so that the second BRAS is upgraded to the main frame and the second number of backup groups carry the service traffic of the first number of backup groups.
In a second aspect, the present application provides a device for implementing load sharing, where the device is applied to a first BRAS, the first BRAS is in a dual-frame hot standby networking and is the main frame, a first service instance with a first number of backup groups is configured in the first BRAS, the dual-frame hot standby networking further includes a second BRAS, the second BRAS is the standby frame, and the first service instance with a second number of backup groups is configured in the second BRAS; the device comprises:
a selecting unit, configured to select, when a first backup group is abnormal, a second backup group from the first number of backup groups according to a load sharing algorithm, so that the second backup group carries the service traffic of the first backup group, where the second backup group is a backup group, among the first number of backup groups, other than the first backup group;
and a processing unit, configured to downgrade the first BRAS to the standby frame when all of the first number of backup groups are abnormal, or when the number of abnormal backup groups among the first number of backup groups exceeds a preset number, so that the second BRAS is upgraded to the main frame and the service traffic of the first number of backup groups is carried by the second number of backup groups.
In a third aspect, the present application provides a network device comprising a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor, the processor being caused by the machine-executable instructions to perform the method provided by the first aspect of the present application.
Therefore, by applying the method and the device for implementing load sharing provided by the present application, when the first backup group is abnormal, the first BRAS selects, according to a load sharing algorithm, the second backup group from the first number of backup groups, so that the second backup group carries the first service traffic of the first backup group, the second backup group being a backup group, among the first number of backup groups, other than the first backup group; when all of the first number of backup groups are abnormal, or the number of abnormal backup groups exceeds a preset number, the first BRAS downgrades itself to the standby frame, so that the second BRAS is upgraded to the main frame and carries the second service traffic of the first number of backup groups through the second number of backup groups.
In this way, when one backup group of the service instance in the main frame becomes abnormal, the other backup groups of that service instance preferentially take over its load, which improves reliability when a single-frame backup group fails and reduces the pressure of transparently transmitting service traffic across frames through the protection tunnel. Meanwhile, the high deployment cost of the existing dual-frame hot standby deployment networking and the dual-master condition that disrupts normal service processing are both resolved.
Drawings
Fig. 1 is a diagram of a conventional dual-server hot-standby deployment networking;
fig. 2 is a flowchart of a method for implementing load sharing according to an embodiment of the present application;
fig. 3 is a diagram of a dual-computer hot-standby deployment networking provided in the embodiment of the present application;
FIG. 4-A is a diagram illustrating an example of a load sharing array according to an embodiment of the present application;
fig. 4-B is an exemplary diagram of a load sharing array after an exception occurs in the backup set 1 according to an embodiment of the present disclosure;
fig. 4-C is an exemplary diagram of an abnormal load sharing array of the backup set 4 after the backup set 1 is abnormal according to the embodiment of the present application;
fig. 4-D is an exemplary diagram of an abnormal load sharing array of the backup set 4 after the backup set 1 is restored to normal according to the embodiment of the present application;
fig. 4-E is an exemplary diagram of a load sharing array where the backup group 4 is restored to normal after the backup group 1 is restored to normal according to the embodiment of the present application;
FIG. 4-F is a diagram illustrating another example of load sharing groups according to an embodiment of the present application;
fig. 5 is a structural diagram of an apparatus for implementing load sharing according to an embodiment of the present application;
fig. 6 is a hardware structure of a network device according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the corresponding listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if," as used herein, may be interpreted as "when" or "upon" or "in response to a determination," depending on the context.
The following describes in detail a method for implementing load sharing provided in the embodiments of the present application. Referring to fig. 2, fig. 2 is a flowchart of a method for implementing load sharing according to an embodiment of the present application. The method is applied to a first BRAS, and the method for implementing load sharing provided by the embodiment of the application may include the following steps.
Step 210: when a first backup group is abnormal, select a second backup group from the first number of backup groups according to a load sharing algorithm, so that the second backup group carries the first service traffic of the first backup group, where the second backup group is a backup group, among the first number of backup groups, other than the first backup group.
Specifically, as shown in fig. 3, fig. 3 is a diagram of a dual-frame hot standby deployment networking provided in an embodiment of the present application. In fig. 3, the first BRAS is located in the dual-frame hot standby networking and is the main frame, and a first service instance having a first number of backup groups is configured in the first BRAS. For example, 4 backup groups, namely backup group 1, backup group 2, backup group 3, and backup group 4, are configured in the first service instance, and each backup group contains a service main board and a service standby board. In the embodiment of the present application, the first service instance may specifically be a NAT instance, the service main board may specifically be a CGN main board, and the service standby board may specifically be a CGN standby board.
The dual-frame hot standby networking further includes a second BRAS, the second BRAS is the standby frame, and the first service instance with a second number of backup groups is configured in the second BRAS. For example, 3 backup groups, namely backup group 1', backup group 2', and backup group 3', are configured in the first service instance, and each backup group likewise contains a service main board (a CGN main board) and a service standby board (a CGN standby board).
A Virtual Service Redundancy Protocol (VSRP) connection is established between the first BRAS and the second BRAS. Two channels are established within the VSRP connection: a control channel, used to negotiate the main/standby relationship between the BRASs and to synchronize the backup entries between them; and a protection tunnel, used to steer and transparently transmit service traffic between the BRASs.
A VRRP connection is also established between the first BRAS and the second BRAS to negotiate the main/standby relationship. In the embodiment of the application, the NAT service instance is bound to the VSRP instance, and the VSRP instance is bound to VRRP. Because VSRP is bound to VRRP, the main/standby relationship of VSRP follows the relationship decided by VRRP; and because the NAT instance is bound to VSRP, the main/standby relationship of the NAT instance follows that of VSRP.
The first BRAS and the second BRAS each access the external network through a CR (core router), and the PC accesses the first BRAS and the second BRAS through a SW (switch).
In this embodiment of the present application, in the first BRAS (the main frame), the CGN main board included in backup group 1 carries the NAT service. When that CGN main board becomes abnormal (for example, fails), the first BRAS upgrades the CGN standby board included in backup group 1 to the new CGN main board, and the new main board carries the NAT traffic.
It can be understood that each backup group in the first service instance carries the NAT services of different users, with the CGN main board of each backup group carrying the traffic. For example, backup group 1 carries the NAT services of user 1 and user 2; backup group 2 carries the NAT services of user 3, user 4, and user 5; backup group 3 carries the NAT service of user 6; and backup group 4 carries the NAT services of user 7 and user 8.
In the embodiment of the present application, the first number of backup groups configured in the first BRAS is characterized by a load sharing array, and the first number of backup groups share the load of the NAT service. The value of each array element in the load sharing array is the identifier of one of the first number of backup groups, and instructs the first BRAS to direct service traffic that hashes to that element's index to the backup group identified by the element's value.
The hash depth of the load sharing array is n × (n - 1), where n is the number of backup groups configured within the first service instance. In the embodiment of the present application, the depth of the load sharing array is 4 × 3 = 12, with array indices 0 to 11, for 12 indices in total. As shown in fig. 4-A, fig. 4-A is an exemplary diagram of a load sharing array according to the embodiment of the present application, where the numbers above the boxes are the indices of the array elements, and the numbers inside the boxes are the values of the array elements (i.e., the identifier of backup group 1 is 1, the identifier of backup group 2 is 2, the identifier of backup group 3 is 3, and the identifier of backup group 4 is 4).
The number of array elements in the load sharing array can be divided evenly by the number of any set of in-place backup groups in the load sharing array, so that each backup group in the load sharing array corresponds to the same number of array elements.
It can be understood that, when a CGN board processes the NAT service, it needs public network address resources to translate private network IP addresses into public network IP addresses. The first BRAS divides the public network address resources according to the number of array elements in the load sharing array, obtaining a plurality of public network address sub-resources, and binds one public network address sub-resource to each array element.
For example, after the public network address resources are divided, 12 public network address sub-resources are obtained. The first BRAS binds public network address resource 1 to array element 0, binds public network address resource 2 to array element 1, and so on. In this way, when traffic arrives, the backup group corresponding to array element 0 executes the NAT service using public network address resource 1.
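To make the array mechanics concrete, the following minimal Python sketch reproduces the layout and address binding described above. It is an illustration only: the function name build_share_array and the "public-resource-k" labels are assumptions of this sketch; only the depth formula n × (n - 1) and the element-to-group and element-to-resource bindings come from the description.

    N_GROUPS = 4                       # backup groups 1..4 in the first service instance
    DEPTH = N_GROUPS * (N_GROUPS - 1)  # hash depth n * (n - 1) = 4 * 3 = 12

    def build_share_array(n_groups: int, depth: int) -> list:
        # Fill round-robin so each in-place group owns the same number of
        # elements; reproduces fig. 4-A, where elements 0, 4, 8 hold group 1.
        return [(i % n_groups) + 1 for i in range(depth)]

    share_array = build_share_array(N_GROUPS, DEPTH)   # [1, 2, 3, 4, 1, 2, ...]

    # One public network address sub-resource per array element: sub-resource
    # k is bound to element k, so whichever group owns element k uses it.
    address_binding = {idx: "public-resource-%d" % (idx + 1) for idx in range(DEPTH)}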
According to the foregoing, when the CGN main board in backup group 1 is abnormal, the CGN standby board included in backup group 1 is upgraded to the new CGN main board, and the new main board carries the NAT traffic. If the CGN standby board is also abnormal, that is, all CGN boards in backup group 1 are abnormal, the first BRAS selects other backup groups (also referred to as second backup groups) from the first number of backup groups according to a load sharing algorithm, so that the second backup groups carry the service traffic of backup group 1. The second backup groups are the backup groups, among the first number of backup groups, other than backup group 1.
Further, the first BRAS selects the other backup groups from the first number of backup groups according to the load sharing algorithm as follows. It searches the load sharing array for the first array elements whose value is the identifier of backup group 1, using that identifier as the key. It then determines a first traffic allocation rule according to the number of these first array elements (e.g., 3) and the number of in-place, normal second backup groups in the load sharing array (e.g., 3). According to the first traffic allocation rule, it updates the value of each first array element to the identifier of a second backup group and binds the first public network address resource bound to that array element to the identifier of that second backup group, so that the second backup group carries the service traffic. In this way, the service traffic carried by the abnormal backup group is distributed among the normal backup groups.
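A hedged sketch of this selection step follows, continuing the arrays from the sketch above. The description fixes the inputs (the failed group's elements, the in-place normal groups) and the outcome (an even split plus address rebinding); the helper names and the exact assignment order are assumptions of the sketch.

    from collections import defaultdict

    failure_log = defaultdict(list)   # per-element linked list of abnormal owners

    def redistribute_on_failure(share_array, failed, alive_groups, failure_log):
        # First array elements: those whose value is the failed group's id.
        victims = [i for i, g in enumerate(share_array) if g == failed]
        # An even split is possible when len(victims) divides by len(alive_groups).
        for k, idx in enumerate(victims):
            takeover = alive_groups[k % len(alive_groups)]
            share_array[idx] = takeover        # update the element's value
            failure_log[idx].append(failed)    # linked-list record (see below)
            # ... rebind address_binding[idx] to `takeover` here ...

    # Fig. 4-B: groups 4, 3, 2 take over elements 0, 4, 8 of failed group 1.
    redistribute_on_failure(share_array, failed=1, alive_groups=[4, 3, 2],
                            failure_log=failure_log)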
As an embodiment, the determined first traffic allocation rule may be an even-split rule, so that the service traffic carried by the abnormal backup group is distributed evenly across the normal backup groups, while the service traffic already carried by the normal backup groups does not participate in the redistribution.
As shown in fig. 4-B, fig. 4-B is an exemplary diagram of the load sharing array after backup group 1 becomes abnormal, according to the embodiment of the present application. Before backup group 1 became abnormal, the values of array elements 0, 4, and 8 were the identifier of backup group 1. After backup group 1 becomes abnormal, these three array elements can be divided evenly among the second backup groups: the values of array elements 0, 4, and 8 are updated to the identifiers of backup group 4, backup group 3, and backup group 2, respectively. Meanwhile, public network address resource 1 (bound to array element 0) is bound to backup group 4, public network address resource 5 (bound to array element 4) is bound to backup group 3, and public network address resource 9 (bound to array element 8) is bound to backup group 2.
If backup group 4 then also becomes abnormal, as shown in fig. 4-C, fig. 4-C is an exemplary diagram of the load sharing array after backup group 4 becomes abnormal while backup group 1 is abnormal, according to the embodiment of the present application. In fig. 4-C, the values of array elements 0, 3, 7, and 11 are the identifier of backup group 4. After backup group 4 becomes abnormal, these four array elements can be divided evenly among the remaining second backup groups: the values of array elements 0 and 7 are updated to the identifier of backup group 3, and the values of array elements 3 and 11 are updated to the identifier of backup group 2. Meanwhile, public network address resources 1 and 8 (bound to array elements 0 and 7) are bound to backup group 3, and public network address resources 4 and 12 (bound to array elements 3 and 11) are bound to backup group 2.
Furthermore, after updating the values of array elements 0, 4, and 8 to the identifiers of other backup groups, the first BRAS also records the identifier of backup group 1 in the linked lists corresponding to array elements 0, 4, and 8; each linked list records the identifiers of the abnormal backup groups for its array element.
If backup group 1 is restored to normal, the first BRAS searches the load sharing array for second array elements, i.e., elements whose corresponding linked list records the identifier of an abnormal backup group. If the number of array elements in the load sharing array can be divided evenly by the number of any set of in-place backup groups, and each second backup group corresponds to the same number of array elements, the first BRAS determines a second traffic allocation rule according to the number of array elements in the load sharing array and the number of currently in-place, normal backup groups. The second traffic allocation rule allocates the same number of array elements from those corresponding to each second backup group back to the first backup group, so that every in-place, normal backup group again corresponds to the same number of array elements.
According to the second traffic allocation rule, the first BRAS selects a third number of array elements from the second array elements corresponding to each second backup group, updates the values of the selected array elements to the identifier of backup group 1, and binds the second public network address resources bound to those array elements to the identifier of backup group 1, so that backup group 1 carries the service traffic of the abnormal backup group.
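Under the same assumptions, the recovery step can be sketched as follows, continuing the sketch above: the fair share is len(share_array) // number of in-place groups, and only elements whose linked list is non-empty are candidates to move back. Which particular surplus elements each donor gives up is not fixed by the description (figs. 4-D and 4-E show one valid choice), so the selection below is an assumption.

    def rebalance_on_recovery(share_array, recovered, in_place_groups, failure_log):
        fair_share = len(share_array) // len(in_place_groups)
        # Second array elements: elements whose linked list records an
        # abnormal backup group, keyed by their current owner.
        candidates = defaultdict(list)
        for idx, owner in enumerate(share_array):
            if owner != recovered and failure_log[idx]:
                candidates[owner].append(idx)
        for owner, idxs in candidates.items():
            owned = sum(1 for g in share_array if g == owner)
            surplus = owned - fair_share       # the "third number" per donor
            for idx in idxs[:max(surplus, 0)]:
                share_array[idx] = recovered   # hand the element back
                # ... rebind address_binding[idx] to `recovered` here ...

After the call, every in-place, normal group owns the same number of elements again, which is the only outcome the description requires.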
As shown in fig. 4-D, fig. 4-D is an exemplary diagram of the load sharing array with backup group 4 still abnormal after backup group 1 is restored to normal, according to the embodiment of the present application. After backup group 1 recovers, backup group 4 is still abnormal. The linked lists corresponding to array elements 0, 3, 7, and 11 record the identifiers of the abnormal backup groups, for example, the identifiers of backup group 1 and backup group 4. The number of array elements in the load sharing array (12) can be divided evenly by the number of in-place backup groups (e.g., 2: backup group 2 and backup group 3), with backup group 2 and backup group 3 each corresponding to the same number of array elements (6). Two array elements (array elements 3 and 11) are therefore selected from those corresponding to backup group 2, and two (array elements 0 and 7) from those corresponding to backup group 3, and the values of array elements 0, 3, 7, and 11 are updated to the identifier of backup group 1. Meanwhile, public network address resources 1, 4, 8, and 12 (bound to array elements 0, 3, 7, and 11, respectively) are bound to backup group 1.
If backup group 4 then also recovers, as shown in fig. 4-E, fig. 4-E is an exemplary diagram of the load sharing array after backup group 4 is restored to normal following the recovery of backup group 1, according to the embodiment of the present application. When backup group 4 recovers, the linked lists corresponding to array elements 0, 4, and 8 record the identifiers of the abnormal backup groups, for example, the identifiers of backup group 1 and backup group 4. The number of array elements in the load sharing array (12) can be divided evenly by the number of in-place backup groups (e.g., 3: backup group 1, backup group 2, and backup group 3), with each of these groups corresponding to the same number of array elements (4). One array element is therefore selected from those corresponding to each group: array element 0 from backup group 1, array element 4 from backup group 2, and array element 8 from backup group 3, and the values of array elements 0, 4, and 8 are updated to the identifier of backup group 4. Meanwhile, public network address resources 1, 5, and 9 (bound to array elements 0, 4, and 8, respectively) are bound to backup group 4.
In this way, when one backup group of the service instance in the main frame becomes abnormal, the other backup groups of that service instance preferentially take over its load, which improves reliability when a single-frame backup group fails and reduces the pressure of transparently transmitting service traffic across frames through the protection tunnel.
Step 220: when all of the first number of backup groups are abnormal, or the number of abnormal backup groups among the first number of backup groups exceeds a preset number, the first BRAS downgrades itself to the standby frame, so that the second BRAS is upgraded to the main frame and the second service traffic of the first number of backup groups is carried by the second number of backup groups.
Specifically, following the description of step 210, when backup group 1, backup group 2, backup group 3, and backup group 4 configured in the first service instance are all abnormal, or the number of abnormal backup groups among the 4 exceeds a preset number (for example, the preset number is 2 and backup groups 1, 2, and 3 are abnormal), the first BRAS downgrades itself to the standby frame, and the second BRAS upgrades itself to the main frame.
At this point, the second number of backup groups configured in the second BRAS carry the service traffic of the first number of backup groups.
It can be understood that the procedure by which the first BRAS downgrades itself to the standby frame, and the procedure by which the second BRAS upgrades itself to the main frame, are the same as the existing procedures for adjusting the main/standby relationship and are not repeated here.
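The trigger condition of step 220 reduces to a simple predicate over the backup group states; the sketch below (function name assumed) captures the two alternatives named in the description.

    def main_frame_should_downgrade(group_is_normal, preset_number):
        # group_is_normal: backup-group id -> True while the group is normal.
        abnormal = sum(1 for ok in group_is_normal.values() if not ok)
        return abnormal == len(group_is_normal) or abnormal > preset_number

    # Example from the text: preset number 2; groups 1-3 abnormal, group 4 normal.
    assert main_frame_should_downgrade({1: False, 2: False, 3: False, 4: True}, 2)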
It should be noted that, in the embodiment of the present application, the second number of backup groups configured in the second BRAS is also represented by a load sharing array, and the second number differs from the first number; in this example the second number is smaller, e.g., 3. The second number of backup groups share the load of the NAT service. The value of each array element in this load sharing array is the identifier of one of the second number of backup groups, and instructs the second BRAS to direct service traffic that hashes to that element's index to the backup group identified by the element's value.
In the embodiment of the application, the depth of this load sharing array is 12, with array indices 0 to 11, for 12 indices in total. As shown in fig. 4-F, fig. 4-F is another exemplary diagram of a load sharing array according to the embodiment of the present application, where the numbers above the boxes are the indices of the array elements, and the numbers inside the boxes are the values of the array elements (i.e., the identifier of backup group 1 is 1, the identifier of backup group 2 is 2, and the identifier of backup group 3 is 3).
The number of array elements in this load sharing array can likewise be divided evenly by the number of any set of in-place backup groups, so that each backup group in the array corresponds to the same number of array elements.
Further, in the embodiment of the application, the VSRP connection established between the first BRAS and the second BRAS comprises a control channel and a protection tunnel. Through the control channel, the first BRAS sends a synchronous backup message to the second BRAS; the message includes the user table, the public network address resources, and the resource array binding relationship. After receiving the synchronous backup message, the second BRAS extracts and stores the user table, the public network address resources, and the resource array binding relationship, distributes service traffic to the second number of backup groups according to the user table and the public network address resources, and synchronizes the resource array binding relationship into its own load sharing array.
The resource array binding relationship is specifically the binding between each public network address sub-resource obtained after the public network address resources are divided and each array element of the load sharing array in the first BRAS.
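The payload of the synchronous backup message can be pictured as below. The field names are assumptions of this sketch; the description only enumerates the three pieces of content and the channel they travel on.

    from dataclasses import dataclass

    @dataclass
    class SyncBackupMessage:
        user_table: dict          # per-user entries synchronized to the standby frame
        public_resources: list    # the divided public network address sub-resources
        resource_bindings: dict   # array element index -> public address sub-resource

    msg = SyncBackupMessage(
        user_table={"user-1": "entry"},
        public_resources=["public-resource-1", "public-resource-2"],
        resource_bindings={0: "public-resource-1", 1: "public-resource-2"},
    )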
In one example, the first BRAS binds public network address resource 1 to array element 0 and public network address resource 2 to array element 1. After the first BRAS synchronizes the resource array binding relationship to the second BRAS, the second BRAS likewise binds public network address resource 1 to array element 0 and public network address resource 2 to array element 1 in its own load sharing array.
Thus, with this public network resource allocation in place, when a BRAS receives service traffic, it performs a hash calculation over the private network source IP and the private network source VPN carried in the service packet, and the user's service traffic is shared onto the backup group corresponding to the resulting index of the load sharing array. The first BRAS and the second BRAS execute load sharing in the same way, and the public network address resources bound to the array elements of the load sharing array in the second BRAS are the same as those bound in the first BRAS. That is, even when the NAT instances configured in the first BRAS and the second BRAS have different numbers of backup groups, the service traffic and the public network address resources used to process that traffic are shared onto the same backup group.
Similarly, when the first BRAS and the second BRAS transparently transmit service traffic through the protection tunnel, the second BRAS performs the same hash calculation over the private network source IP and source VPN, which guarantees that the traffic is directed to the backup group that processes that same user's service traffic.
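Both frames can therefore agree on the flow-to-group mapping because they hash the same keys over same-depth arrays. The description does not name the hash function, so CRC32 below is an illustrative stand-in.

    import zlib

    def pick_backup_group(share_array, private_src_ip, private_src_vpn):
        # Hash the private network source IP and source VPN to an array index;
        # the group (and sub-resource) bound to that index serves the flow.
        key = ("%s/%s" % (private_src_ip, private_src_vpn)).encode()
        idx = zlib.crc32(key) % len(share_array)
        return share_array[idx]

    # The same call on either frame lands a given user on the same index, so
    # the backup group and its bound public address sub-resource stay aligned.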
Therefore, by applying the method and the device for implementing load sharing provided by the present application, when the first backup group is abnormal, the first BRAS selects, according to a load sharing algorithm, the second backup group from the first number of backup groups, so that the second backup group carries the first service traffic of the first backup group, the second backup group being a backup group, among the first number of backup groups, other than the first backup group; when all of the first number of backup groups are abnormal, or the number of abnormal backup groups exceeds a preset number, the first BRAS downgrades itself to the standby frame, so that the second BRAS is upgraded to the main frame and the second service traffic of the first number of backup groups is carried by the second number of backup groups.
In this way, when one backup group of the service instance in the main frame becomes abnormal, the other backup groups of that service instance preferentially take over its load, which improves reliability when a single-frame backup group fails and reduces the pressure of transparently transmitting service traffic across frames through the protection tunnel. Meanwhile, the high deployment cost of the existing dual-frame hot standby deployment networking and the dual-master condition that disrupts normal service processing are both resolved.
Based on the same inventive concept, an embodiment of the present application further provides an apparatus for implementing load sharing, corresponding to the above method. Referring to fig. 5, fig. 5 is a structural diagram of an apparatus for implementing load sharing according to an embodiment of the present application. The apparatus is applied to a first BRAS, which is located in a dual-frame hot standby networking and is the main frame; a first service instance with a first number of backup groups is configured in the first BRAS; the dual-frame hot standby networking further includes a second BRAS, which is the standby frame and is configured with the first service instance with a second number of backup groups. The apparatus comprises:
a selecting unit 510, configured to select, when a first backup group is abnormal, a second backup group from the first number of backup groups according to a load sharing algorithm, so that the second backup group carries the service traffic of the first backup group, where the second backup group is a backup group, among the first number of backup groups, other than the first backup group;
a processing unit 520, configured to downgrade the first BRAS to the standby frame when all of the first number of backup groups are abnormal, or when the number of abnormal backup groups among the first number of backup groups exceeds a preset number, so that the second BRAS is upgraded to the main frame and the service traffic of the first number of backup groups is carried by the second number of backup groups.
Optionally, each backup group includes a service main board and a service standby board;
the processing unit 520 is further configured to, when the service main board included in the first backup group is abnormal, upgrade the service standby board included in the first backup group to a new service main board, the new service main board carrying the service traffic.
Optionally, the first number of backup groups is represented by a load sharing array, and the values of the array elements in the load sharing array are the identifiers of the first number of backup groups; the apparatus further comprises:
a dividing unit (not shown in the figure), configured to divide the public network address resources according to the number of array elements in the load sharing array, to obtain a plurality of public network address sub-resources;
a binding unit (not shown in the figure), configured to bind each public network address sub-resource to an array element;
the selecting unit 510 specifically includes: a first searching subunit, configured to search the load sharing array for first array elements according to the identifier of the first backup group, the value of a first array element being the identifier of the first backup group;
a first determining subunit (not shown in the figure), configured to determine a first traffic allocation rule according to the number of the first array elements and the number of in-place, normal second backup groups in the load sharing array;
a first updating subunit (not shown in the figure), configured to update the value of each first array element to the identifier of a second backup group according to the first traffic allocation rule, and to bind the first public network address resource bound to that array element to the identifier of the second backup group, so that the second backup group carries the first service traffic;
wherein the number of array elements in the load sharing array can be divided evenly by the number of any set of in-place backup groups in the load sharing array;
and each backup group in the load sharing array corresponds to the same number of array elements.
Optionally, the first determining subunit (not shown in the figure) is specifically configured to determine the first traffic allocation rule when the number of the first array elements can be divided evenly by the number of the second backup groups, the first traffic allocation rule being to distribute the first service traffic evenly across the second backup groups.
Optionally, the selecting unit 510 further includes: a recording subunit (not shown in the figure), configured to record the identifier of the first backup group in the linked list corresponding to each first array element, the linked list being used to record the identifiers of abnormal backup groups;
a second searching subunit (not shown in the figure), configured to search the load sharing array for second array elements if the first backup group is restored to normal, the linked list corresponding to a second array element recording the identifier of an abnormal backup group;
a second determining subunit (not shown in the figure), configured to, if the number of array elements in the load sharing array can be divided evenly by the number of any set of in-place backup groups and each second backup group corresponds to the same number of array elements, determine a second traffic allocation rule according to the number of array elements in the load sharing array and the number of currently in-place, normal backup groups, the second traffic allocation rule being to allocate the same number of array elements from those corresponding to each second backup group back to the first backup group, so that each in-place, normal backup group corresponds to the same number of array elements;
a selecting subunit (not shown in the figure), configured to select, according to the second traffic allocation rule, a third number of array elements from the second array elements corresponding to each second backup group;
a second updating subunit (not shown in the figure), configured to update the values of the selected third number of array elements to the identifier of the first backup group, and to bind the second public network address resources bound to those array elements to the identifier of the first backup group, so that the first backup group carries the service traffic of the abnormal backup group.
Optionally, a VSRP connection has been established between the first BRAS and the second BRAS, the VSRP connection comprising a control channel; the apparatus further comprises:
a sending unit (not shown in the figure), configured to send a synchronous backup message to the second BRAS through the control channel, the synchronous backup message including the user table, the public network address resources, and the resource array binding relationship, so that the second BRAS stores the user table, the public network address resources, and the resource array binding relationship, distributes service traffic to the second number of backup groups according to the user table and the public network address resources, and synchronizes the resource array binding relationship into the load sharing array included in the second BRAS;
wherein the resource array binding relationship is specifically the binding between each public network address sub-resource obtained after the public network address resources are divided and each array element of the load sharing array in the first BRAS.
Therefore, by applying the apparatus for implementing load sharing provided by the present application, when the first backup group is abnormal, the apparatus selects, according to a load sharing algorithm, a second backup group from the first number of backup groups, so that the second backup group carries the first service traffic of the first backup group, the second backup group being a backup group, among the first number of backup groups, other than the first backup group; when all of the first number of backup groups are abnormal, or the number of abnormal backup groups exceeds a preset number, the apparatus downgrades the first BRAS to the standby frame, so that the second BRAS is upgraded to the main frame and the second service traffic of the first number of backup groups is carried by the second number of backup groups.
In this way, when one backup group of the service instance in the main frame becomes abnormal, the other backup groups of that service instance preferentially take over its load, which improves reliability when a single-frame backup group fails and reduces the pressure of transparently transmitting service traffic across frames through the protection tunnel. Meanwhile, the high deployment cost of the existing dual-frame hot standby deployment networking and the dual-master condition that disrupts normal service processing are both resolved.
Based on the same inventive concept, the present application further provides a network device. As shown in fig. 6, the network device includes a processor 610, a transceiver 620, and a machine-readable storage medium 630; the machine-readable storage medium 630 stores machine-executable instructions that can be executed by the processor 610, and the machine-executable instructions cause the processor 610 to perform the method for implementing load sharing provided by the present application. The apparatus for implementing load sharing shown in fig. 5 may be implemented with the hardware structure of the network device shown in fig. 6.
The machine-readable storage medium 630 may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk memory. Optionally, the machine-readable storage medium 630 may also be at least one storage device located remotely from the processor 610.
The processor 610 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In the embodiment of the present application, the processor 610 reads the machine-executable instructions stored in the machine-readable storage medium 630, and the instructions cause the processor 610 and the transceiver 620 to perform the method for implementing load sharing described in the embodiment of the present application.
Additionally, the present application provides the machine-readable storage medium 630, which stores machine-executable instructions that, when invoked and executed by the processor 610, cause the processor 610 to invoke the transceiver 620 and execute the method for implementing load sharing described in the present application.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
As for the implementation apparatus of load sharing and the machine-readable storage medium embodiment, since the content of the related method is substantially similar to that of the foregoing method embodiment, the description is relatively simple, and for the relevant points, reference may be made to part of the description of the method embodiment.
The above description is only a preferred embodiment of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A method for realizing load sharing, wherein the method is applied to a first BRAS, the first BRAS is located in a dual-frame hot standby networking and is the main frame, a first service instance with a first number of backup groups is configured in the first BRAS, the dual-frame hot standby networking further includes a second BRAS, the second BRAS is the standby frame, and the first service instance with a second number of backup groups is configured in the second BRAS; the method comprises:
when a first backup group is abnormal, selecting a second backup group from the first number of backup groups according to a load sharing algorithm, so that the second backup group carries the service traffic of the first backup group, where the second backup group is a backup group, among the first number of backup groups, other than the first backup group;
when all of the first number of backup groups are abnormal, or the number of abnormal backup groups among the first number of backup groups exceeds a preset number, downgrading the first BRAS to the standby frame, so that the second BRAS is upgraded to the main frame and the service traffic of the first number of backup groups is carried by the second number of backup groups;
wherein each backup group comprises a CGN main board and a CGN standby board;
and wherein, before selecting the second backup group from the first number of backup groups according to the load sharing algorithm when the first backup group is abnormal, the method further comprises:
when the service main board included in the first backup group is abnormal, upgrading the service standby board included in the first backup group to a new service main board, the new service main board carrying the service traffic.
2. The method of claim 1, wherein the first number of backup groups are characterized by a load sharing array, and the values of the array elements in the load sharing array are identifiers of the first number of backup groups; the method further comprises:
dividing a public network address resource according to the number of array elements in the load sharing array to obtain a plurality of public network address sub-resources;
binding each public network address sub-resource to a respective array element;
wherein selecting the second backup group from the first number of backup groups according to the load sharing algorithm specifically comprises:
searching the load sharing array, according to the identifier of the first backup group, for first array elements whose value is the identifier of the first backup group;
determining a first traffic distribution rule according to the number of the first array elements and the number of in-place and normal second backup groups in the load sharing array;
updating the values of the first array elements to the identifier of the second backup group according to the first traffic distribution rule, and binding the first public network address resource bound to the first array elements to the identifier of the second backup group, so that the second backup group carries the first service traffic;
wherein the number of array elements in the load sharing array is evenly divisible by the number of any in-place backup groups in the load sharing array;
and each backup group in the load sharing array corresponds to the same number of array elements.
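Purely for illustration, and under assumptions the claim does not spell out (addresses modeled as strings, a hypothetical `split_pool` helper): a sketch of the claim 2 load sharing array, with one public network address sub-resource bound to each array element.

```python
# Sketch of the claim 2 structures (hypothetical names). The array length
# is chosen to be divisible by any plausible in-place group count, so each
# group always owns an equal number of elements.

def build_load_sharing_array(group_ids, slots_per_group):
    """Array element value = identifier of the backup group that owns it."""
    return [gid for gid in group_ids for _ in range(slots_per_group)]

def split_pool(pool, parts):
    """Divide the public network address resource into equal sub-resources."""
    size = len(pool) // parts
    return [pool[i * size:(i + 1) * size] for i in range(parts)]

groups = [1, 2, 3, 4]
array = build_load_sharing_array(groups, slots_per_group=3)    # 12 elements
pool = [f"203.0.113.{i}" for i in range(12)]                   # assumed pool
bindings = dict(enumerate(split_pool(pool, len(array))))       # element -> sub-resource

def reassign(array, failed_id, survivors):
    """Rewrite the failed group's elements to surviving group identifiers;
    the bound sub-resources follow the new owners."""
    idxs = [i for i, gid in enumerate(array) if gid == failed_id]
    for n, i in enumerate(idxs):
        array[i] = survivors[n % len(survivors)]
    return idxs

reassign(array, failed_id=2, survivors=[1, 3, 4])   # one element to each survivor
```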
3. The method according to claim 2, wherein determining the first traffic distribution rule according to the number of the first array elements and the number of in-place and normal second backup groups in the load sharing array specifically comprises:
when the number of the first array elements is evenly divisible by the number of the second backup groups, determining the first traffic distribution rule, wherein the first traffic distribution rule is to distribute the first service traffic evenly across the second backup groups.
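Read one way (an assumption, since the claim only covers the divisible case), the condition in claim 3 is a plain divisibility test; a tiny sketch:

```python
# Sketch: claim 3's even rule applies exactly when the count of first array
# elements divides evenly among the in-place and normal second backup groups.
def first_traffic_rule(num_failed_elements: int, num_survivors: int) -> str:
    if num_survivors and num_failed_elements % num_survivors == 0:
        return "distribute-evenly"          # each survivor takes an equal share
    return "outside-claim-3"                # the claim is silent on this case

assert first_traffic_rule(3, 3) == "distribute-evenly"   # 3 elements, 3 survivors
```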
4. The method of claim 2, further comprising, after updating the values of the first array elements to the identifier of the second backup group:
recording the identifier of the first backup group in a linked list corresponding to each first array element, wherein the linked list is used to record the identifiers of abnormal backup groups;
if the first backup group recovers to normal, searching the load sharing array for second array elements whose corresponding linked lists record the identifier of the abnormal backup group;
if the number of array elements in the load sharing array is evenly divisible by the number of any in-place backup groups in the load sharing array, and each second backup group corresponds to the same number of array elements, determining a second traffic distribution rule according to the number of array elements in the load sharing array and the number of currently in-place and normal backup groups in the load sharing array, wherein the second traffic distribution rule is to reassign, from the array elements corresponding to each second backup group, an equal number of array elements to the first backup group, so that each in-place and normal backup group corresponds to the same number of array elements;
selecting a third number of array elements from the second array elements corresponding to the second backup group according to the second traffic distribution rule;
and updating the values of the selected third number of array elements to the identifier of the first backup group, and binding the second public network address resource bound to the third number of array elements to the identifier of the first backup group, so that the first backup group carries the service traffic of the abnormal backup group.
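An illustrative sketch of the claim 4 failback bookkeeping, with ordinary Python lists standing in for the claimed linked lists and all names assumed:

```python
# Sketch of claim 4 (hypothetical structures): each array element remembers
# which abnormal groups' traffic it absorbed, so a recovered group can
# locate and reclaim exactly its former elements.
from collections import defaultdict

failover_log = defaultdict(list)          # element index -> absorbed group ids

def absorb(array, idx, new_owner, failed_id):
    """A surviving group takes over element idx; log the failed owner."""
    failover_log[idx].append(failed_id)
    array[idx] = new_owner

def failback(array, recovered_id, third_number):
    """Return third_number elements from each current owner holding
    elements once owned by recovered_id, keeping shares equal."""
    by_owner = defaultdict(list)
    for i, owner in enumerate(array):
        if recovered_id in failover_log[i]:
            by_owner[owner].append(i)
    for owner, idxs in by_owner.items():
        for i in idxs[:third_number]:     # the claimed "third number" per owner
            array[i] = recovered_id
            failover_log[i].remove(recovered_id)
```

The sub-resources bound to the reclaimed elements would be re-bound to the recovered group's identifier at the same time, as the final limitation of claim 4 requires.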
5. The method of claim 2, wherein a VSRP connection has been established between the first BRAS and the second BRAS, the VSRP connection comprising a control channel; the method further comprises:
sending a synchronization backup message to the second BRAS through the control channel, wherein the synchronization backup message comprises a user table, a public network address resource, and a resource array binding relationship, so that the second BRAS stores the user table, the public network address resource, and the resource array binding relationship, distributes service traffic to the second number of backup groups according to the user table and the public network address resource, and synchronizes the resource array binding relationship to a load sharing array included in the second BRAS;
wherein the resource array binding relationship is the binding relationship between each public network address sub-resource, obtained by dividing the public network address resource, and the corresponding array element in the load sharing array of the first BRAS.
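As a sketch only (field names `user_table`, `address_pool`, and `bindings` are assumptions, and the actual VSRP encoding is not modeled): the claim 5 synchronization backup message as a plain record, plus the standby-side handling.

```python
# Sketch of the claim 5 synchronization over the VSRP control channel.
from dataclasses import dataclass, field

@dataclass
class SyncBackupMessage:
    user_table: dict      # subscriber/session state
    address_pool: list    # the public network address resource
    bindings: dict        # array element index -> public address sub-resource

@dataclass
class StandbyBras:
    user_table: dict = field(default_factory=dict)
    address_pool: list = field(default_factory=list)
    load_sharing_bindings: dict = field(default_factory=dict)

def on_sync_received(standby: StandbyBras, msg: SyncBackupMessage):
    """Store the synchronized state and mirror the resource array binding
    relationship into the standby frame's own load sharing array."""
    standby.user_table = msg.user_table
    standby.address_pool = msg.address_pool
    standby.load_sharing_bindings.update(msg.bindings)
```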
6. An apparatus for implementing load sharing, applied to a first BRAS, wherein the first BRAS is located in a dual-device hot standby group and serves as the master frame, a first service instance with a first number of backup groups is configured on the first BRAS, the dual-device hot standby group further comprises a second BRAS serving as the backup frame, and the first service instance with a second number of backup groups is configured on the second BRAS; the apparatus comprises:
a selecting unit, configured to select, when a first backup group is abnormal, a second backup group from the first number of backup groups according to a load sharing algorithm, so that the second backup group carries the service traffic of the first backup group, wherein the second backup group is a backup group, among the first number of backup groups, other than the first backup group;
a processing unit, configured to downgrade the first BRAS to the backup frame when all of the first number of backup groups are abnormal, or when the number of abnormal backup groups among the first number of backup groups exceeds a preset number, so that the second BRAS is upgraded to the master frame and the service traffic of the first number of backup groups is carried by the second number of backup groups;
wherein each backup group comprises a service master board and a service standby board;
and the processing unit is further configured to, when the service master board included in the first backup group is abnormal, upgrade the service standby board included in the first backup group to a new service master board, the new service master board carrying the service traffic.
7. The apparatus of claim 6, wherein the first number of backup groups are characterized by a load sharing array, and the values of the array elements in the load sharing array are identifiers of the first number of backup groups; the apparatus further comprises:
a dividing unit, configured to divide a public network address resource according to the number of array elements in the load sharing array to obtain a plurality of public network address sub-resources;
a binding unit, configured to bind each public network address sub-resource to a respective array element;
wherein the selecting unit specifically comprises: a first searching subunit, configured to search the load sharing array, according to the identifier of the first backup group, for first array elements whose value is the identifier of the first backup group;
a first determining subunit, configured to determine a first traffic distribution rule according to the number of the first array elements and the number of in-place and normal second backup groups in the load sharing array;
a first updating subunit, configured to update the values of the first array elements to the identifier of the second backup group according to the first traffic distribution rule, and bind the first public network address resource bound to the first array elements to the identifier of the second backup group, so that the second backup group carries the first service traffic;
wherein the number of array elements in the load sharing array is evenly divisible by the number of any in-place backup groups in the load sharing array;
and each backup group in the load sharing array corresponds to the same number of array elements.
8. The apparatus according to claim 7, wherein the first determining subunit is specifically configured to determine the first traffic distribution rule when the number of the first array elements is evenly divisible by the number of the second backup groups, wherein the first traffic distribution rule is to distribute the first service traffic evenly across the second backup groups.
9. The apparatus of claim 7, wherein the selecting unit further comprises:
a recording subunit, configured to record the identifier of the first backup group in a linked list corresponding to each first array element, wherein the linked list is used to record the identifiers of abnormal backup groups;
a second searching subunit, configured to search, if the first backup group recovers to normal, the load sharing array for second array elements whose corresponding linked lists record the identifier of the abnormal backup group;
a second determining subunit, configured to, if the number of array elements in the load sharing array is evenly divisible by the number of any in-place backup groups in the load sharing array and each second backup group corresponds to the same number of array elements, determine a second traffic distribution rule according to the number of array elements in the load sharing array and the number of currently in-place and normal backup groups in the load sharing array, wherein the second traffic distribution rule is to reassign, from the array elements corresponding to each second backup group, an equal number of array elements to the first backup group, so that each in-place and normal backup group corresponds to the same number of array elements;
a selecting subunit, configured to select, according to the second traffic distribution rule, a third number of array elements from the second array elements corresponding to the second backup group;
and a second updating subunit, configured to update the values of the selected third number of array elements to the identifier of the first backup group, and bind the second public network address resource bound to the third number of array elements to the identifier of the first backup group, so that the first backup group carries the service traffic of the abnormal backup group.
10. The apparatus of claim 7, wherein a VSRP connection has been established between the first BRAS and the second BRAS, the VSRP connection comprising a control channel; the apparatus further comprises:
a sending unit, configured to send a synchronization backup message to the second BRAS through the control channel, wherein the synchronization backup message comprises a user table, a public network address resource, and a resource array binding relationship, so that the second BRAS stores the user table, the public network address resource, and the resource array binding relationship, distributes service traffic to the second number of backup groups according to the user table and the public network address resource, and synchronizes the resource array binding relationship to a load sharing array included in the second BRAS;
wherein the resource array binding relationship is the binding relationship between each public network address sub-resource, obtained by dividing the public network address resource, and the corresponding array element in the load sharing array of the first BRAS.
CN202110336937.6A 2021-03-29 2021-03-29 Method and device for realizing load sharing Active CN113206754B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110336937.6A CN113206754B (en) 2021-03-29 2021-03-29 Method and device for realizing load sharing


Publications (2)

Publication Number Publication Date
CN113206754A CN113206754A (en) 2021-08-03
CN113206754B (en) 2022-07-12

Family

ID=77025830

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110336937.6A Active CN113206754B (en) 2021-03-29 2021-03-29 Method and device for realizing load sharing

Country Status (1)

Country Link
CN (1) CN113206754B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0218904D0 (en) * 2001-09-27 2002-09-25 Samsung Electronics Co Ltd Soft switch using distributed firewalls for load sharing voice-over-ip traffic in an ip network
KR20040033629A (en) * 2002-10-15 2004-04-28 삼성전자주식회사 System combined loadsharing structure and primary/backup structure
CN101588304A (en) * 2009-06-30 2009-11-25 杭州华三通信技术有限公司 Implementation method of VRRP
CN102447703A (en) * 2011-12-28 2012-05-09 中兴通讯股份有限公司 Hot backup method and system as well as CGN (carrier grade network address translation (NAT)) equipment
CN105975364A (en) * 2016-05-03 2016-09-28 杭州迪普科技有限公司 Data backup method and device
WO2017101446A1 (en) * 2015-12-15 2017-06-22 中兴通讯股份有限公司 Wireless mode processing method, device and base station
CN109039939A (en) * 2018-07-13 2018-12-18 新华三技术有限公司 A kind of load sharing method and device
CN109787914A (en) * 2019-03-28 2019-05-21 新华三技术有限公司 Load sharing method, device and the network equipment
CN111082959A (en) * 2019-03-28 2020-04-28 新华三技术有限公司 Load sharing method, device and network equipment


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Sun Zhixin et al. Cross-unit Master-standby-switch Technology Model Based on High-end Router. 2010 IEEE 12th International Conference on Communication Technology. 2011. *
Su Dongmei. Design and Implementation of VRRP Based on H3C HCL. Information & Communications. 2020, (09). *


Similar Documents

Publication Publication Date Title
US9906429B2 (en) Performing partial subnet initialization in a middleware machine environment
US11770359B2 (en) Maintaining communications in a failover instance via network address translation
US20140177639A1 (en) Routing controlled by subnet managers
US20120170448A1 (en) Method and Network System for Implementing User Port Orientation in Multi-Machine Backup Scenario of Broadband Remote Access Server
CN115086330B (en) Cross-cluster load balancing system
EP4320839A1 (en) Architectures for disaggregating sdn from the host
US20120166657A1 (en) Gateway system, gateway device, and load distribution method
CN111182022A (en) Data transmission method and device, storage medium and electronic device
US10924454B2 (en) Computing device and method for generating a fabric-wide IPV6 address
US20240106708A1 (en) Fabric availability and synchronization
EP4320516A1 (en) Scaling host policy via distribution
CN113839862A (en) Method, system, terminal and storage medium for synchronizing ARP information between MCLAG neighbors
CN113206754B (en) Method and device for realizing load sharing
WO2020181733A1 (en) Vpc-based multi-data center intercommunication method and related device
CN106209634B (en) Learning method and device of address mapping relation
US10873500B2 (en) Computing device and method for generating a link IPV6 address
US20170033977A1 (en) Method, device and system for processing failure of network service node
CN109462537B (en) Cross-network intercommunication method and device
CN106878051B (en) Multi-machine backup implementation method and device
US20230336405A1 (en) Failover of cloud-native network functions within node groups for high availability in a wireless telecommunication network
US20230336420A1 (en) Utilization of network function (nf) node groups for compute optimization and nf resiliency in a wireless telecommunication network
CN114374643B (en) Communication method and device
CN113300878B (en) Method and device for realizing data smoothing
US20230337018A1 (en) Centralized unit user plane (cu-up) and centralized unit control plane (cu-cp) standby pods in a cloud-native fifth generation (5g) wireless telecommunication network
US20230336476A1 (en) Use of an overlay network to interconnect between a first public cloud and second public cloud

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant