WO2021135416A1 - A Load Balancing Method and Device

一种负载均衡方法和设备 (A Load Balancing Method and Device)

Info

Publication number
WO2021135416A1
Authority
WO
WIPO (PCT)
Prior art keywords
bandwidth
member port
network device
mapping relationship
port
Prior art date
Application number
PCT/CN2020/116534
Other languages
English (en)
French (fr)
Inventor
陆茂生
刘斌
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP20910438.9A (published as EP4072083A4)
Publication of WO2021135416A1
Priority to US17/851,371 (published as US20220329525A1)


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/24: Multipath
    • H04L 45/245: Link aggregation, e.g. trunking
    • H04L 45/74: Address processing for routing
    • H04L 45/745: Address table lookup; Address filtering
    • H04L 45/7453: Address table lookup; Address filtering using hashing
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/12: Avoiding congestion; Recovering from congestion
    • H04L 47/125: Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H04L 47/41: Flow control; Congestion control by acting on aggregated flows or links

Definitions

  • This application relates to the technical field of secure communication, and in particular to a load balancing method and device.
  • The method is used in network scenarios where link aggregation exists, to balance the load across the member ports of a link aggregation group.
  • A network device usually has multiple physical ports, and the physical bandwidth that a single physical port can carry is limited. To give the network device high bandwidth capability, link aggregation can be used on the device: multiple physical ports are combined into one logical port, the multiple physical links corresponding to those physical ports are called a link aggregation group, and each physical port is called a member port of the link aggregation group. The bandwidth that the logical port can carry is the sum of the physical bandwidths of the member ports; for example, if 10 physical ports with a bandwidth of 10G each on a switch are aggregated into one logical port, the logical port has a bandwidth of 100G. Therefore, a network device can meet high bandwidth requirements by adopting link aggregation.
  • When a network device forwards a service flow through a link aggregation group, it usually determines the member port in the link aggregation group that corresponds to the hash value of the service flow based on a fixed mapping relationship between hash values and member ports, and uses that member port to forward the service flow. This results in an unbalanced load across the member ports of the link aggregation group, which greatly reduces the bandwidth the link aggregation group actually provides and greatly diminishes the role that link aggregation plays in the network device.
  • The embodiments of the present application provide a load balancing method and device, which dynamically adjust the load shared by each member port based on the actual load status of each member port in the link aggregation group, so that the load of the member ports in the link aggregation group is balanced and the link aggregation group on the network device can provide the highest possible bandwidth.
  • According to a first aspect, a load balancing method is provided. The method is applied to a network device, which can forward service flows to another network device through a link aggregation group established between the two devices.
  • The load balancing process may specifically include: the network device performs a hash calculation on a received first service flow to obtain a first hash value corresponding to the first service flow; the network device determines, according to a first mapping relationship between the first hash value and a first member port in the link aggregation group, that the outgoing port of the first service flow is the first member port; the network device then determines the current first bandwidth of the first member port, adjusts the first mapping relationship to a mapping relationship between the first hash value and a second member port according to the first bandwidth, and uses the second member port to forward subsequently received packets of the first service flow.
  • In a possible implementation, when the first bandwidth meets a first condition, the first mapping relationship is adjusted to the second mapping relationship.
  • The first condition is used to characterize load imbalance in the link aggregation group, and specifically includes, but is not limited to, at least one of the following: the first bandwidth is greater than or equal to a first threshold; a first bandwidth occupancy rate is greater than or equal to a second threshold, where the first bandwidth occupancy rate is the ratio of the first bandwidth to the bandwidth of the logical port corresponding to the link aggregation group; or, the ratio of the difference between the first bandwidth and a second bandwidth to the bandwidth of a single member port in the link aggregation group is greater than or equal to a third threshold, where the second bandwidth is the bandwidth of the second member port before the network device adjusts the first mapping relationship to the second mapping relationship.
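  • To make the trigger concrete, the following is a minimal Python sketch of how the three alternative sub-conditions of the first condition could be evaluated; the function name, parameter names, and the assumption that all bandwidths use the same unit are illustrative choices, not part of the application.

```python
def first_condition_met(first_bw, second_bw, logical_port_bw, member_port_bw,
                        first_threshold, second_threshold, third_threshold):
    """Return True if any sub-condition of the 'first condition' holds.

    first_bw:        current bandwidth of the first member port
    second_bw:       bandwidth of the second (target) member port before adjustment
    logical_port_bw: bandwidth of the logical port of the link aggregation group
    member_port_bw:  bandwidth of a single member port
    All bandwidths are assumed to be in the same unit, e.g. Mbit/s.
    """
    # Sub-condition 1: the first bandwidth reaches the bandwidth threshold.
    if first_bw >= first_threshold:
        return True
    # Sub-condition 2: the occupancy rate of the first member port reaches the occupancy threshold.
    if first_bw / logical_port_bw >= second_threshold:
        return True
    # Sub-condition 3: the gap to the second member port, relative to a single port's bandwidth,
    # reaches the third threshold.
    return (first_bw - second_bw) / member_port_bw >= third_threshold
```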
  • The network device can periodically obtain the bandwidth of each member port. Once a member port is found to meet the pre-configured first condition, it is determined that the link aggregation group has a load imbalance problem that needs to be handled by the load balancing method provided in the embodiments of this application, so that the load of the member ports in the link aggregation group becomes relatively balanced.
  • A target member port to which traffic will be switched can then be selected; specifically, the member port with the smallest current bandwidth in the link aggregation group can be selected. For example, the target member port selected in the embodiments of this application is the second member port, that is, the member port with the smallest bandwidth among the member ports of the link aggregation group before the network device adjusts the first mapping relationship to the second mapping relationship.
  • Before a member port can be used as the target member port to take over the bandwidth cut away from the first member port, the target member port also needs to meet a second condition; the network device adjusts the first mapping relationship to the second mapping relationship only when the second member port meets the second condition.
  • The second condition is used to characterize the ability to take over traffic from the first member port so as to improve the load imbalance in the link aggregation group, and specifically includes, but is not limited to, at least one of the following: a fifth bandwidth is less than or equal to the first threshold, where the fifth bandwidth is the bandwidth of the second member port after the network device adjusts the first mapping relationship to the second mapping relationship; a fifth bandwidth occupancy rate is less than or equal to the second threshold, where the fifth bandwidth occupancy rate is the ratio of the fifth bandwidth to the bandwidth of the logical port corresponding to the link aggregation group; or, the ratio of the difference between the fifth bandwidth and a sixth bandwidth to the bandwidth of a single member port in the link aggregation group is less than or equal to the third threshold, where the sixth bandwidth is the bandwidth of the first member port after the network device adjusts the first mapping relationship to the second mapping relationship.
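  • As a companion sketch, the check of the second condition and the choice of the least-loaded candidate port could look as follows; the helper names and the assumption that the post-adjustment bandwidth of the candidate equals its current bandwidth plus the switched traffic are illustrative, not taken from the application.

```python
def second_condition_met(candidate_bw, switched_bw, first_bw_after, logical_port_bw,
                         member_port_bw, first_threshold, second_threshold, third_threshold):
    """Check whether a candidate target port can take over the switched traffic.

    candidate_bw:   current bandwidth of the candidate (second) member port
    switched_bw:    bandwidth of the traffic that would be switched to it
    first_bw_after: bandwidth of the first member port after the adjustment
    """
    fifth_bw = candidate_bw + switched_bw  # second member port's bandwidth after adjustment
    sixth_bw = first_bw_after              # first member port's bandwidth after adjustment
    return (fifth_bw <= first_threshold
            or fifth_bw / logical_port_bw <= second_threshold
            or (fifth_bw - sixth_bw) / member_port_bw <= third_threshold)


def pick_target_port(port_bw, first_port):
    """Pick the member port with the smallest current bandwidth, excluding the first port."""
    candidates = {port: bw for port, bw in port_bw.items() if port != first_port}
    return min(candidates, key=candidates.get)
```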
  • The embodiments of the present application may further include: the network device obtains the first mapping relationship and a third mapping relationship, where the first mapping relationship includes the mapping relationship between the first member port, the first hash value, and a first traffic statistics unit, and the third mapping relationship includes the mapping relationship between the first member port, a second hash value, and a second traffic statistics unit, the second hash value being a hash value obtained by hash calculation according to a second service flow; the network device determines, according to the statistical result of the first traffic statistics unit, a third bandwidth that the first service flow occupies on the first member port; the network device determines, according to the statistical result of the second traffic statistics unit, a fourth bandwidth that the second service flow occupies on the first member port; and the network device determines the first bandwidth according to the third bandwidth and the fourth bandwidth.
  • The traffic statistics unit may specifically be a counter or a token bucket, and is used to count the number of bytes that are mapped to its corresponding member port through its corresponding hash value and forwarded by that member port.
  • Associating a traffic statistics unit with each hash value in the mapping relationship provides a finer granularity for load adjustment on the same member port: service flows that are mapped to the same member port but have different hash values can be distinguished, and part of the traffic can be selectively switched away. This extends the function of mapping at least two hash values to the same member port in the embodiments of the present application and makes the load balancing effect better.
  • The first mapping relationship may refer to the mapping relationship between the first member port, the first hash value, and the bandwidth of the first member port counted by the first traffic statistics unit; alternatively, it may be the mapping relationship between the first member port, the first hash value, and the identifier of the first traffic statistics unit, in which case the network device locates the first traffic statistics unit through its identifier and then obtains its traffic statistics result.
  • When adjusting, the mapping relationship that carries the larger bandwidth can be selected. In one case, if the third bandwidth determined from the statistical result of the first traffic statistics unit is greater than the fourth bandwidth determined from the statistical result of the second traffic statistics unit, the first mapping relationship between the first hash value and the first member port is selected for adjustment, thereby affecting the subsequent forwarding of the first service flow; in the other case, if the third bandwidth is less than the fourth bandwidth, the third mapping relationship between the second hash value and the first member port is selected for adjustment, thereby affecting the subsequent forwarding of the second service flow. In this way, the mapping relationship corresponding to part of the bandwidth on the first member port can be adjusted flexibly, so that the link aggregation group can quickly achieve load balance.
  • The embodiments of the present application may further improve the load imbalance in the link aggregation group by adjusting the hash algorithm after adjusting the mapping relationship. This may specifically include: the network device receives a third service flow, and determines that the outgoing port recorded in the mapping relationship table for forwarding the third service flow is a third member port in the link aggregation group, where the mapping relationship table records the mapping relationship between a third hash value and the third member port, and the third hash value is obtained by hashing the third service flow with a first hash algorithm; the network device then determines that the current seventh bandwidth of the third member port is greater than or equal to a fourth threshold, and adjusts the first hash algorithm to a second hash algorithm; the network device then performs a hash calculation on the third service flow with the second hash algorithm to obtain a fourth hash value, and determines, according to the mapping relationship between the fourth hash value and a fourth member port recorded in the mapping relationship table, that the third service flow is forwarded through the fourth member port.
  • The network device adjusting the first hash algorithm to the second hash algorithm includes, but is not limited to, at least one of the following ways: adjusting the byte order of at least one characteristic parameter of the service flow; adjusting the sequence of the characteristic parameters of the service flow; or adjusting the value of the hash factor. The characteristic parameters include at least one of the destination address DA, the source address SA, the destination Internet Protocol address DIP, and the source Internet Protocol address SIP of the service flow.
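  • The sketch below illustrates the three kinds of adjustment listed above in Python; the CRC-based hash and the helper names are assumptions made for the sketch, not the application's actual algorithm.

```python
import zlib

def hash32(params, hash_factor, reorder=False, swap_bytes=False):
    """Compute a 32-bit hash over the characteristic parameters (e.g. DA, SA, DIP, SIP).

    reorder:     if True, change the sequence in which the parameters are concatenated.
    swap_bytes:  if True, reverse the byte order of each parameter.
    hash_factor: extra value mixed into the hash key.
    """
    if reorder:
        params = list(reversed(params))
    if swap_bytes:
        params = [bytes(reversed(p)) for p in params]
    key = b"".join(params) + hash_factor.to_bytes(4, "big")
    return zlib.crc32(key) & 0xFFFFFFFF  # 32-bit hash value

# Switching from the "first" to the "second" hash algorithm can then be as simple as
# flipping one of these knobs, for example changing the hash factor:
da, sa = b"\x00\x11\x22\x33\x44\x55", b"\x66\x77\x88\x99\xaa\xbb"
h1 = hash32([da, sa], hash_factor=5)   # first hash algorithm
h2 = hash32([da, sa], hash_factor=7)   # second hash algorithm (new hash factor)
```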
  • the third member port may be the first member port; and the fourth member port may be the second member port.
  • In this way, the two dynamic adjustment methods can be combined according to the actual bandwidth of each member port in the link aggregation group: the mapping relationship can be adjusted dynamically, and the hash algorithm can be adjusted dynamically as well. This overcomes the problem that forwarding service flows with a fixed hash algorithm and a fixed mapping relationship between hash values and member ports leads to an unbalanced load on the member ports, so that the member ports in the link aggregation group can remain load balanced to a certain extent and the network device can meet high bandwidth demands.
  • Load balancing may also be implemented by first adjusting the hash algorithm used to calculate the hash value of the service flow and then adjusting the member port corresponding to the hash value in the mapping relationship. The execution order of the two methods is not specifically limited in this embodiment and can be determined according to actual load conditions or pre-configuration.
  • The hash value corresponding to a service flow is obtained by hashing the service flow. The specific process may include: extracting the characteristic parameters of the service flow and performing a hash calculation on the characteristic parameters (or on the characteristic parameters together with the hash factor) to obtain a 32-bit hash value; then taking the low N bits of the 32-bit hash value as the hash value corresponding to the service flow, where N is a positive integer and the number of member ports included in the link aggregation group is less than 2 to the power of N. In this way, each member port can have at least two corresponding hash values, that is, at least two corresponding mapping relationships, which makes the load balancing of the embodiments of the present application more flexible and effective.
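  • As an illustration of this step (the function name, the assertion, and the sample 32-bit value below are chosen here for the sketch and are not from the application), extracting the low N bits and checking N against the number of member ports could look like:

```python
def flow_hash(hash32_value, n, num_member_ports):
    """Take the low N bits of the 32-bit hash as the hash value of the service flow.

    The application requires the number of member ports to be less than 2**n; in its
    example (4 member ports, n = 4) every member port then corresponds to 4 hash values.
    """
    assert num_member_ports < 2 ** n
    return hash32_value & ((1 << n) - 1)


# Example: a 32-bit hash value 0x9C3A7F13 with n = 4 yields hash value 3 (low 4 bits 0011).
print(flow_hash(0x9C3A7F13, n=4, num_member_ports=4))  # 3
```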
  • According to a second aspect, a load balancing method is provided. The method is applied to a network device, which can forward service flows to another network device through a link aggregation group established between the two devices.
  • The load balancing process may specifically include: the network device performs a hash calculation on a received first service flow using a first hash algorithm to obtain a first hash value corresponding to the first service flow; the network device determines, according to the mapping relationship between the first hash value recorded in the mapping relationship table and a first member port in the link aggregation group, that the outgoing port of the first service flow is the first member port; the network device then determines the current first bandwidth of the first member port and, when the first bandwidth is greater than or equal to a preset threshold, adjusts the first hash algorithm to a second hash algorithm; subsequently, the network device performs a hash calculation on newly received packets of the first service flow using the second hash algorithm to obtain a second hash value, and determines, according to the mapping relationship between the second hash value in the mapping relationship table and a second member port in the link aggregation group, that the first service flow is forwarded through the second member port.
  • It can be seen that, with the method provided in the embodiments of the present application, a network device using link aggregation can dynamically adjust the hash algorithm used to calculate the hash value of a service flow (for example, the value of the hash factor) according to the actual bandwidth of each member port in the link aggregation group, thereby changing the hash value corresponding to the service flow so that multiple service flows originally mapped to the same member port are mapped to different member ports. This overcomes the problem that forwarding service flows with a fixed hash algorithm and a fixed mapping relationship between hash values and member ports leads to an unbalanced load on the member ports, so that the member ports in the link aggregation group can remain load balanced to a certain extent and the network device can meet high bandwidth demands.
  • This application also provides a network device, including a transceiving unit and a processing unit. The transceiving unit is configured to perform the transceiving operations in the method provided in the first aspect or the second aspect; the processing unit is configured to perform the operations other than the transceiving operations in the method provided in the first aspect or the second aspect.
  • When the network device executes the method described in the first aspect, the transceiving unit is configured to receive the first service flow and is further configured to forward the first service flow using the second member port; the processing unit is configured to perform a hash calculation on the received first service flow to obtain the first hash value corresponding to the first service flow; the processing unit is further configured to determine, according to the first mapping relationship between the first hash value and the first member port in the link aggregation group, that the outgoing port of the first service flow is the first member port; the processing unit is further configured to determine the current first bandwidth of the first member port; and the processing unit is further configured to adjust the first mapping relationship to the mapping relationship between the first hash value and the second member port according to the first bandwidth.
  • When the network device executes the method described in the second aspect, the transceiving unit is configured to receive the first service flow and is further configured to forward the first service flow using the second member port; the processing unit is configured to perform a hash calculation on the received first service flow through the first hash algorithm to obtain the first hash value corresponding to the first service flow; the processing unit is further configured to determine, according to the mapping relationship between the first hash value recorded in the mapping relationship table and the first member port in the link aggregation group, that the outgoing port of the first service flow is the first member port; the processing unit is further configured to determine the current first bandwidth of the first member port; the processing unit is further configured to adjust the first hash algorithm to the second hash algorithm when determining that the first bandwidth is greater than or equal to the preset threshold; and the processing unit is further configured to perform a hash calculation on the subsequently received first service flow through the second hash algorithm to obtain the second hash value, so that the first service flow is forwarded through the second member port according to the mapping relationship between the second hash value and the second member port in the mapping relationship table.
  • an embodiment of the present application also provides a network device, including a communication interface and a processor.
  • The communication interface is used to perform the transceiving operations in the method provided by the foregoing first aspect or second aspect; the processor is used to perform the operations other than the transceiving operations in the method provided by the foregoing first aspect or second aspect.
  • an embodiment of the present application also provides a network device, which includes a memory and a processor.
  • the memory is used to store program code; the processor is used to run the instructions in the program code, so that the network device executes the method provided in the first aspect or the second aspect above.
  • The embodiments of the present application also provide a computer-readable storage medium that stores instructions which, when run on a computer, cause the computer to execute the load balancing method provided in the foregoing first aspect or second aspect.
  • the embodiments of the present application also provide a computer program product, which when running on a computer, causes the computer to execute the load balancing method provided in the foregoing first aspect or second aspect.
  • FIG. 1 is a schematic diagram of a network system framework involved in an application scenario in an embodiment of this application;
  • FIG. 2 is a schematic diagram of an example of load sharing in the scenario shown in FIG. 1 in an embodiment of the application;
  • FIG. 3 is a schematic structural diagram of the PTN device 106 in the scenario shown in FIG. 1 in an embodiment of the application;
  • FIG. 4 is a schematic flowchart of a load balancing method 100 in an embodiment of this application.
  • FIG. 5 is a schematic flowchart of another load balancing method 200 in an embodiment of this application.
  • FIG. 6 is a schematic diagram of a hash key value in a first hash algorithm in an embodiment of the application
  • FIG. 7 is a schematic diagram of hash key values in another first hash algorithm in an embodiment of the application.
  • FIG. 8a is a schematic diagram of a hash key value in a second hash algorithm in an embodiment of the application.
  • FIG. 8b is a schematic diagram of hash key values in another second hash algorithm in an embodiment of the application.
  • FIG. 8c is a schematic diagram of a hash key value in another second hash algorithm in an embodiment of the application.
  • FIG. 9a is a schematic diagram of a hash key value in another second hash algorithm in an embodiment of the application.
  • FIG. 9b is a schematic diagram of hash key values in another second hash algorithm in an embodiment of the application.
  • FIG. 10 is a schematic structural diagram of a network device 1000 in an embodiment of this application.
  • FIG. 11 is a schematic structural diagram of a network device 1100 in an embodiment of this application.
  • FIG. 12 is a schematic structural diagram of a network device 1200 in an embodiment of this application.
  • To enable network devices to carry high bandwidth, link aggregation is usually used on network devices; that is, at least two physical ports of a network device are combined into one logical port, and the bandwidth that the logical port can carry is the sum of the bandwidths carried by its corresponding physical ports. It can be seen that using link aggregation can effectively increase the bandwidth-carrying capacity of network devices. A link aggregation group refers to the collection of the multiple physical links that participate in link aggregation between two network devices connected by link aggregation; all the physical ports on a network device that are aggregated into the logical port can be called member ports of the link aggregation group.
  • For example, in a packet transport network (English: packet transport network, abbreviation: PTN), base station 101 to base station 105 each access the PTN device 106 directly or indirectly and are converged through the PTN device 106 to the core network device 107. Since the service flows sent by multiple base stations need to be aggregated by the PTN device 106 toward the core network side, the PTN device 106 needs to be able to carry a large bandwidth.
  • A link aggregation group is therefore configured: port 1, port 2, port 3, and port 4 on the PTN device 106 are aggregated into a logical port A, and port 5, port 6, port 7, and port 8 on the core network device 107 are aggregated into a logical port B; port 1 is connected to port 5, port 2 to port 6, port 3 to port 7, and port 4 to port 8 through four optical fibers. Assuming the bearable bandwidth of each optical fiber (that is, the maximum allowed physical bandwidth of the physical ports the fiber connects) is 1G, the link aggregation group between the PTN device 106 and the core network device 107 provides 4G of bandwidth, so the PTN device 106 can carry a larger bandwidth.
  • When a network device receives a service flow whose outgoing port is a link aggregation group, it first extracts the characteristic parameters of the service flow, for example, the destination address (English: Destination Address, abbreviation: DA), source address (English: Source Address, abbreviation: SA), destination Internet Protocol address (English: Destination Internet Protocol Address, abbreviation: DIP), or source Internet Protocol address (English: Source Internet Protocol Address, abbreviation: SIP) of the packets in the service flow; it then performs a hash calculation on the characteristic parameters to obtain the hash value corresponding to the service flow, and determines the member port used to forward the service flow according to the mapping relationships pre-stored on the device. For example, the mapping relationships pre-stored on the PTN device 106 are: mapping relationship 1 between hash value 1 and port 1, mapping relationship 2 between hash value 2 and port 2, mapping relationship 3 between hash value 3 and port 3, and mapping relationship 4 between hash value 4 and port 4.
  • Assume the PTN device 106 receives service flows 1 to 8 and the outgoing ports of service flows 1 to 8 are all the link aggregation group; load sharing is then performed in the link aggregation group according to the above method. The process may specifically include: the PTN device 106 extracts the characteristic parameters of service flows 1 to 8 and calculates their hash values as 2, 3, 3, 3, 1, 3, 3, and 2; based on the 4 sets of mappings stored on the PTN device 106 (see Table 1 below), the PTN device 106 determines that service flows 1 and 8 are sent to the core network device 107 through port 3, service flows 2, 3, 4, 6, and 7 are sent to the core network device 107 through port 4, and service flow 5 is sent to the core network device 107 through port 2. In this way, the bandwidths on port 1 to port 4 are 0M, 500M, 110M, and 700M, respectively. Such an unbalanced load reduces the bandwidth that the link aggregation group can actually provide; for example, on the PTN device 106, because port 4 carries 700M of bandwidth, which exceeds 50% of 1G (the preset value), the PTN device 106 can no longer carry other service flows through the link aggregation group.
  • In view of this, the embodiments of the present application provide a load balancing method, which dynamically adjusts the load assigned to each member port based on the actual load of each member port in the link aggregation group, so as to balance the load of the member ports in the link aggregation group. The specific load balancing process may include: the network device performs a hash calculation on the first service flow to obtain the corresponding first hash value, and determines, according to the first mapping relationship between the first hash value and the first member port in the link aggregation group, that the outgoing port of the first service flow is the first member port; the network device then determines the current first bandwidth of the first member port and, according to the first bandwidth, adjusts the first mapping relationship to the second mapping relationship between the first hash value and the second member port, so that the second member port is used to forward subsequently received packets of the first service flow.
  • As shown in FIG. 3, the PTN device 106 may include: a receiving module 1061, a feature extraction module 1062, a hash calculation module 1063, a mapping module 1064, a traffic statistics module 1065, a traffic adjustment module 1066, and a member port forwarding module 1067.
  • The specific load balancing process may include the following steps:
  • S11, the receiving module 1061 of the PTN device 106 receives the service flows sent by base station 101 to base station 105.
  • S12, the feature extraction module 1062 extracts the DA and SA of each service flow and sends them to the hash calculation module 1063.
  • S13, the hash calculation module 1063 calculates the 32-bit hash value corresponding to each service flow based on its DA and SA, takes the low 4 bits of the 32-bit hash value as the hash value finally corresponding to the service flow, and sends it to the mapping module 1064.
  • S14, the mapping module 1064 determines the corresponding egress port for each service flow according to the current mapping table 1, and the member port forwarding module 1067 forwards each service flow to the core network device 107 through the determined egress port.
  • S15, the port performance monitoring unit in the traffic statistics module 1065 is connected to the member port forwarding module 1067 and monitors the traffic actually forwarded by each member port; the PTN device 106 collects the value of the port performance monitoring unit once per second through software and calculates the bandwidth of each member port.
  • S16, the traffic statistics units in the traffic statistics module 1065 are connected to the mapping module 1064 and count the traffic actually forwarded by each member port; the PTN device 106 reads the value of each traffic statistics unit in the mapping table (see Table 2) in the mapping module 1064 once per second through software and calculates the bandwidth of each member port. Under the same conditions, the bandwidth obtained from the port performance monitoring unit is the same as the bandwidth obtained from the traffic statistics units.
  • S17, the traffic adjustment module 1066 determines that the total bandwidth of member port 1 is 900M, which exceeds 80% of the maximum physical bandwidth of a single port.
  • S18, the traffic adjustment module 1066, based on the values in traffic statistics units 0, 4, 8, and 12 (100M, 200M, 300M, and 200M), selects the mapping item containing traffic statistics unit 8, which has the largest value, as the mapping item to be adjusted.
  • S19, the traffic adjustment module 1066 selects member port 2, the member port with the smallest bandwidth among the member ports whose current bandwidth does not exceed 80% of the maximum physical bandwidth of a single port (that is, 1G), and determines that if the service flow in the mapping item to be adjusted is switched to member port 2, the bandwidth of member port 2 will still not exceed 80% of the maximum physical bandwidth of a single port.
  • S20, the traffic adjustment module 1066 modifies member port 1 in the mapping item to be adjusted in mapping table 1 in the mapping module 1064 to member port 2; that is, mapping table 1 in the mapping module 1064 is updated to mapping table 2.
  • S21, when the receiving module 1061 of the PTN device 106 subsequently receives a service flow corresponding to hash value 8, the service flow is forwarded from member port 2 to the core network device 107, and the bandwidth of that service flow is added to traffic statistics unit 8, which now corresponds to member port 2. It can be seen that, through the method provided in the embodiments of the present application, load balancing of the member ports in the link aggregation group is achieved.
  • For mapping table 1 and mapping table 2, refer to Table 2 and Table 3 below:
  • Table 2 (mapping table 1, before adjustment):

| Hash value | Member port | Traffic Statistics Unit | Hash value | Member port | Traffic Statistics Unit |
|---|---|---|---|---|---|
| 0 | 1 | 0 (100M) | 8 | 1 | 8 (300M) |
| 1 | 2 | 1 | 9 | 2 | 9 |
| 2 | 3 | 2 | 10 | 3 | 10 |
| 3 | 4 | 3 | 11 | 4 | 11 |
| 4 | 1 | 4 (200M) | 12 | 1 | 12 (200M) |
| 5 | 2 | 5 | 13 | 2 | 13 |
| 6 | 3 | 6 | 14 | 3 | 14 |
| 7 | 4 | 7 | 15 | 4 | 15 |
  • Table 3 (mapping table 2, after adjustment):

| Hash value | Member port | Traffic Statistics Unit | Hash value | Member port | Traffic Statistics Unit |
|---|---|---|---|---|---|
| 0 | 1 | 0 (100M) | 8 | 2 | 8 (300M) |
| 1 | 2 | 1 | 9 | 2 | 9 |
| 2 | 3 | 2 | 10 | 3 | 10 |
| 3 | 4 | 3 | 11 | 4 | 11 |
  • The entries 0 to 15 in the Traffic Statistics Unit column of the mapping table can be identifiers of traffic statistics units, used to point to the corresponding traffic statistics unit. For example, if the network device receives service flow 1 whose hash value is 0 and whose bandwidth is 100M, the network device determines from mapping table 1 or 2 that the outgoing port of service flow 1 is member port 1 and that the corresponding traffic statistics unit is the one identified as 0; while forwarding service flow 1 through member port 1, the network device adds 100M bytes to the traffic statistics unit identified as 0. As another example, within one collection period, when the network device needs to determine the current bandwidth of member port 1, it can look up mapping table 1 or 2 and find that the traffic statistics units corresponding to member port 1 are those identified as 0, 4, 8, and 12; the network device then reads the values of these 4 traffic statistics units and adds them together, and the resulting sum is the current bandwidth of member port 1.
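  • A minimal sketch of this lookup-and-sum step, assuming an illustrative Python dictionary layout for the mapping table and the counter values (the variable names and the one-second period are choices made here, not the application's):

```python
# Mapping table: hash value -> (member port, traffic statistics unit identifier).
mapping_table = {0: (1, 0), 4: (1, 4), 8: (1, 8), 12: (1, 12),
                 1: (2, 1), 5: (2, 5), 9: (2, 9), 13: (2, 13)}  # remaining entries omitted

# Value read from each traffic statistics unit in the current collection period (in Mbit).
stats_units = {0: 100, 4: 200, 8: 300, 12: 200, 1: 0, 5: 0, 9: 0, 13: 0}

def member_port_bandwidth(port, mapping_table, stats_units, period_s=1.0):
    """Sum the statistics of all units mapped to `port` and divide by the collection period."""
    total = sum(stats_units[unit] for p, unit in mapping_table.values() if p == port)
    return total / period_s

print(member_port_bandwidth(1, mapping_table, stats_units))  # 800.0 with the values above
```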
  • Alternatively, the traffic statistics unit column in the mapping table can directly store the counted number of forwarded bytes. For example, before adjustment, the mapping relationships for member port 1 can be as shown in Table 4 below:
  • the network device in the embodiment of the present application may specifically be a switch, a router, or a PTN device.
  • Link aggregation can be applied between any network devices with high bandwidth requirements; for example, servers or server clusters connected to the core network can use link aggregation to connect to core network devices.
  • the maximum physical bandwidth of each member port in the link aggregation group is the same, for example, the maximum physical bandwidth that each member port can carry is 1G.
  • FIG. 4 is a schematic flowchart of a load balancing method 100 in an embodiment of this application.
  • the method 100 is applied to a network device on which a link aggregation group is set.
  • A mapping relationship of the link aggregation group is pre-established and stored on the network device, and the mapping relationship indicates the member port in the link aggregation group that is used to forward a service flow. For example, in the network shown in Figure 1, for service flows sent from the base stations to the core network, the method 100 can be applied to the PTN device 106; for service flows sent from the core network to the base stations, the method 100 can also be applied to the core network device 107.
  • the method 100 may include the following S101 to S104, for example:
  • S101, the network device determines, according to the first hash value corresponding to the first service flow and the first mapping relationship between the first hash value and the first member port in the link aggregation group, that the egress port of the first service flow is the first member port, where the first hash value is a hash value obtained by hash calculation according to the first service flow.
  • the mapping relationship of the link aggregation group is usually configured in advance on the network device.
  • the mapping relationship can characterize the corresponding relationship between the hash value and the member port, and can also characterize the corresponding relationship between the hash value, the member port, and the traffic statistics unit.
  • the mapping relationship can be configured in the form of a mapping table and stored on the network device.
  • The hash value corresponding to each mapping item in the mapping relationship of the link aggregation group is different; that is, a hash value appears only once in the mapping relationship. Since the hash calculation based on the characteristic parameters of the service flow produces a 32-bit result, the low N bits can be extracted as the hash value corresponding to the service flow, giving a total of 2^N possible hash values, where N is a positive integer that satisfies 2^N > the number of member ports included in the link aggregation group; the larger N is, the larger the number of mapping items in the mapping relationship.
  • For example, if the link aggregation group of a network device includes 16 member ports, N needs to be an integer greater than 4. N is a value set by the user according to actual needs: the larger N is, the smaller the probability that different service flows correspond to the same hash value, and the more balanced the load on the network device.
  • For example, for a link aggregation group with 4 member ports and N = 4, the pre-configured mapping relationship can include 16 mapping items corresponding to the 16 hash values from 0 to 15 (hereinafter a mapping item is also referred to as the mapping relationship corresponding to a hash value), and each member port corresponds to 4 different hash values. For details, see Table 5 below.
  • Table 5:

| Hash value | Member port | Hash value | Member port |
|---|---|---|---|
| 0 | 1 | 8 | 1 |
| 1 | 2 | 9 | 2 |
| 2 | 3 | 10 | 3 |
| 3 | 4 | 11 | 4 |
| 4 | 1 | 12 | 1 |
| 5 | 2 | 13 | 2 |
| 6 | 3 | 14 | 3 |
| 7 | 4 | 15 | 4 |
  • Optionally, a traffic statistics unit can be added for each mapping item, namely traffic statistics unit 0 to traffic statistics unit 15; see Table 2 or Table 3 above for details.
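  • The layout of Table 5 can be generated mechanically. The sketch below only reproduces that layout in Python; the round-robin rule and the variable names are inferred from the table and are not stated by the application as its construction method.

```python
NUM_PORTS = 4
N = 4  # 2**N = 16 hash values, so each of the 4 member ports corresponds to 4 hash values

# Hash value h -> member port (1-based), matching Table 5: 0->1, 1->2, 2->3, 3->4, 4->1, ...
initial_mapping = {h: (h % NUM_PORTS) + 1 for h in range(2 ** N)}

# Optionally attach a traffic statistics unit to each mapping item, as in Table 2 / Table 3.
mapping_with_stats = {h: {"port": (h % NUM_PORTS) + 1, "stats_unit": h} for h in range(2 ** N)}
```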
  • In the embodiments of this application, the link aggregation group on the network device includes at least the first member port and the second member port, and the mapping items for the first member port include at least a first mapping relationship and a third mapping relationship.
  • In one case, the first mapping relationship represents the relationship between the first member port and the first hash value, and the third mapping relationship represents the relationship between the first member port and the second hash value.
  • In another case, the first mapping relationship represents the relationship between the first member port, the first hash value, and the first traffic statistics unit, and the third mapping relationship represents the relationship between the first member port, the second hash value, and the second traffic statistics unit.
  • The mapping relationship may refer to the mapping relationship between the member port, the hash value, and the bandwidth of the member port counted by the traffic statistics unit; alternatively, it may be the mapping relationship between the member port, the hash value, and the identifier of the traffic statistics unit, in which case the network device locates the traffic statistics unit through its identifier and then obtains the traffic statistics result of that unit.
  • S101 may specifically include: the network device obtains a hash key value according to the first service flow and calculates hash value A based on the hash key value, where hash value A is a 32-bit number; the network device then takes the low N bits of hash value A, which are recorded as the first hash value; finally, the network device determines the mapping item corresponding to the first hash value (that is, the first mapping relationship) according to the pre-configured mapping relationship, and uses the first member port in that mapping item as the outgoing port of the first service flow.
  • the hash key value of the first service flow may include the characteristic parameter of the first service flow, that is, at least one of DA, SA, DIP, and SIP of the first service flow. Then, the network device can use the characteristic parameter as the hash key value to calculate the 32-bit hash value A.
  • The hash key value of the first service flow may also include the hash factor defined on the network device; that is, both the hash factor and the characteristic parameters are used as the hash key value to calculate the 32-bit hash value A.
  • the hash factor may be a variable corresponding to the link aggregation group set by the user on the network device, and specifically may be a generated random number or a value specified by the user.
  • For example, assume the network device in S101 is the PTN device 106, the mapping relationship configured on it is mapping table 1, the PTN device 106 receives service flow 1 and extracts the characteristic parameters of service flow 1 as DA1 and SA1, and the hash factor defined on the PTN device 106 is 5. The PTN device 106 can then perform a hash calculation based on DA1, SA1, and 5 to obtain a 32-bit hash value X; the PTN device 106 extracts the low 4 bits of hash value X, 0011, and obtains hash value 3; the PTN device 106 then determines, according to the first 3 columns of row 5 of Table 2 (mapping table 1), that the outgoing port of service flow 1 is member port 4. That is, in this example, the first service flow in S101 is service flow 1, the first hash value is hash value 3, and the first member port is member port 4.
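  • Continuing this example, a sketch of the lookup is shown below; the concrete hash function (CRC32 here) is only an assumption for the sketch, since the application only requires some 32-bit hash over DA1, SA1, and the hash factor 5.

```python
import zlib

def lookup_egress_port(da, sa, hash_factor, mapping_table, n=4):
    key = da + sa + hash_factor.to_bytes(4, "big")
    hash_a = zlib.crc32(key) & 0xFFFFFFFF   # 32-bit hash value A (hash value X in the example)
    flow_hash = hash_a & ((1 << n) - 1)     # low 4 bits of the 32-bit hash
    return flow_hash, mapping_table[flow_hash]

mapping_table = {h: (h % 4) + 1 for h in range(16)}  # hash value -> member port, as in Table 2
flow_hash, egress_port = lookup_egress_port(b"DA1", b"SA1", 5, mapping_table)
# If the low 4 bits happen to be 0011, flow_hash is 3 and egress_port is member port 4.
```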
  • S102 The network device determines the current first bandwidth of the first member port.
  • the first bandwidth refers to the traffic forwarded through the first member port in the current unit time, and is used to characterize the current load situation of the first member port.
  • the network device can periodically obtain the traffic of each member port in the link aggregation group through software, and calculate the corresponding bandwidth based on the obtained period and the forwarded traffic, so as to determine the actual load of each member port .
  • The following describes the methods for determining the current bandwidth of each member port in the embodiments of the present application.
  • In one method, the network device may include traffic statistics units in one-to-one correspondence with the hash values, and the network device may periodically obtain, through software, the statistical result (that is, the number of forwarded bytes) of each traffic statistics unit corresponding to the first member port, so that the network device can determine the current first bandwidth of the first member port based on the collection period and the statistical results.
  • The traffic statistics unit is used to count the traffic of the member port corresponding to each hash value, for example, the number of bytes of the packets forwarded by the member port corresponding to each hash value. The traffic statistics unit may be a counter or a token bucket.
  • In one case, the traffic statistics unit may be a continuously accumulating counter: after the network device reads the value of the counter in each collection period, it subtracts the value read in the previous period from the value read in the current period to obtain the number of bytes forwarded in this period by the corresponding member port, as counted by the traffic statistics unit. In another case, the traffic statistics unit may be a counter that is cleared synchronously: while the network device reads the counter value in each period, the counter is also cleared, which ensures that each value read is the number of bytes forwarded in that period by the corresponding member port, as counted by the traffic statistics unit.
  • When the traffic statistics unit is a token bucket, each hash value corresponds to a token bucket, and the maximum number of tokens each token bucket can hold is the same, for example, 800M tokens. When the network device receives 100M bytes of the first service flow, it generates 100M tokens, determines through S101 that the outgoing port is the first member port corresponding to the first hash value, and deletes 100M tokens from the 800M tokens in the token bucket corresponding to the first hash value, leaving 700M tokens; tokens are decremented in this way as traffic is forwarded. In each collection period, the network device reads the number of tokens remaining in the token bucket and subtracts it from the maximum number of tokens the bucket can hold to obtain the statistical result; dividing the statistical result by the collection period gives the bandwidth corresponding to that hash value. It should be noted that, when the number of remaining tokens is read in each collection period, the token bucket also needs to be reset (that is, restored to the maximum number of tokens it can hold), so that the token bucket can again be used to count the member port bandwidth in the next collection period.
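  • The two counting styles described above can be sketched as follows; the class names, the byte-based units, and the one-second default period are illustrative choices made here.

```python
class AccumulatingCounter:
    """Counter that keeps accumulating; each period the software takes the delta."""
    def __init__(self):
        self.total_bytes = 0
        self._last_read = 0

    def add(self, num_bytes):
        self.total_bytes += num_bytes

    def collect(self):
        delta = self.total_bytes - self._last_read   # bytes forwarded in this period
        self._last_read = self.total_bytes
        return delta


class TokenBucket:
    """Token bucket that starts full; forwarding removes tokens, collection refills it."""
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.tokens = capacity_bytes

    def add(self, num_bytes):
        self.tokens -= num_bytes                      # tokens consumed by forwarded traffic

    def collect(self):
        used = self.capacity - self.tokens            # bytes forwarded in this period
        self.tokens = self.capacity                   # reset to full for the next period
        return used


def bandwidth(stats_unit, period_s=1.0):
    """Bandwidth over one collection period, in bytes per second."""
    return stats_unit.collect() / period_s
```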
  • Assume the first mapping relationship is the relationship between the first member port, the first hash value, and the first traffic statistics unit, and the third mapping relationship is the relationship between the first member port, the second hash value, and the second traffic statistics unit, where the second hash value is a hash value obtained by hash calculation according to the second service flow. The network device can read the statistical results of the first traffic statistics unit and the second traffic statistics unit in each collection period (for example, 1 second), determine the third bandwidth according to the statistical result of the first traffic statistics unit, determine the fourth bandwidth according to the statistical result of the second traffic statistics unit, and then determine the first bandwidth according to the third bandwidth and the fourth bandwidth.
  • Specifically, the network device obtains the third bandwidth by dividing the statistical result of the first traffic statistics unit by the collection period, and obtains the fourth bandwidth by dividing the statistical result of the second traffic statistics unit by the collection period. When the first member port corresponds only to the first mapping relationship and the third mapping relationship, the first bandwidth is equal to the sum of the third bandwidth and the fourth bandwidth; when the first member port also corresponds to one or more other mapping relationships in addition to the first mapping relationship and the third mapping relationship, the first bandwidth is equal to the sum of the third bandwidth, the fourth bandwidth, and the one or more other bandwidths obtained from the traffic statistics units corresponding to those other mapping relationships.
  • In another method, the network device may include a port performance monitoring unit. The port performance monitoring unit monitors in real time the number of bytes forwarded per unit time on the first member port, and the network device can periodically obtain, through software, the number of forwarded bytes recorded by the port performance monitoring unit, so that the network device can determine the current first bandwidth of the first member port based on the collection period and the number of forwarded bytes.
  • The network device can also enable the port performance monitoring unit and the traffic statistics units at the same time; the network device then periodically obtains, through software, the value of the port performance monitoring unit and the statistical results of the traffic statistics units corresponding to the first member port, and determines the current first bandwidth of the first member port based on the collection period and the obtained values. It should be noted that, when the same collection period is used, the number of forwarded bytes monitored by the port performance monitoring unit and the number of forwarded bytes counted by the traffic statistics units for the same member port in the same period are usually the same; enabling both at the same time allows the two bandwidth measurement methods to verify each other's accuracy and reliability.
  • S103, the network device adjusts the first mapping relationship between the first hash value and the first member port to the second mapping relationship between the first hash value and the second member port according to the first bandwidth.
  • In an implementation, S103 can be executed when the first bandwidth meets a first condition. The first condition may specifically include at least one of the following conditions: condition one, the first bandwidth is greater than or equal to the first threshold; condition two, the first bandwidth occupancy rate is greater than or equal to the second threshold, where the first bandwidth occupancy rate is the ratio of the first bandwidth to the bandwidth of the logical port corresponding to the link aggregation group; condition three, the ratio of the difference between the first bandwidth and the second bandwidth to the bandwidth of a single member port in the link aggregation group is greater than or equal to the third threshold, where the second bandwidth is the bandwidth of the second member port before the network device adjusts the first mapping relationship to the second mapping relationship.
  • The first threshold is a preset bandwidth threshold used to indicate the maximum bandwidth that a single member port is allowed to carry. For example, if the physical bandwidth of each member port is 1G, the preset first threshold can be any bandwidth value less than or equal to 1G, such as 800M.
  • Assume the first condition is condition two above; that is, if the current first bandwidth occupancy rate of the first member port is greater than or equal to the second threshold, S103 can be executed. The second threshold is a preset occupancy-rate threshold used to indicate the ratio of the maximum bandwidth that a single member port is allowed to carry to the bandwidth of the logical port of the link aggregation group in which it is located. For example, assume the bandwidth of the logical port of the link aggregation group of the network device is 100G, the first bandwidth is 8.5G, and the second threshold is preset to 8%. Since 8.5G > 100G * 8% = 8G, it is determined that the load on the first member port is too heavy, and the mapping relationship corresponding to the first member port needs to be adjusted.
  • The network device can then perform the following operations to determine the specific adjustment: S31, the network device selects a mapping item to be adjusted from the multiple mapping items corresponding to the first member port; S32, the network device determines, from the member ports in the link aggregation group other than the first member port, the target member port to which the traffic corresponding to the mapping item to be adjusted on the first member port will be switched.
  • In an implementation, S31 may specifically be: the network device checks, among the multiple mapping items corresponding to the first member port, the number of forwarded bytes counted by each corresponding traffic statistics unit, and selects mapping items as mapping items to be adjusted in descending order of forwarded bytes until, after the adjustment, the ratio of the bandwidth of the first member port to the logical port bandwidth no longer exceeds the second threshold. For example, assume the first member port corresponds to two mapping items: mapping item 1, "first hash value - first member port - first traffic statistics unit", and mapping item 2, "second hash value - first member port - second traffic statistics unit". The network device checks the values of the first traffic statistics unit and the second traffic statistics unit, which are 300M and 100M respectively, and first takes mapping item 1, which contains the first traffic statistics unit with the larger value, as the mapping item to be adjusted; if the ratio of the bandwidth of the first member port to the logical port bandwidth after this adjustment does not exceed the second threshold, mapping item 2 is not selected as a mapping item to be adjusted.
  • When S32 determines the target member port for the mapping item to be adjusted determined in S31, it is required not only that the ratio of the current bandwidth of the target member port to the logical port bandwidth does not exceed the second threshold, but also that, after the target member port carries the traffic corresponding to the mapping item to be adjusted, its bandwidth is still less than the product of the logical port bandwidth and the second threshold. It should be noted that if multiple member ports meet the above conditions, any one of them can be used as the target member port in S32; preferably, S32 can select, from the multiple qualifying member ports, the one with the smallest current bandwidth as the target member port.
  • In another implementation, S31 may specifically be that the network device randomly selects one of the multiple mapping items corresponding to the first member port as the mapping item to be adjusted; in this case, when S32 determines the target member port for the mapping item to be adjusted determined in S31, it is only required that the ratio of the current bandwidth of the target member port to the logical port bandwidth does not exceed the second threshold. If multiple member ports meet the above condition, any one of them can be used as the target member port in S32; if no member port meets the above condition, the first member port is not load-balanced in the current collection period, that is, S103 to S104 are not executed, and an alarm message is reported to indicate that the member ports in the link aggregation group of the network device are running overloaded and safe service flow forwarding cannot be completed.
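  • A sketch of S31/S32 under condition two is shown below; the data structures, the greedy ordering, and returning None to signal the alarm case are illustrative assumptions rather than the only procedure the application allows.

```python
def plan_adjustment(port, mapping, stats, port_bw, logical_bw, second_threshold):
    """Pick mapping items to move off `port` and a target port for each move (condition two).

    mapping:  hash value -> member port
    stats:    hash value -> bandwidth currently carried via that hash value
    port_bw:  member port -> current bandwidth
    """
    moves = []
    # S31: take the heaviest mapping items of `port` first, until its occupancy is acceptable.
    items = sorted((h for h, p in mapping.items() if p == port),
                   key=lambda h: stats[h], reverse=True)
    for h in items:
        if port_bw[port] / logical_bw <= second_threshold:
            break
        # S32: target = least-loaded other port that stays under the threshold after the move.
        candidates = [p for p in port_bw if p != port
                      and (port_bw[p] + stats[h]) / logical_bw <= second_threshold]
        if not candidates:
            return None  # no suitable target: report an alarm instead of adjusting
        target = min(candidates, key=lambda p: port_bw[p])
        moves.append((h, target))
        port_bw[port] -= stats[h]
        port_bw[target] += stats[h]
    return moves
```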
  • Assume the first condition is condition three above; that is, if the ratio of the difference between the current first bandwidth of the first member port and the second bandwidth to the bandwidth of a single member port in the link aggregation group is greater than or equal to the third threshold, S103 can be executed. The second bandwidth refers to the bandwidth of the second member port before the network device adjusts the first mapping relationship to the second mapping relationship; the third threshold is a preset threshold used to indicate the maximum allowed ratio of the bandwidth difference between two member ports to the bandwidth of a single member port.
  • the maximum allowable physical bandwidth of each member port in the link aggregation group of the network device is 10G
  • the first bandwidth is 8.5G
  • the second bandwidth is 6G
  • the third threshold is 10%
  • In this example, the network device can perform the following operations to determine the specific adjustment: S41, the network device selects a mapping item to be adjusted from the multiple mapping items corresponding to the first member port; S42, the network device determines the second member port as the target member port to which the mapping item to be adjusted will be switched.
  • If the mapping relationship configured on the network device includes the correspondence among hash values, member ports and traffic statistics units, S41 may specifically be: the network device checks, among the multiple mapping items corresponding to the first member port, the number of forwarded bytes counted by each corresponding traffic statistics unit, and takes the mapping item whose traffic statistics unit counted the largest number of forwarded bytes as the mapping item to be adjusted. If, in the link aggregation group of the network device, multiple member ports have a difference between their current bandwidth and the first bandwidth that is not greater than the product of the bandwidth of a single member port and the third threshold, then S42 may select one of these member ports as the target member port.
  • The target member port needs to satisfy: the difference between the bandwidth of the first member port after the traffic corresponding to the mapping item to be adjusted is switched to the target member port, and the bandwidth of the target member port after it carries that traffic, is not greater than the product of the bandwidth of a single member port and the third threshold. Preferably, S42 may select the member port with the smallest current bandwidth among the multiple member ports and judge whether it meets the above condition for selecting the target member port; if not, the member port with the second smallest current bandwidth is selected and judged, and so on, until a member port meeting the condition is found and used as the target member port selected in S42. If no member port meets the condition for selecting the target member port, load balancing is not performed on the first member port in the current acquisition period, that is, S103 to S104 are not executed, and an alarm message is reported to indicate that every member port in the link aggregation group of the network device is running overloaded and service flows cannot be forwarded safely.
  • If the mapping relationship configured on the network device only includes the correspondence between hash values and member ports, S41 may specifically be that the network device randomly selects one of the multiple mapping items corresponding to the first member port as the mapping item to be adjusted; when S42 determines the target member port for the mapping item to be adjusted determined in S41, it is only required that the difference between the current bandwidth of the target member port and the first bandwidth is not greater than the product of the bandwidth of a single member port and the third threshold.
  • If multiple target member ports meet the above condition, any one of them can be used as the target member port in S42; if no target member port meets the condition, load balancing is not performed on the first member port in the current acquisition period, that is, S103 to S104 are not executed, and an alarm message is reported to indicate that every member port in the link aggregation group of the network device is running overloaded and service flows cannot be forwarded safely.
  • In this example, if adjusting between the member port with the largest bandwidth and the member port with the smallest bandwidth cannot bring the member ports of the link aggregation group to a state in which none of them meets the switching condition referred to in this example, the member port with the largest bandwidth may be left unadjusted, and the member port with the second largest bandwidth and the member port with the smallest bandwidth are adjusted instead, and so on, until the adjustment results in none of the member ports of the link aggregation group meeting condition three referred to in this example.
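The S41/S42 selection for condition three can be sketched in the same spirit; again the function names and figures are illustrative assumptions. The mapping item with the most counted bytes is chosen, and candidate targets are tried from the smallest current bandwidth upwards until the post-switch difference between the two ports stays within the product of a single member port's bandwidth and the third threshold.

```python
# Hypothetical sketch of the S41/S42 selection for condition three; names and
# values are illustrative assumptions, not the patented implementation.

def pick_condition3_move(port_bw, items, single_port_bw, third_threshold, overloaded_port):
    """items: (hash_value, counted_bandwidth) pairs of the overloaded port's mapping items."""
    # S41: the mapping item whose traffic statistics unit counted the most bytes.
    hash_value, item_bw = max(items, key=lambda it: it[1])
    limit = single_port_bw * third_threshold
    # S42: try targets from the smallest current bandwidth upwards.
    for target in sorted((p for p in port_bw if p != overloaded_port), key=port_bw.get):
        new_src = port_bw[overloaded_port] - item_bw
        new_dst = port_bw[target] + item_bw
        if abs(new_src - new_dst) <= limit:   # post-switch difference stays within the limit
            return hash_value, target
    return None  # no eligible target: skip this period and raise an alarm instead

# Example: 10G member ports, third threshold 10%, port 1 at 8.5G, port 2 at 6G.
bandwidth = {1: 8.5, 2: 6.0, 3: 8.0, 4: 8.0}
port1_items = [(0, 1.5), (4, 1.5), (8, 1.5), (12, 1.4), (16, 1.3), (20, 1.3)]
print(pick_condition3_move(bandwidth, port1_items, 10.0, 0.10, overloaded_port=1))  # (0, 2)
```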
  • The first condition in the embodiment of the present application is used to characterize an unbalanced load in the link aggregation group, and specifically includes, but is not limited to, the content indicated in the above three examples. If the first condition includes at least two of conditions one, two and three above, then either meeting any one of them or meeting all of them may be regarded as meeting the first condition, which can be designed flexibly according to actual needs. Moreover, the network device can periodically obtain the bandwidth of each member port; once a member port is found to meet the pre-configured first condition, it is determined that the link aggregation group has a load imbalance problem that needs to be handled by the load balancing method provided in the embodiment of this application, so that the load of each member port in the link aggregation group becomes relatively balanced.
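For illustration, the three variants of the first condition can be modelled by a small checker; the thresholds, the use of the lightest other port as the second bandwidth, and the `require_all` switch are assumptions, since the patent leaves the combination of conditions to the implementer.

```python
# Hypothetical first-condition checker; thresholds and combination policy are assumptions.

def meets_first_condition(first_bw, min_other_bw, logical_bw, single_port_bw,
                          first_threshold, second_threshold, third_threshold,
                          require_all=False):
    cond1 = first_bw >= first_threshold                                    # absolute bandwidth
    cond2 = first_bw / logical_bw >= second_threshold                      # occupancy of the logical port
    cond3 = (first_bw - min_other_bw) / single_port_bw >= third_threshold  # imbalance vs. lightest port
    checks = (cond1, cond2, cond3)
    return all(checks) if require_all else any(checks)

# Example: ten 10G member ports (100G logical port), first member port at 8.5G,
# lightest other port at 6G, thresholds 8G / 8% / 10%.
print(meets_first_condition(8.5, 6.0, 100.0, 10.0, 8.0, 0.08, 0.10))   # True
```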
  • S103 may include: S51, the network device determines that the current first bandwidth of the first member port satisfies the first condition; S52, the network device determines, from the multiple mapping items corresponding to the first member port, that the mapping item to be adjusted is the first mapping relationship between the first hash value and the first member port; S53, the network device determines that the target member port corresponding to the mapping item to be adjusted is the second member port; S54, the network device adjusts the first mapping relationship between the first hash value and the first member port to the second mapping relationship between the first hash value and the second member port, that is, the first member port in the first mapping relationship between the first hash value and the first member port is modified to the second member port.
  • Among them, the third bandwidth, determined according to the traffic statistics unit corresponding to the first mapping relationship between the first hash value and the first member port, may be the maximum among the values of the traffic statistics units corresponding to all mapping items of the first member port, that is, the third bandwidth is greater than the fourth bandwidth; and the second member port may be the member port with the smallest current bandwidth in the link aggregation group of the network device.
  • Moreover, after S103, the current fifth bandwidth of the second member port equals the sum of the second bandwidth of the second member port before S103 is executed and the third bandwidth switched over from the first member port (that is, the bandwidth generated by the traffic mapped to the first member port through the first hash value), and the fifth bandwidth satisfies the second condition. The second condition includes at least one of the following: condition four, the fifth bandwidth is less than or equal to the first threshold, where the fifth bandwidth is the bandwidth of the second member port after the network device adjusts the first mapping relationship to the second mapping relationship; condition five, the fifth bandwidth occupancy rate is less than or equal to the second threshold, where the fifth bandwidth occupancy rate is the ratio of the fifth bandwidth to the bandwidth of the logical port corresponding to the link aggregation group; condition six, the ratio of the difference between the fifth bandwidth and the sixth bandwidth to the bandwidth of a single member port in the link aggregation group is less than or equal to the third threshold, where the sixth bandwidth is the bandwidth of the first member port after the network device adjusts the first mapping relationship to the second mapping relationship.
  • The above-mentioned second condition can correspond to the aforementioned first condition: when the first condition is condition one, the second condition can be the corresponding condition four; when the first condition is condition two, the second condition can be the corresponding condition five; when the first condition is condition three, the second condition can be the corresponding condition four.
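A hypothetical sketch of the three variants of the second condition follows; which variant (or combination) is required for a given trigger is a configuration choice, as described above, and the example figures are illustrative assumptions.

```python
# Hypothetical checks of the three variants of the second condition on a candidate
# target port; fifth_bw is the target's bandwidth after the switch, sixth_bw is the
# first member port's bandwidth after the switch.

def condition_four(fifth_bw, first_threshold):
    return fifth_bw <= first_threshold

def condition_five(fifth_bw, logical_bw, second_threshold):
    return fifth_bw / logical_bw <= second_threshold

def condition_six(fifth_bw, sixth_bw, single_port_bw, third_threshold):
    return abs(fifth_bw - sixth_bw) / single_port_bw <= third_threshold

# Example: after switching 1.5G from an 8.5G port to a 6G port (10G members, 100G logical port),
# the target would carry 7.5G and the source 7.0G.
print(condition_four(7.5, 8.0),               # True
      condition_five(7.5, 100.0, 0.08),       # True (7.5% of the logical port)
      condition_six(7.5, 7.0, 10.0, 0.10))    # True (0.5G difference)
```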
  • In this way, the traffic mapped to the first member port through the first hash value is offloaded to the more lightly loaded second member port, reducing the load of the first member port, so that the load of each member port in the link aggregation group becomes relatively balanced, thereby increasing the bandwidth actually provided by the link aggregation group.
  • It should be noted that, in the embodiment of the present application, the network device may perform the first-condition judgment on each member port only once within one acquisition period, and adjust only one mapping item corresponding to the first member port; in the next acquisition period, if the same member port is found to still satisfy the first condition, another mapping item corresponding to that member port is adjusted, and so on, until the network device can no longer find a mapping item that needs to be adjusted, which indicates that the load of each member port in the link aggregation group is relatively balanced.
  • Alternatively, the network device can judge the first condition for each member port multiple times within one acquisition period and adjust mapping items one or more times, until no member port meets the first condition, or a member port to be adjusted meets the first condition but no target member port meeting the second condition can be determined; the number of adjustments and judgments depends on how many times adjustments are needed before the member ports no longer meet the first condition. It should be noted that, in some cases, no matter how many adjustments are made, it may be impossible to make every member port stop meeting the first condition. In that case, the network device can also set an upper limit on adjustments, for example 10 times: within one acquisition period, if a member port still meets the first condition after 10 adjustments, the adjustment is stopped and an alarm message is reported, or another method is used to continue the adjustment, for example the method provided in the embodiment shown in FIG. 5.
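Putting S51 to S54 together with the per-period iteration policy just described, a minimal control loop might look as follows; the table layout, helper names, the 10-adjustment cap handling and the return values are illustrative assumptions.

```python
# Hypothetical per-acquisition-period adjustment loop (S51-S54); table layout, helper
# names, the adjustment cap and the alarm handling are assumptions for illustration only.

def run_period(table, port_bw, first_condition, choose_target, max_adjustments=10):
    """table: {hash_value: [member_port, counted_bandwidth]}, rebuilt each acquisition
    period from the traffic statistics units; port_bw: {member_port: current bandwidth}."""
    for _ in range(max_adjustments):
        overloaded = [p for p in port_bw if first_condition(p, port_bw)]
        if not overloaded:
            return "balanced"                      # no port meets the first condition
        first_port = max(overloaded, key=port_bw.get)
        # S52: pick this port's mapping item with the most counted traffic.
        candidates = [(h, e) for h, e in table.items() if e[0] == first_port]
        if not candidates:
            return "alarm"
        hash_value, entry = max(candidates, key=lambda c: c[1][1])
        # S53: choose the second member port (e.g. the least-loaded eligible port).
        second_port = choose_target(first_port, entry[1], port_bw)
        if second_port is None:
            return "alarm"                         # no target satisfies the second condition
        # S54: rewrite the mapping item so this hash value now points at the second port.
        entry[0] = second_port
        port_bw[first_port] -= entry[1]
        port_bw[second_port] += entry[1]
    return "limit reached"                         # cap hit: report an alarm or try method 200

# Tiny usage example: four 10G member ports, trigger when a port carries 8G or more.
tbl = {0: [1, 3.0], 4: [1, 3.5], 8: [1, 2.5], 1: [2, 2.0], 2: [3, 4.0], 3: [4, 4.5]}
bw = {1: 9.0, 2: 2.0, 3: 4.0, 4: 4.5}
cond = lambda p, b: b[p] >= 8.0
target = lambda src, moved, b: min((p for p in b if p != src and b[p] + moved <= 8.0),
                                   key=b.get, default=None)
print(run_period(tbl, bw, cond, target), bw)       # 'balanced' and the rebalanced loads
```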
  • S104 The network device uses the second member port to forward the first service flow.
  • In a specific implementation, before S103, the network device receives the first service flow whose outgoing port is the link aggregation group and obtains the hash value corresponding to the first service flow as the first hash value; then, based on the first mapping relationship between the first hash value and the first member port included in the mapping relationship configured on the network device, it determines that the specific outgoing port of the first service flow is the first member port and forwards the first service flow through the first member port.
  • However, after S103 is executed, the mapping relationship configured on the network device is updated and includes the second mapping relationship between the first hash value and the second member port. When the network device receives the first service flow again, it obtains the hash value corresponding to the first service flow as the first hash value and, based on the second mapping relationship between the first hash value and the second member port included in the mapping relationship configured on the network device, determines that the specific outgoing port of the first service flow is the second member port and forwards the first service flow through the second member port.
  • As an example, S104 may specifically include: S61, the network device obtains the hash value B according to the first service flow; S62, the network device selects the lower N bits of the hash value B to obtain the first hash value, where N is a positive integer and the number of member ports included in the link aggregation group is less than 2 to the Nth power; S63, the network device determines, according to the second mapping relationship between the first hash value and the second member port, to forward the first service flow through the second member port.
  • Here, N is a constant configured on the network device and is consistent with the N used when configuring the mapping relationship. For example, if the network device selects the lower 5 bits of the 32-bit calculation result as the hash value when configuring the mapping relationship, then, after receiving a service flow, the network device uses the characteristic parameters of the service flow (or the characteristic parameters and the hash factor) as the hash key, calculates a 32-bit binary number, and selects the lower 5 bits of that 32-bit binary number as the hash value corresponding to the service flow.
  • Assume the network device calculates the 32-bit binary number of the first service flow from the hash key as 00100011110000010100111000001011 and takes the lower 5 bits as the first hash value of the first service flow: 01011, which is 11 in decimal. The network device can then look up the mapping relationship in which the binary number 01011 (or the decimal number 11) appears, and use the second member port in that mapping relationship to send the first service flow; here, the network device includes the mapping relationship between the binary number 01011 (or the decimal number 11) and the second member port. It should be noted that, in the mapping relationship configured on the network device in the embodiment of the present application, the hash value may be represented as a binary value or as the decimal number corresponding to the lower N binary bits.
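The low-N-bit selection and table lookup of S61 to S63 can be sketched as follows, reusing the 32-bit value from the example above; the function and variable names are illustrative.

```python
# Hypothetical sketch of S61-S63: take the low N bits of the 32-bit hash and look up the port.

def low_n_bits(hash32, n):
    return hash32 & ((1 << n) - 1)    # keep only the lower N bits

N = 5                                  # 2**5 = 32 > number of member ports
mapping = {11: "member port 2"}        # entry for hash value 01011 (decimal 11)

hash32 = 0b00100011110000010100111000001011   # 32-bit result computed from the hash key
first_hash = low_n_bits(hash32, N)             # 0b01011 == 11
print(first_hash, mapping[first_hash])         # 11 member port 2
```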
  • FIG. 5 is a schematic flowchart of another load balancing method 200 in an embodiment of this application.
  • the method 200 is applied to a network device on which a link aggregation group is set.
  • a mapping relationship of the link aggregation group is pre-established and stored on the network device, and the mapping relationship indicates the member port in the link aggregation group used to forward the service flow.
  • For example, in the network shown in FIG. 1, the method 200 can be applied to the PTN device 106 for service flows sent by the base stations to the core network.
  • the method 200 can also be applied to the core network device 107.
  • the method 200 may include the following S201 to S206, for example:
  • S201 The network device receives the third service flow.
  • S202 The network device determines that the outgoing port recorded in the mapping relationship table for forwarding the third service flow is the third member port in the link aggregation group, where the mapping relationship table records the mapping relationship between the third hash value and the third member port, and the third hash value is a hash value obtained by performing a hash calculation on the third service flow using the first hash algorithm.
  • A first hash algorithm is pre-configured on the network device and is used, before S203, to obtain the hash value corresponding to a service flow received by the network device. The first hash algorithm is used to indicate: the hash key fields used when calculating the hash value of a service flow, the internal byte order of each hash key field, the order between the hash key fields, the values of some hash key fields, and the specific hash algorithm used.
  • For example, the first hash algorithm indicates: use the DIP, SIP and hash factor of the service flow as the hash key, in the order DIP->SIP->hash factor; DIP and SIP are each 4 bytes; the hash factor is 5; and the specific hash algorithm used is the Message Digest Algorithm 5 (English: Message Digest Algorithm, referred to as MD5).
  • As another example, the first hash algorithm indicates: use the DIP and SIP of the service flow as the hash key, in the order DIP->SIP; DIP and SIP are each 4 bytes, arranged as byte 0, byte 1, byte 2 and byte 3; and the specific hash algorithm used is MD5.
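A minimal sketch of how the first hash algorithm described above might be modelled: the DIP, the SIP and the hash factor are concatenated in the indicated order and digested with MD5. Reducing the 128-bit MD5 digest to a 32-bit value by taking its first 4 bytes is an assumption; the patent only states that a 32-bit result is produced.

```python
# Hypothetical model of the first hash algorithm (DIP -> SIP -> hash factor, MD5).
# Taking the first 4 digest bytes as the 32-bit result is an illustrative assumption.
import hashlib
import ipaddress

def hash32_first_algorithm(dip, sip, hash_factor=5):
    key = (ipaddress.IPv4Address(dip).packed       # DIP: 4 bytes, byte 0..3
           + ipaddress.IPv4Address(sip).packed     # SIP: 4 bytes, byte 0..3
           + bytes([hash_factor]))                 # hash factor appended last
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big")       # 32-bit hash value

h = hash32_first_algorithm("10.0.0.1", "192.168.1.7")
print(hex(h), h & 0b11111)   # 32-bit value and its lower 5 bits (the table index)
```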
  • It can be understood that, after receiving the third service flow, the network device can first calculate the hash value C corresponding to the third service flow according to the first hash algorithm configured on the network device; then, according to the N configured on the network device, extract the lower N bits of the hash value C to obtain the third hash value; and then, based on the mapping relationship between the third hash value and the third member port recorded in the local mapping relationship table, forward the third service flow through the third member port.
  • S203 The network device determines that the current seventh bandwidth of the third member port is greater than or equal to a fourth threshold.
  • S204 The network device adjusts the first hash algorithm to the second hash algorithm.
  • When the network device determines, according to the current seventh bandwidth of the third member port, that the third member port meets the third condition, S204 can be executed. The third condition includes but is not limited to at least one of the following: the seventh bandwidth is greater than or equal to the fourth threshold; the seventh bandwidth occupancy rate is greater than or equal to the fifth threshold, where the seventh bandwidth occupancy rate is the ratio of the seventh bandwidth to the bandwidth of the logical port corresponding to the link aggregation group; or the ratio of the difference between the seventh bandwidth and the eighth bandwidth to the bandwidth of a single member port in the link aggregation group is greater than or equal to the sixth threshold, where the eighth bandwidth is the minimum bandwidth of the other member ports before the network device executes S204.
  • As an example, adjusting the first hash algorithm to the second hash algorithm may refer to adjusting the internal byte order of a characteristic parameter of the service flow in the hash key. For example: assuming that the hash key includes the DIP, and the original DIP is arranged as byte 0, byte 1, byte 2 and byte 3, then the adjusted DIP can be arranged as byte 2, byte 3, byte 0 and byte 1.
  • As another example, adjusting the first hash algorithm to the second hash algorithm may refer to adjusting the order between the characteristic parameters of the service flow in the hash key. For example: assuming that the hash key includes the DIP and SIP, and their original order in the hash key is DIP->SIP, then their adjusted order in the hash key is SIP->DIP.
  • As still another example, adjusting the first hash algorithm to the second hash algorithm may also refer to adjusting the value of the hash factor in the hash key. For example: assuming that the hash factor in the original hash key is 5, the hash factor in the adjusted hash key can be 10.
  • As yet another example, adjusting the first hash algorithm to the second hash algorithm may also refer to changing the specific hash algorithm used. For example: assuming that the hash algorithm used in the first hash algorithm is MD5, the hash algorithm used in the adjusted second hash algorithm may be a secure hash algorithm (English: SHA-1).
  • the network device adjusts the first hash algorithm to the second hash algorithm, including but not limited to the above four possible adjustment methods.
  • the above four adjustment methods can be used alone or in combination, and they can be configured on the network device according to actual needs.
  • For example, for the first hash algorithm shown in FIG. 6, the adjusted second hash algorithm can be as shown in FIGS. 8a-8c: the second hash algorithm may indicate that the order of the hash key fields is SIP->DIP->hash factor, see FIG. 8a; the second hash algorithm may also indicate that the DIP is arranged as byte 2, byte 3, byte 0 and byte 1, and the SIP is arranged as byte 2, byte 3, byte 0 and byte 1, see FIG. 8b; the second hash algorithm may also indicate that the hash factor is adjusted to 10, see FIG. 8c.
  • As another example, for the first hash algorithm shown in FIG. 7, the adjusted second hash algorithm can be as shown in FIGS. 9a-9b: the second hash algorithm may indicate that the order of the hash key fields is SIP->DIP, see FIG. 9a; the second hash algorithm may also indicate that the DIP is arranged as byte 2, byte 3, byte 0 and byte 1, and the SIP is arranged as byte 2, byte 3, byte 0 and byte 1, see FIG. 9b.
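The adjustments described above can be modelled as small transformations of the hash-key construction. In the sketch below, the function names and the exact byte rotation are illustrative assumptions; it shows that swapping the DIP/SIP order, rotating the bytes inside each address, or changing the hash factor yields a different 32-bit value for the same flow, which usually lands on a different table index and therefore possibly a different member port.

```python
# Hypothetical sketch of turning the first hash algorithm into a second one by
# reordering the key fields, rotating bytes inside each address, or changing the
# hash factor; all names and the exact rotation are illustrative assumptions.
import hashlib
import ipaddress

def rotate(b):                        # byte 0,1,2,3 -> byte 2,3,0,1 (as in FIG. 8b)
    return b[2:] + b[:2]

def hash32(dip, sip, *, swap_order=False, rotate_bytes=False, hash_factor=5):
    d = ipaddress.IPv4Address(dip).packed
    s = ipaddress.IPv4Address(sip).packed
    if rotate_bytes:
        d, s = rotate(d), rotate(s)
    key = (s + d if swap_order else d + s) + bytes([hash_factor])
    return int.from_bytes(hashlib.md5(key).digest()[:4], "big")

N = 5
first = hash32("10.0.0.1", "192.168.1.7")                                     # first algorithm
second = hash32("10.0.0.1", "192.168.1.7", swap_order=True, hash_factor=10)   # one possible second algorithm
print(first & ((1 << N) - 1), second & ((1 << N) - 1))   # usually different table indices
```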
  • It can be seen that, by adjusting the first hash algorithm to the second hash algorithm in S204, when the current mapping relationship in the mapping relationship table cannot balance the load of the member ports in the link aggregation group, the service flows can be redistributed across the member ports; to a certain extent the service flows can be spread evenly over each member port, making it possible to balance the load of each member port in the link aggregation group.
  • S205 The network device performs a hash calculation on the third service flow according to the second hash algorithm to obtain a fourth hash value.
  • S206 The network device determines to forward the third service flow through the fourth member port according to the mapping relationship between the fourth hash value and the fourth member port recorded in the mapping relationship table.
  • In a specific implementation, before S204, the network device receives the third service flow whose outgoing port is the link aggregation group and obtains, according to the first hash algorithm, the hash value corresponding to the third service flow as the third hash value; then, according to the mapping relationship between the third hash value and the third member port included in the mapping relationship table configured on the network device, the specific outgoing port of the third service flow can be determined as the third member port, and the third member port is used to forward the third service flow.
  • However, after S204 is executed, the network device has adjusted the first hash algorithm to the second hash algorithm. When the network device receives the third service flow again, it obtains, according to the second hash algorithm, the hash value corresponding to the third service flow as the fourth hash value; then, according to the mapping relationship between the fourth hash value and the fourth member port recorded in the mapping relationship table configured on the network device, the specific outgoing port of the third service flow can be determined as the fourth member port, and the fourth member port is used to forward the third service flow.
  • As an example, S205 to S206 may specifically include: S71, the network device calculates the hash value D corresponding to the third service flow according to the second hash algorithm; S72, the network device selects the lower N bits of the hash value D to obtain the fourth hash value, where N is a positive integer and the number of member ports included in the link aggregation group is less than 2 to the Nth power; S73, the network device determines, according to the mapping relationship between the fourth hash value and the fourth member port, to forward the third service flow through the fourth member port.
  • It can be seen that the method 200 does not need to adjust the mapping relationship table configured on the network device; instead, by adjusting the first hash algorithm to the second hash algorithm, the hash value corresponding to a service flow can be changed, thereby changing the member port to which the service flow is mapped. This amounts to re-hashing the traffic and makes it possible to balance the load on each member port.
  • It should be noted that the third member port in the method 200 may be the first member port in the method 100; the fourth member port in the method 200 may be the second member port in the method 100; and the third service flow in the method 200 may be the first service flow in the method 100.
  • It can be seen that, with the method 200, a network device using link aggregation can dynamically adjust, according to the bandwidth actually carried by each member port in the link aggregation group, the hash algorithm used to calculate the hash value of a service flow (for example, the value of the hash factor), thereby changing the hash value corresponding to the service flow and remapping the service flow to a different member port. This overcomes the problem of unbalanced load on the member ports caused by forwarding service flows according to a fixed hash algorithm for calculating service flow hash values and a fixed mapping relationship between hash values and member ports, making it possible for the member ports in the link aggregation group to maintain load balance to a certain extent, so that the network device can meet high bandwidth requirements.
  • embodiments of the present application also provide a load balancing method, which can combine the above method 100 and method 200 to perform load balancing on the link aggregation group on the network device.
  • As an example, the embodiment of the present application may first execute method 100, adjusting the mapping relationship between a member port that meets the first condition and the hash value corresponding to that member port, so as to relieve the load imbalance of the link aggregation group; when method 100 cannot overcome the problem that some member ports still meet the first condition, method 200 can then be executed: by adjusting the hash algorithm on the network device, the hash values corresponding to the service flows are changed, so that different service flows originally mapped to the same member port are scattered to different member ports, thereby solving the load imbalance problem.
  • As another example, the embodiment of the present application may first execute method 200, continuously adjusting the hash algorithm on the network device so that service flows are spread over different member ports as far as possible and the load on each member port is kept balanced to a certain extent; then method 100 is executed: when some member port meets the first condition, the hash value corresponding to a service flow is remapped to another member port through the mapping relationship between the member port and its corresponding hash value, so as to solve the load imbalance problem.
  • As still another example, the embodiment of the present application may first execute method 200, continuously adjusting the hash algorithm on the network device so that service flows are mapped to different member ports as far as possible and the load on each member port is kept balanced to a certain extent; then method 100 is executed: when some member port meets the first condition, the hash value corresponding to a service flow is remapped to another member port through the mapping relationship between the member port and its corresponding hash value; and when method 100 cannot overcome the problem that some member ports still meet the first condition, method 200 can be executed again to change the hash values corresponding to the service flows, so that different service flows originally mapped to the same member port are distributed to different member ports, thereby solving the load imbalance problem.
  • In this way, the network device can achieve load balancing across the member ports of the link aggregation group, thereby ensuring that the link aggregation group can play a greater role and provide larger bandwidth during use.
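One way to picture the combined strategy is a simple controller that first tries the mapping-item adjustment of method 100 and falls back to the hash-algorithm adjustment of method 200 when method 100 can no longer remove the imbalance; the control loop below is an illustrative assumption, since the embodiment leaves the order and the switch-over policy configurable.

```python
# Hypothetical controller combining method 100 (adjust mapping items) and
# method 200 (adjust the hash algorithm); the policy shown is one assumed ordering.

def balance_once(run_method_100, run_method_200):
    """run_method_100() -> 'balanced' | 'alarm' | 'limit reached'
       run_method_200() -> True if the hash algorithm was changed."""
    outcome = run_method_100()             # method 100: remap hash values to other ports
    if outcome == "balanced":
        return "balanced by method 100"
    if run_method_200():                   # method 200: change the hash algorithm itself
        return "re-hashed by method 200; re-evaluate next acquisition period"
    return "report alarm: ports remain overloaded"

# Example wiring with stub callables.
print(balance_once(lambda: "limit reached", lambda: True))
```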
  • an embodiment of the present application also provides a network device 1000, as shown in FIG. 10.
  • the network device 1000 includes a transceiver unit 1001 and a processing unit 1002.
  • The transceiver unit 1001 is configured to perform the transceiving operations in the method 100 shown in FIG. 4 or the transceiving operations in the method 200 shown in FIG. 5; the processing unit 1002 is configured to perform the operations other than the transceiving operations in the method 100 shown in FIG. 4, or the operations other than the transceiving operations in the method 200 shown in FIG. 5.
  • For example, when the network device 1000 performs the method 100: the transceiver unit 1001 is configured to forward the first service flow through the second member port; the processing unit 1002 is configured to determine, according to the first hash value corresponding to the first service flow and the first mapping relationship between the first hash value and the first member port, that the outgoing port of the first service flow is the first member port, to determine the current first bandwidth of the first member port, and to adjust the first mapping relationship to the second mapping relationship between the first hash value and the second member port according to the first bandwidth.
  • For example, when the network device 1000 performs the method 200: the transceiver unit 1001 is configured to receive the third service flow; the processing unit 1002 is configured to determine that the outgoing port recorded in the mapping relationship table for forwarding the third service flow is the third member port in the link aggregation group; the processing unit 1002 is further configured to determine that the current seventh bandwidth of the third member port is greater than or equal to the fourth threshold; the processing unit 1002 is further configured to adjust the first hash algorithm to the second hash algorithm; and the processing unit 1002 is further configured to perform a hash calculation on the third service flow according to the second hash algorithm to obtain the fourth hash value, and to determine, according to the mapping relationship between the fourth hash value and the fourth member port recorded in the mapping relationship table, that the third service flow is forwarded through the fourth member port.
  • an embodiment of the present application also provides a network device 1100, as shown in FIG. 11.
  • the network device 1100 includes a communication interface 1101 and a processor 1102.
  • The communication interface 1101 is used to perform the transceiving operations in the method 100 shown in FIG. 4 or the transceiving operations in the method 200 shown in FIG. 5; the processor 1102 is used to perform the operations other than the transceiving operations in the method 100 shown in FIG. 4, or the operations other than the transceiving operations in the method 200 shown in FIG. 5.
  • For example, when the network device 1100 performs the method 100: the communication interface 1101 is used to forward the first service flow through the second member port; the processor 1102 is used to determine, according to the first hash value corresponding to the first service flow and the first mapping relationship between the first hash value and the first member port, that the outgoing port of the first service flow is the first member port, to determine the current first bandwidth of the first member port, and to adjust the first mapping relationship to the second mapping relationship between the first hash value and the second member port according to the first bandwidth.
  • For example, when the network device 1100 performs the method 200: the communication interface 1101 is used to receive the third service flow; the processor 1102 is used to determine that the outgoing port recorded in the mapping relationship table for forwarding the third service flow is the third member port in the link aggregation group; the processor 1102 is further configured to determine that the current seventh bandwidth of the third member port is greater than or equal to the fourth threshold; the processor 1102 is further configured to adjust the first hash algorithm to the second hash algorithm; and the processor 1102 is further configured to perform a hash calculation on the third service flow according to the second hash algorithm to obtain the fourth hash value, and to determine, according to the mapping relationship between the fourth hash value and the fourth member port recorded in the mapping relationship table, that the third service flow is forwarded through the fourth member port.
  • an embodiment of the present application also provides a network device 1200, as shown in FIG. 12.
  • the network device 1200 includes a memory 1201 and a processor 1202.
  • The memory 1201 is used to store program code; the processor 1202 is used to run instructions in the program code, so that the network device 1200 executes the method 100 in the embodiment shown in FIG. 4 or the method 200 in the embodiment shown in FIG. 5.
  • the processor may be a central processing unit (English: central processing unit, abbreviation: CPU), a network processor (English: network processor, abbreviation: NP), or a combination of CPU and NP.
  • the processor may also be an application-specific integrated circuit (English: application-specific integrated circuit, abbreviation: ASIC), a programmable logic device (English: programmable logic device, abbreviation: PLD) or a combination thereof.
  • the above-mentioned PLD can be a complex programmable logic device (English: complex programmable logic device, abbreviation: CPLD), a field-programmable gate array (English: field-programmable gate array, abbreviation: FPGA), generic array logic (English: generic array logic, abbreviation: GAL) or any combination thereof.
  • the processor may refer to one processor or may include multiple processors.
  • the memory may include volatile memory (English: volatile memory), such as random access memory (English: random-access memory, abbreviation: RAM); the memory may also include non-volatile memory (English: non-volatile memory), For example, read-only memory (English: read-only memory, abbreviation: ROM), flash memory (English: flash memory), hard disk (English: hard disk drive, abbreviation: HDD) or solid state drive (English: solid-state drive, Abbreviation: SSD); the memory can also include a combination of the above-mentioned types of memory.
  • the memory may refer to one memory, or may include multiple memories.
  • computer-readable instructions are stored in the memory, and the computer-readable instructions include multiple software modules, such as a sending module, a processing module, and a receiving module. After executing each software module, the processor can perform corresponding operations according to the instructions of each software module. In this embodiment, an operation performed by a software module actually refers to an operation performed by the processor according to an instruction of the software module. After the processor executes the computer-readable instructions in the memory, it can perform all operations that can be performed by the network device according to the instructions of the computer-readable instructions.
  • the communication interface 1101 of the network device 1100 can be specifically used as the transceiver unit 1001 in the network device 1000 to implement data communication between the network device and other network devices.
  • In addition, an embodiment of the present application also provides a computer-readable storage medium storing instructions which, when run on a computer, cause the computer to execute the load balancing method in the embodiment shown in FIG. 4 or FIG. 5.
  • the embodiment of the present application also provides a computer program product, which when running on a computer, causes the computer to execute the load balancing method in the embodiment shown in FIG. 4 or FIG. 5.
  • the various embodiments in this specification are described in a progressive manner, and the same or similar parts between the various embodiments can be referred to each other, and each embodiment focuses on the difference from other embodiments.
  • For the device embodiments, the description is relatively simple; for related parts, reference may be made to the description of the method embodiments.
  • the device embodiments described above are merely illustrative.
  • The modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical modules, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments. Those of ordinary skill in the art can understand and implement them without creative effort.

Abstract

An embodiment of the present application discloses a load balancing method and device. A network device performs a hash calculation on a first service flow to obtain its corresponding first hash value, and determines, according to a first mapping relationship between the first hash value and a first member port in a link aggregation group, that the outgoing port of the first service flow is the first member port; the network device determines the current first bandwidth of the first member port and, according to the first bandwidth, adjusts the first mapping relationship to a second mapping relationship between the first hash value and a second member port, so that the second member port is used to forward the subsequently received first service flow. In this way, the current problem of unbalanced load among the member ports of the link aggregation group can be overcome, the member ports of the link aggregation group carry service flows more reasonably and evenly, load balancing among the member ports of the link aggregation group is achieved, and the network device can meet high bandwidth requirements.

Description

一种负载均衡方法和设备
本申请要求于2019年12月31日提交中国国家知识产权局、申请号为201911417080.X、申请名称为“一种负载均衡方法和设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及安全通信技术领域,特别是涉及一种负载均衡方法和设备,该方法用于存在链路聚合的网络场景中,对该链路聚合组中的多个成员端口进行负载均衡。
背景技术
网络设备通常具有多个物理端口,而由于单个物理端口能够承载的物理带宽有限,为了使得网络设备具有高带宽的能力,可以在网络设备上采用链路聚合的方式,即,将其上多个物理端口组成一个逻辑端口,该多个物理端口对应的多条物理链路可以称为一个链路聚合组,每个物理端口称为该链路聚合组的一个成员端口。由于该逻辑端口可承载的带宽为多个物理端口承载的物理带宽之和,例如:假设配置交换机上的10个10G带宽的物理端口汇聚为一个逻辑端口,那么,该逻辑端口具有100G的带宽,所以,网络设备采用链路聚合的方式能够实现高带宽的需求。
目前,网络设备通过链路聚合组转发业务流时,通常依据哈希值和链路聚合组中的成员端口之间固定的映射关系,确定该业务流对应的哈希值在链路聚合组中的成员端口,并利用所确定的该成员端口转发该业务流,导致链路聚合组中各成员端口的负载不均衡,从而使得该链路聚合组实际提供的带宽大大降低,链路聚合在网络设备上所发挥的作用大打折扣。
发明内容
基于此,本申请实施例提供了一种负载均衡方法和设备,通过链路聚合组中各成员端口的实际负载情况,动态的调整分担到各成员端口的负载,实现链路聚合组中各个成员端口的负载均衡,从而使得网络设备上链路聚合组能够提供尽可能高的带宽。
第一方面,提供了一种负载均衡方法,该方法应用网络设备上,该网络设备可以通过和另一网络设备之间的链路聚合组,向该另一网络设备转发业务流,该网络设备上进行负载均衡的过程具体可以包括:网络设备对所接收的第一业务流进行哈希计算,得到第一业务流对应的第一哈希值;该网络设备根据该第一哈希值以及第一哈希值和链路聚合组中的第一成员端口之间的第一映射关系,确定该第一业务流的出端口为第一成员端口;此时,该网络设备可以确定该第一成员端口当前的第一带宽,并根据该第一带宽将所述第一映射关系调整为第一哈希值和第二成员端口之间的映射关系,并采用该第二成员端口转发后续接收到的第一业务流。可见,通过本申请实施例提供的方法,在使用链路聚合的网络设备中,能够根据链路聚合组中各成员端口的实际承载的带宽情况,动态的调整各成员端口和哈希值的对应关系,克服了依据固定不变的哈希值和成员端口之间的映射关系转发业务流, 导致各成员端口的负载不均衡的问题,使得链路聚合组中各成员端口可以按照调整后的映射关系进行业务流的负载分担,实现了链路聚合组中各个成员端口的负载均衡,从而使网络设备能够满足高带宽的需求。
在一些具体的实现方式中,第一成员端口满足第一条件时,将所述第一映射关系调整为所述第二映射关系。其中,该第一条件,用于表征链路聚合组中存在负载不均衡的状况,具体包括但不限于以下至少一项:所述第一带宽大于或者等于第一阈值;所述第一带宽占有率大于或者等于第二阈值,所述第一带宽占有率为所述第一带宽与所述链路聚合组对应的逻辑端口的带宽的比值;或者,所述第一带宽与所述第二带宽的差值与所述链路聚合组中单个成员端口的带宽的比值大于或者等于第三阈值,所述第二带宽为网络设备将所述第一映射关系调整为所述第二映射关系之前,所述第二成员端口的带宽。若第一条件包括上述三个中的至少两个,那么,满足任意一个可以视作满足第一条件,或者,全部满足才可以视作满足第一条件,具体可以根据实际需求进行灵活设计。而且,网络设备能够周期性的获取各成员端口的带宽,一旦发现某个成员端口满足预先配置的第一条件,则确定该链路聚合组存在负载不均衡的问题,需要通过本申请实施例提供的负载均衡方法进行处理,从而使得链路聚合组中各成员端口的负载达到相对均衡。
在另一些可能的实现方式中,当确定第一成员端口满足第一条件,需要对第一成员端口当前的第一带宽进行调整时,可以选择流量切换的目标成员端口,具体可以选择链路聚合组中当前带宽最小的成员端口,例如:本申请实施例中选择的目标成员端口为第二成员端口,而该第二成员端口为在网络设备将所述第一映射关系调整为所述第二映射关系之前,链路聚合组的各成员端口中当前带宽最小的成员端口。
在再一些具体的实现方式中,成员端口可以作为目标成员端口分担第一成员端口切出的带宽,还需要该目标成员端口满足第二条件。例如:当目标成员端口为第二成员端口时,该第二成员端口满足第二条件时网络设备才可以将第一映射关系调整为第二映射关系。该第二条件,用于表征能够为第一成员端口分担流量以改善链路聚合组中负载不均衡的状况,具体包括但不限于以下至少一项:第五带宽小于或等于第一阈值,所述第五带宽为所述网络设备将所述第一映射关系调整为所述第二映射关系之后所述第二成员端口的带宽;所述第五带宽占有率小于或等于第二阈值,所述第五带宽为所述网络设备将所述第一映射关系调整为所述第二映射关系之后所述第二成员端口的带宽,所述第五带宽占有率为所述第五带宽与所述链路聚合组对应的逻辑端口的带宽的比值;或者,所述第五带宽与所述第六带宽的差值与所述链路聚合组中单个成员端口的带宽的比值小于或等于第三阈值,所述第五带宽为网络设备将所述第一映射关系调整为所述第二映射关系之后所述第二成员端口的带宽,所述第六带宽为网络设备将所述第一映射关系调整为所述第二映射关系之后所述第一成员端口的带宽。这样,确保将第一成员端口的不部分带宽分分担至第二成员端口后,第二成员端口当前的带宽还不会导致链路聚合组出现负载不均衡的问题,保证两个成员端口之间不会出现彼此往复调整而不能改善负载不均问题,使得该负载均衡方法更加有效。
在又一些可能的实现方式中,本申请实施例还可以包括:网络设备获取所述第一映射关系和第三映射关系,第一映射关系包括所述第一成员端口、所述第一哈希值和第一流量 统计单元之间的映射关系,第三映射关系包括所述第一成员端口、第二哈希值和第二流量统计单元之间的映射关系,其中,所述第二哈希值为根据第二业务流进行哈希计算得到的哈希值;所述网络设备根据所述第一流量统计单元的统计结果确定所述第一业务流占用所述第一成员端口的第三带宽;所述网络设备根据所述第二流量统计单元的统计结果,确定所述第二业务流占用所述第一成员端口的第四带宽;所述网络设备根据所述第三带宽和所述第四带宽,确定所述第一带宽。其中,流量统计单元具体可以是计数器或者令牌桶,用于统计经过其对应的哈希值映射到其对应的成员端口,利用该成员端口转发的字节数。需要说明的是,在映射关系中增加与哈希值一一对应的流量统计单元,为同一成员端口的负载调整提供了更细的粒度,即,可以将映射到同一成员端口但是哈希值不同的业务流进行划分并选择性的切走部分流量,扩大了本申请实施例中对同一成员端口映射至少两个哈希值的作用,使得负载均衡效果更好。举例来说,第一映射关系,可以是指第一成员端口、第一哈希值与所述第一流量统计单元统计的第一成员端口带宽之间的映射关系;或者,也可以是第一成员端口、第一哈希值与第一流量统计单元的标识之间的映射,而网络设备通过所述第一流量单元的标识确定所述流量统计单元,进而确定所述第一流量统计单元的流量统计结果。
作为一个示例,当第一带宽满足第一条件时,第一成员端口对应第一映射关系和第三映射关系,那么,可以选择承载较大带宽的映射关系进行调整。一种情况下,如果根据所述第一流量统计单元的统计结果确定的第三带宽,大于根据所述第二流量统计单元的统计结果确定的第四带宽,则,选择将第一哈希值和第一成员端口之间的第一映射关系进行调整,从而影响后续对第一业务流的转发;另一种情况下,如果第三带宽小于第四带宽,则,选择将第二哈希值和第一成员端口之间的第三映射关系进行调整,从而影响后续对第二业务流的转发。如此,能够灵活的实现对第一成员端口上部分带宽对应的映射关系进行调整,使得链路聚合组快速的实现负载均衡。
作为一个具体的实现方式,本申请实施例还可以在调整映射关系之后,通过调整哈希算法的方式继续改善链路聚合组中的负载不均衡问题。具体可以包括:网络设备接收第三业务流,确定映射关系表中记录的转发所述第三业务流的出端口为所述链路聚合组中的第三成员端口,其中,该映射关系表中记录了第三哈希值与所述第三成员端口的映射关系,第三哈希值为通过第一哈希算法对所述第三业务流进行哈希计算得到的哈希值;接着,该网络设备确定所述第三成员端口当前的第七带宽大于或者等于第四阈值,则将所述第一哈希算法调整为第二哈希算法;那么,该网络设备即可根据所述第二哈希算法对所述第三业务流进行哈希计算,得到第四哈希值,并根据所述映射关系表中记录的所述第四哈希值与第四成员端口的映射关系,确定通过所述第四成员端口转发所述第三业务流。
其中,网络设备将所述第一哈希算法调整为第二哈希算法,包括但不限于下述方式的至少一种:调整所述第一业务流中至少一个特征参数的字节顺序;调整所述第一业务流中各特征参数的排列顺序;或,调整哈希因子的值。其中,特征参数包括所述第一业务流的目的地址DA、源地址SA、目的互联网协议地址DIP和源互联网协议地址SIP中的至少一个。
该实现方式下,作为一个示例,第三成员端口可以为所述第一成员端口;第四成员端口可以为所述第二成员端口。
可见,该实现方式下,在使用链路聚合的网络设备中,能够根据链路聚合组中各成员端口的实际承载的带宽情况,将两种动态的调整方式进行结合,既可以动态的调整映射关系中哈希值对应的成员端口;也可以动态的调整计算业务流的哈希值的哈希算法(例如:哈希因子的值),改变业务流对应的哈希值,从而原本映射到同一成员端口的多个业务流映射到不同的成员端口上。从而,通过该实现方式,克服了依据固定不变的计算业务流哈希值的哈希算法,以及哈希值和成员端口之间的确定好的映射关系,进行业务流转发,导致各成员端口的负载不均衡的问题,使得链路聚合组中各成员端口在一定程度上保持负载均衡成为可能,从而使网络设备能够满足高带宽的需求。
需要说明的是,本申请实施例中,也可以先执行通过调整计算业务流的哈希值的哈希算法实现负载均衡方法,再执行通过调整映射关系中哈希值对应的成员端口实现负载均衡的方法,两者之间的执行顺序在本实施例中不做具体限定,具体的执行顺序可以根据实际负载情况或者预先配置确定。
需要说明的是,本申请实施例中,对业务流进行哈希计算获得该业务流对应的哈希值,具体过程可以包括:提取业务流的特征参数,对特征参数(或特征参数和哈希因子)进行哈希计算,获得32位的哈希值;取32位哈希值的低N位,作为该业务流对应的哈希值,其中,N为正整数,链路聚合组包括的成员端口数量小于2的N次方。这样,确保每个成员端口至少有两个对应的哈希值,即,每个成员端口有至少两条对应的映射关系,使得执行本申请实施例进行负载均衡更加灵活和有效。
第二方面,提供了一种负载均衡方法,该方法应用网络设备上,该网络设备可以通过和另一网络设备之间的链路聚合组,向该另一网络设备转发业务流,该网络设备上进行负载均衡的过程具体可以包括:网络设备通过第一哈希算法对所接收的第一业务流进行哈希计算,得到第一业务流对应的第一哈希值;该网络设备根据映射关系表中记录的第一哈希值和链路聚合组中的第一成员端口之间的映射关系,确定该第一业务流的出端口为第一成员端口;此时,该网络设备可以确定该第一成员端口当前的第一带宽,在确定第一带宽大于或等于预设阈值时,将第一哈希算法调整为第二哈希算法;那么,该网络设备就通过第二哈希算法对后续再接收到第一业务流进行哈希计算,得到第二哈希值,该网络设备根据映射关系表中第二哈希值和链路聚合组中的第二成员端口之间的映射关系,确定通过第二成员端口转发该第一业务流。可见,通过本申请实施例提供的方法,在使用链路聚合的网络设备中,能够根据链路聚合组中各成员端口的实际承载的带宽情况,动态的调整计算业务流的哈希值的哈希算法(例如:哈希因子的值),改变业务流对应的哈希值,从而原本映射到同一成员端口的多个业务流映射到不同的成员端口上,克服了依据固定不变的计算业务流哈希值的哈希算法,以及哈希值和成员端口之间的确定好的映射关系,进行业务流转发,导致各成员端口的负载不均衡的问题,使得链路聚合组中各成员端口在一定程度上保持负载均衡成为可能,从而使网络设备能够满足高带宽的需求。
第三方面,本申请还提供了网络设备,包括收发单元和处理单元。其中,收发单元用 于执行上述第一方面或第二方面提供的方法中的收发操作;处理单元用于执行上述第一方面或第二方面中除了收发操作以外的其他操作。例如:当所述网络设备执行所述第一方面所述的方法时,所述收发单元用于接收第一业务流,所述收发单元还用于采用第二成员端口转发第一业务流;所述处理单元用于对所接收的第一业务流进行哈希计算,得到第一业务流对应的第一哈希值;所述处理单元还用于根据该第一哈希值以及第一哈希值和链路聚合组中的第一成员端口之间的第一映射关系,确定该第一业务流的出端口为第一成员端口;所述处理单元还用于确定该第一成员端口当前的第一带宽;所述处理单元还用于根据该第一带宽将所述第一映射关系调整为第一哈希值和第二成员端口之间的映射关系。又例如:当所述网络设备执行所述第二方面所述的方法时,所述收发单元用于接收第一业务流,所述收发单元还用于采用第二成员端口转发第一业务流;所述处理单元用于通过第一哈希算法对所接收的第一业务流进行哈希计算,得到第一业务流对应的第一哈希值;所述处理单元还用于根据映射关系表中记录的第一哈希值和链路聚合组中的第一成员端口之间的映射关系,确定该第一业务流的出端口为第一成员端口;所述处理单元还用于确定该第一成员端口当前的第一带宽;所述处理单元还用于确定第一带宽大于或等于预设阈值时,将第一哈希算法调整为第二哈希算法;所述处理单元还用于通过第二哈希算法对后续再接收到第一业务流进行哈希计算,得到第二哈希值,其中该网络设备根据映射关系表中第二哈希值和链路聚合组中的第二成员端口之间的映射关系。
第四方面,本申请实施例还提供了一种网络设备,包括通信接口和处理器。其中,通信接口用于执行前述第一方面或第二方面提供的方法中的收发操作;处理器,用于执行前述第一方面或第二方面提供的方法中除所述收发操作以外的其他操作。
第五方面,本申请实施例还提供了一种网络设备,该网络设备包括存储器和处理器。其中,存储器用于存储程序代码;处理器用于运行所述程序代码中的指令,使得该网络设备执行以上第一方面或第二方面提供的方法。
第六方面,本申请实施例还提供了一种计算机可读存储介质,该计算机可读存储介质中存储有指令,当其在计算机上运行时,使得所述计算机执行以上第一方面或第二方面提供的所述的负载均衡方法。
第七方面,本申请实施例还提供了计算机程序产品,当其在计算机上运行时,使得计算机执行前述第一方面或第二方面提供的所述的负载均衡方法。
Brief Description of Drawings
FIG. 1 is a schematic diagram of a network system framework involved in an application scenario in an embodiment of this application;
FIG. 2 is a schematic diagram of an example of load sharing in the scenario shown in FIG. 1 in an embodiment of this application;
FIG. 3 is a schematic structural diagram of the PTN device 106 in the scenario shown in FIG. 1 in an embodiment of this application;
FIG. 4 is a schematic flowchart of a load balancing method 100 in an embodiment of this application;
FIG. 5 is a schematic flowchart of another load balancing method 200 in an embodiment of this application;
FIG. 6 is a schematic diagram of a hash key in a first hash algorithm in an embodiment of this application;
FIG. 7 is a schematic diagram of a hash key in another first hash algorithm in an embodiment of this application;
FIG. 8a is a schematic diagram of a hash key in a second hash algorithm in an embodiment of this application;
FIG. 8b is a schematic diagram of a hash key in another second hash algorithm in an embodiment of this application;
FIG. 8c is a schematic diagram of a hash key in still another second hash algorithm in an embodiment of this application;
FIG. 9a is a schematic diagram of a hash key in yet another second hash algorithm in an embodiment of this application;
FIG. 9b is a schematic diagram of a hash key in another second hash algorithm in an embodiment of this application;
FIG. 10 is a schematic structural diagram of a network device 1000 in an embodiment of this application;
FIG. 11 is a schematic structural diagram of a network device 1100 in an embodiment of this application;
FIG. 12 is a schematic structural diagram of a network device 1200 in an embodiment of this application.
具体实施方式
随着对网络设备上端口承载带宽的需求的不断提高,通常在网络设备上采用链路聚合的方式,即,将网络设备的至少两个物理端口组成一个逻辑端口,该逻辑端口可承载的带宽为其对应的各物理端口承载的带宽的和,可见,采用链路聚合的方式,能够有效的增加网络设备承载带宽的能力。需要说明的是,链路聚合组是指采用链路聚合方式连接的两个网络设备之间,参与链路聚合的多条物理链路的集合;网络设备上聚合为逻辑端口的物理端口均可以称为该链路聚合组的成员端口。
例如:以图1所示的分组传送网(英文:packet transport network,简称:PTN)场景为例,该场景中,基站101-基站105分别直接或间接的接入PTN设备106,通过该PTN设备106汇聚到核心网设备107。由于多个基站发送的业务流,均需要由PTN设备106汇聚到核心网侧,所以,需要网络设备106能够承载较大的带宽,故,可以将在PTN设备106和核心网设备107之间设置链路聚合组,即,将PTN设备106上的端口1、端口2、端口3和端口4汇聚成一个逻辑端口A,将核心网设备107上的端口5、端口6、端口7和端口8也汇聚成一个逻辑端口B,通过四条光纤分别连接端口1和端口5、端口2和端口6、端口3和端口7、以及端口4和端口8。假设上述每条光纤的可承载带宽(也称为光纤连接的物理端口的最大允许的物理带宽)为1G,那么,该PTN设备106和核心网设备107之间的该链路聚合组具有4G的带宽,即,该PTN设备106能够承载较大的带宽。
作为一种示例,网络设备上接收到出端口为链路聚合组的业务流时,首先,提取该业务流的特征参数(例如:该业务流中报文的目的地址(英文:Destination Address,简称:DA)、源地址(英文:Source Address,简称:SA)、目的互联网协议地址(英文:Destination Internet Protocol Address,简称:DIP)和源互联网协议地址(英文:Source Internet Protocol Address,简称:SIP)中的至少一个),依据该特征参数计算该业务流的哈希值,并依据哈希值和链路聚合组中的成员端口之间固定的映射关系,确定计算所得的哈希值对应的成员端口,并利用该成员端口转发该业务流。
仍以上述图1所示的场景为例,假设PTN设备106上预先保存有4组映射关系:哈希值1和端口1之间的映射关系1、哈希值2和端口2之间的映射关系2、哈希值3和端口3之间的映射关系3、以及哈希值4和端口4之间的映射关系4。如图2所示,假设PTN设备106接收到业务流1-8,且业务流1-8的出端口均为链路聚合组,那么,根据上述方式在 该链路聚合组中进行负载分担的过程具体可以包括:PTN设备106分别提取业务流1-8的特征参数并计算出:哈希值2、3、3、3、1、3、3和2;PTN设备106上保存有4组映射关系,例如参见下述表1所示,PTN设备106基于表1可以确定业务流1和8通过端口3发送给核心网设备107,业务流2、3、4、6和7通过端口4发送给核心网设备107,业务流5通过端口2发送给核心网设备107。这样,端口1-端口4上的带宽分别为0M、500M、110M和700M。
表1映射表
哈希值 成员端口
0 1
1 2
2 3
3 4
可见,采用上述方式进行负载分担,很可能出现多条业务流计算出一个相同的哈希值,从而该多条业务流映射到同一个成员端口上,造成该成员端口的负载过重,而且,即使各业务流可以均匀的分散到不同的成员端口上,也可能由于每条业务流的流量差异较大,导致链路聚合组中各成员端口的负载不均衡。即,依据该固定不变的哈希值和成员端口之间的映射关系进行负载分担,很可能使得链路聚合组中各成员端口出现负载不均衡的问题,从而大大降低该链路聚合组实际提供的带宽,例如:PTN设备106上,由于端口4承载700M的带宽,超过了1G的50%(预设值),导致该PTN设备106无法再通过该链路聚合组承载其他业务流,这样,该链路聚合组才一共承载了(0M+500M+110M+700M)=1310M的带宽,远远小于该链路聚合组的逻辑端口总带宽(即,4G)的50%。
基于此,本申请实施例提供了一种负载均衡方法,通过链路聚合组中各成员端口的实际负载情况,动态的调整分担到各成员端口的负载,实现链路聚合组中各个成员端口的负载均衡,具体过程可以包括:网络设备对第一业务流进行哈希计算获得其对应的第一哈希值,根据该第一哈希值和链路聚合组中的第一成员端口之间的第一映射关系,确定该第一业务流的出端口为第一成员端口;此时,该网络设备确定该第一成员端口当前的第一带宽,并根据该第一带宽,将第一映射关系调整为第一哈希值和第二成员端口之间的第二映射关系,从而采用该第二成员端口转发后续接收到的第一业务流。这样,能够克服目前依据固定不变的哈希值和成员端口之间的映射关系转发业务流,导致各成员端口的负载不均衡的问题,使得链路聚合组中各成员端口可以按照基于当前实际负载情况调整出更加合理的映射关系进行业务流的负载分担,实现了链路聚合组中各个成员端口的负载均衡,从而使网络设备能够满足高带宽的需求。
举例来说,仍然以图1所示的场景为例,在本申请实施例中,参见图3,PTN设备106上可以包括:接收模块1061、特征提取模块1062、哈希计算模块1063、映射模块1064、流量统计模块1065、流量调整模块1066和成员端口转发模块1067。其中,具体的负载均衡过程可以包括:S11,PTN设备106的接收模块1061接收基站101-基站105发送的业务流;S12,特征提取模块1062提取各业务流的DA和SA,并发送给哈希计算模块1063;S13,哈希计算模块1063基于各业务流的DA和SA计算出各业务流对应的32位哈希值, 并取32位哈希值的低4位作为业务流最终对应的哈希值,发送映射模块1064;S14,映射模块1064根据当前的映射表1为各业务流确定对应的出端口,并由成员端口转发模块1067按照所确定的出端口转发对应的业务流到核心网设备107;S15,流量统计模块1065中的端口性能监测单元和成员端口转发模块1067相连,监测各成员端口实际转发的流量,该PTN设备106通过软件每秒采集一次该端口性能监测单元的值,并计算各成员端口的带宽;S16,流量统计模块1065中的流量统计单元和映射模块1604相连,统计各成员端口实际转发的流量,该PTN设备106通过软件每秒读取一次映射模块1604中映射表2中各流量统计器单元的值,并计算各成员端口的带宽,通常,对于相同成员端口,相同情况下根据端口性能监测单元获得的带宽和根据流量统计单元获得的带宽一致;S17,流量调整模块1066根据流量统计模块1065获得的各成员端口当前的带宽,确定成员端口1的总带宽为900M,超过单端口的最大物理带宽的80%;S18,流量调整单元1066根据流量统计单元0、4、8和12中的值:100M、200M、300M和200M,确定将带宽最大(即,每秒300M)的流量统计单元8对应的映射项作为待调整映射项(即,映射表1中加粗显示的映射项);S19,流量调整单元1066从当前带宽不超过单端口的最大物理带宽(即:1G带宽)的80%成员端口中,选择带宽最小的成员端口2,并确定如果将待调整映射项中的业务流切换到成员端口2,该成员端口2的带宽也不超过单端口的最大物理带宽的80%;S20,流量调整单元1066将映射模块1064中映射表1的待调整映射项的成员端口1修改为成员端口2,即,将映射模块1064中映射表1更新为映射表2;S21,当PTN设备106的接收模块1061再接收到哈希值为8对应的业务流时,即按照映射表2,将该业务流从成员端口2转发至核心网设备107上,且在该成员端口2对应的流量统计单元8上增加该业务流的带宽。可见,通过本申请实施例提供的方法,实现了链路聚合组中各个成员端口的负载均衡。
其中,作为示例,映射表1和映射表2可以参见下述表2和表3:
表2映射表1
哈希值 成员端口 流量统计单元 哈希值 成员端口 流量统计单元
0 1 0(100M) 8 1 8(300M)
1 2 1 9 2 9
2 3 2 10 3 10
3 4 3 11 4 11
4 1 4(200M) 12 1 12(200M)
5 2 5 13 2 13
6 3 6 14 3 14
7 4 7 15 4 15
表3映射表2
哈希值 成员端口 流量统计单元 哈希值 成员端口 流量统计单元
0 1 0(100M) 8 2 8(300M)
1 2 1 9 2 9
2 3 2 10 3 10
3 4 3 11 4 11
4 1 4(200M) 12 1 12(200M)
5 2 5 13 2 13
6 3 6 14 3 14
7 4 7 15 4 15
其中,映射表中的流量统计单元0-15,可以是流量统计单元的标识,用于指示到对应的流量统计单元中,例如:网络设备接收到业务流1,其哈希值为0,带宽为100M,那么,网络设备依据映射表1或2,确定业务流1的出端口为成员端口1,流量统计单元为标识为0的流量统计单元,那么,该网络设备在利用成员端口1转发业务流1的同时,在标识为0的流量统计单元中增加100M字节;又例如:在一个获取的周期下,网络设备确定成员端口1的当前带宽时,可以查看映射表1或2,确定成员端口1对应的流量统计单元分别为标识为0、4、8和12的流量统计单元,那么,网络设备可以查找标识为0、4、8和12的4个流量统计单元,并读取其中的值,将4个读取到的值进行相加,得到的和即为成员端口1的当前带宽。
需要说明的是,作为另一个示例,映射表中的流量统计单元,也可以直接存储所统计的转发字节数,例如:在调整之前,参见下表4,成员端口1对应的映射关系可以为:
表4映射表1中成员端口1对应的部分
哈希值 成员端口 流量统计单元
0 1 100M
4 1 200M
8 1 300M
12 1 400M
可以理解的是,上述场景仅是本申请实施例提供的一个场景示例,本申请实施例并不限于此场景。
需要说明的是,本申请实施例中的网络设备具体可以是交换机、路由器或PTN设备等。链路聚合可以应用在任何带宽需求较大的网络设备之间,例如:在连接核心网的服务器或服务器群上,使用链路聚合的方式连接核心网设备。
需要说明的是,本申请实施例中,链路聚合组中各成员端口的最大物理带宽均一致,例如:各成员端口能够承载的最大物理带宽均为1G。
下面结合附图,通过实施例来详细说明本申请实施例中一种负载均衡方法的具体实现方式。
图4为本申请实施例中的一种负载均衡方法100的流程示意图。参见图4,该方法100应用于其上设置有链路聚合组的网络设备上,该网络设备上预先建立并保存有该链路聚合组的映射关系,该映射关系指示该链路聚合组中用于转发该业务流的成员端口。例如:在图1所示的网络中,对于由基站向核心网发送的业务流,该方法100可以应用在PTN设备106上;对于由核心网向基站发送的业务流,该方法100也可以应用在核心网设备107上。该方法100例如可以包括下述S101~S104:
S101,网络设备根据第一业务流对应的第一哈希值,以及第一哈希值和链路聚合组中的第一成员端口之间的第一映射关系,确定第一业务流的出端口为第一成员端口,其中, 该第一哈希值为根据第一业务量进行哈希计算得到的哈希值。
为了实现网络设备上链路聚合组中各成员端口的负载分担,通常预先在该网络设备上配置有链路聚合组的映射关系。该映射关系可以表征哈希值和成员端口之间的对应关系,也可以表征哈希值、成员端口和流量统计单元之间的对应关系。该映射关系可以以映射表的形式配置并保存在网络设备上。
在本申请实施例中,为了让不同的业务流分散对应到不同的出端口上,增加了映射关系的映射项数量,即,映射关系中,每个成员端口至少对应两个映射项,且映射关系中的每个映射项对应的哈希值均不同,即,一个哈希值在链路聚合组的映射关系中仅出现一次。由于根据业务流的特征参数进行哈希计算得到的是32位的计算结果,那么,可以提取低N位作为业务流对应的哈希值,共2 N个可能的哈希值,该N为正整数,且满足2 N>链路聚合组包括的成员端口数量。例如:若网络设备的链路聚合组中包括16个成员端口,那么,N需要取大于4的整数,当N=5,则该网络设备上预先配置的映射关系包括2 5=32条映射项,每个成员端口和2个不同的哈希值对应;当N=8,则该网络设备上预先配置的映射关系包括2 8=256条映射项,每个成员端口和16个不同的哈希值对应。需要说明的是,N为用户根据实际需求设置的值,N越大,发生不同业务流对应到同一哈希值的概率越小,网络设备上的负载越均衡。
例如,假设网络设备上包括4个成员端口,N取4,那么,预先配置的映射关系可以包括0-15共16个哈希值对应的16条映射关系(下文中也称一个哈希值对应的一条映射关系为一个映射项),每个成员端口对应4个不同的哈希值,具体可以参见下述表5所示。
表5映射关系
哈希值 成员端口 哈希值 成员端口
0 1 8 1
1 2 9 2
2 3 10 3
3 4 11 4
4 1 12 1
5 2 13 2
6 3 14 3
7 4 15 4
又例如,为了让该映射关系可以体现通过某个具体哈希值映射到各成员端口上的业务流的带宽,还可以在表5的基础上,为每个映射项增加一个流量统计单元,即,增加流量统计单元0-流量统计单元15,具体参见上述表2或表3。
在本申请实施例中,网络设备上的链路聚合组,至少包括第一成员端口和第二成员端口,关于第一成员端口的映射项至少包括:第一映射关系和第三映射关系,作为一个示例,第一映射关系表征第一成员端口和第一哈希值之间的关系,第三映射关系表征第一成员端口和第二哈希值之间的关系。作为另一个示例,第一映射关系表征第一成员端口、第一哈希值之间、第一流量统计单元的关系。第三映射关系表征第一成员端口、第二哈希值之间的关系和第二流量统计单元之间的关系。作为示例,映射关系可以是指成员端口、哈希值 与流量统计单元统计的成员端口带宽之间的映射关系;或者,映射关系也可以是成员端口、哈希值与流量统计单元的标识之间的映射,而网络设备通过所述流量单元的标识确定所述流量统计单元,进而确定所述流量统计单元的流量统计结果。
具体实现时,当网络设备接收到第一业务流时,S101具体可以包括:网络设备根据该第一业务流获得哈希键值,基于哈希键值计算获得哈希值A,该哈希值A是32位的数;接着,网络设备选择该哈希值A的低N位,记作第一哈希值;然后,网络设备根据预先配置的映射关系,确定第一哈希值对应的映射项(也即第一映射关系),将该映射项中的第一成员端口作为该第一业务流的出端口。
作为一个示例,第一业务流的哈希键值可以包括第一业务流的特征参数,即,第一业务流的DA、SA、DIP和SIP中的至少一个。那么,网络设备可以将特征参数作为哈希键值,计算获得32位的哈希值A。
作为另一个示例,第一业务流的哈希键值除了包括第一业务流的特征参数,还可以包括网络设备上定义的哈希因子,即,将哈希因子和特征参数均作为哈希键值,计算获得32位的哈希值A。其中,该哈希因子可以是用户在网络设备上为链路聚合组对应设置的变量,具体可以是生成的随机数或者用户指定的数值。
例如:假设S101中的网络设备为PTN设备106,其上配置的映射关系为映射表1,该PTN设备106接收到业务流1,提取业务流1的特征参数为DA1和SA1,该PTN设备106中定义的哈希因子为5,那么,该PTN设备106可以基于DA1、SA1和5进行哈希计算,获得32位的哈希值X;该PTN设备106提取哈希值X的低4位0011,获得哈希值3;该PTN设备106根据表1中第5行前3列,确定业务流1的出端口为成员端口4。与该实例对应,S101中的第一业务流即为业务流1,第一哈希值即为哈希值3,第一成员端口即为端口4。
S102,网络设备确定第一成员端口当前的第一带宽。
第一带宽,是指当前单位时间内通过第一成员端口转发的流量,用于表征该第一成员端口当前的负载情况。本申请实施例中,网络设备可以通过软件周期性的获取到链路聚合组中各成员端口的流量,并基于获取的周期和转发的流量计算对应的带宽,从而确定各成员端口的实际负载情况。以第一成员端口为例,说明本申请实施例中确定各成员端口当前的带宽的方法。
作为一个示例,网络设备内可以包括与各哈希值一一对应的流量统计单元,而网络设备可以通过软件周期性的获取与第一成员端口对应的各个流量统计单元的统计结果(即,转发的字节数),从而网络设备可以基于获取的周期和统计结果,确定第一成员端口当前的第一带宽。
对于流量统计单元,用于统计各哈希值对应的成员端口的流量。例如用于统计各哈希值对应的成员端口所转发的报文的字节数。作为示例,流量统计单元可以是计数器或者令牌桶。
对于计数器作为流量统计单元的实现方式中,一种情况下,该流量统计单元可以是不断累加的计数器,而网络设备每个周期读取计数器的值后,需要用当前读取到的值减去上 一个周期读取到的值,获得该周期内该流量统计单元统计的其对应成员端口转发的字节数;另一种情况下,该流量统计单元也可以是同步清零的计数器,网络设备每个周期读取计数器值的同时,该计数器也执行清零操作,确保每次读取的值均为该周期内该流量统计单元统计的其对应成员端口转发的字节数。
对于令牌桶作为流量统计单元的实现方式中,每个哈希值对应一个令牌桶,且各个令牌桶的最多可容纳的令牌数量相同,例如:800M个令牌;那么,当网络设备接收到100M字节的第一业务流时,产生100M个令牌,且通过S101确定其出端口为第一哈希值对应的第一成员端口时,在第一哈希值对应的令牌桶中用800M个令牌中删除100M,剩余700M个令牌,以此类推实现令牌递减。而每个获取的周期时,读取令牌桶当其剩余的令牌数,并用其最多可容纳的令牌数减去该剩余的令牌数,即为统计结果,并用统计结果除以获取的周期,即可获得该哈希值对应的带宽。需要说明的是,每个获取的周期读取剩余令牌数的同时,还需要将令牌桶置位(即,将令牌桶恢复到最多可容纳的令牌数量),以便下一个获取的周期也可以使用该令牌桶完成对成员端口带宽的统计。
具体实现时,当第一成员端口对应第一映射关系和第三映射关系,其中,第一映射关系为第一成员端口、第一哈希值和第一流量统计单元之间的关系,第三映射关系为第一成员端口、第二哈希值和第二流量统计单元之间的关系,其中,第二哈希值为根据第二业务流进行哈希计算得到的哈希值;那么,网络设备可以在每个获取的周期(例如:1秒),分别读取第一流量统计单元和第二流量统计单元的统计结果,并根据第一流量统计单元的统计结果确定第三带宽,根据第二流量统计单元的统计结果确定第四带宽,从而根据第三带宽和第四带宽,确定第一带宽。其中,网络设备根据第一流量统计单元的统计结果确定第三带宽,具体可以包括:网络设备根据第一流量统计单元的统计结果除以获取的周期,获得第三带宽;网络设备根据第二流量统计单元的统计结果确定第四带宽,具体可以包括:网络设备根据第二流量统计单元的统计结果除以获取的周期,获得第四带宽;当第一成员端口仅对应第一映射关系和第三映射关系,那么,第一带宽等于第三带宽与第四带宽的和;或者,当第一成员端口除了对应第一映射关系和第三映射关系,还对应其他一个或多个映射关系,那么,第一带宽等于第三带宽、第四带宽以及根据其他一个或多个映射关系对应的流量统计单元获得的其他一个或多个带宽的和。
作为另一个示例,网络设备内还可以包括端口性能监测单元,该端口性能监测单元可以实时监测第一成员端口上单位时间转发的字节数,而网络设备可以通过软件周期性的获取端口性能监测单元的记录的转发字节数,从而网络设备可以基于获取的周期和转发字节数,确定第一成员端口当前的第一带宽。
作为再一个示例,网络设备也可以同时启用端口性能监测单元和流量统计单元,而网络设备可以通过软件周期性的获取端口性能监测单元的值,以及与第一成员端口对应的各个流量统计单元的统计结果,从而基于获取的周期以及获取的具体值,确定第一成员端口当前的第一带宽。需要说明的是,采用相同的获取周期时,对于同一成员端口在同一周期内,端口性能监测单元监测到的转发字节数和流量统计单元统计得到的转发字节数通常一致,同时启用两者,可以相互验证两种获取带宽方法的准确性和可靠性。
S103,网络设备根据第一带宽,将第一哈希值和第一成员端口之间的第一映射关系调整为第一哈希值和第二成员端口之间的第二映射关系。
需要说明的是,为了能够根据各成员端口的实际负载情况灵活的调整链路聚合组对应的映射关系,实现链路聚合组中各成员端口的负载均衡,本申请实施例中,对于各成员端口,均周期性的获取其带宽。当确定某个成员端口当前的带宽满足第一条件时,即执行该S103。该第一条件具体可以包括下述条件中的至少一个:条件一、第一带宽大于或者等于第一阈值;条件二、第一带宽占有率大于或者等于第二阈值,该第一带宽占有率为所述第一带宽与所述链路聚合组对应的逻辑端口的带宽的比值;条件三、第一带宽与第二带宽的差值与所述链路聚合组中单个成员端口的带宽的比值大于或者等于第三阈值,其中,第二带宽为网络设备将第一映射关系调整为所述第二映射关系之前,所述第二成员端口的带宽。
作为一个示例,第一条件为上述条件一,则,第一成员端口当前的第一带宽大于第一阈值,即可执行S103。其中,第一阈值为预先设置的带宽阈值,用于指示最大允许单个成员端口承载的带宽,例如:各成员端口的物理带宽为1G,预设的第一阈值则可以是小于等于1G的任何带宽值,如:800M。
作为另一个示例,第一条件为上述条件二,即,第一成员端口当前的第一带宽占有率大于或者等于第二阈值,即可执行S103。其中,第二阈值为预先设置的占有率阈值,用于指示最大允许单个成员端口承载的带宽占其所在的链路聚合组的逻辑端口带宽的比值。例如:该网络设备的链路聚合组的逻辑端口的带宽为100G,而第一带宽为8.5G,第二阈值预设为8%,那么,由于8.5G>100G*8%,确定第一成员端口上的负载过重,需要对第一成员端口对应的映射关系进行调整。
该示例下,网络设备可以进行下述操作具体确定出调整的内容:S31,网络设备从第一成员端口对应的多个映射项中,选择一个待调整映射项;S32,网络设备从链路聚合组中除第一成员端口以外的其他成员端口中,为第一成员端口中与待调整映射项对应的流量,确定即将切换至的目标成员端口。
若网络设备上配置的映射关系包括有哈希值、成员端口和流量统计单元三者之间的对应关系,那么,S31具体可以是:网络设备查看第一成员端口对应的多个映射项中对应的流量统计单元所统计的转发字节数,从转发字节数从多到少的顺序依次选择对应的流量统计单元所在的映射项作为待调整映射项,直到调整后第一成员端口的带宽占逻辑端口带宽的比例不超过第二阈值为止,例如:第一成员端口对应两条映射项,分别为:映射项1“第一哈希值-第一成员端口-第一流量统计单元”、映射项2“第二哈希值-第一成员端口-第二流量统计单元”,那么,网络设备查看第一流量统计单元和第二流量统计单元的值,分别为300M和100M,网络设备即可先将值较大的第一流量统计单元所在的映射项1作为待调整映射项,若该次调整之后第一成员端口的带宽占逻辑端口带宽的比例不超过第二阈值,则不再选择映射项2为待调整映射项。此时,S32为S31所确定的待调整映射项确定目标成员端口时,不仅需要该目标成员端口当前的带宽占逻辑端口带宽的比例不超过第二阈值,而且,还需要该目标成员端口承载该映射项对应的流量后,其上的带宽仍然小于逻辑端口带宽和第二阈值的乘积。需要说明的是,若存在满足上述条件的多个目标成员端口,则, 该多个目标成员端口中的任意一个均可以作为S32中的目标成员端口,优选地,S32可以从多个目标成员端口中选择当前带宽最小的,作为最终确定的目标成员端口;若不存在满足上述条件的目标成员端口,则在当前的获取周期内不对该第一成员端口进行负载均衡的处理,即,不执行S103~S104,并且,上报告警消息,用于指示该网络设备的链路聚合组中各成员端口均超负载运行,无法完成安全的业务流转发。
若网络设备上配置的映射关系不包括流量统计单元,只包括哈希值和成员端口之间的对应关系,那么,S31具体可以是网络设备从第一成员端口对应的多个映射项中,随机选择一个作为待调整映射项;此时,S32为S31所确定的待调整映射项确定目标成员端口时,仅需要该目标成员端口当前的带宽占逻辑端口带宽的比例不超过第二阈值。需要说明的是,若存在满足上述条件的多个目标成员端口,则,该多个目标成员端口中的任意一个均可以作为S32中的目标成员端口;若不存在满足上述条件的目标成员端口,则在当前的获取周期内不对该第一成员端口进行负载均衡的处理,即,不执行S103~S104,并且上报告警消息,用于指示该网络设备的链路聚合组中各成员端口均超负载运行,无法完成安全的业务流转发。
作为又一个示例,第一条件为上述条件三,则,第一成员端口当前的第一带宽与第二带宽的差值与所述链路聚合组中单个成员端口的带宽的比值大于或者等于第三阈值,即可执行S103。其中,第二带宽是指网络设备将第一映射关系调整为所述第二映射关系之前,所述第二成员端口的带宽;第三阈值为预先设置的阈值,用于指示最大允许两个成员端口的差值占单个成员端口带宽的比例。例如:该网络设备的链路聚合组中各成员端口的最大允许的物理带宽为10G,而第一带宽为8.5G,第二带宽为6G,第三阈值为10%,那么,由于(8.5G-6G)>10G*10%,确定第二成员端口和第一成员端口的负载差值过大,为了使该链路聚合组的各成员端口承载的业务流达到更加均衡的状态,需要对第一成员端口对应的映射关系进行调整。
该示例下,网络设备可以进行下述操作具体确定出调整的内容:S41,网络设备从第一成员端口对应的多个映射项中,选择一个待调整映射项;S42,网络设备将第二成员端口确定为该待调整映射项即将切换至的目标成员端口。
若网络设备上配置的映射关系包括有哈希值、成员端口和流量统计单元三者之间的对应关系,那么,S41具体可以是:网络设备查看第一成员端口对应的多个映射项中,对应的流量统计单元所统计的转发字节数,将转发字节数最多的流量统计单元所在的映射项作为待调整映射项。如果网络设备的链路聚合组中存在多个成员端口的当前带宽和第一带宽的差值,不大于单个成员端口带宽和第三阈值的乘积,那么,S42可以从该多个成员端口中,选择一个作为目标成员端口,该目标成员端口需要满足:第一成员端口将待调整映射项对应的流量切换给目标成员端口后该第一成员端口的带宽,与目标成员端口承载待调整映射项对应的流量后的带宽的差值,不大于单个成员端口带宽和第三阈值的乘积;优选地,S42可以从该多个成员端口中选择当前带宽最小的,判断其是否满足上述选择目标成员端口的条件,若不满足,则选择当前带宽次小的,判断其是否满足上述选择目标成员端口的条件,以此类推,直到选取出满足上述选择目标成员端口条件的成员端口,作为S42选定 的目标成员端口;若不存在满足上述选择目标成员端口条件的目标成员端口,则在当前的获取周期内不对该第一成员端口进行负载均衡的处理,即,不执行S103~S104,并且,上报告警消息,用于指示该网络设备的链路聚合组中各成员端口均超负载运行,无法完成安全的业务流转发。
若网络设备上配置的映射关系只包括哈希值和成员端口之间的对应关系,那么,S41具体可以是网络设备从第一成员端口对应的多个映射项中,随机选择一个作为待调整映射项。S42为S41所确定的待调整映射项确定目标成员端口时,仅需要该目标成员端口当前的带宽和第一带宽的差值,不大于单个成员端口带宽和第三阈值的乘积。需要说明的是,若存在满足上述条件的多个目标成员端口,则,该多个目标成员端口中的任意一个均可以作为S42中的目标成员端口;若不存在满足上述条件的目标成员端口,则在当前的获取周期内不对该第一成员端口进行负载均衡的处理,即,不执行S103~S104并且,上报告警消息,用于指示该网络设备的链路聚合组中各成员端口均超负载运行,无法完成安全的业务流转发。
该示例下,若带宽最大的成员端口和带宽最小的成员端口经过该示例的调整,无法使得该链路聚合组的成员端口均不满足该示例所指切换条件,则,可以不对带宽最大的成员端口进行调整,而是对带宽第二大的成员端口和带宽最小的成员端口进行调整,以此类推,直到调整使得该链路聚合组的成员端口均不满足该示例所指条件三。
需要说明的是,本申请实施例中的第一条件,用于表征链路聚合组中存在负载不均衡的状况,具体包括但不限于上述三个示例中所指出内容。若第一条件包括上述条件一、二和三中的至少两个,那么,满足任意一个可以视作满足第一条件,或者,全部满足才可以视作满足第一条件,具体可以根据实际需求进行灵活设计。而且,网络设备能够周期性的获取各成员端口的带宽,一旦发现某个成员端口满足预先配置的第一条件,则确定该链路聚合组存在负载不均衡的问题,需要通过本申请实施例提供的负载均衡方法进行处理,从而使得链路聚合组中各成员端口的负载达到相对均衡。
具体实现时,S103可以包括:S51,网络设备确定第一成员端口当前的第一带宽满足第一条件;S52,网络设备从该第一成员端口对应的多个映射项中,确定待调整映射项为第一哈希值和第一成员端口之间的第一映射关系;S53,网络设备确定该待调整映射项对应的目标成员端口为第二成员端口;S54,网络设备将第一哈希值和第一成员端口之间的第一映射关系,调整为第一哈希值和第二成员端口之间的第二映射关系,即,将第一哈希值和第一成员端口之间的第一映射关系中的第一成员端口修改为第二成员端口。
其中,根据第一哈希值和第一成员端口的第一映射关系中对应的流量统计单元确定的第三带宽,可以是第一成员端口对应的所有映射项对应的流量统计单元的值中的最大值,即,第三带宽大于第四带宽;第二成员端口可以是网络设备的链路聚合组中当前的带宽最小的成员端口。而且,经过S103后,第二成员端口当前的第五带宽等于执行S103之前该第二成员端口的第二带宽与第一成员端口切换来的第三带宽(即,通过第一哈希值映射到第一成员端口的流量产生的带宽)的和,而且,该第五带宽满足第二条件,该第二条件包括下述条件中的至少一个:条件四、第五带宽小于或等于第一阈值,该第五带宽为网络设 备将第一映射关系调整为第二映射关系之后第二成员端口的带宽;条件五、第五带宽占有率小于或等于第二阈值,该第五带宽为网络设备将所述第一映射关系调整为所述第二映射关系之后所述第二成员端口的带宽,第五带宽占有率为所述第五带宽与所述链路聚合组对应的逻辑端口的带宽的比值;条件六、第五带宽与第六带宽的差值与所述链路聚合组中单个成员端口的带宽的比值小于或等于第三阈值,第六带宽为网络设备将第一映射关系调整为第二映射关系之后第一成员端口的带宽。需要说明的是,上述第二条件可以与前述第一条件对应,当第一条件为条件一,那么,第二条件可以是对应的条件四;当第一条件为条件二,那么,第二条件可以是对应的条件五,当第一条件为条件三,那么,第二条件可以是对应的条件四。
这样,将通过第一哈希值映射到第一成员端口上的流量分担给负载较轻的第二成员端口,减轻第一成员端口的负载,使得链路聚合组中各成员端口的负载达到相对的均衡,从而提高该链路聚合组实际所提供的带宽。
需要说明的是,在本申请实施例中,网络设备可以在一个获取的周期内,仅对各成员端口进行一次第一条件的判断,也仅对第一成员端口对应的一个映射项进行调整;在下一个获取的周期内,如果发现同一成员端口仍然满足第一条件,再对该成员端口对应的另一个映射项进行调整,如此往复,直到网络设备获取不到需要进行调整的映射项,表征该链路聚合组中各成员端口的负载相对均衡。或者,网络设备也可以在一个获取的周期内,对各成员端口进行多次第一条件的判断,并进行一次或多次映射项的调整,直至不存在成员端口满足第一条件或存在满足第一条件的待调整成员端口但无法确定满足第二条件的目标成员端口,调整次数和判断的次数,取决于将成员端口调整到不满足第一条件的次数;需要说明的是,一些情况下,也可能无论如何调整,调整多少次,均不能使得所有的成员端口均不满足第一条件,此时,网络设备还可以设置调整上限,例如:10次,那么,在一个获取的周期内,如果调整10次还存在成员端口满足第一条件,就停止调整,上报告警消息,或者,采用其他方式进行调整,例如:采用下述图5所示实施例提供的方法进行继续调整。
S104: The network device forwards the first service flow through the second member port.
In a specific implementation, before S103, when the network device receives the first service flow whose egress port is the link aggregation group and obtains the first hash value as the hash value corresponding to the first service flow, it can determine, according to the first mapping relationship between the first hash value and the first member port included in the mapping relationships configured on the network device, that the specific egress port of the first service flow is the first member port, and forward the first service flow through the first member port. After S103 is performed, however, the mapping relationships configured on the network device have been updated and include the second mapping relationship between the first hash value and the second member port. When the network device receives the first service flow again and obtains the first hash value as its corresponding hash value, it can determine, according to the second mapping relationship between the first hash value and the second member port, that the specific egress port of the first service flow is the second member port, and forward the first service flow through the second member port.
As an example, S104 may specifically include: S61, the network device obtains a hash value B based on the first service flow; S62, the network device selects the low N bits of the hash value B to obtain the first hash value, where N is a positive integer and the number of member ports included in the link aggregation group is less than 2 to the power of N; S63, the network device determines, according to the second mapping relationship between the first hash value and the second member port, to forward the first service flow through the second member port. Here, N is a constant configured on the network device and is consistent with the N used when configuring the mapping relationships. For example, if the network device selects the low 5 bits of a 32-bit calculation result as the hash value when configuring the mapping relationships, then after receiving a service flow, the network device uses the characteristic parameters of the service flow (or the characteristic parameters and the hash factor) as the hash key, calculates a 32-bit binary number, and selects the low 5 bits of that 32-bit binary number as the hash value corresponding to the service flow.
Suppose the network device calculates, from the hash key, the 32-bit binary number 00100011110000010100111000001011 for the first service flow and takes the low 5 bits as the first hash value of the first service flow: 01011, which is 11 in decimal. The network device can then look up the mapping relationship containing the binary number 01011 (or the decimal number 11) and send the first service flow through the second member port in that mapping relationship, where the network device holds a mapping relationship between the binary number 01011 (or the decimal number 11) and the second member port. It should be noted that, in the embodiments of this application, the hash value in the mapping relationships configured on the network device may be represented as a binary value or as the decimal number corresponding to the low N-bit binary number.
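The low-N-bit selection can be reproduced in a few lines of Python; the helper name low_n_bits and the example mapping table are illustrative only.

```python
def low_n_bits(value32: int, n: int = 5) -> int:
    """Keep only the low N bits of a 32-bit hash result."""
    return value32 & ((1 << n) - 1)

# Reproducing the example above: the 32-bit result keeps its low 5 bits, 01011 = 11.
value = int("00100011110000010100111000001011", 2)
first_hash = low_n_bits(value, 5)
assert first_hash == 0b01011 == 11

# The mapping table may be keyed by the binary value or by its decimal form.
mapping = {11: "second member port"}
print(first_hash, mapping[first_hash])
```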
It can be seen that, with the method provided in this embodiment of this application, a network device using link aggregation can dynamically adjust the correspondence between member ports and hash values according to the bandwidth actually carried by each member port in the link aggregation group. This overcomes the load imbalance among member ports caused by forwarding service flows according to a fixed mapping relationship between hash values and member ports, allows the member ports in the link aggregation group to share the load of service flows according to the adjusted mapping relationships, and achieves load balancing among the member ports of the link aggregation group, so that the network device can meet high-bandwidth requirements.
In addition, FIG. 5 is a schematic flowchart of another load balancing method 200 in an embodiment of this application. Referring to FIG. 5, the method 200 is applied to a network device on which a link aggregation group is configured; the network device has pre-established and stored mapping relationships of the link aggregation group, and a mapping relationship indicates the member port in the link aggregation group used to forward a service flow. For example, in the network shown in FIG. 1, for service flows sent from a base station to the core network, the method 200 may be applied to the PTN device 106; for service flows sent from the core network to a base station, the method 200 may also be applied to the core network device 107. The method 200 may include, for example, the following S201 to S206.
S201: The network device receives a third service flow.
S202: The network device determines that the egress port recorded in the mapping relationship table for forwarding the third service flow is a third member port in the link aggregation group, where the mapping relationship table records a mapping relationship between a third hash value and the third member port, and the third hash value is a hash value obtained by performing hash calculation on the third service flow using a first hash algorithm.
For the specific implementation of S201 to S202 in the method 200 and the related descriptions, refer to S101 to S102 in the method 100.
It should be noted that the first hash algorithm is preconfigured on the network device and is used, before S203, to obtain on the network device the hash value corresponding to a service flow received by the network device. The first hash algorithm indicates the hash keys used when calculating the hash value of a service flow, the internal byte order within each hash key, the order among the hash keys, the values of some hash keys, the specific hash algorithm used, and so on. For example, referring to FIG. 6, the first hash algorithm indicates that the DIP, the SIP, and the hash factor of a service flow are used as the hash key, in the order DIP->SIP->hash factor, where the DIP and the SIP are each 4 bytes, the hash factor is 5, and the specific hash algorithm used is the Message Digest Algorithm 5 (MD5). For another example, referring to FIG. 7, the first hash algorithm indicates that the DIP and the SIP of a service flow are used as the hash key, in the order DIP->SIP, where the DIP and the SIP are each 4 bytes arranged as byte 0, byte 1, byte 2, and byte 3, and the specific hash algorithm used is MD5.
It can be understood that, after receiving the third service flow, the network device may first calculate a hash value C corresponding to the third service flow according to the first hash algorithm configured on the network device, then extract the low N bits of the hash value C according to the N configured on the network device to obtain the third hash value, and then forward the third service flow through the third member port based on the mapping relationship between the third hash value and the third member port recorded in the local mapping relationship table.
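As an illustration of the first hash algorithm of FIG. 6, the following Python sketch builds the hash key in the order DIP->SIP->hash factor and applies MD5, then keeps the low N bits. How the 32-bit value is derived from the 128-bit MD5 digest is not specified in the text, so taking the first four digest bytes, like the function name third_hash_value and the example addresses, is an assumption of this sketch.

```python
import hashlib
import ipaddress

def third_hash_value(dip: str, sip: str, hash_factor: int = 5, n: int = 5) -> int:
    key = (ipaddress.IPv4Address(dip).packed      # DIP, bytes 0..3
           + ipaddress.IPv4Address(sip).packed    # SIP, bytes 0..3
           + bytes([hash_factor]))                # hash factor
    digest = hashlib.md5(key).digest()            # MD5, as indicated in FIG. 6
    value32 = int.from_bytes(digest[:4], "big")   # assumed 32-bit reduction
    return value32 & ((1 << n) - 1)               # low N bits

# Example lookup against a local mapping relationship table {hash value: member port}.
mapping_table = {i: f"member port {i % 4}" for i in range(32)}
h3 = third_hash_value("10.0.0.7", "192.168.1.20")
print(h3, mapping_table[h3])
```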
S203: The network device determines that the current seventh bandwidth of the third member port is greater than or equal to a fourth threshold.
S204: The network device adjusts the first hash algorithm to a second hash algorithm.
When the network device determines, according to the current seventh bandwidth of the third member port, that the third member port satisfies a third condition, the adjustment in S204 may be performed. The third condition includes but is not limited to at least one of the following: the seventh bandwidth is greater than or equal to the fourth threshold; the seventh bandwidth occupancy is greater than or equal to a fifth threshold, where the seventh bandwidth occupancy is the ratio of the seventh bandwidth to the bandwidth of the logical port corresponding to the link aggregation group; or the ratio of the difference between the seventh bandwidth and an eighth bandwidth to the bandwidth of a single member port in the link aggregation group is greater than or equal to a sixth threshold, where the eighth bandwidth is the minimum bandwidth among the other member ports before the network device performs S204. For details, refer to the related description of the first condition in the method 100, which is not repeated in this embodiment.
As an example, adjusting the first hash algorithm to the second hash algorithm may refer to adjusting the internal byte order of a characteristic parameter of the service flow in the hash key. For example, suppose the hash key includes the DIP and the DIP originally consists of byte 0, byte 1, byte 2, and byte 3; after adjustment, the DIP may consist of byte 2, byte 3, byte 0, and byte 1.
As another example, adjusting the first hash algorithm to the second hash algorithm may refer to adjusting the order among the characteristic parameters of the service flow in the hash key. For example, suppose the hash key includes the DIP and the SIP, and their original order in the hash key is DIP->SIP; after adjustment, the order of the DIP and the SIP in the hash key is SIP->DIP.
As still another example, if the hash key includes a hash factor, adjusting the first hash algorithm to the second hash algorithm may also refer to adjusting the value of the hash factor in the hash key. For example, if the hash factor in the original hash key is 5, the hash factor in the adjusted hash key may be 10.
As yet another example, adjusting the first hash algorithm to the second hash algorithm may also refer to changing the specific hash algorithm used. For example, if the specific hash algorithm used in the original first hash algorithm is MD5, the specific hash algorithm used in the adjusted second hash algorithm may be the Secure Hash Algorithm (SHA-1).
It should be noted that the adjustment of the first hash algorithm to the second hash algorithm by the network device includes but is not limited to the above four possible adjustment manners. Moreover, the four adjustment manners may be used individually or in combination, which can be configured on the network device according to actual requirements.
For example, for the first hash algorithm shown in FIG. 6, the second hash algorithm obtained by adjustment may be as shown in FIG. 8a to FIG. 8c: the second hash algorithm may indicate that the order of the hash key is SIP->DIP->hash factor, see FIG. 8a; the second hash algorithm may also indicate that the DIP consists of byte 2, byte 3, byte 0, and byte 1, and the SIP consists of byte 2, byte 3, byte 0, and byte 1, see FIG. 8b; the second hash algorithm may further indicate that the hash factor is adjusted to 10, see FIG. 8c. For another example, for the first hash algorithm shown in FIG. 7, the second hash algorithm obtained by adjustment may be as shown in FIG. 9a and FIG. 9b: the second hash algorithm may indicate that the order of the hash key is SIP->DIP, see FIG. 9a; the second hash algorithm may also indicate that the DIP consists of byte 2, byte 3, byte 0, and byte 1, and the SIP consists of byte 2, byte 3, byte 0, and byte 1, see FIG. 9b.
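The four adjustment manners can be pictured as knobs of a configurable hash function. The following Python sketch is illustrative only: the factory make_hash_algorithm and its parameters (field_order, byte_order, hash_factor, digest, n), as well as the 32-bit reduction from the digest, are assumptions made for this example and do not describe a real device API.

```python
import hashlib
import ipaddress

def make_hash_algorithm(field_order=("dip", "sip"), byte_order=(0, 1, 2, 3),
                        hash_factor=5, digest="md5", n=5):
    def calc(dip: str, sip: str) -> int:
        fields = {"dip": ipaddress.IPv4Address(dip).packed,
                  "sip": ipaddress.IPv4Address(sip).packed}
        key = b""
        for name in field_order:                        # order among parameters
            raw = fields[name]
            key += bytes(raw[i] for i in byte_order)    # byte order inside a parameter
        key += bytes([hash_factor])                     # hash factor value
        value = hashlib.new(digest, key).digest()       # which digest is used
        return int.from_bytes(value[:4], "big") & ((1 << n) - 1)
    return calc

first_hash   = make_hash_algorithm()                               # FIG. 6
fig_8a_hash  = make_hash_algorithm(field_order=("sip", "dip"))     # swap DIP and SIP
fig_8b_hash  = make_hash_algorithm(byte_order=(2, 3, 0, 1))        # reorder bytes
fig_8c_hash  = make_hash_algorithm(hash_factor=10)                 # new hash factor
sha1_variant = make_hash_algorithm(digest="sha1")                  # change the digest
print(first_hash("10.0.0.7", "192.168.1.20"), fig_8c_hash("10.0.0.7", "192.168.1.20"))
```

Because every knob changes the resulting hash value, flows that previously collided on one member port are likely to be spread over different entries of the unchanged mapping relationship table.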
It can be seen that, by adjusting the first hash algorithm to the second hash algorithm in S204, service flows can be redistributed across the member ports when the mapping relationships in the current mapping relationship table cannot balance the load of the member ports in the link aggregation group; to a certain extent, the service flows can be spread evenly over the member ports, making load balancing among the member ports of the link aggregation group possible.
S205: The network device performs hash calculation on the third service flow according to the second hash algorithm to obtain a fourth hash value.
S206: The network device determines, according to the mapping relationship between the fourth hash value and a fourth member port recorded in the mapping relationship table, to forward the third service flow through the fourth member port.
In a specific implementation, before S204, when the network device receives the third service flow whose egress port is the link aggregation group and obtains the third hash value as the hash value corresponding to the third service flow according to the first hash algorithm, it can determine, according to the mapping relationship between the third hash value and the third member port included in the mapping relationship table configured on the network device, that the specific egress port of the third service flow is the third member port, and forward the third service flow through the third member port. After S204 is performed, however, the network device has adjusted the first hash algorithm to the second hash algorithm. When the network device receives the third service flow again, it obtains the fourth hash value as the hash value corresponding to the third service flow according to the second hash algorithm, and can determine, according to the mapping relationship between the fourth hash value and the fourth member port recorded in the mapping relationship table configured on the network device, that the specific egress port of the third service flow is the fourth member port, and forward the third service flow through the fourth member port.
As an example, S205 to S206 may specifically include: S71, the network device calculates a hash value D corresponding to the third service flow according to the second hash algorithm; S72, the network device selects the low N bits of the hash value D to obtain the fourth hash value, where N is a positive integer and the number of member ports included in the link aggregation group is less than 2 to the power of N; S73, the network device determines, according to the mapping relationship between the fourth hash value and the fourth member port, to forward the third service flow through the fourth member port.
It should be noted that the method 200 does not need to adjust the mapping relationship table configured in the network device; merely by adjusting the first hash algorithm to the second hash algorithm, the hash value corresponding to a service flow is changed, and hence the member port to which the service flow is mapped is changed, re-hashing the traffic and making load balancing on the member ports possible.
In some possible implementations, the third member port in the method 200 may be the first member port in the method 100; the fourth member port in the method 200 may be the second member port in the method 100; and the third service flow in the method 200 may be the first service flow in the method 100.
It can be seen that, with the method provided in this embodiment of this application, a network device using link aggregation can dynamically adjust the hash algorithm used to calculate the hash values of service flows (for example, the value of the hash factor) according to the bandwidth actually carried by each member port in the link aggregation group, changing the hash values corresponding to the service flows and thereby remapping the service flows to different member ports. This overcomes the load imbalance among member ports caused by forwarding service flows based on a fixed hash algorithm for calculating the hash values of service flows and a fixed mapping relationship between hash values and member ports, makes it possible for the member ports in the link aggregation group to stay load balanced to a certain extent, and enables the network device to meet high-bandwidth requirements.
In some possible implementations, an embodiment of this application further provides a load balancing method that combines the above method 100 and method 200 to balance the load of a link aggregation group on a network device.
As an example, method 100 may be performed first: the mapping relationships between hash values and the member port that satisfies the first condition are adjusted to resolve the load imbalance of the link aggregation group. When performing method 100 can no longer overcome the problem that some member ports satisfy the first condition, method 200 may then be performed: the hash algorithm on the network device is adjusted to change the hash values corresponding to the service flows, so that different service flows that were originally mapped to the same member port are scattered onto different member ports, thereby resolving the load imbalance.
As another example, method 200 may be performed first: the hash algorithm on the network device is adjusted repeatedly so that service flows are mapped to different member ports as evenly as possible and the load on each member port stays balanced to a certain extent; then method 100 is performed: when some member ports satisfy the first condition, the mapping relationships between those member ports and hash values are adjusted so that the hash values corresponding to the service flows are mapped to other member ports, thereby resolving the load imbalance.
As yet another example, method 200 may be performed first: the hash algorithm on the network device is adjusted repeatedly so that service flows are mapped to different member ports as evenly as possible and the load on each member port stays balanced to a certain extent; then method 100 is performed: when some member ports satisfy the first condition, the mapping relationships between those member ports and hash values are adjusted so that the hash values corresponding to the service flows are mapped to other member ports. When performing method 100 can no longer overcome the problem that some member ports satisfy the first condition, method 200 may be performed again: the hash algorithm on the network device is adjusted to change the hash values corresponding to the service flows, so that different service flows that were originally mapped to the same member port are scattered onto different member ports, thereby resolving the load imbalance.
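A sketch of the first combination above, under the assumption that the procedures of method 100 and method 200 are available as callables; ports_balanced, rebalance_once, and switch_hash_algorithm are placeholder names for the steps sketched earlier, not defined device APIs. Method 100 is applied repeatedly and method 200 is invoked only when no remapping can satisfy the second condition.

```python
def balance_lag(ports_balanced, rebalance_once, switch_hash_algorithm, max_rounds=10):
    for _ in range(max_rounds):
        if ports_balanced():            # no member port satisfies the first condition
            return True
        if not rebalance_once():        # method 100 found no valid target port
            switch_hash_algorithm()     # method 200: re-hash all service flows
    return ports_balanced()
```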
With the above method 100, method 200, and the three examples, a network device can achieve load balancing among the member ports of its link aggregation group, ensuring that the link aggregation group functions more effectively and provides higher bandwidth in use.
In addition, an embodiment of this application further provides a network device 1000, as shown in FIG. 10. The network device 1000 includes a transceiver unit 1001 and a processing unit 1002. The transceiver unit 1001 is configured to perform the transceiving operations in the method 100 shown in FIG. 4 or the transceiving operations in the method 200 shown in FIG. 5; the processing unit 1002 is configured to perform the operations other than the transceiving operations in the method 100 shown in FIG. 4, or the operations other than the transceiving operations in the method 200 shown in FIG. 5. For example, when the network device 1000 performs the method 100, the transceiver unit 1001 is configured to forward the first service flow through the second member port; the processing unit 1002 is configured to determine, according to the first hash value corresponding to the first service flow and the first mapping relationship between the first hash value and the first member port in the link aggregation group, that the egress port of the first service flow is the first member port; the processing unit 1002 is further configured to determine the current first bandwidth of the first member port; and the processing unit 1002 is further configured to adjust, according to the first bandwidth, the first mapping relationship between the first hash value and the first member port to the second mapping relationship between the first hash value and the second member port. For another example, when the network device 1000 performs the method 200, the transceiver unit 1001 is configured to receive the third service flow; the processing unit 1002 is configured to determine that the egress port recorded in the mapping relationship table for forwarding the third service flow is the third member port in the link aggregation group; the processing unit 1002 is further configured to determine that the current seventh bandwidth of the third member port is greater than or equal to the fourth threshold; the processing unit 1002 is further configured to adjust the first hash algorithm to the second hash algorithm; and the processing unit 1002 is further configured to perform hash calculation on the third service flow according to the second hash algorithm to obtain the fourth hash value, and determine, according to the mapping relationship between the fourth hash value and the fourth member port recorded in the mapping relationship table, to forward the third service flow through the fourth member port.
In addition, an embodiment of this application further provides a network device 1100, as shown in FIG. 11. The network device 1100 includes a communication interface 1101 and a processor 1102. The communication interface 1101 is configured to perform the transceiving operations in the method 100 shown in FIG. 4 or the transceiving operations in the method 200 shown in FIG. 5; the processor 1102 is configured to perform the operations other than the transceiving operations in the method 100 shown in FIG. 4, or the operations other than the transceiving operations in the method 200 shown in FIG. 5. For example, when the network device 1100 performs the method 100, the communication interface 1101 is configured to forward the first service flow through the second member port; the processor 1102 is configured to determine, according to the first hash value corresponding to the first service flow and the first mapping relationship between the first hash value and the first member port in the link aggregation group, that the egress port of the first service flow is the first member port; the processor 1102 is further configured to determine the current first bandwidth of the first member port; and the processor 1102 is further configured to adjust, according to the first bandwidth, the first mapping relationship between the first hash value and the first member port to the second mapping relationship between the first hash value and the second member port. For another example, when the network device 1100 performs the method 200, the communication interface 1101 is configured to receive the third service flow; the processor 1102 is configured to determine that the egress port recorded in the mapping relationship table for forwarding the third service flow is the third member port in the link aggregation group; the processor 1102 is further configured to determine that the current seventh bandwidth of the third member port is greater than or equal to the fourth threshold; the processor 1102 is further configured to adjust the first hash algorithm to the second hash algorithm; and the processor 1102 is further configured to perform hash calculation on the third service flow according to the second hash algorithm to obtain the fourth hash value, and determine, according to the mapping relationship between the fourth hash value and the fourth member port recorded in the mapping relationship table, to forward the third service flow through the fourth member port.
In addition, an embodiment of this application further provides a network device 1200, as shown in FIG. 12. The network device 1200 includes a memory 1201 and a processor 1202. The memory 1201 is configured to store program code; the processor 1202 is configured to run instructions in the program code, so that the network device 1200 performs the method 100 in the embodiment shown in FIG. 4 or the method 200 in the embodiment shown in FIG. 5.
It can be understood that, in the above embodiments, the processor may be a central processing unit (CPU), a network processor (NP), or a combination of a CPU and an NP. The processor may also be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof. The processor may refer to one processor or may include multiple processors. The memory may include a volatile memory, for example, a random-access memory (RAM); the memory may also include a non-volatile memory, for example, a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory may also include a combination of the above types of memory. The memory may refer to one memory or may include multiple memories. In a specific implementation, the memory stores computer-readable instructions, and the computer-readable instructions include multiple software modules, for example, a sending module, a processing module, and a receiving module. After executing each software module, the processor can perform corresponding operations as instructed by that software module. In this embodiment, an operation performed by a software module actually refers to an operation performed by the processor as instructed by the software module. After executing the computer-readable instructions in the memory, the processor can perform, as instructed by the computer-readable instructions, all operations that the network device can perform.
It can be understood that, in the above embodiments, the communication interface 1101 of the network device 1100 may specifically serve as the transceiver unit 1001 of the network device 1000 to implement data communication between the network device and other network devices.
In addition, an embodiment of this application further provides a computer-readable storage medium storing instructions which, when run on a computer, cause the computer to perform the load balancing method in the embodiment shown in FIG. 4 or FIG. 5.
In addition, an embodiment of this application further provides a computer program product which, when run on a computer, causes the computer to perform the load balancing method in the embodiment shown in FIG. 4 or FIG. 5.
In the names such as "first service flow" and "first hash value" mentioned in the embodiments of this application, "first" is only used as a name identifier and does not indicate being first in order. The same rule applies to "second" and the like.
From the description of the above implementations, a person skilled in the art can clearly understand that all or some of the steps in the methods of the above embodiments can be implemented by software plus a general-purpose hardware platform. Based on such an understanding, the technical solutions of this application may be embodied in the form of a software product. The computer software product may be stored in a storage medium, such as a read-only memory (ROM)/RAM, a magnetic disk, or an optical disc, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network communication device such as a router) to perform the methods described in the embodiments of this application or in certain parts of the embodiments.
The embodiments in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. In particular, the device embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments. The device embodiments described above are merely illustrative. The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. A person of ordinary skill in the art can understand and implement them without creative effort.
The above descriptions are merely preferred implementations of this application and are not intended to limit the protection scope of this application. It should be noted that a person of ordinary skill in the art may further make several improvements and refinements without departing from this application, and such improvements and refinements shall also fall within the protection scope of this application.

Claims (16)

  1. A load balancing method, comprising:
    determining, by a network device according to a first hash value corresponding to a first service flow and a first mapping relationship between the first hash value and a first member port in a link aggregation group, that an egress port of the first service flow is the first member port, wherein the first hash value is a hash value obtained by performing hash calculation based on the first service flow;
    determining, by the network device, a current first bandwidth of the first member port;
    adjusting, by the network device according to the first bandwidth, the first mapping relationship to a second mapping relationship between the first hash value and a second member port in the link aggregation group; and
    forwarding, by the network device, the first service flow through the second member port.
  2. The method according to claim 1, wherein the first mapping relationship is adjusted to the second mapping relationship when the first member port satisfies a first condition.
  3. The method according to claim 2, wherein the first condition comprises at least one of the following:
    the first bandwidth is greater than or equal to a first threshold;
    a first bandwidth occupancy is greater than or equal to a second threshold, wherein the first bandwidth occupancy is a ratio of the first bandwidth to a bandwidth of a logical port corresponding to the link aggregation group; or
    a ratio of a difference between the first bandwidth and a second bandwidth to a bandwidth of a single member port in the link aggregation group is greater than or equal to a third threshold, wherein the second bandwidth is a bandwidth of the second member port before the network device adjusts the first mapping relationship to the second mapping relationship.
  4. The method according to any one of claims 1 to 3, wherein, before the network device adjusts the first mapping relationship to the second mapping relationship, the second member port is the member port with the smallest current bandwidth among the member ports of the link aggregation group.
  5. The method according to any one of claims 1 to 4, wherein the method further comprises:
    obtaining, by the network device, the first mapping relationship and a third mapping relationship, wherein the first mapping relationship comprises a mapping relationship among the first member port, the first hash value, and a first traffic statistics unit, and the third mapping relationship comprises a mapping relationship among the first member port, a second hash value, and a second traffic statistics unit, wherein the second hash value is a hash value obtained by performing hash calculation based on a second service flow;
    determining, by the network device according to a statistical result of the first traffic statistics unit, a third bandwidth of the first member port occupied by the first service flow;
    determining, by the network device according to a statistical result of the second traffic statistics unit, a fourth bandwidth of the first member port occupied by the second service flow; and
    determining, by the network device, the first bandwidth according to the third bandwidth and the fourth bandwidth.
  6. The method according to claim 5, wherein the third bandwidth is greater than the fourth bandwidth.
  7. The method according to any one of claims 1 to 6, wherein the first mapping relationship is adjusted to the second mapping relationship when the second member port satisfies a second condition.
  8. The method according to claim 7, wherein the second condition comprises at least one of the following:
    a fifth bandwidth is less than or equal to the first threshold, wherein the fifth bandwidth is a bandwidth of the second member port after the network device adjusts the first mapping relationship to the second mapping relationship;
    a fifth bandwidth occupancy is less than or equal to the second threshold, wherein the fifth bandwidth is the bandwidth of the second member port after the network device adjusts the first mapping relationship to the second mapping relationship, and the fifth bandwidth occupancy is a ratio of the fifth bandwidth to the bandwidth of the logical port corresponding to the link aggregation group; or
    a ratio of a difference between the fifth bandwidth and a sixth bandwidth to the bandwidth of a single member port in the link aggregation group is less than or equal to the third threshold, wherein the fifth bandwidth is the bandwidth of the second member port after the network device adjusts the first mapping relationship to the second mapping relationship, and the sixth bandwidth is a bandwidth of the first member port after the network device adjusts the first mapping relationship to the second mapping relationship.
  9. The method according to any one of claims 1 to 8, wherein the method further comprises:
    receiving, by the network device, a third service flow;
    determining, by the network device, that an egress port recorded in a mapping relationship table for forwarding the third service flow is a third member port in the link aggregation group, wherein the mapping relationship table records a mapping relationship between a third hash value and the third member port, and the third hash value is a hash value obtained by performing hash calculation on the third service flow using a first hash algorithm;
    determining, by the network device, that a current seventh bandwidth of the third member port is greater than or equal to a fourth threshold;
    adjusting, by the network device, the first hash algorithm to a second hash algorithm;
    performing, by the network device, hash calculation on the third service flow according to the second hash algorithm to obtain a fourth hash value; and
    determining, by the network device according to a mapping relationship between the fourth hash value and a fourth member port recorded in the mapping relationship table, to forward the third service flow through the fourth member port.
  10. The method according to claim 9, wherein the adjusting, by the network device, the first hash algorithm to the second hash algorithm comprises at least one of the following manners:
    adjusting a byte order of at least one characteristic parameter of the first service flow;
    adjusting an arrangement order of characteristic parameters of the first service flow; or
    adjusting a value of a hash factor.
  11. The method according to claim 10, wherein the characteristic parameters comprise at least one of a destination address DA, a source address SA, a destination Internet Protocol address DIP, or a source Internet Protocol address SIP of the first service flow.
  12. The method according to any one of claims 9 to 11, wherein the third member port is the first member port.
  13. The method according to any one of claims 9 to 12, wherein the fourth member port is the second member port.
  14. A network device, comprising:
    a communication interface; and
    a processor connected to the communication interface;
    wherein, by means of the communication interface and the processor, the network device is configured to perform the method according to any one of claims 1 to 13.
  15. A network device, wherein the network device comprises a memory and a processor;
    the memory is configured to store program code; and
    the processor is configured to run instructions in the program code, so that the network device performs the method according to any one of claims 1 to 13.
  16. A computer-readable storage medium, wherein the computer-readable storage medium stores instructions which, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 13.
PCT/CN2020/116534 2019-12-31 2020-09-21 Load balancing method and device WO2021135416A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20910438.9A EP4072083A4 (en) 2019-12-31 2020-09-21 LOAD BALANCING METHOD AND DEVICE
US17/851,371 US20220329525A1 (en) 2019-12-31 2022-06-28 Load balancing method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911417080.XA CN113132249A (zh) 2019-12-31 2019-12-31 一种负载均衡方法和设备
CN201911417080.X 2019-12-31

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/851,371 Continuation US20220329525A1 (en) 2019-12-31 2022-06-28 Load balancing method and device

Publications (1)

Publication Number Publication Date
WO2021135416A1 true WO2021135416A1 (zh) 2021-07-08

Family

ID=76686386

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/116534 WO2021135416A1 (zh) 2019-12-31 2020-09-21 一种负载均衡方法和设备

Country Status (4)

Country Link
US (1) US20220329525A1 (zh)
EP (1) EP4072083A4 (zh)
CN (1) CN113132249A (zh)
WO (1) WO2021135416A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10608957B2 (en) * 2017-06-29 2020-03-31 Cisco Technology, Inc. Method and apparatus to optimize multi-destination traffic over etherchannel in stackwise virtual topology
CN113645145A (zh) * 2021-08-02 2021-11-12 迈普通信技术股份有限公司 负载均衡方法、装置、网络设备及计算机可读存储介质
CN115914234A (zh) * 2021-08-03 2023-04-04 华为技术有限公司 负载均衡的哈希算法信息的确定方法、装置及存储介质
CN113709053B (zh) * 2021-08-03 2023-05-26 杭州迪普科技股份有限公司 一种基于流定义的分流方法及装置
CN113672665A (zh) * 2021-08-18 2021-11-19 Oppo广东移动通信有限公司 数据处理方法、数据采集系统、电子设备和存储介质
CN114866473B (zh) * 2022-02-25 2024-04-12 网络通信与安全紫金山实验室 一种转发装置及流量输出接口调节方法
CN115883469B (zh) * 2023-01-04 2023-06-02 苏州浪潮智能科技有限公司 一种数据流负载平衡方法、装置及数据中心

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101014977B1 (ko) * 2005-10-07 2011-02-16 엘지에릭슨 주식회사 링크통합 기능에서 로드밸런싱 방법
US7940661B2 (en) * 2007-06-01 2011-05-10 Cisco Technology, Inc. Dynamic link aggregation
US8995277B2 (en) * 2012-10-30 2015-03-31 Telefonaktiebolaget L M Ericsson (Publ) Method for dynamic load balancing of network flows on LAG interfaces
US9565113B2 (en) * 2013-01-15 2017-02-07 Brocade Communications Systems, Inc. Adaptive link aggregation and virtual link aggregation
US9007906B2 (en) * 2013-01-25 2015-04-14 Dell Products L.P. System and method for link aggregation group hashing using flow control information
CN109218216B (zh) * 2017-06-29 2023-08-01 中兴通讯股份有限公司 链路聚合流量分配方法、装置、设备及存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150138986A1 (en) * 2013-11-15 2015-05-21 Broadcom Corporation Load balancing in a link aggregation
CN106302223A (zh) * 2016-09-20 2017-01-04 杭州迪普科技有限公司 一种聚合组流量分流的方法和装置
CN109039938A (zh) * 2017-06-12 2018-12-18 中兴通讯股份有限公司 负载均衡方法及装置
CN109379297A (zh) * 2018-11-26 2019-02-22 锐捷网络股份有限公司 一种实现流量负载均衡的方法和装置
CN109547341A (zh) * 2019-01-04 2019-03-29 烽火通信科技股份有限公司 一种链路聚合的负载分担方法及系统

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220159510A1 (en) * 2020-11-16 2022-05-19 At&T Intellectual Property I, L.P. Scaling network capability using baseband unit pooling in fifth generation networks and beyond
US11582642B2 (en) * 2020-11-16 2023-02-14 At&T Intellectual Property I, L.P. Scaling network capability using baseband unit pooling in fifth generation networks and beyond
US11540330B2 (en) 2020-11-25 2022-12-27 At&T Intellectual Property I, L.P. Allocation of baseband unit resources in fifth generation networks and beyond
CN113890847A (zh) * 2021-09-26 2022-01-04 新华三信息安全技术有限公司 一种流量转发方法和装置
CN114936087A (zh) * 2021-09-29 2022-08-23 华为技术有限公司 一种嵌入向量预取的方法、装置、系统及相关设备
CN114827016A (zh) * 2022-04-12 2022-07-29 珠海星云智联科技有限公司 切换链路聚合方案的方法、装置、设备及存储介质
WO2023224691A1 (en) * 2022-05-18 2023-11-23 Microsoft Technology Licensing, Llc Feedback-based dynamic network flow remapping

Also Published As

Publication number Publication date
EP4072083A4 (en) 2023-01-11
US20220329525A1 (en) 2022-10-13
CN113132249A (zh) 2021-07-16
EP4072083A1 (en) 2022-10-12

Similar Documents

Publication Publication Date Title
WO2021135416A1 (zh) 一种负载均衡方法和设备
US11736388B1 (en) Load balancing path assignments techniques
US10148492B2 (en) Data center bridging network configuration and management
Wang et al. Scotch: Elastically scaling up sdn control-plane using vswitch based overlay
CN107579923B (zh) 一种sdn网络的链路负载均衡方法和sdn控制器
US9614768B2 (en) Method for traffic load balancing
US7940661B2 (en) Dynamic link aggregation
CN107995123A (zh) 一种基于交换机的负载均衡系统及方法
US9979656B2 (en) Methods, systems, and computer readable media for implementing load balancer traffic policies
CN104756451A (zh) 用于lag接口上网络流的动态负载平衡的方法
CN104702523A (zh) 分组转发装置和方法
US20110216769A1 (en) Dynamic Path Selection
US10110421B2 (en) Methods, systems, and computer readable media for using link aggregation group (LAG) status information
US20110280124A1 (en) Systems and Methods for Load Balancing of Management Traffic Over a Link Aggregation Group
CN111726299B (zh) 流量均衡方法及装置
US9444756B2 (en) Path aggregation group monitor
CN114006863A (zh) 一种多核负载均衡协同处理方法、装置及存储介质
CN111163015A (zh) 报文发送方法、装置及汇聚分流设备
WO2019084805A1 (zh) 一种分发报文的方法及装置
CN108337183A (zh) 一种数据中心网络流负载均衡的方法
Hussein et al. Enhancement Load Balance (ELB) Model for Port EtherChannel Technology
Alqahtani et al. Bert: Scalable source routed multicast for cloud data centers
CN105812437A (zh) 一种业务分发方法、系统及相关装置
US11968123B1 (en) Methods for allocating a traffic load and devices thereof
CN113630319B (zh) 一种数据分流方法、装置及相关设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20910438

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020910438

Country of ref document: EP

Effective date: 20220705

NENP Non-entry into the national phase

Ref country code: DE