WO2020187006A1 - Traffic balancing method and device

Traffic balancing method and device

Info

Publication number
WO2020187006A1
Authority
WO
WIPO (PCT)
Prior art keywords
physical port
port
physical
ethernet device
load information
Prior art date
Application number
PCT/CN2020/077311
Other languages
English (en)
French (fr)
Inventor
刘泉 (Liu Quan)
田亚文 (Tian Yawen)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP20773120.9A (EP3890257B1)
Publication of WO2020187006A1
Priority to US17/385,161 (US20210352018A1)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/24: Multipath
    • H04L 45/245: Link aggregation, e.g. trunking
    • H04L 45/66: Layer 2 routing, e.g. in Ethernet based MAN's
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/12: Avoiding congestion; Recovering from congestion
    • H04L 47/125: Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H04L 47/36: Flow control; Congestion control by determining packet size, e.g. maximum transfer unit [MTU]
    • H04L 49/00: Packet switching elements
    • H04L 49/30: Peripheral units, e.g. input or output ports
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00: Reducing energy consumption in communication networks
    • Y02D 30/50: Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Definitions

  • the embodiments of the present application relate to the field of communication technologies, and in particular, to a method and device for traffic balancing.
  • the Ethernet link aggregation (Ethernet trunk, Eth-Trunk) mechanism is used to bundle multiple physical ports of an Ethernet device into a logical port for use.
  • the logical port is also called an Eth-Trunk port.
  • the physical ports bound together are called member ports of the logical port.
  • an Ethernet device can use the Eth-Trunk mechanism to increase network bandwidth and to meet load-sharing and service-protection needs. Since the Eth-Trunk port is a logical port, it cannot itself forward packets; therefore, when packets are forwarded, the traffic sent to the logical port is distributed to the physical ports of the logical port according to a load-sharing algorithm.
  • an Eth-Trunk cascading system includes multiple levels of Ethernet devices cascaded with each other. Traffic is hashed on the first-level Ethernet device and shared evenly across the member ports of that device's Eth-Trunk port, and then reaches the second-level Ethernet device. A hash calculation is then performed on the second-level Ethernet device so that the traffic is shared to the third-level Ethernet device, and so on.
  • the Ethernet device performs hash calculation according to several specific characteristics of the message, thereby determining which physical port of the logical port the traffic should be sent from.
  • the specific characteristics are also called hash factor.
  • if the hash algorithms used by the Ethernet devices at all levels are the same, the calculation results of the devices at all levels are very similar; that is, the physical ports determined by each device according to the hash factor are fixed.
  • as a result, the traffic that reaches each level of Ethernet device can only be sent from a few specific physical ports of that device's logical port, so some physical ports carry a large amount of traffic while other physical ports carry very little.
  • the traditional processing method is to manually modify the hash factor involved in the hash calculation, and adjust the traffic sent to each physical port according to the new hash factor.
  • even when a new hash factor is used, the calculated physical port is still fixed, so uneven traffic sharing among the member ports of the Eth-Trunk port cannot be avoided.
  • in addition, during the process of modifying the hash factor, a suitable hash factor may be obtained only after many modification attempts, and this adjustment process is likely to cause service abnormalities.
  • the embodiments of the present application provide a method and device for traffic balancing, which automatically adjust the traffic size of each member port of the Eth-Trunk port to achieve the purpose of making each member port of the Eth-Trunk port evenly share the traffic.
  • an embodiment of the present application provides a traffic balancing method.
  • the method can be applied to an Ethernet device or a chip in an Ethernet device.
  • the method will be described below by taking an Ethernet device as an example.
  • the method includes: the Ethernet device sends a query message to the controller, where the query message carries load information of each of the multiple physical ports of a logical port of the Ethernet device; the Ethernet device receives a response message sent by the controller, where the response message carries a weight factor of each physical port, and the weight factor of each physical port is positively correlated with the amount of the logical port's traffic that the physical port can share; and, when sending packets through the logical port, the Ethernet device adjusts the traffic sent to each physical port according to the weight factor of each physical port.
  • with this method, there is no need for the user to manually adjust the hash factor involved in the hash calculation; instead, the Ethernet device automatically adjusts the traffic of each physical port, which has no effect on the service data currently being transmitted and will not cause service abnormalities.
  • the Ethernet device sending a query message to the controller includes: the Ethernet device determines whether the traffic of at least one of the multiple physical ports exceeds a preset threshold; if the traffic of at least one physical port exceeds the preset threshold, the Ethernet device sends the query message to the controller.
  • using this method, the Ethernet device actively reports the load information of each physical port of the logical port to the controller, so that the controller determines the weight factors according to the load information, and the traffic of each physical port is adjusted without modifying the hash factor.
  • alternatively, the Ethernet device sending the query message to the controller includes: the Ethernet device sends the query message to the controller multiple times. Using this method, the Ethernet device periodically or aperiodically reports the load information of each physical port of the logical port to the controller, so that the controller determines the weight factors according to the load information, and the traffic of each physical port is adjusted without modifying the hash factor.
  • in one implementation, in addition to the preset hash factor, the Ethernet device is also configured with a minimum weight factor.
  • when a packet needs to be sent through the logical port, the Ethernet device determines a physical port according to the preset hash factor. Then, the Ethernet device determines whether the weight factor of that physical port is greater than the minimum weight factor. If the weight factor of the physical port is greater than the minimum weight factor, the packet is forwarded through that physical port; if the weight factor of the physical port is less than or equal to the minimum weight factor, the packet is forwarded through another physical port.
  • using this method, the Ethernet device determines the physical port to be used for forwarding the packet according to the preset hash factor, and then determines, according to the weight factor, whether the packet can be forwarded through the physical port determined by the hash factor; the traffic sent to each physical port can thus be adjusted without modifying the hash factor.
  • the adjustment process is simple and will not affect services.
  • in another implementation, in addition to the preset hash factor, the Ethernet device is also configured with a weight factor difference threshold.
  • when a packet needs to be sent through the logical port, the Ethernet device determines a physical port using the preset hash factor; then, the Ethernet device determines whether the difference between the maximum weight factor and the weight factor of that physical port is greater than or equal to the weight factor difference threshold. If the difference is greater than or equal to the weight factor difference threshold, the packet is sent through another physical port; if the difference is less than the weight factor difference threshold, the packet is sent through the physical port calculated by the hash algorithm.
  • using this method, the Ethernet device determines the physical port to be used for forwarding the packet according to the preset hash factor, and then determines, according to the weight factor, whether the packet can be forwarded through the physical port determined by the hash factor; the traffic sent to each physical port can thus be adjusted without modifying the hash factor.
  • the adjustment process is simple and will not affect services.
  • the load information of a physical port is used to indicate the percentage of the bandwidth consumed by the physical port to the total bandwidth of the physical port.
  • the Ethernet device adjusting the traffic sent to each physical port according to the weight factor of each physical port includes: the Ethernet device determines whether the percentage indicated by the load information of each physical port of the logical port has changed; if the change in the percentage indicated by the load information of each physical port does not exceed a preset threshold, the Ethernet device adjusts the traffic sent to each physical port according to the weight factor of each physical port.
  • using this method, before adjusting the traffic sent to each physical port according to the weight factors, the Ethernet device checks whether the change in the percentage indicated by the load information of each physical port exceeds the preset threshold; the traffic is adjusted according to the weight factors only in that case, which avoids the situation in which the weight factors are no longer applicable because the percentage of a physical port's total bandwidth that is consumed has changed significantly.
  • the above method further includes: if the change in the percentage indicated by the load information of one or more of the multiple physical ports exceeds the preset threshold, the Ethernet device re-collects the load information of each physical port, and the Ethernet device sends a query message carrying the newly collected load information to the controller.
  • using this method, the Ethernet device re-collects the load information of each physical port and sends it to the controller so that the weight factors can be re-determined, which avoids the situation in which the weight factors are no longer applicable because the percentage of a physical port's total bandwidth that is consumed has changed significantly.
  • the Ethernet device adjusting the traffic sent to each physical port according to the weight factor of each physical port includes: the Ethernet device adjusts the traffic sent to each physical port according to the preset hash factor and the weight factor of each physical port.
  • using this method, the Ethernet device determines the physical port to be used for forwarding the packet according to the preset hash factor, and then determines, according to the weight factor, whether the packet can be forwarded through the physical port determined by the hash factor; the traffic sent to each physical port can thus be adjusted without modifying the hash factor.
  • the adjustment process is simple and will not affect services.
  • an embodiment of the present application provides a traffic balancing method, including: a controller receives a query message sent by an Ethernet device, where the query message carries load information of each of the multiple physical ports of a logical port of the Ethernet device; the controller determines a weight factor of each physical port according to the load information of each physical port, where the weight factor of each physical port is positively correlated with the amount of the logical port's traffic that the physical port can share; and the controller sends a response message to the Ethernet device, where the response message carries the weight factor of each physical port.
  • the load information of each physical port is used to indicate the percentage of the bandwidth consumed by the physical port to the total bandwidth of the physical port.
  • an embodiment of the present application provides a traffic balancing device.
  • the device may be an Ethernet device or a chip in the Ethernet device.
  • the device may include a processing unit, a sending unit, and a receiving unit.
  • the processing unit may be a processor
  • the sending unit may be a transmitter
  • the receiving unit may be a receiver
  • the Ethernet device may also include a storage unit
  • the storage unit may be a memory
  • the storage unit is used to store instructions, and the processing unit executes the instructions stored in the storage unit so that the Ethernet device implements the functions in the foregoing first aspect or in the various possible implementations of the first aspect.
  • when the device is a chip in the Ethernet device, the processing unit may be a processor, and the transceiver unit may be an input/output interface, a pin, a circuit, or the like; the processing unit executes the instructions stored in the storage unit so that the Ethernet device implements the functions of the first aspect or of the various possible implementations of the first aspect.
  • the storage unit may be a storage unit in the chip (for example, a register or a cache), or a storage unit of the Ethernet device located outside the chip (for example, a read-only memory or a random access memory).
  • an embodiment of the present application provides a flow balancing device.
  • the device may be a controller or a chip in the controller.
  • the device may include a processing unit, a sending unit, and a receiving unit.
  • the processing unit may be a processor
  • the sending unit may be a transmitter
  • the receiving unit may be a receiver
  • the controller may also include a storage unit, which may be a memory;
  • the processing unit executes the instructions stored in the storage unit, so that the controller implements the above-mentioned second aspect or functions in various possible implementation manners of the second aspect.
  • when the device is a chip in the controller, the processing unit may be a processor, and the transceiver unit may be an input/output interface, a pin, a circuit, or the like; the processing unit executes the instructions stored in the storage unit so that the controller implements the functions of the second aspect or of the various possible implementations of the second aspect.
  • the storage unit may be a storage unit in the chip (for example, a register or a cache), or a storage unit of the controller located outside the chip (for example, a read-only memory or a random access memory).
  • the embodiments of the present application provide a computer program product containing instructions, which when run on a processor, enable the processor to execute the foregoing first aspect or the methods in various possible implementation manners of the first aspect.
  • the embodiments of the present application provide a computer program product containing instructions which, when run on a processor, enable the processor to execute the methods in the foregoing second aspect or in the various possible implementations of the second aspect.
  • an embodiment of the present application provides a computer-readable storage medium that stores instructions which, when run on a processor, cause the processor to execute the methods in the foregoing first aspect or in the various possible implementations of the first aspect.
  • an embodiment of the present application provides a computer-readable storage medium that stores instructions which, when run on a processor, cause the processor to execute the methods in the foregoing second aspect or in the various possible implementations of the second aspect.
  • an embodiment of the present application provides a traffic balancing system.
  • the system includes a first traffic balancing device and a second traffic balancing device.
  • the first traffic balancing device is the traffic balancing device of the foregoing third aspect, and the second traffic balancing device is the traffic balancing device of the foregoing fourth aspect.
  • in the embodiments of the present application, the Ethernet device sends to the controller a query message carrying the load information of each physical port of the logical port of the Ethernet device; the controller receives the query message and determines the weight factor of each physical port of the logical port according to the load information of each physical port.
  • the weight factor of each physical port is positively correlated with the amount of the logical port's traffic that the physical port can share.
  • the controller sends to the Ethernet device a response message carrying the weight factor of each physical port, so that the Ethernet device can adjust the traffic sent to each physical port according to the weight factor of each physical port. In this process, there is no need for the user to manually adjust the hash factor involved in the hash calculation; instead, the Ethernet device automatically adjusts the traffic of each physical port, which has no impact on the service data currently being transmitted and will not cause service abnormalities.
  • FIG. 1 is a schematic diagram of Eth-Trunk cascaded forwarding;
  • FIG. 2 is a schematic diagram of an operating environment to which a traffic balancing method provided by an embodiment of this application is applicable;
  • FIG. 3 is a flowchart of a traffic balancing method provided by an embodiment of this application;
  • FIG. 4 is a schematic structural diagram of a UDP packet in a traffic balancing method provided by an embodiment of this application;
  • FIG. 5 is a schematic structural diagram of a controller provided by an embodiment of this application;
  • FIG. 6 is a schematic structural diagram of a traffic balancing device provided by an embodiment of this application;
  • FIG. 7 is a schematic structural diagram of another traffic balancing device provided by an embodiment of this application;
  • FIG. 8 is a schematic structural diagram of another traffic balancing device provided by an embodiment of this application.
  • based on the Eth-Trunk mechanism, the Eth-Trunk cascading mechanism applied to an Eth-Trunk cascading system is derived.
  • Figure 1 is a schematic diagram of Eth-Trunk cascaded forwarding.
  • the Ethernet device is specifically a switch
  • switches 1 to 5 are devices of the same type
  • switch 1 is the first-level switch
  • switch 2 and switch 3 are second-level switches, and switch 2 and switch 3 form a stack
  • switch 4 and switch 5 are third-level switches
  • switch 1, switch 2, and switch 3 respectively pre-store a hash algorithm and a hash factor used by the hash algorithm.
  • Switch 1 is connected to the stacking device composed of switch 2 and switch 3 through logical port 1, switch 2 is connected to switch 4 through logical port 2, and switch 3 is connected to switch 5 through logical port 3.
  • Logical port 1 includes 4 physical ports, namely physical ports 1, 2, 3, and 4; logical port 2 and logical port 3 each include 3 physical ports.
  • switch 1 hashes the traffic according to the hash factor and the hash algorithm, and distributes the traffic to physical port 1 (denoted as sub-flow 1), physical port 2 (denoted as sub-flow 2), physical port 3 (denoted as sub-flow 3), and physical port 4 (denoted as sub-flow 4) of logical port 1.
  • after that, sub-flow 1 and sub-flow 2 reach switch 2, and sub-flow 3 and sub-flow 4 reach switch 3.
  • switch 2 performs a hash calculation on sub-flow 1 and sub-flow 2 according to the hash factor and the hash algorithm, and assigns sub-flow 1 and sub-flow 2 to the physical ports of logical port 2.
  • similarly, switch 3 performs a hash calculation on sub-flow 3 and sub-flow 4 according to the hash factor and the hash algorithm, and assigns sub-flow 3 and sub-flow 4 to the physical ports of logical port 3.
  • the hash factor refers to a parameter participating in a hash operation, and specifically may be one or more characteristics of the received message (collectively referred to as message characteristics).
  • the hash factor may be a source media access control (MAC) address, a destination MAC address, a source Internet Protocol (IP) address, a destination IP address, a source port number, a destination port number, etc.
  • Traffic is composed of messages.
  • the switch extracts the value of the hash factor (that is, the packet characteristic) from each received packet, and the hash algorithm performs a hash calculation on the packet characteristic to determine which physical port of the logical port the packet should be sent from.
  • switch 1 assigns packets arriving at logical port 1 to physical port 1, physical port 2, physical port 3, or physical port 4 based on the hash algorithm.
  • assume that, after the distribution by switch 1, the ratio of the bandwidth consumed by each physical port of logical port 1 to the total bandwidth of that physical port is 60%.
  • the ratio of a physical port's consumed bandwidth to its total bandwidth is also called the traffic bandwidth.
  • since the hash algorithms used by switch 2, switch 3, and switch 1 are the same, the hash calculation results of switch 2, switch 3, and switch 1 are very similar, resulting in uneven traffic distribution on logical port 2 and logical port 3. For example, the traffic bandwidths of physical port 1, physical port 2, and physical port 3 of logical port 2 are 20%, 40%, and 95%, respectively. If the traffic sent to switch 2 is relatively large, physical port 3 of logical port 2 is prone to over-bandwidth packet loss, as the sketch below illustrates.
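The polarization effect described above can be reproduced with a short Python sketch. Everything in it is illustrative rather than taken from the patent: the CRC-based hash, the synthetic flow set, and the use of four member ports at both stages (logical port 2 in Figure 1 actually has three) are assumptions chosen so that the skew is deterministic.

```python
import zlib

def hash_port(src_mac: str, num_ports: int) -> int:
    """Pick a member port by hashing one packet characteristic (here: the source MAC)."""
    return zlib.crc32(src_mac.encode()) % num_ports

flows = [f"00:11:22:33:44:{i:02x}" for i in range(64)]  # 64 hypothetical flows

# Stage 1: switch 1 spreads the flows over the 4 member ports of its logical port 1.
stage1: dict[int, list[str]] = {}
for mac in flows:
    stage1.setdefault(hash_port(mac, 4), []).append(mac)

# Stage 2: switch 2 re-hashes the flows it received (those mapped to stage-1 ports 0
# and 1) with the SAME hash function and the same number of member ports.
received = stage1.get(0, []) + stage1.get(1, [])
stage2_counts = [0, 0, 0, 0]
for mac in received:
    stage2_counts[hash_port(mac, 4)] += 1

print({p: len(v) for p, v in sorted(stage1.items())})  # roughly even at stage 1
print(stage2_counts)  # member ports 2 and 3 of stage 2 carry no traffic at all
```

Because the second stage only ever sees flows whose hash value is 0 or 1, re-applying the same hash can never select ports 2 or 3, which is exactly the uneven sharing the embodiments set out to avoid.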
  • in view of this, the embodiments of the present application provide a traffic balancing method and device, which automatically adjust the amount of traffic on each member port of the Eth-Trunk port so that the member ports of the Eth-Trunk port share the traffic evenly.
  • in the embodiments of the present application, words such as "exemplary" or "for example" are used to represent examples, illustrations, or descriptions. Any embodiment or design solution described as "exemplary" or "for example" in the embodiments of the present application should not be construed as more preferable or advantageous than other embodiments or design solutions. Rather, words such as "exemplary" or "for example" are used to present related concepts in a specific manner.
  • FIG. 2 is a schematic diagram of an operating environment to which a traffic balancing method provided by an embodiment of the application is applicable.
  • the operating environment includes a controller and multiple switches (only switch 1, switch 2 and switch 3 are shown in the figure), and a network connection is established between the controller and each switch.
  • the controller can also be understood as a collector, a server, etc.
  • the switch can be a Layer 2 switch or a Layer 3 switch, which supports the Eth-Trunk mechanism. Based on the Eth-Trunk mechanism, multiple physical ports of the switch are bound to a logical port.
  • the switch can use the Eth-Trunk mechanism to increase network bandwidth and achieve load sharing and service protection.
  • in FIG. 2, only one logical port is shown for each switch, and the number of physical ports of each logical port is 3 or 4.
  • in practice, the number of logical ports of an Ethernet device can be very large, for example, 1000 or more, and the number of physical ports of each logical port can also be very large.
  • for example, multiple devices form a stack with at least 1500 physical ports, and 1000 logical ports can be obtained based on the 1500 physical ports.
  • the controller is responsible for processing such as calculating the weight factors, so as to balance the traffic on the logical ports of the Ethernet devices in the entire network.
  • FIG. 3 is a flowchart of a traffic balancing method provided by an embodiment of the present application, and the method includes steps 101-104.
  • step 101 the Ethernet device sends a query message to the controller, where the query message carries load information of each physical port among multiple physical ports of the logical port of the Ethernet device.
  • the controller receives the query message.
  • an agent module is pre-deployed on the Ethernet device to collect load information of each physical port of the logical port of the Ethernet device and report it to the controller, and the controller receives the query message.
  • the agent module performs traffic statistics on each physical port of the logical port, and determines the percentage of the bandwidth consumed by each physical port of the logical port to the total bandwidth of the physical port. For example, continuing to refer to Fig. 2, the logical port 2 has 3 physical ports, and the maximum bandwidths of the 3 physical ports are respectively 200M, 100M, and 50M.
  • the current consumed bandwidth of physical port 1 is 40M, which accounts for 20% of the total bandwidth of physical port 1; the current consumed bandwidth of physical port 2 is 40M, which accounts for 40% of the total bandwidth of physical port 2.
  • the current consumed bandwidth of physical port 3 is 47.5M, which accounts for 95% of the total bandwidth of physical port 3 (see the sketch below).
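As a quick check of these numbers, a minimal sketch (the dictionary layout is illustrative, and the units follow the "40M out of 200M" style of the text):

```python
# consumed and total bandwidth per physical port of logical port 2 (same units as the text)
consumed = {1: 40.0, 2: 40.0, 3: 47.5}
total = {1: 200.0, 2: 100.0, 3: 50.0}

# load information: percentage of each port's total bandwidth that is consumed
load_info = {port: 100.0 * consumed[port] / total[port] for port in consumed}
print(load_info)  # {1: 20.0, 2: 40.0, 3: 95.0}
```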
  • in step 102, the controller determines the weight factor of each physical port according to the load information of each physical port, where the weight factor of each physical port is positively correlated with the amount of the logical port's traffic that the physical port can share.
  • specifically, the controller determines the weight factor of each physical port according to the load information of each physical port of the logical port, and the weight factor of each physical port is positively correlated with the amount of the logical port's traffic that the physical port can share. That is to say, for a logical port, a physical port with a larger weight factor can share more traffic; equivalently, the larger a physical port's weight factor, the smaller the proportion of that port's total bandwidth that is currently consumed.
  • for example, based on the three percentages 20%, 40%, and 95%, the controller determines that the weight factors of physical port 1, physical port 2, and physical port 3 are 3.8, 1.9, and 0.8, respectively; that is, physical port 1 can share the most traffic, physical port 2 the second most, and physical port 3 the least.
  • step 103 the controller sends a response message to the Ethernet device, the response message carrying the weight factor of each physical port.
  • the Ethernet device receives the response message carrying the weight factor of each physical port.
  • step 104 when sending a message through the logical port, the Ethernet device adjusts the traffic sent to each physical port according to the weighting factor of each physical port.
  • the Ethernet device adjusts the traffic sent to each physical port according to the weighting factor of each physical port of the logical port.
  • for example, suppose the hash algorithm determines that a new data stream is to be forwarded via physical port 3.
  • the Ethernet device determines that the weight factor of physical port 3 is 0.8, which is relatively small; if physical port 3 were used to send the data stream, physical port 3 would be prone to over-bandwidth packet loss. Therefore, the Ethernet device selects the physical port with the largest weight factor, that is, physical port 1, to forward the new data stream (see the sketch below).
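A compact sketch of this decision, with the example weight factors above (the dictionary layout is an assumption; the two concrete selection rules given later, the minimum weight factor and the weight factor difference threshold, refine this simple "pick the largest weight" fallback):

```python
weights = {1: 3.8, 2: 1.9, 3: 0.8}   # weight factors carried in the response message
hash_selected = 3                     # member port chosen by the existing hash algorithm

# Port 3's weight factor is the smallest, so forwarding the new data stream on it would
# risk over-bandwidth packet loss; the device instead picks the largest-weight port.
chosen = max(weights, key=weights.get)
print(f"hash picked port {hash_selected}, the new stream is sent on port {chosen}")  # port 1
```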
  • in the embodiments of the present application, the Ethernet device sends to the controller a query message carrying the load information of each physical port of the logical port of the Ethernet device; the controller receives the query message and determines the weight factor of each physical port of the logical port according to the load information of each physical port.
  • the weight factor of each physical port is positively correlated with the amount of the logical port's traffic that the physical port can share.
  • the controller sends to the Ethernet device a response message carrying the weight factor of each physical port, which allows the Ethernet device to adjust the traffic sent to each physical port according to the weight factor of each physical port. In this process, there is no need for the user to manually adjust the hash factor involved in the hash calculation; instead, the Ethernet device automatically adjusts the traffic of each physical port, which has no impact on the service data currently being transmitted and will not cause service abnormalities.
  • the following uses an example in which logical port 2 includes physical port 1, physical port 2, and physical port 3 to explain in detail how the Ethernet device adjusts the traffic sent to each physical port when sending packets through the logical port.
  • a hash factor is preset on each Ethernet device.
  • the hash factor may be one or more of the source MAC address, destination MAC address, source IP address, destination IP address, source port number, destination port number, and other information of the packet.
  • for example, the source MAC address is used as the hash factor, and the hash algorithm maps the last three bits of the source MAC address to different physical ports: 000, 001, and 010 are mapped to physical port 1; 011 and 100 are mapped to physical port 2; and 101 and 111 are mapped to physical port 3 (see the sketch below).
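A sketch of this mapping, assuming the "last three digits" are the low three bits of the final MAC byte (the text does not assign bit pattern 110, so it is left unmapped here):

```python
# Assumed illustration of the mapping in the example above.
PORT_BY_LOW_BITS = {
    0b000: 1, 0b001: 1, 0b010: 1,
    0b011: 2, 0b100: 2,
    0b101: 3, 0b111: 3,
}

def select_port_by_hash(src_mac: str) -> int:
    last_byte = int(src_mac.split(":")[-1], 16)
    return PORT_BY_LOW_BITS[last_byte & 0b111]

print(select_port_by_hash("00:11:22:33:44:05"))  # low bits 101 -> physical port 3
```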
  • the Ethernet device adjusts the traffic sent to each physical port according to the preset hash factor and the weight factor of each physical port.
  • in one implementation, in addition to the preset hash factor, the Ethernet device is also configured with a minimum weight factor.
  • when a packet needs to be sent through the logical port, the Ethernet device determines a physical port according to the preset hash factor. Then, the Ethernet device determines whether the weight factor of that physical port is greater than the minimum weight factor. If the weight factor of the physical port is greater than the minimum weight factor, the packet is forwarded through that physical port; if the weight factor of the physical port is less than or equal to the minimum weight factor, the packet is forwarded through another physical port.
  • for example, suppose the weight factors of physical port 1, physical port 2, and physical port 3 are 3.8, 1.9, and 0.8, respectively, and the minimum weight factor is 1.
  • if the physical port determined by the hash algorithm is physical port 3, whose weight factor 0.8 is not greater than the minimum weight factor 1, the Ethernet device selects a physical port from physical port 1 and physical port 2 to send the packet. For example, a physical port is selected at random from physical port 1 and physical port 2; as another example, the physical port with the largest weight factor, that is, physical port 1, is selected to send the packet (see the sketch below).
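A minimal sketch of the minimum-weight-factor rule with these example values (the data layout and the choice of the largest-weight port as the fallback are illustrative; the text also allows a random choice):

```python
WEIGHTS = {1: 3.8, 2: 1.9, 3: 0.8}   # from the response message
MIN_WEIGHT = 1.0                      # locally configured minimum weight factor

def choose_port(hash_selected: int) -> int:
    # Keep the hash-selected port only if its weight factor exceeds the minimum.
    if WEIGHTS[hash_selected] > MIN_WEIGHT:
        return hash_selected
    # Otherwise fall back to another member port, here the one with the largest weight.
    others = {p: w for p, w in WEIGHTS.items() if p != hash_selected}
    return max(others, key=others.get)

print(choose_port(3))  # 0.8 <= 1.0, so the packet is sent on physical port 1
print(choose_port(2))  # 1.9 > 1.0, so the hash-selected port 2 is kept
```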
  • in another implementation, in addition to the preset hash factor, the Ethernet device is also configured with a weight factor difference threshold.
  • when a packet needs to be sent through logical port 2, the Ethernet device determines a physical port using the preset hash factor; then, the Ethernet device determines whether the difference between the maximum weight factor and the weight factor of that physical port is greater than or equal to the weight factor difference threshold. If the difference is greater than or equal to the weight factor difference threshold, the packet is sent through another physical port; if the difference is less than the weight factor difference threshold, the packet is sent through the physical port calculated by the hash algorithm.
  • for example, suppose the weight factors of physical port 1, physical port 2, and physical port 3 are 3.8, 1.9, and 0.8, respectively, and the weight factor difference threshold is 2.
  • if the physical port determined by the Ethernet device according to the hash algorithm is physical port 3, then, since the difference between the weight factor of physical port 3 and the maximum weight factor (that is, the weight factor of physical port 1) is 3, which is greater than the weight factor difference threshold 2, the Ethernet device selects a physical port from physical port 1 and physical port 2 to send the packet. For example, a physical port is selected at random from physical port 1 and physical port 2; as another example, the physical port with the largest weight factor, that is, physical port 1, is selected to send the packet (see the sketch below).
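The weight-factor-difference rule can be sketched in the same way (again, falling back to the largest-weight port is just one of the options the text mentions):

```python
WEIGHTS = {1: 3.8, 2: 1.9, 3: 0.8}
DIFF_THRESHOLD = 2.0                  # locally configured weight factor difference threshold

def choose_port(hash_selected: int) -> int:
    max_weight = max(WEIGHTS.values())
    # If the selected port's weight lags the best port by the threshold or more,
    # send the packet through another member port.
    if max_weight - WEIGHTS[hash_selected] >= DIFF_THRESHOLD:
        others = {p: w for p, w in WEIGHTS.items() if p != hash_selected}
        return max(others, key=others.get)
    return hash_selected

print(choose_port(3))  # 3.8 - 0.8 = 3.0 >= 2.0, so physical port 1 is used
print(choose_port(2))  # 3.8 - 1.9 = 1.9 < 2.0, so the hash-selected port 2 is kept
```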
  • using this method, the Ethernet device determines the physical port to be used for forwarding the packet according to the preset hash factor, and then determines, according to the weight factor, whether the packet can be forwarded through the physical port determined by the hash factor; the traffic sent to each physical port can thus be adjusted without modifying the hash factor. The adjustment process is simple and will not affect services.
  • when the services are fixed, the source MAC address, destination MAC address, and other characteristics of the packets of these services are fixed; therefore, the hash factor and the weight factor can be combined to adjust the traffic sent to each physical port.
  • in practice, however, the services are not fixed. If the services change, for example, 100 users are initiating services before the Ethernet device sends the query message to the controller, while 1000 users are initiating services after the Ethernet device receives the response message sent by the controller, the traffic of each physical port of the logical port changes; obviously, the weight factors carried in the response message can then no longer meet the needs of traffic balancing.
  • in view of this, the Ethernet device adjusting, when sending packets through the logical port, the traffic sent to each physical port according to the weight factor of each physical port includes: when sending packets through the logical port, the Ethernet device determines whether the traffic of each physical port of the logical port has changed; if the traffic of each physical port has not changed, the Ethernet device adjusts the traffic sent to each physical port according to the weight factor of each physical port.
  • the Ethernet device has logical port 2.
  • the logical port 2 includes physical port 1, physical port 2, and physical port 3.
  • 100 users are initiating services.
  • the current consumed bandwidth of physical port 1 is 40M, which accounts for 20% of the total bandwidth of physical port 1; the current consumed bandwidth of physical port 2 is 40M, which accounts for 40% of the total bandwidth of physical port 2; and the current consumed bandwidth of physical port 3 is 47.5M, which accounts for 95% of the total bandwidth of physical port 3.
  • the controller determines that the weight factors of physical port 1, physical port 2, and physical port 3 are 3.8, 1.9, and 0.8, respectively, encapsulates the weight factors and the logical port identifier of the Ethernet device into a response message, and sends the response message to the Ethernet device.
  • after receiving the response message, the Ethernet device determines that the weight factors of physical port 1, physical port 2, and physical port 3 are 3.8, 1.9, and 0.8, respectively, and at the same time determines whether the change in the percentage indicated by the load information of each physical port exceeds the preset threshold, that is, whether the change in the percentage of each physical port's total bandwidth that is consumed exceeds the preset threshold.
  • if, when the Ethernet device receives the response message, the percentages of total bandwidth consumed by physical port 1, physical port 2, and physical port 3 are still 20%, 40%, and 95%, or the percentages have changed only slightly, for example from the original 20%, 40%, and 95% to 25%, 40%, and 95%, the Ethernet device adjusts the traffic sent to each physical port according to the weight factor of each physical port.
  • the load information of each physical port is used to indicate the percentage of the bandwidth consumed by the physical port in the total bandwidth of the physical port.
  • using this method, before adjusting the traffic sent to each physical port according to the weight factors, the Ethernet device checks whether the change in the percentage indicated by the load information of each physical port exceeds the preset threshold; only when the change for all physical ports does not exceed the preset threshold does it adjust the traffic sent to each physical port according to the weight factors, which avoids the situation in which the weight factors are no longer applicable because the percentage of a physical port's total bandwidth that is consumed has changed significantly. In addition, the Ethernet device may check the percentage change indicated by the load information of only some of the physical ports.
  • for example, after receiving the response message, the Ethernet device finds that the weight factor of physical port 1 is the largest and the weight factor of physical port 3 is the smallest, so the traffic sent to physical port 1 should be increased.
  • therefore, the Ethernet device mainly checks whether the percentage of physical port 1's total bandwidth that is consumed has changed significantly. If the percentage changes from 20% to 30%, the change is 10 percentage points, which is less than the preset threshold of 30%, so the Ethernet device adjusts the traffic sent to each physical port according to the weight factors carried in the response message (see the sketch below).
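A compact sketch of this applicability check, using the example numbers above (the 30% threshold and the per-port comparison come from the example; the function name and dictionary layout are mine):

```python
PRESET_THRESHOLD = 30.0   # maximum tolerated change, in percentage points

def weights_still_applicable(old_load: dict, new_load: dict) -> bool:
    """True if no port's consumed-bandwidth percentage moved more than the threshold."""
    return all(abs(new_load[p] - old_load[p]) <= PRESET_THRESHOLD for p in old_load)

old = {1: 20.0, 2: 40.0, 3: 95.0}   # percentages carried in the query message
new = {1: 30.0, 2: 40.0, 3: 95.0}   # percentages measured when the response arrives

if weights_still_applicable(old, new):
    print("apply the weight factors from the response message")
else:
    print("re-collect load information and send a new query message")
```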
  • in other words, the Ethernet device determines whether the weight factor of each physical port is still applicable to the physical ports whose traffic has changed; if the weight factor of each physical port is applicable to each physical port whose traffic has changed, the Ethernet device adjusts the traffic sent to each physical port according to the weight factor of each physical port.
  • otherwise, the Ethernet device re-collects the load information of each physical port and sends it to the controller, so that the controller recalculates the weight factors.
  • the query message and the response message may be User Datagram Protocol (UDP) messages.
  • FIG. 4 is a schematic diagram of the structure of a UDP packet in a traffic balancing method provided in an embodiment of the present application.
  • a UDP packet includes an outer Ethernet header, an outer IP header, an outer UDP header, and a payload.
  • the outer Ethernet header includes a destination MAC address (MAC DA), a source MAC address (MAC SA), a label, an Ethernet type, and so on; the outer IP header includes a protocol field, a source IP address (IP SA), and a destination IP address (IP DA); the outer UDP header includes a source port number, a destination port number, a UDP length, and a UDP checksum.
  • the UDP message used as the query message has a fixed UDP port number; that is, the source port number in the outer UDP header is fixed, for example, 6000.
  • after the controller receives a UDP packet and parses it, if the UDP port number of the packet is 6000, the controller determines that the packet is a query message and calculates the weight factors based on the load information carried in the packet; if the UDP port number is not 6000, there is no need to calculate weight factors.
  • after the Ethernet device receives a UDP packet and parses it, if the UDP port number of the packet is 6000, the Ethernet device determines that the packet is a response message and adjusts the traffic sent to each physical port based on the weight factors carried in the packet; if the UDP port number is not 6000, there is no need to adjust the traffic of each physical port.
  • when the UDP message is a query message, the payload part of the UDP message is used to carry the device information of the Ethernet device, the load information of each physical port of the logical port, and so on; when the UDP message is a response message, the payload part is used to carry the device information of the Ethernet device, the weight factor of each physical port of the logical port, and so on (see the classification sketch below).
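The fixed UDP port number lends itself to a small classification sketch. It is illustrative only: the value 6000 is the example given in the text, the function is hypothetical, and the exact payload layout of Table 1 is not reproduced.

```python
QUERY_RESPONSE_UDP_PORT = 6000   # example fixed port number from the text

def classify(udp_port: int, on_controller: bool) -> str:
    """Classify a received UDP packet for the traffic-balancing logic."""
    if udp_port != QUERY_RESPONSE_UDP_PORT:
        return "not a traffic-balancing message, ignore"
    # On the controller such a packet is a query message; on the Ethernet device it is
    # a response message.
    return ("query message: compute weight factors from the carried load information"
            if on_controller
            else "response message: adjust per-port traffic using the carried weight factors")

print(classify(6000, on_controller=True))
print(classify(5353, on_controller=False))
```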
  • the payload part of the UDP message is described in detail below; for an example, see Table 1.
  • Table 1 is a detailed content table of the payload part of the UDP packet in the embodiment of the application.
  • the IP field represents the IP address of the Ethernet device, that is, the network location of the Ethernet device, and is used for exchanging messages between the Ethernet device and the controller.
  • this IP address can also be called the IP address of the agent.
  • the controller parses the payload part of the query message, saves the relevant information of the Ethernet device in the local database, and extracts the information as input during calculation.
  • the Eth-trunk ID field includes the number of the logical port, the index number (ifindex) data of the logical port, and so on.
  • the Eth-trunk ID field can uniquely identify a logical port on the Ethernet device.
  • after the controller calculates the weight factors, the Eth-trunk ID is carried in the response message sent to the Ethernet device, so that the Ethernet device can determine the corresponding logical port according to the Eth-trunk ID.
  • the Member_Num field indicates the number of physical ports included in the logical port.
  • the controller determines the number of physical ports according to the Member_Num field, then traverses all physical ports according to the number, and calculates the weighting factor of each physical port according to the load information of each physical port.
  • for the field corresponding to each physical port, if the UDP message is a query message, the value of this field represents the load information of the physical port, which is important input data for the controller when calculating the weight factors; if the UDP message is a response message, the value of this field represents the weight factor of the physical port and is used to instruct the Ethernet device to adjust the traffic of each physical port according to the weight factor of each physical port.
  • the load information of a physical port is used to indicate the percentage of the physical port's total bandwidth that is consumed, and the controller determining the weight factor of each physical port according to the load information of each physical port includes: the controller determines the least common multiple of the percentages of the multiple physical ports according to the percentage of each physical port, and determines the weight factor of each physical port according to the least common multiple, where the weight factor of a physical port = the least common multiple / the percentage of that physical port.
  • if the weight factor is denoted as weight, the percentage as OutUti, and the least common multiple as X, then weight = X / OutUti.
  • for example, in the query message, the logical port identifier is 2, the physical port number field is 3, and the physical port fields are 20%, 40%, and 95%, respectively; that is, logical port 2 includes physical port 1, physical port 2, and physical port 3, and the current consumed bandwidth of the three physical ports accounts for 20%, 40%, and 95% of their respective total bandwidths (see the sketch below).
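The stated formula, weight = X / OutUti with X the least common multiple of the percentages, can be sketched as follows. Applying it literally to 20, 40 and 95 gives 38, 19 and 8, whereas the worked example in the text reports 3.8, 1.9 and 0.8, i.e. the same ratios scaled down by ten; only the relative sizes matter when sharing traffic, and the sketch follows the formula as written.

```python
from math import lcm

def weight_factors(load_percent: dict) -> dict:
    """weight = X / OutUti, where X is the least common multiple of the percentages."""
    x = lcm(*(round(p) for p in load_percent.values()))
    return {port: x / p for port, p in load_percent.items()}

print(weight_factors({1: 20, 2: 40, 3: 95}))
# {1: 38.0, 2: 19.0, 3: 8.0} -- the same ratios as the 3.8 / 1.9 / 0.8 in the example
```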
  • the timing for the Ethernet device to report the load information of each physical port in the embodiment of the present application will be described in detail.
  • the timing can also be understood as the timing for the controller to determine the weighting factor of each physical port of the logical port.
  • the Ethernet device sending a query message to the controller includes: the Ethernet device determines whether the traffic of at least one of the multiple physical ports exceeds a preset threshold; if the traffic of at least one physical port exceeds the preset threshold, the Ethernet device sends the query message to the controller.
  • specifically, an agent module is deployed on the Ethernet device, and if the load of a certain logical port exceeds the limit, the Ethernet device reminds the user that the traffic distribution may be uneven. After sensing this event, the agent module actively collects the load information of each physical port of the logical port and reports it to the controller. For example, a bandwidth warning value, such as 90%, is pre-stored on the Ethernet device; if the current consumed bandwidth of a physical port accounts for more than 90% of the total bandwidth of that physical port, a warning is issued to the user. The agent module senses this event, collects the load information of each physical port of the logical port, and reports it to the controller.
  • as another example, the Ethernet device pre-stores a bandwidth warning value difference, such as 10%, and the agent module collects the load information of each logical port. If, among the physical ports of a logical port, there are two physical ports whose consumed-bandwidth percentages differ by more than 10%, the agent module sends the load information of each physical port to the controller. For example, the agent module finds that the percentages of total bandwidth consumed by physical port 1, physical port 2, and physical port 3 of a logical port are 20%, 40%, and 60%, respectively; the difference between the percentages consumed by physical port 1 and physical port 2 is greater than 10%, the difference between the percentages consumed by physical port 2 and physical port 3 is greater than 10%, and the difference between the percentages consumed by physical port 1 and physical port 3 is greater than 10%, so the agent module sends the load information of each physical port of the logical port to the controller (both triggers are sketched below).
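The two reporting triggers just described, a per-port bandwidth warning value and a bandwidth warning value difference between member ports, can be sketched as follows. The constants 90% and 10% are the examples from the text; combining the two checks with a logical OR, and the function and variable names, are my assumptions.

```python
BANDWIDTH_WARNING = 90.0        # per-port warning value, in percent
WARNING_DIFFERENCE = 10.0       # allowed spread between member ports, in percent

def should_send_query(load_percent: dict) -> bool:
    """Decide whether the agent module reports the logical port's load to the controller."""
    values = list(load_percent.values())
    over_limit = any(p > BANDWIDTH_WARNING for p in values)
    too_uneven = max(values) - min(values) > WARNING_DIFFERENCE
    return over_limit or too_uneven

print(should_send_query({1: 20.0, 2: 40.0, 3: 60.0}))   # True: the spread 40 exceeds 10
print(should_send_query({1: 50.0, 2: 52.0, 3: 55.0}))   # False under these example rules
```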
  • using this method, the Ethernet device actively reports the load information of each physical port of the logical port to the controller, so that the controller determines the weight factors according to the load information, and the traffic of each physical port is adjusted without modifying the hash factor.
  • the Ethernet device sending the query message to the controller includes: the Ethernet device sends the query message to the controller multiple times.
  • the Ethernet device may periodically or irregularly collect the load information of each physical port of the logical port, and report it to the controller in batches.
  • using this method, the Ethernet device periodically or aperiodically reports the load information of each physical port of the logical port to the controller, so that the controller determines the weight factors according to the load information, and the traffic of each physical port is adjusted without modifying the hash factor.
  • FIG. 5 is a schematic structural diagram of a controller provided in an embodiment of the present application.
  • the controller is connected to a network card, and the controller includes a message processing chip, an external memory, an internal memory, an input device, an output device, an arithmetic unit, a control chip, and so on.
  • the query message sent by the Ethernet device reaches the message processing chip via the network card.
  • the message processing chip parses the query message to obtain the IP address of the Ethernet device, the load information of each physical port of the logical port of the Ethernet device, and so on, and reports the parsed information to the internal memory.
  • the control chip periodically triggers the arithmetic unit to call the information in the internal memory, calculate the weight factor of each physical port from the called information, and store the weight factor of each physical port in the internal memory.
  • the message processing chip calls the weight factor in the internal memory, encapsulates the weight factor in the response message, and sends the response message to the Ethernet device through the network card.
  • the Ethernet device then adjusts the traffic of each physical port according to the weight factor of each physical port.
  • FIG. 6 is a schematic structural diagram of a traffic balancing device provided by an embodiment of the application.
  • the traffic balancing device involved in this embodiment may be an Ethernet device or a chip applied to the Ethernet device.
  • the traffic balancing device can be used to perform the functions of the Ethernet device in the foregoing embodiment.
  • the traffic balancing device 100 may include:
  • the sending unit 11 is configured to send a query message to the controller, where the query message carries load information of each physical port among the multiple physical ports of the logical port of the Ethernet device;
  • the receiving unit 12 is configured to receive a response message sent by the controller, where the response message carries the weight factor of each physical port, and the weight factor of each physical port is positively correlated with the amount of the logical port's traffic that the physical port can share;
  • the processing unit 13 is configured to adjust the traffic sent to each physical port according to the weighting factor of each physical port when the sending unit 11 sends a message through the logical port.
  • the processing unit 13 is further configured to determine whether the traffic of at least one physical port among the multiple physical ports exceeds a preset threshold
  • the sending unit 11 is configured to send the query message to the controller if the processing unit 13 determines that the traffic of at least one physical port exceeds a preset threshold.
  • the load information of each physical port is used to indicate the percentage of the physical port's total bandwidth that is consumed;
  • the processing unit 13 is configured to determine whether the percentage indicated by the load information of each physical port has changed, and, if the change in the percentage indicated by the load information of each physical port does not exceed the preset threshold, to adjust the traffic sent to each physical port according to the weight factor of each physical port.
  • the processing unit 13 is further configured to re-collect the load information of each physical port if the change in the percentage indicated by the load information of one or more of the physical ports exceeds the preset threshold;
  • the sending unit 11 is further configured to send a query message carrying the newly collected load information to the controller.
  • the processing unit 13 is configured to adjust the traffic sent to each physical port according to a preset hash factor and a weight factor of each physical port.
  • the traffic balancing device provided in the embodiment of the present application can perform the actions of the Ethernet device in the foregoing embodiment, and its implementation principles and technical effects are similar, and will not be repeated here.
  • FIG. 7 is a schematic structural diagram of another traffic balancing device provided by an embodiment of the application.
  • the traffic balancing device involved in this embodiment may be a controller or a chip applied to the controller.
  • the traffic balancing device can be used to perform the functions of the controller in the foregoing embodiment.
  • the traffic balancing device 200 may include:
  • the receiving unit 21 is configured to receive a query message sent by an Ethernet device, where the query message carries load information of each physical port among multiple physical ports of the logical port of the Ethernet device;
  • the processing unit 22 is configured to determine the weight factor of each physical port according to the load information of each physical port, where the weight factor of each physical port is positively correlated with the amount of the logical port's traffic that the physical port can share;
  • the sending unit 23 is configured to send a response message to the Ethernet device, the response message carrying the weight factor of each physical port.
  • the traffic equalization device provided in the embodiment of the present application can execute the actions of the controller in the above-mentioned embodiment, and its implementation principles and technical effects are similar, and will not be repeated here.
  • the processing unit can be implemented in the form of software called by a processing element, or can be implemented in the form of hardware.
  • the processing unit may be a separate processing element, or it may be integrated in a chip of the above-mentioned device for implementation.
  • alternatively, it may be stored in the memory of the above-mentioned device in the form of program code, and a certain processing element of the above-mentioned device calls and executes the functions of the above processing unit.
  • all or part of these units can be integrated together or implemented independently.
  • the processing element described here may be an integrated circuit with signal processing capability. In the implementation process, each step of the above method or each of the above units can be completed by an integrated logic circuit of hardware in the processor element or instructions in the form of software.
  • the above units may be one or more integrated circuits configured to implement the above methods, for example, one or more application-specific integrated circuits (ASICs), one or more microprocessors (digital signal processors, DSPs), or one or more field-programmable gate arrays (FPGAs).
  • the processing element may be a general-purpose processor, such as a central processing unit (CPU) or other processors that can call program codes.
  • these units can be integrated together and implemented in the form of a system-on-a-chip (SOC).
  • an embodiment of the present application further provides a traffic balancing system, which includes one or more traffic balancing devices shown in FIG. 6 above (which may be referred to as first traffic balancing devices) and the traffic balancing device described in FIG. 7 above (which may be referred to as a second traffic balancing device). Because the device shown in FIG. 6 is deployed on an Ethernet device and the device shown in FIG. 7 is deployed on the controller, in one embodiment the traffic balancing system may include multiple Ethernet devices and one controller.
  • FIG. 8 is a schematic structural diagram of another traffic balancing device provided by an embodiment of the application.
  • the traffic balancing device 300 may include: a processor 31 (for example, a CPU), a memory 32, a receiver 33, and a transmitter 34; both the receiver 33 and the transmitter 34 are coupled to the processor 31, the processor 31 controls the receiving action of the receiver 33, and the processor 31 controls the sending action of the transmitter 34;
  • the memory 32 may include a high-speed random access memory (RAM), and may further include a non-volatile memory (NVM), for example, at least one disk storage.
  • the memory 32 can store various instructions for completing various processing functions and implementing the method steps of the present application.
  • the traffic balancing device involved in this application may further include: a communication bus 35.
  • the receiver 33 and the transmitter 34 may be integrated in the transceiver of the flow balancing device, or may be independent transceiver antennas on the flow balancing device.
  • the communication bus 35 is used to implement communication connections between components.
  • in an embodiment of this application, the above-mentioned memory 32 is used to store computer-executable program code, and the program code includes instructions; when the processor 31 executes the instructions, the processor 31 of the traffic balancing device performs the processing actions of the Ethernet device in the above method embodiment, the receiver 33 performs the receiving actions of the Ethernet device in the above-mentioned embodiment, and the transmitter 34 performs the sending actions of the Ethernet device in the above method embodiment; the implementation principles and technical effects are similar and are not repeated here.
  • in another embodiment of this application, the aforementioned memory 32 is used to store computer-executable program code, and the program code includes instructions; when the processor 31 executes the instructions, the processor 31 of the traffic balancing device performs the processing actions of the controller in the aforementioned method embodiment, the receiver 33 performs the receiving actions of the controller in the foregoing embodiment, and the transmitter 34 performs the sending actions of the controller in the foregoing method embodiment.
  • An embodiment of the present application also provides a storage medium; the storage medium stores computer-executable instructions that, when executed by a processor, implement the above-mentioned traffic balancing method.
  • An embodiment of the present invention also provides a computer program product which, when run on an Ethernet device, causes the Ethernet device to perform the above-mentioned traffic balancing method.
  • An embodiment of the present invention also provides a computer program product which, when run on the controller, causes the controller to perform the above-mentioned traffic balancing method.
  • the foregoing embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • when implemented by software, the foregoing embodiments may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • when the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part.
  • the computer can be a general-purpose computer, a dedicated computer, a computer network, or other programmable devices.
  • Computer instructions can be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired manner (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (such as infrared, radio, or microwave).
  • a computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
  • the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid-state drive (SSD)).
  • the term "a plurality of" herein refers to two or more.
  • the term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: only A exists, both A and B exist, or only B exists.
  • the expression "at least one of" herein means one of the listed items or any combination thereof; for example, "at least one of A, B, and C" may mean: only A exists, only B exists, only C exists, both A and B exist, both B and C exist, both A and C exist, or all of A, B, and C exist.
  • the character "/" herein generally indicates an "or" relationship between the associated objects; in a formula, the character "/" indicates a "division" relationship between the associated objects.
  • the sequence numbers of the foregoing processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation processes of the embodiments of this application.

Abstract

This application provides a traffic balancing method and apparatus. An Ethernet device sends to a controller a query message carrying the load information of each physical port of a logical port of the Ethernet device. The controller receives the query message and determines, according to the load information of each physical port, a weight factor of each physical port of the logical port, where the weight factor of each physical port is positively correlated with the amount of the logical port's traffic that the physical port can share. The controller then sends to the Ethernet device a response message carrying the weight factor of each physical port, so that the Ethernet device adjusts, according to the weight factors, the traffic sent to each physical port. In this process, the user does not need to manually adjust the hash factors involved in hash computation; instead, the Ethernet device automatically adjusts the traffic of each physical port, which has no impact on the service data currently being transmitted and does not cause service exceptions.

Description

流量均衡方法及装置
本申请要求于2019年3月18日提交中国国家知识产权局、申请号为201910205012.0、发明名称为“流量均衡方法及装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及通信技术领域,尤其涉及一种流量均衡方法及装置。
背景技术
以太网链路聚合(Ethernet trunk,Eth-Trunk)机制用于将以太网设备的多个物理端口捆绑为一个逻辑端口来使用,该逻辑端口也称之为Eth-Trunk端口。绑定在一起的物理端口称为该逻辑端口的成员端口。以太网设备可以使用Eth-Trunk机制来提高网络带宽,实现负载分担与业务保护等需要。由于Eth-Trunk端口是一个逻辑端口,并不能承载真正的报文转发工作,因此,在转发报文时,发往该逻辑端口的流量被按照负载分担算法转发到该逻辑端口的某个物理端口上。
基于Eth-Trunk机制,衍生出应用于Eth-Trunk级联系统的Eth-Trunk级联机制。该Eth-Trunk级联系统包括多级以太网设备,多个以太网设备相互级联,流量在第一级以太网设备上经过一次哈希(hash)计算,被均匀分担到该第一以太网设备的Eth-Trunk端口的各成员端口上,使得流量到达第二级的以太网设备。之后,第二级以太网设备上经过一次哈希计算,使得流量被分担到第三级以太网设备上,以此类推。以太网设备在执行Hash计算的过程中,根据报文的几个特定特征进行哈希计算,从而确定出流量应该从该逻辑端口的哪个物理端口发出,其中,特定的特征也称之为哈希因子。由于各级以太网设备使用的hash算法相同,各级的以太网设备的计算结果比较相近,即各级以太网设备根据哈希因子确定出的物理端口是固定的,导致到达每一级以太网设备的流量,只能从该以太网设备的逻辑端口的几个特定的物理端口发出,造成某些物理端口的流量很大,而其他物理端口的流量很小。
为避免Eth-Trunk端口的各成员端口的流量分担不均匀,传统的处理方式是手动修改参与哈希计算的哈希因子,根据新的哈希因子调整发往各物理端口的流量。然而,即使使用新的哈希因子,计算出的物理端口也是固定的,同样无法避免Eth-Trunk端口的各成员端口的流量分担不均匀的现象。同时,修改哈希因子的过程中,需要经过多次尝试修改后才可能得出合适的哈希因子,该调整过程容易导致业务异常。
发明内容
本申请实施例提供一种流量均衡方法及装置,通过自动调整Eth-Trunk端口的各成员端口的流量大小,达到使Eth-Trunk端口的各成员端口均匀分担流量的目的。
第一方面,本申请实施例提供的一种流量均衡方法,该方法可以应用于以太网设备、也可以应用于以太网设备中的芯片,下面以应用于以太网设备为例对该方法进行 描述,该方法包括:以太网设备向控制器发送查询报文,该查询报文携带以太网设备的逻辑端口的多个物理端口中每个物理端口的负载信息;以太网设备接收控制器发送的应答报文,应答报文携带每个物理端口的权重因子,每个物理端口的权重因子与物理端口可分担逻辑端口的流量的大小正相关;在通过逻辑端口发送报文时,以太网设备根据每个物理端口的权重因子,调整发往每个物理端口的流量。采用该种方法,无需用户手动调整参与哈希计算的哈希因子,而是由以太网设备自动对各物理端口的流量进行调整,对当前传输的业务数据没有影响,不会引发业务异常。
一种可行的设计中,以太网设备向控制器发送查询报文,包括:以太网设备判断多个物理端口中,是否存在至少一个物理端口的流量超出预设阈值;若存在至少一个物理端口的流量超出预设阈值,则以太网设备向控制器发送查询报文。采用该种方法,以太网设备主动向控制器上报逻辑端口的各物理端口的负载信息,使得控制器根据负载信息确定权重因子,实现不修改哈希因子的前提下调整各物理端口的流量的目的。
一种可行的设计中,以太网设备向控制器发送查询报文,包括:以太网设备多次向该控制器发送查询报文。采用这种方法,以太网设备定期或不定期向控制器上报逻辑端口的各物理端口的负载信息,使得控制器根据负载信息确定权重因子,实现不修改哈希因子的前提下调整各物理端口的流量的目的。
一种可行的设计中,以太网设备上除了配置预设的哈希因子外,还配置最小权重因子,在需要通过逻辑端口2发送报文时,以太网设备根据预设的哈希因子,确定一个物理端口,然后,以太网设备确定该物理端口的权重因子是否大于最小权重因子,若该物理端口的权重因子大于最小权重因子,则通过该物理端口转发报文;若该物理端口的权重因子小于或等于最小权重因子,则通过其他物理端口转发报文。采用该种方法,在通过逻辑端口发送报文时,以太网设备根据预设的哈希因子确定用于转发该报文的物理端口,然后根据权重因子确定该报文是否可以通过根据哈希因子确定出的物理端口转发,无需修改哈希因子即可实现对发往各物理端口的流量进行调整,调整过程简单,不会对业务产生影响
一种可行的设计中,以太网设备上除了配置预设的哈希因子外,还配置权重因子差值阈值,在需要通过逻辑端口2发送报文时,以太网设备根据预设的哈希因子,确定一个物理端口,然后,以太网设备确定该物理端口的权重因子与最大权重因子的差值是否大于或等于权重因子差值阈值,若差值大于或等于权重因子差值阈值,则通过其他物理端口发送报文;若差值小于权重因子差值阈值,则通过哈希算法计算出的物理端口发送报文。采用该种方法,在通过逻辑端口发送报文时,以太网设备根据预设的哈希因子确定用于转发该报文的物理端口,然后根据权重因子确定该报文是否可以通过根据哈希因子确定出的物理端口转发,无需修改哈希因子即可实现对发往各物理端口的流量进行调整,调整过程简单,不会对业务产生影响
一种可行的设计中,一个物理端口的负载信息用于指示该物理端口已消耗带宽占该物理端口总带宽的百分比,以太网设备根据每个物理端口的权重因子,调整发往每个物理端口的流量,包括:以太网设备判断逻辑端口的每个物理端口的负载信息指示的百分比是否发生变化;若每个物理端口的负载信息指示的百分比的变化量未超出预 设阈值,则以太网设备根据每个物理端口的权重因子,调整发往每个物理端口的流量。采用该种方法,以太网设备根据权重因子调整发往各物理端口的流量之前,判断各物理端口中每个物理端口的负载信息指示的百分比的变化量是否超出预设阈值,只有在所有物理端口负载信息指示的百分比的变化量未超出预设阈值的情况下,才根据权重因子调整发往各物理端口的流量,避免物理端口已消耗带宽占该物理端口总带宽的百分比发生较大变化时,权重因子不适用的现象。
一种可行的设计中,上述的方法还包括:若多个物理端口中一个或多个物理端口的负载信息指示的百分比的变化量超出预设阈值,则以太网设备重新采集每个物理端口的负载信息;以太网设备向控制器发送携带重新采集到的负载信息的查询报文。采用该种方法,若各物理端口中全部或部分物理端口的负载信息指示的百分比的变化量是否超出预设阈值,则以太网设备重新采集各物理端口的负载信息并发送给控制器以重新确定权重因子,避免物理端口已消耗带宽占该物理端口总带宽的百分比发生较大变化时,权重因子不适用的现象。
一种可行的设计中,以太网设备根据每个物理端口的权重因子,调整发往每个物理端口的流量,包括:以太网设备根据预设的哈希因子和每个物理端口的权重因子,调整发往每个物理端口的流量。采用该种方法,在通过逻辑端口发送报文时,以太网设备根据预设的哈希因子确定用于转发该报文的物理端口,然后根据权重因子确定该报文是否可以通过根据哈希因子确定出的物理端口转发,无需修改哈希因子即可实现对发往各物理端口的流量进行调整,调整过程简单,不会对业务产生影响。
第二方面,本发明实施例提供一种流量均衡方法,包括:控制器接收以太网设备发送的查询报文,该查询报文携带以太网设备的逻辑端口的多个物理端口中每个物理端口的负载信息;控制器根据每个物理端口的负载信息,确定每个物理端口的权重因子,每个物理端口的权重因子与物理端口可分担逻辑端口的流量的大小正相关;控制器向以太网设备发送应答报文,应答报文携带每个物理端口的权重因子。采用该种方法,无需用户手动调整参与哈希计算的哈希因子,而是由以太网设备自动对各物理端口的流量进行调整,对当前传输的业务数据没有影响,不会引发业务异常。
一种可行的设计中,每个物理端口的负载信息用于指示物理端口已消耗带宽占物理端口总带宽的百分比,控制器根据每个物理端口的负载信息,确定每个物理端口的权重因子,包括:控制器根据每个物理端口的百分比,确定该多个物理端口的百分比的最小公倍数;控制器根据该最小公倍数,确定每个物理端口的权重因子,一个物理端口的权重因子=公倍数/所述物理端口的百分比。采用该种方法,控制器可以确定出权重因子。
第三方面,本申请实施例提供一种流量均衡装置,该装置可以是以太网设备,也可以是以太网设备内的芯片。该装置可以包括处理单元、发送单元和接收单元。当该装置是以太网设备时,该处理单元可以是处理器,发送单元可以是发送器,接收单元可以是接收器;该以太网设备还可以包括存储单元,该存储单元可以是存储器;该存储单元用于存储指令,该处理单元执行该存储单元所存储的指令,以使该以太网设备实现上述第一方面或第一方面的各种可能的实现方式中的功能。当该装置是以太网设备内的芯片时,该处理单元可以是处理器,该收发单元可以是输入/输出接口、管脚或电路等;该处 理单元执行存储单元所存储的指令,以使该以太网设备实现上述第一方面或第一方面的各种可能的实现方式中的功能,该存储单元可以是该芯片内的存储单元(例如,寄存器、缓存等),也可以是该以太网设备内的位于该芯片外部的存储单元(例如,只读存储器、随机存取存储器等)。
第四方面,本申请实施例提供一种流量均衡装置,该装置可以是控制器,也可以是控制器内的芯片。该装置可以包括处理单元、发送单元和接收单元。当该装置是控制器时,该处理单元可以是处理器,发送单元可以是发送器,接收单元可以是接收器;该控制器还可以包括存储单元,该存储单元可以是存储器;该存储单元用于存储指令,该处理单元执行该存储单元所存储的指令,以使该控制器实现上述第二方面或第二方面的各种可能的实现方式中的功能。当该装置是控制器内的芯片时,该处理单元可以是处理器,该收发单元可以是输入/输出接口、管脚或电路等;该处理单元执行存储单元所存储的指令,以使该控制器实现上述第二方面或第二方面的各种可能的实现方式中的功能,该存储单元可以是该芯片内的存储单元(例如,寄存器、缓存等),也可以是该控制器内的位于该芯片外部的存储单元(例如,只读存储器、随机存取存储器等)。
第五方面,本申请实施例提供一种包含指令的计算机程序产品,当其在处理器上运行时,使得处理器执行上述第一方面或第一方面的各种可能的实现方式中的方法。
第六方面,本申请实施例提供一种包含指令的计算机程序产品,当其在处理器上运行时,使得处理器机执行上述第二方面或第二方面的各种可能的实现方式中的方法。
第七方面,本申请实施例提供一种计算机可读存储介质,该计算机可读存储介质中存储有指令,当其在处理器上运行时,使得处理器执行上述第一方面或第一方面的各种可能的实现方式中的方法。
第八方面,本申请实施例提供一种计算机可读存储介质,该计算机可读存储介质中存储有指令,当其在处理器上运行时,使得处理器执行上述第二方面或第二方面的各种可能的实现方式中的方法。
第九方面,本申请实施例提供一种流量均衡系统,该系统包括第一流量均衡装置和第二流量均衡装置,该第一流量均衡装置为如上第三方面的流量均衡装置,该第二流量均衡装置为如上第四方面的流量均衡装置。
本申请实施例提供的流量均衡方法及装置,以太网设备向控制器发送携带该以太网设备的逻辑端口的各物理端口的负载信息的查询报文,控制器接收该查询报文,并根据各物理端口的负载信息,确定逻辑端口的每个物理端口的权重因子,每个物理端口的权重因子与该物理端口可分担逻辑端口的流量的大小正相关,之后,控制器向以太网设备发送携带各物理端口的权重因子的应答报文,使得以太网设备根据各物理端口的权重因子,调整发往各物理端口的流量。该过程中,无需用户手动调整参与哈希计算的哈希因子,而是由以太网设备自动对各物理端口的流量进行调整,对当前传输的业务数据没有影响,不会引发业务异常。
附图说明
图1是Eth-Trunk级联转发示意图;
图2为本申请实施例提供的一种流量均衡方法所适用的运行环境示意图;
图3是本申请实施例提供的一种流量均衡方法的流程图;
图4是本申请实施例提供的一种流量均衡方法中UDP报文的结构示意图;
图5是本申请实施例提供的一种控制器的结构示意图;
图6为本申请实施例提供的一种流量均衡装置的结构示意图;
图7为本申请实施例提供的另一种流量均衡装置的结构示意图;
图8为本申请实施例提供的又一种流量均衡装置的结构示意图。
具体实施方式
基于Eth-Trunk机制,衍生出应用于Eth-Trunk级联系统的Eth-Trunk级联机制。示例性的,可参见图1,图1是Eth-Trunk级联转发示意图。请参照图1,该Eth-Trunk级联场景中,以太网设备具体为交换机(switch),交换机1~交换机5为同类型设备,交换机1为第一级交换机,交换机2与交换机3为第二级交换机,且交换机2与交换机3组成堆叠,交换机4和交换机5为第三级交换机,交换机1、交换机2、交换机3上分别预先存储哈希算法和该哈希算法采用的哈希因子。交换机1通过逻辑端口1连接交换机2和交换机3组成的堆叠设备,交换机2通过逻辑端口2连接交换机4,交换机3通过逻辑端口连接交换机5。逻辑端口1包括4个物理端口,分别为物理端口1、2、3、4;逻辑端口2和逻辑端口3各自包含3个物理端口。当需要通过逻辑端口1转发流量时,交换机1根据哈希因子和哈希算法,对流量进行哈希计算,将流量分配到逻辑端口1的物理端口1(记为子流量1)、物理端口2(记为子流量2)、物理端口3(记为子流量3)或物理端口4(记为子流量4)。之后,子流量1和子流量2到达交换机2,子流量3和子流量4到达交换机3。交换机2根据哈希因子和哈希算法,对子流量1和子流量2进行哈希计算,将子流量1和子流量2分配到逻辑端口2的各物理端口上;同理,交换机3根据哈希因子和哈希算法,对子流量3和子流量4进行哈希计算,将子流量3和子流量4分配到逻辑端口3的某个物理端口上。其中,哈希因子指参与哈希运算的参数,具体可以是接收的报文的一个或多个特征(统称为报文特征)。例如,哈希因子可以是源介质访问控制(media access control,MAC)地址,目的MAC地址、源因特网协议(Internet Protocol,IP)地址、目的IP地址、源端口号、目的端口号等。流量是由报文组成的,根据哈希因子和哈希算法,对流量进行哈希计算时,交换机从接收到的报文中提取出该哈希因子对应的值(即报文特征),根据哈希算法对该报文特征进行哈希计算,确定出该报文应该从逻辑端口的哪个物理端口发出。
在一个场景中,交换机1基于哈希算法,将到达逻辑端口1的报文分配到物理端口1、物理端口2、物理端口3或物理端口4,各物理端口已消耗带宽与该物理端口的总带宽的比值均为60%。其中,对于一个物理端口,该物理端口的已消耗带宽与该物理端口的总带宽的比值也称之为流量带宽。
由于交换机2、交换机3与交换机1使用的哈希算法相同,所以交换机2、交换机3与交换机1的哈希计算结果比较相近,导致逻辑端口2和逻辑端口3出现流量分配不均匀的现象。例如,逻辑端口2的物理端口1、物理端口2、物理端口3各自的流量带宽依次为20%、40%、95%。如果到达交换机2的流量比较大,则逻辑端口2的物理端口3容易出现超带宽丢包现象。
为避免出现流量分配不均匀的现象,可以手动修改参与哈希计算的哈希因子或修改哈希算法。例如,修改用于哈希运算的报文特征;再如,对哈希计算结果进行偏移等。然而,即使使用新的哈希因子,计算出的物理端口也是固定的,同样无法避免多级Eth-Trunk端口的场景下,除第一级Eth-Trunk端口外的其他Eth-Trunk端口的各成员端口的流量分担不均匀的现象。同时,修改哈希因子的过程中,需要经过多次尝试修改后才可能得出合适的哈希因子,该调整过程容易导致业务异常。
有鉴于此,本申请实施例提供一种流量均衡方法及装置,通过自动调整Eth-Trunk端口的各成员端口的流量大小,达到为Eth-Trunk端口的各成员端口均匀分担流量的目的。
在本申请实施例中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
图2为本申请实施例提供的一种流量均衡方法所适用的运行环境示意图。请参照图2,该运行环境包括控制器和多个交换机(图中仅示出了交换机1,交换机2和交换机3),控制器和每个交换机之间建立网络连接。其中,控制器也可以理解为采集器(collector)、服务器等;交换机可以是二层交换机或三层交换机,其支持Eth-Trunk机制。基于Eth-Trunk机制,交换机的多个物理端口被绑定为一个逻辑端口,交换机可以使用Eth-Trunk机制来提高网络带宽,实现负载分担与业务保护。
图2中,每个交换机仅示出了一个逻辑端口,每个逻辑端口的物理端口数量为3个或4个。然而,实际应用中,一个以太网设备的逻辑端口数量非常庞大,例如为1000个或者更多,每个逻辑端口的物理端口的数量也非常庞大。举例来说,一个以太网设备上可以插入16个板卡,每个板卡可以支持至少48个物理端口,因此,一个以太设备上至少存在48×16=768个物理端口,若2个以太网设备形成堆叠,则堆叠上至少存在1500个物理端口,根据该1500个物理端口可以得到1000个逻辑端口。倘若通过以太网设备自身对各逻辑端口的流量进行均衡,则以太网设备处理数据量庞大,严重浪费以太网设备的计算资源和内存。因此,本申请实施例中,通过部署控制器,由控制器承担权重因子等处理,从而对整网中以太网设备的各逻辑端口的流量进行均衡。
下面,基于图2所示架构,对本申请实施例所述的流量均衡方法进行详细说明。示例性的,可参见图3,图3是本申请实施例提供的一种流量均衡方法的流程图,该方法包括步骤101-104。
在步骤101中,以太网设备向控制器发送查询报文,所述查询报文携带所述以太网设备的逻辑端口的多个物理端口中每个物理端口的负载信息。
相应的,控制器接收该查询报文。
示例性的,以太网设备上预先部署代理(agent)模块,用于采集以太网设备的逻辑端口的每个物理端口的负载信息并向控制器上报,控制器接收该查询报文。采集负载信息的过程中,agent模块对逻辑端口的每个物理端口进行流量统计,确定出逻辑端口的每个物理端口已消耗的带宽占该物理端口总带宽的百分比。例如,继 续参照图2,逻辑端口2具有3个物理端口,该3个物理端口的最大带宽分别为200兆(M)、100M和50M。经agent模块统计发现:物理端口1当前已消耗带宽为40M,已消耗带宽占物理端口1的总带宽的20%;物理端口2当前已消耗带宽为40M,已消耗带宽占物理端口2的总带宽的40%;物理端口3当前已消耗带宽为47.5M,已消耗带宽占物理端口3的总带宽的95%。
在步骤102中,所述控制器根据所述每个物理端口的负载信息,确定所述每个物理端口的权重因子,每个物理端口的权重因子与该物理端口可分担所述逻辑端口的流量的大小正相关。
示例性的,对于一个逻辑端口,控制器根据该逻辑端口的每个物理端口的负载信息,确定每个物理端口的权重因子,每个物理端口的权重因子与该物理端口可分担所述逻辑端口的流量的大小正相关。也就是说,对于一个逻辑端口,该逻辑端口的各物理端口中,权重因子越大的物理端口,能够分担的流量越大,或者说,权重因子越大的物理端口,当前已消耗带宽占该物理端口的总带宽的比例越小。
继续沿用上述步骤101中的例子,假设负载信息为每个物理端口的已消耗的带宽与总带宽的百分比,则控制器根据该3个百分比,确定出物理端口1、物理端口2和物理端口3的权重因子分别为3.8、1.9、0.8,也就是说,物理端口1能够分担的流量最大,物理端口2能够分担的流量次之,物理端口3能够分担的流量最小。
在步骤103中,所述控制器向所述以太网设备发送应答报文,所述应答报文携带所述每个物理端口的权重因子。
相应的,以太网设备接收该携带每个物理端口的权重因子的应答报文。
在步骤104中,在通过所述逻辑端口发送报文时,所述以太网设备根据所述每个物理端口的权重因子,调整发往所述每个物理端口的流量。
示例性的,数据到达逻辑端口后,以太网设备根据该逻辑端口的每个物理端口的权重因子,调整发往各物理端口的流量。继续沿用上述步骤101和102中的例子,假设一条新的数据流的大小为6M,该数据流到达交换机2后,经过哈希算法确定出该新的数据流经由物理端口3转发,然而,以太网设备确定出物理端口3的权重因子为0.8,该权重因子比较小,倘若使用物理端口3发送该数据流,则会导致物理端口3出现超带宽丢包现象。因此,以太网设备选择权重因子最大的物理端口,即物理端口1转发该新的数据流。
本发明实施例提供的流量均衡方法中,以太网设备向控制器发送携带该以太网设备的逻辑端口的各物理端口的负载信息的查询报文,控制器接收该查询报文,并根据各物理端口的负载信息,确定逻辑端口的每个物理端口的权重因子,每个物理端口的权重因子与该物理端口可分担所述逻辑端口的流量的大小正相关,之后,控制器向以太网设备发送携带各物理端口的权重因子的应答报文,使得以太网设备根据各物理端口的权重因子,调整发往各物理端口的流量。该过程中,无需用户手动调整参与哈希计算的哈希因子,而是由以太网设备自动对各物理端口的流量进行调整,对当前传输的业务数据没有影响,不会引发业务异常。
下面,以逻辑端口为逻辑端口2,该逻辑端口2包含物理端口1、物理端口2和物理端口3为例,对上述实施例中,在通过该逻辑端口发送报文时,以太网设备如何调整 发往每个物理端口的流量进行详细说明。
示例性,每个以太网设备上预先设置哈希因子,哈希因子可以为报文的源MAC地址、目的MAC地址、源IP地址、目的IP地址、源端口号、目的端口号等信息中的一个或多个。以哈希因子为源MAC地址、哈希算法为将源MAC地址的后三位的二进制值映射到不同的物理端口。例如,000、001、010映射为物理端口1,011、100映射为物理端口2,101、111映射为物理端口3。在通过所述逻辑端口发送报文时,所述以太网设备根据预设的哈希因子和所述每个物理端口的权重因子,调整发送所述每个物理端口的流量。
一种可行的设计中,以太网设备上除了配置预设的哈希因子外,还配置最小权重因子,在需要通过逻辑端口2发送报文时,以太网设备根据预设的哈希因子,确定一个物理端口,然后,以太网设备确定该物理端口的权重因子是否大于最小权重因子,若该物理端口的权重因子大于最小权重因子,则通过该物理端口转发报文;若该物理端口的权重因子小于或等于最小权重因子,则通过其他物理端口转发报文。举例来说,物理端口1、物理端口2和物理端口3的权重因子分别为3.8、1.9、0.8,最小权重因子为1,若以太网设备根据哈希算法确定出的物理端口为物理端口3,则由于物理端口3的权重因子小于最小权重因子,则以太网设备从物理端口1和物理端口2中选择一个物理端口发送该报文。例如,从物理端口1和物理端口2中随机选择一个物理端口发送报文;再如,从物理端口1和物理端口2中选择一个权重因子最大的物理端口,即物理端口1发送报文。
另一种可行的设计中,以太网设备上除了配置预设的哈希因子外,还配置权重因子差值阈值,在需要通过逻辑端口2发送报文时,以太网设备根据预设的哈希因子,确定一个物理端口,然后,以太网设备确定该物理端口的权重因子与最大权重因子的差值是否大于或等于权重因子差值阈值,若差值大于或等于权重因子差值阈值,则通过其他物理端口发送报文;若差值小于权重因子差值阈值,则通过哈希算法计算出的物理端口发送报文。举例来说,物理端口1、物理端口2和物理端口3的权重因子分别为3.8、1.9、0.8,权重因子差值阈值为2,若以太网设备根据哈希算法确定出的物理端口为物理端口3,则由于物理端口3的权重因子与最大权重因子,即物理端口1的权重因子的差值等于3,该差值大于权重因子差值阈值2,则以太网设备从物理端口1和物理端口2中选择一个物理端口发送该报文。例如,从物理端口1和物理端口2中随机选择一个物理端口发送报文;再如,从物理端口1和物理端口2中选择一个权重因子最大的物理端口,即物理端口1发送报文。
本实施例中,在通过所述逻辑端口发送报文时,以太网设备根据预设的哈希因子确定用于转发该报文的物理端口,然后根据权重因子确定该报文是否可以通过根据哈希因子确定出的物理端口转发,无需修改哈希因子即可实现对发往各物理端口的流量进行调整,调整过程简单,不会对业务产生影响。
上述实施例中,对于某种确定的业务,该些业务的报文的源MAC地址、目的MAC地址等是固定的,因此,可以结合哈希因子和权重因子调整发往各物理端口的流量。然而,业务是不固定的,若业务发生变化,例如,以太网设备向控制器发送查询报文之前,有100个用户正在发起业务,而以太网设备接收到控制器发送的应答报文之 后,有1000个用户正在发起业务,此时,逻辑端口的各物理端口的流量发生变化,显然,应答报文携带的权重因子无法满足流量均衡的需要。为避免出现该种现象,可选地,本申请实施例中,所述在通过所述逻辑端口发送报文时,所述以太网设备根据所述每个物理端口的权重因子,调整发往所述每个物理端口的流量,包括:在通过所述逻辑端口发送报文时,所述以太网设备判断所述逻辑端口的所述每个物理端口的流量是否发生变化;若所述每个物理端口的流量未发生变化,则所述以太网设备根据所述每个物理端口的权重因子,调整发往所述每个物理端口的流量。
示例性的,假设以太网设备具有逻辑端口2,逻辑端口2包含物理端口1、物理端口2和物理端口3,以太网设备向控制器发送报文之前,有100个用户正在发起业务,物理端口1当前已消耗带宽为40M,已消耗带宽占物理端口1的总带宽的20%;物理端口2当前已消耗带宽为40M,已消耗带宽占物理端口2的总带宽的40%;物理端口3当前已消耗带宽为47.5M,已消耗带宽占物理端口3的总带宽的95%。控制器基于该查询报文,确定出物理端口1、物理端口2和物理端口3的权重因子分别为3.8、1.9和0.8,并将权重因子和以太网设备的逻辑端口标识等封装成应答报文发送给以太网设备。以太网设备接收到该应答报文后,确定出物理端口1、物理端口2和物理端口3的权重因子分别为3.8、1.9和0.8,同时,确定出各物理端口的负载信息指示的百分比的变化量未超出预设阈值,即每个物理端口已消耗带宽占该物理端口总带宽的百分比的变化量未超出预设阈值,例如,以太网关设备接收到应答报文后,物理端口1、物理端口2和物理端口3各自已消耗的带宽占总带宽的百分比依旧为20%、40%和95%,或者,虽然各物理端口已消耗的带宽与总带宽的百分比发生变化,但变化不大,例如,从原先的20%、40%和95%变化为25%、40%和95%,则以太网设备根据所述每个物理端口的权重因子,调整发往所述每个物理端口的流量。
本实施例中,每个物理端口的负载信息用于指示所述物理端口已消耗带宽占该物理端口总带宽的百分比,以太网设备根据权重因子调整发往各物理端口的流量之前,判断各物理端口中每个物理端口的负载信息指示的百分比的变化量是否超出预设阈值,只有在所有物理端口负载信息指示的百分比的变化量未超出预设阈值的情况下,才根据权重因子调整发往各物理端口的流量,避免物理端口已消耗带宽占该物理端口总带宽的百分比发生较大变化时,权重因子不适用的现象。另外,以太网设备也可以仅对部分物理端口的负载信息指示的百分比的变化量进行判断。例如,以太网设备发送的查询报文中,物理端口1、物理端口2和物理端口3的已消耗带宽占总带宽的比例依次为20%、40%和95%;以太网设备接收到应答报文后,发现物理端口1的权重因子较大,而物理端口3的权重因子最小,应该将物理端口3的流量增大。此时,以太网设备主要检测物理端口1已消耗带宽占物理端口1的总带宽的百分比是否发生较大变化,若百分比由20%变为30%,变化量为10%,小于预设阈值30%,则以太网设备根据应答报文携带的权重因子调整发往每个物理端口的流量。
可选地,上述实施例中,若一个或多个物理端口的负载信息指示的百分比的变化量超出预设阈值,则所述以太网设备判断所述每个物理端口的权重因子是否适用于流量发生变化的各所述物理端口;若所述每个物理端口的权重因子适用于流量发生变化的各所述物理端口,则所述以太网设备根据所述每个物理端口的权重因子,调整发往 所述每个物理端口的流量。
示例性的,若任意一个或多个物理端口已消耗的带宽与总带宽的百分比发生巨大变化,例如,从原先的20%、40%和95%变化为80%、40%和95%,则以太网设备重新采集各物理端口的负载信息,并向控制器发送,使得控制器重新计算权重因子。
下面,对本申请实施例中的查询报文和应答报文进行详细描述。
本申请实施例中,查询报文和应答报文可以为用户数据报协议(User Datagram Protocol,UDP)报文。示例性的,可参见图4,图4是本申请实施例提供的一种流量均衡方法中UDP报文的结构示意图。
请参照图4,UDP报文包含外部以太头(outer Ethernet header)、外部IP头(outer IP header)、外部UDP头(outer UDP header)和载荷(payload)。其中,外部以太头包含目的MAC地址(MAC DA)、源MAC地址(MAC SA)、标签、以太网类型(Ethernet type)等;外部IP头包含协议(protocol)、源IP地址(IP SA)、目的IP地址(IP DA);外部UDP头包含源端口(source port)号、目的端口(Dest port)号、UDP长度(UDP length)和UDP校验和(UDP Checksum)。
本申请实施例中,作为查询报文的UDP报文具有固定的UDP端口号,即外部UDP头中的源端口(source port)号是固定的,例如为6000。如此一来,控制器接收到UDP报文并解析该报文后,若该UDP报文的UDP端口号为6000,则确定该UDP报文为查询报文,需要根据该UDP报文携带的负载信息计算权重因子,若该UDP报文的UDP端口号不是6000,则无需计算权重因子。同理,以太网设备接收到UDP报文并解析该报文后,若该UDP报文的UDP端口号为6000,则确定该UDP报文为应答报文,需要根据该UDP报文携带的权重因子调整发往各物理端口的流量,若该UDP报文的UDP端口号不是6000,则无需对各物理端口的流量进行调整。
本申请实施例中,当UDP报文为查询报文时,该UDP报文的payload部分用于存放以太网设备的设备信息、逻辑端口的各物理端口的负载信息等;当UDP报文为应答报文时,该UDP报文的payload部分用于存放以太网设备的设备信息、逻辑端口的各物理端口的权重因子等。下面,对UDP报文的payload部分进行详细说明。示例性的,可参见表1。
表1为本申请实施例中UDP报文的payload部分的详细内容表。
表1
消息主要字段 查询报文 应答报文
IP IP1 IP1
逻辑端口标识 10 10
物理端口数量 3 3
物理端口1 20% 3.8
物理端口2 40% 1.9
物理端口3 95% 0.8
对表1的详细说明如下:
(a)、IP字段。
本申请实施例中,IP字段表示以太网设备的IP地址,即以太网设备所在的网络位 置,用于以太网设备与控制器之间互通消息,当以太网设备上部署agent模块时,该IP地址也可以称之为agent的IP地址。假设IP地址为10.1.1.1,控制器的IP地址为20.1.1.1,则以太网设备向控制器发送查询报文时,该查询报文的外部IP头中的目的IP(IP DA)为控制器的IP地址20.1.1.1;外部IP头中的源IP地址(IP SA)为以太网设备的IP地址10.1.1.1;该查询报文的payload部分的IP字段,即上述表1中的IP1为以太网设备的IP地址10.1.1.1。控制器收到该查询报文后,解析该查询报文中的payload部分,将以太网设备的相关信息保存在本地数据库中,在计算的时候提取该信息作为输入。
(b)逻辑端口标识字段,即Eth-trunk ID字段。
本申请实施例中,Eth-trunk ID字段包括逻辑端口的编号、逻辑端口的索引号(ifindex)数据等。以太网设备发送给控制器的查询报文的payload数据中,Eth-trunk ID字段可以唯一标识以太网设备上的一个逻辑端口。控制器计算好权重因子后,向以太网设备发送应答报文时携带该Eth-trunk ID,从而使得以太网设备根据Eth-trunk ID确定出对应的逻辑端口。
(c)物理端口数量字段,即Member_Num字段。
本申请实施例中,Member_Num字段表示逻辑端口包含的物理端口的数量。控制器接收到查询报文后,根据该Member_Num字段,确定物理端口的数量,然后根据该数量遍历所有的物理端口,根据各物理端口的负载信息计算每个物理端口的权重因子。
(d)物理端口字段,即Member_N,N为物理端口的编号。
本申请实施例中,若UDP报文为查询报文,则该字段的值表示该物理端口的负载信息,该负载信息是控制器计算权重因子的重要数据;若UDP报文为响应报文,则该字段的值表示该物理端口的权重因子,用于指示以太网设备根据各物理端口的权重因子对各物理端口的流量进行调整。
下面,对控制器接收到查询报文后,如何根据查询报文中各物理端口的负载信息确定各物理端口的权重因子进行详细说明。
一种可行的设计中,一个物理端口的负载信息用于指示所述物理端口已消耗带宽占所述物理端口总带宽的百分比,所述控制器根据所述每个物理端口的负载信息,确定所述每个物理端口的权重因子,包括:所述控制器根据所述每个物理端口的百分比,确定所述多个物理端口的百分比的最小公倍数;根据所述最小公倍数,确定所述每个物理端口的权重因子,一个每个物理端口的权重因子=最小公倍数/所述物理端口的百分比。
示例性的,将权重记为weight,百分比记为OutUti,最小公倍数记为X,逻辑端口包含物理端口1~物理端口N,则:(weight_1×OutUti_1):(weight_2×OutUti_2):(weight_3×OutUti_3):(weight_4×OutUti_4):……:(weight_N×OutUti_N)=1:1:1:1:……。因此,可以通过计算OutUti_1、OutUti_2、OutUti_3……OutUti_N的最小公倍数X,即可确定出weight_1、weight_2、weight_3……weight_N,weight_N=X/OutUti_N。例如,上述表1中,逻辑端口标识为2,物理端口数量字段为3,物理端口字段分别为20%、40%、95%,则说明:逻辑端口2包含物理端口1、物理 端口2和物理端口3,该三个物理端口当前已消耗带宽占各自总带宽的20%、40%、95%。控制器根据20%、40%、95%发现,最小公倍数X=0.76,则物理端口1、物理端口2和物理端口3的权重因子依次为3.8、1.9和0.8。控制器计算好权重因子后,向以太网设备发送应答报文,基于上述表1,该应答报文中的物理端口字段的值为权重因子。
下面,对本申请实施例中,以太网设备上报各物理端口的负载信息的时机进行详细描述。该时机也可以理解为控制器确定逻辑端口的各物理端口的权重因子的时机。
一种可行的实现方式中,所述以太网设备向所述控制器发送查询报文,包括:所述以太网设备判断所述多个物理端口中,是否存在至少一个物理端口的流量超出预设阈值;若存在至少一个物理端口的流量超出预设阈值,则所述以太网设备向所述控制器发送所述查询报文。
示例性的,以太网设备上部署代理(agent)模块,若某个逻辑端口的负载超过限制时,则以太网设备提醒用户可能出现流量分配不均匀的现象。代理模块感知到这个事件后,主动采集该逻辑端口的每个物理端口的负载信息,并上报给控制器。例如,以太网设备上预先存储一个带宽预警值,例如为90%。此时,若某个物理端口当前已消耗带宽占该物理端口总带宽的百分比超过90%,则向用户发出警告。代理模块感知该事件,采集逻辑端口的各物理端口的负载信息,并向控制上报。再如,以太网设备上预先存储一个带宽预警值差值,比如10%,代理模块采集各个逻辑端口的负载信息,若该些物理端口中,存在两个物理端口,该两个物理带宽已消耗带宽的百分比的差值大于10%,则代理模块向控制器发送各物理端口的负载信息,举例来说,代理模块发现:一个逻辑端口的物理端口1、物理端口2和物理端口3各自已消耗的带宽占总带宽的百分比为20%、40%和60%,物理端口1与物理端口2已消耗带宽的百分比的差值大于10%,物理端口2和物理端口3已消耗带宽的百分比的差值大于10%,物理端口1和物理端口3已消耗带宽的百分比的差值大于10%,则代理模块向控制器发送该逻辑端口的每个物理端口的负载信息。
本实施例中,以太网设备主动向控制器上报逻辑端口的各物理端口的负载信息,使得控制器根据负载信息确定权重因子,实现不修改哈希因子的前提下调整各物理端口的流量的目的。
另一种可行的设计中,以太网设备向所述控制器发送查询报文,包括:所述以太网设备多次向所述控制器发送查询报文。
示例性的,以太网设备可以定期或不定期的采集逻辑端口的各物理端口的负载信息,并批量上报给向控制器。
本示例中,以太网设备定期或不定期向控制器上报逻辑端口的各物理端口的负载信息,使得控制器根据负载信息确定权重因子,实现不修改哈希因子的前提下调整各物理端口的流量的目的。
下面,对本申请实施例中的控制器进行详细说明。示例性的,请参见图5,图5是本申请实施例提供的一种控制器的结构示意图。
请参照图5,控制器与网卡连接,控制器包括报文处理芯片、外部存储器、内部存储器、输入设备、输出设备、运算器、控制芯片以及输出设备等。以太网设备发送的 查询报文经由网卡到达报文处理芯片。报文处理芯片对查询报文进行解析,解析出以太网设备的IP地址、以太网设备的逻辑端口的各物理端口的负载信息等,报文处理芯片将解析出的信息上报给内部存储器。控制芯片周期性的触发运算器调用内部存储器中的信息,并对调用的信息进行计算,得出各物理端口的权重因子,将各物理端口的权重因子存储在内部存储器中。报文处理芯片调用内部存储器中的权重因子,将权重因子封装在应答报文中,通过网卡将该应答报文发送给以太网设备,由以太网设备根据各物理端口的权重因子,调整发往各物理端口的流量。
图6为本申请实施例提供的一种流量均衡装置的结构示意图。本实施例所涉及的流量均衡装置可以为以太网设备,也可以为应用于以太网设备的芯片。该流量均衡装置可以用于执行上述实施例中以太网设备的功能。如图6所示,该流量均衡装置100可以包括:
发送单元11,用于向控制器发送查询报文,所述查询报文携带所述以太网设备的逻辑端口的多个物理端口中每个物理端口的负载信息;
接收单元12,用于接收所述控制器发送的应答报文,所述应答报文携带所述每个物理端口的权重因子,每个物理端口的权重因子与所述物理端口可分担所述逻辑端口的流量的大小正相关;
处理单元13,用于在所述发送单元11通过所述逻辑端口发送报文时,根据所述每个物理端口的权重因子,调整发往所述每个物理端口的流量。
一种可行的设计中,所述处理单元13,还用于判断所述多个物理端口中,是否存在至少一个物理端口的流量超出预设阈值;
所述发送单元11,用于若所述处理单元13判断出存在至少一个物理端口的流量超出预设阈值,则向所述控制器发送所述查询报文。
一种可行的设计中,每个物理端口的负载信息用于指示所述物理端口已消耗带宽占所述物理端口总带宽的百分比,所述处理单元13,用于判断所述逻辑端口的所述每个物理端口的负载信息指示的百分比是否发生变化,若所述每个物理端口的负载信息指示的百分比的变化量未超出预设阈值,则根据所述每个物理端口的权重因子,调整发往所述每个物理端口的流量。
一种可行的设计中,所述处理单元13,还用于若所述每个物理端口的负载信息指示的百分比的变化量超出预设阈值,则所述以太网设备重新采集所述每个物理端口的负载信息;
所述发送单元11,还用于向所述控制器发送携带重新采集到的负载信息的查询报文。
一种可行的设计中,所述处理单元13,用于根据预设的哈希因子和所述每个物理端口的权重因子,调整发往所述每个物理端口的流量。
本申请实施例提供的流量均衡装置,可以执行上述实施例中以太网设备的动作,其实现原理和技术效果类似,在此不再赘述。
图7为本申请实施例提供的另一种流量均衡装置的结构示意图。本实施例所涉及的流量均衡装置可以为控制器,也可以为应用于控制器的芯片。该流量均衡装置可以用于执行上述实施例中控制器的功能。如图7所示,该流量均衡装置200可以包括:
接收单元21,用于接收以太网设备发送的查询报文,所述查询报文携带所述以太网设备的逻辑端口的多个物理端口中每个物理端口的负载信息;
处理单元22,用于根据所述每个物理端口的负载信息,确定所述每个物理端口的权重因子,每个物理端口的权重因子与所述物理端口可分担所述逻辑端口的流量的大小正相关;
发送单元23,用于向所述以太网设备发送应答报文,所述应答报文携带所述每个物理端口的权重因子。
一种可行的设计中,每个物理端口的负载信息用于指示所述物理端口已消耗带宽占所述物理端口总带宽的百分比,所述处理单元22,用于根据所述每个物理端口的百分比,确定所述多个物理端口的百分比的最小公倍数;根据所述最小公倍数,确定所述每个物理端口的权重因子,一个物理端口的权重因子=所述最小公倍数/所述物理端口的百分比。
本申请实施例提供的流量均衡装置,可以执行上述实施例中控制器的动作,其实现原理和技术效果类似,在此不再赘述。
需要说明的是,应理解以上接收单元实际实现时可以为接收器、发送单元实际实现时可以为发送器。而处理单元可以以软件通过处理元件调用的形式实现;也可以以硬件的形式实现。例如,处理单元可以为单独设立的处理元件,也可以集成在上述装置的某一个芯片中实现,此外,也可以以程序代码的形式存储于上述装置的存储器中,由上述装置的某一个处理元件调用并执行以上处理单元的功能。此外这些单元全部或部分可以集成在一起,也可以独立实现。这里所述的处理元件可以是一种集成电路,具有信号的处理能力。在实现过程中,上述方法的各步骤或以上各个单元可以通过处理器元件中的硬件的集成逻辑电路或者软件形式的指令完成。
例如,以上这些单元可以是被配置成实施以上方法的一个或多个集成电路,例如:一个或多个专用集成电路(application-specific integrated circuit,ASIC),或,一个或多个微处理器(digital signal processor,DSP),或,一个或者多个现场可编程门阵列(field-programmable gate array,FPGA)等。再如,当以上某个单元通过处理元件调度程序代码的形式实现时,该处理元件可以是通用处理器,例如中央处理器(central processing unit,CPU)或其它可以调用程序代码的处理器。再如,这些单元可以集成在一起,以片上系统(system-on-a-chip,SOC)的形式实现。
另外,本申请实施例还提供一种流量均衡系统,该系统包括一个或多个如上图6所示的流量均衡装置(可以称为第一流量均衡装置)以及如上图7所述的流量均衡装置(可以称为第二流量均衡装置)。由于图6所示装置设置在以太网设备上,图7所示装置设置在控制器上,在一个实施方式中,该流量均衡系统可以包括多个以太网设备以及一个控制器。
图8为本申请实施例提供的又一种流量均衡装置的结构示意图。如图8所示,该流量均衡装置300可以包括:处理器31(例如CPU)、存储器32、接收器33、发送器34;接收器33和发送器34均耦合至处理器31,处理器31控制接收器33的接收动作、处理器31控制发送器34的发送动作;存储器32可能包含高速随机存取存储器(random access memory,RAM),也可能还包括非易失性存储器(non-volatile memory,NVM),例如 至少一个磁盘存储器,存储器32中可以存储各种指令,以用于完成各种处理功能以及实现本申请的方法步骤。可选的,本申请涉及的流量均衡装置还可以包括:通信总线35。接收器33和发送器34可以集成在流量均衡装置的收发信机中,也可以为流量均衡装置上独立的收发天线。通信总线35用于实现元件之间的通信连接。
在本申请一个实施例中,上述存储器32用于存储计算机可执行程序代码,程序代码包括指令;当处理器31执行指令时,使流量均衡装置的处理器31执行上述方法实施例中以太网设备的处理动作,使接收器33执行上述实施例中以太网设备的接收动作,使发送器34执行上述方法实施例中以太网设备的发送动作,其实现原理和技术效果类似,在此不再赘述。
在本申请另一个实施例中,上述存储器32用于存储计算机可执行程序代码,程序代码包括指令;当处理器31执行指令时,使流量均衡装置的处理器31执行上述方法实施例中控制器的处理动作,使接收器33执行上述实施例中控制器的接收动作,使发送器34执行上述方法实施例中控制器的发送动作,其实现原理和技术效果类似,在此不再赘述。
本申请实施例还提供一种存储介质,所述存储介质中存储有计算机执行指令,所述计算机执行指令被处理器执行时用于实现如上所述的流量均衡方法。
本发明实施例还提供一种计算机程序产品,当所述计算机程序产品在以太网设备上运行时,使得以太网设备执行如上述的流量均衡方法。
本发明实施例还提供一种计算机程序产品,当所述计算机程序产品在控制器上运行时,使得控制器执行如上述的流量均衡方法。
上述各实施例可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行计算机程序指令时,全部或部分地产生按照本申请实施例的流程或功能。计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘SSD)等。
本文中的术语“多个”是指两个或两个以上。本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。此外,本文中描述方式“……中的至少一个”表示所列出的各项之一或其任意组合,例如,“A、B和C中的至少一个”,可以表示:单独存在A,单独存在B,单独存在C,同时存在A和B,同时存在B和C,同时存在A和C,同时存在A、B和C这六种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系;在公式中,字符“/”,表示前后关联对象是一种“相除”的关 系。
可以理解的是,在本申请的实施例中涉及的各种数字编号仅为描述方便进行的区分,并不用来限制本申请的实施例的范围。
可以理解的是,在本申请的实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请的实施例的实施过程构成任何限定。

Claims (17)

  1. 一种流量均衡方法,其特征在于,包括:
    以太网设备向控制器发送查询报文,所述查询报文携带所述以太网设备的逻辑端口的多个物理端口中每个物理端口的负载信息;
    所述以太网设备接收所述控制器发送的应答报文,所述应答报文携带所述每个物理端口的权重因子,每个物理端口的权重因子与所述物理端口可分担所述逻辑端口的流量的大小正相关;
    在通过所述逻辑端口发送报文时,所述以太网设备根据所述每个物理端口的权重因子,调整发往所述每个物理端口的流量。
  2. 根据权利要求1所述的方法,其特征在于,所述以太网设备向所述控制器发送查询报文,包括:
    所述以太网设备判断所述多个物理端口中,是否存在至少一个物理端口的流量超出预设阈值;
    若存在至少一个物理端口的流量超出预设阈值,则所述以太网设备向所述控制器发送所述查询报文。
  3. 根据权利要求1或2所述的方法,其特征在于,每个物理端口的负载信息用于指示所述物理端口已消耗带宽占所述物理端口总带宽的百分比,所述以太网设备根据所述每个物理端口的权重因子,调整发往所述每个物理端口的流量,包括:
    所述以太网设备判断所述逻辑端口的所述每个物理端口的负载信息指示的百分比是否发生变化;
    若所述每个物理端口的负载信息指示的百分比的变化量未超出预设阈值,则所述以太网设备根据所述每个物理端口的权重因子,调整发往所述每个物理端口的流量。
  4. 根据权利要求3所述的方法,其特征在于,还包括:
    若所述每个物理端口的负载信息指示的百分比的变化量超出预设阈值,则所述以太网设备重新采集所述每个物理端口的负载信息;
    所述以太网设备向所述控制器发送携带重新采集到的负载信息的查询报文。
  5. 根据权利要求3所述的方法,其特征在于,所述以太网设备根据所述每个物理端口的权重因子,调整发往所述每个物理端口的流量,包括:
    所述以太网设备根据预设的哈希因子和所述每个物理端口的权重因子,调整发往所述每个物理端口的流量。
  6. 一种流量均衡方法,其特征在于,包括:
    控制器接收以太网设备发送的查询报文,所述查询报文携带所述以太网设备的逻辑端口的多个物理端口中每个物理端口的负载信息;
    所述控制器根据所述每个物理端口的负载信息,确定所述每个物理端口的权重因子,每个物理端口的权重因子与所述物理端口可分担所述逻辑端口的流量的大小正相关;
    所述控制器向所述以太网设备发送应答报文,所述应答报文携带所述每个物理端口的权重因子。
  7. 根据权利要求6所述的方法,其特征在于,每个物理端口的负载信息用于指示所述物理端口已消耗带宽占所述物理端口总带宽的百分比,所述控制器根据所述每个物理端口的负载信息,确定所述每个物理端口的权重因子,包括:
    所述控制器根据所述每个物理端口的百分比,确定所述多个物理端口的百分比的最小公倍数;
    所述控制器根据所述最小公倍数,确定所述每个物理端口的权重因子,一个物理端口的权重因子=所述最小公倍数/所述物理端口的百分比。
  8. 一种负载均衡装置,其特征在于,包括:
    发送单元,用于向控制器发送查询报文,所述查询报文携带以太网设备的逻辑端口的多个物理端口中每个物理端口的负载信息;
    接收单元,用于接收所述控制器发送的应答报文,所述应答报文携带所述每个物理端口的权重因子,每个物理端口的权重因子与所述物理端口可分担所述逻辑端口的流量的大小正相关;
    处理单元,用于在所述发送单元通过所述逻辑端口发送报文时,根据所述每个物理端口的权重因子,调整发往所述每个物理端口的流量。
  9. 根据权利要求8所述的装置,其特征在于,
    所述处理单元,还用于判断所述多个物理端口中,是否存在至少一个物理端口的流量超出预设阈值;
    所述发送单元,用于若所述处理单元判断出存在至少一个物理端口的流量超出预设阈值,则向所述控制器发送所述查询报文。
  10. 根据权利要求8或9所述的装置,其特征在于,每个物理端口的负载信息用于指示所述物理端口已消耗带宽占所述物理端口总带宽的百分比,所述处理单元,用于判断所述逻辑端口的所述每个物理端口的负载信息指示的百分比是否发生变化,若所述每个物理端口的负载信息指示的百分比的变化量未超出预设阈值,则根据所述每个物理端口的权重因子,调整发往所述每个物理端口的流量。
  11. 根据权利要求10所述的装置,其特征在于,
    所述处理单元,还用于若所述每个物理端口的负载信息指示的百分比的变化量超出预设阈值,则所述以太网设备重新采集所述每个物理端口的负载信息;
    所述发送单元,还用于向所述控制器发送携带重新采集到的负载信息的查询报文。
  12. 根据权利要求10所述的装置,其特征在于,
    所述处理单元,用于根据预设的哈希因子和所述每个物理端口的权重因子,调整发往所述每个物理端口的流量。
  13. 一种流量均衡装置,其特征在于,包括:
    接收单元,用于接收以太网设备发送的查询报文,所述查询报文携带所述以太网设备的逻辑端口的多个物理端口中每个物理端口的负载信息;
    处理单元,用于根据所述每个物理端口的负载信息,确定所述每个物理端口的权重因子,每个物理端口的权重因子与所述物理端口可分担所述逻辑端口的流量的大小正相关;
    发送单元,用于向所述以太网设备发送应答报文,所述应答报文携带所述每个物理端口的权重因子。
  14. 根据权利要求13所述的装置,其特征在于,每个物理端口的负载信息用于指示所述物理端口已消耗带宽占所述物理端口总带宽的百分比;
    所述处理单元,用于根据所述每个物理端口的百分比,确定所述多个物理端口的百分比的最小公倍数;根据所述最小公倍数,确定所述每个物理端口的权重因子,一个物理端口的权重因子=所述最小公倍数/所述每个物理端口的百分比。
  15. 一种流量均衡系统,其特征在于,包括:第一流量均衡装置和第二流量均衡装置,所述第一流量均衡装置为权利要求8~12任一项所述的流量均衡装置,所述第二流量均衡装置为如权利要求13或14所述的流量均衡装置。
  16. 一种流量均衡装置,其特征在于,包括:处理器、存储器、接收器和发送器,所述接收器用于接收数据,所述发送器用于发送数据,所述存储器用于存储指令,所述处理器用于执行所述存储器中存储的指令,实现如权利要求1至5任一项所述的流量均衡方法。
  17. 一种流量均衡装置,其特征在于,包括:处理器、存储器、接收器和发送器,所述接收器用于接收数据,所述发送器用于发送数据,所述存储器用于存储指令,所述处理器用于执行所述存储器中存储的指令,实现如权利要求6或7所述的流量均衡方法。
PCT/CN2020/077311 2019-03-18 2020-02-29 流量均衡方法及装置 WO2020187006A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20773120.9A EP3890257B1 (en) 2019-03-18 2020-02-29 Flow balancing method and device
US17/385,161 US20210352018A1 (en) 2019-03-18 2021-07-26 Traffic Balancing Method and Apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910205012.0A CN111726299B (zh) 2019-03-18 2019-03-18 流量均衡方法及装置
CN201910205012.0 2019-03-18

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/385,161 Continuation US20210352018A1 (en) 2019-03-18 2021-07-26 Traffic Balancing Method and Apparatus

Publications (1)

Publication Number Publication Date
WO2020187006A1 true WO2020187006A1 (zh) 2020-09-24

Family

ID=72519534

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/077311 WO2020187006A1 (zh) 2019-03-18 2020-02-29 流量均衡方法及装置

Country Status (4)

Country Link
US (1) US20210352018A1 (zh)
EP (1) EP3890257B1 (zh)
CN (1) CN111726299B (zh)
WO (1) WO2020187006A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112702277B (zh) * 2020-12-15 2023-01-10 锐捷网络股份有限公司 一种负载均衡配置优化的方法和装置
CN113992544B (zh) * 2021-12-28 2022-04-29 北京中智润邦科技有限公司 端口流量分配的优化方法、装置
CN114866473B (zh) * 2022-02-25 2024-04-12 网络通信与安全紫金山实验室 一种转发装置及流量输出接口调节方法
CN114666276A (zh) * 2022-04-01 2022-06-24 阿里巴巴(中国)有限公司 一种发送报文的方法和装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101022456A (zh) * 2007-03-22 2007-08-22 华为技术有限公司 一种链路聚合方法、端口负载均衡方法及其装置
CN101110763A (zh) * 2007-06-22 2008-01-23 中兴通讯股份有限公司 一种快速加权选择端口的方法
CN102447619A (zh) * 2011-11-10 2012-05-09 华为技术有限公司 选择负载分担方式的方法、装置和系统
CN103401801A (zh) * 2013-08-07 2013-11-20 盛科网络(苏州)有限公司 动态负载均衡的实现方法及装置
US20170149877A1 (en) * 2014-03-08 2017-05-25 Google Inc. Weighted load balancing using scaled parallel hashing
CN109218216A (zh) * 2017-06-29 2019-01-15 中兴通讯股份有限公司 链路聚合流量分配方法、装置、设备及存储介质

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2277265C (en) * 1999-07-09 2003-04-15 Pmc-Sierra Inc. Link aggregation in ethernet frame switches
US8320399B2 (en) * 2010-02-26 2012-11-27 Net Optics, Inc. Add-on module and methods thereof
US8339951B2 (en) * 2010-07-28 2012-12-25 Hewlett-Packard Development Company, L.P. Method for configuration of a load balancing algorithm in a network device
CN102118319B (zh) * 2011-04-06 2013-09-18 杭州华三通信技术有限公司 流量负载均衡方法和装置
CN107071087B (zh) * 2011-08-17 2021-01-26 Nicira股份有限公司 逻辑l3路由
US9300586B2 (en) * 2013-03-15 2016-03-29 Aruba Networks, Inc. Apparatus, system and method for load balancing traffic to an access point across multiple physical ports
US10374956B1 (en) * 2015-09-25 2019-08-06 Amazon Technologies, Inc. Managing a hierarchical network
US10003538B2 (en) * 2016-06-08 2018-06-19 Futurewei Technologies, Inc. Proactive load balancing based on fractal analysis
CN107579923B (zh) * 2017-09-18 2019-12-10 迈普通信技术股份有限公司 一种sdn网络的链路负载均衡方法和sdn控制器
CN109861925B (zh) * 2017-11-30 2021-12-21 华为技术有限公司 数据传输方法、相关装置及网络
US11108704B2 (en) * 2018-12-04 2021-08-31 Nvidia Corp. Use of stashing buffers to improve the efficiency of crossbar switches

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101022456A (zh) * 2007-03-22 2007-08-22 华为技术有限公司 一种链路聚合方法、端口负载均衡方法及其装置
CN101110763A (zh) * 2007-06-22 2008-01-23 中兴通讯股份有限公司 一种快速加权选择端口的方法
CN102447619A (zh) * 2011-11-10 2012-05-09 华为技术有限公司 选择负载分担方式的方法、装置和系统
CN103401801A (zh) * 2013-08-07 2013-11-20 盛科网络(苏州)有限公司 动态负载均衡的实现方法及装置
US20170149877A1 (en) * 2014-03-08 2017-05-25 Google Inc. Weighted load balancing using scaled parallel hashing
CN109218216A (zh) * 2017-06-29 2019-01-15 中兴通讯股份有限公司 链路聚合流量分配方法、装置、设备及存储介质

Also Published As

Publication number Publication date
EP3890257A1 (en) 2021-10-06
CN111726299A (zh) 2020-09-29
EP3890257B1 (en) 2023-08-30
CN111726299B (zh) 2023-05-09
EP3890257A4 (en) 2022-02-23
US20210352018A1 (en) 2021-11-11

Similar Documents

Publication Publication Date Title
WO2020187006A1 (zh) 流量均衡方法及装置
US11005729B2 (en) Satisfying service level agreement metrics for unknown applications
US10735323B2 (en) Service traffic allocation method and apparatus
US20220329525A1 (en) Load balancing method and device
US9007906B2 (en) System and method for link aggregation group hashing using flow control information
US9602428B2 (en) Method and apparatus for locality sensitive hash-based load balancing
WO2017025021A1 (zh) 一种处理流表的方法及装置
CN111682952A (zh) 针对体验质量度量的按需探测
Wang et al. Implementation of multipath network virtualization with SDN and NFV
US9350631B2 (en) Identifying flows causing undesirable network events
WO2022127475A1 (zh) 数据传输方法、装置、电子设备及存储介质
Zhang et al. A stable matching based elephant flow scheduling algorithm in data center networks
WO2012109910A1 (zh) 链路聚合选路方法及装置
US20240179095A1 (en) Method and apparatus for determining hash algorithm information for load balancing, and storage medium
CN113612698A (zh) 一种数据包发送方法及装置
WO2023116580A1 (zh) 路径切换方法、装置、网络设备、以及网络系统
CN116668374A (zh) 通信方法及装置
Avci et al. Congestion aware priority flow control in data center networks
Kaymak et al. Per-packet load balancing in data center networks
WO2021012902A1 (zh) 一种处理网络拥塞的方法以及相关装置
Zhang et al. Congestion-aware adaptive forwarding in datacenter networks
US11240164B2 (en) Method for obtaining path information of data packet and device
Wu et al. Ravenflow: Congestion-aware load balancing in 5g base station network
US11012347B2 (en) Communication apparatus, communication control method, and communication system for multilink communication simultaneously using a plurality of communication paths
Mei et al. Psa: An architecture for proactively securing protocol-oblivious sdn networks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20773120

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020773120

Country of ref document: EP

Effective date: 20210702

NENP Non-entry into the national phase

Ref country code: DE