WO2019084805A1 - Method and apparatus for distributing packets - Google Patents

Method and apparatus for distributing packets

Info

Publication number
WO2019084805A1
Authority
WO
WIPO (PCT)
Prior art keywords
port
physical
packet
physical port
weight
Prior art date
Application number
PCT/CN2017/108685
Other languages
English (en)
French (fr)
Inventor
胡海涛
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to CN201780093360.2A priority Critical patent/CN110945844A/zh
Priority to PCT/CN2017/108685 priority patent/WO2019084805A1/zh
Publication of WO2019084805A1 publication Critical patent/WO2019084805A1/zh

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/24 Multipath
    • H04L 45/245 Link aggregation, e.g. trunking
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/50 Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Definitions

  • the embodiments of the present invention relate to the field of communications, and in particular, to a method and an apparatus for distributing a message.
  • Link Aggregation (LAG) technology is defined in the Ethernet technology system: multiple independent physical links are aggregated into one logical link to increase the bandwidth between network elements (NEs) and to improve the reliability and resilience of the connections between NEs.
  • In the Institute of Electrical and Electronics Engineers (IEEE) 802.1AX-2008 standard and earlier standards, the link rates of the physical links corresponding to the physical ports in the same Ethernet port aggregation group are required to be the same.
  • When a network element uses link aggregation technology to send packets, it needs to distribute the packets, based on a routing algorithm, onto the physical links corresponding to the physical ports in the Ethernet port aggregation group, so that the service traffic of the aggregation group is evenly distributed over the physical links corresponding to the physical ports, achieving load sharing, while ensuring that packets belonging to the same session are not delivered out of order.
  • The routing algorithm may be a packet-by-packet distribution algorithm or a routing algorithm based on a hash operation.
  • However, the IEEE 802.1AX-2014 standard removes the requirement that the link rates of the physical links corresponding to the physical ports in the same Ethernet port aggregation group be the same. Consequently, the link rates of the physical links corresponding to the physical ports in an Ethernet port aggregation group may differ. In this case, if packets are still distributed according to the routing algorithms provided in the prior art, a low-rate physical link may already be fully loaded while a high-rate physical link still has a large amount of remaining bandwidth.
  • The embodiments of the present application provide a method and an apparatus for distributing a packet, which solve the problem that, when the link rates of the physical links corresponding to the physical ports in an Ethernet port aggregation group differ, distributing packets according to the routing algorithms provided in the prior art causes low-rate physical links to be fully loaded while high-rate physical links still have a large amount of remaining bandwidth.
  • A first aspect of the embodiments of the present application provides a method for distributing a packet, where the method is applied to a network element that uses link aggregation technology. The method includes: first, obtaining a packet identifier of a packet to be sent; then, determining, according to the packet identifier and the port weights of the physical ports in the Ethernet port aggregation group, the physical port used to send the packet; and then sending the packet through the determined physical port.
  • The Ethernet port aggregation group contains N physical ports, and the physical port that sends the packet belongs to the aggregation group. The port weight of each physical port in the aggregation group is related to the link rate of the physical link corresponding to that physical port; that is, a physical port whose corresponding physical link has a high link rate has a large port weight, and a physical port whose corresponding physical link has a low link rate has a small port weight. N is an integer greater than or equal to 2.
  • With the method for distributing a packet provided in the embodiments of the present application, when the link rates of the physical links corresponding to the physical ports in an Ethernet port aggregation group differ, the port weight of each physical port is configured according to the link rate of its corresponding physical link, and the physical port used to send a packet is determined according to the packet identifier and the port weights of the physical ports in the aggregation group. The number of packets sent on the physical link corresponding to each physical port is therefore proportional to the port weight of that physical port: a high-rate physical link carries more packets and a low-rate physical link carries fewer packets. The service traffic of the Ethernet port aggregation group is thus allocated to the corresponding physical links according to the port weights of the physical ports in the aggregation group, achieving load sharing.
  • In a possible implementation of the first aspect, the port weight of each physical port in the Ethernet port aggregation group being related to the link rate of the physical link corresponding to that physical port includes: the port weight of each physical port is proportional to the link rate of the physical link corresponding to that physical port.
  • It should be noted that the port weight being proportional to the link rate means that the system automatically pre-configures the port weight of each physical port according to the link rate of the physical link corresponding to that physical port in the aggregation group. Manual configuration is also possible: a system administrator may configure the port weight of a physical port according to the link rate of its corresponding physical link. Even when the link rates of the physical links corresponding to the physical ports in the aggregation group are the same, the routing algorithm may in some cases fail to balance the traffic; for example, the traffic of certain sessions may be so large that the traffic on the physical links corresponding to the physical ports becomes uneven. In that case, the port weights of the physical ports in the aggregation group can be adjusted manually so that the proportion of traffic carried by each member physical port of the aggregation group changes.
  • In order to allocate the service traffic of the Ethernet port aggregation group to the corresponding physical links according to the port weights of the physical ports and achieve load sharing, the network element determines, according to the packet identifier and the port weights of the physical ports in the aggregation group, the physical port used to send the packet. This may specifically include the following implementations.
  • With reference to the first aspect, in a possible implementation, the network element determining, according to the packet identifier and the port weights of the physical ports in the Ethernet port aggregation group, the physical port used to send the packet includes: the network element calculates a hash index according to the packet identifier, where the packet identifier is a packet feature, a packet sequence number, or a session identifier (ID) of the packet, and the packet feature includes at least one of Layer 2 information, Layer 3 information, and Layer 4 information; the network element queries a weight segmentation mapping table according to the hash index to obtain the physical port used to send the packet, where the weight segmentation mapping table includes a mapping between physical ports and hash index ranges, each physical port corresponds to one hash index range, and the hash index range is proportional to the port weight of the physical port.
  • With reference to the foregoing possible implementation, in another possible implementation, the network element querying the weight segmentation mapping table according to the hash index to obtain the physical port used to send the packet includes: the network element determines that the hash index belongs to a first hash index range among multiple hash index ranges, where the first hash index range corresponds to a first physical port; and the network element uses the first physical port as the physical port for sending the packet.
  • With reference to the first aspect, in a possible implementation, the network element determining, according to the packet identifier and the port weights of the physical ports in the Ethernet port aggregation group, the physical port used to send the packet includes: the network element calculates a hash index according to the packet identifier, where the packet identifier is a packet feature, a packet sequence number, or a session identifier of the packet, and the packet feature includes at least one of Layer 2 information, Layer 3 information, and Layer 4 information; the network element determines, according to the hash index and a weight link mapping table, the physical port used to send the packet, where the weight link mapping table includes a mapping between physical ports and weight units, and each physical port corresponds to a number of weight units proportional to the port weight of that physical port.
  • With reference to the foregoing possible implementation, in another possible implementation, the network element determining, according to the hash index and the weight link mapping table, the physical port used to send the packet includes: the network element takes the hash index modulo the total number of weight units to obtain a remainder M, where M is a positive integer and the total number of weight units is the sum of the weight units corresponding to all physical ports in the Ethernet port aggregation group; the network element queries the weight link mapping table according to M to obtain the M-th weight unit; the network element determines that the M-th weight unit corresponds to a first physical port; and the network element uses the first physical port as the physical port for sending the packet.
  • A second aspect of the embodiments of the present application provides a network element that distributes packets using link aggregation technology. The network element includes: a processing unit, configured to obtain a packet identifier of a packet to be sent, and further configured to determine, according to the packet identifier and the port weights of the physical ports in the Ethernet port aggregation group, the physical port used to send the packet, where the Ethernet port aggregation group contains N physical ports including the physical port used to send the packet, the port weight of each physical port in the aggregation group is related to the link rate of the physical link corresponding to that physical port, and N is an integer greater than or equal to 2; and a sending unit, configured to send the packet through the determined physical port.
  • The foregoing functional modules of the second aspect may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
  • For example, a communication interface performs the functions of the receiving unit and the sending unit, a processor performs the functions of the processing unit, and a memory stores the program instructions used by the processor to process the method for distributing a packet in the embodiments of the present application. The processor, the communication interface, and the memory are connected by a bus and communicate with each other. For details, refer to the functions and behavior of the network element in the method for distributing a packet provided in the first aspect.
  • A third aspect of the embodiments of the present application provides a network element, including at least one processor, a memory, a communication interface, and a communication bus. The at least one processor is connected to the memory and the communication interface through the communication bus, and the memory is used to store execution instructions of the network element. When the processor runs, the processor executes the execution instructions stored in the memory, so that the network element performs the method for distributing a packet according to the first aspect or any possible implementation of the first aspect.
  • a fourth aspect of the embodiments of the present application provides a computer storage medium for storing computer software instructions for use in the network element, the computer software instructions comprising a program designed to execute the method for distributing the message.
  • a fifth aspect of the embodiments of the present application provides a computer program product comprising instructions, which, when run on a network element, enable the network element to perform the method of any of the above aspects.
  • the name of the network element does not limit the device itself. In actual implementation, these devices may appear under other names. As long as the functions of the respective devices are similar to the embodiments of the present application, they are within the scope of the claims and their equivalents.
  • FIG. 1 is a schematic diagram of sending packets by using a packet-by-packet distribution algorithm in the prior art;
  • FIG. 2 is a schematic diagram of sending packets by using a hash-based routing algorithm in the prior art;
  • FIG. 3 is a simplified schematic diagram of a network system to which an embodiment of the present application can be applied;
  • FIG. 4 is a schematic structural diagram of a network element according to an embodiment of the present application;
  • FIG. 5 is a flowchart of a method for distributing a packet according to an embodiment of the present application;
  • FIG. 6 is a flowchart of another method for distributing a packet according to an embodiment of the present application;
  • FIG. 7 is a schematic diagram of a process for distributing a packet according to an embodiment of the present application;
  • FIG. 8 is a flowchart of still another method for distributing a packet according to an embodiment of the present application;
  • FIG. 9 is a schematic diagram of another process for distributing a packet according to an embodiment of the present application;
  • FIG. 10 is a schematic structural diagram of another network element according to an embodiment of the present application;
  • FIG. 11 is a schematic structural diagram of still another network element according to an embodiment of the present application.
  • Link Aggregation (LAG) technology, which may also be called link trunking or link bundling (Link Bundling), is defined in the Ethernet technology system: multiple independent physical links are aggregated into one logical link, and the physical ports corresponding to the multiple independent physical links form an Ethernet port aggregation group. This can also be understood as all the physical links forming an Ethernet link aggregation group, with every physical link being a member of this Ethernet link aggregation group (the logical link). The link bandwidth of the logical link is equal to the sum of the link bandwidths of all the physical links that are aggregated. If one physical link fails, the other physical links can still be used; only the link bandwidth of the logical link is reduced. The logical link is not configured with a separate physical port; only the logical link itself is configured.
  • The earliest IEEE 802.3ad-2000 standard, as well as the subsequent IEEE 802.3-2002 and IEEE 802.1AX-2008 standards, requires the link rates of the physical links corresponding to the physical ports in the same Ethernet port aggregation group to be the same.
  • For example, during data network construction, a network operator, in order to protect existing investment and make full use of existing equipment, may in some scenarios need to expand the link bandwidth of the physical links between existing network elements. A new board may be added to an existing device, and the physical ports corresponding to the physical links provided on the new board may be configured, together with the physical ports corresponding to the original physical links, as an Ethernet port aggregation group in order to provide higher link bandwidth.
  • However, the link rate of the physical link corresponding to a physical port on the newly added board may be higher than the link rate of the physical link corresponding to an original physical port on the network element; for example, the link rate of the physical link corresponding to the original physical port is 1 Gbit/s, while the link rate of the physical link corresponding to the new physical port is 10 Gbit/s. If the requirement that the link rates of the physical links corresponding to the physical ports in the same Ethernet port aggregation group be the same is followed, the physical ports corresponding to the newly added 10 Gbit/s physical links cannot join the existing Ethernet port aggregation group formed by the physical ports corresponding to the 1 Gbit/s physical links. Typically, the network operator may configure a separate new Ethernet port aggregation group for the physical ports corresponding to the newly added 10 Gbit/s physical links and then migrate the services from the Ethernet port aggregation group formed by the physical ports corresponding to the previous 1 Gbit/s physical links to the Ethernet port aggregation group formed by the physical ports corresponding to the 10 Gbit/s physical links.
  • When a network element uses link aggregation technology to send packets, it needs to distribute the packets, according to a routing algorithm, onto the physical links corresponding to the physical ports in the Ethernet port aggregation group, so that the service traffic of the aggregation group is evenly distributed over the physical links corresponding to the physical ports, achieving load sharing.
  • In addition, since the earliest IEEE 802.3ad-2000 standard, the concept of a session has been defined for packets, that is, a set of frames transmitted from one end to the other, in which all frames form an ordered sequence and in which the communicating parties require that ordering be maintained among the exchanged sets of frames. Therefore, when a network element uses link aggregation to send packets, it must also ensure that the packet sequence within a session is not delivered out of order.
  • For example, the packet-by-packet distribution algorithm, which appeared before the earliest IEEE 802.3ad standard was officially released, evenly distributes the packets to be sent, packet by packet, onto the physical links corresponding to the physical ports in the Ethernet port aggregation group, thereby keeping the traffic on the physical links corresponding to the physical ports as uniform as possible.
  • FIG. 1 is a schematic diagram of sending packets by using the packet-by-packet distribution algorithm, in which physical port 0 to physical port N form an Ethernet port aggregation group, physical port 0 corresponds to physical link 0, and physical port N corresponds to physical link N. After receiving packets, the network element sends them one by one in order from physical port 0 to physical port N.
  • Besides the packet-by-packet distribution algorithm, the technique most commonly used in the prior art is a routing algorithm based on a hash operation. The packet features extracted from a packet are hashed, a hash index is calculated for each packet, the physical link corresponding to the physical port mapped from the hash index is looked up, and the packet is sent on that link. After packets are distributed by the hash-based routing algorithm, packets with different combinations of packet features belong to different sessions, and packets of the same session are mapped to the same physical link, so that the traffic on the physical links corresponding to the physical ports is kept as uniform as possible.
  • FIG. 2 is a schematic diagram of sending packets by using the hash-based routing algorithm provided in the prior art, where physical port 0 to physical port N form an Ethernet port aggregation group, physical port 0 corresponds to physical link 0, and physical port N corresponds to physical link N. After receiving a packet, the network element extracts the packet features, performs a hash operation on them to obtain the hash index of the packet, and then performs link mapping to obtain the physical link on which the packet is sent.
  • For example, a hash algorithm that XORs the low-order 3 bits produces hash indexes in the range 0 to 7; the hash index is then taken modulo the number of physical links corresponding to all the physical ports in the Ethernet port aggregation group, and the resulting remainder is used to index a physical port in the aggregation group. Since each physical port corresponds to one physical link, once the remainder indexes a physical port, the packet can be sent on the physical link corresponding to that physical port.
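  • For illustration only, the following is a minimal sketch (not taken from the patent; the function and variable names are assumptions) of the prior-art hash-based selection described above: a low-order 3-bit XOR over the packet features yields a hash index, and the index modulo the number of member links selects the port.

```python
# Hypothetical sketch of the prior-art hash-based routing algorithm:
# hash the packet features, then take the result modulo the number of
# member links. All names below are illustrative, not from the patent.

def low3_xor_hash(feature_bytes: bytes) -> int:
    """Fold the packet-feature bytes together and keep the low 3 bits (0..7)."""
    acc = 0
    for b in feature_bytes:
        acc ^= b
    return acc & 0x7


def select_port_equal_rate(feature_bytes: bytes, num_links: int) -> int:
    """Prior-art selection: hash index modulo the number of physical links."""
    hash_index = low3_xor_hash(feature_bytes)
    return hash_index % num_links


# Example: 4 equal-rate member links; the feature here is src MAC + dst MAC.
port = select_port_equal_rate(bytes.fromhex("001122334455" "66778899aabb"), 4)
```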
  • When the link rates of the physical links corresponding to the physical ports in an Ethernet port aggregation group are the same, the two routing algorithms above can, statistically, keep the traffic on the physical links corresponding to the physical ports basically uniform once the network element has received and distributed a sufficiently large number of packets. However, the IEEE 802.1AX-2014 standard removes the requirement that the link rates of the physical links corresponding to the physical ports in the same Ethernet port aggregation group be the same.
  • To address this, the embodiments of the present application provide a method for distributing a packet. The basic principle is as follows: first, the network element obtains the packet identifier of the packet to be sent; then, it determines, according to the packet identifier and the port weights of the physical ports in the Ethernet port aggregation group, the physical port used to send the packet; and then it sends the packet through the determined physical port. The Ethernet port aggregation group contains N physical ports, the physical port that sends the packet belongs to the aggregation group, and the port weight of each physical port in the aggregation group is related to the link rate of the physical link corresponding to that physical port; that is, a physical port whose corresponding physical link has a high link rate has a large port weight, and a physical port whose corresponding physical link has a low link rate has a small port weight. N is an integer greater than or equal to 2.
  • With this method, when the link rates of the physical links corresponding to the physical ports in an Ethernet port aggregation group differ, the port weight of each physical port is configured according to the link rate of its corresponding physical link, and the physical port used to send a packet is determined according to the packet identifier and the port weights. The number of packets sent on the physical link corresponding to each physical port is therefore proportional to the port weight of that physical port: a high-rate physical link carries more packets and a low-rate physical link carries fewer packets. The service traffic of the Ethernet port aggregation group is thus allocated to the corresponding physical links according to the port weights of the physical ports in the aggregation group, achieving load sharing.
  • FIG. 3 shows a simplified schematic diagram of a network system to which embodiments of the present application may be applied.
  • the network system may include: a first network element 301, a second network element 302, a third network element 303, a fourth network element 304, and a fifth network element 305.
  • the network element in the network system may be a network device such as a switch or a router, and is used to forward packets.
  • the first network element 301, the third network element 303, the fourth network element 304, and the fifth network element 305 may also be connected to the terminal device 306.
  • FIG. 4 is a schematic diagram of a composition of a network element according to an embodiment of the present disclosure.
  • the network element may include at least one processor 41, a memory 42, a communication interface 43, and a communication bus 44.
  • the processor 41 is a control center of the network element, and may be a processor or a collective name of a plurality of processing elements.
  • In a specific implementation, in an embodiment, the processor 41 may include one or more CPUs, for example CPU0 and CPU1 shown in FIG. 4.
  • The processor 41 may also be an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application, for example one or more digital signal processors (DSPs) or one or more field-programmable gate arrays (FPGAs).
  • Taking the case in which the processor 41 is one or more CPUs as an example, the processor 41 can perform various functions of the network element by running or executing a software program stored in the memory 42 of the network element and calling data stored in the memory 42.
  • In a specific implementation, in an embodiment, the network element may include multiple processors, for example the processor 41 and the processor 45 shown in FIG. 4. Each of these processors may be a single-core processor (single-CPU) or a multi-core processor (multi-CPU).
  • a processor herein may refer to one or more devices, circuits, and/or processing cores for processing data, such as computer program instructions.
  • the processor is configured to obtain the packet identifier of the packet to be sent, and determine the physical port of the packet to be sent according to the packet identifier and the port weight of the physical port in the Ethernet port aggregation group.
  • The memory 42 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk storage medium or another magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • Memory 42 may be present independently and coupled to processor 41 via communication bus 44. The memory 42 can also be integrated with the processor 41.
  • the memory 42 is used to store a software program that executes the solution of the present application, and is controlled by the processor 41 for execution.
  • the communication interface 43 is configured to communicate with other devices or communication networks, such as Ethernet, Radio Access Network (RAN), Wireless Local Area Networks (WLAN), and the like.
  • the communication interface 43 may include a receiving unit that implements a receiving function, and a transmitting unit that implements a transmitting function.
  • the communication interface is mainly used to send a packet through the determined physical port.
  • the communication bus 44 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus.
  • the bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is shown in Figure 4, but it does not mean that there is only one bus or one type of bus.
  • The device structure shown in FIG. 4 does not constitute a limitation on the network element, which may include more or fewer components than shown, or combine some components, or have a different component arrangement.
  • FIG. 5 is a flowchart of a method for distributing a packet according to an embodiment of the present disclosure. As shown in FIG. 5, the method may include:
  • S501. The network element obtains the packet identifier of the packet to be sent.
  • The network element is connected to a terminal device or to another network element, and receives a packet from the physical port connected to that terminal device or other network element; this physical port is a specific physical port in the forwarding path, not just any physical port on the network element. After receiving the packet, the network element determines how to forward it; for the forwarding method, refer to the prior art, which is not described again here. After determining how to forward the packet and before sending it, the network element obtains the packet identifier of the packet to be sent. The packet identifier may be a packet feature of the packet, a packet sequence number, or the session identifier of the packet. Since the packet itself carries the packet feature, the packet sequence number, or the session identifier, the network element can obtain them directly from the packet.
  • The packet feature includes at least one of Layer 2 information, Layer 3 information, and Layer 4 information. The Layer 2 information includes the source media access control (MAC) address, the destination MAC address, the Ethernet type, and the virtual local area network identifier (VLAN ID). The Layer 3 information includes the source Internet Protocol (IP) address and the destination IP address. The Layer 4 information includes the source Transmission Control Protocol (TCP) port number, the destination TCP port number, the source User Datagram Protocol (UDP) port number, and the destination UDP port number.
  • The packet feature may also be Multiprotocol Label Switching (MPLS) label information, or any combination of the above information. These methods of extracting packet features are in fact ways of distinguishing packets into different sessions: packets with the same packet features are considered to belong to the same session.
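  • As a minimal illustration (the names below are assumptions, not taken from the patent), the packet features above can be collected into a flow key, so that packets sharing the same key are treated as one session and can be fed into the hash computation described later.

```python
# Hypothetical flow-key sketch: packets with identical Layer 2/3/4 fields
# are treated as belonging to the same session. Field names are illustrative.
from typing import NamedTuple


class FlowKey(NamedTuple):
    src_mac: str
    dst_mac: str
    vlan_id: int
    src_ip: str
    dst_ip: str
    src_port: int   # TCP or UDP source port
    dst_port: int   # TCP or UDP destination port


def flow_key_bytes(key: FlowKey) -> bytes:
    """Serialize the flow key so it can be fed to a hash function."""
    return repr(key).encode()
```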
  • Generally, a session consists of multiple packets, and the network element needs to forward them in order. For example, the packet sequence number may indicate the order of a packet within a session. The packet sequence number may also indicate the order in which packets arrive at the port that received them, or it may be a number assigned in sending order when the switching fabric forwards packets to the sending port (a physical port or a logical port).
  • The session identifier indicates the session to which a packet belongs. Packets of the same session have the same session identifier, and packets of different sessions have different session identifiers. The range of session identifiers defined by IEEE 802.1AX-2014 is 0 to 4095. Since the session to which each packet belongs is specific, the session identifier of each packet is also specific.
  • S502. The network element determines, according to the packet identifier and the port weights of the physical ports in the Ethernet port aggregation group, the physical port used to send the packet.
  • The Ethernet port aggregation group contains N physical ports, and each physical port needs to be assigned a port weight according to the link rate of the physical link corresponding to that physical port. For example, the port weight of a physical port in the aggregation group is proportional to the link rate of its corresponding physical link: when the link rate of the corresponding physical link is high, the port weight of the corresponding physical port is large, and when the link rate is low, the port weight is small. N is an integer greater than or equal to 2, and the aggregation group includes the physical port used to send the packet.
  • It should be noted that the port weight being proportional to the link rate means that the system automatically configures the port weight of each physical port according to the link rate of the physical link corresponding to that physical port in the aggregation group. Manual configuration is also possible: a system administrator may configure the port weight of a physical port according to the link rate of its corresponding physical link. For example, after the network element has distributed a large number of packets, the method for distributing packets according to the embodiments of the present application can, statistically, keep the traffic on the physical links corresponding to the physical ports in the aggregation group basically uniform; however, when the network element has distributed only a small number of packets, this may not be guaranteed. In that case, the port weights of the physical ports can be configured manually, that is, the system administrator configures the port weight of each physical port according to the link rate of the physical link corresponding to that physical port in the aggregation group.
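  • A minimal sketch of the automatic configuration described above, assuming (as an illustration only, not a requirement of the patent) that port weights are made proportional to link rates by dividing each rate by their greatest common divisor:

```python
# Hypothetical sketch: derive port weights proportional to link rates.
# Rates are in Mbit/s; the GCD normalization is an assumption for illustration.
from math import gcd
from functools import reduce


def port_weights_from_rates(link_rates_mbps: dict) -> dict:
    """Return a port -> weight map with weights proportional to link rates."""
    common = reduce(gcd, link_rates_mbps.values())
    return {port: rate // common for port, rate in link_rates_mbps.items()}


# Example: a 1 Gbit/s link and a 10 Gbit/s link in one aggregation group.
weights = port_weights_from_rates({"port0": 1000, "port1": 10000})
# -> {"port0": 1, "port1": 10}; an administrator may still override these manually.
```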
  • In a possible implementation, the network element presets a weight segmentation mapping table. The weight segmentation mapping table includes a mapping between physical ports and hash index ranges: each physical port in the Ethernet port aggregation group corresponds to one hash index range, and the size of the hash index range is proportional to the port weight of the physical port. A physical port with a large port weight corresponds to a large hash index range, and a physical port with a small port weight corresponds to a small hash index range. As shown in FIG. 6, in this implementation the network element determines, according to the packet identifier and the port weights of the physical ports in the aggregation group, the physical port used to send the packet as described in S5021 to S5022.
  • S5021. The network element calculates a hash index according to the packet identifier.
  • When the packet identifier is a packet feature, the low-order 3 bits of the packet feature may be XORed; or the packet features of the packet may be accumulated; or a cyclic redundancy check (CRC) calculation may be performed on the packet feature fields of the packet to obtain the hash index. The essence of these hash operations is to map information of many bits, through a hash algorithm, to a hash index with relatively few bits; a hash operation is therefore in effect a mapping between two index spaces.
  • For example, the 96 bits of the source MAC address and the destination MAC address may be passed through a low-order 3-bit XOR hash algorithm to obtain a 3-bit index; or the 64 bits of the source IP address and the destination IP address may be passed through a CRC-based hash algorithm to obtain a 16-bit index.
  • When the packet identifier is the packet sequence number or the session identifier of the packet, if a smaller hash index space is required, the packet sequence number or the session identifier can also be hashed; for the specific calculation method, refer to the description for the case in which the packet identifier is a packet feature. If the chosen hash index space is the same size as the coding space of the packet sequence number or the session identifier, a one-to-one mapping can be used, that is, the hash algorithm treats the packet sequence number or the session identifier of the packet directly as the hash index.
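  • The following sketch illustrates the two example hash computations mentioned above (a low-order XOR of the MAC addresses into a 3-bit index, and a CRC-based hash of the IP addresses into a 16-bit index). It is an illustrative assumption of one possible implementation, not code mandated by the patent; in particular, the 16-bit value here is obtained by truncating a CRC-32, whereas a dedicated CRC-16 could equally be used.

```python
# Hypothetical hash-index sketches for the two examples in the text.
import binascii


def hash_index_mac_xor3(src_mac: bytes, dst_mac: bytes) -> int:
    """XOR-fold the 96 bits of source + destination MAC into a 3-bit index (0..7)."""
    acc = 0
    for b in src_mac + dst_mac:
        acc ^= b
    return acc & 0x7


def hash_index_ip_crc16(src_ip: bytes, dst_ip: bytes) -> int:
    """Reduce the 64 bits of source + destination IPv4 address to a 16-bit index."""
    crc32 = binascii.crc32(src_ip + dst_ip)
    return crc32 & 0xFFFF  # keep 16 bits of the CRC as the hash index
```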
  • S5022. The network element queries the weight segmentation mapping table according to the hash index to obtain the physical port used to send the packet.
  • After calculating the hash index from the packet identifier, the network element searches the multiple hash index ranges contained in the weight segmentation mapping table according to the hash index, determines that the hash index belongs to a first hash index range among the multiple hash index ranges, where the first hash index range corresponds to a first physical port, and uses the first physical port as the physical port for sending the packet. In this embodiment of the present application, the name of the first physical port does not limit the physical port itself; in actual implementations, the physical port may appear under other names. As long as the function of the physical port is similar to that in the embodiments of the present application, it falls within the scope of the claims and their equivalent technologies.
  • Depending on the packet identifier, the corresponding hash index range may differ. When the packet identifier is a packet sequence number and the packet sequence number is not hashed, the packet sequence number is used directly as the hash index, so the hash index range is actually the range of packet sequence numbers; similarly, when the packet identifier is a session identifier, the hash index range is actually the range of session identifiers. Of course, the packet sequence number or the session identifier may also be hashed, in which case the corresponding hash index range may again differ; for example, the hash index range obtained by the aforementioned low-order 3-bit XOR is 0 to 7, while the hash index range obtained by CRC16 is 0 to 65535.
  • FIG. 7 is a schematic diagram of a process for distributing a packet according to an embodiment of the present application. The Ethernet port aggregation group includes three physical ports: physical port 0, physical port 1, and physical port 2. Physical port 0, physical port 1, and physical port 2 each determine their port weights according to the link rates of their respective physical links: the port weight of physical port 0 is 1, the port weight of physical port 1 is 2, and the port weight of physical port 2 is 5.
  • The weight segmentation mapping table covers a hash index range of 0 to 255, and each physical port corresponds to one hash index range. The overall hash index range can be divided according to the ratio of the port weights of the three physical ports, that is, the ratio of the port weight of physical port 0 to the port weight of physical port 1 to the port weight of physical port 2, which is 1:2:5. The hash index range is thus divided into 8 parts: physical port 0 corresponds to the hash index range 0 to 31, physical port 1 corresponds to the hash index range 32 to 95, and physical port 2 corresponds to the hash index range 96 to 255.
  • After calculating the hash index of a packet, the network element determines which of the multiple hash index ranges the hash index belongs to. Suppose the hash index belongs to the hash index range 0 to 31; the physical port corresponding to this range is physical port 0, so the network element sends the packet on physical link 0 corresponding to physical port 0. If the hash index belongs to the hash index range 32 to 95, the corresponding physical port is physical port 1, and the network element sends the packet on physical link 1 corresponding to physical port 1.
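  • A minimal sketch of the weight segmentation mapping table used in the FIG. 7 example (port weights 1:2:5 over a hash index space of 0 to 255). The construction and lookup below are an illustrative assumption of one possible implementation; the names are not from the patent.

```python
# Hypothetical weight segmentation mapping table: each physical port gets a
# hash-index range whose size is proportional to its port weight.

def build_weight_segments(port_weights: dict, index_space: int = 256) -> list:
    """Return a list of (port, low, high) ranges covering 0..index_space-1."""
    total_weight = sum(port_weights.values())
    segments, low = [], 0
    for port, weight in port_weights.items():
        size = index_space * weight // total_weight
        segments.append((port, low, low + size - 1))
        low += size
    # Give any rounding remainder to the last port so the whole space is covered.
    last_port, last_low, _ = segments[-1]
    segments[-1] = (last_port, last_low, index_space - 1)
    return segments


def lookup_segment(segments: list, hash_index: int):
    """Return the physical port whose hash-index range contains the hash index."""
    for port, low, high in segments:
        if low <= hash_index <= high:
            return port
    raise ValueError("hash index outside the table's index space")


# FIG. 7 example: weights 1:2:5 over indexes 0..255.
segments = build_weight_segments({"port0": 1, "port1": 2, "port2": 5})
# -> [("port0", 0, 31), ("port1", 32, 95), ("port2", 96, 255)]
assert lookup_segment(segments, 20) == "port0"
assert lookup_segment(segments, 100) == "port2"
```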
  • In another possible implementation, the network element presets a weight link mapping table. The weight link mapping table includes a mapping between physical ports and weight units: each physical port in the Ethernet port aggregation group corresponds to a number of weight units proportional to the port weight of that physical port, that is, the number of weight units corresponding to a physical port equals the port weight of the physical port. A physical port with a large port weight has more weight units and a physical port with a small port weight has fewer weight units; for example, a physical port with a port weight of 2 corresponds to 2 weight units, and a physical port with a port weight of 3 corresponds to 3 weight units. As shown in FIG. 8, in this implementation the network element determines, according to the packet identifier and the port weights of the physical ports in the aggregation group, the physical port used to send the packet as described in S5023 to S5024.
  • S5023. The network element calculates a hash index according to the packet identifier.
  • S5024. The network element determines, according to the hash index and the weight link mapping table, the physical port used to send the packet.
  • All physical ports in the Ethernet port aggregation group are pre-configured with port weights according to the link rates of their corresponding physical links, and each physical port corresponds to a number of weight units proportional to its port weight. The network element calculates the sum of the weight units corresponding to all the physical ports in the aggregation group to obtain the total number of weight units. The network element then takes the hash index modulo the total number of weight units, that is, divides the hash index by the total number of weight units and obtains the remainder M, where M is a positive integer. The value of M is the sequence number of a weight unit.
  • The hash indexes of different packets may differ, and different hash indexes yield, through the modulo operation, the sequence numbers of different weight units; of course, different packets may also have the same hash index, and the same hash index yields the sequence number of the same weight unit. It can be understood that, because each physical port in the aggregation group corresponds to a number of weight units equal to its port weight, the total number of weight units is equal to the sum of the port weights of all the physical ports in the aggregation group. After obtaining the sum of the port weights of all the physical ports in the aggregation group, the network element may therefore equivalently take the hash index modulo the port weight sum to obtain the remainder M.
  • The network element then looks up the weight link mapping table according to M to obtain the M-th weight unit. The network element may sort the physical ports, and the weight units corresponding to the physical ports are sorted in the same order, so that each weight unit has its own sequence number. The network element determines that the M-th weight unit belongs to the weight units corresponding to a first physical port among the multiple physical ports, and uses the first physical port as the physical port for sending the packet.
  • the name of the first physical port does not limit the physical port itself. In actual implementation, the physical port may also appear by other names. As long as the function of the physical port is similar to the embodiment of the present application, it is within the scope of the claims and the equivalents thereof.
  • FIG. 9 is a schematic diagram of another process for distributing a packet according to an embodiment of the present application. The Ethernet port aggregation group includes three physical ports: physical port 0, physical port 1, and physical port 2. Physical port 0, physical port 1, and physical port 2 each determine their port weights according to the link rates of their respective physical links: the port weight of physical port 0 is 1, the port weight of physical port 1 is 2, and the port weight of physical port 2 is 5.
  • The weight link mapping table includes the correspondence between physical ports and weight units: physical port 0, with a port weight of 1, corresponds to 1 weight unit; physical port 1, with a port weight of 2, corresponds to 2 weight units; and physical port 2, with a port weight of 5, corresponds to 5 weight units. The weight link mapping table therefore includes 8 weight units. Physical port 0, physical port 1, and physical port 2 are sorted in order, and the 8 weight units are also sorted and numbered in order, namely weight unit 0 to weight unit 7: physical port 0 corresponds to weight unit 0, physical port 1 corresponds to weight unit 1 and weight unit 2, and physical port 2 corresponds to weight unit 3, weight unit 4, weight unit 5, weight unit 6, and weight unit 7.
  • After receiving a packet, the network element extracts the packet features (or obtains the packet sequence number or the session identifier) and performs a hash operation to obtain the hash index of the packet. The network element then takes the hash index modulo the total number of weight units, 8, to obtain the remainder M, which indexes the M-th weight unit and thus the physical port used to send the packet.
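  • A minimal sketch of the weight link mapping table used in the FIG. 9 example (port weights 1, 2, and 5, hence 8 weight units numbered 0 to 7). The flat-list layout below is an illustrative assumption of one possible implementation, not the patent's required data structure.

```python
# Hypothetical weight link mapping table: each physical port contributes a
# number of weight units equal to its port weight, and the hash index is taken
# modulo the total number of weight units to pick the M-th unit.

def build_weight_units(port_weights: dict) -> list:
    """Return a list in which position M holds the port owning weight unit M."""
    units = []
    for port, weight in port_weights.items():
        units.extend([port] * weight)
    return units


def select_port(units: list, hash_index: int) -> str:
    """Take the hash index modulo the total number of weight units."""
    m = hash_index % len(units)
    return units[m]


# FIG. 9 example: port0 -> unit 0, port1 -> units 1-2, port2 -> units 3-7.
units = build_weight_units({"port0": 1, "port1": 2, "port2": 5})
assert units == ["port0", "port1", "port1",
                 "port2", "port2", "port2", "port2", "port2"]
assert select_port(units, 12) == "port2"  # 12 mod 8 = 4; weight unit 4 belongs to port2
```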
  • S503. The network element sends the packet through the determined physical port.
  • After determining the first physical port as the physical port for sending the packet, the network element sends the packet through the first physical port, that is, the packet is sent on the physical link corresponding to the first physical port.
  • FIG. 10 is a schematic diagram of a possible composition of the network element involved in the foregoing embodiments. As shown in FIG. 10, the network element may include a processing unit 1001 and a sending unit 1002.
  • The processing unit 1001 is configured to support the network element in performing S501 and S502 in the method for distributing a packet shown in FIG. 5, S5021 and S5022 in the method for distributing a packet shown in FIG. 6, and S5023 and S5024 in the method for distributing a packet shown in FIG. 8.
  • The network element provided in this embodiment of the present application is configured to perform the foregoing method for distributing a packet, and can therefore achieve the same effects as that method.
  • FIG. 11 shows another possible composition diagram of the network element involved in the foregoing embodiments. The network element includes a processing module 1101 and a communication module 1102.
  • The processing module 1101 is configured to control and manage the actions of the network element; for example, the processing module 1101 is configured to support the network element in performing S501 and S502 in FIG. 5, S5021 and S5022 in FIG. 6, S5023 and S5024 in FIG. 8, and/or other processes of the techniques described herein.
  • The communication module 1102 is configured to support communication between the network element and other network entities, for example with the other network elements or the terminal devices shown in FIG. 3; specifically, the communication module 1102 is configured to support the network element in performing S503 in FIG. 5.
  • The processing module 1101 may be a processor or a controller, which can implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure. The processor may also be a combination that implements computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
  • The communication module 1102 may be a communication interface or the like, and the storage module 1103 may be a memory.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as a standalone product, it may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application may, in essence or in the part contributing to the prior art, be embodied in the form of a software product. The software product is stored in a storage medium and includes a number of instructions for causing a device (which may be a microcontroller, a chip, or the like) or a processor to perform all or some of the steps of the methods described in the embodiments of the present application.
  • The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The embodiments of the present application disclose a method and an apparatus for distributing packets, relating to the field of communications, and solve the problem that, when the link rates of the physical links corresponding to the physical ports in an Ethernet port aggregation group differ, distributing packets according to the routing algorithms provided in the prior art causes low-rate physical links to be fully loaded while high-rate physical links still have a large amount of remaining bandwidth. The specific solution is: obtaining a packet identifier of a packet to be sent; determining, according to the packet identifier and the port weights of the physical ports in the Ethernet port aggregation group, the physical port used to send the packet, where the Ethernet port aggregation group contains at least two physical ports and the port weight of each physical port in the aggregation group is related to the link rate of the physical link corresponding to that physical port; and sending the packet. The embodiments of the present application are used in the process of distributing packets.

Description

Method and apparatus for distributing packets
Technical Field
The embodiments of the present application relate to the field of communications, and in particular, to a method and an apparatus for distributing packets.
Background
Link Aggregation (LAG) technology is defined in the Ethernet technology system: multiple independent physical links are aggregated into one logical link to increase the bandwidth between network elements and to improve the reliability and resilience of the connections between network elements. In addition, the Institute of Electrical and Electronics Engineers (IEEE) 802.1AX-2008 standard and earlier standards require the link rates of the physical links corresponding to the physical ports in the same Ethernet port aggregation group to be the same.
When a network element uses link aggregation technology to send packets, it needs to distribute the packets, according to a routing algorithm, onto the physical links corresponding to the physical ports in the Ethernet port aggregation group, so that the service traffic of the aggregation group is evenly distributed over the physical links corresponding to the physical ports, achieving load sharing, while ensuring that packets belonging to the same session are not delivered out of order. The routing algorithm may be a packet-by-packet distribution algorithm or a routing algorithm based on a hash operation. However, the new IEEE 802.1AX-2014 standard removes the requirement that the link rates of the physical links corresponding to the physical ports in the same Ethernet port aggregation group be the same. Consequently, the link rates of the physical links corresponding to the physical ports in an Ethernet port aggregation group may differ, and if packets are still distributed according to the routing algorithms provided in the prior art, a low-rate physical link may already be fully loaded while a high-rate physical link still has a large amount of remaining bandwidth.
Summary
The embodiments of the present application provide a method and an apparatus for distributing packets, which solve the problem that, when the link rates of the physical links corresponding to the physical ports in an Ethernet port aggregation group differ, distributing packets according to the routing algorithms provided in the prior art causes low-rate physical links to be fully loaded while high-rate physical links still have a large amount of remaining bandwidth.
To achieve the foregoing objective, the embodiments of the present application adopt the following technical solutions:
A first aspect of the embodiments of the present application provides a method for distributing a packet, where the method is applied to a network element that uses link aggregation technology. The method includes: first, obtaining a packet identifier of a packet to be sent; then, determining, according to the packet identifier and the port weights of the physical ports in the Ethernet port aggregation group, the physical port used to send the packet; and then sending the packet through the determined physical port. The Ethernet port aggregation group contains N physical ports, the physical port that sends the packet belongs to the aggregation group, and the port weight of each physical port in the aggregation group is related to the link rate of the physical link corresponding to that physical port; that is, a physical port whose corresponding physical link has a high link rate has a large port weight, and a physical port whose corresponding physical link has a low link rate has a small port weight. N is an integer greater than or equal to 2. With the method for distributing a packet provided in the embodiments of the present application, when the link rates of the physical links corresponding to the physical ports in an Ethernet port aggregation group differ, the port weight of each physical port is configured according to the link rate of its corresponding physical link, and the physical port used to send a packet is determined according to the packet identifier and the port weights of the physical ports in the aggregation group, so that the number of packets sent on the physical link corresponding to each physical port is proportional to the port weight of that physical port: a high-rate physical link carries more packets and a low-rate physical link carries fewer packets. The service traffic of the Ethernet port aggregation group is thus allocated to the corresponding physical links according to the port weights of the physical ports in the aggregation group, achieving load sharing.
With reference to the first aspect, in a possible implementation, the port weight of each physical port in the Ethernet port aggregation group being related to the link rate of the physical link corresponding to that physical port includes: the port weight of each physical port in the aggregation group is proportional to the link rate of the physical link corresponding to that physical port.
It should be noted that the port weight of a physical port in the Ethernet port aggregation group being proportional to the link rate of its corresponding physical link means that the system automatically pre-configures the port weight of each physical port according to the link rate of the physical link corresponding to that physical port in the aggregation group. Manual configuration is also possible: optionally, a system administrator may configure the port weight of a physical port according to the link rate of the physical link corresponding to that physical port in the aggregation group. Even when the link rates of the physical links corresponding to the physical ports in the aggregation group are the same, the routing algorithm may in some cases fail to balance the traffic; for example, the traffic of certain sessions may be so large that the traffic on the physical links corresponding to the physical ports becomes uneven. In that case, the port weights of the physical ports in the aggregation group can be adjusted manually so that the proportion of traffic carried by each member physical port of the aggregation group changes.
In order to allocate the service traffic of the Ethernet port aggregation group to the corresponding physical links according to the port weights of the physical ports in the aggregation group and achieve load sharing, the network element determines, according to the packet identifier and the port weights of the physical ports in the aggregation group, the physical port used to send the packet. This may specifically include the following implementations:
With reference to the first aspect, in a possible implementation, the network element determining, according to the packet identifier and the port weights of the physical ports in the Ethernet port aggregation group, the physical port used to send the packet includes: the network element calculates a hash index according to the packet identifier, where the packet identifier is a packet feature, a packet sequence number, or a session identifier (identification, ID) of the packet, and the packet feature includes at least one of Layer 2 information, Layer 3 information, and Layer 4 information; the network element queries a weight segmentation mapping table according to the hash index to obtain the physical port used to send the packet, where the weight segmentation mapping table includes a mapping between physical ports and hash index ranges, each physical port corresponds to one hash index range, and the hash index range is proportional to the port weight of the physical port.
With reference to the foregoing possible implementation, in another possible implementation, the network element querying the weight segmentation mapping table according to the hash index to obtain the physical port used to send the packet includes: the network element determines that the hash index belongs to a first hash index range among multiple hash index ranges, where the first hash index range corresponds to a first physical port; and the network element uses the first physical port as the physical port for sending the packet.
With reference to the first aspect, in a possible implementation, the network element determining, according to the packet identifier and the port weights of the physical ports in the Ethernet port aggregation group, the physical port used to send the packet includes: the network element calculates a hash index according to the packet identifier, where the packet identifier is a packet feature, a packet sequence number, or a session identifier of the packet, and the packet feature includes at least one of Layer 2 information, Layer 3 information, and Layer 4 information; the network element determines, according to the hash index and a weight link mapping table, the physical port used to send the packet, where the weight link mapping table includes a mapping between physical ports and weight units, and each physical port corresponds to a number of weight units proportional to the port weight of that physical port.
With reference to the foregoing possible implementation, in another possible implementation, the network element determining, according to the hash index and the weight link mapping table, the physical port used to send the packet includes: the network element takes the hash index modulo the total number of weight units to obtain a remainder M, where M is a positive integer and the total number of weight units is the sum of the weight units corresponding to all physical ports in the Ethernet port aggregation group; the network element queries the weight link mapping table according to M to obtain the M-th weight unit; the network element determines that the M-th weight unit corresponds to a first physical port; and the network element uses the first physical port as the physical port for sending the packet.
A second aspect of the embodiments of the present application provides a network element that distributes packets using link aggregation technology. The network element includes: a processing unit, configured to obtain a packet identifier of a packet to be sent, and further configured to determine, according to the packet identifier and the port weights of the physical ports in the Ethernet port aggregation group, the physical port used to send the packet, where the Ethernet port aggregation group contains N physical ports including the physical port used to send the packet, the port weight of each physical port in the aggregation group is related to the link rate of the physical link corresponding to that physical port, and N is an integer greater than or equal to 2; and a sending unit, configured to send the packet through the determined physical port.
It should be noted that the functional modules of the second aspect may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above. For example, a communication interface performs the functions of the receiving unit and the sending unit, a processor performs the functions of the processing unit, and a memory stores the program instructions used by the processor to process the method for distributing packets in the embodiments of the present application. The processor, the communication interface, and the memory are connected by a bus and communicate with each other. For details, refer to the functions and behavior of the network element in the method for distributing packets provided in the first aspect.
A third aspect of the embodiments of the present application provides a network element, including at least one processor, a memory, a communication interface, and a communication bus. The at least one processor is connected to the memory and the communication interface through the communication bus, and the memory is used to store execution instructions of the network element. When the processor runs, the processor executes the execution instructions stored in the memory, so that the network element performs the method for distributing packets according to the first aspect or any possible implementation of the first aspect.
A fourth aspect of the embodiments of the present application provides a computer storage medium for storing computer software instructions used by the foregoing network element, where the computer software instructions include a program designed to perform the foregoing method for distributing packets.
A fifth aspect of the embodiments of the present application provides a computer program product containing instructions which, when run on a network element, enable the network element to perform the method of any of the foregoing aspects.
In addition, for the technical effects brought by any of the designs in the second to fifth aspects, refer to the technical effects brought by the corresponding designs in the first aspect; details are not described again here.
In the embodiments of the present application, the name of the network element does not limit the device itself; in actual implementations, these devices may appear under other names. As long as the functions of the respective devices are similar to those in the embodiments of the present application, they fall within the scope of the claims of the present application and their equivalent technologies.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of sending packets by using a packet-by-packet distribution algorithm in the prior art;
FIG. 2 is a schematic diagram of sending packets by using a hash-based routing algorithm in the prior art;
FIG. 3 is a simplified schematic diagram of a network system to which an embodiment of the present application can be applied;
FIG. 4 is a schematic structural diagram of a network element according to an embodiment of the present application;
FIG. 5 is a flowchart of a method for distributing a packet according to an embodiment of the present application;
FIG. 6 is a flowchart of another method for distributing a packet according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a process for distributing a packet according to an embodiment of the present application;
FIG. 8 is a flowchart of still another method for distributing a packet according to an embodiment of the present application;
FIG. 9 is a schematic diagram of another process for distributing a packet according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of another network element according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of still another network element according to an embodiment of the present application.
Detailed Description of the Embodiments
To increase the bandwidth between network elements and improve the reliability and resilience of the connections between network elements, the Ethernet technology system defines Link Aggregation (LAG) technology, also called link trunking or link bundling (Link Bundling): multiple independent physical links are aggregated into one logical link, and the physical ports corresponding to the multiple independent physical links form an Ethernet port aggregation group. This can also be understood as all the physical links forming an Ethernet link aggregation group, with every physical link being a member of this Ethernet link aggregation group (the logical link). The link bandwidth of this logical link is equal to the sum of the link bandwidths of all the physical links that are aggregated. If one physical link fails, the other physical links can still be used; only the link bandwidth of the logical link is reduced. This logical link is not configured with a separate physical port; only the logical link itself is configured. The earliest IEEE 802.3ad-2000 standard, as well as the subsequent IEEE 802.3-2002 and IEEE 802.1AX-2008 standards, requires the link rates of the physical links corresponding to the physical ports in the same Ethernet port aggregation group to be the same.
For example, during data network construction, a network operator, in order to protect existing investment and make full use of existing equipment, may in some scenarios need to expand the link bandwidth of the physical links between existing network elements. A new board may be added to an existing device, and the physical ports corresponding to the physical links provided on the new board may be configured, together with the physical ports corresponding to the original physical links, as an Ethernet port aggregation group in order to provide higher link bandwidth. However, the link rate of the physical link corresponding to a physical port on the newly added board may be higher than the link rate of the physical link corresponding to an original physical port on the network element; for example, the link rate of the physical link corresponding to the original physical port is 1 Gbit/s, while the link rate of the physical link corresponding to the new physical port is 10 Gbit/s. If the requirement that the link rates of the physical links corresponding to the physical ports in the same Ethernet port aggregation group be the same is followed, the physical ports corresponding to the newly added 10 Gbit/s physical links cannot join the existing Ethernet port aggregation group formed by the physical ports corresponding to the 1 Gbit/s physical links. Typically, the network operator may configure a separate new Ethernet port aggregation group for the physical ports corresponding to the newly added 10 Gbit/s physical links and then migrate the services from the Ethernet port aggregation group formed by the physical ports corresponding to the previous 1 Gbit/s physical links to the Ethernet port aggregation group formed by the physical ports corresponding to the 10 Gbit/s physical links.
In a scenario in which a data network uses link aggregation technology, when a network element sends packets using link aggregation, it needs to distribute the packets, according to a routing algorithm, onto the physical links corresponding to the physical ports in the Ethernet port aggregation group, so that the service traffic of the aggregation group is evenly distributed over the physical links corresponding to the physical ports, achieving load sharing. In addition, since the earliest IEEE 802.3ad-2000 standard, the concept of a session has been defined for packets, that is, a set of frames transmitted from one end to the other, in which all frames form an ordered sequence and in which the communicating parties require that ordering be maintained among the exchanged sets of frames. Therefore, when a network element uses link aggregation to send packets, it must also ensure that the packet sequence within a session is not delivered out of order.
There are many routing algorithms. For example, the packet-by-packet distribution algorithm, which appeared before the earliest IEEE 802.3ad standard was officially released, evenly distributes the packets to be sent, packet by packet, onto the physical links corresponding to the physical ports in the Ethernet port aggregation group, thereby keeping the traffic on the physical links corresponding to the physical ports as uniform as possible. FIG. 1 is a schematic diagram of sending packets by using the packet-by-packet distribution algorithm in the prior art, in which physical port 0 to physical port N form an Ethernet port aggregation group, physical port 0 corresponds to physical link 0, and physical port N corresponds to physical link N. After receiving packets, the network element sends them one by one in order from physical port 0 to physical port N.
Besides the packet-by-packet distribution algorithm, the technique most commonly used in the prior art is a routing algorithm based on a hash operation. The packet features extracted from a packet are hashed, a hash index is calculated for each packet, the physical link corresponding to the physical port mapped from the hash index is looked up, and the packet is sent on that link. After packets are distributed by the hash-based routing algorithm, packets with different combinations of packet features belong to different sessions, and packets of the same session are mapped to the same physical link, so that the traffic on the physical links corresponding to the physical ports in the Ethernet port aggregation group is kept as uniform as possible. FIG. 2 is a schematic diagram of sending packets by using the hash-based routing algorithm in the prior art, where physical port 0 to physical port N form an Ethernet port aggregation group, physical port 0 corresponds to physical link 0, and physical port N corresponds to physical link N. After receiving a packet, the network element extracts the packet features, performs a hash operation on them to obtain the hash index of the packet, and then performs link mapping to obtain the physical link on which the packet is sent; the packet is sent on the corresponding physical link. For example, a hash algorithm that XORs the low-order 3 bits produces hash indexes with values 0 to 7; the hash index is then taken modulo the number of physical links corresponding to all the physical ports in the Ethernet port aggregation group, and the resulting remainder is used to index a physical port in the aggregation group. Since each physical port corresponds to one physical link, once the remainder indexes a physical port in the aggregation group, the packet can be sent on the physical link corresponding to that physical port.
When the link rates of the physical links corresponding to the physical ports in an Ethernet port aggregation group are the same, the two routing algorithms above can, statistically, keep the traffic on the physical links corresponding to the physical ports basically uniform once the network element has received and distributed a sufficiently large number of packets. However, the new IEEE 802.1AX-2014 standard removes the requirement that the link rates of the physical links corresponding to the physical ports in the same Ethernet port aggregation group be the same.
To solve the problem that, when the link rates of the physical links corresponding to the physical ports in an Ethernet port aggregation group differ, distributing packets according to the routing algorithms provided in the prior art causes low-rate physical links to be fully loaded while high-rate physical links still have a large amount of remaining bandwidth, the embodiments of the present application provide a method for distributing packets. The basic principle is as follows: first, the network element obtains a packet identifier of a packet to be sent; then, it determines, according to the packet identifier and the port weights of the physical ports in the Ethernet port aggregation group, the physical port used to send the packet; and then it sends the packet through the determined physical port. The Ethernet port aggregation group contains N physical ports, the physical port that sends the packet belongs to the aggregation group, and the port weight of each physical port in the aggregation group is related to the link rate of the physical link corresponding to that physical port; that is, a physical port whose corresponding physical link has a high link rate has a large port weight, and a physical port whose corresponding physical link has a low link rate has a small port weight. N is an integer greater than or equal to 2. With the method for distributing packets provided in the embodiments of the present application, when the link rates of the physical links corresponding to the physical ports in an Ethernet port aggregation group differ, the port weight of each physical port is configured according to the link rate of its corresponding physical link, and the physical port used to send a packet is determined according to the packet identifier and the port weights of the physical ports in the aggregation group, so that the number of packets sent on the physical link corresponding to each physical port is proportional to the port weight of that physical port: a high-rate physical link carries more packets and a low-rate physical link carries fewer packets. The service traffic of the Ethernet port aggregation group is thus allocated to the corresponding physical links according to the port weights of the physical ports in the aggregation group, achieving load sharing.
下面将结合附图对本申请实施例的实施方式进行详细描述。
图3示出的是可以应用本申请实施例的网络系统的简化示意图。如图3所示,该网络系统可以包括:第一网元301、第二网元302、第三网元303、第四网元304和第五网元305。网络系统中所述的网元可以是交换机或路由器等网络设备,用于转发报文。第一网元301、第三网元303、第四网元304和第五网元305还可以连接终端设备306。
图4为本申请实施例提供的一种网元的组成示意图,如图4所示,网元可以包括至少一个处理器41,存储器42、通信接口43、通信总线44。
The components of the network element are described below with reference to FIG. 4.
The processor 41 is the control center of the network element and may be a single processor or a collective term for multiple processing elements. In a specific implementation, as an embodiment, the processor 41 may include one or more CPUs, for example CPU0 and CPU1 shown in FIG. 4. The processor 41 may also be an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of this application, for example one or more digital signal processors (DSP) or one or more field programmable gate arrays (FPGA).
Taking the case in which the processor 41 is one or more CPUs as an example, the processor 41 can perform the various functions of the network element by running or executing software programs stored in the memory 42 of the network element and invoking data stored in the memory 42.
In a specific implementation, as an embodiment, the network element may include multiple processors, for example the processor 41 and the processor 45 shown in FIG. 4. Each of these processors may be a single-core processor (single-CPU) or a multi-core processor (multi-CPU). A processor here may refer to one or more devices, circuits and/or processing cores for processing data (for example, computer program instructions).
In the embodiments of this application, the processor is mainly configured to obtain the packet identifier of the packet to be sent and to determine, according to the packet identifier and the port weights of the physical ports in the Ethernet port aggregation group, the physical port for sending the packet.
The memory 42 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compressed optical discs, laser discs, optical discs, digital versatile discs, Blu-ray discs and the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 42 may exist independently and be connected to the processor 41 through the communication bus 44, or the memory 42 may be integrated with the processor 41.
The memory 42 is configured to store the software program that executes the solution of this application, and execution is controlled by the processor 41.
The communication interface 43 is configured to communicate with other devices or communication networks, such as Ethernet, a radio access network (RAN) or a wireless local area network (WLAN). The communication interface 43 may include a receiving unit implementing the receiving function and a sending unit implementing the sending function. In the embodiments of this application, the communication interface is mainly configured to send the packet through the determined physical port.
The communication bus 44 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus or the like. The bus may be divided into an address bus, a data bus, a control bus and so on. For ease of representation, only one thick line is used in FIG. 4, but this does not mean that there is only one bus or one type of bus.
The device structure shown in FIG. 4 does not constitute a limitation on the network element; the network element may include more or fewer components than shown, combine some components, or use a different arrangement of components.
FIG. 5 is a flowchart of a packet distribution method according to an embodiment of this application. As shown in FIG. 5, the method may include the following steps.
S501: The network element obtains the packet identifier of a packet to be sent.
The network element is connected to terminal devices or other network elements and receives packets from the physical port connected to the terminal device or the other network element; this port is a specific physical port on the forwarding path rather than an arbitrary port on the network element. After receiving a packet, the network element determines how to forward it; how forwarding is determined can follow prior-art methods and is not described again here. After determining how to forward the packet and before sending it, the network element obtains the packet identifier of the packet to be sent. The packet identifier may be the packet features of the packet, a packet sequence number, or the session identifier of the packet. The packet itself carries packet features, a packet sequence number or a session identifier, so the network element can obtain them directly from the packet.
The packet features include at least one of Layer 2 information, Layer 3 information and Layer 4 information. Layer 2 information includes the source Media Access Control (MAC) address, the destination MAC address, the Ethernet type and the Virtual Local Area Network Identification (VLAN ID). Layer 3 information includes the source Internet Protocol (IP) address and the destination IP address. Layer 4 information includes the source Transmission Control Protocol (TCP) port number, the destination TCP port number, the source User Datagram Protocol (UDP) port number and the destination UDP port number. The packet features may also be Multiprotocol Label Switching (MPLS) label information, or any combination of the above information. These feature-extraction methods are in effect ways of separating packets into different sessions; packets with the same packet features are considered to belong to the same session, as the sketch below illustrates.
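For illustration only, the following Python sketch shows one way such a feature tuple could be assembled; the FlowKey fields and the dictionary-based packet representation are assumptions made for the example, not structures defined by this application.

```python
# Minimal sketch of building a session key from L2/L3/L4 packet features:
# packets with the same key are treated as one session and must stay on the
# same physical link so that their ordering is preserved.

from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    src_mac: int
    dst_mac: int
    vlan_id: int
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int

def flow_key_of(pkt: dict) -> FlowKey:
    """Extract the feature tuple used to classify a packet into a session."""
    return FlowKey(
        src_mac=pkt["src_mac"], dst_mac=pkt["dst_mac"], vlan_id=pkt["vlan_id"],
        src_ip=pkt["src_ip"], dst_ip=pkt["dst_ip"],
        src_port=pkt["src_port"], dst_port=pkt["dst_port"],
    )
```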
Usually a session consists of multiple packets, which the network element must forward in order. For example, the packet sequence number may indicate the order of a packet within a session. It may also indicate the order in which packets arrived at the receiving port, or the order in which the switching fabric forwards packets to the sending port (physical or logical).
The session identifier indicates the session to which a packet belongs. Packets of the same session have the same session identifier, and packets of different sessions have different session identifiers. The session identifier defined in IEEE 802.1AX-2014 ranges from 0 to 4095. Because the session to which each packet belongs is specific, the session identifier of each packet is also specific.
S502: The network element determines, according to the packet identifier and the port weights of the physical ports in the Ethernet port aggregation group, the physical port for sending the packet.
The Ethernet port aggregation group contains N physical ports, and each physical port is assigned a port weight according to the link rate of the physical link corresponding to that port. For example, the port weight of a physical port in the group is proportional to the link rate of its physical link: a port whose physical link has a higher link rate has a larger weight, and a port whose physical link has a lower link rate has a smaller weight. N is an integer greater than or equal to 2, and the group includes the port that sends the packet. It should be noted that making the port weight proportional to the link rate can be done automatically: the system configures the port weights according to the link rates of the physical links corresponding to the ports in the group. Manual configuration is also possible; optionally, a system administrator may configure the port weights based on the link rates of the physical links in the group. For example, once the network element has distributed a large number of packets, the distribution method of this embodiment keeps the traffic on the physical links of the group essentially uniform from a statistical point of view; but when the network element distributes only a small number of packets, uniformity may not be guaranteed, in which case the port weights can be configured manually, i.e. the administrator configures them according to the link rates of the physical links in the group. A sketch of deriving weights from link rates is given below.
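For illustration only, the following Python sketch shows one way weights proportional to link rate could be derived; normalizing by the greatest common divisor is an assumption of the example, not a requirement of this application.

```python
# Minimal sketch of deriving port weights proportional to link rate: divide
# each rate by the greatest common divisor so that, e.g., 1G/2G/5G links get
# integer weights 1/2/5.

from math import gcd
from functools import reduce

def weights_from_rates(link_rates_gbps):
    """Map physical port id -> integer weight proportional to its link rate."""
    g = reduce(gcd, link_rates_gbps.values())
    return {port: rate // g for port, rate in link_rates_gbps.items()}

# Usage: ports 0, 1 and 2 with 1G, 2G and 5G links.
print(weights_from_rates({0: 1, 1: 2, 2: 5}))  # {0: 1, 1: 2, 2: 5}
```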
The following describes in detail how the network element determines, according to the packet identifier and the port weights of the physical ports in the Ethernet port aggregation group, the physical port for sending the packet.
In one possible implementation, the network element is preconfigured with a weight segment mapping table that contains mappings between physical ports and hash index ranges. Each physical port in the Ethernet port aggregation group corresponds to one hash index range, and the size of that range is proportional to the port weight: a port with a larger weight corresponds to a larger hash index range and a port with a smaller weight corresponds to a smaller one. As shown in FIG. 6, the specific implementation of determining the sending port according to the packet identifier and the port weights is steps S5021 to S5022.
S5021: The network element computes a hash index from the packet identifier.
When the packet identifier is the packet features, the hash index can be obtained by XORing the low 3 bits of the packet features, by accumulating the individual packet features, or by computing a Cyclic Redundancy Check (CRC) over the feature fields of the packet. The essence of these hash operations is to map many bits of information, through a hash algorithm, to a hash index with comparatively few bits; a hash operation is therefore effectively a mapping between two index spaces. For example, the 96 bits formed by the source MAC address and the destination MAC address can be reduced to a 3-bit index with the low-3-bit XOR hash, or the 64 bits formed by the source IP address and the destination IP address can be reduced to a 16-bit index with a CRC-based hash.
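For illustration only, the following Python sketch shows one workable CRC-based reduction of the source and destination IP addresses to a 16-bit index; the use of zlib.crc32 folded to 16 bits is an assumption of the example rather than a prescribed algorithm of this application.

```python
# Minimal sketch of reducing the 64 bits of source/destination IPv4 addresses
# to a 16-bit hash index with a CRC.

import ipaddress
import zlib

def crc16_index(src_ip: str, dst_ip: str) -> int:
    """Hash src/dst IPv4 addresses into a 16-bit index (0..65535)."""
    data = ipaddress.IPv4Address(src_ip).packed + ipaddress.IPv4Address(dst_ip).packed
    crc = zlib.crc32(data)
    return (crc ^ (crc >> 16)) & 0xFFFF  # fold the 32-bit CRC down to 16 bits

print(crc16_index("10.0.0.1", "10.0.0.2"))
```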
When the packet identifier is the packet sequence number or the session identifier of the packet, a hash can be computed over the sequence number or session identifier if a smaller hash index space is needed; the computation can follow the description given for the case in which the packet identifier is the packet features. If the chosen hash index space has the same size as the coding space of the sequence number or session identifier, a one-to-one mapping can be used, i.e. the hash algorithm simply takes the packet sequence number or session identifier as the hash index.
S5022: The network element looks up the weight segment mapping table with the hash index to obtain the physical port for sending the packet.
After computing the hash index from the packet identifier, the network element searches the hash index ranges in the weight segment mapping table and determines that the hash index falls into a first hash index range among the multiple ranges; the first hash index range corresponds to a first physical port, and the network element uses the first physical port as the port for sending the packet. In the embodiments of this application, the name "first physical port" does not limit the port itself; in an actual implementation the port may appear under another name, and as long as the function of the port is similar to that in the embodiments of this application, it falls within the scope of the claims of this application and their equivalents.
It should be noted that the hash index ranges looked up may differ depending on how the hash index is obtained from the packet identifier. For example, when the packet identifier is the packet sequence number, the sequence number need not be hashed and can be used directly as the hash index, so the hash index range is in fact the range of packet sequence numbers. Similarly, when the packet identifier is the session identifier, the hash index range is the range of session identifiers. Also, when a smaller hash index space is needed and the sequence number or session identifier is hashed, different hash algorithms may produce different hash index ranges; for example, the 3-bit XOR mentioned above yields a hash index range of 0 to 7, while CRC16 yields a range of 0 to 65535.
For example, FIG. 7 is a schematic diagram of a packet distribution process according to an embodiment of this application. Assume the Ethernet port aggregation group includes three physical ports: physical port 0, physical port 1 and physical port 2. Their port weights are determined from the link rates of their respective physical links, for example weight 1 for port 0, weight 2 for port 1 and weight 5 for port 2. The hash index range covered by the weight segment mapping table is 0 to 255. Each physical port corresponds to one hash index range, and the ranges can be divided according to the ratio of the three port weights, i.e. 1:2:5: dividing the overall range into 8 parts gives physical port 0 the hash index range 0-31, physical port 1 the range 32-95 and physical port 2 the range 96-255. After receiving a packet, the network element extracts the packet features (or obtains the packet sequence number or session identifier) and hashes them to obtain the packet's hash index, as described in S501. The network element then determines which of the ranges the hash index falls into. If the hash index falls into the range 0-31, which corresponds to physical port 0, the network element can send the packet on physical link 0 corresponding to physical port 0. If the hash index falls into the range 32-95, which corresponds to physical port 1, the network element can send the packet on physical link 1 corresponding to physical port 1.
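For illustration only, the following Python sketch mirrors the FIG. 7 example: it builds hash index ranges proportional to the port weights 1:2:5 over a 0-255 index space and looks up the sending port; the data structures and function names are assumptions of the example, not the mandated implementation.

```python
# Minimal sketch of a weight segment mapping table: the 0..255 hash index
# space is split into ranges proportional to the port weights, then looked up
# per packet.

def build_weight_segments(weights, index_space=256):
    """Return a list of (lo, hi, port) ranges proportional to port weights."""
    total = sum(weights.values())
    segments, lo = [], 0
    for port, w in weights.items():
        hi = lo + (index_space * w) // total - 1
        segments.append((lo, hi, port))
        lo = hi + 1
    # Absorb any rounding remainder into the last range.
    segments[-1] = (segments[-1][0], index_space - 1, segments[-1][2])
    return segments

def lookup_port(segments, hash_index):
    """Find the physical port whose hash index range contains hash_index."""
    for lo, hi, port in segments:
        if lo <= hash_index <= hi:
            return port
    raise ValueError("hash index out of range")

segs = build_weight_segments({0: 1, 1: 2, 2: 5})
print(segs)                   # [(0, 31, 0), (32, 95, 1), (96, 255, 2)]
print(lookup_port(segs, 40))  # 1 -> send on physical link 1
```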
In another possible implementation, the network element is preconfigured with a weight link mapping table that contains mappings between physical ports and weight units. Each physical port in the Ethernet port aggregation group corresponds to a number of weight units proportional to its port weight, which can be understood as the number of weight units corresponding to a port being equal to the port weight: a port with a larger weight corresponds to more weight units and a port with a smaller weight corresponds to fewer. For example, a port with weight 2 corresponds to 2 weight units and a port with weight 3 corresponds to 3 weight units. As shown in FIG. 8, the specific implementation of S502, in which the network element determines the sending port according to the packet identifier and the port weights, is steps S5023 to S5024.
S5023: The network element computes a hash index from the packet identifier.
For a detailed explanation, refer to S5021; details are not repeated for this step.
S5024: The network element determines, according to the hash index and the weight link mapping table, the physical port for sending the packet.
Every physical port in the Ethernet port aggregation group is assigned in advance a port weight according to the link rate of its physical link, and each port corresponds to a number of weight units proportional to its weight. After computing the hash index from the packet identifier, the network element sums the weight units of all physical ports in the group to obtain the total number of weight units. The network element then takes the hash index modulo the total number of weight units, i.e. divides the hash index by the total, obtaining a remainder M, where M is an integer greater than or equal to 0. The value of M is the sequence number of a weight unit: different packets may have different hash indices, and different hash indices select different weight unit numbers; of course, different packets may also have the same hash index, in which case the same weight unit number is selected. It can be understood that, because each physical port corresponds to a number of weight units equal to its port weight, summing the weight units of all ports in the group is equivalent to summing the port weights of all ports in the group, giving the total port weight of the group; the network element may therefore equally take the hash index modulo the total port weight to obtain the remainder M. Here M denotes the sequence number of a weight unit: M = 0 corresponds to weight unit 0, and M = 1 corresponds to weight unit 1. The network element looks up the weight link mapping table with M to obtain the M-th weight unit. For example, the network element may order the physical ports, and the weight units corresponding to the ports are ordered in the same way, each weight unit having its own sequence number. The network element determines that the M-th weight unit belongs to the weight units corresponding to a first physical port among the multiple ports, and uses the first physical port as the port for sending the packet. In the embodiments of this application, the name "first physical port" does not limit the port itself; in an actual implementation the port may appear under another name, and as long as the function of the port is similar to that in the embodiments of this application, it falls within the scope of the claims of this application and their equivalents.
For example, FIG. 9 is a schematic diagram of a packet distribution process according to an embodiment of this application. Assume the Ethernet port aggregation group includes three physical ports: physical port 0, physical port 1 and physical port 2. Their port weights are determined from the link rates of their respective physical links, for example weight 1 for port 0, weight 2 for port 1 and weight 5 for port 2. The weight link mapping table contains the correspondence between physical ports and weight units: port 0 with weight 1 corresponds to 1 weight unit, port 1 with weight 2 corresponds to 2 weight units, and port 2 with weight 5 corresponds to 5 weight units. The table therefore contains 8 weight units; ports 0, 1 and 2 are ordered, and the 8 weight units are ordered and numbered accordingly as weight unit 0 to weight unit 7: port 0 corresponds to weight unit 0, port 1 corresponds to weight units 1 and 2, and port 2 corresponds to weight units 3, 4, 5, 6 and 7. After receiving a packet, the network element extracts the packet features (or obtains the packet sequence number or session identifier) and hashes them to obtain the packet's hash index, as described in S501. The network element then takes the hash index modulo the total of 8 weight units to obtain the remainder M. If M = 0, the network element looks up weight unit 0, which belongs to physical port 0, and can therefore send the packet on physical link 0 corresponding to physical port 0. If M = 4, the network element looks up weight unit 4, which belongs to physical port 2, and can therefore send the packet on physical link 2 corresponding to physical port 2.
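For illustration only, the following Python sketch mirrors the FIG. 9 example: it expands the port weights 1, 2 and 5 into eight ordered weight units and selects the sending port by taking the hash index modulo the number of units; the list-based table is an assumption of the example, not the mandated data structure.

```python
# Minimal sketch of a weight link mapping table: each port appears once per
# unit of weight, and hash_index mod the number of weight units selects the
# sending port.

def build_weight_units(weights):
    """Expand port weights into an ordered list of weight units (unit index -> port)."""
    units = []
    for port, w in weights.items():
        units.extend([port] * w)
    return units

def select_port(units, hash_index):
    """M = hash_index mod total weight units; return the port owning unit M."""
    return units[hash_index % len(units)]

units = build_weight_units({0: 1, 1: 2, 2: 5})
print(units)                   # [0, 1, 1, 2, 2, 2, 2, 2]
print(select_port(units, 0))   # 0 -> send on physical link 0
print(select_port(units, 12))  # 12 % 8 = 4 -> weight unit 4 -> port 2
```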
S503: The network element sends the packet through the determined physical port for sending the packet.
After the network element takes the first physical port as the port for sending the packet, it sends the packet through the first physical port, i.e. on the physical link corresponding to the first physical port.
With the packet distribution method provided by the embodiments of this application, when the link rates of the physical links corresponding to the ports of an Ethernet port aggregation group differ, the port weights are configured according to those link rates, and the sending port is determined from the packet identifier and the port weights of the ports in the group, so that the number of packets sent on each physical link is proportional to the weight of its port: high-rate links carry more packets and low-rate links carry fewer. The service traffic of the aggregation group is thus distributed onto the physical links in proportion to the port weights, achieving load sharing.
The solutions provided by the embodiments of this application have mainly been described above from the perspective of interaction between network elements. It can be understood that, to implement the above functions, each network element contains corresponding hardware structures and/or software modules for performing each function. A person skilled in the art should readily appreciate that, in combination with the algorithm steps of the examples described in the embodiments disclosed herein, this application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of this application.
In the embodiments of this application, functional modules of the network element may be divided according to the above method examples; for example, each functional module may be divided according to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of this application is illustrative and is merely a division of logical functions; other divisions are possible in actual implementations.
When functional modules are divided according to each function, FIG. 10 shows a possible composition of the network element involved in the above embodiments. As shown in FIG. 10, the network element may include a processing unit 1001 and a sending unit 1002.
The processing unit 1001 is configured to support the network element in performing S501 and S502 of the packet distribution method shown in FIG. 5, S5021 and S5022 of the method shown in FIG. 6, and S5023 and S5024 of the method shown in FIG. 8.
The sending unit 1002 is configured to support the network element in performing S503 of the packet distribution method shown in FIG. 5.
It should be noted that all related content of the steps involved in the above method embodiments can be cited in the functional descriptions of the corresponding functional modules, and is not described again here.
The network element provided by the embodiments of this application is used to perform the above packet distribution method and can therefore achieve the same effects as that method.
When an integrated unit is used, FIG. 11 shows another possible composition of the network element involved in the above embodiments. As shown in FIG. 11, the network element includes a processing module 1101 and a communication module 1102.
The processing module 1101 is configured to control and manage the actions of the network element; for example, the processing module 1101 is configured to support the network element in performing S501 and S502 in FIG. 5, S5021 and S5022 in FIG. 6, and S5023 and S5024 in FIG. 8, and/or other processes of the technology described herein. The communication module 1102 is configured to support communication between the network element and other network entities, for example with the other network elements or terminal devices shown in FIG. 3; specifically, the communication module 1102 is configured to perform S503 in FIG. 5 for the network element.
The processing module 1101 may be a processor or a controller, which may implement or execute the various exemplary logical blocks, modules and circuits described in connection with the disclosure of this application. The processor may also be a combination implementing computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication module 1102 may be a communication interface or the like. The storage module 1103 may be a memory.
When the processing module 1101 is a processor, the communication module 1102 is a communication interface and the storage module 1103 is a memory, the network element involved in the embodiments of this application may also be the network element shown in FIG. 4.
From the description of the above implementations, a person skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional modules is used as an example; in practical applications, the above functions may be allocated to different functional modules as needed, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or some of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is merely a division of logical functions, and other divisions are possible in actual implementations; for example, multiple units or components may be combined or integrated into another apparatus, or some features may be ignored or not performed. Furthermore, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may be one physical unit or multiple physical units, i.e. they may be located in one place or distributed across multiple places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of this application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip or the like) or a processor to perform all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk or an optical disc.
The above is merely a specific implementation of this application, but the protection scope of this application is not limited thereto; any variation or replacement within the technical scope disclosed in this application shall be covered by the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (20)

  1. A packet distribution method, wherein the method is applied to a network element that uses link aggregation, and the method comprises:
    obtaining, by the network element, a packet identifier of a packet to be sent;
    determining, by the network element according to the packet identifier and port weights of physical ports in an Ethernet port aggregation group, a physical port for sending the packet, wherein the Ethernet port aggregation group contains N physical ports, the Ethernet port aggregation group includes the physical port for sending the packet, the port weight of each physical port in the Ethernet port aggregation group is related to the link rate of the physical link corresponding to that physical port, and N is an integer greater than or equal to 2; and
    sending, by the network element, the packet through the determined physical port for sending the packet.
  2. The method according to claim 1, wherein the port weight of each physical port in the Ethernet port aggregation group being related to the link rate of the physical link corresponding to the physical port comprises:
    the port weight of each physical port in the Ethernet port aggregation group is proportional to the link rate of the physical link corresponding to the physical port.
  3. The method according to claim 1 or 2, wherein the determining, by the network element according to the packet identifier and the port weights of the physical ports in the Ethernet port aggregation group, the physical port for sending the packet comprises:
    computing, by the network element, a hash index from the packet identifier, wherein the packet identifier is a packet feature, a packet sequence number or a session identifier ID of the packet, and the packet feature includes at least one of Layer 2 information, Layer 3 information and Layer 4 information; and
    looking up, by the network element, a weight segment mapping table with the hash index to obtain the physical port for sending the packet, wherein the weight segment mapping table includes mappings between physical ports and hash index ranges, one physical port corresponds to one hash index range, and the hash index range is proportional to the port weight of the physical port.
  4. The method according to claim 3, wherein the looking up, by the network element, the weight segment mapping table with the hash index to obtain the physical port for sending the packet comprises:
    determining, by the network element, that the hash index belongs to a first hash index range among multiple hash index ranges, wherein the first hash index range corresponds to a first physical port; and
    using, by the network element, the first physical port as the physical port for sending the packet.
  5. The method according to claim 1 or 2, wherein the determining, by the network element according to the packet identifier and the port weights of the physical ports in the Ethernet port aggregation group, the physical port for sending the packet comprises:
    computing, by the network element, a hash index from the packet identifier, wherein the packet identifier is a packet feature, a packet sequence number or a session identifier ID of the packet, and the packet feature includes at least one of Layer 2 information, Layer 3 information and Layer 4 information; and
    determining, by the network element according to the hash index and a weight link mapping table, the physical port for sending the packet, wherein the weight link mapping table includes mappings between physical ports and weight units, and one physical port corresponds to a number of weight units proportional to the port weight of the physical port.
  6. The method according to claim 5, wherein the determining, by the network element according to the hash index and the weight link mapping table, the physical port for sending the packet comprises:
    taking, by the network element, the hash index modulo a total number of weight units to obtain a remainder M, wherein the total number of weight units is the sum of the weight units corresponding to all the physical ports in the Ethernet port aggregation group, and M is a positive integer;
    looking up, by the network element, the weight link mapping table with M to obtain an M-th weight unit;
    determining, by the network element, that the M-th weight unit corresponds to a first physical port; and
    using, by the network element, the first physical port as the physical port for sending the packet.
  7. A network element, wherein the network element distributes packets using link aggregation, and the network element comprises:
    a processing unit, configured to obtain a packet identifier of a packet to be sent;
    the processing unit being further configured to determine, according to the packet identifier and port weights of physical ports in an Ethernet port aggregation group, a physical port for sending the packet, wherein the Ethernet port aggregation group contains N physical ports, the Ethernet port aggregation group includes the physical port for sending the packet, the port weight of each physical port in the Ethernet port aggregation group is related to the link rate of the physical link corresponding to that physical port, and N is an integer greater than or equal to 2; and
    a sending unit, configured to send the packet through the determined physical port for sending the packet.
  8. The network element according to claim 7, wherein the port weight of each physical port in the Ethernet port aggregation group being related to the link rate of the physical link corresponding to the physical port comprises:
    the port weight of each physical port in the Ethernet port aggregation group is proportional to the link rate of the physical link corresponding to the physical port.
  9. The network element according to claim 7 or 8, wherein the processing unit is specifically configured to:
    compute a hash index from the packet identifier, wherein the packet identifier is a packet feature, a packet sequence number or a session identifier ID of the packet, and the packet feature includes at least one of Layer 2 information, Layer 3 information and Layer 4 information; and
    look up a weight segment mapping table with the hash index to obtain the physical port for sending the packet, wherein the weight segment mapping table includes mappings between physical ports and hash index ranges, one physical port corresponds to one hash index range, and the hash index range is proportional to the port weight of the physical port.
  10. The network element according to claim 9, wherein the processing unit is specifically configured to:
    determine that the hash index belongs to a first hash index range among multiple hash index ranges, wherein the first hash index range corresponds to a first physical port; and
    use the first physical port as the physical port for sending the packet.
  11. The network element according to claim 7 or 8, wherein the processing unit is specifically configured to:
    compute a hash index from the packet identifier, wherein the packet identifier is a packet feature, a packet sequence number or a session identifier ID of the packet, and the packet feature includes at least one of Layer 2 information, Layer 3 information and Layer 4 information; and
    determine, according to the hash index and a weight link mapping table, the physical port for sending the packet, wherein the weight link mapping table includes mappings between physical ports and weight units, and one physical port corresponds to a number of weight units proportional to the port weight of the physical port.
  12. The network element according to claim 11, wherein the processing unit is specifically configured to:
    take the hash index modulo a total number of weight units to obtain a remainder M, wherein the total number of weight units is the sum of the weight units corresponding to all the physical ports in the Ethernet port aggregation group, and M is a positive integer;
    look up the weight link mapping table with M to obtain an M-th weight unit;
    determine that the M-th weight unit corresponds to a first physical port; and
    use the first physical port as the physical port for sending the packet.
  13. A network element, wherein the network element distributes packets using link aggregation, and the network element comprises: at least one processor, a memory, at least one communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another through the communication bus;
    the memory is configured to store instructions;
    the processor is configured to invoke the instructions in the memory to perform the following method:
    obtaining a packet identifier of a packet to be sent; and
    determining, according to the packet identifier and port weights of physical ports in an Ethernet port aggregation group, a physical port for sending the packet, wherein the Ethernet port aggregation group contains N physical ports, the Ethernet port aggregation group includes the physical port for sending the packet, the port weight of each physical port in the Ethernet port aggregation group is related to the link rate of the physical link corresponding to that physical port, and N is an integer greater than or equal to 2; and
    the communication interface is configured to send the packet through the determined physical port for sending the packet.
  14. The network element according to claim 13, wherein the port weight of each physical port in the Ethernet port aggregation group being related to the link rate of the physical link corresponding to the physical port comprises:
    the port weight of each physical port in the Ethernet port aggregation group is proportional to the link rate of the physical link corresponding to the physical port.
  15. The network element according to claim 13 or 14, wherein the processor is specifically configured to:
    compute a hash index from the packet identifier, wherein the packet identifier is a packet feature, a packet sequence number or a session identifier ID of the packet, and the packet feature includes at least one of Layer 2 information, Layer 3 information and Layer 4 information; and
    look up a weight segment mapping table with the hash index to obtain the physical port for sending the packet, wherein the weight segment mapping table includes mappings between physical ports and hash index ranges, one physical port corresponds to one hash index range, and the hash index range is proportional to the port weight of the physical port.
  16. The network element according to claim 15, wherein the processor is specifically configured to:
    determine that the hash index belongs to a first hash index range among multiple hash index ranges, wherein the first hash index range corresponds to a first physical port; and
    use the first physical port as the physical port for sending the packet.
  17. The network element according to claim 13 or 14, wherein the processor is specifically configured to:
    compute a hash index from the packet identifier, wherein the packet identifier is a packet feature, a packet sequence number or a session identifier ID of the packet, and the packet feature includes at least one of Layer 2 information, Layer 3 information and Layer 4 information; and
    determine, according to the hash index and a weight link mapping table, the physical port for sending the packet, wherein the weight link mapping table includes mappings between physical ports and weight units, and one physical port corresponds to a number of weight units proportional to the port weight of the physical port.
  18. The network element according to claim 17, wherein the processor is specifically configured to:
    take the hash index modulo a total number of weight units to obtain a remainder M, wherein the total number of weight units is the sum of the weight units corresponding to all the physical ports in the Ethernet port aggregation group, and M is a positive integer;
    look up the weight link mapping table with M to obtain an M-th weight unit;
    determine that the M-th weight unit corresponds to a first physical port; and
    use the first physical port as the physical port for sending the packet.
  19. A computer-readable storage medium, comprising computer software instructions;
    wherein when the instructions run on a network element, the network element is caused to perform the method according to any one of claims 1 to 6.
  20. A computer program product containing instructions, wherein when the instructions run on a network element, the network element is caused to perform the method according to any one of claims 1 to 6.
PCT/CN2017/108685 2017-10-31 2017-10-31 一种分发报文的方法及装置 WO2019084805A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201780093360.2A CN110945844A (zh) 2017-10-31 2017-10-31 一种分发报文的方法及装置
PCT/CN2017/108685 WO2019084805A1 (zh) 2017-10-31 2017-10-31 一种分发报文的方法及装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/108685 WO2019084805A1 (zh) 2017-10-31 2017-10-31 一种分发报文的方法及装置

Publications (1)

Publication Number Publication Date
WO2019084805A1 true WO2019084805A1 (zh) 2019-05-09

Family

ID=66331215

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/108685 WO2019084805A1 (zh) 2017-10-31 2017-10-31 一种分发报文的方法及装置

Country Status (2)

Country Link
CN (1) CN110945844A (zh)
WO (1) WO2019084805A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111711577B (zh) * 2020-07-24 2022-07-22 杭州迪普信息技术有限公司 流控设备的报文转发方法及装置
CN113098790B (zh) * 2021-03-26 2022-06-21 新华三信息安全技术有限公司 一种流量转发方法及装置
CN117978851B (zh) * 2024-03-29 2024-06-07 苏州元脑智能科技有限公司 会话连接方法、交互方法、装置、设备及介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101022456A (zh) * 2007-03-22 2007-08-22 华为技术有限公司 一种链路聚合方法、端口负载均衡方法及其装置
CN101056257A (zh) * 2006-04-14 2007-10-17 中兴通讯股份有限公司 实现链路聚合和保护倒换的方法及系统
US20130235876A1 (en) * 2012-03-09 2013-09-12 Cisco Technology, Inc. Managing hierarchical ethernet segments
CN103905326A (zh) * 2012-12-28 2014-07-02 迈普通信技术股份有限公司 以太网链路聚合的报文转发控制方法及网络设备
US20150020724A1 (en) * 2001-09-28 2015-01-22 Robert A. Morvillo Method and apparatus for controlling a waterjet-driven marine vessel

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8867560B2 (en) * 2012-07-30 2014-10-21 Cisco Technology, Inc. Managing crossbar oversubscription
CN105939283B (zh) * 2016-03-17 2019-03-15 杭州迪普科技股份有限公司 网络流量分流的方法及装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150020724A1 (en) * 2001-09-28 2015-01-22 Robert A. Morvillo Method and apparatus for controlling a waterjet-driven marine vessel
CN101056257A (zh) * 2006-04-14 2007-10-17 中兴通讯股份有限公司 实现链路聚合和保护倒换的方法及系统
CN101022456A (zh) * 2007-03-22 2007-08-22 华为技术有限公司 一种链路聚合方法、端口负载均衡方法及其装置
US20130235876A1 (en) * 2012-03-09 2013-09-12 Cisco Technology, Inc. Managing hierarchical ethernet segments
CN103905326A (zh) * 2012-12-28 2014-07-02 迈普通信技术股份有限公司 以太网链路聚合的报文转发控制方法及网络设备

Also Published As

Publication number Publication date
CN110945844A (zh) 2020-03-31

Similar Documents

Publication Publication Date Title
US9143441B2 (en) Sliced routing table management
US8804572B2 (en) Distributed switch systems in a trill network
US8913613B2 (en) Method and system for classification and management of inter-blade network traffic in a blade server
CN113326228B (zh) 基于远程直接数据存储的报文转发方法、装置及设备
US10693790B1 (en) Load balancing for multipath group routed flows by re-routing the congested route
US20210160350A1 (en) Generating programmatically defined fields of metadata for network packets
US10666564B2 (en) Increasing entropy across routing table segments
US9762493B2 (en) Link aggregation (LAG) information exchange protocol
US10547547B1 (en) Uniform route distribution for a forwarding table
US10263896B1 (en) Methods and apparatus for load balancing communication sessions
US10819640B1 (en) Congestion avoidance in multipath routed flows using virtual output queue statistics
US10044625B2 (en) Hash level load balancing for deduplication of network packets
US9906443B1 (en) Forwarding table updates during live packet stream processing
WO2014134919A1 (zh) 同一租户内服务器间的通信控制方法及网络设备
WO2019084805A1 (zh) 一种分发报文的方法及装置
WO2015113435A1 (zh) 基于并行协议栈实例的数据包处理方法和装置
US20240195749A1 (en) Path selection for packet transmission
US10887234B1 (en) Programmatic selection of load balancing output amongst forwarding paths
US10616116B1 (en) Network traffic load balancing using rotating hash
US8634417B2 (en) Method and apparatus providing selective flow redistribution across Multi Link Trunk/Link Aggregation Group (MLT/LAG) after port member failure and recovery
CN116886621B (zh) 报文转发控制方法、dpu及相关设备
WO2015039616A1 (zh) 一种报文处理方法及设备
WO2023011153A1 (zh) 负载均衡的哈希算法信息的确定方法、装置及存储介质
US10771537B2 (en) Technologies for scrambling in load balancers
WO2015143981A1 (zh) 一种报文转发方法、系统及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17930703

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17930703

Country of ref document: EP

Kind code of ref document: A1