WO2017128945A1 - Method and apparatus for allocating service traffic - Google Patents

Method and apparatus for allocating service traffic

Info

Publication number
WO2017128945A1
Authority
WO
WIPO (PCT)
Prior art keywords
path
leaf node
packet
response
node
Prior art date
Application number
PCT/CN2017/070658
Other languages
English (en)
French (fr)
Inventor
鞠文彬
姚学军
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. filed Critical Huawei Technologies Co., Ltd.
Priority to JP2018557175A priority Critical patent/JP6608545B2/ja
Priority to EP17743560.9A priority patent/EP3393094A4/en
Publication of WO2017128945A1 publication Critical patent/WO2017128945A1/zh
Priority to US16/045,373 priority patent/US10735323B2/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/06Management of faults, events, alarms or notifications
    • H04L41/0631Management of faults, events, alarms or notifications using root cause analysis; using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis
    • H04L41/065Management of faults, events, alarms or notifications using root cause analysis; using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis involving logical or physical relationship, e.g. grouping and hierarchies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/11Identifying congestion
    • H04L47/115Identifying congestion using a dedicated packet
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/46Interconnection of networks
    • H04L12/4641Virtual LANs, VLANs, e.g. virtual private networks [VPN]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/12Discovery or management of network topologies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/02Topology update or discovery
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/24Multipath
    • H04L45/245Link aggregation, e.g. trunking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/70Routing based on monitoring results
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/12Avoiding congestion; Recovering from congestion
    • H04L47/125Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/41Flow control; Congestion control by acting on aggregated flows or links
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L61/00Network arrangements, protocols or services for addressing or naming
    • H04L61/50Address allocation
    • H04L61/5007Internet protocol [IP] addresses

Definitions

  • the present invention relates to the field of communications, and in particular, to a method and an apparatus for allocating traffic.
  • LAG: link aggregation group
  • the server 1 needs to send a message to the server 5 via the leaf node 2, the backbone node 3, and the leaf node 4.
  • When the leaf node 2 receives the packet sent by the server 1, if the leaf node 2 finds, in its stored flow table, the entry corresponding to the service flow to which the packet belongs (the entry including the identifier of the service flow, the outbound interface, and the timestamp of the first packet of the service flow reaching the leaf node 2), the leaf node 2 sends the packet through the outbound interface in the entry.
  • If no such entry is found, the leaf node 2 can select an optimal physical link according to the bandwidth usage, the QoS, and the queue length of the physical links supported by the leaf node 2 (for example, the physical link with the lowest bandwidth usage and queue length), and send the packet out through the outbound interface of that physical link.
  • An entry corresponding to the service flow to which the packet belongs is added to the flow table.
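  • The prior-art per-hop behaviour described above can be summarised by a short sketch (Python, with hypothetical names): look up the service flow in the flow table; if an entry exists, reuse its outbound interface; otherwise select the least-loaded local physical link, record a new entry, and send the packet.

```python
import time

# Hypothetical sketch of the prior-art behaviour on leaf node 2: the flow
# table maps a service-flow identifier to an outbound interface and the
# timestamp of the first packet of that flow.
flow_table = {}  # flow_id -> {"out_if": str, "first_ts": float}

def forward(packet, local_links):
    """local_links: e.g. [{"out_if": "eth1", "bw_usage": 0.4, "queue_len": 12}, ...]"""
    flow_id = (packet["src_ip"], packet["dst_ip"],
               packet["src_port"], packet["dst_port"])
    entry = flow_table.get(flow_id)
    if entry is None:
        # Choose the "optimal" local physical link, here assumed to be the
        # one with the lowest bandwidth usage and queue length.
        best = min(local_links, key=lambda l: (l["bw_usage"], l["queue_len"]))
        entry = {"out_if": best["out_if"], "first_ts": time.time()}
        flow_table[flow_id] = entry
    send_via(entry["out_if"], packet)

def send_via(out_if, packet):  # placeholder transmit helper
    print(f"sending {packet['src_ip']}->{packet['dst_ip']} via {out_if}")
```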
  • However, the leaf node 2 can only select an optimal physical link according to the bandwidth utilization and the queue length of the physical links on the leaf node 2 itself; that is, the leaf node 2 can only ensure that the service traffic is balanced between the physical links on the leaf node 2. Because the packet still needs to be sent to the leaf node 4 through the backbone node 3, when the physical link between the backbone node 3 and the leaf node 4 is congested, the packet may be lost on the physical link between the backbone node 3 and the leaf node 4.
  • the embodiments of the present invention provide a method and an apparatus for allocating service traffic, which can ensure that service flows are balanced between paths between the source node and the destination node, so that packets are not lost.
  • an embodiment of the present invention provides a method for allocating service traffic, where the method includes:
  • the first leaf node periodically sends a probe packet through each of the plurality of physical links connecting the backbone nodes on the first leaf node;
  • for each physical link, the first leaf node receives returned response packets through the physical link, where each response packet is replied by the second leaf node after the probe packet sent through that physical link reaches the second leaf node; and for each path, the first leaf node calculates transmission parameters of the path according to the multiple response packets received from the path, to obtain the transmission parameters of each path, where the identifiers of the paths included in the response packets received from the same path are the same, each path includes at least two physical links, and the multiple physical links belong to different paths;
  • the first leaf node allocates service traffic to be transmitted on the multiple physical links according to the transmission parameter of each path.
  • According to the method for allocating service traffic, because the first leaf node can detect the transmission parameters of all the paths between the first leaf node and the second leaf node by sending probe packets, the first leaf node then allocates the traffic to be transmitted according to the transmission parameters of these paths. This not only ensures that the service traffic is balanced between the physical links on the first leaf node, but also ensures that the other physical links on each of the paths achieve service traffic balance, so that the service traffic is balanced between the paths between the source node (for example, the first leaf node) and the destination node (for example, the second leaf node), and packets are not lost.
  • the first leaf node allocates the service traffic to be transmitted on the multiple physical links according to the transmission parameter of each path, including:
  • the first leaf node determines a service traffic allocation ratio on each path according to the transmission parameters of each path, and allocates the service traffic to be transmitted on the multiple physical links according to the service traffic allocation ratio on each path.
  • Because the traffic distribution ratio determined by the first leaf node is the service traffic allocation ratio of all the paths between the first leaf node and the second leaf node (each path includes multiple physical links), when the first leaf node transmits the service traffic to be transmitted, the corresponding traffic to be transmitted is allocated on the multiple physical links on the first leaf node according to this traffic distribution ratio. This not only ensures that the service traffic is balanced between the physical links on the first leaf node, but also ensures that the other physical links on the paths between the first leaf node and the second leaf node achieve service traffic balance, so that packets are not lost when transmitted from the first leaf node to the second leaf node.
  • the transmission parameter of the path includes at least one of a delay of the path, a jitter rate of the path, and a packet loss rate of the path.
  • The first leaf node calculates the transmission parameters of the multiple paths, where the first leaf node calculates the same type of transmission parameters for each path; that is, transmission parameters of the same class are calculated separately for each of the multiple paths.
  • Optionally, before the first leaf node calculates the transmission parameters of a path according to the multiple response packets received from the path, the method further includes: the first leaf node determines, according to the identifier of the path included in the received response packets, the response packets received from the path, to obtain the multiple response packets received from the path.
  • the first leaf node calculates a transmission parameter of the path according to the multiple response packets received from the path, including:
  • the first leaf node calculates a transmission parameter of the path according to a packet feature of the plurality of response messages received from the path.
  • The packet feature of each of the multiple response packets received from the path includes: the identifier of the path, the sequence number of the response packet, the source Internet Protocol (IP) address of the response packet, the destination IP address of the response packet, the four-layer source port number of the response packet, the four-layer destination port number of the response packet, the time when the response packet reaches the first leaf node, and the time when the probe packet corresponding to the response packet leaves the first leaf node.
  • the identifier of the path included in each response packet may be used to identify the path through which the response packet is transmitted.
  • the identifier of the path may be composed of an interface name of the device (for example, the first leaf node in the embodiment of the present invention) that sends the probe packet corresponding to the response packet, or an interface identifier of the device, and a hash value.
  • the hash value may be used to represent another physical link in the path other than the physical link between the first leaf node and the backbone node.
  • the hash value is calculated by the first leaf node by using a hash algorithm, after the network deployment is completed, according to all the paths between the first leaf node and the second leaf node traversed by the first leaf node.
  • The response packets received from a certain path may be used to determine the transmission parameters of that path. Therefore, the first leaf node may calculate the transmission parameters of each path according to the packet features of the multiple response packets received from that path, and in this way the first leaf node obtains the transmission parameters of each path after performing the calculation for every path.
  • the first leaf node determines, according to the transmission parameter of each path, a service traffic allocation ratio on each path, including:
  • the first leaf node separately quantizes transmission parameters of each path
  • the first leaf node determines a service traffic allocation ratio on each path according to the quantized transmission parameters of each path.
  • the first leaf node separately quantizes the transmission parameters of each path, and then the first leaf node determines the service traffic distribution ratio on each path according to the quantized transmission parameters of each path.
  • In this way, the process of determining the traffic distribution ratio can be simplified, reducing the complexity of implementation.
  • Optionally, the method in which the first leaf node determines the service traffic allocation ratio on each path according to the quantized transmission parameters of each path may adopt any one of the following implementations:
  • the first leaf node adds the respective transmission parameters quantized by each path in units of paths to obtain total transmission parameters of each path; the first leaf node further determines each according to the total transmission parameters of each path. The proportion of business traffic distribution on the path.
  • the first leaf node splices the three bits of each transmission parameter quantized by each path into a sequence in units of paths, and the value of the sequence can represent the total transmission parameter of each path; the first leaf node Then, according to the total transmission parameters of each path, the proportion of service traffic allocation on each path is determined.
  • the foregoing method for determining the service traffic distribution ratio can be used to determine the service traffic allocation ratio on each path more flexibly and conveniently, so as to implement the method for allocating the service traffic provided by the embodiment of the present invention flexibly and conveniently.
  • the first leaf node may cyclically perform the foregoing method for allocating service traffic.
  • In this way, the first leaf node can determine the transmission parameters of all the paths between the first leaf node and the second leaf node in real time, thereby improving the accuracy with which the first leaf node allocates the traffic to be transmitted on the multiple physical links on the first leaf node according to these transmission parameters.
  • the embodiment of the present invention provides a device for distributing traffic, the device is a first leaf node, and the device includes:
  • a sending unit configured to periodically send a probe packet by using each physical link of the plurality of physical links connecting the backbone nodes on the first leaf node;
  • a receiving unit, configured to receive, for each physical link, returned response packets through the physical link, where each response packet is replied by the second leaf node after the probe packet sent through the physical link reaches the second leaf node;
  • a calculation unit, configured to calculate, for each path, transmission parameters of the path according to the multiple response packets received by the receiving unit from the path, to obtain the transmission parameters of each path, where the identifiers of the paths included in the response packets received from the same path are the same, each path includes at least two physical links, and the multiple physical links belong to different paths; and
  • an allocating unit configured to allocate, according to the transmission parameter of each path calculated by the computing unit, the service traffic to be transmitted on the multiple physical links.
  • An embodiment of the present invention provides a device for allocating traffic.
  • the device is a first leaf node.
  • Because the first leaf node can detect the transmission parameters of all the paths between the first leaf node and the second leaf node by sending probe packets, the first leaf node allocates the traffic to be transmitted according to the transmission parameters of these paths. This not only ensures that the service traffic is balanced between the physical links on the first leaf node, but also ensures that the other physical links on each of the paths achieve service traffic balance, so that the service traffic is balanced between the paths between the source node (for example, the first leaf node) and the destination node (for example, the second leaf node), and packets are not lost.
  • the allocating unit is specifically configured to determine, according to the transmission parameters of each path calculated by the calculating unit, a service traffic allocation ratio on each path, and allocate the service traffic to be transmitted on the multiple physical links according to the service traffic allocation ratio on each path.
  • the transmission parameter of the path includes at least one of a delay of the path, a jitter rate of the path, and a packet loss rate of the path.
  • the distributing device further includes a determining unit,
  • the determining unit is configured to: before the calculating unit calculates the transmission parameters of the path according to the multiple response packets received from the path, determine, according to the identifier of the path included in the response packets received by the receiving unit, the response packets received from the path, to obtain the multiple response packets received from the path;
  • the calculating unit is specifically configured to calculate a transmission parameter of the path according to a packet feature of the plurality of response messages received from the path determined by the determining unit.
  • the packet feature of each response packet in the multiple response packets received by the receiving unit from the path includes: an identifier of the path, a sequence number of the response packet, and the The source IP address of the response packet, the destination IP address of the response packet, the four-layer source port number of the response packet, the four-layer destination port number of the response packet, and the response packet The time when the text arrives at the first leaf node and the time when the probe packet corresponding to the response packet leaves the first leaf node.
  • the allocating unit is configured to separately quantize transmission parameters of each path calculated by the calculating unit, and determine each of the pieces according to the quantized transmission parameters of each path. The proportion of business traffic distribution on the path.
  • an embodiment of the present invention provides a service traffic distribution apparatus, where the distribution apparatus is a first leaf node, the first leaf node is a switch, and the switch includes a processor, an interface circuit, a memory, and a system bus;
  • the memory is configured to store computer program instructions, and the processor, the interface circuit, and the memory are interconnected by the system bus. When the switch runs, the processor executes the computer program instructions stored in the memory, so that the switch performs the allocation method according to any one of the first aspect and the various optional manners of the first aspect.
  • the embodiment of the present invention provides a device for distributing service traffic.
  • the device is a first leaf node, and the first leaf node is a switch.
  • Because the switch can detect the transmission parameters of all the paths between the switch and other switches by sending probe packets, the switch allocates the traffic to be transmitted according to the transmission parameters of these paths. This not only ensures that the service traffic is balanced between the physical links on the switch, but also ensures that the other physical links on each of the paths achieve service traffic balance, so that the service traffic is balanced between the paths between the switch and the other switches, and packets are not lost.
  • FIG. 1 is a schematic structural diagram of a data center network provided by the prior art
  • FIG. 2 is a schematic structural diagram of a data center network according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a method for allocating service traffic according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram 1 of a service flow distribution apparatus according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram 2 of a service flow distribution apparatus according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of hardware of a service flow distribution apparatus according to an embodiment of the present invention.
  • A/B can be understood as A or B.
  • first and second in the description and claims of the present invention are used to distinguish different objects, rather than to describe a particular order of the objects.
  • first leaf node and the second leaf node are used to distinguish different leaf nodes, rather than to describe the feature order of the leaf nodes.
  • the words “exemplary” or “such as” are used to mean serving as an example, an illustration, or a description. Any embodiment or design described herein as “exemplary” or “for example” should not be construed as preferred or more advantageous than other embodiments or designs. Specifically, the words “exemplary” or “for example” are intended to present concepts in a specific manner.
  • the paths mentioned in the following embodiments of the present invention refer to network paths; for example, the path between the first leaf node and the second leaf node refers to the network path between the first leaf node and the second leaf node.
  • The service traffic allocation method provided by the embodiment of the present invention may be applied to a data center network. The data center network may be a data center network having a two-layer leaf-backbone topology, a data center network having a core-aggregation-access (English: core-aggregation-access) topology, or another data center network with a more complex topology, which is not specifically limited in the embodiment of the present invention.
  • As shown in FIG. 2, the data center network has a two-layer leaf-backbone topology. The data center network in FIG. 2 includes servers (server 10 and server 11), leaf nodes (leaf node 12 and leaf node 13), and backbone nodes (backbone node 14 and backbone node 15), which are used as an example for description. The multiple physical links between a leaf node and the backbone nodes form a LAG (for example, physical link 1, physical link 2, physical link 3, and physical link 4 shown in FIG. 2 form one LAG; physical link 5, physical link 6, physical link 7, and physical link 8 form another LAG).
  • ECMP: equal-cost multipath routing
  • UCMP: unequal-cost multipath routing
  • In the data center network, one leaf node may be a cluster or a stack composed of multiple access switches; for example, the leaf node 12 is composed of the switch 120 and the switch 121.
  • The server 10 and the leaf node 12 can be connected through two physical links (one physical link serving as the primary physical link and the other as the standby physical link, or the two physical links forming a LAG), and the server 11 and the leaf node 13 can be similarly connected through two physical links. When the primary physical link fails, the service to be transmitted can be switched to the standby physical link for transmission, to ensure the continuity of service transmission.
  • the leaf node 12 is connected to the backbone node 14 and the backbone node 15 through a plurality of physical links, and the leaf node 13 and the backbone node 14 and the backbone node 15 are also connected by a plurality of physical links.
  • When the server 10 needs to send traffic to the server 11, the traffic needs to be forwarded by the leaf node 12, the backbone node 14 or the backbone node 15, and the leaf node 13. In order to share the load, the leaf node 12 may need to allocate the traffic to different paths for transmission, where each path is composed of multiple segments of physical links from the leaf node 12 to the leaf node 13. Assuming that the physical link between the server 10 and the switch 120 in the leaf node 12 is the primary physical link, when the server 10 transmits the service to the leaf node 12 through the primary physical link, the multiple paths are:
  • Path 1: switch 120 in leaf node 12 - backbone node 14 - switch 130 in leaf node 13 (i.e., physical link 1 + physical link 5 in Figure 2);
  • Path 2: switch 120 in leaf node 12 - backbone node 14 - switch 131 in leaf node 13 (i.e., physical link 1 + physical link 6 in Figure 2);
  • Path 3: switch 120 in leaf node 12 - backbone node 15 - switch 130 in leaf node 13 (i.e., physical link 2 + physical link 7 in Figure 2);
  • Path 4: switch 120 in leaf node 12 - backbone node 15 - switch 131 in leaf node 13 (i.e., physical link 2 + physical link 8 in Figure 2);
  • Path 5: switch 121 in leaf node 12 - backbone node 14 - switch 130 in leaf node 13 (i.e., physical link 3 + physical link 5 in Figure 2);
  • Path 6: switch 121 in leaf node 12 - backbone node 14 - switch 131 in leaf node 13 (i.e., physical link 3 + physical link 6 in Figure 2);
  • Path 7: switch 121 in leaf node 12 - backbone node 15 - switch 130 in leaf node 13 (i.e., physical link 4 + physical link 7 in Figure 2);
  • Path 8: switch 121 in leaf node 12 - backbone node 15 - switch 131 in leaf node 13 (i.e., physical link 4 + physical link 8 in Figure 2).
  • Each path between the leaf node 12 and the leaf node 13 is composed of two physical links (or possibly more segments in actual applications). In the prior art, the leaf node 12 can only select an optimal physical link for the packets in the service traffic to be forwarded according to the bandwidth utilization and the queue length of the physical links on the leaf node 12, so the leaf node 12 can only ensure that the service traffic is balanced between the physical links on the leaf node 12. As a result, the packets in the service traffic may still be lost when transmitted on the path from the leaf node 12 to the leaf node 13.
  • a method for allocating service traffic is provided.
  • In this method, the first leaf node sends probe packets to the second leaf node, and the second leaf node that receives a probe packet replies a response packet to the first leaf node that sent the probe packet. The first leaf node then calculates the transmission parameters of each path (each path including the multiple physical links that constitute it) according to the response packets received by the first leaf node through each path, and finally allocates the traffic to be transmitted on the physical links on the first leaf node according to the calculated transmission parameters of each path.
  • In this way, the leaf node can detect the transmission parameters of all the paths between the leaf node and other leaf nodes by sending probe packets, and then allocate the service traffic to be transmitted according to the transmission parameters of these paths. This not only ensures that the service traffic is balanced between the physical links on the leaf node, but also ensures that the other physical links on each path achieve service traffic balance, so that the service traffic is balanced between the paths between the source node (for example, the first leaf node) and the destination node (for example, the second leaf node), and packets are not lost.
  • the embodiment of the present invention provides a method for allocating service traffic.
  • the method for allocating the service traffic may include:
  • the first leaf node periodically sends a probe packet by using each physical link of the multiple physical links connecting the backbone node on the first leaf node.
  • the plurality of physical links on the first leaf node include a physical link between the first leaf node and each of the at least one backbone node.
  • the multiple physical links on the first leaf node may specifically include physical link 1, physical link 2, physical link 3, and physical link 4 as shown in FIG. 2, where the first leaf node is During each transmission period, the probe packet is sent through the physical link 1, the physical link 2, the physical link 3, and the physical link 4, that is, the probe packet is sent 4 times.
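  • A minimal sketch (Python, hypothetical names and period length) of this periodic probing: in every transmission period the first leaf node sends one probe packet through each of the physical links that connect it to the backbone nodes.

```python
import time

PHYSICAL_LINKS = ["physical_link_1", "physical_link_2",
                  "physical_link_3", "physical_link_4"]
PROBE_PERIOD_S = 1.0  # assumed length of one transmission period

def send_probe(link, seq):
    # Placeholder for building and sending a VXLAN probe packet; the probe
    # carries a sequence number and a departure timestamp.
    print(f"probe seq={seq} sent on {link} at {time.time():.6f}")

def probe_loop(periods):
    seq = 0
    for _ in range(periods):
        for link in PHYSICAL_LINKS:   # one probe per physical link per period
            seq += 1
            send_probe(link, seq)
        time.sleep(PROBE_PERIOD_S)

probe_loop(periods=2)
```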
  • the first leaf node in the embodiment of the present invention may be a leaf node 12 in the data center network as shown in FIG. 2; the at least one backbone node may be a backbone node in the data center network as shown in FIG. 2 14 and backbone node 15.
  • the detection packet may be a packet supporting a virtual extensible local area network (VXLAN) protocol, and the specific format thereof is the same as that of the VXLAN protocol packet in the prior art.
  • VXLAN: virtual extensible local area network
  • In the embodiment of the present invention, an identifier may be set in the probe packet, where the identifier is used to identify the source node that sends the probe packet. The identifier may be set according to actual use requirements, which is not specifically limited in the present invention.
  • Each backbone node receives a probe packet sent by the first leaf node by connecting to a physical link of the backbone node.
  • Each backbone node sends the detected probe packet to the second leaf node by using a physical link that connects the second leaf node.
  • the second leaf node in the embodiment of the present invention may be the leaf node 13 in the data center network as shown in FIG. 2 .
  • Since there may be multiple physical links between each backbone node and the second leaf node, in order to balance the service traffic among these physical links, the backbone node may send the probe packet to the second leaf node by using a symmetric hash service traffic distribution method (also called a symmetric hash load sharing method). The first leaf node may calculate, by using a hash algorithm, a hash value corresponding to each path according to all the paths between the first leaf node and the second leaf node traversed by the first leaf node, and each hash value may be used in the saved forwarding table, which includes the correspondence between the hash value and the physical link through which the backbone node connects to the second leaf node, so that the backbone node can determine the physical link through which to forward the probe packet.
  • the second leaf node receives the probe packet sent by each backbone node by using multiple physical links on the second leaf node.
  • the plurality of physical links on the second leaf node include a physical link between the second leaf node and each of the at least one backbone node.
  • the multiple physical links on the second leaf node may specifically include the physical link 5, the physical link 6, the physical link 7, and the physical link 8 as shown in FIG. 2.
  • the second leaf node generates a response packet corresponding to each probe packet.
  • the method for the second leaf node to generate a response packet corresponding to the probe packet according to the received probe packet may be:
  • Exemplarily, the second leaf node may exchange the source media access control (English: media access control, abbreviated: MAC) address and the destination MAC address in the probe packet to generate the response packet corresponding to the probe packet.
  • Alternatively, the second leaf node may exchange the source MAC address and the destination MAC address in the probe packet, exchange the source IP address and the destination IP address in the probe packet, and exchange the four-layer source port number and the four-layer destination port number in the probe packet, to generate the response packet corresponding to the probe packet.
  • the second leaf node may exchange the source MAC address and the destination MAC address in the probe packet, and exchange the source device identifier and the destination device identifier in the probe packet to generate a corresponding packet corresponding to the probe packet. Response message.
  • It should be noted that, except for the fields exchanged by the second leaf node, the values of the other fields in the response packet remain the same as those in the probe packet.
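  • A minimal sketch (Python, hypothetical field names) of how the second leaf node could build the response packet in the Layer 3 case described above: swap the source and destination MAC addresses, IP addresses, and four-layer port numbers, and leave every other field unchanged.

```python
def build_response(probe):
    """probe: dict of header fields of a received probe packet."""
    response = dict(probe)  # all other fields keep the probe's values
    response["src_mac"], response["dst_mac"] = probe["dst_mac"], probe["src_mac"]
    response["src_ip"], response["dst_ip"] = probe["dst_ip"], probe["src_ip"]
    response["src_port"], response["dst_port"] = probe["dst_port"], probe["src_port"]
    return response

probe = {
    "src_mac": "leaf12-mac", "dst_mac": "spine14-mac",
    "src_ip": "10.0.0.12", "dst_ip": "10.0.0.13",
    "src_port": 49152, "dst_port": 4789,
    "path_id": "eth1#hash7", "seq": 42,
}
print(build_response(probe))
```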
  • The virtual local area network (English: virtual local area network, abbreviated: VLAN), the broadcast domain, or the IP network segment to which the first leaf node and the second leaf node belong may be different. The network between a leaf node (including the first leaf node and the second leaf node) and the backbone node with which it interacts therefore differs, so the method by which the second leaf node generates the response packet corresponding to a received probe packet also varies.
  • Exemplarily, when the VLAN, the broadcast domain, or the IP network segment to which the first leaf node and the second leaf node belong is the same, the network between a leaf node and the backbone node with which it interacts is a Layer 2 network or a transparent interconnection of lots of links (English: transparent interconnection of lots of links, abbreviated: TRILL) network; when the VLAN, the broadcast domain, or the IP network segment to which the first leaf node and the second leaf node belong are different, the network between a leaf node and the backbone node with which it interacts is a Layer 3 network.
  • When the network between the leaf node and the backbone node is a Layer 2 network, the second leaf node generates the response packet corresponding to a received probe packet by exchanging the source MAC address and the destination MAC address in the probe packet.
  • When the network between the leaf node and the backbone node is a Layer 3 network, the second leaf node generates the response packet corresponding to a received probe packet by exchanging the source MAC address and the destination MAC address in the probe packet, exchanging the source IP address and the destination IP address in the probe packet, and exchanging the four-layer source port number and the four-layer destination port number in the probe packet.
  • When the network between the leaf node and the backbone node is a TRILL network, the second leaf node generates the response packet corresponding to a received probe packet by exchanging the source MAC address and the destination MAC address in the probe packet and exchanging the source device identifier and the destination device identifier in the probe packet.
  • In addition to the three networks listed above, the network between the leaf node and the backbone node may also be a cluster system network, a virtual private LAN service (English: virtual private lan service, abbreviated: VPLS) network, a VXLAN, a network virtualization using generic routing encapsulation (English: network virtualization using generic routing encapsulation, abbreviated: NVGRE) network, a stateless transport tunneling (STT) network, or the like, which are not detailed in this embodiment.
  • the response packet may be a packet supporting the VXLAN protocol, and the specific format thereof is the same as the format of the VXLAN protocol packet in the prior art.
  • the address and port number of the probe packet and the response packet are described in detail below.
  • For example, the probe packet travels from the leaf node 12 to the backbone node 14 via the physical link 1, and then to the leaf node 13 via the physical link 5 or the physical link 6; the response packet is returned along the original path. From the leaf node 12 to the backbone node 14, the source MAC address of the probe packet is the MAC address of the leaf node 12, the destination MAC address is the MAC address of the backbone node 14, the source IP address is the IP address of the leaf node 12, and the destination IP address is the IP address of the leaf node 13. From the backbone node 14 to the leaf node 13, the source MAC address of the probe packet is the MAC address of the backbone node 14, the destination MAC address is the MAC address of the leaf node 13, and the source IP address, the destination IP address, the four-layer source port number, and the four-layer destination port number are the same as those of the probe packet from the leaf node 12 to the backbone node 14.
  • The probe packet also includes a time to live (English: time to live, abbreviated: TTL) field. Each time the probe packet is forwarded, the TTL value in the TTL field of the probe packet is decreased by one, and the value of the IP header checksum field (English: checksum) of the probe packet and the check field (English: CRC) of the packet are updated synchronously.
  • The source MAC address and the destination MAC address of the response packet corresponding to the foregoing probe packet are the reverse of the source MAC address and the destination MAC address of the probe packet; the source IP address and the destination IP address of the response packet are the destination IP address and the source IP address of the probe packet, respectively; and the four-layer source port number and the four-layer destination port number of the response packet are the reverse of the four-layer source port number and the four-layer destination port number of the probe packet.
  • the second leaf node sends a response packet by using multiple physical links connecting the backbone node on the second leaf node.
  • Specifically, the second leaf node may send the response packet out through the port on which the corresponding probe packet was received, that is, the response packet can be returned to the first leaf node along the transmission path of the probe packet, so that the first leaf node can accurately calculate the transmission parameters of each path between the first leaf node and the second leaf node.
  • Each backbone node receives a response packet sent by the second leaf node by connecting to a physical link of the backbone node.
  • Each backbone node sends the response packets it receives to the first leaf node by using a physical link that connects the first leaf node.
  • Specifically, the symmetric hash service traffic distribution method may be used, that is, when the backbone node receives a probe packet sent by the first leaf node, the backbone node records the physical link through which the probe packet was received; the backbone node then sends each of the response packets it receives to the first leaf node through the physical link through which the probe packet corresponding to that response packet was transmitted (that is, each response packet is returned to the first leaf node along the transmission path of its corresponding probe packet). In this way, it is ensured that a probe packet and its corresponding response packet are transmitted on the same path, thereby improving the accuracy with which the first leaf node calculates the transmission parameters of each path between the first leaf node and the second leaf node.
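  • A minimal sketch (Python, hypothetical names) of this symmetric behaviour on a backbone node: record the physical link on which each probe packet arrives, and return the corresponding response packet through that same link.

```python
# key: identifies a probe (here the path identifier plus sequence number);
# value: the physical link on which that probe was received.
probe_inbound_link = {}

def on_probe_from_first_leaf(probe, inbound_link):
    probe_inbound_link[(probe["path_id"], probe["seq"])] = inbound_link
    forward_towards_second_leaf(probe)

def on_response_from_second_leaf(response):
    link = probe_inbound_link.pop((response["path_id"], response["seq"]))
    send_on_link(link, response)  # response returns on the recorded link

def forward_towards_second_leaf(packet):  # placeholder forwarding helper
    print("forwarding probe", packet["seq"], "towards the second leaf node")

def send_on_link(link, packet):           # placeholder transmit helper
    print("response", packet["seq"], "returned on", link)

on_probe_from_first_leaf({"path_id": "eth1#hash7", "seq": 1}, "link-to-leaf12")
on_response_from_second_leaf({"path_id": "eth1#hash7", "seq": 1})
```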
  • the first leaf node receives the response packet sent by each backbone node by using multiple physical links on the first leaf node.
  • the first leaf node calculates transmission parameters of the path according to the multiple response messages received from the path, to obtain transmission parameters of each path.
  • the identifiers of the paths included in the response packets received on the same path are the same.
  • The multiple physical links on the first leaf node belong to different paths, and the multiple physical links on the second leaf node also belong to different paths; one physical link on the first leaf node and one physical link on the second leaf node belong to the same path. That is, exemplarily, based on the data center network shown in FIG. 2, one physical link on the first leaf node (for example, the leaf node 12) and one physical link on the second leaf node (for example, the leaf node 13) form one path.
  • the transmission parameter of each path may include at least one of a delay of the path, a jitter rate of the path, and a packet loss rate of the path.
  • the first leaf node may calculate the transmission parameter of the path 1 according to the response packet received from the path 1; and calculate the transmission parameter of the path 2 according to the response message received from the path 2; ...; based on the response message it received from path 8, the transmission parameters of path 8 are calculated. In this way, the first leaf node can obtain the transmission parameters of the eight paths.
  • the first leaf node allocates service traffic to be transmitted on multiple physical links on the first leaf node according to transmission parameters of each path.
  • After the first leaf node calculates the transmission parameters of each path between the first leaf node and the second leaf node, the first leaf node may reasonably allocate the traffic to be transmitted on the multiple physical links on the first leaf node according to the transmission parameters of these paths. For example, if the transmission parameters of each path include the delay of the path, the jitter rate of the path, and the packet loss rate of the path, the first leaf node may allocate the traffic to be transmitted on the multiple physical links on the first leaf node according to the delay of each path, the jitter rate of each path, the packet loss rate of each path, and the actual transmission requirement of the traffic to be transmitted.
  • For example, suppose there are four paths between the first leaf node and the second leaf node, namely path 1, path 2, path 3, and path 4.
  • the transmission parameters of path 1 are: delay 1, jitter rate 1 and packet loss rate 1; the transmission parameters of path 2 are: delay 2, jitter rate 2 and packet loss rate 2; the transmission parameters of path 3 are respectively Delay 3, jitter rate 3, and packet loss rate 3; the transmission parameters of path 4 are: delay 4, jitter rate 4, and packet loss rate 4.
  • Suppose further that the delay 1 and the delay 2 are greater than a preset delay threshold, and the delay 3 and the delay 4 are smaller than the delay threshold; the jitter rate 1 is greater than a preset jitter rate threshold, and the jitter rate 2, the jitter rate 3, and the jitter rate 4 are smaller than the jitter rate threshold; the packet loss rate 1, the packet loss rate 2, and the packet loss rate 4 are smaller than a preset packet loss rate threshold, and the packet loss rate 3 is greater than the packet loss rate threshold. Then, if the service traffic to be transmitted is mainly sensitive to packet loss, the first leaf node may allocate the service traffic to be transmitted to the path 1, the path 2, and the path 4 (specifically, the first leaf node may allocate the service traffic to be transmitted to the physical links on the first leaf node that belong to the path 1, the path 2, and the path 4 for transmission).
  • the above-mentioned delay threshold, jitter rate threshold, and packet loss rate threshold are all determined according to the actual network architecture and the transmission environment of the network, and can be set by themselves in practical applications.
  • the first leaf node periodically sends a probe packet through each physical link of the plurality of physical links connecting the backbone nodes on the first leaf node; For each physical link, the first leaf node receives the returned response packet through the physical link, and each response packet is sent by the second leaf node after the probe packet sent by the physical link reaches the second leaf node. Then, for each path, the first leaf node calculates transmission parameters of the path according to the plurality of response messages received from the path to obtain transmission parameters of each path; and finally the first leaf node according to each The transmission parameters of the path, the service traffic to be transmitted is allocated on the multiple physical links.
  • According to the method for allocating service traffic, because the first leaf node can detect the transmission parameters of all the paths between the first leaf node and the second leaf node by sending probe packets, the first leaf node then allocates the traffic to be transmitted according to the transmission parameters of these paths. This not only ensures that the service traffic is balanced between the physical links on the first leaf node, but also ensures that the other physical links on each of the paths achieve service traffic balance, so that the service traffic is balanced between the paths between the source node (for example, the first leaf node) and the destination node (for example, the second leaf node), and packets are not lost.
  • the foregoing S111 may specifically include:
  • The first leaf node determines a service traffic allocation ratio on each path according to the transmission parameters of each path, and then allocates the traffic to be transmitted on the multiple physical links on the first leaf node according to the service traffic allocation ratio on each path. Specifically, after calculating the transmission parameters of the paths, the first leaf node may determine the service traffic distribution ratio on each path according to the transmission parameters of these paths.
  • Take the packet loss rate as an example of the transmission parameter of each path. The first leaf node may determine that a path whose packet loss rate is greater than or equal to a preset packet loss rate threshold is relatively congested, and that a path whose packet loss rate is smaller than the packet loss rate threshold is relatively idle. Therefore, the first leaf node can set a lower traffic distribution ratio on a path whose packet loss rate is greater than or equal to the packet loss rate threshold, and a higher traffic distribution ratio on a path whose packet loss rate is smaller than the packet loss rate threshold. For example, suppose there are two paths, namely path 1 and path 2, the packet loss rate of path 1 is 1%, the packet loss rate of path 2 is 5%, and the preset packet loss rate threshold is 2%. The first leaf node can then determine that the traffic distribution ratio on path 1 is 90%, and the traffic distribution ratio on path 2 is 10%.
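  • The 90%/10% split above is stated directly by the example; the sketch below (Python) merely reproduces that behaviour for two paths under an assumed fixed share for a congested path, and is not an allocation formula prescribed by the embodiment.

```python
LOSS_THRESHOLD = 0.02    # preset packet loss rate threshold (2%)
CONGESTED_SHARE = 0.10   # assumed share given to a congested path

def split_two_paths(loss_1, loss_2):
    """The path whose loss rate is >= the threshold is treated as congested
    and receives the smaller share of the service traffic."""
    congested_1 = loss_1 >= LOSS_THRESHOLD
    congested_2 = loss_2 >= LOSS_THRESHOLD
    if congested_1 == congested_2:           # both congested or both idle
        return 0.5, 0.5
    return ((CONGESTED_SHARE, 1 - CONGESTED_SHARE) if congested_1
            else (1 - CONGESTED_SHARE, CONGESTED_SHARE))

# Example from the text: path 1 loses 1%, path 2 loses 5%, threshold 2%.
print(split_two_paths(0.01, 0.05))   # -> (0.9, 0.1)
```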
  • It should be noted that the foregoing descriptions of the transmission parameters of each path and of the traffic distribution ratio on each path are merely examples. That is, in the method for allocating service traffic provided by the embodiment of the present invention, the selection of the transmission parameters of each path and the determination of the service traffic allocation ratio on each path may be set according to actual use requirements, which is not specifically limited in the present invention.
  • The first leaf node may then allocate the traffic to be transmitted on the multiple physical links on the first leaf node according to the service traffic allocation ratio on each path. Because the traffic distribution ratio determined by the first leaf node is the traffic distribution ratio of all the paths between the first leaf node and the second leaf node (each path includes multiple physical links), when the first leaf node transmits the traffic to be transmitted, the corresponding service traffic to be transmitted can be allocated on the multiple physical links on the first leaf node according to this traffic distribution ratio. This not only ensures that the service traffic is balanced between the physical links on the first leaf node, but also ensures that the other physical links on the paths between the first leaf node and the second leaf node achieve service traffic balance, thereby preventing packets from being lost when transmitted from the first leaf node to the second leaf node.
  • the method for allocating the service traffic may further include:
  • the first leaf node determines the response packet received from the path according to the identifier of the path included in the received response packet, and obtains multiple response packets received on the path.
  • Specifically, the first leaf node may determine, according to the identifiers of the paths included in the response packets received by the first leaf node, the multiple response packets received from each path.
  • S110 may specifically include:
  • the first leaf node calculates a transmission parameter of the path according to the packet characteristics of the plurality of response messages received from the path.
  • the packet feature of each response packet in the multiple response packets received from each path includes: an identifier of the path, a sequence number of the response packet, and the The source IP address of the response packet, the destination IP address of the response packet, the four-layer source port number of the response packet, the four-layer destination port number of the response packet, and the response packet arrives at the first leaf node. The time and the time when the probe packet corresponding to the response packet leaves the first leaf node.
  • the identifier of the path included in each response packet may be used to identify the path through which the response packet is transmitted.
  • the identifier of the path may be composed of an interface name of the device (for example, the first leaf node in the embodiment of the present invention) that sends the probe packet corresponding to the response packet, or an interface identifier of the device, and a hash value.
  • the interface name of the device may be the name of an access switch in the first leaf node; the interface identifier of the device may be an identifier of an access switch in the first leaf node; the hash value It can be used to represent another physical link in the path except the physical link between the first leaf node and the backbone node, and the hash value is the first leaf node according to the traversal of the first leaf node after the network deployment is completed. All paths between the leaf node and the second leaf node are calculated by a hash algorithm. For details, refer to the related description of the hash value in the above S103, and details are not described herein again.
  • The case in which the identifier of the path includes one hash value is described only by using the data center network with the two-layer topology shown in FIG. 2 as an example. For a data center network with a more complex topology, the identifier of the path may include multiple hash values, and each hash value corresponds to a physical link in the path other than the first physical link (that is, the first physical link through which the probe packet passes). The implementation principle is similar to that of the data center network with the two-layer topology shown in FIG. 2, and details are not described herein again.
  • The response packets received from a certain path may be used to determine the transmission parameters of that path. Therefore, the first leaf node may calculate the transmission parameters of each path according to the packet features of the multiple response packets received from that path, and in this way the first leaf node obtains the transmission parameters of each path after performing the calculation for every path.
  • The delay may be obtained by the first leaf node calculating the time difference between sending a probe packet and receiving the response packet corresponding to the probe packet (specifically, the time when the response packet reaches the first leaf node minus the time when the probe packet corresponding to the response packet leaves the first leaf node); the jitter rate may be obtained by the first leaf node dividing the delay difference between two adjacent packets by the difference between the sequence numbers of the two packets; and the packet loss rate may be obtained by the first leaf node dividing the difference between the number of probe packets sent by the first leaf node and the number of response packets received by the first leaf node by the number of probe packets sent by the first leaf node.
  • The time when the probe packet leaves the first leaf node and the time when the response packet reaches the first leaf node, which are used when the first leaf node calculates the delay, can be carried in the packets. Specifically, the first leaf node may add a timestamp to the probe packet to indicate the time when the probe packet leaves the first leaf node; after receiving the response packet, the first leaf node may also add a timestamp to the response packet to indicate the time when the response packet arrives at the first leaf node.
  • The delay difference between two adjacent packets used when the first leaf node calculates the jitter rate may be obtained by subtracting the delays of the two packets; the sequence numbers of the two packets are added by the first leaf node to the probe packets. The sequence number of each packet is used to uniquely identify the packet and can be carried in the packet.
  • The number of probe packets sent by the first leaf node and the number of response packets received by the first leaf node, which are used when the first leaf node calculates the packet loss rate, may be obtained by the first leaf node through counting.
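  • A minimal sketch (Python, hypothetical record layout) of the three calculations just described: the delay is the response arrival time minus the probe departure time; the jitter rate is the delay difference of two adjacent packets divided by their sequence-number difference; and the packet loss rate is (probes sent − responses received) / probes sent.

```python
# Each received response is summarised as
# (sequence number, probe departure time, response arrival time),
# both times being taken at the first leaf node.
responses = [
    (1, 0.000000, 0.000120),
    (2, 1.000000, 1.000180),
    (4, 3.000000, 3.000150),   # the response with sequence number 3 was lost
]
probes_sent = 4

delays = [(seq, arrive - depart) for seq, depart, arrive in responses]

# Jitter rate between adjacent received packets; taking the absolute value
# is an assumption, the text only speaks of the "delay difference".
jitter_rates = [
    abs(d2 - d1) / (s2 - s1)
    for (s1, d1), (s2, d2) in zip(delays, delays[1:])
]

loss_rate = (probes_sent - len(responses)) / probes_sent

print("delays:", delays)
print("jitter rates:", jitter_rates)
print("packet loss rate:", loss_rate)
```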
  • the first leaf node determines, according to the transmission parameter of each path, the service traffic allocation ratio on each path, which may include:
  • the first leaf node separately quantizes the transmission parameters of each path; and the first leaf node determines the service traffic allocation ratio on each path according to the quantized transmission parameters of each path.
  • Specifically, the first leaf node may first quantize the transmission parameters of each path separately, and then determine the service traffic distribution ratio on each path according to the quantized transmission parameters of each path.
  • the first leaf node may quantize the transmission parameters of each path in multiple implementation manners.
  • the following takes a possible implementation as an example, and exemplifies the first leaf node to quantize the transmission parameters of each path.
  • Exemplarily, a correspondence between the delay and a delay quantized value, a correspondence between the jitter rate and a jitter rate quantized value, and a correspondence between the packet loss rate and a packet loss rate quantized value may be preset. The correspondence between the delay and the delay quantized value can be expressed by the following Table 1, the correspondence between the jitter rate and the jitter rate quantized value can be expressed by Table 2, and the correspondence between the packet loss rate and the packet loss rate quantized value can be expressed by the following Table 3.
  • Table 1
    Delay (μs)        Delay quantized value
    0 to 1.5          0
    1.5 to 3.0        1
    3.0 to 6.0        2
    6.0 to 15.0       3
    15.0 to 30.0      4
    30.0 to 50.0      5
    50.0 to 100.0     6
    >100.0            7
  • exemplarily, referring to Table 1, if the first leaf node calculates that the delay of a path is 7.0 μs, the first leaf node may quantize the delay to 3.
Table 2
Jitter rate (μs)    Jitter rate quantized value
0~1.0               0
1.0~2.0             1
2.0~8.0             2
8.0~32.0            3
32.0~128.0          4
128.0~512.0         5
512.0~1024.0        6
>1024.0             7
  • exemplarily, referring to Table 2, if the first leaf node calculates that the jitter rate of a path is 7.0 μs, the first leaf node may quantize the jitter rate to 2.
Table 3
Packet loss rate (10^-6)    Packet loss rate quantized value
<1          0
<10         1
<100        2
<1000       3
<10000      4
<100000     5
<500000     6
≥500000     7
  • exemplarily, referring to Table 3, if the first leaf node calculates that the packet loss rate of a path is 10000×10^-6 = 0.01 (that is, 1%), the first leaf node may quantize the packet loss rate to 4.
  • the delay quantization value, the jitter rate quantization value, and the packet loss rate quantization value may all be represented by three bits.
  • for example, 0 can be represented as 000; 1 can be represented as 001; 2 can be represented as 010; 3 can be represented as 011; 4 can be represented as 100; 5 can be represented as 101; 6 can be represented as 110; and 7 can be represented as 111.
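  • a minimal sketch of this table-driven quantization is given below, assuming the bucket boundaries of Tables 1 to 3; the function names and the handling of values that fall exactly on a boundary are assumptions made for the example, since the tables do not state which bucket a boundary value belongs to.

```python
# Upper bounds of the buckets; the index of the first bound the value is below is the
# quantized value, and anything beyond the last bound falls into the catch-all bucket 7.
DELAY_BOUNDS_US = [1.5, 3.0, 6.0, 15.0, 30.0, 50.0, 100.0]        # Table 1
JITTER_BOUNDS_US = [1.0, 2.0, 8.0, 32.0, 128.0, 512.0, 1024.0]    # Table 2
LOSS_BOUNDS_1E_6 = [1, 10, 100, 1000, 10000, 100000, 500000]      # Table 3

def quantize(value: float, bounds: list) -> int:
    for q, bound in enumerate(bounds):
        if value < bound:
            return q
    return len(bounds)  # quantized value 7

def to_bits(q: int) -> str:
    # each quantized value is represented with three bits, for example 5 -> '101'
    return format(q, "03b")

# Example from the text: a delay of 7.0 us falls into the 6.0~15.0 bucket, so it quantizes to 3.
assert quantize(7.0, DELAY_BOUNDS_US) == 3 and to_bits(3) == "011"
```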
  • the method for determining, by the first leaf node, the service traffic allocation ratio on each path according to the quantized transmission parameters of each path may be one of the following:
  • (1) the first leaf node, on a per-path basis, adds up the quantized transmission parameters of each path to obtain the total transmission parameter of each path; the first leaf node then determines the service traffic allocation ratio on each path according to the total transmission parameter of each path.
  • exemplarily, suppose there are three paths, namely path 1, path 2, and path 3, and the transmission parameters of each of the three paths include the delay, the jitter rate, and the packet loss rate; the delay quantized value of path 1 is 2, the jitter rate quantized value of path 1 is 5, and the packet loss rate quantized value of path 1 is 3; the delay quantized value of path 2 is 5, the jitter rate quantized value of path 2 is 2, and the packet loss rate quantized value of path 2 is 0; the delay quantized value of path 3 is 3, the jitter rate quantized value of path 3 is 0, and the packet loss rate quantized value of path 3 is 2. The first leaf node may then add the quantized values of path 1 to obtain the total transmission parameter of path 1 (that is, 2+5+3=10), add the quantized values of path 2 to obtain the total transmission parameter of path 2 (that is, 5+2+0=7), and add the quantized values of path 3 to obtain the total transmission parameter of path 3 (that is, 3+0+2=5); the first leaf node then determines the service traffic allocation ratios on path 1, path 2, and path 3 according to these total transmission parameters, so that path 1, path 2, and path 3 can all achieve a balance of service traffic.
  • (2) the first leaf node, on a per-path basis, concatenates the three bits of each quantized transmission parameter of the path into one bit sequence, and the value of that sequence represents the total transmission parameter of the path; the first leaf node then determines the service traffic allocation ratio on each path according to the total transmission parameter of each path.
  • exemplarily, suppose there are three paths, namely path 1, path 2, and path 3, and the transmission parameters of each of the three paths include the delay, the jitter rate, and the packet loss rate; the delay quantized value of path 1 is 2 (represented as 010), the jitter rate quantized value of path 1 is 5 (represented as 101), and the packet loss rate quantized value of path 1 is 3 (represented as 011); the delay quantized value of path 2 is 5 (represented as 101), the jitter rate quantized value of path 2 is 2 (represented as 010), and the packet loss rate quantized value of path 2 is 0 (represented as 000); the delay quantized value of path 3 is 3 (represented as 011), the jitter rate quantized value of path 3 is 0 (represented as 000), and the packet loss rate quantized value of path 3 is 2 (represented as 010). The first leaf node may then concatenate the three quantized values of each path into one bit sequence, obtaining the total transmission parameter of path 1 (010101011), the total transmission parameter of path 2 (101010000), and the total transmission parameter of path 3 (011000010); the first leaf node then determines the service traffic allocation ratios on path 1, path 2, and path 3 according to these total transmission parameters, so that path 1, path 2, and path 3 can all achieve a balance of service traffic.
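  • the two ways of forming a per-path total transmission parameter can be sketched as follows; the input is assumed to be the quantized (delay, jitter rate, packet loss rate) triple of a path, and the function names are illustrative only.

```python
def total_by_sum(quantized: tuple) -> int:
    # method (1): add the quantized values of the path, e.g. (2, 5, 3) -> 10
    return sum(quantized)

def total_by_concat(quantized: tuple) -> int:
    # method (2): concatenate the three bits of each quantized value into one sequence,
    # e.g. (2, 5, 3) -> '010' + '101' + '011' -> 0b010101011
    bits = "".join(format(q, "03b") for q in quantized)
    return int(bits, 2)

paths = {"path1": (2, 5, 3), "path2": (5, 2, 0), "path3": (3, 0, 2)}
totals_by_sum = {p: total_by_sum(q) for p, q in paths.items()}        # path1: 10, path2: 7, path3: 5
totals_by_concat = {p: total_by_concat(q) for p, q in paths.items()}  # the bit-sequence variant
```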
  • it should be noted that determining, by the first leaf node, the service traffic allocation ratios on path 1, path 2, and path 3 according to the total transmission parameter of path 1, the total transmission parameter of path 2, and the total transmission parameter of path 3 specifically includes: the first leaf node determines the service traffic allocation ratio on path 1 according to the total transmission parameters of path 1, path 2, and path 3; the first leaf node determines the service traffic allocation ratio on path 2 according to the total transmission parameters of path 1, path 2, and path 3; and the first leaf node determines the service traffic allocation ratio on path 3 according to the total transmission parameters of path 1, path 2, and path 3.
  • the specific implementation by which the first leaf node determines the service traffic allocation ratios on path 1, path 2, and path 3 according to the total transmission parameters of path 1, path 2, and path 3 may use an algorithm selected according to actual use requirements, which is not limited in this embodiment of the present invention. For example, suppose that in (1) above the total transmission parameter of path 1 is 10, the total transmission parameter of path 2 is 7, and the total transmission parameter of path 3 is 5; the first leaf node can then calculate the percentages represented by 10, 7, and 5, and determine the service traffic allocation ratios on path 1, path 2, and path 3 according to these percentages.
  • because the total transmission parameter of path 1 is the largest, that of path 2 is the second largest, and that of path 3 is the smallest, the first leaf node can consider that the transmission performance of path 1 is the worst, that of path 2 is the second worst, and that of path 3 is the best; the first leaf node can therefore determine that the service traffic allocation ratio on path 1 is the smallest, that on path 2 is the second smallest, and that on path 3 is the largest. In this way, because the transmission performance of each path can be learned from the transmission parameters of the path, determining the service traffic allocation ratio of each path according to the transmission parameters of the paths ensures that the service traffic on the paths is balanced.
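  • one possible way to turn the total transmission parameters into allocation ratios, and then map a flow onto a path in proportion to those ratios, is sketched below; the inverse-weighting rule and the flow-hashing step are assumptions chosen for this example, since the patent leaves the exact algorithm open.

```python
import hashlib

def allocation_ratios(totals: dict) -> dict:
    # a larger total transmission parameter means worse transmission performance,
    # so that path gets a smaller share: weight each path by the inverse of its total
    weights = {path: 1.0 / total for path, total in totals.items()}
    norm = sum(weights.values())
    return {path: w / norm for path, w in weights.items()}

def pick_path(flow_key: str, ratios: dict) -> str:
    # map a flow (for example its 5-tuple as a string) onto a path in proportion to
    # the ratios, while keeping all packets of one flow on the same path
    point = int(hashlib.md5(flow_key.encode()).hexdigest(), 16) % 10_000 / 10_000
    cumulative = 0.0
    for path, share in sorted(ratios.items()):
        cumulative += share
        if point < cumulative:
            return path
    return max(ratios, key=ratios.get)  # guard against rounding at the top of the range

ratios = allocation_ratios({"path1": 10, "path2": 7, "path3": 5})
# path 3 receives the largest share and path 1 the smallest, matching the ordering above
```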
  • optionally, in this embodiment of the present invention, the first leaf node may perform the foregoing method for allocating service traffic as shown in FIG. 3 only once.
  • specifically, after the first leaf node calculates the transmission parameters of each path, the transmission parameters of each path may be configured by a user in the first leaf node; if the first leaf node needs to transmit a service, the first leaf node may allocate the to-be-transmitted service traffic on the physical links on the first leaf node according to the pre-configured transmission parameters of each path.
  • alternatively, after the first leaf node determines the service traffic allocation ratio on each path, the service traffic allocation ratio on each path may be configured by the user in the first leaf node; if the first leaf node needs to transmit a service, the first leaf node may allocate the to-be-transmitted service traffic on the physical links on the first leaf node according to the pre-configured service traffic allocation ratio on each path. In this way, service traffic is still balanced among the paths between the first leaf node and the second leaf node, so that packets are not lost.
  • it should be noted that performing the foregoing method for allocating service traffic as shown in FIG. 3 only once is applicable only to application scenarios in which the architecture of the data center network is basically fixed and does not change significantly (that is, after the data center network is deployed, its architecture does not change greatly).
  • optionally, for application scenarios in which the architecture of the data center network changes dynamically, the first leaf node may cyclically perform the foregoing method for allocating service traffic as shown in FIG. 3.
  • specifically, the first leaf node may perform the method continuously in a loop, or may perform it with a fixed cycle; this can be set according to actual use requirements and is not specifically limited in the present invention.
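  • the once-only and cyclic modes of running the method could be driven by a loop such as the following sketch; the two helper callables and the cycle length are placeholders for steps S101 to S111 and are not names taken from the patent.

```python
import time

def run_allocation(probe_all_paths, recompute_allocation, once: bool = False,
                   cycle_seconds: float = 10.0) -> None:
    while True:
        params = probe_all_paths()        # measure the transmission parameters of every path
        recompute_allocation(params)      # redistribute the to-be-transmitted service traffic
        if once:
            break                         # fixed-architecture scenario: run a single time
        time.sleep(cycle_seconds)         # dynamically changing scenario: run periodically
```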
  • according to the method for allocating service traffic provided in this embodiment of the present invention, because the first leaf node can detect, by sending probe packets, the transmission parameters of all the paths between the first leaf node and the second leaf node, the first leaf node allocates the to-be-transmitted service traffic according to the transmission parameters of these paths. This not only ensures that service traffic is balanced among the physical links on the first leaf node, but also ensures that the other physical links on each of these paths achieve a balance of service traffic, so that service traffic is balanced among the paths between the source node (for example, the first leaf node) and the destination node (for example, the second leaf node), and packets are not lost.
  • as shown in FIG. 4, an embodiment of the present invention provides a service traffic allocation apparatus, where the allocation apparatus is a first leaf node, and the allocation apparatus is configured to perform the steps performed by the first leaf node in the foregoing method.
  • the allocation apparatus may include modules corresponding to the respective steps.
  • for example, the allocation apparatus may include:
  • a sending unit 20, configured to periodically send a probe packet through each of the multiple physical links on the first leaf node that connect to the backbone nodes; a receiving unit 21, configured to, for each physical link, receive returned response packets through the physical link, where each response packet is sent back by the second leaf node after the probe packet sent through the physical link reaches the second leaf node; a calculating unit 22, configured to, for each path, calculate the transmission parameters of the path according to the multiple response packets received by the receiving unit 21 from the path, to obtain the transmission parameters of each path, where the response packets received on the same path contain the same path identifier, each path includes at least two physical links, and the multiple physical links belong to different paths; and an allocating unit 23, configured to allocate the to-be-transmitted service traffic on the multiple physical links according to the transmission parameters of each path calculated by the calculating unit 22.
  • optionally, the allocating unit 23 is specifically configured to determine the service traffic allocation ratio on each path according to the transmission parameters of each path calculated by the calculating unit 22, and to allocate the to-be-transmitted service traffic on the multiple physical links according to the service traffic allocation ratio on each path.
  • the transmission parameter of the path includes at least one of a delay of the path, a jitter rate of the path, and a packet loss rate of the path.
  • optionally, as shown in FIG. 5, the allocation apparatus further includes a determining unit 24.
  • the determining unit 24 is configured to, before the calculating unit 22 calculates the transmission parameters of a path according to the multiple response packets received from the path, determine, according to the path identifier contained in the response packets received by the receiving unit 21, which response packets were received from the path, to obtain the multiple response packets received from the path.
  • the calculating unit 22 is specifically configured to calculate the transmission parameters of the path according to the packet features of the multiple response packets, received from the path, that are determined by the determining unit 24.
  • optionally, the packet features of each of the multiple response packets received by the receiving unit 21 from the path include: the identifier of the path, the sequence number of the response packet, the source Internet Protocol (IP) address of the response packet, the destination IP address of the response packet, the four-layer source port number of the response packet, the four-layer destination port number of the response packet, the time at which the response packet arrives at the first leaf node, and the time at which the probe packet corresponding to the response packet leaves the first leaf node.
  • optionally, the allocating unit 23 is specifically configured to separately quantize the transmission parameters of each path calculated by the calculating unit 22, and to determine the service traffic allocation ratio on each path according to the quantized transmission parameters of each path.
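  • purely as an illustration of how the sending, receiving, determining, calculating, and allocating units might map onto software, here is a hedged skeleton; the class name, method names, and the callables passed in are invented for this sketch and do not appear in the patent.

```python
from collections import defaultdict

class TrafficAllocator:
    """Skeleton mapping of units 20 to 24 onto methods (illustrative only)."""

    def __init__(self, physical_links):
        self.physical_links = physical_links        # links connecting the first leaf node to backbone nodes
        self.responses_by_path = defaultdict(list)  # path identifier -> response packets

    def send_probes(self, build_probe):
        # sending unit 20: periodically send a probe packet on every physical link
        for link in self.physical_links:
            link.send(build_probe(link))

    def on_response(self, response):
        # receiving unit 21 and determining unit 24: group responses by the path identifier they carry
        self.responses_by_path[response["path_id"]].append(response)

    def compute_parameters(self, compute_path_params):
        # calculating unit 22: per-path transmission parameters from the grouped responses
        return {path: compute_path_params(msgs)
                for path, msgs in self.responses_by_path.items()}

    def allocate(self, params, ratio_rule):
        # allocating unit 23: turn the per-path parameters into a service traffic allocation ratio
        return ratio_rule(params)
```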
  • in this embodiment of the present invention, the allocation apparatus (the first leaf node) may be a switch, and the second leaf node and the backbone nodes may also be switches.
  • specifically, the first leaf node and the second leaf node may serve as leaf switches in a data center network with a two-layer leaf-backbone topology, and the backbone nodes may be backbone switches in such a data center network.
  • it can be understood that the allocation apparatus of this embodiment may correspond to the first leaf node in the method for allocating service traffic of the embodiment shown in FIG. 3, and that the division and/or functions of the modules in the allocation apparatus of this embodiment are all intended to implement the method flow shown in FIG. 3; for brevity, details are not described herein again.
  • An embodiment of the present invention provides an apparatus for allocating service traffic, where the apparatus is a first leaf node.
  • because the first leaf node can detect, by sending probe packets, the transmission parameters of all the paths between the first leaf node and the second leaf node, the first leaf node allocates the to-be-transmitted service traffic according to the transmission parameters of these paths; this not only ensures that service traffic is balanced among the physical links on the first leaf node, but also ensures that the other physical links on each of these paths achieve a balance of service traffic, so that service traffic is balanced between the source node (for example, the first leaf node) and the destination node (for example, the second leaf node), and packets are not lost.
  • as shown in FIG. 6, an embodiment of the present invention provides a service traffic allocation apparatus, where the allocation apparatus is a first leaf node and the first leaf node is a switch (specifically, the switch may serve as a leaf switch in a data center network with a two-layer leaf-backbone topology); the switch includes a processor 30, an interface circuit 31, a memory 32, and a system bus 33.
  • the memory 32 is configured to store computer program instructions; the processor 30, the interface circuit 31, and the memory 32 are connected to each other through the system bus 33; when the switch runs, the processor 30 executes the computer program instructions stored in the memory 32, so that the switch performs the method for allocating service traffic shown in FIG. 3.
  • for the specific method for allocating service traffic, refer to the related description in the foregoing embodiment shown in FIG. 3; details are not described herein again.
  • the embodiment further provides a storage medium, which may include the memory 32.
  • the processor 30 can be a central processing unit (English: central processing unit, abbreviation: CPU).
  • the processor 30 may also be another general-purpose processor, a digital signal processor (English: digital signal processing, abbreviation: DSP), an application-specific integrated circuit (English: application specific integrated circuit, abbreviation: ASIC), a field-programmable gate array (English: field-programmable gate array, abbreviation: FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the processor 30 may be a dedicated processor, and the dedicated processor may include at least one of a baseband processing chip, a radio frequency processing chip, and the like. Further, the dedicated processor may also include a chip with other dedicated processing functions of the switch.
  • the memory 32 may include a volatile memory (English: volatile memory), for example, a random-access memory (English: random-access memory, abbreviation: RAM); the memory 32 may also include a non-volatile memory (English: non-volatile memory), for example, a read-only memory (English: read-only memory, abbreviation: ROM), a flash memory (English: flash memory), a hard disk drive (English: hard disk drive, abbreviation: HDD), or a solid-state drive (English: solid-state drive, abbreviation: SSD); the memory 32 may further include a combination of the foregoing types of memory.
  • the system bus 33 may include a data bus, a power bus, a control bus, a signal status bus, and the like. For clarity in this embodiment, the various buses are all illustrated as the system bus 33 in FIG. 6.
  • the interface circuit 31 may specifically be a transceiver on a switch.
  • the transceiver can be a wireless transceiver.
  • the processor 30 sends packets to and receives packets from other devices, for example, other switches, through the interface circuit 31.
  • in a specific implementation process, each step in the method flow shown in FIG. 3 may be implemented by the processor 30 in hardware form executing computer program instructions in software form stored in the memory 32. To avoid repetition, details are not described herein again.
  • the second leaf node and the backbone node may also be switches.
  • the second leaf node may be a leaf switch in a data center network of a two-layer leaf-backbone topology
  • the backbone node may be a backbone switch in a data center network of a two-layer leaf-backbone topology.
  • an embodiment of the present invention provides an apparatus for allocating service traffic, where the apparatus is a first leaf node and the first leaf node is a switch.
  • because the switch can detect, by sending probe packets, the transmission parameters of all the paths between the switch and other switches, the switch allocates the to-be-transmitted service traffic according to the transmission parameters of these paths; this not only ensures that service traffic is balanced among the physical links on the switch, but also ensures that the other physical links on each of these paths achieve a balance of service traffic, so that service traffic is balanced among the paths between the switch and the other switches, and packets are not lost.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the modules or units is merely a logical function division; in actual implementation there may be other division manners.
  • for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • based on such an understanding, all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present invention.
  • the storage medium is a non-transitory medium, including: a flash memory, a mobile hard disk, a read only memory, a random access memory, a magnetic disk, or an optical disk, and the like, which can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

本发明实施例提供一种业务流量的分配方法及装置,涉及通信领域,能够保证源节点和目的节点之间的各条路径之间达到业务流量的均衡,从而保证报文不会丢失。第一叶子节点分别通过第一叶子节点上的连接骨干节点的多条物理链路中的每条物理链路周期性地发送探测报文;对于每条物理链路,第一叶子节点通过该物理链路接收返回的响应报文,每个响应报文为通过该物理链路发送的探测报文到达第二叶子节点后由第二叶子节点回复的;对于每条路径,第一叶子节点根据从该路径上接收到的多个响应报文,计算该路径的传输参数,以得到每条路径的传输参数;第一叶子节点根据每条路径的传输参数,在多条物理链路上分配待传输的业务流量。

Description

一种业务流量的分配方法及装置
本申请要求于2016年1月26日提交中国专利局、申请号为201610051922.4、发明名称为“一种业务流量的分配方法及装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本发明涉及通信领域,尤其涉及一种业务流量的分配方法及装置。
背景技术
在采用两层叶子-骨干(英文:leaf-spine)拓扑架构的数据中心网络中,为了提高数据中心网络的可靠性和服务质量,通常在交换机和服务器之间配置多条物理链路,为了使得业务流量在多条物理链路上同时传输,通常会将多条物理链路聚合成一个逻辑链路,该逻辑链路可称为链路聚合组(英文:link aggregation group,缩写:LAG)。
为了防止数据中心网络中某些物理链路阻塞导致报文被丢弃,通常需要在LAG中的各条物理链路上均匀分配业务流量。具体地,如图1所示,假设服务器1需经叶子节点2、骨干节点3和叶子节点4向服务器5发送报文,当叶子节点2接收到服务器1发送的报文后,若叶子节点2在其存储的流表中查找到与该报文所属业务流对应的表项(包括该业务流的标识、出接口和该业务流的第一个报文到达叶子节点2的时间戳),则叶子节点2通过该表项中的出接口将该报文发送出去;若叶子节点2在该流表中没有查找到该表项,则叶子节点2可根据叶子节点2支持的物理链路的带宽利用率和队列长度为该报文选择一个最优的物理链路(例如带宽利用率和队列占用比例最低的物理链路),并通过该物理链路的出接口将该报文发送出去,且在上述流表中添加与该报文所属业务流对应的表项。
然而,在上述分配业务流量的方法中,由于叶子节点2只能根据叶子节点2上的物理链路的带宽利用率和队列长度为需要转发的报文选择一个最优的物理链路,即叶子节点2只能保证在叶子节点2上的各条物理链路之间达到业务流量的均衡,而该报文还需要通过骨干节点3发往叶子节点4,因此当骨干节点3与叶子节点4之间的物理链路出现阻塞时,也可能会导致该报文在骨干节点3与叶子节点4之间的物理链路上丢失。
发明内容
本发明的实施例提供一种业务流量的分配方法及装置,能够保证源节点和目的节点之间的各条路径之间达到业务流量的均衡,从而保证报文不会丢失。
为达到上述目的,本发明的实施例采用如下技术方案:
第一方面,本发明实施例提供一种业务流量的分配方法,所述分配方法包括:
第一叶子节点分别通过所述第一叶子节点上的连接骨干节点的多条物理链路中的每条物理链路周期性地发送探测报文;
对于每条物理链路,所述第一叶子节点通过所述物理链路接收返回的响应报文,每个响应报文为通过所述物理链路发送的探测报文到达第二叶子节点后由所述第二叶子节点回复的;
对于每条路径,所述第一叶子节点根据从所述路径上接收到的多个响应报文,计算所述路径的传输参数,以得到每条路径的传输参数,其中,同一路径上接收到的响应报文中包含的路径的标识相同,每条路径包括至少两段物理链路,所述多条物理链路属于不同的路径;
所述第一叶子节点根据所述每条路径的传输参数,在所述多条物理链路上分配待传输的业务流量。
本发明实施例提供的业务流量的分配方法,由于第一叶子节点可以通过发送探测报文探测到第一叶子节点与第二叶子节点之间所有路径的传输参数,因此第一叶子节点再根据这些路径的传输参数分配待传输的业务流量,不仅可以保证第一叶子节点上的各条物理链路之间达到业务流量的均衡,而且也能保证这些路径中各条路径上的其他物理链路也达到业务流量的均衡,从而能够保证源节点(例如第一叶子节点)和目的节点(例如第二叶子节点)之间的各条路径之间达到业务流量的均衡,进而保证报文不会丢失。
可选的,本发明实施例中,所述第一叶子节点根据所述每条路径的传输参数,在所述多条物理链路上分配待传输的业务流量,包括:
所述第一叶子节点根据所述每条路径的传输参数,确定所述每条路径上的业务流量分配比例;
所述第一叶子节点根据所述每条路径上的业务流量分配比例,在所述多条物理链路上分配所述待传输的业务流量。
由于第一叶子节点确定的业务流量分配比例是第一叶子节点与第二叶子节点之间的所有路径(每条路径都包括多段物理链路)的业务流量分配比例,因此第一叶子节点在传输待传输的业务流量时,可以按照这些业务流量分配比例,在第一叶子节点上的多条物理链路上分配相应的待传输的业务流量,如此不仅可以保证第一叶子节点上的每条物理链路之间达到业务流量的均衡,而且也能保证第一叶子节点与第二叶子节点之间的所有路径中各条路径上的其他物理链路也达到业务流量的均衡,进而防止报文在由第一叶子节点传输到第二叶子节点时丢失。
可选的,本发明实施例中,所述路径的传输参数包括所述路径的时延、所述路径的抖动率以及所述路径的丢包率中的至少一个。
示例性的,假设第一叶子节点与第二叶子节点之间有多条路径,第一叶子节点在计算这多条路径的传输参数时,针对每条路径,第一叶子节点计算的传输参数相同,例如针对这多条路径分别计算相同类的传输参数。
可选的,本发明实施例中,所述第一叶子节点根据从所述路径上接收到的多个响应报文,计算所述路径的传输参数之前,所述方法还包括:
所述第一叶子节点根据接收到的响应报文中包含的路径的标识,确定从所述路径上接收到的响应报文,得到从所述路径上接收到的多个响应报文;
所述第一叶子节点根据从所述路径上接收到的多个响应报文,计算所述路径的传输参数,包括:
所述第一叶子节点根据从所述路径上接收到的多个响应报文的报文特征,计算所述路径的传输参数。
可选的,本发明实施例中,从所述路径上接收到的多个响应报文中的每个响应报文 的报文特征包括:所述路径的标识、所述响应报文的序列号、所述响应报文的源网际互连协议IP地址、所述响应报文的目的IP地址、所述响应报文的四层源端口号、所述响应报文的四层目的端口号、所述响应报文到达所述第一叶子节点的时间以及与所述响应报文对应的探测报文离开所述第一叶子节点的时间。
其中,每个响应报文中包括的路径的标识可以用于标识该响应报文是通过哪条路径传输的。例如,该路径的标识可以由发送与该响应报文对应的探测报文的设备(例如本发明实施例中的第一叶子节点)的接口名称或该设备的接口标识,以及一个哈希值组成。该哈希值可以用于表示该路径中除第一叶子节点与骨干节点之间的物理链路之外的另一段物理链路,该哈希值是网络部署完成后,第一叶子节点根据其遍历的第一叶子节点与第二叶子节点之间的所有路径,通过哈希算法计算得到的。
由于从某条路径上接收到的响应报文,可以用于确定该路径的传输参数,因此第一叶子节点可以分别根据其确定的从每条路径上接收到的多个响应报文的报文特征,计算该路径的传输参数,如此第一叶子节点针对每条路径分别进行计算后,可以得到每条路径的传输参数。
可选的,本发明实施例中,所述第一叶子节点根据所述每条路径的传输参数,确定每条路径上的业务流量分配比例,包括:
所述第一叶子节点对所述每条路径的传输参数分别进行量化;
所述第一叶子节点根据量化后的所述每条路径的传输参数,确定所述每条路径上的业务流量分配比例。
本发明实施例中,第一叶子节点通过对每条路径的传输参数分别进行量化,然后第一叶子节点再根据量化后的每条路径的传输参数,确定每条路径上的业务流量分配比例,可以简化确定业务流量分配比例的过程,降低实现的复杂度。
可选的,本发明实施例中,所述第一叶子节点根据量化后的所述每条路径的传输参数,确定所述每条路径上的业务流量分配比例的方法可以采用下述的任意一种实现:
(1)第一叶子节点以路径为单位,将每条路径量化后的各个传输参数相加,得到每条路径的总传输参数;第一叶子节点再根据每条路径的总传输参数,确定每条路径上的业务流量分配比例。
(2)第一叶子节点以路径为单位,将每条路径量化后的各个传输参数的3个比特拼接成一个序列,该序列的值即可表示每条路径的总传输参数;第一叶子节点再根据每条路径的总传输参数,确定每条路径上的业务流量分配比例。
通过上述两种确定业务流量分配比例的方法,可以更加灵活、方便地确定每条路径上的业务流量分配比例,从而灵活、方便地实现本发明实施例提供的业务流量的分配方法。
可选的,本发明实施例中,第一叶子节点可以循环执行上述业务流量的分配方法。如此,第一叶子节点能够实时地确定出第一叶子节点与第二叶子节点之间的所有路径的传输参数,从而能够提高第一叶子节点根据这些传输参数,在第一叶子节点的多条物理链路上分配待传输的业务流量的准确性。
第二方面,本发明实施例提供一种业务流量的分配装置,所述分配装置为第一叶子节点,所述分配装置包括:
发送单元,用于分别通过所述第一叶子节点上的连接骨干节点的多条物理链路中的每条物理链路周期性地发送探测报文;
接收单元,用于对于每条物理链路,通过所述物理链路接收返回的响应报文,所述响应报文为通过所述物理链路发送的探测报文到达第二叶子节点后由所述第二叶子节点回复的;
计算单元,用于对于每条路径,根据所述接收单元从所述路径上接收到的多个响应报文,计算所述路径的传输参数,以得到每条路径的传输参数,其中,同一路径上接收到的响应报文中包含的路径的标识相同,每条路径包括至少两段物理链路,所述多条物理链路属于不同的路径;
分配单元,用于根据所述计算单元计算的所述每条路径的传输参数,在所述多条物理链路上分配待传输的业务流量。
本发明实施例提供一种业务流量的分配装置,该分配装置为第一叶子节点,由于第一叶子节点可以通过发送探测报文探测到第一叶子节点与第二叶子节点之间所有路径的传输参数,因此第一叶子节点再根据这些路径的传输参数分配待传输的业务流量,不仅可以保证第一叶子节点上的各条物理链路之间达到业务流量的均衡,而且也能保证这些路径中各条路径上的其他物理链路也达到业务流量的均衡,从而能够保证源节点(例如第一叶子节点)和目的节点(例如第二叶子节点)之间的各条路径之间达到业务流量的均衡,进而保证报文不会丢失。
可选的,所述分配单元,具体用于根据所述计算单元计算的所述每条路径的传输参数,确定所述每条路径上的业务流量分配比例;并根据所述每条路径上的业务流量分配比例,在所述多条物理链路上分配所述待传输的业务流量。
可选的,所述路径的传输参数包括所述路径的时延、所述路径的抖动率以及所述路径的丢包率中的至少一个。
可选的,所述分配装置还包括确定单元,
所述确定单元,用于在所述计算单元根据从所述路径上接收到的多个响应报文,计算所述路径的传输参数之前,根据所述接收单元接收到的响应报文中包含的路径的标识,确定从所述路径上接收到的响应报文,得到从所述路径上接收到的多个响应报文;
所述计算单元,具体用于根据所述确定单元确定的从所述路径上接收到的多个响应报文的报文特征,计算所述路径的传输参数。
可选的,所述接收单元从所述路径上接收到的多个响应报文中每个响应报文的报文特征包括:所述路径的标识、所述响应报文的序列号、所述响应报文的源网际互连协议IP地址、所述响应报文的目的IP地址、所述响应报文的四层源端口号、所述响应报文的四层目的端口号、所述响应报文到达所述第一叶子节点的时间以及与所述响应报文对应的探测报文离开所述第一叶子节点的时间。
可选的,所述分配单元,具体用于对所述计算单元计算的所述每条路径的传输参数分别进行量化;并根据量化后的所述每条路径的传输参数,确定所述每条路径上的业务流量分配比例。
对于上述第二方面的各种可选方式带来的技术效果的描述具体可参见上述对第一方面的相应的各种可选方式带来的技术效果的相关描述,此处不再赘述。
第三方面,本发明实施例提供一种业务流量的分配装置,所述分配装置为第一叶子节点,所述第一叶子节点为交换机,所述交换机包括处理器、接口电路、存储器和系统总线;
所述存储器用于存储计算机程序指令,所述处理器、所述接口电路和所述存储器通过所述系统总线相互连接,当所述交换机运行时,所述处理器执行所述存储器存储的所述计算机程序指令,以使所述交换机执行上述第一方面以及第一方面的各种可选方式中的任意一项所述的分配方法。
本发明实施例提供一种业务流量的分配装置,该分配装置为第一叶子节点,该第一叶子节点为交换机,由于该交换机可以通过发送探测报文探测到该交换机与其他交换机之间所有路径的传输参数,因此该交换机再根据这些路径的传输参数分配待传输的业务流量,不仅可以保证该交换机上的各条物理链路之间达到业务流量的均衡,而且也能保证这些路径中各条路径上的其他物理链路也达到业务流量的均衡,从而能够保证该交换机和其他交换机之间的各条路径之间达到业务流量的均衡,进而保证报文不会丢失。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍。
图1为现有技术提供的数据中心网络的架构示意图;
图2为本发明实施例提供的数据中心网络的架构示意图;
图3为本发明实施例提供的业务流量的分配方法的示意图;
图4为本发明实施例提供的业务流量的分配装置的结构示意图一;
图5为本发明实施例提供的业务流量的分配装置的结构示意图二;
图6为本发明实施例提供的业务流量的分配装置的硬件示意图。
具体实施方式
本文中字符“/”,一般表示前后关联对象是一种“或者”的关系。例如,A/B可以理解为A或者B。
本发明的说明书和权利要求书中的术语“第一”和“第二”是用于区别不同的对象,而不是用于描述对象的特定顺序。例如,第一叶子节点和第二叶子节点是用于区别不同的叶子节点,而不是用于描述叶子节点的特征顺序。
在本发明的描述中,除非另有说明,“多个”的含义是指两个或两个以上。例如,多条物理链路是指两个或两个以上的物理链路。
此外,本发明的描述中所提到的术语“包括”和“具有”以及它们的任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括其他没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其它步骤或单元。
另外,在本发明实施例中,“示例性的”、或者“例如”等词用于表示作例子、例证或说明。本申请中被描述为“示例性的”或“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”、或 者“例如”等词旨在以具体方式呈现概念。
本发明的下述各个实施例中所提及的路径均是指网络路径,例如第一叶子节点到第二叶子节点之间的路径是指第一叶子节点到第二叶子节点之间的网络路径。
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。
本发明实施例提供的业务流量的分配方法可以应用于数据中心网络中,该数据中心网络可以为具有两层叶子-骨干拓扑架构的数据中心网络,该数据中心网络也可以为具有核心-汇聚-接入(英文:core-aggregation-access)拓扑架构的数据中心网络,还可以为其他具有更为复杂的拓扑架构的数据中心网络,本发明实施例不作具体限定。
示例性的,如图2所示,为一种具有两层叶子-骨干拓扑架构的数据中心网络。为了描述方便,图2中仅以该数据中心网络中包括两台服务器(分别为服务器10和服务器11)、两个叶子节点(分别为叶子节点12和叶子节点13)和两个骨干节点(分别为骨干节点14和骨干节点15)为例进行示例性的说明。
在如图2所示的数据中心网络中,为了使得数据中心网络支持业务流量的均衡,通常,可将节点之间的多条物理链路聚合成一个逻辑链路,即形成一个LAG(例如如图2中所示的物理链路①、物理链路②、物理链路③和物理链路④形成一个LAG;物理链路⑤、物理链路⑥、物理链路⑦和物理链路⑧形成一个LAG)。需要说明的是,本发明实施例中,仅以多条物理链路形成一个LAG为例进行示例性的说明。实际应用中,多条物理链路还可以形成等价路由(英文:equal cost multipath routing,缩写:ECMP)或非等价路由(英文:unequal cost multipath routing,缩写:UCMP),本发明实施例不再一一详述。
如图2所示,为了保证数据中心网络的可靠性和服务质量,通常一个叶子节点可以为由多个接入交换机组成的集群(cluter)或堆叠(stack),例如叶子节点12为由交换机120和交换机121组成的集群或堆叠;叶子节点13为由交换机130和交换机131组成的集群或堆叠。这样,服务器10与叶子节点12之间可以通过两条物理链路(其中,一条物理链路作为主用物理链路,另一条物理链路作为备用物理链路,或者两条物理链路采用LAG负载分担方式)连接,服务器11与叶子节点13之间也可以类似地通过两条物理链路连接,当服务器与叶子节点之间的主用物理链路发生故障时,可以通过将待传输的业务切换到备用物理链路上传输,从而保证业务传输的连续性。叶子节点12与骨干接点14和骨干节点15之间通过多条物理链路连接,叶子节点13与骨干接点14和骨干节点15之间也通过多条物理链路连接。当服务器10需向服务器11发送业务流量时,该业务流量需经过叶子节点12、骨干节点14或骨干节点15,以及叶子节点13的转发和分担。例如,叶子节点12接收到服务器10发送的业务流量后,叶子节点12可能需要将该业务流量分配到不同的路径上传输,其中,每条路径由从叶子节点12到叶子节点13需经过的多段物理链路组成。
示例性的,在图2中,假设服务器10与叶子节点12中的交换机120之间的物理链路为主用物理链路,且当服务器10通过该主用物理链路向叶子节点12发送业务流量时,由于每个叶子节点和两个骨干节点之间均通过多条物理链路连接,因此由叶子节点12到叶子节点13之间的路径有多条,该多条路径分别为:
路径1:叶子节点12中的交换机120—骨干节点14—叶子节点13中的交换机130(即图2中的物理链路①+物理链路⑤);
路径2:叶子节点12中的交换机120—骨干节点14—叶子节点13中的交换机131(即图2中的物理链路①+物理链路⑥);
路径3:叶子节点12中的交换机120—骨干节点15—叶子节点13中的交换机130(即图2中的物理链路②+物理链路⑦);
路径4:叶子节点12中的交换机120—骨干节点15—叶子节点13中的交换机131(即图2中的物理链路②+物理链路⑧)。
本领域技术人员可以理解,当服务器10与叶子节点12中的交换机121之间的物理链路为主用物理链路,且当服务器10通过该主用物理链路向叶子节点12发送业务流量时,叶子节点12到叶子节点13之间的多条路径分别为:
路径5:叶子节点12中的交换机121—骨干节点14—叶子节点13中的交换机130(即图2中的物理链路③+物理链路⑤);
路径6:叶子节点12中的交换机121—骨干节点14—叶子节点13中的交换机131(即图2中的物理链路③+物理链路⑥);
路径7:叶子节点12中的交换机121—骨干节点15—叶子节点13中的交换机130(即图2中的物理链路④+物理链路⑦);
路径8:叶子节点12中的交换机121—骨干节点15—叶子节点13中的交换机131(即图2中的物理链路④+物理链路⑧)。
由于上述图2中,叶子节点12到叶子节点13之间的每条路径均由两段(实际应用中也可能为多段)物理链路组成,且现有技术中叶子节点12只能按照叶子节点12上的物理链路的带宽利用率和队列长度为需要转发的业务流量中的报文选择一个最优的物理链路,因此叶子节点12只能保证在叶子节点12上的各条物理链路之间达到业务流量的均衡,而当骨干节点14与叶子节点13,或者骨干节点15与叶子节点13之间的物理链路出现阻塞时,可能也会导致业务流量中的报文在从叶子节点12到叶子节点13的路径上传输时丢失。
本发明实施例为了解决该问题,提出了一种业务流量的分配方法,该分配方法通过第一叶子节点向第二叶子节点发送探测报文,收到探测报文的第二叶子节点再向发送探测报文的第一叶子节点回复响应报文,第一叶子节点根据第一叶子节点通过每条路径接收到的响应报文,分别计算出每条路径(包括组成该路径的多段物理链路)的传输参数,最后第一叶子节点再根据计算出的每条路径的传输参数,在第一叶子节点上的各条物理链路上分配待传输的业务流量。由于本发明实施例提供的业务流量的分配方法,叶子节点可以通过发送探测报文探测到该叶子节点与其他叶子节点之间所有路径的传输参数,因此该叶子节点再根据这些路径的传输参数分配待传输的业务流量,不仅可以保证该叶子节点上的各条物理链路之间达到业务流量的均衡,而且也能保证这些路径中各条路径上的其他物理链路也达到业务流量的均衡,从而能够保证源节点(例如第一叶子节点)和目的节点(例如第二叶子节点)之间的各条路径之间达到业务流量的均衡,进而保证报文不会丢失。
基于上述如图2所示的具有两层叶子-骨干拓扑架构的数据中心网络,本发明实施例提供一种业务流量的分配方法,如图3所示,该业务流量的分配方法可以包括:
S101、第一叶子节点分别通过第一叶子节点上的连接骨干节点的多条物理链路中的每条物理链路周期性地发送探测报文。
其中,第一叶子节点上的多条物理链路包括第一叶子节点与至少一个骨干节点中的每个骨干节点之间的物理链路。例如,第一叶子节点上的多条物理链路具体可以包括如图2所示的物理链路①、物理链路②、物理链路③和物理链路④,则所述第一叶子节点在每个发送周期内,分别通过物理链路①、物理链路②、物理链路③和物理链路④发送探测报文,即所述探测报文被发送了4份。
示例性的,本发明实施例中的第一叶子节点可以为如图2所示的数据中心网络中的叶子节点12;至少一个骨干节点可以为如图2所示的数据中心网络中的骨干节点14和骨干节点15。
本发明实施例中,探测报文可以为支持虚拟可扩展局域网(英文:virtual extensible local area network,缩写:VXLAN)协议的报文,其具体格式与现有技术中的VXLAN协议报文的格式相同,本发明实施例不再赘述。
可选的,为了区分本发明实施例中的探测报文与通常其他功能的探测报文,本发明实施例可以在探测报文中设置一个标识,用于指示该探测报文用于探测源节点和目的节点之间的路径的传输参数。其中,该标识可以根据实际使用需求进行设置,本发明不作具体限定。
S102、每个骨干节点接收第一叶子节点通过连接骨干节点的物理链路发送的探测报文。
S103、每个骨干节点分别通过连接第二叶子节点的物理链路向第二叶子节点发送其接收到的探测报文。
示例性的,本发明实施例中的第二叶子节点可以为如图2所示的数据中心网络中的叶子节点13。
本发明实施例中,由于每个骨干节点与第二叶子节点之间可能存在多条物理链路,因此为了使得业务流量在多条物理链路之间达到均衡,骨干节点向第二叶子节点发送其接收到的探测报文时,可以采用对称哈希业务流量分配方法(也称对称哈希负载分担方法)发送。具体地,第一叶子节点可以根据其遍历的第一叶子节点与第二叶子节点之间的所有路径,通过哈希算法计算出与每个路径对应的哈希值,每个哈希值可以用于表示骨干节点与第二叶子节点之间的某条物理链路;当骨干节点接收到第一叶子节点发送的探测报文后,骨干节点可以根据其保存的转发表(包括哈希值和骨干节点与第二叶子节点之间的物理链路的对应关系)确定出与该探测报文中包含的哈希值对应的物理链路,并将该探测报文从该物理链路的出接口发送出去。
S104、第二叶子节点通过第二叶子节点上的多条物理链路接收每个骨干节点发送的探测报文。
其中,第二叶子节点上的多条物理链路包括第二叶子节点与至少一个骨干节点中的每个骨干节点之间的物理链路。例如,第二叶子节点上的多条物理链路具体可以包括如图2所示的物理链路⑤、物理链路⑥、物理链路⑦和物理链路⑧。
S105、第二叶子节点分别生成与每个探测报文对应的响应报文。
具体的,第二叶子节点根据其接收到的一个探测报文生成与该探测报文对应的响应报文的方法可以为:
第二叶子节点接收到一个探测报文后,可通过将该探测报文中的源媒体接入控制(英文:media access control,缩写:MAC)地址和目的MAC地址互换生成与该探测报文对应的响应报文。
或者,第二叶子节点可通过将该探测报文中的源MAC地址和目的MAC地址互换,并将该探测报文中的源IP地址与目的IP地址互换,以及将该探测报文中的四层源端口号和四层目的端口号互换生成与该探测报文对应的响应报文。
或者,第二叶子节点可通过将该探测报文中的源MAC地址和目的MAC地址互换,并将该探测报文中的源设备标识和目的设备标识互换生成与该探测报文对应的响应报文。
本领域技术人员可以理解,探测报文中除了上述第二叶子节点互换的几个字段外,其他字段/字段的数值可以继续原封不动地保留在响应报文中。即响应报文中除了上述第二叶子节点互换的几个字段外,其他字段/字段的数值均与探测报文相同。
可选的,本发明实施例中,由于第一叶子节点和第二叶子节点所属的虚拟局域网(英文:virtual local area network,缩写:VLAN)、广播域或者IP网段可能不同,因此可能会使得叶子节点(包括第一叶子节点和第二叶子节点)和与其交互的骨干节点之间的网络不同,从而第二叶子节点根据其接收到的一个探测报文生成与该探测报文对应的响应报文的方法也会有所差异。具体的,若第一叶子节点和第二叶子节点所属的VLAN、广播域或者IP网段相同,则叶子节点和与其交互的骨干节点之间的网络为二层网络或多链接透明互联(英文:transparent interconnection of lots of links,缩写:TRILL)网络;若第一叶子节点和第二叶子节点所属的VLAN、广播域或者IP网段不同,则叶子节点和与其交互的骨干节点之间的网络为三层网络。以下分别以叶子节点和骨干节点之间的网络为这三种网络为例,对第二叶子节点根据其接收到的一个探测报文生成与该探测报文对应的响应报文的方法再进行详细地描述。
(1)叶子节点和骨干节点之间的网络为二层网络
如果叶子节点和骨干节点之间的网络为二层网络,则由于二层网络并不涉及IP地址,因此第二叶子节点根据其接收到的一个探测报文生成与该探测报文对应的响应报文时将该探测报文中的原MAC地址和目的MAC地址互换。
(2)叶子节点和骨干节点之间的网络为三层网络
如果叶子节点和骨干节点之间的网络为三层网络,则第二叶子节点根据其接收到的一个探测报文生成与该探测报文对应的响应报文时将该探测报文中的原MAC地址和目的MAC地址互换,并将该探测报文中的源IP地址与目的IP地址互换,以及将该探测报文中的四层源端口号和四层目的端口号互换。
(3)叶子节点和骨干节点之间的网络为TRILL网络
如果叶子节点和骨干节点之间的网络为TRILL网络,则第二叶子节点根据其接收到的一个探测报文生成与该探测报文对应的响应报文时将该探测报文中的原MAC地址和目的MAC地址互换,并将该探测报文中的源设备标识和目的设备标识互换。
可选的,本发明实施例中,叶子节点和骨干节点之间的网络除了上述列举的三种网 络之外,还可以为集群系统网络、虚拟专用局域网业务(英文:virtual private lan service,缩写:VPLS)网络、VXLAN、使用通用路由封装的网络虚拟化(英文:network virtualization using generic routing encapsulation,缩写:NVGRE)网络以及无状态的传输隧道(英文:stateless transport tunneling,缩写:STT)网络等,本实施例不再一一详述。
本发明实施例中,响应报文可以为支持VXLAN协议的报文,其具体格式与现有技术中的VXLAN协议报文的格式相同。
进一步地,下面再详细说明一下探测报文和响应报文的地址及端口号等。以图2为例,假设探测报文从叶子节点12经物理链路①到骨干节点14,再经物理链路⑤或者物理链路⑥到叶子节点13;且响应报文再按照原路返回,则从叶子节点12到骨干节点14,探测报文的源MAC地址为叶子节点12的MAC地址,目的MAC地址为骨干节点14的MAC地址;源IP地址为叶子节点12的IP地址,目的IP地址为叶子节点13的IP地址;四层源端口号为叶子节点12的四层源端口号,四层目的端口号为叶子节点13的四层目的端口号。从骨干节点14到叶子节点13,探测报文的源MAC地址为骨干节点14的MAC地址,目的MAC地址为叶子节点13的MAC地址,源IP地址、目的IP地址、四层源端口号以及四层目的端口号均与从叶子节点12到骨干节点14的探测报文的源IP地址、目的IP地址、四层源端口号以及四层目的端口号相同。
需要说明的是,探测报文还包括生存周期(英文:time to live,缩写:TTL)字段,当探测报文在三层网络转发时,每转发一次探测报文的TTL字段中的TTL值就减一。相应的,探测报文的IP头校验字段(英文:checksum)的值和报文的校验字段(英文:CRC)的值会同步更新。
相应的,本领域技术人员都知道,与上述探测报文对应的响应报文的源MAC地址和目的MAC地址均与探测报文的源MAC地址和目的MAC地址相反,响应报文的源IP地址和目的IP地址均与探测报文的源IP地址和目的IP地址相反,响应报文的四层源端口号和四层目的端口号均与探测报文的四层源端口号和四层目的端口号相反。
S106、第二叶子节点分别通过第二叶子节点上的连接骨干节点的多条物理链路发送响应报文。
本发明实施例中,第二叶子节点根据其接收到的探测报文生成与该探测报文对应的响应报文后,第二叶子节点可将该响应报文从接收该探测报文的四层端口发送出去,即保证响应报文能够按照探测报文的传输路径返回第一叶子节点,如此能够保证第一叶子节点准确地计算第一叶子节点与第二叶子节点之间的每条路径的传输参数。
S107、每个骨干节点接收第二叶子节点通过连接骨干节点的物理链路发送的响应报文。
S108、每个骨干节点分别通过连接第一叶子节点的物理链路向第一叶子节点发送其接收到的响应报文。
与S103类似,S108中,至少一个骨干节点中的每个骨干节点向第一叶子节点发送其接收到的响应报文时,可以采用对称哈希业务流量分配方法发送,即骨干节点接收到第一叶子节点发送的探测报文时,骨干节点记录其接收到探测报文的物理链路,然后骨干节点再将其接收到的多个响应报文按照与这些响应报文对应的探测报文传输的物理 链路发送给第一叶子节点(即每个响应报文均按照与其对应的探测报文的传输路径原路返回到第一叶子节点)。如此,可以保证一个探测报文和与其对应的响应报文均在一条路径上传输,从而可以提高第一叶子节点计算第一叶子节点与第二叶子节点之间的每条路径的传输参数的准确率。
S109、第一叶子节点通过第一叶子节点上的多条物理链路接收每个骨干节点发送的响应报文。
S110、对于每条路径,第一叶子节点根据从该路径上接收到的多个响应报文,计算该路径的传输参数,以得到每条路径的传输参数。
其中,同一路径上接收到的响应报文中包含的路径的标识相同;第一叶子节点上的多条物理链路属于不同的路径,第二叶子节点上的多条物理链路也属于不同的路径;第一叶子节点上的一条物理链路和第二叶子节点上的一条物理链路属于同一条路径,即示例性的,基于上述如图2所示的数据中心网络,第一叶子节点(例如叶子节点12)上的一条物理链路和第二叶子节点(例如叶子节点13)上的一条物理链路组成一条路径。
可选的,本发明实施例中,上述每条路径的传输参数可以包括该路径的时延、该路径的抖动率以及该路径的丢包率中的至少一个。示例性的,在上述如图2所示的叶子节点12和叶子节点13之间总共有8条路径,具体为上述描述的路径1至路径8,第一叶子节点可分别根据其从这8条路径中的每条路径上接收到的响应报文,计算出每条路径的传输参数。具体的,第一叶子节点可以根据其从路径1上接收到的响应报文,计算出路径1的传输参数;根据其从路径2上接收到的响应报文,计算出路径2的传输参数;……;根据其从路径8上接收到的响应报文,计算出路径8的传输参数。如此,第一叶子节点可以得到这8条路径的传输参数。
S111、第一叶子节点根据每条路径的传输参数,在第一叶子节点上的多条物理链路上分配待传输的业务流量。
第一叶子节点计算出第一叶子节点和第二叶子节点之间的每条路径的传输参数之后,第一叶子节点可以根据这些路径的传输参数,合理地在第一叶子节点上的多条条物理链路上分配待传输的业务流量。
示例性的,假设每条路径的传输参数包括该路径的时延、该路径的抖动率以及该路径的丢包率。则第一叶子节点计算出每条路径的时延、每条路径的抖动率和每条路径的丢包率之后,第一叶子节点可以根据每条路径的时延、每条路径的抖动率和每条路径的丢包率,以及待传输的业务流量的实际传输需求,在第一叶子节点上的多条物理链路上分配待传输的业务流量。
例如,假设第一叶子节点与第二叶子节点之间有4条路径,分别为路径1、路径2、路径3和路径4。路径1的传输参数分别为:时延1、抖动率1和丢包率1;路径2的传输参数分别为:时延2、抖动率2和丢包率2;路径3的传输参数分别为时延3、抖动率3和丢包率3;路径4的传输参数分别为:时延4、抖动率4和丢包率4。其中,时延1和时延2大于预设的时延阈值,时延3和时延4小于该时延阈值;抖动率1大于预设的抖动率阈值,抖动率2、抖动率3和抖动率4小于该抖动率阈值;丢包率1、丢包率2和丢包率4小于预设的丢包率阈值,丢包率3大于该丢包率阈值。当待传输的业务对丢包率要求较高(即要求丢包率小于预设的丢包率阈值)时,第一叶子节点可将待传输的 业务流量分配到路径1、路径2和路径4上传输(具体第一叶子节点可将待传输的业务流量分配到第一叶子节点上属于路径1、路径2和路径4的各条物理链路上传输)。
上述时延阈值、抖动率阈值和丢包率阈值均是根据实际的网络架构和网络的传输环境确定的,在实际应用中可以自行设定。
本发明实施例提供的业务流量的分配方法,由第一叶子节点通过第一叶子节点上的连接骨干节点的多条物理链路中的每条物理链路周期性地发送探测报文;并且对于每条物理链路,第一叶子节点通过该物理链路接收返回的响应报文,每个响应报文为通过该物理链路发送的探测报文到达第二叶子节点后由第二叶子节点回复的;然后对于每条路径,第一叶子节点根据从该路径上接收到的多个响应报文,计算该路径的传输参数,以得到每条路径的传输参数;最后第一叶子节点再根据每条路径的传输参数,在该多条物理链路上分配待传输的业务流量。由于本发明实施例提供的业务流量的分配方法,第一叶子节点可以通过发送探测报文探测到第一叶子节点与第二叶子节点之间所有路径的传输参数,因此第一叶子节点再根据这些路径的传输参数分配待传输的业务流量,不仅可以保证第一叶子节点上的各条物理链路之间达到业务流量的均衡,而且也能保证这些路径中各条路径上的其他物理链路也达到业务流量的均衡,从而能够保证源节点(例如第一叶子节点)和目的节点(例如第二叶子节点)之间的各条路径之间达到业务流量的均衡,进而保证报文不会丢失。
可选的,结合图3,本发明实施例提供的业务流量的分配方法中,上述S111具体可以包括:
第一叶子节点根据每条路径的传输参数,确定每条路径上的业务流量分配比例;以及第一叶子节点根据每条路径上的业务流量分配比例,在第一叶子节点上的多条物理链路上分配待传输的业务流量。
本发明实施例中,第一叶子节点计算出第一叶子节点与第二叶子节点之间每条路径的传输参数之后,第一叶子节点可以根据这些路径的传输参数,确定每条路径上的业务流量分配比例。
示例性的,假设以每条路径的传输参数为丢包率为例,第一叶子节点计算出每条路径的丢包率之后,第一叶子节点可以确定丢包率大于或者等于预设的丢包率阈值的路径比较拥塞,丢包率小于该丢包率阈值的路径比较空闲,如此第一叶子节点可以确定丢包率大于或者等于该丢包率阈值的路径上的业务流量分配比例较低,丢包率小于该丢包率阈值的路径上的业务流量分配比例较高。例如,假设有2条路径,分别为路径1和路径2,路径1的丢包率为1%,路径2的丢包率为5%,预设的丢包率阈值为2%,则第一叶子节点可以确定路径1上的业务流量分配比例为90%,路径2上的业务流量分配比例为10%。
需要说明的是,上述对每条路径的传输参数以及每条路径上的业务流量分配比例的描述及举例,均是对其进行示例性的说明。即本发明实施例提供的业务流量的分配方法,对每条路径的传输参数以及每条路径上的业务流量分配比例的选择及确定具体可以根据实际使用需求进行设定,本发明对此不作具体限定。
进一步地,第一叶子节点确定出每条路径上的业务流量分配比例之后,第一叶子节点可以根据每条路径上的业务流量分配比例,在第一叶子节点上的多条物理链路上分配待传输的业务流量。由于第一叶子节点确定的业务流量分配比例是第一叶子节点与第二 叶子节点之间的所有路径(每条路径都包括多段物理链路)的业务流量分配比例,因此第一叶子节点在传输待传输的业务流量时,可以按照这些业务流量分配比例,在第一叶子节点上的多条物理链路上分配相应的待传输的业务流量,如此不仅可以保证第一叶子节点上的每条物理链路之间达到业务流量的均衡,而且也能保证第一叶子节点与第二叶子节点之间的所有路径中各条路径上的其他物理链路也达到业务流量的均衡,进而防止报文在由第一叶子节点传输到第二叶子节点时丢失。
可选的,结合图3,本发明实施例提供的业务流量的分配方法中,在S109之后,S110之前,该业务流量的分配方法还可以包括:
针对每条路径,第一叶子节点根据接收到的响应报文中包含的路径的标识,确定从该路径上接收到的响应报文,得到该路径上接收到的多个响应报文。
本发明实施例中,由于同一条路径上接收到的响应报文中包含的路径的标识相同,因此第一叶子节点可以根据其接收到的响应报文中包含的路径的标识,分别确定出从每条路径上接收到的多个响应报文。
S110具体可以包括:
第一叶子节点根据从该路径上接收到的多个响应报文的报文特征,计算该路径的传输参数。
可选的,本发明实施例中,上述从每条路径上接收到的多个响应报文中每个响应报文的报文特征包括:该路径的标识、该响应报文的序列号、该响应报文的源IP地址、该响应报文的目的IP地址、该响应报文的四层源端口号、该响应报文的四层目的端口号、该响应报文到达所述第一叶子节点的时间以及与该响应报文对应的探测报文离开所述第一叶子节点的时间。
其中,每个响应报文中包括的路径的标识可以用于标识该响应报文是通过哪条路径传输的。该路径的标识可以由发送与该响应报文对应的探测报文的设备(例如本发明实施例中的第一叶子节点)的接口名称或该设备的接口标识,以及一个哈希值组成。
示例性的,该设备的接口名称可以为第一叶子节点中的某个接入交换机的名称;该设备的接口标识可以为第一叶子节点中的某个接入交换机的标识;该哈希值可以用于表示该路径中除第一叶子节点与骨干节点之间的物理链路之外的另一段物理链路,该哈希值是网络部署完成后,第一叶子节点根据其遍历的第一叶子节点与第二叶子节点之间的所有路径,通过哈希算法计算得到的。具体可参见上述S103中对该哈希值的相关描述,此处不再赘述。
上述路径的标识中包括一个哈希值只是以如图2所示的具有两层拓扑架构的数据中心网络为例进行示例性的说明,对于具有三层或三层以上拓扑架构的数据中心网络,路径的标识中可以包括多个哈希值,每个哈希值分别对应该路径中除第一条物理链路(即探测报文经过的第一条物理链路)之外的其他物理链路中的一条物理链路。其实现原理与如图2所示的具有两层拓扑架构的数据中心网络的实现原理类似,此处不再赘述。
由于从某条路径上接收到的响应报文,可以用于确定该路径的传输参数,因此第一叶子节点可以分别根据其确定的从每条路径上接收到的多个响应报文的报文特征,计算该路径的传输参数,如此第一叶子节点针对每条路径分别进行计算后,可以得到每条路径的传输参数。
本发明实施例中,时延可以由第一叶子节点计算发送一个探测报文到接收到与该探测报文对应的响应报文的时间差得到(具体可以为上述响应报文到达第一叶子节点的时间减去与该响应报文对应的探测报文离开第一叶子节点的时间得到);抖动率可以由第一叶子节点将相邻两个报文延迟时间差除以这两个报文的序列号差得到;丢包率可以由第一叶子节点将一段时间内第一叶子节点发送的探测报文的数量与第一叶子节点接收到的响应报文的数量差除以第一叶子节点发送的探测报文的数量得到。
具体的,为了提高计算精度,上述时延也可以由第一叶子节点统计一段时间内时延的平均值得到;上述抖动率也可以由第一叶子节点统计一段时间内抖动率的平均值得到。
本领域技术人员可以理解,上述第一叶子节点计算时延所涉及到的响应报文到达第一叶子节点的时间以及与该响应报文对应的探测报文离开第一叶子节点的时间均可以携带在响应报文中。例如,第一叶子节点在发送探测报文时可以在该探测报文中增加一个时间戳,用于指示该探测报文离开第一叶子节点的时间;第一叶子节点在接收到响应报文后也可以再在该响应报文中增加一个时间戳,用于指示该响应报文到达第一叶子节点的时间。
相应的,上述第一叶子节点计算抖动率时涉及到的相邻两个报文延迟时间差可以由这两个报文的时延相减得到;这两个报文的序列号是由第一叶子节点在发送探测报文时为探测报文添加的,每个报文的序列号用于唯一标识该报文,每个报文的序列号可以携带在该报文中。
相应的,上述第一叶子节点计算丢包率时涉及到的第一叶子节点一段时间内发送的探测报文的数量与第一叶子节点接收到的响应报文的数量可以由第一叶子节点统计得到。
可选的,本发明实施例提供的业务流量的分配方法中,上述第一叶子节点根据每条路径的传输参数,确定每条路径上的业务流量分配比例,具体可以包括:
第一叶子节点对每条路径的传输参数分别进行量化;以及第一叶子节点根据量化后的每条路径的传输参数,确定每条路径上的业务流量分配比例。
本发明实施例中,为了方便实现和计算,第一叶子节点计算出第一叶子节点和第二叶子节点之间的每条路径的传输参数后,第一叶子节点可先对每条路径的传输参数分别进行量化,然后第一叶子节点根据量化后的每条路径的传输参数,确定每条路径上的业务流量分配比例。
可选的,本发明实施例中,第一叶子节点对每条路径的传输参数进行量化可以有多种实现方式。下面以一种可能的实现方式为例,对第一叶子节点对每条路径的传输参数进行量化进行示例性的说明。
假设第一叶子节点与第二叶子节点之间每条路径的传输参数均包括该路径的时延、该路径的抖动率和该路径的丢包率,那么可以预先设置一个时延与时延量化值的对应关系、抖动率与抖动率量化值的对应关系,以及丢包率与丢包率量化值的对应关系,在第一叶子节点计算出每条路径的传输参数之后,第一叶子节点可以根据每条路径的传输参数和这些对应关系,确定出对每条路径的各个传输参数量化后的值。
本发明实施例中,时延与时延量化值的对应关系可以通过下述表1来表示,抖动率 与抖动率量化值的对应关系可以通过下述表2来表示,丢包率与丢包率量化值的对应关系可以通过下述表3来表示。
表1
时延(μs) 时延量化值
0~1.5 0
1.5~3.0 1
3.0~6.0 2
6.0~15.0 3
15.0~30.0 4
30.0~50.0 5
50.0~100.0 6
>100.0 7
示例性的,参照表1,若第一叶子节点计算出某条路径的时延为7.0μs,则第一叶子节点可将该时延量化为3。
表2
抖动率(μs) 抖动率量化值
0~1.0 0
1.0~2.0 1
2.0~8.0 2
8.0~32.0 3
32.0~128.0 4
128.0~512.0 5
512.0~1024.0 6
>1024.0 7
示例性的,参照表2,若第一叶子节点计算出某条路径的抖动率为7.0μs,则第一叶子节点可将该抖动率量化为2。
表3
丢包率(10-6) 丢包率量化值
<1 0
<10 1
<100 2
<1000 3
<10000 4
<100000 5
<500000 6
≥500000 7
示例性的,参照表3,若第一叶子节点计算出某条路径的丢包率为10000×10-6=0.01(即1%),则第一叶子节点可将该抖动率量化为4。
其中,实际应用中,时延量化值、抖动率量化值和丢包率量化值均可以用三个比特表示。例如,0可以表示为000;1可以表示为001;2可以表示为010;3可以表示为011;4可以表示为100;5可以表示为101;6可以表示为110;7可以表示为111。
进一步地,上述第一叶子节点根据量化后的每条路径的传输参数,确定每条路径上的业务流量分配比例的方法可以为下述的一种:
(1)第一叶子节点以路径为单位,将每条路径量化后的各个传输参数相加,得到每条路径的总传输参数;第一叶子节点再根据每条路径的总传输参数,确定每条路径上的业务流量分配比例。
示例性的,假设有3条路径,分别为路径1、路径2和路径3;3条路径中每条路径的传输参数分别包括时延、抖动率和丢包率;且路径1的时延量化值为2,路径1的抖动率量化值为5,路径1的丢包率量化值为3;路径2的时延量化值为5,路径2的抖动率量化值为2,路径2的丢包率量化值为0;路径3的时延量化值为3,路径3的抖动率量化值为0,路径3的丢包率量化值为2。则第一叶子节点可将路径1的时延量化值、路径1的抖动率量化值和路径1的丢包率量化值相加得到路径1的总传输参数(即2+5+3=10);并将路径2的时延量化值、路径2的抖动率量化值和路径2的丢包率量化值相加得到路径2的总传输参数(即5+2+0=7);以及将路径3的时延量化值、路径3的抖动率量化值和路径3的丢包率量化值相加得到路径3的总传输参数(即3+0+2=5);然后第一叶子节点再根据路径1的总传输参数、路径2的总传输参数和路径3的总传输参数,确定路径1、路径2和路径3上的业务流量分配比例,如此可以保证路径1、路径2和路径3都能够达到业务流量的均衡。
(2)第一叶子节点以路径为单位,将每条路径量化后的各个传输参数的3个比特拼接成一个序列,该序列的值即可表示每条路径的总传输参数;第一叶子节点再根据每条路径的总传输参数,确定每条路径上的业务流量分配比例。
示例性的,假设有3条路径,分别为路径1、路径2和路径3;3条路径中每条路径的传输参数分别包括时延、抖动率和丢包率;且路径1的时延量化值为2(表示为010),路径1的抖动率量化值为5(表示为101),路径1的丢包率量化值为3(表示为011);路径2的时延量化值为5(表示为101),路径2的抖动率量化值为2(表示为010),路径2的丢包率量化值为0(表示为000);路径3的时延量化值为3(表示为011),路径3的抖动率量化值为0(表示为000),路径3的丢包率量化值为2(表示为010)。则第一叶子节点可将路径1的时延量化值的3个比特、路径1的抖动率量化值的3个比特和路径1的丢包率量化值的3个比特拼接成一个序列得到路径1的总传输参数(即010101011=171);并将路径2的时延量化值的3个比特、路径2的抖动率量化值的3个比特和路径2的丢包率量化值的3个比特拼接成一个序列得到路径2的总传输参数(即101010000=336),以及将路径3的时延量化值的3个比特、路径3的抖动率量化值的3个比特和路径3的丢包率量化值的3个比特拼接成一个序列得到路径3的总传输参数(即011000010=200),然后第一叶子节点再根据路径1的总传输参数、路径2的总传输参数和路径3的总传输参数,确定路径1、路径2和路径3上的业务流量分配比例,如此可以保证路径1、路径2和路径3都能够达到业务流量的均衡。
需要说明的是,第一叶子节点根据路径1的总传输参数、路径2的总传输参数和路 径3的总传输参数,确定路径1、路径2和路径3上的业务流量分配比例具体包括:第一叶子节点根据路径1的总传输参数、路径2的总传输参数和路径3的总传输参数,确定路径1上的业务流量分配比例;第一叶子节点根据路径1的总传输参数、路径2的总传输参数和路径3的总传输参数,确定路径2上的业务流量分配比例;以及第一叶子节点根据路径1的总传输参数、路径2的总传输参数和路径3的总传输参数,确定路径3上的业务流量分配比例。
具体的,第一叶子节点根据路径1的总传输参数、路径2的总传输参数和路径3的总传输参数,确定路径1、路径2和路径3上的业务流量分配比例的具体实现可以根据实际使用需求选择相应的算法实现,本发明实施例对此不作限定。例如,假设上述(1)中路径1的总传输参数为10;路径2的总传输参数为7;路径3的总传输参数为5,则第一叶子节点可以计算出10、7和5所占的百分比,然后按照该百分比确定路径1、路径2和路径3上的业务流量分配比例。由于路径1的总传输参数最大,路径2的总传输参数次之,路径3的总传输参数最小,因此第一叶子节点可认为路径1的传输性能最差,路径2的传输性能次之,路径3的传输性能最好,这样第一叶子节点可确定路径1上的业务流量分配比例最小,路径2上的业务流量分配比例次之,路径3上的业务流量分配比例最大,如此,由于可以根据各条路径的传输参数获知各条路径的传输性能,因此根据各条路径的传输参数确定各条路径上的业务流量分配比例,可以保证各条路径上的业务流量达到均衡。
可选的,本发明实施例中,第一叶子节点可以只执行一次上述如图3所示的业务流量的分配方法。具体的,当第一叶子节点计算出每条路径的传输参数时,可以由用户将每条路径的传输参数配置在第一叶子节点中,若第一叶子节点需要传输业务,则第一叶子节点可以根据预先配置的每条路径的传输参数,在第一叶子节点上的各条物理链路上分配待传输的业务流量。或者,当第一叶子节点根据每条路径的传输参数,确定出每条路径上的业务流量分配比例时,可以由用户将每条路径上的业务流量分配比例配置在第一叶子节点中,若第一叶子节点需要传输业务,则第一叶子节点可以根据预先配置的每条路径上的业务流量分配比例,在第一叶子节点上的各条物理链路上分配待传输的业务流量。如此也能够保证第一叶子节点和第二叶子节点之间的各条路径之间达到业务流量的均衡,进而保证报文不会丢失。
需要说明的是,上述描述的第一叶子节点只执行一次上述如图3所示的业务流量的分配方法,只适用于数据中心网络的架构基本固定,不会有较大变化(即数据中心网络部署完成后,其架构不会有较大的变化)的应用场景。
可选的,对于数据中心网络的架构动态变化的应用场景,第一叶子节点可以循环执行上述如图3所示的业务流量的分配方法。具体的,第一叶子节点可以不断循环执行上述如图3所示的业务流量的分配方法,或者第一叶子节点可以以一个固定周期循环执行上述如图3所示的业务流量的分配方法,具体的,可以根据实际使用需求进行设定,本发明不作具体限定。
本发明实施例提供的业务流量的分配方法,由于第一叶子节点可以通过发送探测报文探测到第一叶子节点与第二叶子节点之间所有路径的传输参数,因此第一叶子节点根据这些路径的传输参数分配待传输的业务流量,不仅可以保证第一叶子节点上的各条物 理链路之间达到业务流量的均衡,而且也能保证这些路径中各条路径上的其他物理链路也达到业务流量的均衡,从而能够保证源节点(例如第一叶子节点)和目的节点(例如第二叶子节点)之间的各条路径之间达到业务流量的均衡,进而保证报文不会丢失。
如图4所示,本发明实施例提供一种业务流量的分配装置,所述分配装置为第一叶子节点,所述分配装置用于执行以上方法中的第一叶子节点所执行的步骤。所述分配装置可以包括相应步骤所对应的模块。示例的,所述分配装置可以包括:
发送单元20,用于分别通过所述第一叶子节点上的连接骨干节点的多条物理链路中的每条物理链路周期性地发送探测报文;接收单元21,用于对于每条物理链路,通过所述物理链路接收返回的响应报文,所述响应报文为通过所述物理链路发送的探测报文到达第二叶子节点后由所述第二叶子节点回复的;计算单元22,用于对于每条路径,根据所述接收单元21从所述路径上接收到的多个响应报文,计算所述路径的传输参数,以得到每条路径的传输参数,其中,同一路径上接收到的响应报文中包含的路径的标识相同,每条路径包括至少两段物理链路,所述多条物理链路属于不同的路径;分配单元23,用于根据所述计算单元22计算的所述每条路径的传输参数,在所述多条物理链路上分配待传输的业务流量。
可选的,所述分配单元23,具体用于根据所述计算单元22计算的所述每条路径的传输参数,确定所述每条路径上的业务流量分配比例;并根据所述每条路径上的业务流量分配比例,在所述多条物理链路上分配所述待传输的业务流量。
可选的,所述路径的传输参数包括所述路径的时延、所述路径的抖动率以及所述路径的丢包率中的至少一个。
可选的,结合图4,如图5所示,所述分配装置还包括确定单元24,
所述确定单元24,用于在所述计算单元22根据从所述路径上接收到的多个响应报文,计算所述路径的传输参数之前,根据所述接收单元21接收到的响应报文中包含的路径的标识,确定从所述路径上接收到的响应报文,得到从所述路径上接收到的多个响应报文;所述计算单元22,具体用于根据所述确定单元24确定的从所述路径上接收到的多个响应报文的报文特征,计算所述路径的传输参数。
可选的,所述接收单元21从所述路径上接收到的多个响应报文中每个响应报文的报文特征包括:所述路径的标识、所述响应报文的序列号、所述响应报文的源网际互连协议IP地址、所述响应报文的目的IP地址、所述响应报文的四层源端口号、所述响应报文的四层目的端口号、所述响应报文到达所述第一叶子节点的时间以及与所述响应报文对应的探测报文离开所述第一叶子节点的时间。
可选的,所述分配单元23,具体用于对所述计算单元22计算的所述每条路径的传输参数分别进行量化;并根据量化后的所述每条路径的传输参数,确定所述每条路径上的业务流量分配比例。
本发明实施例中,所述分配装置/所述第一叶子节点可以为交换机、所述第二叶子节点和所述骨干节点也可以为交换机。具体的,第一叶子节点和第二叶子节点可以作为两层叶子-骨干拓扑架构的数据中心网络中的叶子交换机,骨干节点可以为两层叶子-骨干拓扑架构的数据中心网络中的骨干交换机。
可以理解,本实施例的分配装置可对应于上述如图3所示的实施例的业务流量的分配方法中的第一叶子节点,并且本实施例的分配装置中的各个模块的划分和/或功能等均是为了实现如图3所示的方法流程,为了简洁,在此不再赘述。
本发明实施例提供一种业务流量的分配装置,该分配装置为第一叶子节点,由于第一叶子节点可以通过发送探测报文探测到第一叶子节点与第二叶子节点之间所有路径的传输参数,因此第一叶子节点再根据这些路径的传输参数分配待传输的业务流量,不仅可以保证第一叶子节点上的各条物理链路之间达到业务流量的均衡,而且也能保证这些路径中各条路径上的其他物理链路也达到业务流量的均衡,从而能够保证源节点(例如第一叶子节点)和目的节点(例如第二叶子节点)之间的各条路径之间达到业务流量的均衡,进而保证报文不会丢失。
如图6所示,本发明实施例提供一种业务流量的分配装置,所述分配装置为第一叶子节点,所述第一叶子节点为交换机(具体的该交换机可以作为两层叶子-骨干拓扑架构的数据中心网络中的叶子交换机),所述交换机包括处理器30、接口电路31、存储器32和系统总线33。
所述存储器32用于存储计算机程序指令,所述处理器30、所述接口电路31和所述存储器32通过所述系统总线33相互连接,当所述交换机运行时,所述处理器30执行所述存储器32存储的所述计算机程序指令,以使所述交换机执行如图3所示的业务流量的分配方法。具体的业务流量的分配方法可参见上述如图3所示的实施例中的相关描述,此处不再赘述。
本实施例还提供一种存储介质,该存储介质可以包括所述存储器32。
所述处理器30可以为中央处理器(英文:central processing unit,缩写:CPU)。所述处理器30还可以为其他通用处理器、数字信号处理器(英文:digital signal processing,简称DSP)、专用集成电路(英文:application specific integrated circuit,简称ASIC)、现场可编程门阵列(英文:field-programmable gate array,简称FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
所述处理器30可以为专用处理器,该专用处理器可以包括基带处理芯片、射频处理芯片等中的至少一个。进一步地,该专用处理器还可以包括具有交换机其他专用处理功能的芯片。
所述存储器32可以包括易失性存储器(英文:volatile memory),例如随机存取存储器(英文:random-access memory,缩写:RAM);所述存储器32也可以包括非易失性存储器(英文:non-volatile memory),例如只读存储器(英文:read-only memory,缩写:ROM),快闪存储器(英文:flash memory),硬盘(英文:hard disk drive,缩写:HDD)或固态硬盘(英文:solid-state drive,缩写:SSD);所述存储器32还可以包括上述种类的存储器的组合。
所述系统总线33可以包括数据总线、电源总线、控制总线和信号状态总线等。本实施例中为了清楚说明,在图6中将各种总线都示意为系统总线33。
所述接口电路31具体可以是交换机上的收发器。该收发器可以为无线收发器。所 述处理器30通过所述接口电路31与其他设备,例如其他交换机之间进行报文的收发。
在具体实现过程中,上述如图3所示的方法流程中的各步骤均可以通过硬件形式的处理器30执行存储器32中存储的软件形式的计算机程序指令实现。为避免重复,此处不再赘述。
本发明实施例中,所述第二叶子节点和所述骨干节点也可以为交换机。具体的,第二叶子节点可以作为两层叶子-骨干拓扑架构的数据中心网络中的叶子交换机,骨干节点可以为两层叶子-骨干拓扑架构的数据中心网络中的骨干交换机。
本发明实施例提供一种业务流量的分配装置,该分配装置为第一叶子节点,该第一叶子节点为交换机,由于该交换机可以通过发送探测报文探测到该交换机与其他交换机之间所有路径的传输参数,因此该交换机再根据这些路径的传输参数分配待传输的业务流量,不仅可以保证该交换机上的各条物理链路之间达到业务流量的均衡,而且也能保证这些路径中各条路径上的其他物理链路也达到业务流量的均衡,从而能够保证该交换机和其他交换机之间的各条路径之间达到业务流量的均衡,进而保证报文不会丢失。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器执行本发明各个实施例所述方法的全部或部分步骤。所述存储介质是非短暂性(英文:non-transitory)介质,包括:快闪存储器、移动硬盘、只读存储器、随机存取存储器、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何 熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应所述以权利要求的保护范围为准。

Claims (13)

  1. 一种业务流量的分配方法,其特征在于,所述分配方法包括:
    第一叶子节点分别通过所述第一叶子节点上的连接骨干节点的多条物理链路中的每条物理链路周期性地发送探测报文;
    对于每条物理链路,所述第一叶子节点通过所述物理链路接收返回的响应报文,每个响应报文为通过所述物理链路发送的探测报文到达第二叶子节点后由所述第二叶子节点回复的;
    对于每条路径,所述第一叶子节点根据从所述路径上接收到的多个响应报文,计算所述路径的传输参数,以得到每条路径的传输参数,其中,同一路径上接收到的响应报文中包含的路径的标识相同,每条路径包括至少两段物理链路,所述多条物理链路属于不同的路径;
    所述第一叶子节点根据所述每条路径的传输参数,在所述多条物理链路上分配待传输的业务流量。
  2. 根据权利要求1所述的分配方法,其特征在于,所述第一叶子节点根据所述每条路径的传输参数,在所述多条物理链路上分配待传输的业务流量,包括:
    所述第一叶子节点根据所述每条路径的传输参数,确定所述每条路径上的业务流量分配比例;
    所述第一叶子节点根据所述每条路径上的业务流量分配比例,在所述多条物理链路上分配所述待传输的业务流量。
  3. 根据权利要求1或2所述的分配方法,其特征在于,
    所述路径的传输参数包括所述路径的时延、所述路径的抖动率以及所述路径的丢包率中的至少一个。
  4. 根据权利要求1至3任意一项所述的分配方法,其特征在于,所述第一叶子节点根据从所述路径上接收到的多个响应报文,计算所述路径的传输参数之前,所述方法还包括:
    所述第一叶子节点根据接收到的响应报文中包含的路径的标识,确定从所述路径上接收到的响应报文,得到从所述路径上接收到的多个响应报文;
    所述第一叶子节点根据从所述路径上接收到的多个响应报文,计算所述路径的传输参数,包括:
    所述第一叶子节点根据从所述路径上接收到的多个响应报文的报文特征,计算所述路径的传输参数。
  5. 根据权利要求4所述的分配方法,其特征在于,
    从所述路径上接收到的多个响应报文中的每个响应报文的报文特征包括:所述路径的标识、所述响应报文的序列号、所述响应报文的源网际互连协议IP地址、所述响应报文的目的IP地址、所述响应报文的四层源端口号、所述响应报文的四层目的端口号、所述响应报文到达所述第一叶子节点的时间以及与所述响应报文对应的探测报文离开所述第一叶子节点的时间。
  6. 根据权利要求2所述的分配方法,其特征在于,所述第一叶子节点根据所述每 条路径的传输参数,确定每条路径上的业务流量分配比例,包括:
    所述第一叶子节点对所述每条路径的传输参数分别进行量化;
    所述第一叶子节点根据量化后的所述每条路径的传输参数,确定所述每条路径上的业务流量分配比例。
  7. 一种业务流量的分配装置,其特征在于,所述分配装置为第一叶子节点,所述分配装置包括:
    发送单元,用于分别通过所述第一叶子节点上的连接骨干节点的多条物理链路中的每条物理链路周期性地发送探测报文;
    接收单元,用于对于每条物理链路,通过所述物理链路接收返回的响应报文,所述响应报文为通过所述物理链路发送的探测报文到达第二叶子节点后由所述第二叶子节点回复的;
    计算单元,用于对于每条路径,根据所述接收单元从所述路径上接收到的多个响应报文,计算所述路径的传输参数,以得到每条路径的传输参数,其中,同一路径上接收到的响应报文中包含的路径的标识相同,每条路径包括至少两段物理链路,所述多条物理链路属于不同的路径;
    分配单元,用于根据所述计算单元计算的所述每条路径的传输参数,在所述多条物理链路上分配待传输的业务流量。
  8. 根据权利要求7所述的分配装置,其特征在于,
    所述分配单元,具体用于根据所述计算单元计算的所述每条路径的传输参数,确定所述每条路径上的业务流量分配比例;并根据所述每条路径上的业务流量分配比例,在所述多条物理链路上分配所述待传输的业务流量。
  9. 根据权利要求7或8所述的分配装置,其特征在于,
    所述路径的传输参数包括所述路径的时延、所述路径的抖动率以及所述路径的丢包率中的至少一个。
  10. 根据权利要求7至9任意一项所述的分配装置,其特征在于,所述分配装置还包括确定单元,
    所述确定单元,用于在所述计算单元根据从所述路径上接收到的多个响应报文,计算所述路径的传输参数之前,根据所述接收单元接收到的响应报文中包含的路径的标识,确定从所述路径上接收到的响应报文,得到从所述路径上接收到的多个响应报文;
    所述计算单元,具体用于根据所述确定单元确定的从所述路径上接收到的多个响应报文的报文特征,计算所述路径的传输参数。
  11. 根据权利要求10所述的分配装置,其特征在于,
    所述接收单元从所述路径上接收到的多个响应报文中每个响应报文的报文特征包括:所述路径的标识、所述响应报文的序列号、所述响应报文的源网际互连协议IP地址、所述响应报文的目的IP地址、所述响应报文的四层源端口号、所述响应报文的四层目的端口号、所述响应报文到达所述第一叶子节点的时间以及与所述响应报文对应的探测报文离开所述第一叶子节点的时间。
  12. 根据权利要求8所述的分配装置,其特征在于,
    所述分配单元,具体用于对所述计算单元计算的所述每条路径的传输参数分别进行 量化;并根据量化后的所述每条路径的传输参数,确定所述每条路径上的业务流量分配比例。
  13. 一种业务流量的分配装置,其特征在于,所述分配装置为第一叶子节点,所述第一叶子节点为交换机,所述交换机包括处理器、接口电路、存储器和系统总线;
    所述存储器用于存储计算机程序指令,所述处理器、所述接口电路和所述存储器通过所述系统总线相互连接,当所述交换机运行时,所述处理器执行所述存储器存储的所述计算机程序指令,以使所述交换机执行如权利要求1至6任意一项所述的分配方法。
PCT/CN2017/070658 2016-01-26 2017-01-09 一种业务流量的分配方法及装置 WO2017128945A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2018557175A JP6608545B2 (ja) 2016-01-26 2017-01-09 サービストラヒック分配方法及び装置
EP17743560.9A EP3393094A4 (en) 2016-01-26 2017-01-09 Method and device for allocating service traffic
US16/045,373 US10735323B2 (en) 2016-01-26 2018-07-25 Service traffic allocation method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610051922.4A CN106998302B (zh) 2016-01-26 2016-01-26 一种业务流量的分配方法及装置
CN201610051922.4 2016-01-26

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/045,373 Continuation US10735323B2 (en) 2016-01-26 2018-07-25 Service traffic allocation method and apparatus

Publications (1)

Publication Number Publication Date
WO2017128945A1 true WO2017128945A1 (zh) 2017-08-03

Family

ID=59397404

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/070658 WO2017128945A1 (zh) 2016-01-26 2017-01-09 一种业务流量的分配方法及装置

Country Status (5)

Country Link
US (1) US10735323B2 (zh)
EP (1) EP3393094A4 (zh)
JP (1) JP6608545B2 (zh)
CN (1) CN106998302B (zh)
WO (1) WO2017128945A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3761559A4 (en) * 2018-03-19 2021-03-17 Huawei Technologies Co., Ltd. ERROR DETECTION METHOD, DEVICE AND SYSTEM

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102318284B1 (ko) * 2017-03-14 2021-10-28 삼성전자주식회사 데이터 전송 경로의 혼잡 탐지 방법 및 장치
US10419539B2 (en) 2017-05-19 2019-09-17 Bank Of America Corporation Data transfer path selection
US10848413B2 (en) 2017-07-12 2020-11-24 Nicira, Inc. Self-expansion of a layer 3 network fabric
CN108696428B (zh) * 2018-05-17 2020-10-27 北京大米科技有限公司 基于隧道技术的路由探测方法、路由节点和中心服务器
CN108881015B (zh) * 2018-05-24 2021-04-27 新华三技术有限公司 一种报文广播方法和装置
CN109194575B (zh) * 2018-08-23 2021-08-06 新华三技术有限公司 路由选择方法及装置
CN110943933B (zh) * 2018-09-25 2023-09-01 华为技术有限公司 一种实现数据传输的方法、装置和系统
CN109600322A (zh) * 2019-01-11 2019-04-09 国家电网有限公司 一种适用于电力骨干通信网的流量负载均衡控制方法
CN109802879B (zh) * 2019-01-31 2021-05-28 新华三技术有限公司 一种数据流路由方法及装置
US11334546B2 (en) * 2019-05-31 2022-05-17 Cisco Technology, Inc. Selecting interfaces for device-group identifiers
CN112039795B (zh) * 2019-06-04 2022-08-26 华为技术有限公司 一种负载分担方法、装置和网络设备
CN112583729B (zh) * 2019-09-27 2023-11-28 华为技术有限公司 一种路径的流量分配方法、网络设备及网络系统
CN113079090A (zh) * 2020-01-06 2021-07-06 华为技术有限公司 一种流量传输的方法、节点和系统
CN112152872B (zh) * 2020-08-31 2022-05-27 新华三大数据技术有限公司 一种网络亚健康检测方法及装置
CN112787925B (zh) * 2020-10-12 2022-07-19 中兴通讯股份有限公司 拥塞信息收集方法、确定最优路径方法、网络交换机
CN112491702A (zh) * 2020-11-17 2021-03-12 广州西麦科技股份有限公司 一种基于vpp路由器的多链路智能调度方法及装置
CN112491700B (zh) * 2020-12-14 2023-05-02 成都颜创启新信息技术有限公司 网络路径调整方法、系统、装置、电子设备及存储介质
US11811638B2 (en) * 2021-07-15 2023-11-07 Juniper Networks, Inc. Adaptable software defined wide area network application-specific probing
CN114448865B (zh) * 2021-12-23 2024-01-02 东莞市李群自动化技术有限公司 业务报文的处理方法、系统、设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102118319A (zh) * 2011-04-06 2011-07-06 杭州华三通信技术有限公司 流量负载均衡方法和装置
CN103825839A (zh) * 2014-03-17 2014-05-28 杭州华三通信技术有限公司 一种基于聚合链路的报文传输方法和设备
CN104363181A (zh) * 2014-08-28 2015-02-18 杭州华三通信技术有限公司 流量传输控制方法及装置
US20150156119A1 (en) * 2013-12-03 2015-06-04 International Business Machines Corporation Autonomic Traffic Load Balancing in Link Aggregation Groups
CN104796346A (zh) * 2014-01-16 2015-07-22 中国移动通信集团公司 一种实现l3vpn业务负载分担的方法、设备及系统

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7012896B1 (en) * 1998-04-20 2006-03-14 Alcatel Dedicated bandwidth data communication switch backplane
US7154858B1 (en) * 1999-06-30 2006-12-26 Cisco Technology, Inc. System and method for measuring latency of a selected path of a computer network
US6662223B1 (en) * 1999-07-01 2003-12-09 Cisco Technology, Inc. Protocol to coordinate network end points to measure network latency
JP2001298482A (ja) 2000-04-13 2001-10-26 Nec Corp 分散型障害回復装置及びシステムと方法並びに記録媒体
JP4305091B2 (ja) * 2003-08-05 2009-07-29 日本電気株式会社 マルチホーミング負荷分散方法およびその装置
US9197533B1 (en) * 2005-05-09 2015-11-24 Cisco Technology, Inc. Technique for maintaining and enforcing relative policies with thresholds
CN101335689B (zh) * 2007-06-26 2011-11-02 华为技术有限公司 跟踪路由的实现方法及设备
US8675502B2 (en) * 2008-01-30 2014-03-18 Cisco Technology, Inc. Relative one-way delay measurements over multiple paths between devices
US7983163B2 (en) * 2008-12-11 2011-07-19 International Business Machines Corporation System and method for implementing adaptive load sharing to balance network traffic
JP5789947B2 (ja) * 2010-10-04 2015-10-07 沖電気工業株式会社 データ転送装置
JP5617582B2 (ja) * 2010-12-08 2014-11-05 富士通株式会社 プログラム、情報処理装置、及び情報処理方法
US9806835B2 (en) * 2012-02-09 2017-10-31 Marvell International Ltd. Clock synchronization using multiple network paths
US9898317B2 (en) * 2012-06-06 2018-02-20 Juniper Networks, Inc. Physical path determination for virtual network packet flows
JP6101114B2 (ja) * 2013-03-01 2017-03-22 日本放送協会 パケット伝送装置およびそのプログラム
CN104092628B (zh) * 2014-07-23 2017-12-08 新华三技术有限公司 一种流量分配方法和网络设备
US10218629B1 (en) * 2014-12-23 2019-02-26 Juniper Networks, Inc. Moving packet flows between network paths
CN105119778B (zh) * 2015-09-09 2018-09-07 华为技术有限公司 测量时延的方法和设备
US9923828B2 (en) * 2015-09-23 2018-03-20 Cisco Technology, Inc. Load balancing with flowlet granularity
CN106921456B (zh) * 2015-12-24 2018-06-19 中国科学院沈阳自动化研究所 基于ptp协议的多跳无线回程网络时间同步误差补偿方法
CN107547418B (zh) * 2016-06-29 2019-07-23 华为技术有限公司 一种拥塞控制方法和装置
CN107634912B (zh) * 2016-07-19 2020-04-28 华为技术有限公司 负载均衡方法、装置及设备
US10715446B2 (en) * 2016-09-12 2020-07-14 Huawei Technologies Co., Ltd. Methods and systems for data center load balancing

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102118319A (zh) * 2011-04-06 2011-07-06 杭州华三通信技术有限公司 流量负载均衡方法和装置
US20150156119A1 (en) * 2013-12-03 2015-06-04 International Business Machines Corporation Autonomic Traffic Load Balancing in Link Aggregation Groups
CN104796346A (zh) * 2014-01-16 2015-07-22 中国移动通信集团公司 一种实现l3vpn业务负载分担的方法、设备及系统
CN103825839A (zh) * 2014-03-17 2014-05-28 杭州华三通信技术有限公司 一种基于聚合链路的报文传输方法和设备
CN104363181A (zh) * 2014-08-28 2015-02-18 杭州华三通信技术有限公司 流量传输控制方法及装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3393094A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3761559A4 (en) * 2018-03-19 2021-03-17 Huawei Technologies Co., Ltd. ERROR DETECTION METHOD, DEVICE AND SYSTEM

Also Published As

Publication number Publication date
JP6608545B2 (ja) 2019-11-20
CN106998302A (zh) 2017-08-01
EP3393094A4 (en) 2018-12-19
CN106998302B (zh) 2020-04-14
EP3393094A1 (en) 2018-10-24
US20180331955A1 (en) 2018-11-15
US10735323B2 (en) 2020-08-04
JP2019503153A (ja) 2019-01-31

Similar Documents

Publication Publication Date Title
WO2017128945A1 (zh) 一种业务流量的分配方法及装置
JP7417825B2 (ja) スライスベースルーティング
US8989049B2 (en) System and method for virtual portchannel load balancing in a trill network
US20130003549A1 (en) Resilient Hashing for Load Balancing of Traffic Flows
US9654401B2 (en) Systems and methods for multipath load balancing
EP2928130B1 (en) Systems and methods for load balancing multicast traffic
CN101789949B (zh) 一种实现负荷分担的方法和路由设备
WO2020135087A1 (zh) 一种通信方法、装置及系统
EP4149066A1 (en) Communication method and apparatus
US9030926B2 (en) Protocol independent multicast last hop router discovery
US20240179095A1 (en) Method and apparatus for determining hash algorithm information for load balancing, and storage medium
WO2015039616A1 (zh) 一种报文处理方法及设备
EP3471351B1 (en) Method and device for acquiring path information about data packet
CN109792405B (zh) 用于传输节点中共享缓冲器分配的方法和设备
JP6633502B2 (ja) 通信装置
JP2016054342A (ja) 通信装置、通信方法及びプログラム
WO2024021878A1 (zh) 一种发送负载信息的方法、发送报文的方法及装置
CN113630319B (zh) 一种数据分流方法、装置及相关设备
JP2015156589A (ja) 通信装置及び通信プログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17743560

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2017743560

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2018557175

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2017743560

Country of ref document: EP

Effective date: 20180718

NENP Non-entry into the national phase

Ref country code: DE