CN113472675A - Routing information transmission method, device, system and storage medium


Info

Publication number: CN113472675A
Application number: CN202010246879.3A
Authority: CN (China)
Prior art keywords: node, interface, load sharing, nodes, hop count
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 肖守和, 任江兴, 王临春, 林云, 亚利克斯·塔尔
Assignee (original and current): Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Priority: CN202010246879.3A; PCT/CN2021/082969 (published as WO2021197196A1)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/02 Topology update or discovery
    • H04L 45/04 Interdomain routing, e.g. hierarchical routing
    • H04L 45/06 Deflection routing, e.g. hot-potato routing
    • H04L 45/12 Shortest path evaluation
    • H04L 45/123 Evaluation of link metrics
    • H04L 45/124 Shortest path evaluation using a combination of metrics
    • H04L 45/28 Routing or path finding of packets in data switching networks using route fault recovery
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/12 Avoiding congestion; Recovering from congestion
    • H04L 47/122 Avoiding congestion; Recovering from congestion by diverting traffic away from congested entities
    • H04L 47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering

Abstract

The application provides a routing information transmission method, device, system and storage medium. A first node determines, according to current state information of at least one interface of the first node, the load sharing weight from other nodes to the first node through a first interface of the first node, and sends, through the first interface, the load sharing weight and the hop count from other nodes to the first node. A second node determines, according to the load sharing weights from the second node to the first node through at least one interface of the second node, the load sharing weight from other nodes to the first node through a second interface of the second node; determines, according to the hop counts from the second node to the first node through the at least one interface of the second node, the hop count from other nodes to the first node through the second interface; and sends, through the second interface, the load sharing weight and the hop count from other nodes to the first node. Link congestion is thereby reduced.

Description

Routing information transmission method, device, system and storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method, an apparatus, a system, and a storage medium for transmitting routing information.
Background
Fabric networks based on the Spine-Leaf architecture are widely used in data center networks, and most computing and storage services can be carried by such networks. Typical Fabric networks include the Clos architecture, the Fat-Tree architecture, and the like.
In a Fabric network based on the Spine-Leaf structure, there are usually one or more paths to any Leaf node or Spine node; that is, for each node there are one or more links to any Leaf node or Spine node, and the load sharing proportion of each link is fixed. For example, fig. 1 is a schematic view of load sharing provided by the present application. As shown in fig. 1, there are multiple paths from L1 to L2. The load sharing ratio of the upstream L1 to S1 and S2 is 1:1, and this ratio stays unchanged even if the downstream link from S1 to L2 fails, so the link from S1 to L2 becomes congested.
In summary, in the prior art, because the load sharing ratio of each link of each node is fixed, the failure of some links results in link congestion.
Disclosure of Invention
The application provides a routing information transmission method, apparatus, system and storage medium, which reduce link congestion.
In a first aspect, the present application provides a switching network system, including a plurality of leaf nodes and at least one spine node. The first node is a leaf node of the plurality of leaf nodes, and the second node is a spine node of the at least one spine node. The first node is configured to: determine, according to current state information of at least one interface of the first node, the load sharing weight from other nodes to the first node through a first interface of the first node, and send, through the first interface, the load sharing weight and the hop count from other nodes to the first node, where the first interface is one of the at least one interface of the first node. The second node is configured to: determine, according to the load sharing weights from the second node to the first node through at least one interface of the second node, the load sharing weight from other nodes to the first node through a second interface of the second node; determine, according to the hop counts from the second node to the first node through the at least one interface of the second node, the hop count from other nodes to the first node through the second interface; and send, through the second interface, the load sharing weight and the hop count from other nodes to the first node, where the second interface is one of the at least one interface of the second node.
Because each node sends load sharing weights and hop counts to the other nodes, a downstream bandwidth change caused by an interface failure can be quickly propagated upstream, triggering the upstream to switch interfaces or adjust interface load sharing proportions, thereby reducing link congestion.
Optionally, the load sharing weight and the hop count from the other node to the first node through the first interface of the first node are carried in the first message. Or the load sharing weight from the other node to the first node through the first interface of the first node is carried in the second message, and the hop count from the other node to the first node through the first interface of the first node is carried in the third message.
Optionally, the first message further includes at least one of the following: the type of the first message, the auxiliary identifier of the first message, the identifier of the first node, and the number of first nodes. The auxiliary identifier of the first message indicates whether the first message is a periodically sent message or a message sent when the first interface fails, and the number of first nodes is 1. The second message further includes at least one of the following: the type of the second message, the auxiliary identifier of the second message, the identifier of the first node, and the number of first nodes. The auxiliary identifier of the second message indicates whether the second message is a periodically sent message or a message sent when the first interface fails, and the number of first nodes is 1. The third message further includes at least one of the following: the type of the third message, the auxiliary identifier of the third message, the identifier of the first node, and the number of first nodes. The auxiliary identifier of the third message indicates whether the third message is a periodically sent message or a message sent when the first interface fails, and the number of first nodes is 1.
Optionally, the first node is specifically configured to: determine, on the forwarding plane of the first node and according to the current state information of the at least one interface of the first node, the load sharing weight from other nodes to the first node through the first interface of the first node, and send, through the first interface, the load sharing weight and the hop count from other nodes to the first node. The forwarding plane is implemented by a hardware logic circuit, FPGA logic code, microcode of a programmable NP, or multi-core CPU software code, so that end-to-end millisecond-level fault convergence can be achieved.
Optionally, the first node is specifically configured to: periodically send, through the first interface, the load sharing weight and the hop count from other nodes to the first node through the first interface; or send them through the first interface when the first interface fails. Sending the load sharing weight and the hop count in the second manner reduces the overhead of the first node.
Optionally, the second node is further configured to: receive the load sharing weights and hop counts from the second node to the first node through the at least one interface of the second node.
Optionally, the second node is specifically configured to: if the second interface is an uplink interface or a horizontal interface, determine the load sharing weight from other nodes to the first node through the second interface according to the load sharing weights from the second node to the first node through the interfaces, other than the horizontal split group, among the at least one interface of the second node. The horizontal split group is defined as follows: if the second interface is an uplink interface and the second node has other uplink interfaces, the horizontal split group includes the second interface and the other uplink interfaces of the second node; if the second interface is an uplink interface and the second node has no other uplink interface, the horizontal split group includes only the second interface; if the second interface is a horizontal interface and the second node has other horizontal interfaces, the horizontal split group includes the second interface and the other horizontal interfaces of the second node; and if the second interface is a horizontal interface and the second node has no other horizontal interface, the horizontal split group includes only the second interface. If the second interface is a downlink interface, the load sharing weight from other nodes to the first node through the second interface is determined according to the load sharing weights from the second node to the first node through the at least one interface of the second node. Excluding the horizontal split group prevents loops in message transmission.
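The membership rule for the horizontal split group can be sketched as follows, assuming each interface of the node is tagged with a role of uplink, downlink, or horizontal; the function name and role labels are illustrative assumptions and not part of the application.

```python
def horizontal_split_group(second_interface, interface_roles):
    """Return the set of interfaces excluded when advertising out of `second_interface`.

    `interface_roles` maps an interface id to its role: "up", "down", or "horizontal".
    """
    role = interface_roles[second_interface]
    if role == "up":
        # The second interface plus every other uplink interface of the node.
        return {i for i, r in interface_roles.items() if r == "up"}
    if role == "horizontal":
        # The second interface plus every other horizontal interface of the node.
        return {i for i, r in interface_roles.items() if r == "horizontal"}
    # Downlink interface: no horizontal split group is applied.
    return set()
```

If the node has no other uplink (or horizontal) interface, the set degenerates to the second interface alone, matching the rule above.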
The first option: if the second interface is an uplink interface or a horizontal interface, the load sharing weight from other nodes to the first node through the second interface is determined as the sum of the load sharing weights from the second node to the first node through the interfaces, other than the horizontal split group, corresponding to the smallest non-zero hop count among the at least one interface of the second node. If the second interface is a downlink interface, the load sharing weight from other nodes to the first node through the second interface is determined as the sum of the load sharing weights from the second node to the first node through the interfaces corresponding to the smallest non-zero hop count among the at least one interface of the second node.
The second option: if the second interface is an uplink interface or a horizontal interface, the load sharing weight from other nodes to the first node through the second interface of the second node is determined according to the load sharing weights from the second node to the first node through the interfaces, other than the horizontal split group, corresponding to the smallest non-zero hop count among the at least one interface of the second node, the number of interfaces of the second node, and the number of normal interfaces of the second node. If the second interface is a downlink interface, the load sharing weight from other nodes to the first node through the second interface of the second node is determined according to the load sharing weights from the second node to the first node through the interfaces corresponding to the smallest non-zero hop count among the at least one interface of the second node, the number of interfaces of the second node, and the number of normal interfaces of the second node.
The third option: if the second interface is an uplink interface or a horizontal interface, the load sharing weight from other nodes to the first node through the second interface of the second node is determined according to the load sharing weights from the second node to the first node through the interfaces, other than the horizontal split group, corresponding to the smallest non-zero hop count among the at least one interface of the second node, the quality assessment values of those interfaces, the number of those interfaces, the number of interfaces of the second node, and the number of normal interfaces of the second node. If the second interface is a downlink interface, the load sharing weight from other nodes to the first node through the second interface of the second node is determined according to the load sharing weights from the second node to the first node through the interfaces corresponding to the smallest non-zero hop count among the at least one interface of the second node, the quality assessment values of those interfaces, the number of interfaces of the second node, and the number of normal interfaces of the second node.
The load sharing weight from other nodes to the first node through the second interface of the second node can be effectively calculated by any of the above three options. It can also be seen from the three options that each node sends load sharing weights to the other nodes, so that a downstream bandwidth change caused by an interface failure can be quickly propagated upstream, triggering the upstream to switch interfaces or adjust interface load sharing proportions.
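A minimal sketch of the first option, assuming per-interface (hop count, weight) entries for the target leaf node as in Table 1 later in the text; names are illustrative, and the second and third options would additionally take the interface counts and quality assessment values into account.

```python
def option1_weight(entries, excluded=frozenset()):
    """First option: sum the load sharing weights of the interfaces whose hop count
    to the target leaf node is the smallest non-zero value.

    `entries` maps interface id -> (hop_count, weight); a hop count of 0 means unreachable.
    `excluded` is the horizontal split group (empty for a downlink advertisement).
    """
    reachable = {i: (h, w) for i, (h, w) in entries.items()
                 if i not in excluded and h != 0}
    if not reachable:
        return 0  # the leaf node is unreachable through the remaining interfaces
    min_hop = min(h for h, _ in reachable.values())
    return sum(w for h, w in reachable.values() if h == min_hop)

# With the Table 1 entries of S0 for L0, {0: (1, 2), 1: (0, 0), 2: (1, 2), 3: (0, 0)},
# this yields 2 + 2 = 4.
```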
Optionally, the second node is specifically configured to: periodically send, through the second interface, the load sharing weight and the hop count from other nodes to the first node through the second interface; or send them through the second interface when the second interface fails. Sending the load sharing weight and the hop count in the second manner reduces the overhead of the second node.
Optionally, the load sharing weight and the hop count that are respectively sent to the first node by the other nodes through the second interface of the second node are carried in the fourth message. Or the load sharing weights respectively from other nodes to the first node through the second interface of the second node are carried in the fifth message, and the hop counts respectively from other nodes to the first node through the second interface of the second node are carried in the sixth message.
Optionally, the fourth message further includes at least one of the following: the type of the fourth message, the auxiliary identifier of the fourth message, the identifier of the first leaf node among the plurality of leaf nodes, and the number of the plurality of leaf nodes. The auxiliary identifier of the fourth message indicates whether the fourth message is a periodically sent message or a message sent when the second interface fails. The fifth message further includes at least one of the following: the type of the fifth message, the auxiliary identifier of the fifth message, the identifier of the first leaf node among the plurality of leaf nodes, and the number of the plurality of leaf nodes. The auxiliary identifier of the fifth message indicates whether the fifth message is a periodically sent message or a message sent when the second interface fails. The sixth message further includes at least one of the following: the type of the sixth message, the auxiliary identifier of the sixth message, the identifier of the first leaf node among the plurality of leaf nodes, and the number of the plurality of leaf nodes. The auxiliary identifier of the sixth message indicates whether the sixth message is a periodically sent message or a message sent when the second interface fails.
Optionally, the second node is specifically configured to: if the second interface is an uplink interface or a horizontal interface, determine the hop count from other nodes to the first node through the second interface according to the hop counts from the second node to the first node through the interfaces, other than the horizontal split group, among the at least one interface of the second node; and if the second interface is a downlink interface, determine the hop count from other nodes to the first node through the second interface according to the hop counts from the second node to the first node through the at least one interface of the second node. In this way, the hop count from other nodes to the first node through the second interface can be effectively obtained.
Optionally, if the second interface is an uplink interface or a horizontal interface, the smallest non-zero hop count is selected from the hop counts from the second node to the first node through the interfaces, other than the horizontal split group, among the at least one interface of the second node, and the hop count from other nodes to the first node through the second interface of the second node is determined as that non-zero hop count plus 1. If the second interface is a downlink interface, the smallest non-zero hop count is selected from the hop counts from the second node to the first node through the at least one interface of the second node, and the hop count from other nodes to the first node through the second interface of the second node is determined as that non-zero hop count plus 1. In this way, the hop count from other nodes to the first node through the second interface can be effectively obtained.
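A corresponding sketch of the hop count rule above (smallest non-zero hop count among the usable interfaces, plus 1); names and the table layout are illustrative assumptions.

```python
def advertised_hop_count(entries, excluded=frozenset()):
    """Hop count advertised for a leaf node: the smallest non-zero hop count among
    the interfaces outside the horizontal split group, plus 1; 0 (unreachable)
    if no such interface exists. `entries` maps interface id -> (hop_count, weight)."""
    hops = [h for i, (h, _) in entries.items() if i not in excluded and h != 0]
    return min(hops) + 1 if hops else 0
```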
Optionally, the second node is specifically configured to: determine, on the forwarding plane of the second node and according to the load sharing weights from the second node to the first node through the at least one interface of the second node, the load sharing weight from other nodes to the first node through the second interface of the second node; determine, according to the hop counts from the second node to the first node through the at least one interface of the second node, the hop count from other nodes to the first node through the second interface; and send, through the second interface, the load sharing weight and the hop count from other nodes to the first node. The forwarding plane is implemented by a hardware logic circuit, FPGA logic code, microcode of a programmable NP, or multi-core CPU software code, so that end-to-end millisecond-level fault convergence can be achieved.
Optionally, if the second node does not receive the load sharing weight and/or the hop count from the second node to the first node through the second interface within the preset time period, the load sharing weight and the hop count from the second node to the first node through the second interface are both 0. Or, if the second node does not receive the detection message transmitted through the second interface within the preset time length, the load sharing weight and the hop count from the second node to the first node through the second interface are both 0, and the detection message is used for detecting whether the second interface is abnormal.
A routing information transmission method, an apparatus, a storage medium, and a computer program product are provided below; for their content and effects, reference may be made to the system described above, and details are not repeated here.
In a second aspect, the present application provides a routing information transmission method, applied to a first node, where a switching network system includes a plurality of leaf nodes and at least one spine node, and the first node is a leaf node among the plurality of leaf nodes. The method includes: determining, according to current state information of at least one interface of the first node, the load sharing weight from other nodes to the first node through a first interface of the first node; and sending, through the first interface, the load sharing weight and the hop count from other nodes to the first node, where the first interface is one of the at least one interface of the first node.
In a third aspect, the present application provides a routing information transmission method, applied to a second node, where a switching network system includes a plurality of leaf nodes and at least one spine node, and the second node is a spine node among the at least one spine node. The method includes: determining, according to the load sharing weights from the second node to the first node through at least one interface of the second node, the load sharing weight from other nodes to the first node through a second interface of the second node; determining, according to the hop counts from the second node to the first node through the at least one interface of the second node, the hop count from other nodes to the first node through the second interface; and sending, through the second interface, the load sharing weight and the hop count from other nodes to the first node, where the second interface is one of the at least one interface of the second node.
In a fourth aspect, the present application provides a node, comprising: at least one processor. And a memory communicatively coupled to the at least one processor. Wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method according to the second or third aspect.
In a fifth aspect, the present application provides a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method according to the second or third aspect.
In a sixth aspect, the present application provides a computer program product comprising computer instructions for causing a computer to perform the method according to the second or third aspect.
The application provides a routing information transmission method, apparatus, system and storage medium. First, each node sends load sharing weights and hop counts to the other nodes, so that a downstream bandwidth change caused by an interface failure can be quickly propagated upstream, triggering the upstream to switch interfaces or adjust interface load sharing proportions and thereby reducing link congestion. Second, the functions of the nodes are all implemented on the forwarding plane, and the forwarding plane is implemented by a hardware logic circuit, FPGA logic code, microcode of a programmable NP, or multi-core CPU software code, so that end-to-end millisecond-level fault convergence can be achieved. Third, if the interface of a node is an uplink interface, loops can be prevented by excluding the horizontal split group.
Drawings
Fig. 1 is a schematic view of load sharing provided in the present application;
FIG. 2 is a schematic diagram of the Clos architecture provided herein;
FIG. 3 is a schematic diagram of the Fat-Tree architecture provided herein;
FIG. 4 is a schematic diagram of a hybrid multi-stage Clos network architecture provided herein;
FIG. 5 is a schematic diagram of a variant Fat-Tree architecture provided herein;
FIG. 6 is a schematic diagram of a Fabric network based on a Spine-Leaf structure according to the present application;
fig. 7 is a schematic format diagram of a second packet according to an embodiment of the present application;
fig. 8 is a schematic format diagram of a third packet according to an embodiment of the present application;
fig. 9 is a schematic diagram of a first node according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a Fabric network based on a Spine-Leaf architecture according to another embodiment of the present application;
FIG. 11 is a schematic diagram of a Fabric network based on a Spine-Leaf structure according to yet another embodiment of the present application;
FIG. 12 is a schematic diagram of a Fabric network based on a Spine-Leaf structure according to yet another embodiment of the present application;
FIG. 13 is a schematic diagram of a Fabric network based on a Spine-Leaf structure according to yet another embodiment of the present application;
FIG. 14 is a schematic diagram of a Fabric network based on a Spine-Leaf structure according to yet another embodiment of the present application;
fig. 15 is a schematic format diagram of a fifth packet according to an embodiment of the present application;
fig. 16 is a schematic format diagram of a sixth packet according to an embodiment of the present application;
fig. 17 is a schematic diagram of a second node according to an embodiment of the present application;
fig. 18 is a schematic diagram of a Fabric network based on a Spine-Leaf structure according to yet another embodiment of the present application;
fig. 19 is a schematic diagram of a Fabric network based on a Spine-Leaf structure according to yet another embodiment of the present application;
fig. 20 is a schematic diagram of a Fabric network based on a Spine-Leaf structure according to yet another embodiment of the present application;
fig. 21 is a schematic diagram of a Fabric network based on a Spine-Leaf structure according to yet another embodiment of the present application;
fig. 22 is a schematic diagram of a Fabric network based on a Spine-Leaf structure according to yet another embodiment of the present application;
fig. 23 is a schematic diagram of a Fabric network based on a Spine-Leaf structure according to yet another embodiment of the present application;
fig. 24 is a schematic diagram of a Fabric network based on a Spine-Leaf structure according to yet another embodiment of the present application;
FIG. 25 is a schematic illustration of a link type table of node S3 in FIG. 6;
FIG. 26 is a schematic illustration of the link-level partition mask table of node S3 of FIG. 6;
FIG. 27 is a schematic illustration of a link state table of node S3 of FIG. 6;
FIG. 28 is a schematic diagram of a hop-count table of node S3 of FIG. 6;
FIG. 29 is a schematic diagram of a weight table for node S3 of FIG. 6;
FIG. 30 is a schematic diagram of a hop-count table of node S3 of FIG. 6;
FIG. 31 is a schematic diagram of the initial weight table of node L0 of FIG. 6;
FIG. 32 is a schematic diagram of a node in a Fabric network according to an embodiment of the present application;
FIG. 33 is a schematic diagram of a Fabric network based on a Spine-Leaf architecture according to yet another embodiment of the present application;
FIG. 34 is a partial schematic view of the Fabric network of FIG. 33 according to one embodiment of the present application;
FIG. 35 is a partial schematic view of the Fabric network of FIG. 33 according to another embodiment of the present application;
FIG. 36 is a schematic diagram of a Fabric network based on a Spine-Leaf structure according to yet another embodiment of the present application;
FIG. 37 is a partial schematic view of the Fabric network of FIG. 36 according to one embodiment of the present application;
FIG. 38 is a partial schematic view of the Fabric network of FIG. 36 according to another embodiment of the present application;
fig. 39 is a flowchart of a routing information transmission method according to an embodiment of the present application;
fig. 40 is a flowchart of a method for transmitting routing information according to another embodiment of the present application;
fig. 41 is a schematic diagram of a node 800 according to an embodiment of the present application;
fig. 42 is a schematic diagram of a first node according to an embodiment of the present application;
fig. 43 is a schematic diagram of a second node according to an embodiment of the present application.
Detailed Description
As described above, in a Fabric network based on the Spine-Leaf structure, for each node there are one or more links to any Leaf node or Spine node, and the load sharing ratio of each link is fixed. Because the load sharing ratio of each link of each node is fixed, the failure of some links results in link congestion.
To solve the above technical problem, the present application provides a routing information transmission method, apparatus, system and storage medium. The inventive concept of the application is as follows: each node sends, through each of its interfaces, the load sharing weight and the hop count from other nodes to any leaf node through that interface; the node directly connected to it determines, according to the received load sharing weight, the load sharing weight from other nodes to that leaf node through its own interface, and determines, according to the received hop count, the hop count from other nodes to that leaf node through its own interface. It follows that when the load sharing weight on any interface changes, the nodes in the whole network can adjust the load sharing weights on their own interfaces accordingly, thereby alleviating link congestion.
The method is applicable to various Fabric networks comprising a plurality of Leaf nodes and at least one Spine node, namely, various Fabric networks based on Spine-Leaf structures. Typical Fabric networks have a Clos architecture and a Fat-Tree architecture, wherein the Fabric level of the Fat-Tree architecture can be 2 layers, 3 layers or more. For example: fig. 2 is a schematic diagram of a Clos architecture provided in the present application, and as shown in fig. 2, the Clos architecture is composed of a layer of leaf nodes and a layer of spine nodes, and each spine node is directly connected to each leaf node. Fig. 3 is a schematic diagram of a Fat-Tree architecture provided in the present application, where the Fat-Tree architecture includes two layers of spine nodes and one layer of leaf nodes, and two leaf nodes and two spine nodes directly connected to the two leaf nodes form a convergence region (POD). The present application is also applicable to a multi-stage hybrid structure, fig. 4 is a schematic diagram of a hybrid multi-stage Clos network architecture provided by the present application, and as shown in fig. 4, such a multi-stage hybrid structure corresponds to a partial leaf node zoom-out scenario. Fig. 5 is a schematic diagram of a variant Fat-Tree architecture provided herein, wherein the top two layers are spine node layers and the bottom layer is a leaf node layer.
Any node in the Fabric network based on the Spine-Leaf structure may be part or all of a router or a switch, and the like, where part of the router may be a processor or a chip in the router, and the like. Likewise, portions of the switch may be processors or chips in the switch, and so on. The Leaf nodes are generally used for connecting servers to enable each server to perform message transmission through a Fabric network based on a Spine-Leaf structure.
The technical scheme of the application is explained in detail as follows:
The present application provides a switching network system, which is the above Fabric network based on the Spine-Leaf structure and includes a plurality of leaf nodes and at least one spine node. For example, the switching network system may be the Clos network architecture shown in fig. 2, the Fat-Tree network architecture shown in fig. 3, the hybrid multi-stage Clos network architecture shown in fig. 4, or the variant Fat-Tree network architecture shown in fig. 5, but it may also be another type of Fabric network based on the Spine-Leaf architecture. For example, fig. 6 is a schematic diagram of a Fabric network based on the Spine-Leaf structure provided by the present application; as shown in fig. 6, the network architecture includes two spine node layers and one leaf node layer, where S in fig. 6 represents a spine node and L represents a leaf node. The technical solution of the present application is explained below by taking the network architecture shown in fig. 6 as an example. First, it should be noted that in the present application the first node is one of the plurality of leaf nodes and the second node is one of the at least one spine node. The first node is configured to determine, according to current state information of at least one interface of the first node, the load sharing weight from other nodes to the first node through a first interface of the first node, and to send, through the first interface, the load sharing weight and the hop count from other nodes to the first node, where the first interface is one of the at least one interface of the first node. The second node is configured to determine, according to the load sharing weights from the second node to the first node through at least one interface of the second node, the load sharing weight from other nodes to the first node through a second interface of the second node; determine, according to the hop counts from the second node to the first node through the at least one interface of the second node, the hop count from other nodes to the first node through the second interface; and send, through the second interface, the load sharing weight and the hop count from other nodes to the first node, where the second interface is one of the at least one interface of the second node.
The at least one interface of the first node may be all interfaces of the first node, or may be the available (i.e., enabled) interfaces of the first node.
Optionally, the current state information of the at least one interface of the first node includes at least one of the following: the current bandwidth information corresponding to the at least one interface of the first node, the proportion of available interfaces among the at least one interface of the first node, the quality assessment value corresponding to the at least one interface of the first node, and the average queue length corresponding to the at least one interface of the first node. The quality assessment value of each interface may be determined according to the ratio of the interface's historical failure count to its historical total transmission count, but is not limited thereto: the higher the ratio, the lower the quality assessment value of the interface; conversely, the lower the ratio, the higher the quality assessment value. For any interface, packets or data are stored in the queue corresponding to the interface only when the physical line of the interface is busy; when a packet is stored in the queue, if the current queue length exceeds the maximum available length, the packet is dropped. The average queue length of an interface refers to the average of the actually used length of the interface's queue within a preset time period; it measures the congestion degree of the interface, and the larger the average length, the more congested the interface. Conversely, the shorter the average queue length of an interface, the less congested the interface.
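As an illustration of the quality assessment value described above: the application only states that the value is derived from the ratio of historical failures to total transmissions and that a higher ratio means a lower value; the linear mapping and the scale used in the sketch below are assumptions for illustration.

```python
def quality_assessment_value(historical_failures, historical_transmissions, scale=10):
    """Illustrative quality value: a lower failure ratio yields a higher value.

    The linear mapping and the scale of 10 are assumptions; the application only
    specifies the inverse relationship between the failure ratio and the value.
    """
    if historical_transmissions == 0:
        return scale
    failure_ratio = historical_failures / historical_transmissions
    return scale * (1.0 - failure_ratio)
```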
For example, taking the first node as L0 in fig. 6, it has four interfaces 0, 1, 2, and 3 whose current bandwidths are 100M, 50M, 100M, and 50M. According to the ratio of the current bandwidths of the four interfaces, 2:1:2:1, the load sharing weights of interfaces 0, 1, 2, and 3 are set to 2, 1, 2, and 1, respectively.
For example: assuming that the available interface ratio of the four interfaces 0, 1, 2, and 3 of L0 is 1/2, in this case, it is assumed that L0 can obtain the corresponding relationship between the available interface ratio and the load sharing weights of the four interfaces, and for example, by means of table lookup, it is determined that when the available interface ratio is 1/2, the load sharing weights of the interfaces 0, 1, 2, and 3 are 2, 1, 2, and 1, respectively.
For example, assuming the quality assessment values of the four interfaces 0, 1, 2, and 3 of L0 are 10, 5, 10, and 5, the ratio of the quality assessment values of the four interfaces is 2:1:2:1, so the load sharing weights of interfaces 0, 1, 2, and 3 are set to 2, 1, 2, and 1, respectively.
For example, assuming the average queue lengths corresponding to the four interfaces 0, 1, 2, and 3 of L0 are 200, 100, 200, and 100, the ratio of the average queue lengths of the four interfaces is 2:1:2:1. Because a longer average queue length indicates a more congested interface, the load sharing weights of interfaces 0, 1, 2, and 3 are set inversely, to 1, 2, 1, and 2, respectively.
The above examples determine the load sharing weight from other nodes to the first node through the first interface of the first node for the cases in which the current state information of the at least one interface of the first node is, respectively, the current bandwidth information, the proportion of available interfaces, the quality assessment values, and the average queue lengths corresponding to the at least one interface of the first node. The first node is not limited to these and may also combine any of the above four manners to determine the load sharing weight from other nodes to the first node through the first interface. For example, the four groups of load sharing weights obtained for interfaces 0, 1, 2, and 3 in the four manners may be averaged to obtain the final load sharing weights.
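A sketch of the averaging combination mentioned above; the list-based representation and function name are assumptions for illustration.

```python
def combined_weights(weight_sets):
    """Average several per-interface weight vectors obtained by different methods
    (bandwidth, available-interface proportion, quality value, average queue length).

    `weight_sets` is a list of lists, one inner list of per-interface weights per
    method; averaging is the combination mentioned in the text, other combinations
    are equally possible.
    """
    n_methods = len(weight_sets)
    n_interfaces = len(weight_sets[0])
    return [sum(ws[i] for ws in weight_sets) / n_methods for i in range(n_interfaces)]

# For the examples above: [2, 1, 2, 1], [2, 1, 2, 1], [2, 1, 2, 1], [1, 2, 1, 2]
# averages to [1.75, 1.25, 1.75, 1.25].
```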
The load sharing weight from other nodes to the first node through the first interface of the first node refers to: when other nodes transmit packets or data to the first node, the proportion of the packets or data carried by the first interface compared with the other interfaces of the first node. That is, the load sharing weight can be used as the load sharing proportion during forwarding. For example, a load sharing weight of 2 from other nodes through interface 0 of L0 to L0 indicates that when other nodes transmit packets or data to L0, the proportion of packets or data carried by interface 0 compared with the other interfaces of L0 is 2.
The hop count from other nodes to the first node through the first interface refers to: the number of hops required when other nodes send packets or data to the first node through the first interface, where a hop count of 0 indicates unreachable and a non-zero hop count indicates reachable.
It is worth noting that the first node, as the sender, simply sends out through its own interface the load sharing weight and the hop count from other nodes to the first node through the first interface; the node that receives them acts as the receiver and regards the received load sharing weight and hop count as its own load sharing weight and hop count to the first node through its own interface. For example, in fig. 6, assume that the first node is L0 and the second node is S0. L0 sends out, through its own interface 0, a load sharing weight of 2 and a hop count of 1 from other nodes through interface 0 of L0 to L0. S0 receives, through its own interface 0, the load sharing weight sent by L0 through interface 0 of L0, and may therefore regard its own load sharing weight through interface 0 to L0 as 2; S0 likewise receives, through its own interface 0, the hop count sent by L0 through interface 0 of L0, and may therefore regard its own hop count through interface 0 to L0 as 1. Similarly, L0 sends out, through its own interface 2, a load sharing weight of 2 and a hop count of 1 from other nodes through interface 2 of L0 to L0; S0 receives them through its own interface 2 and may therefore regard its own load sharing weight through interface 2 to L0 as 2 and its own hop count through interface 2 to L0 as 1.
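The receiver-side behaviour described above amounts to copying the received pair into a per-(leaf, interface) table. A minimal sketch, assuming a dictionary keyed by (leaf id, receiving interface); names are illustrative.

```python
def on_advertisement_received(table, rx_interface, leaf_id, hop_count, weight):
    # The pair received on rx_interface for leaf_id becomes the receiver's own
    # hop count and load sharing weight to that leaf node through rx_interface.
    table[(leaf_id, rx_interface)] = (hop_count, weight)

# Example from the text: S0 receives L0's advertisement (hop count 1, weight 2)
# on its own interface 0 and records it.
s0_table = {}
on_advertisement_received(s0_table, rx_interface=0, leaf_id="L0", hop_count=1, weight=2)
```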
Optionally, the first node periodically sends, through the first interface, the load sharing weight and the hop count from other nodes to the first node through the first interface. For example, the first node sends them through the first interface every 10 seconds. It should be noted that, because the network state of the at least one interface of the first node may change, the load sharing weight and the hop count sent by the first node through the first interface at different times may differ; if the network state does not change, they may also be the same. In addition, the periods with which the first node sends the load sharing weight and the hop count may be the same or different. These periods can be determined according to the performance requirements of the whole network and the hardware architecture of the node; if the first node is implemented by a hardware circuit, the period can be made very short. The periodic sending of the load sharing weight and the hop count by the second node, described below, is similar to that of the first node and is not repeated.
Alternatively,
optionally, when the first interface fails, the first node sends, through the first interface, the load sharing weight and the hop count from the other node to the first node through the first interface. That is, only when the load sharing weight and/or the hop count from the other node to the first node through the first interface changes, the first node sends the load sharing weight and the hop count from the other node to the first node through the first interface. In this way, the resource overhead of the first node can be saved.
Optionally, the load sharing weight and the hop count from other nodes to the first node through the first interface of the first node are carried in a first message; that is, they are carried in the same message. The first message further includes at least one of the following: the type of the first message, the auxiliary identifier of the first message, the identifier of the first node, and the number of first nodes. The auxiliary identifier of the first message indicates whether the first message is a periodically sent message or a message sent when the first interface fails. The number of first nodes is 1.
Alternatively,
optionally, the load sharing weight from other nodes to the first node through the first interface of the first node is carried in a second message, and the hop count from other nodes to the first node through the first interface of the first node is carried in a third message; that is, they are carried in different messages. The second message further includes at least one of the following: the type of the second message, the auxiliary identifier of the second message, the identifier of the first node, and the number of first nodes. The type of the second message may be a load sharing weight type. The auxiliary identifier of the second message indicates whether the second message is a periodically sent message or a message sent when the first interface fails. The number of first nodes is 1. The third message further includes at least one of the following: the type of the third message, the auxiliary identifier of the third message, the identifier of the first node, and the number of first nodes. The type of the third message is a hop count type, also referred to as a reachability message type. The auxiliary identifier of the third message indicates whether the third message is a periodically sent message or a message sent when the first interface fails. The number of first nodes is 1.
For example, fig. 7 is a schematic diagram of the format of the second message provided in an embodiment of the present application. As shown in fig. 7, byte 0 carries the type of the second message, byte 1 carries the auxiliary identifier of the second message, bytes 2 and 3 carry the identifier of the first node (for example, if the first node is L0, bytes 2 and 3 carry the identifier of L0), bytes 4 and 5 carry the number of first nodes, i.e., 1, and byte 6 carries the load sharing weight from other nodes to the first node through the first interface of the first node.
For example, fig. 8 is a schematic diagram of the format of the third message provided in an embodiment of the present application. As shown in fig. 8, byte 0 carries the type of the third message, byte 1 carries the auxiliary identifier of the third message, bytes 2 and 3 carry the identifier of the first node (for example, if the first node is L0, bytes 2 and 3 carry the identifier of L0), bytes 4 and 5 carry the number of first nodes, i.e., 1, and byte 6 carries the hop count from other nodes to the first node through the first interface of the first node.
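The byte layout of figs. 7 and 8 can be sketched as follows. The field widths follow the byte positions described above; the function name, the constant values, and the big-endian byte order are assumptions for illustration.

```python
import struct

def build_leaf_packet(msg_type, aux_id, node_id, value):
    """Encode the 7-byte layout of figs. 7 and 8: type (1 byte), auxiliary identifier
    (1 byte), node identifier (2 bytes), number of nodes (2 bytes, fixed to 1), and
    one byte carrying either the load sharing weight (second message) or the hop
    count (third message)."""
    return struct.pack("!BBHHB", msg_type, aux_id, node_id, 1, value)

# Hypothetical constants for illustration: a "weight" message, sent periodically,
# for leaf node L0 (id 0), carrying a load sharing weight of 2.
WEIGHT_TYPE, PERIODIC, L0_ID = 1, 0, 0
packet = build_leaf_packet(WEIGHT_TYPE, PERIODIC, L0_ID, 2)
```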
Optionally, the first node determines, on its forwarding plane and according to the current state information of the at least one interface of the first node, the load sharing weight from other nodes to the first node through the first interface of the first node, and sends, through the first interface, the load sharing weight and the hop count from other nodes to the first node. Fig. 9 is a schematic diagram of a first node according to an embodiment of the present application. As shown in fig. 9, the first node includes a control plane and a forwarding plane, where the functions of the control plane are implemented by a Central Processing Unit (CPU) in the first node, and the functions of the forwarding plane are implemented by a Network Processor (NP), a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), or a multi-core CPU in the first node.
If the first node determines, on its forwarding plane and according to the current state information of the at least one interface of the first node, the load sharing weight from other nodes to the first node through the first interface, and sends the load sharing weight and the hop count through the first interface, then: when the forwarding plane is implemented by an NP, the execution actions of the first node can be implemented by adding the corresponding functions to the microcode in the NP programming language; when the forwarding plane is implemented by an ASIC, a hardware logic circuit needs to be added to the ASIC; when the forwarding plane is implemented by an FPGA, a module needs to be added to the FPGA logic code; and when the forwarding plane is implemented by a multi-core CPU, a module needs to be added to the software code of the multi-core CPU.
In short, the first node may implement the execution action of the first node through a hardware circuit, logic, microcode, or the like, so as to reach the millisecond convergence time.
Optionally, the second node may store the load sharing weight and the hop count from the second node to the first node through the at least one interface of the second node. They may be stored together in the form of a table in which the horizontal direction is the interface list and the vertical direction is the leaf node list; the number of entries is LinkNum × LeafNum, where LinkNum is the number of interfaces of the second node and LeafNum is the number of leaf nodes in the above switching network system, and each entry contains two fields (hop count, weight), with every entry initially defaulting to (0, 0). For example, Table 1 is the table maintained by node S0 in fig. 6. Since S0 receives, through its own interface 0, that its load sharing weight through interface 0 to L0 is 2 and the hop count is 1, the entry in the first row, first column is (1, 2), where 1 is the hop count and 2 is the load sharing weight. Since S0 receives, through its own interface 2, that its load sharing weight through interface 2 to L0 is 2 and the hop count is 1, the entry in the first row, third column is (1, 2). Since S0 receives, through its own interface 1, that its load sharing weight through interface 1 to L1 is 1 and the hop count is 1, the entry in the second row, second column is (1, 1). Since S0 receives, through its own interface 3, that its load sharing weight through interface 3 to L1 is 1 and the hop count is 1, the entry in the second row, fourth column is (1, 1).
TABLE 1
        Interface 0  Interface 1  Interface 2  Interface 3  Interface 4  Interface 5
L0      (1, 2)       (0, 0)       (1, 2)       (0, 0)       (0, 0)       (0, 0)
L1      (0, 0)       (1, 1)       (0, 0)       (1, 1)       (0, 0)       (0, 0)
L2      (0, 0)       (0, 0)       (0, 0)       (0, 0)       (0, 0)       (0, 0)
L3      (0, 0)       (0, 0)       (0, 0)       (0, 0)       (0, 0)       (0, 0)
(Each entry is (hop count, weight).)
As described above, the load sharing weight and the hop count of the second node to all leaf nodes through at least one interface of the second node may be stored in a form of a table, where the table may be referred to as a hop count weight table, and when the second node performs packet or data forwarding, the forwarding table may be updated according to the hop count weight table to guide the packet or data forwarding, or the second node may also use the hop count weight table as the forwarding table to guide the packet forwarding.
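A minimal sketch of such a hop count weight table, assuming a simple in-memory structure; the class and method names are illustrative and not from the application.

```python
class HopCountWeightTable:
    """LinkNum x LeafNum table in which every entry is a (hop count, weight) pair,
    initialised to (0, 0); mirrors Table 1."""

    def __init__(self, link_num, leaf_num):
        self.entries = [[(0, 0)] * link_num for _ in range(leaf_num)]

    def update(self, leaf, interface, hop_count, weight):
        self.entries[leaf][interface] = (hop_count, weight)

    def row(self, leaf):
        return self.entries[leaf]

# Reproducing the first row of Table 1 for node S0 (leaf 0 = L0):
t = HopCountWeightTable(link_num=6, leaf_num=4)
t.update(leaf=0, interface=0, hop_count=1, weight=2)
t.update(leaf=0, interface=2, hop_count=1, weight=2)
```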
The load sharing weight of the second node to the first node through the second interface refers to: when the second node transmits the message or the data to the first node through the second interface, the second interface is compared with other interfaces of the second node, and the message ratio or the data ratio borne by the second interface is obtained. That is, the load sharing weight may be used as a load sharing proportion when the second node forwards the packet or the data. For example: s0 in fig. 6 has a load sharing weight of 2 through interfaces 0 to L0 of its own L0, which indicates that when S0 transmits a packet or data to L0, the packet ratio or data ratio carried by interface 0 is 2 in this interface 0 compared with other interfaces of S0.
The hop count of the second node to the first node through the second interface means: and when the second node sends the message or the data to the first node through the second interface, the hop count is required, wherein the hop count of 0 represents unreachable, and the hop count of non-0 represents reachable.
The load sharing weight of the other node to the first node through the second interface of the second node refers to: when other nodes transmit messages or data to the first node through the second node, the second interface is compared with other interfaces of the second node, and the message proportion or the data proportion borne by the second interface is obtained. That is, the load sharing weight can be used as the load sharing proportion during forwarding.
The hop count of the other nodes to the first node through the second interface refers to: the number of hops required when the other nodes send packets or data to the first node through the second interface, where a hop count of 0 indicates unreachable and a non-0 hop count indicates reachable.
For the second node, when the second interface of the second node is normal, the second node can receive the load sharing weight and the hop count from the second node to the first node through the second interface; in this case, the second node takes the actually received values as the load sharing weight and the hop count from the second node to the first node through the second interface. For example: S0 in fig. 6 receives, through its own interface 0, that its load sharing weight through interface 0 to L0 is 2 and the hop count is 1; in this case, S0 determines that its load sharing weight and hop count through interface 0 to L0 are 2 and 1, respectively. Conversely, if the second node does not receive the load sharing weight and/or the hop count from the second node to the first node through the second interface within a preset time length, the load sharing weight and the hop count from the second node to the first node through the second interface are both 0. The preset time length may be set according to the actual situation, for example 10 seconds or 20 seconds, which is not limited in this application. For example: if S0 does not receive, within 10 seconds, the load sharing weight and/or the hop count of S0 through its own interface 0 to L0, S0 considers that interface 0 has a failure, and then considers that the load sharing weight and the hop count through interface 0 to L0 are both 0. Alternatively, if the second node does not receive, within the preset time length, the detection packet transmitted through the second interface, the load sharing weight and the hop count from the second node to the first node through the second interface are both 0, where the detection packet is used to detect whether the second interface is abnormal. For example: if S0 does not receive the detection packet transmitted through its own interface 0 within 10 seconds, S0 determines that interface 0 has a failure, and also determines that the load sharing weight and the hop count through interface 0 to L0 are both 0. The detection packet may also be sent periodically. The sending period of the detection packet is short relative to the sending period of the load sharing weight and the hop count, and the sending of the load sharing weight and the hop count is triggered urgently when a failure is detected, that is, they are sent only when an interface failure is detected, so that node overhead can be reduced.
It should be noted that, when the second node is not a directly connected node of a leaf node, it receives, through the second interface, the load sharing weight and/or the hop count from the second interface to at least one leaf node; in this case, if the second node does not receive, through the second interface and within the preset time length, the load sharing weight and/or the hop count from the second interface to any leaf node, the second node considers that the second interface has failed. When directed to a single leaf node, i.e. the first node above, this can also be understood as the second node failing to receive, within the preset time length, its load sharing weight and/or hop count to that leaf node over the second interface.
Before introducing how the second node determines the load sharing weights and hop counts of the other nodes to the first node through the second interface, the definition of the horizontal split group and the horizontal split algorithm are first introduced below.
The horizontal split algorithm is used to prevent loops from being generated when reachability is calculated. All uplink interfaces of a spine node are divided into one horizontal split group. All horizontal interfaces connecting the same spine node are divided into one horizontal split group. The load sharing weights and hop counts received on a link in a horizontal split group are not flooded out on the other links of the same group.
The above-mentioned upstream interface of a spine node is also referred to as the uplink of the spine node, where the "upstream interface" refers to an interface between the spine node and a node in the layer above it in the Spine-Leaf-structure-based Fabric network. For example: fig. 10 is a schematic diagram of a Fabric network based on a Spine-Leaf structure according to another embodiment of the present application, and as shown in fig. 10, interfaces 4 and 5 of S0 are its uplink interfaces. A horizontal interface of a spine node is also called a back-to-back link; herein, the "horizontal interface" refers to an interface between the spine node and a node at the same level (also called a peer node) in the Spine-Leaf-structure-based Fabric network, for example: as shown in fig. 10, interfaces 4 and 5 of S4 are both horizontal interfaces. The downstream interface of a spine node, which will be mentioned below, is also referred to as the downlink of the spine node; the "downstream interface" refers to an interface between the spine node and a node in the layer below it in the Spine-Leaf-structure-based Fabric network, for example: interfaces 0, 1, 2 and 3 of the S4 node are all its downstream interfaces.
As shown in fig. 10, interfaces 4 and 5 of S3 are divided into one horizontal split group, and interfaces 4 and 5 of S4 are divided into one horizontal split group. As shown in fig. 10, when S3 sends, through its interface 4, the load sharing weight and the hop count of the other nodes through interface 4 to L0, since interface 5 of S3 and interface 4 belong to the same horizontal split group, when S3 calculates the load sharing weight and the hop count of the other nodes through interface 4 and interface 5 of S3 to L0, the load sharing weight and the hop count of S3 through interface 4 and interface 5 to L0 are excluded. On the contrary, if the load sharing weight and the hop count of S3 through interface 4 and interface 5 of S3 to L0 were not excluded when S3 calculates the load sharing weight of the other nodes through interface 4 and interface 5 of S3 to L0, then, as shown in fig. 10, according to the above first way of calculating the load sharing weight and the way of calculating the hop count, S3 would calculate the load sharing weight of the other nodes through interface 4 of S3 to L0 to be 8 and the hop count to be 3 + 1 = 4, and when S4 forwards a packet to L0, the packet would pass through interface 3 of S4 (which is connected with interface 4 of S3) and could be sent back towards S4 again, so that a forwarding loop may be formed.
It should be noted that, when a spine node has only one uplink interface, that uplink interface alone is divided into a horizontal split group. For example: assuming that S3 shown in fig. 10 has only the uplink interface 4 and no uplink interface 5, the uplink interface 4 alone forms a horizontal split group.
The second node is specifically configured to: if the second interface is an uplink interface or a horizontal interface, determine the load sharing weight of the other nodes to the first node through the second interface according to the load sharing weights of the second node to the first node through the interfaces, among the at least one interface of the second node, other than the horizontal split group. Here, if the second interface is an uplink interface and the second node has other uplink interfaces, the horizontal split group includes: the second interface and the other uplink interfaces of the second node. If the second interface is an uplink interface and the second node has no other uplink interface, the horizontal split group includes: the second interface. If the second interface is a horizontal interface and the second node has other horizontal interfaces, the horizontal split group includes: the second interface and the other horizontal interfaces of the second node. If the second interface is a horizontal interface and the second node has no other horizontal interface, the horizontal split group includes: the second interface. If the second interface is a downlink interface, the load sharing weight of the other nodes to the first node through the second interface is determined according to the load sharing weights of the second node to the first node through the at least one interface of the second node.
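As a sketch only (Python, hypothetical names; the type codes 1/2/3 follow the link type table described later in this description, and it is assumed here that all horizontal interfaces of a node face the same peer node), the horizontal split group of an interface can be derived from the interface types as follows:

    UPLINK, DOWNLINK, HORIZONTAL = 1, 2, 3   # interface type codes (see the LinkType Table below)

    def split_group(interface_id, interface_types):
        """Return the horizontal split group containing interface_id: all uplink
        interfaces form one group, all horizontal interfaces towards the same peer
        form one group, and a downlink interface forms a group by itself."""
        t = interface_types[interface_id]
        if t == UPLINK:
            return [i for i, x in enumerate(interface_types) if x == UPLINK]
        if t == HORIZONTAL:
            # assumption: all horizontal interfaces of this node connect the same peer
            return [i for i, x in enumerate(interface_types) if x == HORIZONTAL]
        return [interface_id]   # downlink interface: a group of its own

    # Example: S0 in fig. 6 has downlink interfaces 0-3 and uplink interfaces 4-5.
    s0_types = [DOWNLINK] * 4 + [UPLINK] * 2
    print(split_group(4, s0_types))   # -> [4, 5]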
The first optional manner: if the second interface is an uplink interface or a horizontal interface, the load sharing weight of the other nodes to the first node through the second interface is determined to be the sum of the load sharing weights of the second node to the first node through the interfaces, among the at least one interface of the second node and excluding the horizontal split group, corresponding to the minimum non-0 hop count; if the second interface is a downlink interface, the load sharing weight of the other nodes to the first node through the second interface is determined to be the sum of the load sharing weights of the second node to the first node through the interfaces, among the at least one interface of the second node, corresponding to the minimum non-0 hop count. The interfaces corresponding to the minimum non-0 hop count mentioned in this application refer to the interfaces corresponding to the minimum non-0 hop count selected from the hop counts of the second node to the first node through the at least one interface of the second node; this is not repeated below. For example: fig. 11 is a schematic diagram of a Spine-Leaf-structure-based Fabric network according to still another embodiment of the present application. As shown in fig. 11, for S0, its at least one interface comprises interfaces 0 to 5. Assume that the second interface of S0 is interface 0. Since interface 0 is an uplink interface of S0, the horizontal split group where interface 0 is located includes interfaces 0-2 of S0, so the load sharing weight of the other nodes through interface 0 of S0 to L0 (i.e. W0) is determined to be the sum of the load sharing weights of S0 through interfaces 3-5 of S0 (the interfaces corresponding to the minimum non-0 hop count), where the load sharing weight of each of interfaces 3-5 of S0 is 1; therefore, W0 = 1 + 1 + 1 = 3. The load sharing weight of the other nodes through interface 1 of S0 to L0 (i.e. W1) is determined to be the sum of the load sharing weights of S0 through interfaces 3-5 of S0, where the load sharing weight of each of interfaces 3-5 of S0 is 1; therefore, W1 = 1 + 1 + 1 = 3. The load sharing weight of the other nodes through interface 2 of S0 to L0 (i.e. W2) is determined to be the sum of the load sharing weights of S0 through interfaces 3-5 of S0, where the load sharing weight of each of interfaces 3-5 of S0 is 1; therefore, W2 = 1 + 1 + 1 = 3. Similarly, the load sharing weight of the other nodes through interface 0 of S1 to L0 (i.e. W3) is determined to be the sum of the load sharing weights of S1 through interfaces 3-5 of S1, where the load sharing weight of each of interfaces 3-5 of S1 is 1; therefore, W3 = 1 + 1 + 1 = 3. The load sharing weight of the other nodes through interface 1 of S1 to L0 (i.e. W4) is determined to be the sum of the load sharing weights of S1 through interfaces 3-5 of S1, where the load sharing weight of each of interfaces 3-5 of S1 is 1; therefore, W4 = 1 + 1 + 1 = 3. The load sharing weight of the other nodes through interface 2 of S1 to L0 (i.e. W5) is determined to be the sum of the load sharing weights of S1 through interfaces 3-5 of S1, where the load sharing weight of each of interfaces 3-5 of S1 is 1; therefore, W5 = 1 + 1 + 1 = 3.
For S8, its at least one interface comprises interfaces 0 to 9. Assume that the second interface is interface 6. Since it is an uplink interface, the horizontal split group where interface 6 is located includes interfaces 6 to 9 of S8, so the load sharing weight of the other nodes through interface 6 of S8 to L0 (i.e. W6) is determined to be the sum of the load sharing weights of S8 through interfaces 0 to 5; therefore, W6 = 3 + 3 + 3 + 3 + 3 + 3 = 18. Similarly, W7 = 18, W8 = 18, and W9 = 18.
Fig. 12 is a schematic diagram of a Spine-Leaf-structure-based Fabric network according to still another embodiment of the present application. As shown in fig. 12, fig. 12 differs from fig. 11 in that interface 3 of S8 has a failure, that is, W3 = 0; based on this, the calculated W6 = W7 = W8 = W9 = W0 + W1 + W2 + W4 + W5 = 12.
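A minimal Python sketch of the first optional manner, assuming the hop count weight table layout sketched above (hypothetical names; illustration only): the advertised weight is the sum of the node's own load sharing weights over the interfaces, outside the horizontal split group, whose hop count equals the minimum non-0 hop count.

    def advertised_weight(row, excluded):
        """First optional manner: sum the load sharing weights of the interfaces
        (outside the split group `excluded`) whose hop count equals the minimum
        non-0 hop count; return 0 if the leaf node is unreachable."""
        candidates = [(hop, w) for i, (hop, w) in enumerate(row)
                      if i not in excluded and hop != 0]
        if not candidates:
            return 0
        min_hop = min(hop for hop, _ in candidates)
        return sum(w for hop, w in candidates if hop == min_hop)

    # S0 in fig. 6 advertising towards L0 on uplink interface 4 (split group {4, 5});
    # the result 4 matches the value computed for S0's interface 4 towards L0 in the
    # fig. 18/19 walk-through later in this description.
    row_L0 = [(1, 2), (0, 0), (1, 2), (0, 0), (0, 0), (0, 0)]
    print(advertised_weight(row_L0, {4, 5}))   # -> 4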
The second optional manner: if the second interface is an uplink interface or a horizontal interface, the load sharing weight of the other nodes to the first node through the second interface of the second node is determined according to the load sharing weights of the second node to the first node through the interfaces, among the at least one interface of the second node and excluding the horizontal split group, corresponding to the minimum non-0 hop count, the number of interfaces of the second node, and the number of normal interfaces of the second node; if the second interface is a downlink interface, the load sharing weight of the other nodes to the first node through the second interface of the second node is determined according to the load sharing weights of the second node to the first node through the interfaces, among the at least one interface of the second node, corresponding to the minimum non-0 hop count, the number of interfaces of the second node, and the number of normal interfaces of the second node. Assume that the sum of the load sharing weights of the second node to the first node through the interfaces, among the at least one interface of the second node and excluding the horizontal split group, corresponding to the minimum non-0 hop count is represented by TotalWeights, where when a failed interface exists among the interfaces, excluding the horizontal split group, of the at least one interface of the second node, the load sharing weight corresponding to the failed interface is 0; the number of interfaces, among the at least one interface of the second node and excluding the horizontal split group, corresponding to the minimum non-0 hop count is represented by NextHopTotalNum, and when a failed interface exists among these interfaces, NextHopTotalNum includes the number of failed interfaces; the number of interfaces of the second node is represented by UpLinkGroupTotalNum, and the number of normal interfaces of the second node is represented by UpLinkGroupNormalNum; the load sharing weight of the other nodes to the first node through the second interface of the second node is represented by w. Then w can be calculated by the following formula (1):
[Formula (1) is reproduced as an image in the original publication and is not shown here; it computes w from TotalWeights, NextHopTotalNum, UpLinkGroupTotalNum and UpLinkGroupNormalNum.]
it should be noted that the above formula (1) is not the only method for calculating w, and the above formula may be modified to obtain w, which is not limited in the present application.
For example, fig. 13 is a schematic view of a Fabric network based on a Spine-Leaf structure according to still another embodiment of the present application. As shown in fig. 13, for S0, its at least one interface comprises interfaces 0 to 5. Assume that the second interface of S0 is interface 0. Since interface 0 is an uplink interface of S0, the horizontal split group where interface 0 is located includes interfaces 0-2 of S0, and the load sharing weight of the other nodes through interface 0 of S0 to L0 (i.e. W0) is calculated by formula (1):
[The calculation of W0 by formula (1) is shown as an image in the original publication and is not reproduced here.]
Similarly, W1 = W2 = W3 = W4 = W5 = 100. In the same way, W6 = W7 = W8 = W9 = 100.
Fig. 14 is a schematic view of a Fabric network based on a Spine-Leaf structure according to still another embodiment of the present application, and as shown in fig. 14, a difference between fig. 14 and fig. 13 is that an interface 3 of S0, an interface 3 of S8, and an interface 0 of S2 have a fault, and based on this, the calculation is obtained by formula (1):
[The calculations of the affected weights by formula (1) for fig. 14 are shown as images in the original publication and are not reproduced here.]
the optional mode three: if the second interface is an uplink interface or a horizontal interface, determining load sharing weights of other nodes to the first node through the second interface of the second node according to the load sharing weights of the second node to the first node through the interface corresponding to the minimum non-0 hop count except the horizontal segmentation group in at least one interface of the second node, the quality evaluation value of the interface corresponding to the minimum non-0 hop count except the horizontal segmentation group in at least one interface of the second node, the number of the interfaces corresponding to the minimum non-0 hop count except the horizontal segmentation group in at least one interface of the second node, the number of interfaces of the second node, and the normal number of interfaces of the second node; and if the second interface is a downlink interface, determining the load sharing weight of other nodes to the first node through the second interface of the second node according to the load sharing weight of the second node to the first node through the interface corresponding to the minimum non-0 hop count in the at least one interface of the second node, the quality evaluation value of the interface corresponding to the minimum non-0 hop count in the at least one interface of the second node, the number of interfaces of the second node and the number of normal interfaces of the second node. Assuming that load sharing weights from the second node to the first node through an interface corresponding to the minimum non-0 hop count except for the bisection group in at least one interface of the second node are w1 and w2 … … wn, respectively, and quality assessment values corresponding to interfaces corresponding to the minimum non-0 hop count except for the bisection group in at least one interface of the second node are q1 and q2 … … qn, respectively, wherein a value range of the quality assessment value corresponding to each interface is 0 to 100, if a quality assessment value of an interface is 0, the interface is represented as a failed interface or a queue corresponding to the interface is full, and a packet loss situation occurs. If the quality evaluation value of a certain interface is 100, the interface is a normal interface, and message or data transmission can be normally realized. If the quality assessment value of a certain interface is between 0 and 100, the link congestion degree corresponding to the interface is represented, the larger the value is, the lower the link congestion degree is represented, and the smaller the value is, the higher the link congestion degree is represented. 
Assume further that TotalWeights represents the weighted result formed by the load sharing weights and the quality assessment values of the interfaces, among the at least one interface of the second node and excluding the horizontal split group, corresponding to the minimum non-0 hop count towards the first node; the number of interfaces, among the at least one interface of the second node, excluding the horizontal split group is represented by NextHopTotalNum, and when a failed interface exists among the interfaces, excluding the horizontal split group, corresponding to the minimum non-0 hop count, NextHopTotalNum includes the number of failed interfaces; the number of interfaces of the second node is represented by UpLinkGroupTotalNum, the number of normal interfaces of the second node is represented by UpLinkGroupNormalNum, and the load sharing weight of the other nodes to the first node through the second interface of the second node is represented by w. Then w can be calculated by the following formula (2):
[Formula (2) is reproduced as an image in the original publication and is not shown here; it computes w from the weighted result TotalWeights, NextHopTotalNum, UpLinkGroupTotalNum and UpLinkGroupNormalNum.]
it should be noted that the above formula (2) is not the only method for calculating w, and the above formula may be modified to obtain w, which is not limited in the present application.
When calculating the hop count of the other nodes to the first node through the second interface, the second node is specifically configured to: if the second interface is an uplink interface or a horizontal interface, determine the hop count of the other nodes to the first node through the second interface according to the hop counts of the second node to the first node through the interfaces, among the at least one interface of the second node, other than the horizontal split group; if the second interface is a downlink interface, determine the hop count of the other nodes to the first node through the second interface according to the hop counts of the second node to the first node through the at least one interface of the second node.
Optionally, if the second interface is an uplink interface or a horizontal interface, the second node selects the minimum non-0 hop count from the hop counts of the second node to the first node through the interfaces, among the at least one interface of the second node, other than the horizontal split group, and determines the hop count of the other nodes to the first node through the second interface of the second node to be this non-0 hop count plus 1. If the second interface is a downlink interface, the second node selects the minimum non-0 hop count from the hop counts of the second node to the first node through the at least one interface of the second node, and determines the hop count of the other nodes to the first node through the second interface of the second node to be this non-0 hop count plus 1. For example: Table 1 is the table maintained by node S0 in fig. 6. As can be seen from Table 1, interfaces 4 and 5 of S0 belong to one horizontal split group. Excluding that horizontal split group, the hop counts of S0 through interfaces 0 and 2 to L0 are non-0, namely 1 and 1, i.e. the minimum non-0 hop count is 1; based on this, the hop count of the other nodes to L0 through the second interface of S0 is 2, for example: the hop count of S4 through interface 4 of S0 to L0 is 2.
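A matching sketch (same assumed table layout, illustration only) for the hop count calculation: take the minimum non-0 hop count outside the horizontal split group and add 1.

    def advertised_hop_count(row, excluded):
        """Minimum non-0 hop count outside the split group, plus 1; 0 if unreachable."""
        hops = [hop for i, (hop, _) in enumerate(row)
                if i not in excluded and hop != 0]
        return min(hops) + 1 if hops else 0

    # S0 in fig. 6 advertising towards L0 on uplink interface 4 (split group {4, 5}):
    row_L0 = [(1, 2), (0, 0), (1, 2), (0, 0), (0, 0), (0, 0)]
    print(advertised_hop_count(row_L0, {4, 5}))   # -> 2, as in the example above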
Optionally, the second node periodically sends the load sharing weight and the hop count of the other nodes to the first node through the second interface. For example: the second node sends, every 10 seconds, the load sharing weight and the hop count of the other nodes to the first node through the second interface. It should be noted that, because the network state of the at least one interface of the second node may change, the load sharing weights and hop counts that the second node sends through the second interface at different times may differ; of course, because the network state of the at least one interface of the second node may also remain unchanged, they may also be the same across transmissions.
Alternatively, when the second interface fails, the second node sends, through the second interface, the load sharing weight and the hop count of the other nodes to the first node through the second interface. That is, the second node sends the load sharing weight and the hop count of the other nodes to the first node through the second interface only when the load sharing weight and/or the hop count of the other nodes to the first node through the second interface changes. In this way, the resource overhead of the second node can be saved.
Optionally, the load sharing weights and the hop counts of the other nodes to the first node through the second interface of the second node are carried in a fourth packet. Alternatively, the load sharing weights of the other nodes to the first node through the second interface of the second node are carried in a fifth packet, and the hop counts of the other nodes to the first node through the second interface of the second node are carried in a sixth packet.
Optionally, the fourth packet further includes at least one of the following: the type of the fourth packet, an auxiliary identifier of the fourth packet, an identifier of a first leaf node of the plurality of leaf nodes, and the number of the plurality of leaf nodes. The auxiliary identifier of the fourth packet is used to identify whether the fourth packet is a periodically sent packet or a packet sent when the second interface fails. The fifth packet further includes at least one of the following: the type of the fifth packet, an auxiliary identifier of the fifth packet, the identifier of the first leaf node of the plurality of leaf nodes, and the number of the plurality of leaf nodes. The type of the fifth packet is a load sharing weight type. The auxiliary identifier of the fifth packet is used to identify whether the fifth packet is a periodically sent packet or a packet sent when the second interface fails. The sixth packet further includes at least one of the following: the type of the sixth packet, an auxiliary identifier of the sixth packet, the identifier of the first leaf node of the plurality of leaf nodes, and the number of the plurality of leaf nodes. The type of the sixth packet is a hop count type, which is also called a reachability packet type. The auxiliary identifier of the sixth packet is used to identify whether the sixth packet is a periodically sent packet or a packet sent when the second interface fails.
Exemplarily, fig. 15 is a schematic diagram of the format of a fifth packet according to an embodiment of the present invention. As shown in fig. 15, the type of the fifth packet is filled in at byte 0, the auxiliary identifier of the fifth packet is filled in at byte 1, and the identifier of the first leaf node of the plurality of leaf nodes is filled in at bytes 2 and 3. For example: the leaf nodes shown in fig. 6 are L0, L1, L2 and L3, and the first leaf node among them is L0; assuming its identifier is L0, bytes 2 and 3 are filled with the identifier of L0. Bytes 4 and 5 are filled with the number of leaf nodes, which is 4 taking the network architecture shown in fig. 6 as an example. From byte 6 onwards, the load sharing weights of the other nodes to the first node through the second interface of the second node are filled in; there are as many load sharing weights as there are leaf nodes, that is, first nodes. For example: assuming that the second node is S0 in fig. 6 and that the second interface of S0 is its interface 0, the load sharing weights of the other nodes to each leaf node through interface 0 of S0 are all carried in the fifth packet, and are 2, 0, 0, 0, respectively.
Exemplarily, fig. 16 is a schematic diagram of the format of a sixth packet according to an embodiment of the present invention. As shown in fig. 16, the type of the sixth packet is filled in at byte 0, the auxiliary identifier of the sixth packet is filled in at byte 1, and the identifier of the first leaf node of the plurality of leaf nodes is filled in at bytes 2 and 3. For example: the leaf nodes shown in fig. 6 are L0, L1, L2 and L3, and the first leaf node among them is L0; assuming its identifier is L0, bytes 2 and 3 are filled with the identifier of L0. Bytes 4 and 5 are filled with the number of leaf nodes, namely 4. From byte 6 onwards, the hop counts of the other nodes to the first node through the second interface of the second node are filled in; there are as many hop counts as there are leaf nodes, that is, first nodes. For example: assuming that the second node is S0 in fig. 6 and that the second interface of S0 is its interface 0, the hop counts of the other nodes to each leaf node through interface 0 of S0 are all carried in the sixth packet, and are 1, 0, 0, 0, respectively.
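As an illustration only (the byte positions follow the description of fig. 15 and fig. 16 above, but the field widths beyond byte 6, the byte order and the type/auxiliary code points are assumptions), the fifth and sixth packet bodies could be packed as follows:

    import struct

    def build_packet(pkt_type, aux_id, start_leaf_id, values):
        """Byte 0: type; byte 1: auxiliary identifier; bytes 2-3: identifier of the
        first leaf node; bytes 4-5: number of leaf nodes; byte 6 onwards: one value
        (load sharing weight or hop count) per leaf node (assumed 1 byte each)."""
        header = struct.pack('>BBHH', pkt_type, aux_id, start_leaf_id, len(values))
        return header + bytes(values)

    WEIGHT_TYPE, HOP_TYPE, PERIODIC = 1, 2, 0   # assumed code points

    # Fifth packet from interface 0 of S0 in fig. 6: weights 2, 0, 0, 0 towards L0..L3.
    fifth = build_packet(WEIGHT_TYPE, PERIODIC, 0, [2, 0, 0, 0])

    # Sixth packet from the same interface: hop counts 1, 0, 0, 0 towards L0..L3.
    sixth = build_packet(HOP_TYPE, PERIODIC, 0, [1, 0, 0, 0])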
Optionally, the second node is specifically configured to: determine, on the forwarding plane of the second node, the load sharing weight of the other nodes to the first node through the second interface of the second node according to the load sharing weights of the second node to the first node through the at least one interface of the second node; determine the hop count of the other nodes to the first node through the second interface according to the hop counts of the second node to the first node through the at least one interface of the second node; and send the load sharing weight and the hop count of the other nodes to the first node through the second interface. Fig. 17 is a schematic diagram of a second node according to an embodiment of the present application. As shown in fig. 17, the second node includes: a control plane, whose functions are implemented by the CPU in the second node, and a forwarding plane, whose functions are implemented by the NP, FPGA, ASIC, or multi-core CPU in the second node.
If the function of the second node is implemented on the forwarding plane of the second node and the forwarding plane is implemented by an NP, the second node may implement the execution actions of the second node by adding a function to the microcode using the NP's programming language.
If the function of the second node is implemented on the forwarding plane of the second node and the forwarding plane is implemented by an ASIC, a hardware logic circuit needs to be added to the ASIC to implement the execution of the second node.
If the function of the second node is implemented on the forwarding plane of the second node and the forwarding plane is implemented by the FPGA, a module needs to be added in the logic code of the FPGA to implement the execution action of the second node.
If the function of the second node is implemented on the forwarding plane of the second node, and the forwarding plane is implemented by a multi-core CPU, a module needs to be added to the software code of the multi-core CPU to implement the execution action of the second node.
In short, the second node may implement its execution actions through a hardware circuit, logic, microcode, or the like, so as to achieve a millisecond-level convergence time.
The technical solution of the present application will be exemplarily described below with reference to the network architecture shown in fig. 6:
The first node (i.e., each leaf node) shown in fig. 6 is configured to determine, according to current state information of at least one interface of the first node, the load sharing weight of the other nodes to the first node through the first interface of the first node, and to send the load sharing weight and the hop count of the other nodes to the first node through the first interface, where the first interface is one of the at least one interface of the first node. As mentioned above, the current state information of the at least one interface of the first node includes at least one of the following: current bandwidth information corresponding to the at least one interface of the first node, the proportion of available interfaces among the at least one interface of the first node, a quality assessment value corresponding to the at least one interface of the first node, and an average queue length corresponding to the at least one interface of the first node.
For example: the number 1 in the circles in L0 and L1 in fig. 6 indicates that L0, L1 send the first message over their own interface. The L0 sends load sharing weights and hop counts of other nodes through interfaces 0 to L0 through its own interface 0, where the Weight is 2 and the hop count is 1, and if the two are carried in the same first message, fields in the first message, startleaf id is 0, ItemNum is 1, HopNum is 1, and Weight is 2, where startleaf id represents the identifier of L0, ItemNum represents the number of L0, HopNum represents the hop counts of other nodes through interfaces 0 to L0, and Weight represents the hop counts of other nodes through interfaces 0 to L0.
Similarly, L0 sends, through its own interface 2, the load sharing weight and hop count of the other nodes through interface 2 to L0, where the weight is 2 and the hop count is 1; if the two are carried in the same first packet, the fields in the first packet are StartLeafID = 0, ItemNum = 1, HopNum = 1 and Weight = 2.
L0 sends, through its own interface 1, the load sharing weight and hop count of the other nodes through interface 1 to L0, where the weight is 1 and the hop count is also 1, that is, the fields in the first packet are StartLeafID = 0, ItemNum = 1, HopNum = 1 and Weight = 1.
L0 sends, through its own interface 3, the load sharing weight and hop count of the other nodes through interface 3 to L0, where the weight is 1 and the hop count is also 1, that is, the fields in the first packet are StartLeafID = 0, ItemNum = 1, HopNum = 1 and Weight = 1.
L1 sends, through its own interface 0, the load sharing weight and hop count of the other nodes through interface 0 to L1, where the weight is 1 and the hop count is 1; if the two are carried in the same first packet, the fields in the first packet are StartLeafID = 1, ItemNum = 1, HopNum = 1 and Weight = 1.
L1 sends, through its own interface 1, the load sharing weight and hop count of the other nodes through interface 1 to L1, where the weight is 1 and the hop count is 1; if the two are carried in the same first packet, the fields in the first packet are StartLeafID = 1, ItemNum = 1, HopNum = 1 and Weight = 1.
L1 sends, through its own interface 2, the load sharing weight and hop count of the other nodes through interface 2 to L1, where the weight is 1 and the hop count is also 1, that is, the fields in the first packet are StartLeafID = 1, ItemNum = 1, HopNum = 1 and Weight = 1.
L1 sends, through its own interface 3, the load sharing weight and hop count of the other nodes through interface 3 to L1, where the weight is 1 and the hop count is also 1, that is, the fields in the first packet are StartLeafID = 1, ItemNum = 1, HopNum = 1 and Weight = 1.
Fig. 18 is a schematic diagram of a Fabric network based on a Spine-Leaf structure according to yet another embodiment of the present application. As shown in fig. 18, S0 acts as a second node, and the number 2 in the circle in S0 represents the step in which S0 modifies its own hop count weight table. Specifically, S0 receives, through its own interface 0, that the load sharing weight of S0 through interface 0 to L0 is 2 and the hop count is 1; S0 receives, through its own interface 2, that the load sharing weight of S0 through interface 2 to L0 is 2 and the hop count is 1; S0 receives, through its own interface 1, that the load sharing weight of S0 through interface 1 to L1 is 1 and the hop count is 1; and S0 receives, through its own interface 3, that the load sharing weight of S0 through interface 3 to L1 is 1 and the hop count is 1. As described above, S0 stores in advance its load sharing weight and hop count to each leaf node through each of its own interfaces, and these values default to 0; therefore, when S0 receives the actual load sharing weights and hop counts through its own interfaces, it updates the hop count weight entries it stores. The hop count weight table corresponding to S0 shown in fig. 18 is the result of updating the entries after S0 receives the load sharing weights and hop counts sent by L0 and L1. That is, when S0 learns that the first packet received through its interface 0 carries StartLeafID = 0, ItemNum = 1, HopNum = 1 and Weight = 2, it updates row 1, column 1 of the hop count weight table to (1, 2). When S0 receives, through its own interface 1, a first packet with StartLeafID = 1, ItemNum = 1, HopNum = 1 and Weight = 1, it updates row 2, column 2 of the hop count weight table to (1, 1). When S0 receives, through its own interface 2, a first packet with StartLeafID = 0, ItemNum = 1, HopNum = 1 and Weight = 2, it updates row 1, column 3 of the hop count weight table to (1, 2). When S0 receives, through its own interface 3, a first packet with StartLeafID = 1, ItemNum = 1, HopNum = 1 and Weight = 1, it updates row 2, column 4 of the hop count weight table to (1, 1).
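Continuing the sketches above (Python, hypothetical names, illustration only), updating the hop count weight table from a received first packet could look like this:

    def handle_first_packet(hop_weight_table, in_interface, start_leaf_id,
                            item_num, hops, weights):
        """Update one column of the hop count weight table from a packet received on
        `in_interface` that carries `item_num` (hop count, weight) entries for the
        leaf nodes starting at `start_leaf_id`."""
        for k in range(item_num):
            hop_weight_table[start_leaf_id + k][in_interface] = (hops[k], weights[k])

    # S0 receiving on interface 0: StartLeafID = 0, ItemNum = 1, HopNum = 1, Weight = 2.
    table = [[(0, 0)] * 6 for _ in range(4)]
    handle_first_packet(table, 0, 0, 1, [1], [2])
    print(table[0][0])   # -> (1, 2), i.e. row 1, column 1 of Table 1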
Further, as described above, the second node is configured to determine the load sharing weight of the other nodes to the first node through the second interface of the second node according to the load sharing weights of the second node to the first node through the at least one interface of the second node, determine the hop count of the other nodes to the first node through the second interface according to the hop counts of the second node to the first node through the at least one interface of the second node, and send the load sharing weight and the hop count of the other nodes to the first node through the second interface, where the second interface is one of the at least one interface of the second node. For example: S0 determines the load sharing weight and the hop count of the other nodes through its own interface 4 to L0. S0 queries its hop count weight table and obtains the row of hop counts and load sharing weights of S0 through each of its own interfaces to L0: (1, 2), (0, 0), (1, 2), (0, 0), (0, 0), (0, 0). Since interface 4 and interface 5 of S0 belong to the same horizontal split group, when calculating the load sharing weight and the hop count of the other nodes through interface 4 of S0 to L0, the columns corresponding to interface 4 and interface 5 in the hop count weight table of S0, i.e. (0, 0) and (0, 0), are excluded, and from the remaining columns the columns corresponding to the minimum non-0 hop count are selected, i.e. (1, 2) and (1, 2) of S0 through its own interface 0 and interface 2 to L0. The hop count of the other nodes through interface 4 of S0 to L0 is then calculated as 1 + 1 = 2, and, according to the first calculation manner of the load sharing weight, the load sharing weight of the other nodes through interface 4 of S0 to L0 is 2 + 2 = 4. Similar results can be obtained in the same way.
Fig. 19 is a schematic diagram of a Fabric network based on a Spine-Leaf structure according to still another embodiment of the present invention. As shown in fig. 19, the number 3 in the circles in S0 and S1 indicates that S0 and S1 send third packets. In the third packet sent by S0 through its own interface 4, StartLeafID = 0 and ItemNum = 4, where StartLeafID is the identifier of the first leaf node, L0, among all leaf nodes in the Spine-Leaf-structure-based Fabric network of fig. 19, and ItemNum is the number of all leaf nodes in that network. Assuming that the first load sharing weight calculation manner is adopted, the hop counts and load sharing weights of the other nodes to L0, L1, L2 and L3 included in the third packet sent by S0 through interface 4 of S0 are respectively as follows:
(2,4): indicating that L0 is reachable, hop count 2, and load sharing weight 4.
(2,2): indicating that L1 is reachable, hop count 2, and load sharing weight 2.
(0,0): indicating that it is not reachable to L2.
(0,0): indicating that it is not reachable to L3.
In the third packet sent by S0 through its own interface 5, StartLeafID = 0 and ItemNum = 4. Assuming that the first load sharing weight calculation manner is adopted, the hop counts and load sharing weights of the other nodes to L0, L1, L2 and L3 included in the third packet sent by S0 through interface 5 of S0 are respectively:
(2,4): indicating that L0 is reachable, hop count 2, and load sharing weight 4.
(2,2): indicating that L1 is reachable, hop count 2, and load sharing weight 2.
(0,0): indicating that it is not reachable to L2.
(0,0): indicating that it is not reachable to L3.
In the third packet sent by S1 through its own interface 4, StartLeafID = 0 and ItemNum = 4. Assuming that the first load sharing weight calculation manner is adopted, the hop counts and load sharing weights of the other nodes to L0, L1, L2 and L3 included in the third packet sent by S1 through interface 4 of S1 are respectively as follows:
(2,2): indicating that L0 is reachable, hop count 2, and load sharing weight 2.
(2,2): indicating that L1 is reachable, hop count 2, and load sharing weight 2.
(0,0): indicating that it is not reachable to L2.
(0,0): indicating that it is not reachable to L3.
In the third packet sent by S1 through its own interface 5, StartLeafID = 0 and ItemNum = 4. Assuming that the first load sharing weight calculation manner is adopted, the hop counts and load sharing weights of the other nodes to L0, L1, L2 and L3 included in the third packet sent by S1 through interface 5 of S1 are respectively:
(2,2): indicating that L0 is reachable, hop count 2, and load sharing weight 2.
(2,2): indicating that L1 is reachable, hop count 2, and load sharing weight 2.
(0,0): indicating that it is not reachable to L2.
(0,0): indicating that it is not reachable to L3.
Furthermore, each upper-layer spine node receives, through each of its own interfaces, the load sharing weight and the hop count for reaching each leaf node through that interface, and modifies its hop count weight table accordingly.
For example: fig. 20 is a schematic diagram of a Fabric network based on a Spine-Leaf structure according to yet another embodiment of the present application. As shown in fig. 20, the number 4 in the circles in S4 and S5 indicates that S4 and S5 receive the corresponding third packets and update their respective hop count weight tables. In the third packet received by S4 through its own interface 0, StartLeafID = 0, ItemNum = 4, and the hop counts and load sharing weights of S4 through its own interface 0 to L0, L1, L2 and L3 are (2, 4), (2, 2), (0, 0), (0, 0), respectively, which means that the hop count from interface 0 to L0 is 2 and the load sharing weight is 4; the hop count from interface 0 to L1 is 2 and the load sharing weight is 2; and L2 and L3 are not reachable from interface 0. At the same time, the data is written into column 1 (interface 0) of the hop count weight table of S4.
Similarly, in the third packet received by S4 through its own interface 1, StartLeafID = 0, ItemNum = 4, and the hop counts and load sharing weights of S4 through its own interface 1 to L0, L1, L2 and L3 are (2, 2), (2, 2), (0, 0), (0, 0), respectively, which means that the hop count from interface 1 to L0 is 2 and the load sharing weight is 2; the hop count from interface 1 to L1 is 2 and the load sharing weight is 2; and L2 and L3 are not reachable from interface 1. At the same time, the data is written into column 2 (interface 1) of the hop count weight table of S4.
In the third packet received by S5 through its own interface 0, StartLeafID = 0, ItemNum = 4, and the hop counts and load sharing weights of S5 through its own interface 0 to L0, L1, L2 and L3 are (2, 4), (2, 2), (0, 0), (0, 0), respectively, which means that the hop count from interface 0 to L0 is 2 and the load sharing weight is 4; the hop count from interface 0 to L1 is 2 and the load sharing weight is 2; and L2 and L3 are not reachable from interface 0. At the same time, the data is written into column 1 (interface 0) of the hop count weight table of S5.
In the third packet received by S5 through its own interface 1, StartLeafID = 0, ItemNum = 4, and the hop counts and load sharing weights of S5 through its own interface 1 to L0, L1, L2 and L3 are (2, 2), (2, 2), (0, 0), (0, 0), respectively, which means that the hop count from interface 1 to L0 is 2 and the load sharing weight is 2; the hop count from interface 1 to L1 is 2 and the load sharing weight is 2; and L2 and L3 are not reachable from interface 1. At the same time, the data is written into column 2 (interface 1) of the hop count weight table of S5.
The above process is repeated multiple times so that all upper level spine nodes collect their load sharing weights and hop counts to the leaf nodes immediately downstream of them. For example: fig. 21 is a schematic diagram of a Spine-Leaf structure-based Fabric network according to yet another embodiment of the present invention, as shown in fig. 21, the top-most S4 learns the load sharing weight and the hop count to each Leaf node through its respective interface, and similarly, S5 learns the load sharing weight and the hop count to each Leaf node. The node S0 in the middle tier will learn its load sharing weights and hop counts to the leaf nodes L0, L1, but not to L2, L3. Node S1 learns its load sharing weights and hop counts to leaf nodes L0, L1, but not to L2, L3. Node S2 learns its load sharing weights and hop counts to leaf nodes L2, L3, but not to L0, L1. Node S3 learns its load sharing weights and hop counts to leaf nodes L2, L3, but not to L0, L1. It should be noted that, for the sake of brevity and clarity, fig. 21 only shows the hop count weight tables of S4, S5, S0 and S3.
In order to enable each node to obtain the load sharing weights and the hop counts from itself to all the leaf nodes, the top-most spine nodes further need to cross-diffuse the load sharing weights and hop counts for reaching all the leaf nodes. For example: fig. 22 is a schematic view of a Fabric network based on a Spine-Leaf structure according to yet another embodiment of the present application. As shown in fig. 22, the number 6 in the circles in S4 and S5 indicates that S4 and S5 send third packets to the lower layer. S4 may send a third packet to S0 through its own interface 0, where the third packet includes the load sharing weight and the hop count of the other nodes to each leaf node through interface 0 of S4. According to the hop count weight table of S4, the first hop count calculation manner and the first load sharing weight calculation manner, the hop count and the load sharing weight of the other nodes through interface 0 of S4 to L0 are 3 and 6, respectively; that is, for S0, the hop count and load sharing weight of S0 through interface 4 of S0 to L0 are 3 and 6, respectively. Similarly, the hop count and load sharing weight of S0 through interface 5 to L0 of S0 are 3 and 6, respectively. The hop count and load sharing weight of S0 through interface 4 to L1 of S0 are 3 and 4, respectively. The hop count and load sharing weight of S0 through interface 5 to L1 of S0 are 3 and 4, respectively. The hop count and load sharing weight of S0 through interface 4 to L2 of S0 are 3 and 4, respectively. The hop count and load sharing weight of S0 through interface 5 to L2 of S0 are 3 and 4, respectively. The hop count and load sharing weight of S0 through interface 4 to L3 of S0 are 3 and 4, respectively. The hop count and load sharing weight of S0 through interface 5 to L3 of S0 are 3 and 4, respectively. The hop count and load sharing weight of S3 through interface 4 to L0 of S3 are 3 and 6, respectively. The hop count and load sharing weight of S3 through interface 5 to L0 of S3 are 3 and 6, respectively. The hop count and load sharing weight of S3 through interface 4 to L1 of S3 are 3 and 4, respectively. The hop count and load sharing weight of S3 through interface 5 to L1 of S3 are 3 and 4, respectively. The hop count and load sharing weight of S3 through interface 4 to L2 of S3 are 3 and 4, respectively. The hop count and load sharing weight of S3 through interface 5 to L2 of S3 are 3 and 4, respectively. The hop count and load sharing weight of S3 through interface 4 to L3 of S3 are 3 and 4, respectively. The hop count and load sharing weight of S3 through interface 5 to L3 of S3 are 3 and 4, respectively. It should be noted that, for the sake of brevity and clarity, fig. 22 only shows the hop count weight tables of S4, S5, S0 and S3.
Furthermore, the spine nodes in the middle layer continue to send, to their directly connected leaf nodes, the load sharing weight and the hop count of the other nodes to each leaf node through the middle-layer spine node, so that all leaf nodes can learn their load sharing weights and hop counts to the other leaf nodes through their own interfaces. For example: fig. 23 is a schematic diagram of a Fabric network based on a Spine-Leaf structure according to still another embodiment of the present application. As shown in fig. 23, the number 7 in the circles in S0, S1, S2 and S3 indicates that S0, S1, S2 and S3 send third packets to the lower layer. Taking L0 as an example, L0 receives a third packet through its own interface 0, where the third packet includes: the hop count and the load sharing weight of L0 through its own interface 0 to L0, which are 2 and 4; that is, the hop count is obtained by adding 1 to the minimum non-0 hop count in the first row of the hop count weight table of S0, and the load sharing weight is obtained by selecting the load sharing weights corresponding to that minimum hop count in the first row of S0 and summing them, the result being 4. Similarly, the hop count and load sharing weight of L0 through its own interface 0 to L1 are 2, 2. The hop count and load sharing weight of L0 through its own interface 0 to L2 are 4, 8. The hop count and load sharing weight of L0 through its own interface 0 to L3 are 4, 8. The hop count and load sharing weight of L0 through its own interface 1 to L0 are 2, 2. The hop count and load sharing weight of L0 through its own interface 1 to L1 are 2, 2. The hop count and load sharing weight of L0 through its own interface 1 to L2 are 4, 8. The hop count and load sharing weight of L0 through its own interface 1 to L3 are 4, 8. The hop count and load sharing weight of L0 through its own interface 2 to L0 are 2, 4. The hop count and load sharing weight of L0 through its own interface 2 to L1 are 2, 2. The hop count and load sharing weight of L0 through its own interface 2 to L2 are 4, 8. The hop count and load sharing weight of L0 through its own interface 2 to L3 are 4, 8. The hop count and load sharing weight of L0 through its own interface 3 to L0 are 2, 2. The hop count and load sharing weight of L0 through its own interface 3 to L1 are 2, 2. The hop count and load sharing weight of L0 through its own interface 3 to L2 are 4, 8. The hop count and load sharing weight of L0 through its own interface 3 to L3 are 4, 8. Taking L3 as an example, the hop count and load sharing weight of L3 through its own interface 0 to L0 are 4, 12. The hop count and load sharing weight of L3 through its own interface 0 to L1 are 4, 8. The hop count and load sharing weight of L3 through its own interface 0 to L2 are 2, 2. The hop count and load sharing weight of L3 through its own interface 0 to L3 are 2, 2. The hop count and load sharing weight of L3 through its own interface 1 to L0 are 4, 12. The hop count and load sharing weight of L3 through its own interface 1 to L1 are 4, 8. The hop count and load sharing weight of L3 through its own interface 1 to L2 are 2, 2. The hop count and load sharing weight of L3 through its own interface 1 to L3 are 2, 2. The hop count and load sharing weight of L3 through its own interface 2 to L0 are 4, 12. The hop count and load sharing weight of L3 through its own interface 2 to L1 are 4, 8.
The hop count and load sharing weight of L3 through its own interface 2 to L2 are 2, 2. The hop count and load sharing weight of L3 through its own interface 2 to L3 are 2, 2. The hop count and load sharing weight of L3 through its own interface 3 to L0 are 4, 12. The hop count and load sharing weight of L3 through its own interface 3 to L1 are 4, 8. The hop count and load sharing weight of L3 through its own interface 3 to L2 are 2, 2. The hop count and load sharing weight of L3 through its own interface 3 to L3 are 2, 2.
Finally, each node in the Fabric network may update its forwarding table according to its own hop count weight table to guide the forwarding of packets or data, or each node may directly use its own hop count weight table as the forwarding table to guide packet forwarding. When the load sharing weight from any node to a leaf node changes, an update of the corresponding load sharing weight in the hop count weight table is triggered.
In the network shown in fig. 23, by querying the hop count weight table of S4, the outgoing interfaces from S4 to L0 are interface 0 and interface 1, and the corresponding load sharing weights are 4 and 2; therefore, when S4 sends packets to L0, the packets are sent on interface 0 and interface 1 in a ratio of 4:2.
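The load sharing decision itself can be sketched as weighted selection over the candidate outgoing interfaces (a simplified Python illustration; a real forwarding plane is implemented in hardware logic, FPGA code, NP microcode or multi-core CPU software as described above, and typically hashes flows rather than choosing per packet at random):

    import random

    def pick_egress(candidates):
        """candidates: list of (interface_id, load_sharing_weight) towards the
        destination leaf node; pick an interface in proportion to the weights."""
        interfaces, weights = zip(*candidates)
        return random.choices(interfaces, weights=weights, k=1)[0]

    # S4 towards L0 in fig. 23: interfaces 0 and 1 with weights 4 and 2, so roughly
    # a 4:2 share of the traffic leaves through interface 0 versus interface 1.
    egress = pick_egress([(0, 4), (1, 2)])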
If an interface does not receive a detection packet, or does not receive a load sharing weight and/or a hop count, within a preset time length, the interface is considered to be in a failure state, and the column of the failed interface in the hop count weight table is cleared; this process is generally called the aging process. After the above steps are repeated periodically, failure information can be quickly diffused to the whole network, and after a downstream failure occurs, it can be quickly fed back upstream so that the load sharing proportion is adjusted. For example: fig. 24 is a schematic view of a Fabric network based on a Spine-Leaf structure according to still another embodiment of the present application. Compared with fig. 23, when the link between L0 and S0 fails, that is, interface 0 of L0 fails and/or interface 0 of S0 fails, the load sharing weights of interfaces 0 and 1 from S4 to L0 change from 4:2 to 2:2, and the load sharing proportion is adjusted accordingly.
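A sketch of the aging process described above (Python, hypothetical names; the 10-second detection period and the 3-period threshold are the example values used elsewhere in this description): if nothing is received on an interface within the timeout, the whole column of that interface is cleared.

    import time

    DETECT_PERIOD = 10              # seconds, example value
    AGE_AFTER = 3 * DETECT_PERIOD   # 3 detection periods, as for the link state table below

    def age_interfaces(hop_weight_table, last_seen, now=None):
        """Clear the column of every interface from which neither a detection packet
        nor a load sharing weight / hop count was received within AGE_AFTER seconds."""
        now = time.time() if now is None else now
        for interface_id, seen in last_seen.items():
            if now - seen > AGE_AFTER:
                for row in hop_weight_table:
                    row[interface_id] = (0, 0)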
It should be noted that, for any spine node, it may send the hop count and the load sharing weight reaching each leaf node each time, or may only send data with changed load sharing weight and/or hop count, so as to save node overhead.
In addition, for any node, in order to avoid generating a reachability loop, the hop count may be limited: when the node receives, through its own interface, a hop count to a certain leaf node that exceeds a set value, the node considers that path unavailable, and the hop count is not recorded in the hop count weight table.
In summary, the present application provides a switching network system, that is, a Fabric network based on a Spine-Leaf structure. Firstly, each node may send load sharing weights and hop counts to other nodes, so that a downstream bandwidth change caused by an interface failure can be quickly propagated upstream, triggering the upstream to switch interfaces or adjust the interface load sharing proportion, thereby reducing link congestion. Secondly, the functions of the nodes are all implemented on the forwarding plane, and the forwarding plane is implemented by a hardware logic circuit, FPGA logic code, microcode of a programmable NP, or multi-core CPU software code, so that end-to-end millisecond-level failure convergence can be achieved.
The data format involved in the Fabric network described above will now be described:
1. link type table (i.e. interface type table): LinkType Table
The link type table is a static table, wherein the interface type can be an uplink interface, a downlink interface or a back-to-back interface.
Size of the link type table: LinkNum × 8 bits, where LinkNum in this application indicates the number of interfaces of a node, which is not repeated below.
The following steps are described: through interface configuration generation.
For example: fig. 25 is a schematic diagram of the link type table of the node S3 in fig. 6. As can be seen from fig. 6, the node S3 includes 6 interfaces, namely interfaces 0 to 5, where interfaces 0 to 3 are downstream interfaces and interfaces 4 to 5 are upstream interfaces. In fig. 25, the numbers above the table indicate the interface numbers and the numbers in the table indicate the interface types, where the number 1 indicates an upstream interface and the number 2 indicates a downstream interface; no horizontal interface is involved here, but it should be noted that the number 3 would indicate a horizontal interface.
2. Link horizontal partition mask table: LinkMask Table
The link horizontal partition mask table is a static table describing the horizontal partition group information of the interfaces.
Size: LinkNum × LinkNum × 1 bit.
Description: all uplink interfaces are grouped together; back-to-back interfaces connecting the same device are grouped together; and each downlink interface forms a group by itself. All entries are initialized to 1, and entries are set to 0 according to the interface grouping: if the interface IDs of one group are {ID0, ID1, …, IDn-1}, then LinkMask[IDi][IDj] = 0 for i = 0 to n-1 and j = 0 to n-1.
For example, fig. 26 is a schematic diagram of the link horizontal partition mask table of node S3 in fig. 6. With reference to fig. 6, the top and left sides of the table shown in fig. 26 are both interface numbers; since interfaces 4 and 5 of S3 belong to the same horizontal partition group, the entries at the intersections of interface 4 and interface 5 are 0.
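As an illustration, the following is a minimal Python sketch of how such a mask table could be generated from the interface grouping described above; the function name and data layout are assumptions for this sketch, not the patent's implementation:

```python
# Sketch: building the LinkMask table from interface groups, where `groups`
# is a list of lists of interface IDs (all uplinks in one group, back-to-back
# interfaces to the same device in one group, every downlink in its own group).
def build_link_mask(link_num, groups):
    mask = [[1] * link_num for _ in range(link_num)]   # initialize all entries to 1
    for group in groups:
        for i in group:
            for j in group:
                mask[i][j] = 0                         # same horizontal-split group -> 0
    return mask
```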
3. Link state table: LinkStatus Table
The link state table is a dynamic table describing interface state information of one node (0 indicates an unavailable state and 1 indicates an available state).
Size: LinkNum × 1 bit.
Description: if a load sharing weight and/or a hop count is received from an interface, the state of the interface is set to available, i.e. 1; if no load sharing weight and/or hop count is received within 3 detection periods, the interface state is set to unavailable, i.e. 0.
For example, fig. 27 is a schematic diagram of the link state table of node S3 in fig. 6. As shown in fig. 27, the numbers above the table are the interface numbers of S3, and the numbers in the table indicate the state of each interface; as can be seen from fig. 27, all 6 interfaces of S3 are available.
4. A hop count meter: hopnum Table
The hop count table is a dynamic table describing routing hop count information.
Size: LinkNum × (LeafNum + 1) × 4 bits, where LeafNum represents the number of leaf nodes.
Description: the table changes dynamically during operation, triggered by received hop counts or by timer messages. A non-zero value indicates the hop count of a reachable route, and 0 indicates unreachable. The last row indicates the aging state: 1 indicates that aging is needed, and 0 indicates that no aging is needed.
For example, fig. 28 is a schematic diagram of the hop count table of node S3 in fig. 6. As shown in fig. 28, the numbers above the table are the interface numbers of S3, the numbers in the table indicate the hop counts corresponding to the interfaces, the numbers on the left side of the table are the identifiers of the leaf nodes, and the last row indicates the aging state. As can be seen from fig. 28, the hop count from S3 to L0 through interface 4 is 3 and through interface 5 is also 3; the hop count to L1 through interfaces 4 and 5 is also 3; the hop count to L2 through interface 0 is 1, through interface 2 is 1, and through interfaces 4 and 5 is 3; the hop count to L3 through interfaces 1 and 3 is 1, and through interfaces 4 and 5 is 3. The last row shows that none of the interfaces of S3 needs aging.
5. Weight table: weights table
The weight table is a dynamic table that describes load weight information for the interfaces of the node.
Size: LinkNum × LeafNum × (8–32) bits; the space occupied differs between load sharing weight algorithms.
Description: the link weight information reported by the downstream according to different algorithms, which is finally used as the load sharing proportion during forwarding.
For example, fig. 29 is a schematic diagram of the weight table of node S3 in fig. 6. As shown in fig. 29, the numbers above the table are the interface numbers of S3, the numbers in the table indicate the load sharing weights corresponding to the interfaces, and the numbers on the left side of the table are the identifiers of the leaf nodes. As can be seen from fig. 29, the load sharing weight of S3 to L0 through interface 4 is 6 and through interface 5 is also 6; the weight to L1 through interfaces 4 and 5 is 4; the weight to L2 through interface 0 is 1, through interface 2 is 1, and through interfaces 4 and 5 is 4; the weight to L3 through interfaces 1 and 3 is 1, and through interfaces 4 and 5 is 4.
6. Next-hop outgoing interface list: NextHop Table
The next-hop outgoing interface list is a dynamic table that describes the list of best outgoing interfaces to a certain leaf node.
Size: LinkNum × 4 bits.
Description: an intermediate result calculated when the hop count is sent, representing the list of next-hop outgoing interfaces to a certain leaf node.
For example, fig. 30 is a schematic diagram of the next-hop outgoing interface list of node S3 in fig. 6. As shown in fig. 30, the numbers above the table are the interface numbers of S3, the numbers in the table indicate the hop counts to L0 through the respective interfaces, and the number on the left side of the table is the identifier of L0. As can be seen from fig. 30, S3 can reach L0 through interfaces 4 and 5, and the corresponding hop counts are both 3.
7. Initial weight table: InitWeights table
The initial weight table is a static table describing the initial load sharing weights of the interfaces. Only leaf nodes have initial load sharing weights, and spine nodes do not have this information.
Size: LinkNum × (8–32) bits; the space occupied differs between load sharing weight algorithms.
Description: the specific values are related to the configuration and the algorithm, such as the current bandwidth information corresponding to at least one interface of the leaf node, the proportion of available interfaces among the at least one interface of the leaf node, the quality assessment value corresponding to each of the at least one interface of the leaf node, and the average queue length corresponding to each of the at least one interface of the leaf node.
For example, fig. 31 is a schematic diagram of the initial weight table of node L0 in fig. 6. As shown in fig. 31, the numbers above the table are the interface numbers of L0, and the numbers in the table indicate the initial load sharing weight corresponding to each interface. As can be seen from fig. 31, interfaces 0 to 3 of L0 have initial load sharing weights of 2, 1, 2, and 1, respectively.
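For reference in the sketches that follow, the per-node tables described above can be gathered into one simple structure. This is only an illustrative assumption about data layout (for instance, the aging flags are kept here as a separate per-interface array rather than as the last row of the hop count table):

```python
# Sketch of the per-node tables assumed by the module sketches below; sizes
# follow the descriptions above (rows = leaf IDs, columns = interface IDs
# for the hop-count and weight tables).
from dataclasses import dataclass, field

@dataclass
class NodeTables:
    link_num: int                                     # number of interfaces (LinkNum)
    leaf_num: int                                     # number of leaf nodes (LeafNum)
    link_type: list = field(default_factory=list)     # 1 = uplink, 2 = downlink, 3 = horizontal
    link_mask: list = field(default_factory=list)     # link_num x link_num, 0 = same group
    link_status: list = field(default_factory=list)   # 1 = available, 0 = unavailable
    hop_num: list = field(default_factory=list)       # leaf_num x link_num, 0 = unreachable
    weights: list = field(default_factory=list)       # leaf_num x link_num
    init_weights: list = field(default_factory=list)  # leaf nodes only
    aging_flag: list = field(default_factory=list)    # per interface, 1 = needs aging
```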
Fig. 32 is a schematic diagram of a node in a Fabric network according to an embodiment of the present application. As shown in fig. 32, each node includes 6 modules. Module 1 is configured to traverse all of its interfaces at regular intervals, look up its hop count table or hop count weight table, calculate the hop count with which other nodes reach each leaf node through each interface of this node, and construct and send a packet (also referred to as a reachability packet). Module 2 is configured to process a received packet carrying hop counts, parse out the hop counts, and modify the column associated with the receiving interface in its hop count table or hop count weight table according to the parsed hop counts. Module 3 is configured to traverse all of its interfaces at regular intervals, calculate by table lookup the load sharing weight with which other nodes reach each leaf node through each interface of this node, and construct and send a packet (also referred to as a weight packet). Module 4 is configured to process a received packet carrying load sharing weights, parse out the load sharing weights, and modify the column associated with the receiving interface in the weight table or hop count weight table. Module 5 is configured to traverse all of its interfaces at regular intervals and check whether a packet carrying hop counts and/or load sharing weights has been received within the specified aging period; if an interface has not received such a packet, all columns associated with that interface in the hop count table and the weight table, or in the hop count weight table, are cleared. Module 6 is configured to store and maintain the hop count table, the weight table and other auxiliary entries, or the hop count weight table and other auxiliary entries, and to trigger modification of the external forwarding table when a hop count and/or load sharing weight changes.
With reference to the above data format, the following describes specific implementations of the modules of the node:
Implementation of module 1: periodically sending a message carrying the hop count.
The algorithms implemented on the spine node and the leaf node are different; pseudo code of the specific algorithm is as follows:
On the spine node:
(The pseudocode appears as a figure in the original publication; the line numbers below refer to it.)
Line 01 loops over all interfaces whose state is normal. Line 02 traverses each leaf node and computes the hop count for reaching that leaf node. Line 03 uses the link horizontal partition mask table to exclude some outgoing interfaces and stores the result in the next-hop outgoing interface list for subsequent calculation. Line 04 calculates the minimum non-zero hop count value of the next-hop outgoing interface list. Line 05 normalizes the next-hop outgoing interface list by checking whether each entry equals the minimum hop count. Line 07 calculates the number of final outgoing interfaces. Lines 08-12 construct the message carrying hop counts: if the number of outgoing interfaces is greater than 0, the leaf node is reachable and the hop count is filled in as the minimum hop count plus 1; otherwise it is unreachable and the hop count is filled in as 0. Lines 15-16, after all interfaces and leaf nodes have been traversed, fill in the message header information and send the message carrying the hop counts.
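A minimal Python sketch of this algorithm, under the table layout assumed above (hop_num indexed as [leaf][interface], link_mask 0 for interfaces in the same group, link_status 1 for available); the message is represented here as a plain dict and all names are illustrative:

```python
def build_hop_count_messages(link_num, leaf_num, link_status, link_mask, hop_num):
    messages = []
    for link in range(link_num):                       # line 01: every normal interface
        if link_status[link] != 1:
            continue
        items = []
        for leaf in range(leaf_num):                   # line 02: every leaf node
            # line 03: exclude interfaces in the same horizontal-split group
            candidates = [i for i in range(link_num) if link_mask[link][i] == 1]
            # line 04: smallest non-zero hop count among the remaining interfaces
            hops = [hop_num[leaf][i] for i in candidates if hop_num[leaf][i] != 0]
            min_hop = min(hops) if hops else 0
            # lines 05-12: reachable -> advertise min hop + 1, unreachable -> 0
            items.append(min_hop + 1 if hops else 0)
        # lines 15-16: one message per interface; header fields are illustrative
        messages.append({"out_link": link, "start_leaf_id": 0,
                         "item_num": len(items), "hops": items})
    return messages
```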
On leaf nodes:
(The pseudocode appears as a figure in the original publication.)
On a leaf node, only the hop count information of the leaf node itself needs to be sent to all interfaces in the normal state, and the hop count is 1.
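A corresponding sketch for the leaf node, under the same assumptions:

```python
# Sketch of module 1 on a leaf node: it only advertises itself, with hop
# count 1, on every interface whose state is normal.
def build_leaf_hop_count_messages(link_num, link_status, my_leaf_id):
    return [{"out_link": link, "start_leaf_id": my_leaf_id, "item_num": 1, "hops": [1]}
            for link in range(link_num) if link_status[link] == 1]
```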
Implementation of module 2: processing of a received message carrying hop counts.
Assuming that a message carrying hop counts is received on the interface numbered linkid, the pseudo code for message processing is as follows:
(The pseudocode appears as a figure in the original publication; the line numbers below refer to it.)
Line 01 updates the aging flag of the column of interface linkid in the hop count table, setting it to the updated state 0. Line 02 sets the interface state to the normal state 1. Line 03 parses the StartLeafID field from the message. Line 04 parses the ItemNum field from the message. Line 05 traverses each data item according to ItemNum, and line 06 parses each hop count in the message. Line 07 writes the hop count information into the hop count table.
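A sketch of this receive-side processing, under the same assumed layout (aging flags kept as a separate per-interface array):

```python
# Sketch of module 2: processing a received hop-count message on interface
# `linkid`, using the same dict-based message layout assumed above.
def process_hop_count_message(linkid, msg, hop_num, aging_flag, link_status):
    aging_flag[linkid] = 0                    # line 01: column no longer needs aging
    link_status[linkid] = 1                   # line 02: interface is alive
    start_leaf = msg["start_leaf_id"]         # line 03
    item_num = msg["item_num"]                # line 04
    for i in range(item_num):                 # lines 05-07: write hops into the table
        hop_num[start_leaf + i][linkid] = msg["hops"][i]
```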
Implementation of module 3: periodically sending a message carrying the load sharing weights.
The algorithms implemented on the spine node and the leaf node are different; pseudo code of the specific algorithm is as follows:
On the spine node:
(The pseudocode appears as a figure in the original publication; the line numbers below refer to it.)
Line 01 loops over all interfaces whose state is normal. Line 02 traverses each leaf node and computes the hop count for reaching that leaf node. Line 03 uses the link horizontal partition mask table to exclude some outgoing interfaces and stores the result in the next-hop outgoing interface list for subsequent calculation. Line 04 calculates the minimum non-zero hop count value of the next-hop outgoing interface list. Line 05 normalizes the next-hop outgoing interface list by checking whether each entry equals the minimum hop count. Line 06 calculates the weighted total weight of the final outgoing interface list (weight algorithm 1 is used here; other algorithms are handled similarly, the core step being a weighted accumulation of the weights of the outgoing interfaces, with the difference lying in the choice of weighting parameters). Line 07 constructs the message carrying the load sharing weights. Lines 09-10, after all interfaces and leaf nodes have been traversed, fill in the message header information and send the message carrying the load sharing weights.
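A sketch of the spine-node weight advertisement using weight algorithm 1 (summing the weights of the minimum-hop outgoing interfaces outside the interface's horizontal-split group), under the same assumed layout; names and the dict-based message format are illustrative:

```python
def build_weight_messages(link_num, leaf_num, link_status, link_mask, hop_num, weights):
    messages = []
    for link in range(link_num):                         # line 01
        if link_status[link] != 1:
            continue
        items = []
        for leaf in range(leaf_num):                     # line 02
            candidates = [i for i in range(link_num) if link_mask[link][i] == 1]  # line 03
            hops = [hop_num[leaf][i] for i in candidates if hop_num[leaf][i] != 0]
            min_hop = min(hops) if hops else 0           # line 04
            best = [i for i in candidates                # line 05: keep minimum-hop interfaces
                    if min_hop != 0 and hop_num[leaf][i] == min_hop]
            # line 06: weight algorithm 1 -- accumulate the weights of the best interfaces
            items.append(sum(weights[leaf][i] for i in best))
        messages.append({"out_link": link, "start_leaf_id": 0,      # lines 07-10
                         "item_num": len(items), "weights": items})
    return messages
```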
On the leaf nodes:
(The pseudocode appears as a figure in the original publication.)
On a leaf node, only the load sharing weight of the leaf node itself needs to be sent to all interfaces in the normal state, and the weight is obtained directly from the configured initial weight table.
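A sketch for the leaf node, assuming (for illustration) that the weight advertised on an interface is that interface's configured initial weight:

```python
# Sketch of module 3 on a leaf node: on every normal interface it advertises
# its own configured initial weight for that interface (one item, itself).
def build_leaf_weight_messages(link_num, link_status, init_weights, my_leaf_id):
    return [{"out_link": link, "start_leaf_id": my_leaf_id, "item_num": 1,
             "weights": [init_weights[link]]}
            for link in range(link_num) if link_status[link] == 1]
```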
Implementation of module 4: processing of a received message carrying load sharing weights.
Assuming that a message carrying load sharing weights is received on the interface numbered linkid, the pseudo code for message processing is as follows:
(The pseudocode appears as a figure in the original publication; the line numbers below refer to it.)
Line 01 updates the aging flag of the column of interface linkid in the hop count table, setting it to the updated state 0. Line 02 sets the link state to the normal state 1. Line 03 parses the StartLeafID field from the message. Line 04 parses the ItemNum field from the message. Line 05 traverses each data item according to ItemNum, and line 06 parses each load sharing weight in the message. Line 07 writes the load sharing weights into the weight table.
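A sketch of this processing; it mirrors module 2 but writes into the weight table:

```python
# Sketch of module 4: processing a received weight message on interface
# `linkid`, under the same assumed table and message layout as above.
def process_weight_message(linkid, msg, weights, aging_flag, link_status):
    aging_flag[linkid] = 0                    # line 01
    link_status[linkid] = 1                   # line 02
    start_leaf = msg["start_leaf_id"]         # line 03
    for i in range(msg["item_num"]):          # lines 04-07
        weights[start_leaf + i][linkid] = msg["weights"][i]
```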
Implementation of module 5: periodic aging processing.
The node performs aging processing on the entries at regular intervals; the pseudo code of the aging processing is as follows:
(The pseudocode appears as a figure in the original publication; the line numbers below refer to it.)
Line 01 traverses each interface linkid for subsequent processing. Line 02 checks whether the aging flag of interface linkid is 1, which indicates that the data of this interface needs to be aged. Lines 03-04 directly clear the data of the columns in the hop count table and the weight table corresponding to the interface that needs to be aged. Line 05 marks the state of the interface that needs to be aged as the failure state 0. Line 07 sets the aging flags of all interfaces back to the to-be-aged state; the aging flag is cleared again once the interface receives a message carrying hop counts or a message carrying load sharing weights.
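A sketch of the aging pass, under the same assumptions as the module sketches above:

```python
# Sketch of module 5: periodic aging. An interface whose aging flag is still
# set when the timer fires has not produced a hop-count or weight message in
# the whole period, so its columns are cleared and it is marked failed.
def age_tables(link_num, leaf_num, aging_flag, link_status, hop_num, weights):
    for linkid in range(link_num):                    # line 01
        if aging_flag[linkid] == 1:                   # line 02: nothing received
            for leaf in range(leaf_num):              # lines 03-04: clear both tables
                hop_num[leaf][linkid] = 0
                weights[leaf][linkid] = 0
            link_status[linkid] = 0                   # line 05: mark as failed
    for linkid in range(link_num):                    # line 07: re-arm all flags;
        aging_flag[linkid] = 1                        # cleared again when a message arrives
```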
Implementation of module 6: transferring the load sharing weight information to the forwarding table.
This module needs to be supported by interfaces provided by external modules, and its implementation differs for different software and hardware architectures of the network element device.
The end-to-end performance effect of the present application is explained as follows:
If an ASIC chip uses a hardware circuit to implement the functions of the nodes, the chip runs at 1 GHz, so 1 clock beat is 1 nanosecond. Let LinkN denote the maximum number of links of each node, and LeafN denote the maximum number of leaf nodes supported by the whole network.
Based on experiments, the performance of the algorithm for sending messages carrying hop counts is evaluated without considering segmented parallel processing. The total cost of sending the messages carrying hop counts is LinkN × (LeafN + 2 × log2(LinkN) + 26) beats. With LinkN = 128 and LeafN = 2048, the total time overhead of sending the LinkN messages carrying hop counts in each period is 128 × (2048 + 2 × log2(128) + 26) = 267264 beats ≈ 267264 ns ≈ 0.267 ms. Without considering parallel processing, the total cost of receiving a single message carrying hop counts is LeafN × 3 beats (3 beats per hop count entry); with LeafN = 2048, the processing time of a single message carrying hop counts is 2048 × 3 = 6144 beats = 6144 nanoseconds.
The total fault convergence time = fault detection time + MaxHop × (time to send a message carrying hop counts + total time to receive messages carrying hop counts),
where:
total time to receive messages carrying hop counts = processing time of a single message carrying hop counts × number of links between two adjacent nodes;
fault detection time = 3 × detection period, or 0 (when hardware reports the fault directly, e.g. a cable is pulled out).
Thus the total fault convergence time
= 3 × T + MaxHop × [LinkN × (LeafN + 2 × log2(LinkN) + 26) + LinkNumPerHop × LeafN × 3].
With LinkN = 128 and LeafN = 2048, an average of 10 links between two hops (LinkNumPerHop = 10), at most 3 hops in the three-stage switching network structure (MaxHop = 3), and ignoring the fault detection time:
total fault convergence time = 3 × (128 × (2048 + 2 × log2(128) + 26) + 10 × 2048 × 3) beats = 986112 beats ≈ 986112 ns ≈ 0.986 ms.
Therefore, when the algorithm is implemented in a chip, the whole network can achieve an end-to-end fault convergence time of about 1 millisecond, that is, millisecond-level rapid fault perception.
The following explains the effect of fault propagation in the present application:
In the present application, a downstream link fault is quickly propagated to the upstream by combining the hop count and the load sharing weight, so that the bandwidth remaining after the downstream link fault is conveyed to the upstream.
Next, the fault propagation effect of the present application is described with reference to a specific networking. Fig. 33 is a schematic diagram of a Fabric network based on a Spine-Leaf structure according to still another embodiment of the present application, and fig. 34 is a partial schematic diagram of the Fabric network shown in fig. 33. With reference to fig. 33 and fig. 34, when the single link between S0 and S8 fails, that is, when the connecting interface of S0 and/or S8 fails, according to the technical solution provided in the present application each node quickly propagates the fault upstream, so that the load sharing weights of the two outgoing interfaces of the routes from L6 and L7 to L0 and L1 are adjusted to 1, 2 respectively, the load sharing weights of the two outgoing interfaces of the routes from S6 to L0 and L1 are both adjusted to 0, 1, and the routes from S7 to L0 and L1 remain unchanged.
Fig. 35 is a partial schematic diagram of the Fabric network shown in fig. 33 according to another embodiment of the present application. With reference to fig. 33 and fig. 35, after the multiple link failures shown in fig. 35 occur, the load sharing weights of the two outgoing interfaces of the route from L6 to L0 are adjusted to 0, 2, and the load sharing weights of the two outgoing interfaces of the route from L6 to L1 are adjusted to 1, 2. The load sharing weights of the two outgoing interfaces of the route from L7 to L0 are adjusted to 0, 1, and the load sharing weights of the two outgoing interfaces of the route from L7 to L1 are adjusted to 1, 2.
Fig. 36 is a schematic diagram of a Fabric network based on a Spine-Leaf structure according to yet another embodiment of the present application, and fig. 37 is a partial schematic diagram of the Fabric network shown in fig. 36. With reference to fig. 36 and fig. 37, when the single link failure shown in fig. 37 occurs, the load sharing weights of the outgoing interfaces from L9 to L0 change to 1, 2, 2, 2. Fig. 38 is a partial schematic diagram of the Fabric network shown in fig. 36 according to another embodiment of the present application. With reference to fig. 36 and fig. 38, when the multiple link failures shown in fig. 38 occur, the load sharing ratios calculated by the first and second load sharing weight calculation methods differ, and the result of the second load sharing weight calculation method is more accurate.
Fig. 39 is a flowchart of a routing information transmission method according to an embodiment of the present application, where the method is applied to a first node, where a switching network system includes a plurality of leaf nodes and at least one spine node, and the first node is a leaf node in the plurality of leaf nodes. The routing information in this application includes the load sharing weight and/or the hop count described below, which is not described in detail below. As shown in fig. 39, the method includes the steps of:
Step S101: determining, according to the current state information of at least one interface of the first node, the load sharing weight from other nodes to the first node through the first interface of the first node.
Step S102: sending, through the first interface, the load sharing weight and the hop count from other nodes to the first node through the first interface, where the first interface is one of the at least one interface of the first node.
Optionally, the load sharing weight and the hop count from the other node to the first node through the first interface of the first node are carried in the first message. Or the load sharing weight from the other node to the first node through the first interface of the first node is carried in the second message, and the hop count from the other node to the first node through the first interface of the first node is carried in the third message.
Optionally, the first packet further includes at least one of the following: the type of the first message, the auxiliary identifier of the first message, the identifier of the first node, and the number of the first nodes. The auxiliary message of the first message is used to identify that the first message is a periodically sent message or a message sent when the first interface fails. The number of first nodes is 1.
The second message further includes at least one of: the type of the second message, the auxiliary identifier of the second message, the identifier of the first node, and the number of the first nodes. The auxiliary message of the second message is used to identify that the second message is a periodically sent message or a message sent when the first interface fails. The number of first nodes is 1.
The third message further includes at least one of: the type of the third message, the auxiliary identifier of the third message, the identifier of the first node, and the number of the first nodes. The auxiliary message of the third message is used to identify that the third message is a periodically sent message or a message sent when the first interface fails. The number of first nodes is 1.
Optionally, the determining of the load sharing weight from other nodes to the first node through the first interface of the first node according to the current state information of the at least one interface of the first node, and the sending of the load sharing weight and the hop count from other nodes to the first node through the first interface, include: determining, on a forwarding plane of the first node, the load sharing weight from other nodes to the first node through the first interface of the first node according to the current state information of the at least one interface of the first node; and sending, through the first interface, the load sharing weight and the hop count from other nodes to the first node through the first interface.
Optionally, the current state information of at least one interface of the first node includes at least one of: the method comprises the steps of obtaining current bandwidth information corresponding to at least one interface of a first node, an available interface proportion in the at least one interface of the first node, a quality assessment value corresponding to the at least one interface of the first node, and an average queue length corresponding to the at least one interface of the first node.
Optionally, sending the load sharing weight and the hop count of the other node to the first node through the first interface by using the first interface includes: periodically sending the load sharing weight and the hop count of other nodes to the first node through the first interface. Or when the first interface fails, the load sharing weight and the hop count from other nodes to the first node through the first interface are sent through the first interface.
The routing information transmission method provided by the present application is executed by a first node, where the first node is a node in the above switching network system, and the content and effect of the routing information transmission method may refer to some embodiments of the system, which are not described herein again.
Fig. 40 is a flowchart of a routing information transmission method applied to a second node according to another embodiment of the present application, where a switching network system includes a plurality of leaf nodes and at least one spine node, and the second node is a spine node of the at least one spine node. As shown in fig. 40, the method includes the steps of:
Step S201: determining, according to the load sharing weight from the second node to the first node through at least one interface of the second node, the load sharing weight from other nodes to the first node through the second interface of the second node.
Step S202: determining, according to the hop count from the second node to the first node through the at least one interface of the second node, the hop count from other nodes to the first node through the second interface.
Step S203: sending, through the second interface, the load sharing weight and the hop count from other nodes to the first node through the second interface, where the second interface is one of the at least one interface of the second node.
Optionally, before step S201, the method further includes: receiving the load sharing weight and the hop count from the second node to the first node through the at least one interface of the second node.
Optionally, determining the load sharing weight of the other node to the first node through the second interface of the second node according to the load sharing weight of the second node to the first node through the at least one interface of the second node includes: and if the second interface is an uplink interface or a horizontal interface, determining the load sharing weight from other nodes to the first node through the second interface according to the load sharing weight from the second node to the first node through the interfaces except the horizontal segmentation group in at least one interface of the second node. Wherein, if the second interface is an uplink interface and other uplink interfaces exist in the second node, the horizontal split group includes: a second interface and other upstream interfaces of the second node. If the second interface is an uplink interface and the second node does not have other uplink interfaces, the horizontal split group includes: a second interface. If the second interface is a horizontal interface and other horizontal interfaces exist in the second node, the horizontal partition group includes: the second interface and other horizontal interfaces of the second interface. If the second interface is a horizontal interface and the second node does not have other horizontal interfaces, the horizontal split group comprises: a second interface. And if the second interface is a downlink interface, determining the load sharing weight from other nodes to the first node through the second interface according to the load sharing weight from the second node to the first node through at least one interface of the second node.
Optionally, if the second interface is an uplink interface or a horizontal interface, determining the load sharing weight of the other node to the first node through the second interface according to the load sharing weight of the second node to the first node through the interface except the horizontal split group in at least one interface of the second node, includes: and if the second interface is an uplink interface or a horizontal interface, determining that the load sharing weight from the other node to the first node through the second interface is the sum of the load sharing weights from the second node to the first node through the interface corresponding to the minimum non-0 hop count except the horizontal segmentation group in at least one interface of the second node. If the second interface is a downlink interface, determining the load sharing weight from the other node to the first node through the second interface according to the load sharing weight from the second node to the first node through at least one interface of the second node, including: and if the second interface is a downlink interface, determining that the load sharing weight from the other nodes to the first node through the second interface is the sum of the load sharing weights from the second node to the first node through the interface corresponding to the minimum non-0 hop count in at least one interface of the second node.
Optionally, if the second interface is an uplink interface or a horizontal interface, determining the load sharing weight of the other node to the first node through the second interface according to the load sharing weight of the second node to the first node through the interface except the horizontal split group in at least one interface of the second node, includes: if the second interface is an uplink interface or a horizontal interface, determining the load sharing weight of the other nodes to the first node through the second interface of the second node according to the load sharing weight of the second node to the first node through the interface corresponding to the minimum non-0 hop count except the horizontal split group in at least one interface of the second node, the number of the interfaces of the second node, and the number of normal interfaces of the second node. If the second interface is a downlink interface, determining the load sharing weight from the other node to the first node through the second interface according to the load sharing weight from the second node to the first node through at least one interface of the second node, including: and if the second interface is a downlink interface, determining the load sharing weight from other nodes to the first node through the second interface of the second node according to the load sharing weight from the second node to the first node through the interface corresponding to the minimum non-0 hop count in at least one interface of the second node, the number of interfaces of the second node and the number of normal interfaces of the second node.
Optionally, if the second interface is an uplink interface or a horizontal interface, determining the load sharing weight of the other node to the first node through the second interface according to the load sharing weight of the second node to the first node through the interface except the horizontal split group in at least one interface of the second node, including: if the second interface is an uplink interface or a horizontal interface, determining load sharing weights of other nodes to the first node through the second interface of the second node according to the load sharing weight of the second node to the first node through the interface corresponding to the minimum non-0 hop count except the horizontal split group in at least one interface of the second node, the quality evaluation value of the interface corresponding to the minimum non-0 hop count except the horizontal split group in at least one interface of the second node, the number of the interfaces corresponding to the minimum non-0 hop count except the horizontal split group in at least one interface of the second node, the number of interfaces of the second node, and the normal number of interfaces of the second node. If the second interface is a downlink interface, determining the load sharing weight from the other node to the first node through the second interface according to the load sharing weight from the second node to the first node through at least one interface of the second node, including: and if the second interface is a downlink interface, determining the load sharing weight of other nodes to the first node through the second interface of the second node according to the load sharing weight of the second node to the first node through the interface corresponding to the minimum non-0 hop count in the at least one interface of the second node, the quality evaluation value of the interface corresponding to the minimum non-0 hop count in the at least one interface of the second node, the number of interfaces of the second node and the number of normal interfaces of the second node.
Optionally, sending the load sharing weight and the hop count of the other node to the first node through the second interface via the second interface includes: and periodically sending the load sharing weight and the hop count of other nodes to the first node through the second interface. Or when the second interface fails, the load sharing weight and the hop count from other nodes to the first node through the second interface are sent through the second interface.
Optionally, the load sharing weight and the hop count that are respectively sent to the first node by the other nodes through the second interface of the second node are carried in the fourth message. Or the load sharing weights respectively from other nodes to the first node through the second interface of the second node are carried in the fifth message, and the hop counts respectively from other nodes to the first node through the second interface of the second node are carried in the sixth message.
Optionally, the fourth packet further includes at least one of the following: the type of the fourth packet, an auxiliary identifier of the fourth packet, an identifier of a first leaf node of the plurality of leaf nodes, and the number of the plurality of leaf nodes. The auxiliary message of the fourth message is used to identify that the fourth message is a periodically sent message or a message sent when the second interface fails. The fifth message further includes at least one of: the type of the fifth message, the auxiliary identifier of the fifth message, the identifier of the first leaf node in the plurality of leaf nodes, and the number of the plurality of leaf nodes. The auxiliary message of the fifth message is used to identify that the fifth message is a periodically sent message or a message sent when the second interface fails. The sixth message further includes at least one of: the type of the sixth message, the auxiliary identifier of the sixth message, the identifier of the first leaf node in the plurality of leaf nodes, and the number of the plurality of leaf nodes. The auxiliary message of the sixth message is used to identify that the sixth message is a periodically sent message or a message sent when the second interface fails.
Optionally, determining the hop count of the other node to the first node through the second interface according to the hop count of the second node to the first node through at least one interface of the second node, includes: and if the second interface is an uplink interface or a horizontal interface, determining the hop count of other nodes to the first node through the second interface according to the hop count of the second node to the first node through the interfaces except the horizontal segmentation group in at least one interface of the second node. And if the second interface is a downlink interface, determining the hop count of other nodes to the first node through the second interface according to the hop count of the second node to the first node through at least one interface of the second node.
Optionally, if the second interface is an uplink interface or a horizontal interface, determining, according to the hop count of the second node from the interface other than the horizontal split group in at least one interface of the second node to the first node, the hop count of the other node from the second interface to the first node, includes: if the second interface is an uplink interface or a horizontal interface, selecting the smallest non-0 hop count from the hop counts of the second node from the interfaces except the horizontal segmentation group to the first node through at least one interface of the second node, and determining that the hop counts of other nodes from the second interface of the second node to the first node are the non-0 hop count plus 1. If the second interface is a downlink interface, determining the hop count of other nodes to the first node through the second interface according to the hop count of the second node to the first node through at least one interface of the second node, including: and if the second interface is a downlink interface, selecting the minimum hop count which is not 0 from the hop counts from the second node to the first node through at least one interface of the second node, and determining that the hop counts from other nodes to the first node through the second interface are the hop counts which are not 0 plus 1.
Optionally, the load sharing weights of the other nodes to the first node through the at least one interface of the second node are determined according to the load sharing weight of the second node to the first node through the at least one interface of the second node. And determining the hop counts of other nodes to the first node through at least one interface of the second node according to the hop counts of the second node to the first node through at least one interface of the second node. Sending load sharing weight and hop count of other nodes to the first node through a second interface through the second interface, wherein the second interface is any one interface in at least one interface of the second node, and the method comprises the following steps: and determining the load sharing weight of other nodes to the first node through the second interface of the second node on the forwarding plane of the second node according to the load sharing weight of the second node to the first node through at least one interface of the second node. And determining the hop count of other nodes to the first node through the second interface according to the hop count of the second node to the first node through at least one interface of the second node. And sending the load sharing weight and the hop count of other nodes to the first node through the second interface.
Optionally, if the second node does not receive the load sharing weight and/or the hop count from the second node to the first node through the second interface within the preset time period, the load sharing weight and the hop count from the second node to the first node through the second interface are both 0. Or, if the second node does not receive the detection message transmitted through the second interface within the preset time length, the load sharing weight and the hop count from the second node to the first node through the second interface are both 0, and the detection message is used for detecting whether the second interface is abnormal.
The routing information transmission method provided by the present application is executed by a second node, where the second node is a node in the above switching network system, and the content and effect of the routing information transmission method may refer to some embodiments of the system, which are not described herein again.
Fig. 41 is a schematic diagram of a node 800 according to an embodiment of the present application; the components of the node are described in detail below with reference to fig. 41. The node 800 includes a processor 801, which may be an NP, FPGA, ASIC, or multi-core CPU (a multi-core CPU is taken as an example here) and is configured to execute the routing information transmission method described above. The node 800 may also include a memory 802, which may be a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 802 may be separate and coupled to the processor 801 via a communication bus 804, or may be integrated with the processor 801.
The memory 802 is used for storing software programs for implementing the scheme of the present application, and is controlled by the processor 801 to implement the method provided by the following method embodiments.
A transceiver 803 for communicating with other nodes. Of course, the transceiver 803 may also be used for communicating with a communication network, such as an ethernet, a Radio Access Network (RAN), a Wireless Local Area Network (WLAN), etc. The transceiver 803 may include a receiving unit to implement a receiving function and a transmitting unit to implement a transmitting function.
The communication bus 804 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 41, but this is not intended to represent only one bus or one type of bus.
The node structure shown in fig. 41 does not constitute a limitation on the node, which may include more or fewer components than shown, or combine some of the components, or use a different arrangement of components.
The node provided in the present application may be configured to execute the method described above, and details and effects corresponding to the node may refer to the method embodiment section, which are not described herein again.
Fig. 42 is a schematic diagram of a first node according to an embodiment of the present application, where a switching network system includes a plurality of leaf nodes and at least one spine node, and the first node is a leaf node in the plurality of leaf nodes. As shown in fig. 42, the first node includes:
a determining module 201, configured to determine, according to current state information of at least one interface of the first node, a load sharing weight from another node to the first node through the first interface of the first node.
A sending module 202, configured to send, through a first interface, a load sharing weight and a hop count that are provided by another node to a first node through the first interface, where the first interface is one interface of at least one interface of the first node.
Optionally, the load sharing weight and the hop count from the other node to the first node through the first interface of the first node are carried in the first message.
Alternatively,
the load sharing weight from the other nodes to the first node through the first interface of the first node is carried in the second message, and the hop count from the other nodes to the first node through the first interface of the first node is carried in the third message.
Optionally, the first packet further includes at least one of the following: the type of the first message, the auxiliary identifier of the first message, the identifier of the first node, and the number of the first nodes. The auxiliary message of the first message is used to identify that the first message is a periodically sent message or a message sent when the first interface fails. The number of first nodes is 1.
The second message further includes at least one of: the type of the second message, the auxiliary identifier of the second message, the identifier of the first node, and the number of the first nodes. The auxiliary message of the second message is used to identify that the second message is a periodically sent message or a message sent when the first interface fails. The number of first nodes is 1.
The third message further includes at least one of: the type of the third message, the auxiliary identifier of the third message, the identifier of the first node, and the number of the first nodes. The auxiliary message of the third message is used to identify that the third message is a periodically sent message or a message sent when the first interface fails. The number of first nodes is 1.
Optionally, the determining module 201 is specifically configured to determine, on a forwarding plane of the first node, a load sharing weight from another node to the first node through the first interface of the first node according to the current state information of at least one interface of the first node. The sending module 202 is specifically configured to send, on a forwarding plane of a first node, a load sharing weight and a hop count of another node to the first node through a first interface.
Optionally, the current state information of at least one interface of the first node includes at least one of: the method comprises the steps of obtaining current bandwidth information corresponding to at least one interface of a first node, an available interface proportion in the at least one interface of the first node, a quality assessment value corresponding to the at least one interface of the first node, and an average queue length corresponding to the at least one interface of the first node.
Optionally, the sending module 202 is specifically configured to: periodically sending the load sharing weight and the hop count of other nodes to the first node through the first interface. Or when the first interface fails, the load sharing weight and the hop count from other nodes to the first node through the first interface are sent through the first interface.
The content and effect of the first node provided in the present application for executing the above routing information transmission method may refer to the method embodiment section, which is not described again.
Fig. 43 is a schematic diagram of a second node according to an embodiment of the present application, where a switching network system includes a plurality of leaf nodes and at least one spine node, and the second node is a spine node of the at least one spine node. As shown in fig. 43, the second node includes:
a first determining module 301, configured to determine, according to a load sharing weight of a second node to a first node through at least one interface of the second node, a load sharing weight of another node to the first node through a second interface of the second node.
A second determining module 302, configured to determine, according to a hop count of the second node to the first node through at least one interface of the second node, a hop count of another node to the first node through the second interface.
A sending module 303, configured to send, through a second interface, the load sharing weight and the hop count that are sent by another node to the first node through the second interface, where the second interface is one interface in at least one interface of the second node.
Optionally, the method further includes: a receiving module 304, configured to receive a load sharing weight and a hop count of the second node to the first node through at least one interface of the second node.
Optionally, the first determining module 301 is specifically configured to: and if the second interface is an uplink interface or a horizontal interface, determining the load sharing weight from other nodes to the first node through the second interface according to the load sharing weight from the second node to the first node through the interfaces except the horizontal segmentation group in at least one interface of the second node. Wherein, if the second interface is an uplink interface and other uplink interfaces exist in the second node, the horizontal split group includes: a second interface and other upstream interfaces of the second node. If the second interface is an uplink interface and the second node does not have other uplink interfaces, the horizontal split group includes: a second interface. If the second interface is a horizontal interface and other horizontal interfaces exist in the second node, the horizontal partition group includes: the second interface and other horizontal interfaces of the second interface. If the second interface is a horizontal interface and the second node does not have other horizontal interfaces, the horizontal split group comprises: a second interface. And if the second interface is a downlink interface, determining the load sharing weight from other nodes to the first node through the second interface according to the load sharing weight from the second node to the first node through at least one interface of the second node.
Optionally, the first determining module 301 is specifically configured to: and if the second interface is an uplink interface or a horizontal interface, determining that the load sharing weight from the other node to the first node through the second interface is the sum of the load sharing weights from the second node to the first node through the interface corresponding to the minimum non-0 hop count except the horizontal segmentation group in at least one interface of the second node. And if the second interface is a downlink interface, determining that the load sharing weight from the other nodes to the first node through the second interface is the sum of the load sharing weights from the second node to the first node through the interface corresponding to the minimum non-0 hop count in at least one interface of the second node.
Optionally, the first determining module 301 is specifically configured to: if the second interface is an uplink interface or a horizontal interface, determining the load sharing weight of the other nodes to the first node through the second interface of the second node according to the load sharing weight of the second node to the first node through the interface corresponding to the minimum non-0 hop count except the horizontal split group in at least one interface of the second node, the number of the interfaces of the second node, and the number of normal interfaces of the second node. And if the second interface is a downlink interface, determining the load sharing weight from other nodes to the first node through the second interface of the second node according to the load sharing weight from the second node to the first node through the interface corresponding to the minimum non-0 hop count in at least one interface of the second node, the number of interfaces of the second node and the number of normal interfaces of the second node.
Optionally, the first determining module 301 is specifically configured to: if the second interface is an uplink interface or a horizontal interface, determining load sharing weights of other nodes to the first node through the second interface of the second node according to the load sharing weight of the second node to the first node through the interface corresponding to the minimum non-0 hop count except the horizontal split group in at least one interface of the second node, the quality evaluation value of the interface corresponding to the minimum non-0 hop count except the horizontal split group in at least one interface of the second node, the number of the interfaces corresponding to the minimum non-0 hop count except the horizontal split group in at least one interface of the second node, the number of interfaces of the second node, and the normal number of interfaces of the second node. And if the second interface is a downlink interface, determining the load sharing weight of other nodes to the first node through the second interface of the second node according to the load sharing weight of the second node to the first node through the interface corresponding to the minimum non-0 hop count in the at least one interface of the second node, the quality evaluation value of the interface corresponding to the minimum non-0 hop count in the at least one interface of the second node, the number of interfaces of the second node and the number of normal interfaces of the second node.
Optionally, the sending module 303 is specifically configured to: and periodically sending the load sharing weight and the hop count of other nodes to the first node through the second interface. Or when the second interface fails, the load sharing weight and the hop count from other nodes to the first node through the second interface are sent through the second interface.
Optionally, the load sharing weight and the hop count that are respectively sent to the first node by the other nodes through the second interface of the second node are carried in the fourth message. Or the load sharing weights respectively from other nodes to the first node through the second interface of the second node are carried in the fifth message, and the hop counts respectively from other nodes to the first node through the second interface of the second node are carried in the sixth message.
Optionally, the fourth packet further includes at least one of the following: the type of the fourth packet, an auxiliary identifier of the fourth packet, an identifier of a first leaf node of the plurality of leaf nodes, and the number of the plurality of leaf nodes. The auxiliary message of the fourth message is used to identify that the fourth message is a periodically sent message or a message sent when the second interface fails.
The fifth message further includes at least one of: the type of the fifth message, the auxiliary identifier of the fifth message, the identifier of the first leaf node in the plurality of leaf nodes, and the number of the plurality of leaf nodes. The auxiliary message of the fifth message is used to identify that the fifth message is a periodically sent message or a message sent when the second interface fails.
The sixth message further includes at least one of: the type of the sixth message, the auxiliary identifier of the sixth message, the identifier of the first leaf node in the plurality of leaf nodes, and the number of the plurality of leaf nodes. The auxiliary message of the sixth message is used to identify that the sixth message is a periodically sent message or a message sent when the second interface fails.
Optionally, the second determining module 302 is specifically configured to: and if the second interface is an uplink interface or a horizontal interface, determining the hop count of other nodes to the first node through the second interface according to the hop count of the second node to the first node through the interfaces except the horizontal segmentation group in at least one interface of the second node.
And if the second interface is a downlink interface, determining the hop count of other nodes to the first node through the second interface according to the hop count of the second node to the first node through at least one interface of the second node.
Optionally, the second determining module 302 is specifically configured to: if the second interface is an uplink interface or a horizontal interface, selecting the smallest non-0 hop count from the hop counts of the second node from the interfaces except the horizontal segmentation group to the first node through at least one interface of the second node, and determining that the hop counts of other nodes from the second interface of the second node to the first node are the non-0 hop count plus 1. And if the second interface is a downlink interface, selecting the minimum hop count which is not 0 from the hop counts from the second node to the first node through at least one interface of the second node, and determining that the hop counts from other nodes to the first node through the second interface are the hop counts which are not 0 plus 1.
Optionally, the first determining module 301 is specifically configured to: on the forwarding plane of the second node, determine the load sharing weight from the other nodes to the first node through the second interface of the second node according to the load sharing weights from the second node to the first node through the at least one interface of the second node. The second determining module 302 is specifically configured to: on the forwarding plane of the second node, determine the hop count from the other nodes to the first node through the second interface according to the hop counts from the second node to the first node through the at least one interface of the second node. The sending module 303 is specifically configured to: on the forwarding plane of the second node, send, through the second interface, the load sharing weight and hop count from the other nodes to the first node through the second interface.
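The corresponding load-sharing-weight computation can be sketched in the same way. The sketch below implements only the simplest of the weight-determination manners described herein, namely the sum of the load sharing weights of the interfaces corresponding to the minimum non-0 hop count; the variants that additionally take into account interface counts or quality evaluation values are omitted, and the dictionaries weights and hop_counts are assumed to be keyed by the same interfaces.

```python
def advertised_weight(weights, hop_counts, horizontal_split_group,
                      second_interface_is_downlink):
    """Load sharing weight from the other nodes to the first node through the second interface."""
    if second_interface_is_downlink:
        usable = {itf: h for itf, h in hop_counts.items() if h != 0}
    else:
        # uplink or horizontal interface: exclude the horizontal split group
        usable = {itf: h for itf, h in hop_counts.items()
                  if itf not in horizontal_split_group and h != 0}
    if not usable:
        return 0
    min_hop = min(usable.values())
    # sum the weights of the interfaces corresponding to the minimum non-0 hop count
    return sum(weights[itf] for itf, h in usable.items() if h == min_hop)
```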
Optionally, if the second node does not receive, within a preset time length, the load sharing weight and/or the hop count from the second node to the first node through the second interface, the load sharing weight and the hop count from the second node to the first node through the second interface are both 0. Alternatively, if the second node does not receive, within the preset time length, the detection message transmitted through the second interface, the load sharing weight and the hop count from the second node to the first node through the second interface are both 0, where the detection message is used to detect whether the second interface is abnormal.
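A minimal sketch of this ageing rule follows, assuming each interface keeps the time of its last received routing-information update or detection message; the dictionary keys and the value of the preset time length are hypothetical.

```python
import time

PRESET_TIME_LENGTH = 3.0  # hypothetical preset time length, in seconds

def age_out(entry, now=None):
    """Set the weight and hop count through the second interface to 0 when stale.

    `entry` is a dict with keys 'last_received', 'load_sharing_weight' and 'hop_count';
    'last_received' is the time of the last routing-information update or detection
    message received through the second interface.
    """
    now = time.monotonic() if now is None else now
    if now - entry["last_received"] > PRESET_TIME_LENGTH:
        entry["load_sharing_weight"] = 0
        entry["hop_count"] = 0
    return entry
```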
For the content and effects of the second node provided in this application for performing the foregoing routing information transmission method, refer to the method embodiments; details are not described herein again.
The present application further provides a non-transitory computer-readable storage medium storing computer instructions, where the computer instructions are used to cause a computer to perform the method described above; for details and effects, refer to the method embodiments, which are not described herein again.
The present application further provides a computer program product, where the computer program product includes computer instructions used to cause a computer to perform the method described above; for details and effects, refer to the method embodiments, which are not described herein again.

Claims (38)

1. A switching network system, comprising: a plurality of leaf nodes and at least one spine node; the first node is a leaf node of the plurality of leaf nodes; the second node is a spine node of the at least one spine node;
the first node is configured to: determining load sharing weight from other nodes to the first node through a first interface of the first node according to current state information of at least one interface of the first node, and sending the load sharing weight and hop count from other nodes to the first node through the first interface, wherein the first interface is one interface of at least one interface of the first node;
the second node is configured to: determining the load sharing weight of other nodes to the first node through a second interface of the second node according to the load sharing weight of the second node to the first node through at least one interface of the second node; determining the hop count of other nodes to the first node through the second interface according to the hop count of the second node to the first node through at least one interface of the second node, and sending the load sharing weight and the hop count of other nodes to the first node through the second interface, wherein the second interface is one interface of at least one interface of the second node.
2. The system of claim 1,
the load sharing weight and the hop count from the other nodes to the first node through the first interface of the first node are carried in a first message;
or,
the load sharing weight from the other node to the first node through the first interface of the first node is carried in a second message, and the hop count from the other node to the first node through the first interface of the first node is carried in a third message.
3. The system of claim 2, wherein the first message further comprises at least one of: the type of the first message, the auxiliary identifier of the first message, the identifier of the first node, and the number of the first nodes; the auxiliary identifier of the first message is used for identifying whether the first message is a periodically sent message or a message sent when the first interface fails; the number of the first nodes is 1;
the second message further comprises at least one of: the type of the second message, the auxiliary identifier of the second message, the identifier of the first node, and the number of the first nodes; the auxiliary identifier of the second message is used for identifying whether the second message is a periodically sent message or a message sent when the first interface fails; the number of the first nodes is 1;
the third message further comprises at least one of: the type of the third message, the auxiliary identifier of the third message, the identifier of the first node, and the number of the first nodes; the auxiliary identifier of the third message is used for identifying whether the third message is a periodically sent message or a message sent when the first interface fails; the number of the first nodes is 1.
4. The system according to any one of claims 1 to 3,
the first node is specifically configured to: determining load sharing weights of other nodes to the first node through the first interface of the first node according to the current state information of at least one interface of the first node on the forwarding plane of the first node; and sending the load sharing weight and hop count of other nodes to the first node through the first interface.
5. The system according to any of claims 1-4, wherein the first node is specifically configured to:
periodically sending load sharing weights and hop counts of other nodes to the first node through the first interface;
or,
and when the first interface fails, sending the load sharing weight and hop count from other nodes to the first node through the first interface.
6. The system according to any of claims 1-5, wherein the second node is further configured to:
receiving a load sharing weight and a hop count of the second node to the first node over at least one interface of the second node.
7. The system according to any of claims 1-6, wherein the second node is specifically configured to:
if the second interface is an uplink interface or a horizontal interface, determining load sharing weights of other nodes to the first node through the second interface according to the load sharing weights of the second node to the first node through interfaces except for a horizontal split group in at least one interface of the second node; wherein, if the second interface is an uplink interface and there are other uplink interfaces in the second node, the horizontal split group includes: the second interface and the other uplink interfaces of the second node; if the second interface is an uplink interface and the second node does not have other uplink interfaces, the horizontal split group includes: the second interface; if the second interface is a horizontal interface and other horizontal interfaces exist in the second node, the horizontal split group includes: the second interface and the other horizontal interfaces of the second node; if the second interface is a horizontal interface and the second node does not have other horizontal interfaces, the horizontal split group includes: the second interface;
and if the second interface is a downlink interface, determining the load sharing weight from other nodes to the first node through the second interface according to the load sharing weight from the second node to the first node through at least one interface of the second node.
8. The system of claim 7, wherein the second node is specifically configured to:
if the second interface is an uplink interface or a horizontal interface, determining that the load sharing weight from the other node to the first node through the second interface is the sum of the load sharing weights from the second node to the first node through an interface corresponding to the minimum non-0 hop count except for the horizontal split group in at least one interface of the second node;
and if the second interface is a downlink interface, determining that the load sharing weight from other nodes to the first node through the second interface is the sum of the load sharing weights from the second node to the first node through an interface corresponding to the minimum non-0 hop count in at least one interface of the second node.
9. The system of claim 7, wherein the second node is specifically configured to:
if the second interface is an uplink interface or a horizontal interface, determining load sharing weights of other nodes to the first node through a second interface of the second node according to load sharing weights of the second node to the first node through an interface corresponding to the minimum non-0 hop count except for a horizontal split group in at least one interface of the second node, the number of interfaces corresponding to the minimum non-0 hop count except for the horizontal split group in at least one interface of the second node, the number of interfaces of the second node, and the number of normal interfaces of the second node;
and if the second interface is a downlink interface, determining the load sharing weight from other nodes to the first node through the second interface of the second node according to the load sharing weight from the second node to the first node through the interface corresponding to the minimum non-0 hop count in at least one interface of the second node, the number of interfaces of the second node, and the number of normal interfaces of the second node.
10. The system of claim 7, wherein the second node is specifically configured to:
if the second interface is an uplink interface or a horizontal interface, determining load sharing weights of other nodes to the first node through a second interface of the second node according to load sharing weights of the second node to the first node through an interface corresponding to the minimum non-0 hop count except for the horizontal split group in at least one interface of the second node, a quality evaluation value of an interface corresponding to the minimum non-0 hop count except for the horizontal split group in at least one interface of the second node, the number of interfaces of the second node, and the number of normal interfaces of the second node;
and if the second interface is a downlink interface, determining the load sharing weight from other nodes to the first node through the second interface of the second node according to the load sharing weight from the second node to the first node through the interface corresponding to the minimum non-0 hop count in the at least one interface of the second node, the quality evaluation value of the interface corresponding to the minimum non-0 hop count in the at least one interface of the second node, the number of interfaces of the second node, and the number of normal interfaces of the second node.
11. The system according to any of claims 1-10, wherein the second node is specifically configured to:
periodically sending load sharing weight and hop count of other nodes to the first node through the second interface;
or,
and when the second interface fails, sending the load sharing weight and hop count from other nodes to the first node through the second interface.
12. The system according to any one of claims 1 to 11,
the load sharing weight and the hop count from the other nodes to the first node through the second interface of the second node are carried in a fourth message;
or,
the load sharing weights from the other nodes to the first node through the second interface of the second node are carried in a fifth message, and the hop counts from the other nodes to the first node through the second interface of the second node are carried in a sixth message.
13. The system of claim 12, wherein the fourth message further comprises at least one of: the type of the fourth message, the auxiliary identifier of the fourth message, the identifier of the first leaf node in the plurality of leaf nodes, and the number of the plurality of leaf nodes; the auxiliary identifier of the fourth message is used for identifying whether the fourth message is a periodically sent message or a message sent when the second interface fails;
the fifth message further comprises at least one of: the type of the fifth message, the auxiliary identifier of the fifth message, the identifier of the first leaf node in the plurality of leaf nodes, and the number of the plurality of leaf nodes; the auxiliary identifier of the fifth message is used for identifying whether the fifth message is a periodically sent message or a message sent when the second interface fails;
the sixth message further comprises at least one of: the type of the sixth message, the auxiliary identifier of the sixth message, the identifier of the first leaf node in the plurality of leaf nodes, and the number of the plurality of leaf nodes; the auxiliary identifier of the sixth message is used for identifying whether the sixth message is a periodically sent message or a message sent when the second interface fails.
14. The system according to any of claims 7-10, wherein the second node is specifically configured to:
if the second interface is an uplink interface or a horizontal interface, determining the hop count of other nodes to the first node through the second interface according to the hop count of the second node to the first node through interfaces except the horizontal split group in at least one interface of the second node;
and if the second interface is a downlink interface, determining the hop count of other nodes to the first node through the second interface according to the hop count of the second node to the first node through at least one interface of the second node.
15. The system of claim 14, wherein the second node is specifically configured to:
if the second interface is an uplink interface or a horizontal interface, selecting the smallest non-0 hop count from the hop counts from the second node to the first node through the interfaces except the horizontal split group in at least one interface of the second node, and determining that the hop counts from other nodes to the first node through the second interface of the second node are the non-0 hop count plus 1;
if the second interface is a downlink interface, selecting the smallest non-0 hop count from the hop counts from the second node to the first node through at least one interface of the second node, and determining that the hop counts from other nodes to the first node through the second interface of the second node are the non-0 hop count plus 1.
16. The system of any one of claims 1-15,
the second node is specifically configured to: determining load sharing weights of other nodes to the first node through a second interface of the second node according to the load sharing weight of the second node to the first node through at least one interface of the second node on a forwarding plane of the second node; determining the hop count of other nodes to the first node through the second interface according to the hop count of the second node to the first node through at least one interface of the second node; and sending the load sharing weight and hop count of other nodes to the first node through the second interface.
17. The system according to any one of claims 1 to 16,
if the second node does not receive the load sharing weight and/or the hop count from the second node to the first node through the second interface within a preset time length, the load sharing weight and the hop count from the second node to the first node through the second interface are both 0;
or,
and if the second node does not receive the detection message transmitted through the second interface within the preset time length, the load sharing weight and the hop count from the second node to the first node through the second interface are both 0, and the detection message is used for detecting whether the second interface is abnormal or not.
18. A routing information transmission method, applied to a first node, wherein a switching network system comprises a plurality of leaf nodes and at least one spine node, and the first node is one of the plurality of leaf nodes; the method comprises the following steps:
determining load sharing weight from other nodes to the first node through the first interface of the first node according to the current state information of at least one interface of the first node;
and sending load sharing weight and hop count of other nodes to the first node through the first interface, wherein the first interface is one of at least one interface of the first node.
19. The method of claim 18,
the load sharing weight and the hop count from the other nodes to the first node through the first interface of the first node are carried in a first message;
or,
the load sharing weight from the other node to the first node through the first interface of the first node is carried in a second message, and the hop count from the other node to the first node through the first interface of the first node is carried in a third message.
20. The method of claim 19, wherein the first message further comprises at least one of: the type of the first message, the auxiliary identifier of the first message, the identifier of the first node, and the number of the first nodes; the auxiliary identifier of the first message is used for identifying whether the first message is a periodically sent message or a message sent when the first interface fails; the number of the first nodes is 1;
the second message further comprises at least one of: the type of the second message, the auxiliary identifier of the second message, the identifier of the first node, and the number of the first nodes; the auxiliary identifier of the second message is used for identifying whether the second message is a periodically sent message or a message sent when the first interface fails; the number of the first nodes is 1;
the third message further comprises at least one of: the type of the third message, the auxiliary identifier of the third message, the identifier of the first node, and the number of the first nodes; the auxiliary identifier of the third message is used for identifying whether the third message is a periodically sent message or a message sent when the first interface fails; the number of the first nodes is 1.
21. The method according to any of claims 18-20, wherein said determining load sharing weights of other nodes to said first node via said first interface of said first node according to current state information of at least one interface of said first node; sending load sharing weights and hop counts of other nodes to the first node through the first interface via the first interface, including:
determining load sharing weights of other nodes to the first node through the first interface of the first node according to the current state information of at least one interface of the first node on the forwarding plane of the first node; and sending the load sharing weight and hop count of other nodes to the first node through the first interface.
22. The method according to any of claims 18-21, wherein the current state information of at least one interface of the first node comprises at least one of the following: current bandwidth information corresponding to the at least one interface of the first node, a proportion of available interfaces in the at least one interface of the first node, a quality evaluation value corresponding to the at least one interface of the first node, and an average queue length corresponding to the at least one interface of the first node.
23. The method according to any of claims 18-22, wherein said sending load sharing weights and hop counts of other nodes to said first node over said first interface comprises:
periodically sending load sharing weights and hop counts of other nodes to the first node through the first interface;
or,
and when the first interface fails, sending the load sharing weight and hop count from other nodes to the first node through the first interface.
24. A routing information transmission method, wherein the method is applied to a second node, wherein a switching network system comprises a plurality of leaf nodes and at least one spine node, and the second node is a spine node in the at least one spine node; the method comprises the following steps:
determining the load sharing weight of other nodes to a first node through a second interface of a second node according to the load sharing weight of the second node to the first node through at least one interface of the second node;
determining the hop count of other nodes to the first node through the second interface according to the hop count of the second node to the first node through at least one interface of the second node;
and sending load sharing weight and hop count of other nodes to the first node through the second interface, wherein the second interface is one of at least one interface of the second node.
25. The method as claimed in claim 24, wherein before determining the load sharing weight of other nodes to the first node through the second interface of the second node according to the load sharing weight of the second node to the first node through at least one interface of the second node, further comprising:
receiving a load sharing weight and a hop count of the second node to the first node over at least one interface of the second node.
26. The method according to claim 24 or 25, wherein said determining the load sharing weight of other nodes to the first node through the second interface of the second node according to the load sharing weight of the second node to the first node through at least one interface of the second node comprises:
if the second interface is an uplink interface or a horizontal interface, determining load sharing weights of other nodes to the first node through the second interface according to the load sharing weights of the second node to the first node through interfaces except for a horizontal split group in at least one interface of the second node; wherein, if the second interface is an uplink interface and there are other uplink interfaces in the second node, the horizontal split group includes: the second interface and the other uplink interfaces of the second node; if the second interface is an uplink interface and the second node does not have other uplink interfaces, the horizontal split group includes: the second interface; if the second interface is a horizontal interface and other horizontal interfaces exist in the second node, the horizontal split group includes: the second interface and the other horizontal interfaces of the second node; if the second interface is a horizontal interface and the second node does not have other horizontal interfaces, the horizontal split group includes: the second interface;
and if the second interface is a downlink interface, determining the load sharing weight from other nodes to the first node through the second interface according to the load sharing weight from the second node to the first node through at least one interface of the second node.
27. The method of claim 26, wherein if the second interface is an uplink interface or a horizontal interface, determining the load sharing weight of other nodes to the first node through the second interface according to the load sharing weight of the second node to the first node through interfaces except for a horizontal split group in at least one interface of the second node, comprises:
if the second interface is an uplink interface or a horizontal interface, determining that the load sharing weight from the other node to the first node through the second interface is the sum of the load sharing weights from the second node to the first node through an interface corresponding to the minimum non-0 hop count except for the horizontal split group in at least one interface of the second node;
if the second interface is a downlink interface, determining the load sharing weight from other nodes to the first node through the second interface according to the load sharing weight from the second node to the first node through at least one interface of the second node, including:
and if the second interface is a downlink interface, determining that the load sharing weight from other nodes to the first node through the second interface is the sum of the load sharing weights from the second node to the first node through an interface corresponding to the minimum non-0 hop count in at least one interface of the second node.
28. The method of claim 26, wherein if the second interface is an uplink interface or a horizontal interface, determining the load sharing weight of other nodes to the first node through the second interface according to the load sharing weight of the second node to the first node through interfaces except for a horizontal split group in at least one interface of the second node, comprises:
if the second interface is an uplink interface or a horizontal interface, determining load sharing weights of other nodes to the first node through a second interface of the second node according to load sharing weights of the second node to the first node through an interface corresponding to the minimum non-0 hop count except for a horizontal split group in at least one interface of the second node, the number of interfaces corresponding to the minimum non-0 hop count except for the horizontal split group in at least one interface of the second node, the number of interfaces of the second node, and the number of normal interfaces of the second node;
if the second interface is a downlink interface, determining the load sharing weight from other nodes to the first node through the second interface according to the load sharing weight from the second node to the first node through at least one interface of the second node, including:
and if the second interface is a downlink interface, determining the load sharing weight from other nodes to the first node through the second interface of the second node according to the load sharing weight from the second node to the first node through the interface corresponding to the minimum non-0 hop count in at least one interface of the second node, the number of interfaces of the second node, and the number of normal interfaces of the second node.
29. The method of claim 26, wherein if the second interface is an uplink interface or a horizontal interface, determining the load sharing weight of other nodes to the first node through the second interface according to the load sharing weight of the second node to the first node through interfaces except for a horizontal split group in at least one interface of the second node, comprises:
if the second interface is an uplink interface or a horizontal interface, determining load sharing weights of other nodes to the first node through a second interface of the second node according to load sharing weights of the second node to the first node through an interface corresponding to the minimum non-0 hop count except for the horizontal split group in at least one interface of the second node, a quality evaluation value of an interface corresponding to the minimum non-0 hop count except for the horizontal split group in at least one interface of the second node, the number of interfaces of the second node, and the number of normal interfaces of the second node;
if the second interface is a downlink interface, determining the load sharing weight from other nodes to the first node through the second interface according to the load sharing weight from the second node to the first node through at least one interface of the second node, including:
and if the second interface is a downlink interface, determining the load sharing weight from other nodes to the first node through the second interface of the second node according to the load sharing weight from the second node to the first node through the interface corresponding to the minimum non-0 hop count in the at least one interface of the second node, the quality evaluation value of the interface corresponding to the minimum non-0 hop count in the at least one interface of the second node, the number of interfaces of the second node, and the number of normal interfaces of the second node.
30. The method according to any of claims 24-29, wherein said sending load sharing weights and hop counts of other nodes to said first node over said second interface comprises:
periodically sending load sharing weight and hop count of other nodes to the first node through the second interface;
or,
and when the second interface fails, sending the load sharing weight and hop count from other nodes to the first node through the second interface.
31. The method of any one of claims 24-30,
the load sharing weight and the hop count from the other nodes to the first node through the second interface of the second node are carried in a fourth message;
or,
the load sharing weights from the other nodes to the first node through the second interface of the second node are carried in a fifth message, and the hop counts from the other nodes to the first node through the second interface of the second node are carried in a sixth message.
32. The method of claim 31, wherein the fourth message further comprises at least one of: the type of the fourth message, the auxiliary identifier of the fourth message, the identifier of the first leaf node in the plurality of leaf nodes, and the number of the plurality of leaf nodes; the auxiliary identifier of the fourth message is used for identifying whether the fourth message is a periodically sent message or a message sent when the second interface fails;
the fifth message further comprises at least one of: the type of the fifth message, the auxiliary identifier of the fifth message, the identifier of the first leaf node in the plurality of leaf nodes, and the number of the plurality of leaf nodes; the auxiliary identifier of the fifth message is used for identifying whether the fifth message is a periodically sent message or a message sent when the second interface fails;
the sixth message further comprises at least one of: the type of the sixth message, the auxiliary identifier of the sixth message, the identifier of the first leaf node in the plurality of leaf nodes, and the number of the plurality of leaf nodes; the auxiliary identifier of the sixth message is used for identifying whether the sixth message is a periodically sent message or a message sent when the second interface fails.
33. The method according to any of claims 26-29, wherein said determining the number of hops of other nodes to said first node over said second interface based on the number of hops of said second node to said first node over at least one interface of said second node comprises:
if the second interface is an uplink interface or a horizontal interface, determining the hop count of other nodes to the first node through the second interface according to the hop count of the second node to the first node through interfaces except the horizontal split group in at least one interface of the second node;
and if the second interface is a downlink interface, determining the hop count of other nodes to the first node through the second interface according to the hop count of the second node to the first node through at least one interface of the second node.
34. The method of claim 33, wherein if the second interface is an uplink interface or a horizontal interface, determining the hop count of other nodes to the first node through the second interface according to the hop count of the second node to the first node through an interface other than the horizontal split group in at least one interface of the second node, comprises:
if the second interface is an uplink interface or a horizontal interface, selecting the smallest non-0 hop count from the hop counts from the second node to the first node through the interfaces except the horizontal split group in at least one interface of the second node, and determining that the hop counts from other nodes to the first node through the second interface of the second node are the non-0 hop count plus 1;
if the second interface is a downlink interface, determining the hop count of other nodes to the first node through the second interface according to the hop count of the second node to the first node through at least one interface of the second node, including:
if the second interface is a downlink interface, selecting the smallest non-0 hop count from the hop counts from the second node to the first node through at least one interface of the second node, and determining that the hop counts from other nodes to the first node through the second interface are the non-0 hop count plus 1.
35. The method according to any one of claims 24-34, wherein the determining load sharing weights of other nodes to the first node through the second interface of the second node according to the load sharing weights of the second node to the first node through at least one interface of the second node, the determining hop counts of other nodes to the first node through the second interface according to the hop counts of the second node to the first node through at least one interface of the second node, and the sending load sharing weights and hop counts of other nodes to the first node through the second interface, where the second interface is one of at least one interface of the second node, comprise:
determining load sharing weights of other nodes to the first node through a second interface of the second node according to the load sharing weight of the second node to the first node through at least one interface of the second node on a forwarding plane of the second node; determining the hop count of other nodes to the first node through the second interface according to the hop count of the second node to the first node through at least one interface of the second node; and sending the load sharing weight and hop count of other nodes to the first node through the second interface.
36. The method of any one of claims 24-35,
if the second node does not receive the load sharing weight and/or the hop count from the second node to the first node through the second interface within a preset time length, the load sharing weight and the hop count from the second node to the first node through the second interface are both 0;
or,
and if the second node does not receive the detection message transmitted through the second interface within the preset time length, the load sharing weight and the hop count from the second node to the first node through the second interface are both 0, and the detection message is used for detecting whether the second interface is abnormal or not.
37. A node, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 18-36.
38. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 18-36.
CN202010246879.3A 2020-03-31 2020-03-31 Routing information transmission method, device, system and storage medium Pending CN113472675A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010246879.3A CN113472675A (en) 2020-03-31 2020-03-31 Routing information transmission method, device, system and storage medium
PCT/CN2021/082969 WO2021197196A1 (en) 2020-03-31 2021-03-25 Route information transmission method and apparatus, and system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010246879.3A CN113472675A (en) 2020-03-31 2020-03-31 Routing information transmission method, device, system and storage medium

Publications (1)

Publication Number Publication Date
CN113472675A true CN113472675A (en) 2021-10-01

Family

ID=77866151

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010246879.3A Pending CN113472675A (en) 2020-03-31 2020-03-31 Routing information transmission method, device, system and storage medium

Country Status (2)

Country Link
CN (1) CN113472675A (en)
WO (1) WO2021197196A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115412499B (en) * 2022-08-24 2024-03-22 新华三人工智能科技有限公司 Flow transmission method, device and controller

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102137018A (en) * 2011-03-21 2011-07-27 华为技术有限公司 Load sharing method and device thereof
US9548926B2 (en) * 2013-01-11 2017-01-17 Brocade Communications Systems, Inc. Multicast traffic load balancing over virtual link aggregation
CN104168212B (en) * 2014-08-08 2017-10-17 北京华为数字技术有限公司 The method and apparatus for sending message
US10454830B2 (en) * 2016-05-05 2019-10-22 City University Of Hong Kong System and method for load balancing in a data network

Also Published As

Publication number Publication date
WO2021197196A1 (en) 2021-10-07

Similar Documents

Publication Publication Date Title
US8897130B2 (en) Network traffic management
US11824771B2 (en) Packet processing method, related device, and computer storage medium
US9485181B2 (en) Dynamic bandwidth adjustment in packet transport network
US20070242607A1 (en) Method and system for controlling distribution of network topology information
US11190437B2 (en) Methods, apparatus and computer programs for allocating traffic in a telecommunications network
EP3973676A1 (en) Application workload routing and interworking for network defined edge routing
US20110078237A1 (en) Server, network device, client, and network system
CN112825512A (en) Load balancing method and device
CN113472675A (en) Routing information transmission method, device, system and storage medium
US11102148B2 (en) Method to generate stable and reliable 6TiSCH routing paths
US20230379266A1 (en) Path determination method, network element, and computer-readable storage medium
CN108496391B (en) Routing for wireless mesh communication networks
Khan et al. Mitigation of packet loss using data rate adaptation scheme in MANETs
US11240164B2 (en) Method for obtaining path information of data packet and device
CN115865814A (en) Network device, system and method for cycle-based load balancing
US11765071B2 (en) Method, network controller and computer program product for facilitating a flow from a sending end to a receiving end by multi-path transmission
US11985534B2 (en) Application workload routing and interworking for network defined edge routing
Gurung et al. A survey of multipath routing schemes of wireless mesh networks
US20210297891A1 (en) Application workload routing and interworking for network defined edge routing
Abramkina Analysis of the effectiveness of the eigrp routing protocol
CN115150329A (en) Method, device, storage medium and system for sending message and generating route
Akon et al. A new routing table update and ant migration scheme for ant based control in telecommunication networks
CN117097633A (en) Message transmission method, transmission control method, device and system
CN116055376A (en) Route diffusion method and device
CN114760293A (en) Internet of things system and standby channel using method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination