CN118250215A - Method and equipment for planning paths between logic blocks of network topology structure


Info

Publication number
CN118250215A
Authority
CN
China
Prior art keywords
logic blocks
path
Prior art date
Legal status
Pending
Application number
CN202410184216.1A
Other languages
Chinese (zh)
Inventor
周尚彦
张毅超
熊一梁
Current Assignee
Hangzhou Huanfang Technology Co ltd
Hangzhou Magic Square Artificial Intelligence Foundation Research Co ltd
Shanghai Jimi Technology Co ltd
Ningbo Jimi Information Technology Co ltd
Application filed by Hangzhou Huanfang Technology Co ltd, Hangzhou Magic Square Artificial Intelligence Foundation Research Co ltd, Shanghai Jimi Technology Co ltd, and Ningbo Jimi Information Technology Co ltd
Priority to CN202410184216.1A
Publication of CN118250215A
Legal status: Pending


Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to a method and equipment for planning paths between logic blocks of a network topology. Targeting InfiniBand (IB) networks, it plans paths between logic blocks by enumerating pairs of source and target logic blocks and tracing back block by block from the target logic block toward the source, thereby solving the problem of selecting and planning inter-block paths after an overall nonstandard network topology physical subnet has been split into fat-tree topology logic blocks and tree-structured logic blocks. The fat-tree logic blocks and the other tree-structured logic blocks in the nonstandard physical subnet can then each exploit the characteristics of their own topology, reducing network congestion overall, improving total throughput, optimizing the network configuration, achieving higher bandwidth, and markedly improving the overall utilization efficiency of the cluster.

Description

Method and equipment for planning paths between logic blocks of network topology structure
Technical Field
Embodiments of the invention relate to the field of network communication, and in particular to a method and equipment for planning paths between logic blocks of a network topology structure.
Background
In practice, the fat-tree topology is used for High Performance Computing (HPC) clusters and for clusters based on InfiniBand (IB) technology, arranging the network into a hierarchical, multi-root tree of switches with end nodes residing at leaf switches. The switches are connected in tree form; different layers have different numbers of switches, and the bandwidth of the cables between layers changes proportionally, but the total bandwidth of each layer is equal. In such a topology, if a suitable routing scheme divides bandwidth as evenly as possible among paths, congestion can be reduced to a large extent.
By load-balancing the downlinks of a fat-tree structure, the multiplexing of links can be reduced. In practice, however, many network topologies are not a standard fat-tree structure: inside a large cluster used for deep learning model training there are, besides computing machines and storage machines, also test machines, development machines, machines for managing the cluster, and so on. These require comparatively much lower network bandwidth and are typically attached to the core network through lower-bandwidth network cards or other network topologies. Prior-art fat-tree path planning cannot cope well with this situation: it cannot compute routes according to the application type of a machine, and may assign two or more network cards to the same link, producing network congestion. Splitting the overall nonstandard network topology physical subnet into fat-tree topology logic blocks and tree-structured logic blocks, each planned separately, can reduce both the occurrence and the scope of congestion; but the prior art offers no scheme for planning the paths between these logic blocks so as to minimize congestion of data transmitted between them, and this is the problem to be solved.
Disclosure of Invention
In order to overcome the defects, the invention aims to provide a method and equipment for planning paths among logic blocks of a network topology structure.
The invention achieves this aim through the following technical scheme: a method for planning paths between logic blocks of a network topology structure, directed to path planning among the plurality of logic blocks into which the whole of a nonstandard fat-tree topology IB subnet has been divided, wherein the forwarding path planning between the logic blocks comprises the following steps:
(1) Enumerating any two logic blocks, namely a source logic block and a target logic block;
(2) Judging whether a path plan already exists between the source logic block and the target logic block; if no plan exists, proceeding to step (3) to begin inter-block path planning; if a plan exists, proceeding to step (5) and enumerating the next two logic blocks;
(3) Solving the shortest path from the source logic block to the target logic block;
(4) Tracing back upward from the target logic block, and calculating a forwarding path rule between each pair of adjacent blocks until the source logic block is reached;
(5) Selecting two logic blocks from the remaining logic blocks for inter-block path planning, and repeating from step (1) until path planning between all logic blocks is complete.
Preferably, subnet route planning is implemented by maintaining a routing forwarding table in each switch. The routing forwarding table of each switch contains an entry for every node, recording the address of the target node and the forwarding port corresponding to the next hop; constructing the route plan of the whole subnet means constructing the routing forwarding table of every switch in the subnet.
Preferably, each time the network topology is updated, route planning for the whole network is performed once and all switch routing tables in the subnet are updated.
Preferably, before step (1), the whole subnet is divided into a plurality of logic blocks: as much of it as possible is divided into one or more fat-tree topology logic blocks, and the remainder into one or more tree-structured logic blocks. The division of the logic blocks is purely logical; it creates no physical isolation and requires no routers at the top level.
Preferably, after the division of the logic blocks is completed, path planning inside each logic block is carried out, obtaining a path plan from every node in a logic block to any other node in the same block.
Preferably, in step (4), tracing back from the target logic block and calculating a forwarding path rule between each pair of adjacent blocks until the source logic block is reached comprises the following specific method:
(401) Enumerating any node in the target logic block as the target node;
(402) Allocating a connection line from the previous logic block to the target logic block according to the shortest path determined between the logic blocks;
(403) Taking the switch port of the previous logic block at the end of the allocated connection line as the egress port;
(404) Enumerating each switch in the previous logic block and determining its intra-block path to the egress port;
(405) Copying the switch routing forwarding entry and replacing the egress port with the target node, thereby obtaining a path plan from any switch in the previous logic block to the target node;
(406) Tracing back further in the same way to obtain the path plan from each earlier logic block to the target node.
Preferably, more than one connection line is provided between logic blocks; in step (402), connection lines from the previous logic block to the target logic block may be allocated sequentially, or weights may be assigned to the connection lines for load balancing.
Preferably, intra-block path planning and inter-block path planning are combined to obtain the whole path from any source node to any destination node; a routing table is constructed based on the whole path and issued to the switches in the subnet.
An electronic device for implementing a method as claimed in any preceding claim, comprising a memory, a processor, a bus, a network interface, and other peripheral interfaces;
the memory, the processor, the network interface, and the other peripheral interfaces are connected through a communication bus, and the processor implements the steps of any of the above methods when executing a program.
A computer-readable storage medium having stored thereon one or more computer programs which, when executed by an electronic device comprising a plurality of application programs, implement the steps of any of the above methods.
The invention has the following beneficial effects: targeting IB networks, the invention plans paths between logic blocks by enumerating pairs of source and target logic blocks and tracing back block by block from the target logic block toward the source, solving the problem of selecting and planning inter-block paths after an overall nonstandard network topology physical subnet has been split into fat-tree topology logic blocks and tree-structured logic blocks. The fat-tree logic blocks and the other tree-structured logic blocks in the nonstandard physical subnet can then each exploit the characteristics of their own topology, reducing network congestion overall, improving total throughput, optimizing the network configuration, achieving higher bandwidth, and markedly improving the overall utilization efficiency of the cluster.
Drawings
Fig. 1 is an entity diagram according to a first embodiment of the present disclosure.
Fig. 2 is a schematic diagram of a standard fat tree topology.
Fig. 3 is a schematic diagram of a fat tree topology in a network environment according to an embodiment of the invention.
Fig. 4 is a schematic diagram of logical partitioning in a subnet according to an embodiment of the invention.
Fig. 5 is a forwarding path planning method between neighboring blocks according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of forwarding path planning between neighboring blocks according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
The following description of the invention uses InfiniBand (IB) networks as an example of a high performance network. It will be clear to those skilled in the art that other types of high performance networks may be used without limitation, and likewise that other types of fabric topologies may be used without limitation.
Fig. 1 is an entity diagram of a first embodiment of the disclosure, which provides a method and apparatus for path planning between logic blocks of a network topology, used for path planning among the plurality of logic blocks into which an entire nonstandard fat-tree topology IB subnet has been divided. The method comprises the following steps:
(1) Any two logic blocks are enumerated, namely a source logic block and a target logic block.
(2) It is judged whether an inter-block path plan already exists between the source logic block and the target logic block; if not, step (3) begins the inter-block path planning. If a path has already been planned, step (5) is entered and the next two logic blocks are enumerated.
(3) The shortest path from the source logical block to the target logical block is found.
(4) Tracing back upward from the target logic block, a forwarding path rule is calculated between each pair of adjacent blocks until the source logic block is reached.
(5) Two logic blocks are selected from the remaining logic blocks for inter-block path planning, and the procedure repeats from step (1) until path planning between all logic blocks is complete.
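The five steps above can be sketched as a minimal driver loop. This is an illustrative Python sketch, not the patent's implementation; `shortest_path` and `trace_back` are placeholder callables standing in for steps (3) and (4):

```python
from itertools import permutations

def plan_inter_block_paths(blocks, shortest_path, trace_back):
    """Enumerate every ordered (source, target) block pair and plan a
    path once per pair. Ordered pairs are used because, as described
    below, a plan from A to E and a plan from E to A are independent."""
    planned = {}
    # Steps (1)/(5): enumerate every ordered pair of distinct blocks.
    for src, dst in permutations(blocks, 2):
        # Step (2): skip pairs that already have a plan.
        if (src, dst) in planned:
            continue
        # Step (3): shortest block-level path from source to target.
        path = shortest_path(src, dst)
        # Step (4): trace back from the target block, fixing forwarding
        # rules between each pair of adjacent blocks on the path.
        planned[(src, dst)] = trace_back(path)
    return planned
```

With three blocks this produces six independent plans, one per ordered pair.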
InfiniBand (IB) is an open-standard lossless network technology developed by the InfiniBand Trade Association. The technology is based on a serial point-to-point full-duplex interconnect architecture that provides high throughput and low latency, and is particularly suited to High Performance Computing (HPC) clusters and data centers. Within an IB subnet, host nodes are connected using switches and point-to-point links, and a Subnet Manager (SM) resides on a designated subnet device. The subnet manager is responsible for configuring, activating, and maintaining the IB subnet, and may be responsible for performing routing table calculations in the IB fabric.
The fat-tree topology (fat-tree) is a network topology commonly used today for large AI clusters. As shown in fig. 2, it employs multiple switches connected in a tree; different layers have different numbers of switches, and the bandwidth of the cables between layers changes proportionally, but the total bandwidth of each layer is equal. This is a non-blocking network topology that can maintain full bisection bandwidth and thereby avoid congestion. In practice, to realize larger bandwidth, a fat-tree topology usually has multiple wires between two nodes; a wire can be a physical network cable, several of which are combined to act as one large cable, and a large switch is likewise decomposed into multiple small switches.
In practice, however, many network topologies are not a standard fat-tree structure. In a large cluster performing deep learning model training there may be, besides computing machines and storage machines, also test machines, development machines, machines for managing the cluster, and so on. These require comparatively much lower network bandwidth and typically use a lower-bandwidth network card, or some other means, to access the core network. IB does not support such a nonstandard topology well: fat-tree topology path planning cannot handle it and cannot compute routes according to the application type of a machine. If the total bandwidth of a machine's network cards exceeds the bisection bandwidth of the fat tree, the routing table produced by the fat-tree routing algorithm often randomly assigns two network cards to the same link; if both happen to be heavily loaded network cards, congestion results.
The lower layer of the IB network architecture is called a subnet. As shown in fig. 3, a subnet includes switches and a series of host devices connected point-to-point at the edge of the subnet, including but not limited to computing machines, storage machines, test computers, development computers, computers that manage the cluster, and so on. A switch includes multiple switch ports, and a host device may include one or more network cards. A switch port is connected by a wire, which may be a physical network cable, to another switch port or to the network card of a host device. For ease of understanding, during packet transmission, forwarding, and reception, both switches and network cards can be regarded as nodes of the subnet; a network card on a device is usually attached at the edge of the subnet, and for ease of description is also called an end node.
The IB subnetwork may also include at least one Subnetwork Manager (SM) responsible for initializing and starting up the network, including the configuration of all switches, routers, and Host Channel Adapters (HCAs) within the subnetwork. Host devices and switches in the subnetwork may be addressed using a specified Local Identifier (LID).
Subnet path planning is accomplished by maintaining routing forwarding tables in the switches. The routing forwarding table of each switch contains an entry for every node, recording the address of the target node and the forwarding port corresponding to the next hop. Constructing the path plan of the whole subnet means constructing the routing forwarding table of every switch in the subnet. Because each entry records only the target node's address and its corresponding forwarding port, only the target node is known when route planning is performed; path planning therefore usually traces back upward from the target node.
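The forwarding-table structure just described can be modeled as a simple mapping from a destination address (for example, a LID) to the egress port of the next hop. The following Python sketch is purely illustrative; all names and data shapes are assumptions, not part of the disclosure:

```python
def build_forwarding_table(entries):
    """Build a per-switch routing forwarding table from a list of
    (target_lid, egress_port) pairs, one entry per node in the subnet."""
    table = {}
    for target_lid, port in entries:
        table[target_lid] = port
    return table

def next_hop_port(table, target_lid):
    """Look up the forwarding port for a destination, as a switch
    does when it forwards a packet toward its next hop."""
    return table[target_lid]
```

Constructing the route plan of the whole subnet then amounts to building one such table per switch.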
First, the whole subnet is divided into a plurality of logic blocks: as much of it as possible is divided into one or more fat-tree topology logic blocks, and the remaining portion into one or more tree-structured logic blocks. Load balancing within the fat tree lets the topological characteristics of the fat-tree network be fully exploited, and dividing the subnet into as many fat-tree topologies as possible maximizes this effect. Adopting different path-planning strategies in the different types of logic blocks optimizes the topological characteristics of the subnet as a whole.
In this division, the parts conforming to the fat-tree network topology and the other parts of the network topology are separated from one another. Note that the division of logic blocks is purely logical: it creates no physical isolation and requires no routers at the top level.
After the logic blocks are divided, path planning inside each logic block is carried out, obtaining a path plan from every node in a logic block to any other node in the same block.
Path planning inside each logic block is realized separately, choosing the path-planning strategy best suited to each type of logic block, to decide which connection line a packet takes from any node to a destination node within the block. A connection line can be a physical network cable; multiple connection lines may exist between two adjacent switches, with the two ends of each line connected to two ports.
Once path planning among all nodes in a logic block has been obtained, the routing forwarding table of every switch in the block contains a forwarding-port entry for each node of the block as a target node.
Inside a fat-tree topology logic block, path planning is realized as a load-balancing improvement on the classical fat-tree algorithm, specifically by giving each end node a weight according to its bandwidth requirement. The fat tree is traversed upward from the end node; the downlink ports of the switch above are sorted by accumulated weight, and the downlink port with the smallest weight is selected. The weight of the target end node is then added to the assigned downlink port. Port allocation continues back up the tree until the switch at the top of the fat tree has been allocated, after which the next node is processed.
Connection ports are given different weights according to the devices behind the end nodes and their differing bandwidth requirements. Devices that participate in model training and need high bandwidth, such as computing machines and storage machines, may be given a high weight, e.g. 2. Devices with ordinary bandwidth requirements, such as test machines, may be given a medium weight, e.g. 1. Development computers, cluster-management computers, and other devices with little bandwidth requirement are given a low weight, e.g. 0.
To better balance load between logic blocks, when computing a block's internal routing forwarding table, a port connected by a line to another logic block must treat the entire far end of that line as the network card of a device attached at the edge of the block's network. That is, when weights are assigned, the bandwidth required by connections to other logic blocks must be accounted for; for ease of implementation, the far end of such a connection line can be regarded as a single network card.
Each end node is connected to a switch port by a wire. Because of the IB network rule restrictions, only the target node is known when route planning is performed, and thus path planning typically traverses the tree up from the target node.
When downlink port weights are equal, ports may be selected in ascending order of Local Identifier (LID), as in the classical fat-tree routing algorithm: downlink ports of equal weight are sorted within the logic block in ascending LID order and selected in turn.
This approach solves the problems of unbalanced link load and contention for a single uplink in the fat-tree structure, and makes full use of the characteristics of the network topology.
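One allocation step of the weighted balancing described above can be sketched as follows. This is an illustrative Python sketch under an assumed data shape (a map from downlink-port LID to accumulated weight), not the patent's code: among a switch's downlink ports, the least-loaded one is chosen, ties are broken by ascending LID, and the end node's weight is then charged to the chosen port:

```python
def assign_downlink_port(ports, node_weight):
    """Pick the downlink port with the smallest accumulated weight
    (ties broken by ascending port LID) and add the target end node's
    weight to it, as in the traversal described above.

    `ports` maps a downlink port's LID to its accumulated weight."""
    lid = min(ports, key=lambda p: (ports[p], p))
    ports[lid] += node_weight
    return lid
```

Repeating this step at each switch level from the end node up to the top of the fat tree yields the per-node port allocation.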
For the tree-structured logic blocks other than the fat-tree topology logic blocks in the subnet, shortest-path planning is adopted. Inside these non-fat-tree blocks, switches and end nodes are connected in a tree structure, and the routing rule of a tree topology normally uses the shortest path, because exactly one shortest path exists between any two nodes of a tree. The shortest path is determined and used directly as the path from the source node to the target node. Implementations of shortest-path planning for tree-structured network topologies belong to the prior art for those skilled in the art and are not detailed here.
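Since a tree contains exactly one simple path between any two nodes, a breadth-first search suffices to recover it. The following minimal Python sketch is illustrative only (adjacency-list shape and names are assumptions):

```python
from collections import deque

def tree_shortest_path(adj, src, dst):
    """Return the unique simple path from src to dst in a tree given
    as an adjacency list {node: [neighbours]}, found by BFS."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            # Walk parent pointers back to src to reconstruct the path.
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nb in adj[node]:
            if nb not in parent:
                parent[nb] = node
                queue.append(nb)
    return None  # dst unreachable (not expected in a connected tree)
```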
Multiple connection lines run between adjacent switches, with the two ends of each line connected to ports of the two switches; a connection line may be a physical network cable. Downlink ports of the switch above are allocated to end nodes in turn according to an allocation rule: either in ascending order of weight, to realize load balancing, or in ascending order of Local Identifier (LID), as in the classical routing algorithm.
After the path plans and forwarding rules inside the logic blocks have been determined, path planning between the logic blocks must be realized to obtain the forwarding path plan between any two logic blocks. The method of the embodiment shown in fig. 1 proceeds as follows:
(1) Any two logic blocks are enumerated, namely a source logic block and a target logic block.
(2) It is judged whether an inter-block path plan already exists between the source logic block and the target logic block; if not, step (3) begins the inter-block path planning. If a path has already been planned, the method goes to (5) and continues to enumerate the next two logic blocks.
In this embodiment, inter-block path planning produces the forwarding route rules between any node in the source logic block and any node in the target logic block; the source node belongs to the source logic block and the target node belongs to the target logic block. As shown in fig. 4, the source node x is a node in logic block A and the target node y is a node in logic block E; suppose the path from block A, where source node x resides, to block E, where target node y resides, must be planned. Because of IB network rule restrictions, only the target node is known when route planning is performed, so planning typically starts from the target logic block E in which target node y resides.
In one embodiment, it is first determined whether a path plan already exists from logic block A, where source node x resides, to logic block E, where target node y resides. Note that since IB rules dictate that a routing forwarding table records only the destination node and the forwarding port corresponding to the next hop, swapping the destination and source nodes yields a new, independent path plan: the plan from source node x in block A to target node y in block E and the plan from source node y in block E to target node x in block A are two different path plans.
(3) The shortest path from the source logical block to the target logical block is found.
In this embodiment, the shortest path from logic block A, where source node x resides, to logic block E, where target node y resides, must be found.
In one embodiment, for a subnet composed of multiple logic blocks, each logic block as a whole can be regarded as a node, so the subnet can be viewed as a mesh topology of multiple nodes, as shown in fig. 4. Although the logic blocks are connected by several connection lines, for ease of describing shortest-path planning these are simplified to a single edge. The shortest path from logic block A to logic block E, determined by a shortest-path algorithm, forwards through logic block C, i.e. the route A-C-E.
In one embodiment, many implementations of shortest-path algorithms on a mesh topology are available to those skilled in the art, including but not limited to the Floyd-Warshall algorithm.
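Treating each logic block as a single node, a block-level shortest path such as A-C-E can be read off a next-hop table computed with the Floyd-Warshall algorithm. The sketch below is illustrative only; block names and the unit edge weight are assumptions:

```python
def floyd_warshall_next(blocks, edges):
    """All-pairs shortest paths on the block-level mesh, where multiple
    physical links between two blocks collapse into one unit-weight
    edge. Returns a next-hop table nxt[i][j]."""
    INF = float("inf")
    dist = {a: {b: (0 if a == b else INF) for b in blocks} for a in blocks}
    nxt = {a: {b: None for b in blocks} for a in blocks}
    for a, b in edges:  # undirected block adjacency
        dist[a][b] = dist[b][a] = 1
        nxt[a][b], nxt[b][a] = b, a
    for k in blocks:
        for i in blocks:
            for j in blocks:
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]
    return nxt

def block_route(nxt, src, dst):
    """Expand the next-hop table into the full block-level path."""
    path = [src]
    while path[-1] != dst:
        path.append(nxt[path[-1]][dst])
    return path
```

On a mesh where A connects to B and C, B to C, and C to E, the route from A to E comes out as A-C-E, matching the example above.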
(4) Tracing back upward from the target logic block, a forwarding path rule is calculated between each pair of adjacent blocks until the source logic block is reached.
In one embodiment, tracing back from target node y, the forwarding path rule from the preceding logic block C to target node y is obtained first, and then the forwarding path rule from logic block A to logic block C. Fig. 5 illustrates, in one embodiment, the path-planning procedure tracing from the target logic block E up to logic block C; the specific method comprises:
(401) Any node in the target logic block is enumerated as the target node.
In one embodiment, the target logic block E contains the target node y, and the next forwarding point toward y must be found for every switch path in logic block C.
(402) A connection line from the previous logic block to the target logic block is allocated according to the shortest path determined between the logic blocks.
In one embodiment, more than one connection line exists between the logic blocks, so to fix the path from the previous logic block to the target logic block along the shortest path, one connection line between them must be selected. Once the target logic block's internal path plan is available, the path from any switch of the target logic block to the target node follows from the intra-block plan.
In one embodiment, as shown in fig. 6, logic block E is the target logic block and y in block E is the target node; the target node y accesses the subnet through switch E0. The shortest path determined between the blocks is A-C-E, so a connection line from logic block C to the target logic block E must be allocated. Multiple connection lines may run from the previous logic block C to the target block E, attached to different ports of different switches or to different ports of the same switch. As shown in fig. 6, block C may connect to block E through line 701, from port C101 of switch C1 to port E101 of switch E1; through line 702, from port C201 of switch C2 to port E102 of switch E1; through line 703, from port C102 of switch C1 to port E201 of switch E2; through line 704, from port C202 of switch C2 to port E202 of switch E2; or through both lines 705 and 706.
The allocation method may assign lines in LID order, or assign weights to the lines for load balancing. In this embodiment, one of the connection lines 701-704 between logic block C and logic block E is determined as the path to target node y, allocated either in order of the lines' LIDs or with load balancing across the lines.
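The two allocation options just described, sequential by ascending LID and least-loaded under weights, can be sketched as follows. Illustrative Python; the accumulated-load map is an assumption, and the numeric line IDs 701-704 follow the figure:

```python
def assign_connection(connections, load=None):
    """Pick one of several physical links between two adjacent logic
    blocks, per step (402): the lowest LID when allocating sequentially,
    or the least-loaded link (ties by LID) when load balancing."""
    if load is None:
        return min(connections)  # sequential: ascending LID order
    return min(connections, key=lambda c: (load[c], c))
```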
In one embodiment, the intra-block path planning between every pair of switches in logic block E already provides the path from switch E0, and from switches E1, E2 or any other switch in block E, to the target node y; once the connection line to logic block C has been determined, the remainder of the path to y follows.
(403) And according to the distributed connection line, the switch port of the previous logic block corresponding to the connection line is regarded as an outlet port.
In one embodiment, the egress port of logical block C is determined from the allocated one of connections 701-704 between logical block C and logical block E. For example, if connection 701 is allocated, port C101 of switch C1 in logical block C is the egress port of the path within logical block C.
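The egress-port lookup of step (403) can be sketched as a simple table keyed by connection number; the switch and port names follow fig. 6, but the mapping itself is an illustrative assumption:

```python
# Hypothetical mapping from an allocated inter-block connection to the
# (switch, port) pair it leaves logical block C from, mirroring fig. 6.
EGRESS_PORT = {
    701: ("C1", "C101"),  # C1:C101 -> E1:E101
    702: ("C2", "C201"),  # C2:C201 -> E1:E102
    703: ("C1", "C102"),  # C1:C102 -> E2:E201
    704: ("C2", "C202"),  # C2:C202 -> E2:E202
}

def egress_of(connection_id):
    """Return the (switch, port) in the previous block for a connection."""
    return EGRESS_PORT[connection_id]
```

With connection 701 allocated, `egress_of(701)` yields switch C1 and port C101, matching the example in the text.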
(404) Enumerate each switch in the previous logical block and determine the intra-block path to the egress port.
In one embodiment, the path from any switch to the egress port can be obtained by reusing the path planning inside the logical block.
(405) Rewrite the switch routing forwarding entries, replacing the egress port with the target node, so that the path from any switch in the previous logical block to the target node is obtained.
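Step (405) can be sketched as copying each switch's intra-block route toward the egress switch and re-keying it to the target node; all route structures and names here are hypothetical:

```python
# Sketch of step (405): packets destined for the target node should follow
# the same ports as packets destined for the egress switch, so each entry
# toward the egress switch is duplicated under the target node's address.
def retarget(intra_routes, egress_switch, target):
    """intra_routes maps switch -> {destination: next-hop port}.

    Returns new forwarding entries keyed by the target node.
    """
    table = {}
    for sw, routes in intra_routes.items():
        if egress_switch in routes:
            table[sw] = {target: routes[egress_switch]}
    return table

# Hypothetical intra-block routes in block C toward egress switch C1:
intra = {"C2": {"C1": "p1"}, "C3": {"C1": "p7"}}
fwd = retarget(intra, egress_switch="C1", target="y")
```

After retargeting, every switch in the previous block has a forwarding entry for the target node y that reuses its existing intra-block port choice.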
(406) Trace back further in the same way to obtain the path planning from each earlier logical block to the target node.
In one embodiment, tracing back from the target logical block in this way yields the path planning from any node in the source logical block to the target node.
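The backward trace over the block-level shortest path can be sketched as follows; the path A-C-E mirrors fig. 6, and the function is an illustrative assumption rather than the patent's implementation:

```python
# Minimal sketch of the trace in steps (401)-(406): walk the inter-block
# shortest path from the target block back toward the source block, yielding
# at each hop the (previous block, current block) pair whose connection and
# forwarding entries must be processed next.
def trace_back(block_path):
    """block_path lists blocks from source to target, e.g. ["A", "C", "E"]."""
    hops = []
    for i in range(len(block_path) - 1, 0, -1):
        hops.append((block_path[i - 1], block_path[i]))
    return hops

hops = trace_back(["A", "C", "E"])  # process (C, E) first, then (A, C)
```

The first pair handled is the one adjacent to the target block, matching the order in which steps (402)-(405) are applied.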
(5) Select two of the remaining logical blocks for inter-block path planning and repeat from step (1) until path planning between all logical blocks is complete.
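The overall loop of steps (1)-(5) might look like this minimal sketch; `shortest_path` is a placeholder for the block-level shortest-path solver of step (3), and all names are assumptions:

```python
# Hypothetical top-level loop: enumerate every pair of logical blocks and
# plan a path only if none exists yet (step (2)'s existence check).
from itertools import combinations

def plan_all(blocks, shortest_path):
    """shortest_path(src, dst) returns the block-level path [src, ..., dst]."""
    planned = {}
    for src, dst in combinations(blocks, 2):
        if (src, dst) in planned:
            # Step (2): a plan may already exist, e.g. produced while
            # planning an earlier pair; skip to the next pair.
            continue
        planned[(src, dst)] = shortest_path(src, dst)  # steps (3)-(4)
    return planned

# Trivial solver for illustration: every pair is directly adjacent.
paths = plan_all(["A", "C", "E"], lambda s, d: [s, d])
```

When the loop finishes, every pair of logical blocks has an inter-block path plan, completing step (5).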
In one embodiment, combining the inter-block path planning with the intra-block path planning yields an overall path from any source node to any target node. A routing table is then constructed from the overall paths and issued to the switches in the subnet.
In one embodiment, once the inter-block and intra-block path plans are combined into the overall path plan, a routing table can be built that gives the forwarding entries of every switch in the subnet. When a switch receives a data packet, it looks up its own forwarding entries to find the forwarding port toward the next hop, thereby realizing data forwarding from the source node to the target node.
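A minimal sketch of the resulting per-switch forwarding: each switch holds a table mapping destination nodes to egress ports, and forwarding a packet is a single lookup. The table contents below are made up for illustration:

```python
# Hypothetical per-switch routing tables built from the overall path plan:
# switch name -> {destination node: forwarding port toward the next hop}.
ROUTING = {
    "C1": {"y": "C101", "x": "C103"},
    "E1": {"y": "E105"},
}

def forward(switch, dst):
    """Return the forwarding port for dst at the given switch, or None."""
    return ROUTING.get(switch, {}).get(dst)
```

A packet for node y arriving at switch C1 is sent out port C101 (connection 701 in fig. 6); an unknown destination yields no port, which a real switch would treat as an error.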
In one embodiment, the routing tables built from the overall paths are issued by the Subnet Manager (SM) to the switches inside the subnet.
It should be noted that every time the network topology changes, including but not limited to host shutdown, switch failure, addition or removal of hosts and switches, or any other topology change, the overall network routing must be re-planned and all switch routing tables in the subnet updated.
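The replanning rule above can be sketched as an event handler; the event names and the `replan` callback are assumptions for illustration only:

```python
# Hypothetical topology events that invalidate the current path plan.
TOPOLOGY_EVENTS = {
    "host_down", "host_added",
    "switch_down", "switch_added",
    "link_changed",
}

def on_topology_event(event, replan):
    """On any topology change, re-run the whole inter- and intra-block
    planning and push fresh routing tables to every switch in the subnet.

    Returns True if a replan was triggered, False for unrelated events.
    """
    if event in TOPOLOGY_EVENTS:
        replan()
        return True
    return False
```

Treating every topology event as a full replan keeps the tables globally consistent, at the cost of recomputing routes even for localized changes.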
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative. The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, methods, and computer program products according to various embodiments of the present application. In this regard, each block in the flowcharts or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or sometimes in the reverse order, depending on the functionality involved. It will also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks therein, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present invention may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk. It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; those skilled in the art may make various modifications and variations. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall be included in its protection scope. It should be noted that like reference numerals and letters denote like items across the figures, so once an item is defined in one figure it needs no further definition or explanation in subsequent figures.
The foregoing is merely illustrative of the present invention, which is not limited thereto; any person skilled in the art can readily conceive of variations or substitutions that fall within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to that of the claims.

Claims (10)

1. A method for path planning between logical blocks of a network topology, which divides the entire non-standard fat-tree topology network of an IB subnet into a plurality of logical blocks, characterized in that the steps for obtaining the forwarding path planning between the logical blocks are as follows:
(1) Enumerate any two logical blocks as a source logical block and a target logical block;
(2) Judge whether a path plan already exists between the source logical block and the target logical block; if not, proceed to step (3) and start inter-block path planning; if a path plan exists, proceed to step (5) and enumerate the next two logical blocks;
(3) Solve the shortest path from the source logical block to the target logical block;
(4) Trace back from the target logical block and calculate the forwarding path rules between each pair of adjacent blocks until the source logical block is reached;
(5) Select two of the remaining logical blocks for inter-block path planning and repeat from step (1) until path planning between all logical blocks is complete.
2. The method according to claim 1, wherein the path planning is implemented by maintaining routing tables in the switches; the routing table of each switch contains entries for all nodes, each entry recording the address of a target node and the forwarding port toward the next hop; constructing the path planning of the entire subnet thus amounts to constructing the routing table of each switch in the subnet.
3. The method according to claim 1 or 2, wherein every time the network topology is updated, the overall network routing is planned once and all switch routing tables inside the subnet are updated.
4. The method according to any one of claims 1 to 3, wherein before step (1) the whole subnet is first divided into several logical blocks, in such a way that it is divided as far as possible into one or more fat-tree topology logical blocks, the remaining part being divided into one or more tree-like logical blocks; the division into logical blocks is only logical: it creates no physical isolation and requires no routers at the top level.
5. The method according to any one of claims 1 to 4, wherein after the division into logical blocks is completed, internal path planning is carried out for each logical block, obtaining a path plan from each node in a logical block to any other node in the same block.
6. The method according to any one of claims 1 to 5, wherein in step (4) the forwarding path rules between each pair of adjacent blocks are calculated starting from the target logical block until the source logical block is reached, specifically as follows:
(401) Enumerate any node in the target logical block as the target node;
(402) Allocate a connection from the previous logical block to the target logical block according to the shortest path determined between the logical blocks;
(403) According to the allocated connection, take the switch port of the previous logical block at that connection as the egress port;
(404) Enumerate each switch in the previous logical block and determine the intra-block path to the egress port;
(405) Rewrite the switch routing forwarding entries, replacing the egress port with the target node, thereby obtaining the path from any switch in the previous logical block to the target node;
(406) Trace back further in the same way to obtain the path planning from each earlier logical block to the target node.
7. The method according to any one of claims 1 to 6, wherein there is more than one connection between the logical blocks, and in step (402) the connections from the previous logical block to the target logical block may be allocated in LID order, or load balancing may be applied by assigning weights to the connections.
8. The method according to any one of claims 1 to 7, wherein the intra-block and inter-block path plans are combined to obtain an overall path from any source node to any target node, and a routing table is constructed based on the overall paths and issued to the switches in the subnet.
9. An electronic device for implementing the method of any one of claims 1 to 8, comprising a memory, a processor, a bus, a network interface, and other peripheral interfaces; the memory, processor, network interface, and other peripheral interfaces are connected via a communication bus, and the processor, when executing a program stored in the memory, implements the steps of the method of any one of claims 1 to 8.
10. A computer-readable storage medium having one or more computer programs stored thereon, characterized in that the one or more programs, when executed by an electronic device comprising a plurality of application programs, implement the steps of the method of any one of claims 1 to 8.
CN202410184216.1A 2024-02-19 2024-02-19 Method and equipment for planning paths between logic blocks of network topology structure Pending CN118250215A (en)


Publications (1)

Publication Number Publication Date
CN118250215A true CN118250215A (en) 2024-06-25

Family

ID=91557410



Legal Events

PB01: Publication