WO1989001665A1 - Hypercube topology for multiprocessor systems with added communication paths between nodes or substituted corner topologies - Google Patents

Info

Publication number
WO1989001665A1
Authority
WO
WIPO (PCT)
Prior art keywords
topology
node
hypercube
nodes
block
Prior art date
Application number
PCT/US1988/002782
Other languages
French (fr)
Inventor
Renben Shu
David H. C. Du
Original Assignee
Regents Of The University Of Minnesota
Priority date
Filing date
Publication date
Application filed by Regents Of The University Of Minnesota
Publication of WO1989001665A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00 Digital computers in general; Data processing equipment in general
    • G06F15/16 Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F15/163 Interprocessor communication
    • G06F15/173 Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
    • G06F15/17337 Direct connection machines, e.g. completely connected computers, point to point communication networks
    • G06F15/17343 Direct connection machines, e.g. completely connected computers, point to point communication networks wherein the interconnection is dynamically configurable, e.g. having loosely coupled nearest neighbor architecture

Definitions

  • In general, the diameter of a topology of small degree tends to be big, and the robustness of the topology tends to be reduced. Most of the previously discussed prior art topologies other than the hypercube are of degree 3 or 2. For very large multiprocessor systems, a topology of degree greater than 3 will be needed, but with a small diameter and improved robustness.
  • The first class of topologies discussed below is an improved hypercube termed the Modified Hypercube (MH). This unique topology reduces the diameter of a hypercube by approximately one-half at the expense of increasing the degree by 1. In some applications this is a favorable trade-off.
  • The second class of topologies discussed below is the Substituted and Modified Hypercube (SMH), in which the corners of a Modified Hypercube are replaced by block topologies. The degree of an SMH can range from 3 to log2N or more.
  • The topological properties PROP(S) of a multiprocessor system S are denoted by a three-tuple (N, p, d), where N is the total number of processors in S, p is the degree of the topology, and d is the diameter of S.
  • Given two topologies S1 and S2, a new topology called the Substituted Topology ST(S2, S1) can be formed by replacing each vertex in S2 by an S1. S2 is called the basic topology and S1 is called the block topology. Since each vertex in S2 has a degree of p2 (the number of I/O ports per vertex), we need to decide how these p2 edges are connected to the vertices in S1.
  • N = N1 × N2: the total number of nodes N in the Substituted Topology is equal to the number of nodes in the basic topology, N2, times the number of nodes in a single block topology, N1. Since each S1 has N1 vertices and S2 has N2 vertices, after each vertex in S2 is replaced by an S1, ST(S2, S1) has N1 × N2 vertices.
  • d ≤ (d2 + 1) × d1: the diameter of the Substituted Topology is bounded by the diameter of the block topology, d1, multiplied by the diameter of the basic topology incremented by one. Since the diameter of the basic topology S2 is d2, the longest path between two vertices in S2 consists of d2 edges and d2 + 1 vertices. After each vertex in S2 is replaced by a block topology S1, the longest possible path between two vertices in ST(S2, S1) is no more than (d2 + 1) × d1.
  • The Substituted Topology ST(S2, S1) contains many more vertices (many more processors) than S2. Since p1 < p2 and N1 > 2, the degree of ST(S2, S1), i.e., p1 + ⌈p2/N1⌉, can be smaller than p2 if p1 is much smaller than p2. Although the diameter of ST(S2, S1) can potentially be much bigger than d2, it is actually quite small when the details of the topology S1 are considered along with the routing algorithm described below. Several examples of this are discussed below, and the property formulas are illustrated in the sketch that follows.
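
For illustration only (the code and the function name st_properties are ours, not the patent's), the three property formulas just stated can be evaluated mechanically:

```python
import math

def st_properties(n1, p1, d1, n2, p2, d2):
    """(N, p, d) bounds for the Substituted Topology ST(S2, S1).

    (n1, p1, d1): node count, degree and diameter of the block topology S1.
    (n2, p2, d2): node count, degree and diameter of the basic topology S2.
    """
    N = n1 * n2                        # every vertex of S2 becomes an S1 block
    p = p1 + math.ceil(p2 / n1)        # block edges plus a share of the S2 edges
    d_bound = (d2 + 1) * d1            # the worst-case bound stated above
    return N, p, d_bound

# Example: S2 a 3-cube (8 nodes, degree 3, diameter 3),
# S1 a 4-node cycle (4 nodes, degree 2, diameter 2):
print(st_properties(4, 2, 2, 8, 3, 3))   # (32, 3, 8)
```
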
  • This substituted topology still preserves some topological properties from both S 1 and S 2 .
  • A routing algorithm can be easily developed based on the routing algorithms for S1 and S2.
  • The address of a vertex in ST(S2, S1) is denoted by a two-tuple (inter-address, inner-address). The inter-address is the address of a vertex in the basic topology S2 which has been replaced by a block topology S1 in ST(S2, S1); the inner-address is the address of a vertex within that particular block topology S1.
  • A path between two vertices (x, y) and (x', y'), where x, x' are inter-addresses and y, y' are inner-addresses, can be routinely derived as shown in Figure 4.
  • Each vertex x in S2 is replaced by an S1 in ST(S2, S1). Since the longest distance between any two vertices in S2 is d2 and the longest distance between any two vertices in S1 is d1, the longest possible distance between any two vertices in ST(S2, S1) is no more than d1 × (d2 + 1).
  • The above substituting scheme can be applied to any two topologies. The preferred embodiment of the present invention uses the hypercube as the basic topology and combines it with various other block topologies. Because the substituting scheme may increase the diameter of the resulting topology, the preferred embodiments of the present invention use a Modified Hypercube as the basic topology to reduce the diameter of the resulting topology.
  • To describe the Modified Hypercube, a set of cubic functions and a complement function are used: the cubic function βk(i) complements the k-th bit of the binary address i, and the complement function β(i) complements every bit of i.
  • In a hypercube topology, a vertex i (0 ≤ i ≤ 2^n − 1) is connected to the n other vertices β0(i), β1(i), ..., βn−1(i).
  • In the Modified Hypercube, each vertex i is also connected to vertex β(i). That is, each node having an address i in a Modified Hypercube is additionally connected to the node having the complement address β(i).
  • The addition of one more connection (port) to each node increases the degree of the hypercube by one. While the degree of the Modified Hypercube is increased by one, the diameter of the system is reduced to approximately half of the original, as the sketch below illustrates.
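
As a concrete illustration of these definitions (the code and names below are ours, not the patent's; the cubic and complement functions are realized as bitwise XOR), the neighbor set of a node in MHn can be built as follows:

```python
def cubic(i, k):
    """Cubic function: complement bit k of address i."""
    return i ^ (1 << k)

def complement(i, n):
    """Complement function: complement all n bits of address i."""
    return i ^ ((1 << n) - 1)

def mh_neighbors(i, n):
    """Neighbors of node i in MH_n: n cube edges plus one complement edge,
    so every node has degree n + 1."""
    return [cubic(i, k) for k in range(n)] + [complement(i, n)]

# In MH_2 every node is adjacent to all three others (a complete graph K4):
assert sorted(mh_neighbors(0b00, 2)) == [0b01, 0b10, 0b11]
```
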
  • Figure 5 shows a simple example of a 2-cube in its Modified Hypercube form MH2. The edges of the original 2-cube are shown in solid lines, while the additional edges of the Modified Hypercube are shown in dashed lines.
  • The vertex with binary address 00 is directly connected by an additional edge to the vertex having binary address 11, and the vertex with binary address 10 is connected by a new edge to the vertex having binary address 01. The new edges in the Modified Hypercube connect vertices with complementary binary addresses. For MH2, the resulting topology is the same as a completely connected network.
  • FIG. 6 shows a 3-cube modified according to the teachings of the present invention to form a Modified Hypercube MH3. Each vertex in MH3 has a 3-bit binary address, and the edges of the original 3-cube are shown in solid black lines. The dashed lines show the additional edges in the MH3 which connect the vertices having complementary binary addresses.
  • For the MH3 shown in Figure 6, there are 8 nodes or vertices, and each is of degree 4; that is, each processor node has four I/O ports. It is easy to see that the diameter of MH3 is 2. Compared with the n-cube, the MHn is also more robust. For instance, there are four remaining virtual 2-cubes in an MH3 when processor nodes 3 and 4 are faulty, as shown in Figure 7 (with the node addresses shown in decimal form for convenience). When processor nodes 3 and 4 are faulty in a plain 3-cube, by contrast, there exist no "good" 2-cubes, and the entire multiprocessor system would be disabled for 2-cube computations.
  • The Modified Hypercube topology can be extended to n-dimensional hypercubes to reduce the diameter by approximately one-half at the cost of increasing the degree by 1. This has significant advantages for multiprocessor computers in and of itself, improving robustness and communicability within the system. Those skilled in the art will readily recognize the performance improvements in a multiprocessor design where messages between processors can be more expeditiously forwarded to reduce the communication bottleneck inherent in such designs. The application of the Modified Hypercube topology to prior art multiprocessors such as the NCUBE/10 machine will be readily apparent for increasing the speed of communication throughout the machine, at the expense of reducing the dimension, and hence the maximum number of processors, of the machine.
  • For example, an NCUBE/10 machine, each processor being limited to 10 interprocessor I/O ports, would contain 512 processors arranged as a 9-cube Modified Hypercube with a degree of 10 but a diameter of only 5. Such a trade-off is extremely advantageous in computing applications where interprocessor communication speed is primary.
  • The properties of the Modified Hypercube MHn can be compared with the prior art topologies of Table 1: its diameter d is equal to ⌊(log2N + 1)/2⌋ and its degree is equal to log2N + 1.
  • The Modified Hypercube is a completely symmetric system and has the same ability as the hypercube to emulate most popular networks, such as the ring, mesh and tree topologies, as well as the ability to emulate the standard hypercube itself.
  • The Modified Hypercube is very good in terms of robustness and, of course, is uniform in its topology, lending itself to a simple routing algorithm.
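
The diameter property above is easy to verify by brute force for small n. The breadth-first-search check below is again our own illustrative code, not part of the patent:

```python
from collections import deque

def mh_diameter(n):
    """Diameter of MH_n by BFS from node 0 (valid since MH_n is node-symmetric)."""
    mask = (1 << n) - 1
    dist = {0: 0}
    queue = deque([0])
    while queue:
        v = queue.popleft()
        for w in [v ^ (1 << k) for k in range(n)] + [v ^ mask]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return max(dist.values())

for n in range(2, 11):
    assert mh_diameter(n) == (n + 1) // 2   # d = floor((log2 N + 1) / 2)
```
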
  • Each vertex address in SMHn(S1) is denoted by a two-tuple (x, y), where 0 ≤ x ≤ 2^n − 1 and 0 ≤ y ≤ N1 − 1. Note that there are N1 × 2^n vertices in SMHn(S1). The x value of the two-tuple represents the vertex address location in the basic topology S2 (which is MHn), and the y value of the two-tuple represents the vertex address location within the block topology S1 (which has been inserted into the basic topology at each S2 vertex).
  • Vertices in the same block are connected according to the topology of the block topology S1. That is, two vertices (x, y) and (x, y') are connected directly if and only if vertex y is connected directly to vertex y' in S1. How a vertex is connected to vertices in adjacent blocks is discussed below.
  • When n + 1 ≤ N1, only a subset of the vertices in a block is connected to the vertices in other adjacent blocks. That is, there are more vertices in the block topology S1 than there are required connections to the basic topology S2 (MHn), so that some I/O ports of the S1 vertices are left unused (assuming homogeneous processors with a fixed number of I/O ports set to the maximum degree of the system). Assume that we choose n + 1 vertices to connect to other adjacent blocks.
  • Vertex (x, y) is connected directly to vertex (βy(x), y) if 0 ≤ y ≤ n − 1, the cubic function of the x value being used to assign the connecting node address in the Substituted Topology.
  • Vertex (x, n), the last remaining vertex to be connected in the block topology S1, is connected directly to vertex (β(x), n), the complement function of the x value being used to assign the connecting node address in the Substituted Topology. These connection rules are sketched in code below.
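
Assuming the wiring rules just stated (and with illustrative names of our own), the complete neighbor set of a vertex (x, y) in SMHn(S1) with n + 1 ≤ N1 can be sketched as:

```python
def smh_neighbors(x, y, n, block_adj):
    """Neighbors of vertex (x, y) in SMH_n(S1), assuming n + 1 <= N1.

    block_adj(y) must return the neighbors of inner-address y inside S1.
    """
    nbrs = [(x, y2) for y2 in block_adj(y)]       # edges inside the block
    if y < n:
        nbrs.append((x ^ (1 << y), y))            # cubic-function edge, dimension y
    elif y == n:
        nbrs.append((x ^ ((1 << n) - 1), y))      # complement-function edge
    return nbrs                                   # inner-addresses above n: block edges only

# Example: S1 a cycle of N1 = n + 2 = 5 nodes.
n, N1 = 3, 5
ring = lambda y: [(y - 1) % N1, (y + 1) % N1]
print(smh_neighbors(0b000, 0, n, ring))           # [(0, 4), (0, 1), (1, 0)]
```
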
  • The degree of SMHn(S1) is p1 + ⌈(n + 1)/N1⌉, and the diameter of an SMHn(S1) can be quite small.
  • Consider, for example, a cycle (ring) CN1 of N1 nodes as the block topology S1 for SMHn(S1). In CN1, vertex i, for 0 ≤ i ≤ N1 − 1, is connected to vertex (i + 1) mod N1. The diameter of SMHn(CN1) is no more than ⌈3 × N1/2⌉ + ⌈n/2⌉.
  • The Substituted and Modified Hypercube topology will support a wide variety of topologies for the block topology S1. Another example of an efficient block topology S1 for the SMHn(S1) Substituted Topology would be to use a Modified Hypercube of lesser degree for the vertices of the Modified Hypercube basic topology. An example of this topology is shown in Figure 8.
  • Each node within the SMH3(MH2) of Figure 9 is addressed using a binary two-tuple; the address tuples shown in Figure 9 use decimal numbers for brevity. Each vertex of MH3 has been replaced by an MH2, such that each MH2 is a class of nodes having the same inter-address. This MH2 block topology individually has a diameter d1 equal to 1 and a degree p1 equal to 3.
  • As a larger example, let n = 15 for S2 and let S1 be a loop (cycle) of four nodes with addresses 0, 1, 2, 3 decimal (or 00, 01, 10, 11 binary). The binary address of each node in the SMH15(C4) topology is a two-tuple with the inter-address x being a 15-bit binary address ranging from x0 through x14 and the inner-address y being a 2-bit binary address ranging from y0 through y1.
  • For a plain hypercube of dimension 15, the diameter is 15, the degree is 15, and the number of nodes N equals 32,768. For a Modified Hypercube MH15, the diameter is 8, the degree is 16, and the number of nodes remains the same.
  • The Substituted and Modified Hypercube SMH15(C4) has 131,072 nodes with a degree of 6 and a diameter of 14. Since the block topology S1 contains only four nodes and each vertex of the MH15 has a degree of 16, requiring each block to support 16 inter-block links, the mapping of the nodes of the block topology onto the basic topology is not a simple 1-to-1 correspondence: each node within the block topology must handle four of the links to the basic topology. Since the four nodes within the block topology S1 already have connections to two other nodes in the cycle, four additional connections are required of each node to connect the nodes of the block topology to the edges of the basic topology. Thus, the degree of the system is equal to 2 + 4 = 6.
  • To support such mappings, the S1 block topology must be a cycle-connected topology. A requirement for a cycle-connected topology is that there must be a path passing through all of the nodes exactly once. Examples of cycle-connected topologies that can be used as S1 in the present invention are the ring, any hypercube including the Modified Hypercube, any completely connected topology, etc. An example of a topology which is not cycle-connected and cannot be used as a block topology for the present invention is the tree structure.
  • Table 2 lists the diameter and the number of ports for SMHn(S1) topologies where several block topologies S1 are listed. Table 2 clearly shows the advantages of the Substituted and Modified Hypercube topologies of the preferred embodiment of the present invention. The diameter and degree of each of the Substituted Topologies can be calculated from the general formulas given above.
  • For example, with the basic topology S2 being a Modified Hypercube MH15 of degree 16, and with C8, an 8-node cycle of degree 2 and diameter 4, as the block topology S1, the degree and the diameter of SMH15(C8) are 4 and 20 respectively.
  • The degree and the diameter of SMH15(C32) are 3 and 32 respectively, and the degree and the diameter of SMH15(C4) are 6 and 14 respectively.
  • Depending upon the degree and the diameter desired for a particular application, the topology can be selected accordingly; the sketch below reproduces these Table 2 figures.
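
These Table 2 entries follow from the degree and diameter formulas given earlier. The fragment below is our own check; the stated cycle diameter bound does not reproduce the C32 figure (the bound assumes the N1 ≤ n + 1 wiring), so only the degree is checked for C32:

```python
import math

def smh_cycle_props(n, n1, p1=2):
    """Degree and diameter bound for SMH_n(C_N1), per the formulas above."""
    degree = p1 + math.ceil((n + 1) / n1)
    diameter = math.ceil(3 * n1 / 2) + math.ceil(n / 2)
    return degree, diameter

print(smh_cycle_props(15, 4))       # (6, 14)  -> SMH15(C4)
print(smh_cycle_props(15, 8))       # (4, 20)  -> SMH15(C8)
print(smh_cycle_props(15, 32)[0])   # 3        -> SMH15(C32), degree only
```
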
  • The mapping of the nodes of the block topology onto the edges of the basic topology is a matter of choice and is not limited to a specific algorithm. However, by keeping the mapping orderly, the routing algorithm for transferring messages throughout the multiprocessor system is simplified. An example of mapping in this illustration would be to start with the node in the block topology having address 00, which is mapped onto four edges of the basic topology.
  • The addresses of the nodes outside the block onto which this node (x, 00) is mapped would be (β0(x), 00), (β4(x), 00), (β8(x), 00) and (β12(x), 00); that is, the inter-addresses obtained by complementing bit positions 0, 4, 8 and 12 of x.
  • The foregoing mapping of nodes within the block topology onto edges in the basic topology is not a unique way of assigning the connections; those skilled in the art will readily recognize a wide variety of mappings.
  • Under this mapping, node address 00 within one block connects to the nodes within other block topologies also having a local address of 00. This can also be seen in the SMH3(MH2) of Figure 9. This mapping technique is used to facilitate the routing algorithm and does not necessarily have to be adhered to in mapping one block topology onto another block topology through the edges of the basic topology.
  • The foregoing examples using a cycle or a Modified Hypercube as the block topology in SMHn(S1) are for illustration purposes only and are not intended to limit the scope of the present invention. Those skilled in the art will readily recognize the extension of the SMHn topology to very large structures, and the block topology used to create the Substituted Topology can likewise be very large. Although the applicants have described one preferred embodiment of the present invention as a Substituted and Modified Hypercube with the block topology also being a Modified Hypercube or another cycle-connected topology, various design considerations will determine the actual topologies substituted, based on, for example, fixing the degree or fixing the diameter desired in the resulting system to tailor the multiprocessor system to its preferred application.
  • The routing algorithm for the Modified Hypercube takes advantage of the fact that direct connections are made within the hypercube structure between complementary node addresses, and these connections are used as often as possible to shorten paths. This serves to reduce the overall diameter of the MHn by approximately one-half, since the additional edges reduce the worst-case minimum distances between any two nodes. In effect, the routing algorithm can use the additional edges as a "shortcut" to find the shortest path between nodes, as the sketch below illustrates.
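
A minimal routing sketch for MHn (our illustration, not text from the patent): when the Hamming distance h between source and destination exceeds n − h + 1, the route takes the complement edge first and then corrects the remaining bits one cube dimension at a time:

```python
def mh_route(x, dest, n):
    """One shortest path from x to dest in MH_n, returned as a list of nodes."""
    mask = (1 << n) - 1
    path = [x]
    h = bin(x ^ dest).count("1")                  # Hamming distance
    if h > n - h + 1:                             # the complement shortcut pays off
        x ^= mask
        path.append(x)
    while x != dest:
        x ^= 1 << ((x ^ dest).bit_length() - 1)   # fix the highest differing bit
        path.append(x)
    return path

# In MH_3, node 000 reaches its complement 111 in a single hop,
# and reaches 110 in two hops:
assert mh_route(0b000, 0b111, 3) == [0b000, 0b111]
assert len(mh_route(0b000, 0b110, 3)) - 1 == 2
```
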
  • The routing algorithm for the Substituted and Modified Hypercube takes advantage of the regularity inherent in hypercube topologies, in that each node can be programmed with a similar routing algorithm, different only in the embedded address of the resident node. The routing algorithm then uses the resident node address to calculate which I/O port to forward a message to if the message is not destined for the resident node.
  • The following routing algorithm takes advantage of the SMHn(S1) topology to find a minimum-distance path between any two nodes. The algorithm is described in general mathematical terms to show its applicability to SMHn(S1) topologies where the block topology S1 can be any of a wide variety of topologies.
  • For routing in SMHn(S1), let Pj denote the subset of cube dimensions i such that a node (w, j) of block w is connected directly to node (βi(w), j).
  • Routing in this manner, the path length is about ⌈n/2⌉ + ⌈3 × N1/2⌉.
  • Let HD(x, x') denote the subset of {0, 1, ..., n − 1} which contains all bit positions in which x differs from x'; that is, the cardinality of HD is the Hamming distance of x and x'. Let H denote {0, 1, ..., n − 1} − HD. When the Hamming distance between x and x' is more than ⌈n/2⌉, we may have to choose a j from H and connect (x, y) to (βj(x), y) before the complement function β can be applied. This algorithm will be better understood from the detailed examples of the routing algorithm presented below.
  • The routing algorithm for the SMHn(S1) topology is very simple and efficient.
  • Let M(y) = {j : 0 ≤ j ≤ n − 1 and j mod N1 = y}, the set of cube dimensions assigned to the vertices with inner-address y.
  • The routing algorithm for forwarding communication packets between nodes in an SMHn(S1) multiprocessor system is as follows:
  • Step 3: Compute HD(x, x') and M(y). Note that M(y) is fixed for each node (x, y); it can therefore be precomputed and stored at each node location.
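
Only one step of the SMHn(S1) algorithm survives above, so the following end-to-end routine is our own reconstruction, not the patent's, for the special case S1 = CN1 and under the assumed dimension assignment j mod N1 = y. It walks around the block cycle to the vertex that owns each differing cube dimension, crosses blocks there, and omits the complement-edge shortcut for brevity:

```python
def smh_ring_route(src, dest, n, n1):
    """Route in SMH_n(C_N1) from src = (x, y) to dest = (x2, y2).

    Cube dimension j is owned by inner-address j % n1; inter-block
    edges connect (x, j % n1) to (x XOR 2**j, j % n1).
    Greedy sketch without the complement-edge shortcut.
    """
    (x, y), (x2, y2) = src, dest
    path = [(x, y)]

    def walk_to(target_y):
        nonlocal y
        while y != target_y:            # take the shorter way around the cycle
            step = 1 if (target_y - y) % n1 <= (y - target_y) % n1 else -1
            y = (y + step) % n1
            path.append((x, y))

    for j in range(n):                  # fix each differing cube dimension
        if (x ^ x2) >> j & 1:
            walk_to(j % n1)             # reach the vertex owning dimension j
            x ^= 1 << j                 # cross to the adjacent block
            path.append((x, y))
    walk_to(y2)                         # finally fix the inner-address
    return path

p = smh_ring_route((0b0000, 0), (0b0101, 3), n=4, n1=4)
assert p[-1] == (0b0101, 3)
```
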
  • As an example, consider SMH11(C4): the basic topology MH11 is a Modified Hypercube formed from an 11-cube, and the block topology C4 is a cycle of four nodes (a 2-cube, not modified).
  • Another interesting example is SMHn(Cn+1), where Cn+1 is a ring (cycle) topology with n + 1 nodes.
  • This topology is similar to the previously proposed cube-connected cycles. However, its diameter is shorter than the diameter of the cube-connected cycles due to the use of a Modified Hypercube as the basic topology.
  • Writing k = n + 1, N = k × 2^n and the diameter d is roughly ⌈n/2⌉ + k + ⌈k/2⌉ (i.e., less than (3/2) × log2N).
  • Alternatively, S1 can be any of the degree-3 topologies mentioned above; for example, the block topology S1 can be a chordal ring of n + 1 nodes.
  • The topologies discussed above can be programmed to emulate other structures such as the cube-connected cycles (CCC). All CCC algorithms can easily be embedded into the SMHn topology. However, the SMHn topology is more robust and has better mapping flexibility than the CCC, and hence is preferred.
  • The basic NCUBE/10 processor node supports ten I/O ports in a 10-cube configuration to allow it to communicate with other processors in the system.
  • With the SMH10(MH2) topology applied to the prior art NCUBE/10 processor nodes, an improvement in the number of nodes that the multiprocessor can support can be achieved while keeping the diameter constant and reducing the actual number of ports required for implementation. Here the block topology MH2 has N1 = 2^2 = 4 nodes.
  • The number of ports required in this Substituted and Modified Hypercube topology is a maximum of p1 + ⌈(n + 1)/N1⌉ = 3 + ⌈11/4⌉ = 6. Hence, the number of ports required to implement this system and the diameter have actually been reduced. The result is a superior multiprocessor system with four times the nodes of the NCUBE/10 machine (4 × 2^10 = 4,096 nodes) and better robustness.
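
These figures can be checked in two lines using the property formulas from earlier (our arithmetic, not the patent's):

```python
import math

n, n1, p1 = 10, 4, 3                   # MH2 block: 4 nodes, degree 3
print(n1 * 2**n)                       # 4096 nodes, vs. 1024 in a 10-cube
print(p1 + math.ceil((n + 1) / n1))    # degree 6, vs. 10 ports on the NCUBE/10
```
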
  • The preferred embodiments of the present invention present many advantages over the prior art.
  • First, the diameter is reduced, since the Modified Hypercube is employed as the basic topology.
  • Second, the degree of the topology can be reduced by the substituting scheme. This also offsets the increased degree resulting from using a Modified Hypercube instead of a plain hypercube as the basic topology in the substituting scheme.
  • Third, the robustness of the SMH topologies is better than that of the hypercube. This is due to the fact that each node in the hypercube is replaced by a set of nodes (i.e., a block topology): the number of paths between two nodes in the original hypercube is less than the number of paths between two nodes chosen from the two corresponding block topologies in an SMH topology.
  • Finally, the routing algorithm is very simple and efficient.


Abstract

The present invention describes an improved topology for multiprocessor systems which facilitates interprocessor communication for parallel processing computers. The topology is constructed by modifying an n-dimensional hypercube, inserting additional communication paths into the hypercube structure to reduce the diameter of the hypercube by increasing the number of ports (degree) for each node. In an alternate embodiment of the present invention, the corners of a modified hypercube are substituted with topologies having cycle paths in order to reduce the degree of the overall structure.

Description

"HYPERCUBE TOPOLOGY FOR MULTIPROCESSOR SYSTEMS WITH ADDED COMMUNICATION PATHS BETWEEN NODES OR SUBSTITUTED CORNER TOPOLOGIES".
FIELD OF THE INVENTION The present invention pertains to the field of high-speed digital data processors, and more particularly to interprocessor communication networks for multiprocessor or multicomputer designs.
BACKGROUND OF THE INVENTION The design of multiprocessor computers with a large number of parallel processing elements requires fast and efficient communication throughout the multiprocessor network. The largest bottleneck in the processing ability of large multiprocessor systems is the interprocessor communication. There exists in the prior art a wide variety of interprocessor communication network topologies which yield various levels of performance. The trend is to build large system configurations of multiprocessors operating in parallel in the range of thousands of processors.
In a multiprocessor or multicomputer design (the present discussion centers on topology structure, so no differentiation is made between multiprocessors and multicomputers) using thousands of parallel processing elements, the physical packaging of the hardware becomes a major concern for interprocessor communication. For example, the preferred cost-effective method of implementing a large parallel processor is to use microprocessors packaged in individual VLSI packages with a small number of memory and other support chips for each processor. The number of other microprocessor packages that a single package can communicate with depends on the number of input/output (I/O) ports available. For the fastest direct communication between two microprocessors, dedicated pins on the LSI package are used to form the communication interconnect. Thus, the number of communication ports that a microprocessor can support is directly dependent on the pin limitations of the packages. LSI packages of the prior art have been built with more than 300 pins; however, in multiprocessor designs using tens of thousands of processors, pin limitations are still a factor limiting the size of the design even if the I/O ports are multiplexed on shared pins. This is a very important reason why a single processor element in a multiprocessor design cannot have a large number of ports.
Interprocessor communication can be divided into two main categories of topologies, commonly entitled direct and indirect. The direct network connection topology has been widely researched and considered for telephone switching connections. These connections are typically tightly coupled, slow-speed communications networks designed either as single-stage or multi-stage designs. The most commonly used direct interconnect topology is the crossbar interconnect widely used in older telephone exchange central offices. When the number of nodes increases in a direct interconnection topology, the number of possible interconnections grows very quickly. These types of prior art direct interconnection techniques are very expensive for loosely-coupled or distributed multiprocessor designs with a very large number of processors.
The indirect interconnection topology is typically described in terms of the physical layout of the interconnected nodes of the system. These interconnect schemes are usually described in terms of the degree required for implementation, i.e., degree one, degree two, degree three, hypercube, etc. The most promise for very large multiprocessor system interconnect topologies has been found in the hypercube topology.
Several prior art multiprocessors have been designed based upon the hypercube topology. Exemplary of these types of multiprocessors is the NCUBE/10 parallel processing computer manufactured by NCUBE Corporation of Beaverton, Oregon. This multiprocessor system uses a 10-dimensional hypercube, or 10-cube, when fully configured. The NCUBE/10 machine can operate using 1,024 microprocessors operating in parallel, providing an overall performance of upwards of 500 million floating point operations per second (MFLOPS) or 2,000 million integer instructions per second (MIPS). However, a multiprocessor of this capability is still limited in its ability to grow to a larger size due to I/O limitations between the microprocessors. Several variations on multi-degree topologies for multiprocessors have been proposed in the prior art to reduce the longest path between microprocessors and the number of interconnections between microprocessors, in order to improve performance and to allow the number of microprocessors to grow larger. These proposals for multiprocessor interconnection topologies are typically shown in the form of a graph in which nodes represent switching points or processing elements and edges represent communication links. Since the topologies tend to be regular, the descriptions lend themselves to graphical displays representing systems such as the types shown in the Figures attached to the present patent application. Those skilled in the art readily recognize the conversion of graphical representations of system topologies into hardware. Hence, this shorthand notation is a convenient method of representing larger and more complex hardware multiprocessor systems without the associated complexity of unnecessary details. To best understand the prior art of interconnection topologies for multiprocessor systems, the present patent application includes a detailed discussion of the prior art to more carefully place the present invention in light of its advancements over the prior art.
SUMMARY OF THE INVENTION An improved topology for multiprocessor systems is described to facilitate interprocessor communication for parallel processing computers. In one preferred embodiment of the present invention, an interprocessor communication system for use in multiprocessor designs is constructed by modifying an n-dimensional hypercube. This modified hypercube is constructed using 2^n nodes arranged as an n-dimensional hypercube (n-cube). Additional communication paths are inserted into the hypercube structure to reduce the diameter of the hypercube by increasing the number of ports (degree) for each node. This is an acceptable trade-off when the number of nodes within the system increases.
In a second preferred embodiment of the present invention, the corners of the modified hypercube are substituted with topologies having cycle paths in order to reduce the degree of the overall structure. BRIEF DESCRIPTION OF THE DRAWINGS In the drawings, where like numerals refer to like components throughout the several views:
FIG. 1 shows a variety of prior art network topologies such as (a) the chordal ring, (b) the cube- connected cycle (CCC), (c) the ring (cycle), (d) the tree, (e) the mesh (grid), and (f) the completely connected network topology.
FIG. 2 shows prior art hypercubes of (a) one-, (b) two-, (c) three-, and (d) four-dimensional configurations.
FIG. 3 shows a path connecting x and x' in the basic topology S2.
FIG. 4 shows a path connecting (x, y) and (x', y') in a Substituted Topology ST(S2, S1). FIG. 5 is a Modified Hypercube MH2 formed from a 2-cube.
FIG. 6 is a Modified Hypercube MH3 formed from a 3-cube.
FIG. 7 exemplifies the robustness of the Modified Hypercube by showing four remaining virtual 2-cubes in a MH3 configuration when nodes 3 and 4 are faulty.
FIG. 8 shows a substituted node (within the circle) of a Substituted Topology MH3 (MH2) where the basic topology is a Modified Hypercube of the type shown in FIG. 6 and the block topology is a Modified Hypercube of the type shown in FIG. 5.
FIG. 9 is a graphical representation of a Substituted Modified Hypercube SMH3(MH2) where the basic topology is a Modified Hypercube of the type shown in FIG. 6 and the block topology is a Modified Hypercube of the type shown in FIG. 5. DETAILED DISCUSSION OF THE PRIOR ART In recent years there has been a growing interest in developing multiprocessor or multicomputer systems. The quest for increased computational power in scientific computing and the limits of physical electronic devices have led to the exploration of new architectures as alternatives to traditional monolithic designs. Multiprocessor designs hold the promise of tremendous performance increases, provided the interconnection communication can support the parallelism inherent in the computation.
The communication medium may be shared memory using a direct interconnection network (such as an omega network, crossbar, indirect binary hypercube, etc.), a broadcast bus, or an indirect interconnection network. Broadcast bus schemes are not expensive, but it is difficult to get high performance from them when the system is large, while shared memory systems are expensive when scaled to large dimensions because of the rapid growth of the interconnection network. Large system configurations (for example, with thousands of processors) are readily realized with distributed memory based on a limited form of interconnection, such as the pyramid or binary hypercube. With this view, much current computer architecture research has focused on the use of identical processors in homogeneous configurations that employ message passing over limited forms of indirect interconnections. The present patent application is directed to such indirect interconnection topologies, which can be used for small (for example, ten-processor) to very large multiprocessor systems (for example, with a million or more processors).
There are various designs for indirect interconnection topologies. An indirect interconnection topology for a multiprocessor system can be considered as a graph with each processor as a vertex (or node) and with an edge between two vertices if their corresponding processors are directly connected. The terms "graph" and "system" are used interchangeably throughout this application. In selecting a topology for a multiprocessor system, several factors are considered to optimize the design and maximize performance. The first criterion is that the degree of a graph should be small. The degree of a graph is defined as the maximum number of vertices connected directly (through I/O ports) to a vertex in the graph. This restriction reflects the fact that a processor with a large number of I/O line interfaces is expensive and may not be feasible to realize due to pin-out limitations, multiplexing problems, etc.
The second criterion is that the diameter of the graph should be small relative to the number of the vertices. The diameter of a graph is defined as the maximum shortest distance between any pair of vertices in the graph. Processing elements not directly connected will have to have messages relayed by intervening vertices. The number of relays in the worst case should be kept small to keep communication time delay between processors to a minimum.
Third, there should be no congestion points in the system. The message relaying load should be uniformly distributed throughout the network such that there is no bottleneck in the message communication flow.
Fourth, the routing algorithm for any node in the system should be easily implemented, so that the time cost of communication is small. Also, the system should be uniform, so that each node processor can be loaded with the identical routing algorithm to forward messages independent of the location of the node in the system. The node need only recognize its own address. Uniformity also creates a system without congestion.
Fifth, the system should be robust. For a large system with thousands or more processors, it is very possible that some processors will have faults. A system with a number of faulted processors (nodes) should still be able to perform computation by disabling the faulty processors, reconfiguring the system (by the host or control processor), and routing messages around the faulted nodes. Robustness is very important for such a system. Usually the robustness of a system is related to the number of possible (or alternative) paths between each pair of processors.
Sixth, the system should have good capabilities for emulating other network topologies.
The extreme case of small degree (few I/O ports) is to allow each processor to have at most two or three neighbors. An example of a network topology with only two neighbors is the ring, as shown in FIG. 1C. The diameter of a ring is equal to half the number of vertices. In the ring shown in FIG. 1C, the diameter, therefore, is 4, as the number of vertices is equal to 8.
To keep the degree small by allowing the network topology a maximum of three neighbors, several different topologies in the family of trivalent graphs have been proposed in the prior art. One of these is shown as the tree structure in FIG. 1D, in which each node is attached to one parent and two children. The tree topology results in congestion and poor robustness. Another example of a multiprocessor topology of small degree (4) is the mesh, in which each processor has four neighbors to which it is attached. The diameter of the mesh structure is √N (where N is the number of nodes in the system) and, as the size grows very large, so too does the diameter, making interprocessor communication through the large network extremely difficult. The extreme case of small diameter is to allow each processor to be connected to every other processor in the network. This is exemplified in FIG. 1F as the completely connected topology. For a completely connected network where the number of processors in the system is N, the degree p is equal to N − 1 and the diameter d is equal to 1. Thus, a completely connected topology offers extremely efficient communication between the processors but is limited in size due to the high number of I/O ports that each processor must support. The cube-connected cycle shown in FIG. 1B is a trivalent graph with N = n · 2^n vertices, where n equals the dimension of the cube. The vertices are grouped into 2^n cycles of n vertices each and conceptually arranged at the corners of an n-cube. Each of the n edges emanating from a corner of the cube is used to connect one of the vertices of the corresponding cycle to a vertex of a neighboring cycle. The diameter of the system is (5/2) · log2N + O(1). A generalization of the binary trees, called multitree structure graphs, is formed by taking t binary trees and connecting the root of each to one vertex of a t-vertex cycle. Its diameter is less than 2 · log2N + O(1). Another trivalent graph is the Dense Trivalent Graph (DTG). Its diameter is (3/2) · log2N + O(1). This topology cannot simulate a two- or three-dimensional mesh.
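
For concreteness, a few of these counts (our own illustrative arithmetic, in Python):

```python
N = 8
print(N // 2)      # diameter of an 8-node ring: 4
print(N - 1)       # degree of a completely connected 8-node network: 7

n = 4              # cube-connected cycles built on a 4-cube
print(n * 2**n)    # N = 64 vertices, each of degree 3
```
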
Another type of trivalent graph is the chordal ring, shown in FIG. 1A. A comparison of a variety of the prior art network topologies is shown in TABLE 1. Formulas for calculating the diameter and degree of the network topologies are shown, as are the other types of topologies that each topology can emulate. Comparative judgments based on criteria such as robustness, congestion and uniformity, as discussed above, are also included in TABLE 1.
[TABLE 1]
FIG. 2 is an example of another type of interconnection topology termed the hypercube or n-cube. The hypercube takes its name from a three-dimensional cube extended into lesser and greater dimensions. For example, as shown in FIG. 2, a 1-cube is simply two nodes connected along a single communication path. Each node or vertex 900 is connected along an edge 901 of the cube which represents in graphical form a communication link. The simplest form of a hypercube is a zero-cube in which there is a single node with no connections to any other nodes.
A 2-cube has a number of processors N equal to 2^n, in which n is the dimension of the cube; hence, the 2-cube has four processors, shown with vertex addresses 00, 01, 11, and 10 in their binary form. A standard three-dimensional cube, also shown in FIG. 2, has a degree of 3, a diameter of 3, and 8 processors or nodes connected by 12 communication paths. Each node in a 3-cube connects with three other nodes. In a general sense, the binary n-cube or hypercube has a diameter of log2N and a degree of log2N. The hypercube can be extended into any number of dimensions for implementing a regular interprocessor communication network. For example, the 4-cube of FIG. 2 is a theoretical representation of a four-dimensional cube with the binary addresses of each node of the cube shown in FIG. 2. The diameter of a 4-cube is equal to 4 and its degree is 4, such that each node is connected to four other nodes in the system. Those skilled in the art can readily recognize the extension of the hypercube into n dimensions.
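
The binary addressing makes the n-cube easy to express in code. In the sketch below (our illustration, not the patent's), two nodes are adjacent exactly when their addresses differ in a single bit, so the distance between nodes equals the Hamming distance of their addresses:

```python
def hypercube_neighbors(i, n):
    """Neighbors of node i in an n-cube: flip each of the n address bits."""
    return [i ^ (1 << k) for k in range(n)]

# Node 00 of the 2-cube is adjacent to 01 and 10, but not to 11:
assert sorted(hypercube_neighbors(0b00, 2)) == [0b01, 0b10]
```
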
The hypercube has received significant interest due to the success of the Caltech Cosmic Cube project, the commercial availability of the NCUBE/10 parallel processor, and the availability of boolean cube-configured concurrent processors from Intel Scientific Computers, NCUBE, Ametek, Floating Point Systems and Thinking Machines Corp. The reason the binary hypercube has attracted so much attention is its powerful topological properties: homogeneity, symmetry, a recursive structure, and the ability to map most other popular network topologies. Many practical algorithms based on two- or three-dimensional meshes (grids), trees and rings (cycles) can be embedded into or emulated by the binary hypercube. For example, the commercially available binary hypercube parallel processor NCUBE/10 has a potential computing power of 500 million floating-point operations per second (MFLOPS), which is more than three times that of a CRAY-1 computer. However, the relative cost of the NCUBE/10 is much less than that of the CRAY-1.
The NCUBE/10 parallel computer available from NCUBE Corporation of Beaverton, Oregon, is a true parallel processing computer which can be configured with as few as 16 processors and expanded to a maximum of 1,024 processors. The NCUBE/10 machine is implemented using a hypercube topology expandable to a degree of 10 with a diameter of 10. Each node within the system is an independent 32-bit microprocessor with its own local memory and, at the most, 10 communication links to other nodes or processors in the system. Each node in the NCUBE/10 device has one 32-bit microprocessor and 128 Kbytes of memory. Each node executes its own program out of its local memory and operates on its own data. Each node in the NCUBE/10 machine is connected in a network of a hypercube configuration to a set of its neighbors through Direct Memory Access (DMA) communication channels that are controlled by the microprocessor of the node. The initiation and execution of communication throughout the network is on an interrupt basis, allowing the microprocessors of each node to independently process data while communication through DMA channels continues. System configuration, global control and external I/O are controlled by host processors.
Twenty channels are connected to each node for communication with its neighbors in a fully configured NCUBE/10 machine. Ten in-bound and ten out-bound channels run at 10 Mbits per second. An additional in-bound channel and out-bound channel are used for system I/O and various control functions within the machine. Each node is programmed with a routing algorithm which allows it to communicate with any other node or to pass communications through itself to be forwarded to the addressed node.
Another prior art multiprocessor using an n-cube interconnection network topology is the Connection Machine constructed by Thinking Machines Corporation, Cambridge, Massachusetts. The prototype of this machine, the CM-1, contains 65,536 (2^16) cells or processors connected by a packet-switched network based on the boolean n-cube (hypercube) topology and using an adaptive routing algorithm. The network topology is constructed using a 12-cube where each node in the 12-cube is constructed of a router processor connected to 12 other remote router processors and also connected to 16 local microprocessors. Each node in the 12-cube of the CM-1 is constructed from a custom-designed VLSI chip that contains 16 processor cells and one router unit of the packet-switched communications network. All communications transfers through the 12-cube take place over the edges of the 12-cube, connected to bidirectional I/O pins on each VLSI processor and router chip.
Each vertex in the 12-cube of the CM-1 multiprocessor has 16 processors connected in a 4x4 grid (mesh) such that each processor at each vertex communicates directly with its north, east, west and south neighbors within the same vertex. Thus, each vertex in the 12-cube of the CM-1 multiprocessor is substituted with a two-dimensional mesh topology which connects to the 12-cube through a router.
Parallel processing computers of the size and type of the NCUBE/10 and the CM-1 are used for a wide variety of scientific applications such as seismic processing, image processing, artificial intelligence processing, simulation of events such as fluid flow, molecular modeling, weather, and many other applications. The common purpose in all of these applications is to solve a problem as a set of concurrent cooperating subtasks. As requirements for these applications grow in the future, higher performance with larger numbers of processors is required to effectively meet the needs of the scientific community.
However, as mentioned above, it is very difficult to build a hypercube with a massive number of processors using today's VLSI technology, since the degree of a processor in a hypercube is log2N. For instance, in the NCUBE/10 parallel processor there are 1,024 processors (a 10-cube) in a fully configured, 16-board machine. Each board contains 64 processors (a 6-cube) with 8 Mbytes of memory. Since each processor on the board has off-board bidirectional I/O channels to four more processors and one bidirectional I/O channel to one I/O board (the six remaining I/O channels communicate with on-board processors), this results in 512 backplane connections just for communication channels for one board. If a multiprocessor system of about one million (2^20) processors were to be realized based on the same technology, 1,792 backplane connections for one board would be required (assuming each node can support 20 I/O ports). This makes the hypercube topology unrealistic for a multiprocessor of one million processors. As Very Large Scale Integration (VLSI) technology improves to Ultra Large Scale Integration (ULSI) technology, the number of processors which can be packed into a chip or a board increases. Thus, it is important to use a topology of small degree (fewer I/O ports) to minimize the number of inter-chip or inter-board I/O connections. However, the diameter of a topology of small degree tends to be big and the robustness of the topology tends to be reduced. Most of the previously discussed prior art topologies other than the hypercube are of degree 3 or 2. For a multiprocessor of medium or large size, a topology of a degree greater than 3 will be needed, but with a small diameter and improved robustness.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
In the following detailed description of the preferred embodiments, reference is made to the accompanying drawings which form a part hereof and in which is shown by way of illustration a specific embodiment in which the invention may be practiced. This embodiment is described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that structural changes may be made without departing from the scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined by the appended claims. The objective of the new class of interconnection topologies which form the preferred embodiments of the present invention is to reduce the degree and the diameter of the hypercube while preserving the favorable topological properties so that very large parallel processors can be constructed and operated. The first class of topologies discussed below is an improved hypercube termed the Modified Hypercube (MH). This unique topology reduces the diameter of a hypercube by approximately 1/2 at the expense of increasing the degree by 1. In some applications this is a favorable trade-off. A second type of interconnection topology is proposed which is based upon the Modified Hypercube and termed the Substituted and Modified Hypercube (SMH). Several examples of the SMH topologies are presented below, and the MH and SMH topologies are described both in textual form and in mathematical notation with reference to the attached drawings.
The degree of an SMH can range from 3 to log2N or more. The diameter of an SMH is reduced compared with existing topologies of the same degree. For instance, there are N=(n+1)·2^n nodes in one SMH topology discussed below, and the degree and the diameter are 3 and 2·log2N+O(1) respectively. This compares favorably with the similar cube-connected cycles topology, which has a degree of 3 and a diameter of 5/2·log2N+O(1).
In the detailed discussions which follow, the three most important topological properties of an interconnection network are discussed, which are: 1) the total number of nodes N in the system; 2) the degree p of the system; and 3) the diameter d of the system. In keeping with the structure of this discussion, the following notation is used.
Substitution Topology Terminology The topological properties PROP(S) of a multiprocessor system S are denoted by a three-tuple (N,p,d), where N is the total number of processors in S, p is the degree of the topology, and d is the diameter of S.
Let S1 denote a topology of a first multiprocessor system with PROP(S1)=(N1,p1,d1) and S2 denote another topology of a second multiprocessor system with PROP(S2)=(N2,p2,d2). A new topology called the Substituted Topology ST(S2,S1) can be formed by replacing each vertex in S2 by an S1. S2 is called the basic topology and S1 is called the block topology. Since each vertex in S2 has a degree of p2 (number of I/O ports per vertex), we need to decide how these p2 edges are connected to the vertices in S1. Assume for the time being that the p2 edges are connected uniformly to the vertices in S1. That is, there are at most [p2/N1] "extra" edges connected to each vertex in an S1 after the substituting. Let PROP(ST(S2,S1))=(N,p,d). Then the following three equations are true.
In the first equation, N=N1·N2: the total number of nodes N in the Substituted Topology is equal to the number of nodes N2 in the basic topology times the number of nodes N1 in a single block topology. Since each S1 has N1 vertices and S2 has N2 vertices, after each vertex in S2 is replaced by an S1, ST(S2,S1) has N1·N2 vertices. That is, every vertex in S2 is replaced by the block topology S1, each of which has N1 nodes, so the resulting Substituted Topology has N1 times N2 nodes.
For the second equation, p=p1+[p2/N1]: each vertex in the block topology S1 is connected to p1 other vertices, and after the substitution, at most [p2/N1] extra edges are added to it. (The half-bracket notation used here refers to the mathematical ceiling function, i.e., rounding a real or fractional number up to the nearest integer.)
For the third equation, d≤(d2+1)·d1: the diameter d of the Substituted Topology is bounded by the diameter of the block topology d1 multiplied by the diameter of the basic topology incremented by one. Since the diameter of the basic topology S2 is d2, the longest path between two vertices consists of d2 edges and d2+1 vertices. After each vertex in the basic topology S2 is replaced by a block topology S1, the longest possible path between two vertices in ST(S2,S1) is no more than (d2+1)·d1.
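These three relations are simple enough to evaluate mechanically. The short Python sketch below is illustrative only (the function name and argument order are hypothetical); it computes the properties of ST(S2,S1) from PROP(S1) and PROP(S2), reporting the diameter as the upper bound just derived:

    import math

    def substituted_properties(prop_block, prop_basic):
        # PROP of ST(S2,S1) from PROP(S1)=(N1,p1,d1) and PROP(S2)=(N2,p2,d2).
        # The diameter is returned as the upper bound (d2+1)*d1; the actual
        # diameter can be smaller once a routing algorithm is considered.
        n1, p1, d1 = prop_block     # block topology S1
        n2, p2, d2 = prop_basic     # basic topology S2
        nodes = n1 * n2                        # N = N1 * N2
        degree = p1 + math.ceil(p2 / n1)       # p = p1 + [p2/N1]
        d_bound = (d2 + 1) * d1                # d <= (d2+1) * d1
        return nodes, degree, d_bound

    # Example: substituting a 4-node ring (N1=4, p1=2, d1=2) into a 3-cube
    # (N2=8, p2=3, d2=3) gives 32 nodes of degree 2 + [3/4] = 3.
    print(substituted_properties((4, 2, 2), (8, 3, 3)))   # (32, 3, 8)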
In comparing ST(S2,S1) with the original basic topology S2, the Substituted Topology ST(S2,S1) contains many more vertices (many more processors). Since p1<p2 and N1>2, the degree of ST(S2,S1) (i.e., p1+[p2/N1]) can be smaller than the number of ports p2 if p1 is much smaller than p2. Although the diameter of ST(S2,S1) can potentially be much bigger than d2, it is actually quite small when the details of the topology S1 are considered along with the below-described routing algorithm. Several examples of this are discussed below.
This substituted topology still preserves some topological properties from both S1 and S2. For instance, a routing algorithm can be easily developed based on the routing algorithms for S1 and S2. In order to demonstrate this, let us further assume that the address of a vertex in ST(S2,S1) is denoted by a two-tuple (inter-address, inner-address). The inter-address is the address of a vertex in the basic topology S2 which has been replaced by a block topology S1 in ST(S2,S1). The inner-address is the address of a vertex in the particular block topology S1. A path between two vertices (x,y) and (x',y'), where x, x' are inter-addresses and y, y' are inner-addresses, can be routinely derived as shown in Figure 3.
First consider two vertices with addresses x and x' in S2, shown in Figure 3. There exists a path x=x0,x1,x2,...,xj-1,xj=x' connecting x and x' in S2. For the Substituted Topology shown in Figure 4, each vertex x in S2 is replaced by an S1 in ST(S2,S1). Since the longest distance between any two vertices in S2 is d2 and the longest distance between any two vertices in S1 is d1, the longest possible distance between any two vertices in ST(S2,S1) is no more than d1·(d2+1).
Modified Hypercube The above substituting scheme can be applied to any two topologies. The preferred embodiment of the present invention uses a hypercube as the basic topology, combining it with various other block topologies to achieve this goal. However, as discussed above, the substituting scheme may increase the diameter of the resulting topology. Therefore, the preferred embodiments of the present invention use a modified hypercube as the basic topology to reduce the diameter of the resulting topology. To aid in the description of the modified hypercube topology, a set of cubic functions and a complement function are described below.
First, let i=[in-1 in-2 ... i1 i0] be the binary representation of an integer i (where n is the number of binary bits) and, for a single bit x, let x̄=1-x denote its complement. Then: 1) A cubic function βj, for 0≤j≤n-1, is defined by βj(i)=[in-1 ... ij+1 īj ij-1 ... i0]; that is, βj complements the j-th bit of the address. 2) A complement function ɣ is defined by ɣ(i)=[īn-1 īn-2 ... ī1 ī0]; that is, ɣ complements every bit of the address. A vertex i (0≤i≤2^n-1) is connected to n other vertices β0(i), β1(i), ..., βn-1(i) in a hypercube topology. In the Modified Hypercube topology each vertex i is also connected to vertex ɣ(i). That is, each node having an address i in a Modified Hypercube is additionally connected to the node having the complement address ɣ(i). The addition of one more connection (port) to each node increases the degree of the hypercube by one. While the degree of the Modified Hypercube is increased by one, the diameter of the system is reduced to approximately half of the original. This property of the Modified Hypercube will result in very good substituted topologies as described below. The Modified Hypercube of 2^n vertices is denoted by MHn for purposes of the present patent application, with PROP(MHn)=(2^n, n+1, [n/2]). This Modified Hypercube has several advantages discussed below.
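A minimal Python sketch of the cubic and complement functions and of the resulting MHn adjacency is given below for illustration (identifiers are chosen for exposition only); a breadth-first search from vertex 0, which suffices because MHn is vertex-symmetric, confirms the stated diameter for MH3:

    from collections import deque

    def beta(i, j):
        # Cubic function: complement bit j of the address i.
        return i ^ (1 << j)

    def gamma(i, n):
        # Complement function: complement all n bits of the address i.
        return i ^ ((1 << n) - 1)

    def mh_neighbors(i, n):
        # Neighbors of vertex i in the Modified Hypercube MHn: the n cube
        # edges plus the one complement edge, giving degree n + 1.
        return [beta(i, j) for j in range(n)] + [gamma(i, n)]

    def mh_diameter(n):
        # Diameter of MHn by breadth-first search from vertex 0.
        dist = {0: 0}
        queue = deque([0])
        while queue:
            v = queue.popleft()
            for w in mh_neighbors(v, n):
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        return max(dist.values())

    assert len(mh_neighbors(0, 3)) == 4   # MH3 has degree 4 ...
    assert mh_diameter(3) == 2            # ... and diameter 2, as stated below.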
Figure 5 shows a simple example of a 2-cube in its Modified Hypercube form MH2. The edges of the original 2-cube are shown in solid lines, while the additional edges of the Modified Hypercube are shown in dashed lines. The vertex with binary address 00 is directly connected by the additional edge to the vertex having a binary address of 11. In a like fashion, the vertex with the binary address 10 is connected by a new edge to the vertex having a binary address 01. Thus the new edges in the Modified Hypercube connect vertices with complementary binary addresses. For MH2, the resulting topology is the same as a completely connected network.
Figure 6 shows a 3-cube modified according to the teachings of the present invention to be a Modified Hypercube MH3. Each vertex in MH3 has a 3-bit binary address and the edges of the original 3-cube are shown in solid black lines. The dashed lines show the additional edges in the MH3 which connect the vertices having complementary binary addresses.
For the MH3 shown in Figure 6, there are 8 nodes or vertices and each is of degree 4. That is, each processor node has four I/O ports. It is easy to see that the diameter of MH3 is 2. Compared with the n-cube, the MHn is more robust. For instance, there are four virtual 3-cubes in an MH3, as shown in Figure 7 with the node addresses shown in decimal form for convenience. When processor nodes 3 and 4 are faulty in a 3-cube, it can be seen that there exist no "good" 2-cubes; the entire multiprocessor system would be disabled for 2-cubes. However, as shown in Figure 7, there still exist three 2-cubes in an MH3 (i.e., the 2-cubes formed by nodes [2, 5, 1, 6], by nodes [0, 7, 5, 2], or by nodes [7, 0, 1, 6]). In general, for MHn systems with, for example, two failures, there remain a plurality of (n-1)-cubes to carry on processing. Hence the system is very fault tolerant. The additional connections established in a Modified Hypercube also provide a greater number of communication paths, which tends to reduce interprocessor communication delay within the system.
The Modified Hypercube topology can be extended to n-dimensional hypercubes to reduce the diameter by approximately one-half at the cost of increasing the degree by 1. This has significant advantages for multiprocessor computers in and of itself for improving robustness and communicability within the system. Those skilled in the art will readily recognize the performance improvements in a multiprocessor design where messages between processors can be more expeditiously forwarded to reduce the communication bottleneck inherent in such designs. The application of the Modified Hypercube topology to prior art multiprocessors such as the NCUBE/10 machine will be readily apparent for increasing the speed of communication throughout the machine, at the expense of reducing the dimension of the cube that the available ports can support. Thus, an NCUBE/10 processor, being limited to 10 interprocessor I/O ports, would contain 512 processors in a modified 9-cube (MH9) with a degree of 10 but a diameter of 5. Such a trade-off is extremely advantageous in computing applications where interprocessor speed is primary. The properties of the Modified Hypercube MHn can be compared to the prior art topologies of Table 1 in that the diameter d is equal to [(log2N+1)/2] and the degree is equal to log2N+1. The Modified Hypercube is a completely symmetric system and has the same ability as the hypercube to emulate most popular networks such as the ring, mesh and tree topologies, as well as the ability to emulate the standard hypercube itself. The Modified Hypercube is very good in terms of its robustness and of course is uniform in its topology, lending itself to a simple routing algorithm.
Substituted and Modified Hypercube Applying the substituting scheme to the Modified Hypercube, a Substituted and Modified Hypercube (SMH) can be obtained according to the teachings of the present invention. Assume as above that PROP(S1)=(N1,p1,d1), and denote the Substituted Topology ST(MHn,S1) by SMHn(S1), since the basic topology S2 is a Modified Hypercube MHn. In compliance with the substitution terminology discussed above, each vertex or vertex address in SMHn(S1) is denoted by a two-tuple (x,y), where 0≤x<2^n and 0≤y<N1. Note that there are N1·2^n vertices in SMHn(S1). The x value of the two-tuple represents the vertex address location in the basic topology S2 (which is MHn) and the y value of the two-tuple represents the vertex address location within the block topology S1 (which has been inserted into the basic topology at each S2 vertex). For purposes of the description given here, it is defined that two vertices (x,y) and (x',y') belong to the same block (block x) if and only if x=x'. That is, the two vertices are found within the same block topology S1 when the x values of the address two-tuples match. Thus, each block (original S2 vertex) contains N1 vertices, and block x is adjacent to block x' if and only if either x'=βj(x) for 0≤j<n or x'=ɣ(x). Without knowing the details of the block topology S1, the connections between vertices in SMHn(S1) are described as follows. First, vertices in the same block (same original S2 vertex location) are connected according to the topology of the block topology S1. That is, two vertices (x,y) and (x,y') are connected directly if and only if vertex y is connected directly to vertex y' in S1. The discussion of how a vertex is connected to vertices in adjacent blocks is given below.
When n+1<N1, only a subset of the vertices in a block is connected to the vertices in other adjacent blocks. That is, there are more vertices in the block topology S1 than there are required connections to the basic topology S2 (MHn), so that some I/O ports of the S1 vertices are left unused (assuming homogeneous processors with a fixed number of I/O ports set to the maximum degree of the system). Assume that n+1 vertices are chosen to connect to other adjacent blocks. That is, vertex (x,y) is connected directly to vertex (βy(x),y) if 0≤y<n, the cubic function of the x value being used to assign the connecting node address in the Substituted Topology. Vertex (x,n), the last remaining vertex to be connected in the block topology S1, is connected directly to vertex (ɣ(x),n), the complement function of the x value being used to assign the connecting node address in the Substituted Topology. This is in keeping with the earlier discussed concept of using the additional edges in a Modified Hypercube to connect nodes with complementary binary addresses to reduce the diameter and to facilitate the simplicity of the routing algorithm. Thus, from the foregoing discussion, the degree of SMHn(S1) is p1+1.
When n+1≥N1, vertex (x,y) is connected directly to vertices (βj(x),y), where 0≤j<n and y=j mod N1, and vertex (x,y) is also connected to vertex (ɣ(x),y) if y=n mod N1. In this case, there are more required connections from the block topology S1 to the basic topology S2 (MHn), so some or all of the nodes of S1 must connect to more than one edge in S2. The degree of SMHn(S1) is p1+[(n+1)/N1], and the diameter of an SMHn(S1) can be quite small.
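Both cases follow one modular rule: dimension j of the basic topology MHn is handled by the block vertex y=j mod N1, and the complement edge by the block vertex y=n mod N1. The following Python sketch, offered as an illustrative construction only (identifiers are hypothetical and the block topology is taken as an arbitrary edge list), builds the complete edge set of SMHn(S1) under this rule:

    def smh_edges(n, n1, block_edges):
        # Edge set of SMHn(S1) over vertices (x, y) with 0 <= x < 2**n and
        # 0 <= y < n1, where block_edges lists the edges (y, y') of S1.
        mask = (1 << n) - 1
        edges = set()
        for x in range(1 << n):
            # Inner edges: every block is a copy of the block topology S1.
            for (y, y2) in block_edges:
                edges.add(frozenset({(x, y), (x, y2)}))
            # Inter-block edges: beta_j is handled by block vertex j mod N1,
            # and the complement function gamma by block vertex n mod N1.
            for j in range(n):
                edges.add(frozenset({(x, j % n1), (x ^ (1 << j), j % n1)}))
            edges.add(frozenset({(x, n % n1), (x ^ mask, n % n1)}))
        return edges

    # Example: SMH3(C4), a 4-node ring substituted into MH3, has 8*4 = 32
    # vertices, each of degree 2 + 1 = 3.
    ring4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
    assert len({v for e in smh_edges(3, 4, ring4) for v in e}) == 32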
For purposes of illustrating the flexibility of the preferred embodiment of the present invention, a cycle-connected topology (i.e., a ring) of N1 nodes, denoted CN1, with properties PROP(CN1)=(N1, 2, [N1/2]), is used as S1 for SMHn(S1). In CN1, vertex i for 0≤i<N1 is connected to vertex (i+1) mod N1. Then the diameter of SMHn(CN1) is no more than [3·N1/2]+[n/2]. Thus, the Substituted and Modified Hypercube topology will support a wide variety of topologies for the block topology S1.
Another example of an efficient block topology S1 for the SMHn(S1) Substituted Topology is to use a Modified Hypercube of lesser dimension for the vertices of the Modified Hypercube basic topology. An example of this topology is shown in Figure 8. For the basic topology MH3 shown in Figure 6, each vertex of MH3 is substituted by a Modified Hypercube of dimension 2 to create SMH3(S1) where S1=MH2. The resulting substituted topology has 32 processors (N=32), a diameter of 4, and a degree of 4.
Figure 9 shows the resulting Substituted and Modified Hypercube SMH3(MH2) as the new substituted topology with S2=MH3 and S1=MH2. Each node within the SMH3 of Figure 9 is addressed using a binary two-tuple; the address tuples shown in Figure 9 use decimal numbers for brevity. As previously discussed, each vertex of MH3 has been replaced by an MH2 such that each MH2 is a class of nodes having the same inter-address. For example, the vertices that belong to the same class x=7 are [7,0], [7,1], [7,2], and [7,3]. This MH2 block topology individually has a diameter d1 equal to 1 and a degree p1 equal to 3.
To give another example of the flexibility of the SMHn scheme, let n=15 for S2 and let S1 be a loop (cycle) of four nodes with addresses 0, 1, 2, 3 decimal (or 00, 01, 10, 11 binary). The address of each node in the SMH15(C4) topology comprises a two-tuple with the inter-address x being a 15-bit binary address ranging from x0 through x14 and the inner-address y being a 2-bit binary address ranging from y0 through y1. In a standard hypercube of dimension 15, the diameter of the hypercube is 15, the degree of the hypercube is 15, and the number of nodes N equals 32,768. In a Modified Hypercube of dimension 15, the diameter is 8, the degree is 16, and the number of nodes remains the same. In the present example, a Substituted and Modified Hypercube SMH15(C4) has 131,072 nodes with a degree of 6 and a diameter of 13. Since the block topology S1 only contains four nodes and each vertex of the MH15 has a degree of 16, requiring each block to have 16 I/O lines, the mapping of the nodes of the block topology onto the basic topology is not a simple 1-to-1 correspondence. Each node within the block topology therefore must map onto four edges of the basic topology. Since the four nodes within the block topology S1 already have connections to two other nodes, four additional connections are required of each node to connect the nodes of the block topology to the edges of the basic topology. Thus, the degree of the system is equal to 6.
The S1 block topology must be a cycle-connected topology. A requirement for a cycle-connected topology is that there must be a path passing through all of the nodes exactly once. Examples of cycle-connected topologies that can be used as S1 in the present invention are the ring, any hypercube including the Modified Hypercube, any fully connected topology, etc. An example of a topology which is not cycle-connected, and cannot be used as a block topology for the present invention, is the tree structure.
Table 2 lists the diameter and ports for the SMHn(S1) topologies where several S1 block topologies are listed. Table 2 clearly shows the advantages of the Substituted and Modified Hypercube topologies of the preferred embodiment of the present invention. The diameter and degree of each of the Substituted Topologies can be calculated from the general formulas given below.
[TABLE 2 — diameter and number of ports of SMHn(S1) for several block topologies S1, reproduced as an image in the original document]
As another example of the SMHn topology, consider SMH15(C8), where the basic topology is a Modified Hypercube of degree 16 and C8 is an 8-node S1 cycle of degree 2 and diameter 4. There are 2^18 processors in SMH15(C8). The degree and the diameter of SMH15(C8) are 4 and 20 respectively. In general, when the number of nodes in the cycle-connected topology increases, the degree of the resulting topology decreases and its diameter increases. Conversely, when the number of nodes in the cycle-connected topology decreases, the degree of the resulting topology increases and its diameter decreases. For instance, the degree and the diameter of SMH15(C32) are 3 and 32 respectively. The degree and the diameter of SMH15(C4) are 6 and 14 respectively.
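This trade-off can be tabulated mechanically from the general formulas N=N1·2^n and p=2+[(n+1)/N1] for a ring block topology. A short illustrative Python sketch (names hypothetical) evaluating the examples just given:

    import math

    def smh_cycle_props(n, k):
        # Node count and degree of SMHn(Ck) for a k-node ring block topology.
        nodes = k * 2 ** n                    # N = N1 * 2^n
        degree = 2 + math.ceil((n + 1) / k)   # ring degree 2 plus extra edges
        return nodes, degree

    for k in (4, 8, 32):
        print("SMH15(C%d):" % k, smh_cycle_props(15, k))
    # SMH15(C4):  (131072, 6)
    # SMH15(C8):  (262144, 4)
    # SMH15(C32): (1048576, 3)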
Thus, depending upon the demands of the application for the multiprocessor system, the topology can be selected accordingly.
The mapping of the nodes of the block topology onto the edges of the basic topology is a matter of choice and is not limited to a specific algorithm. However, by keeping the mapping orderly, the routing algorithm for transferring messages throughout the multiprocessor system will be simplified. An example of mapping in this illustration would be to start with the node in the block topology having address 00, which is mapped onto four edges of the basic topology. The addresses of the nodes outside the block onto which the present node is mapped would be (X̄0, X1, X2, X3, ..., X14, 00), (X0, X1, X2, X3, X̄4, X5, ..., X14, 00), (X0, ..., X7, X̄8, X9, ..., X14, 00), and (X0, ..., X11, X̄12, X13, X14, 00). The foregoing mapping of nodes within the block topology onto edges in the basic topology is not a unique way of assigning the connections. Those skilled in the art will readily recognize a wide variety of mappings.
Note that the mapping of the node address 00 within the block topology connects to the nodes within other block topologies also having a local address of 00. This can also be seen in the SMH3(MH2) of Figure 9. This mapping technique is also used to facilitate the routing algorithm and does not necessarily have to be adhered to in mapping one block topology onto another block topology through the edges of the basic topology.
Those skilled in the art will readily recognize the wide variety of block topologies which can be used in combination with the Modified Hypercube MHn to provide new topologies of reduced diameter and degree. The examples given here using a Modified Hypercube as the block topology in SMHn(S1) are for illustration purposes only and are not intended to limit the scope of the present invention. Those skilled in the art will readily recognize the extension of the SMHn topology to very large structures, and the block topology used to create the Substituted Topology can likewise be very large. Although the applicants have described preferred embodiments of the present invention in which the block topology of a Substituted and Modified Hypercube is itself a Modified Hypercube or a cycle-connected topology, various design considerations will determine the actual topologies substituted, based on, for example, fixing the degree or fixing the diameter desired in the resulting system to tailor the multiprocessor system to its preferred application.
Comparing the Substituted and Modified Hypercube topology SMHn(S1) to the prior art topologies shown in Table 1, the diameter of SMHn is d=[n/2]+[3·N1/2] and the degree (number of ports) is p=p1+[(n+1)/N1]. There is no congestion in the Substituted and Modified Hypercube, in that the message or information distribution is facilitated by the Modified Hypercube used as the basic topology. The SMHn topology is able to emulate the ring, mesh, tree, n-cube and cube-connected cycle topologies of the prior art. Its robustness is similar to that of the Modified Hypercube topology due to the additional interconnection paths, and hence it is very good. And similar to the hypercube and Modified Hypercube, the Substituted and Modified Hypercube is very uniform in its structure, lending itself to a standardized routing algorithm.
In comparing the prior art topologies with the Modified Hypercube and the Substituted and Modified Hypercube, it is clear that the Substituted and Modified Hypercube is superior to all the prior art multiprocessor topologies for very large multiprocessor systems. It gains efficiency as the number of processors N grows.
Routing Algorithm for SMHn The routing algorithm for the Modified Hypercube takes advantage of the fact that direct connections are made within the hypercube structure between nodes with complemented addresses. This reduces the overall diameter of the MHn by approximately one-half, since the additional edges tend to reduce the worst-case minimum distances between any two nodes. In effect, the routing algorithm can use the additional edges as a "shortcut" to find the shortest path between nodes. The routing algorithm for the Substituted and Modified Hypercube additionally uses as many inter-block transfers as possible to shorten paths, and it takes advantage of the regularity inherent in hypercube topologies in that each node can be programmed with the same routing algorithm, different only in the embedded address of the resident node. The routing algorithm then uses the resident node address to calculate which I/O port to forward the message to if the message is not destined for the resident node. The following routing algorithm takes advantage of the SMHn(S1) topology to find the minimum distance between any two nodes. This algorithm is described in detail and in general mathematical terms to show the applicability of the routing algorithm to SMHn(S1) topologies where the block topology S1 can be any one of a wide variety of topologies. Consider any pair of nodes (x,y) and (x',y') in
SMHn(S1). The worst-case minimum distance between any two nodes x and x' in a Modified Hypercube MHn is [n/2]. This is due to the fact that if the Hamming distance between x and x' is greater than [n/2], then x can first be connected to the complement ɣ(x), and the distance from ɣ(x) to x' is then less than n-[n/2]. Assume that the Hamming distance s between x and x' is [n/2], and assume that [ξ1, ξ2, ..., ξs] is a set of cubic functions with minimum cardinality such that ξ1·ξ2·...·ξs(x)=x'. Let Pj denote a subset of [1, 2, ..., s] such that for each i ∈ Pj, a block-w node (w,j) is connected directly to node (ξi(w),j). When s is much greater than N1, it is possible that Pj is not empty for every 0≤j<N1. Since ξiξj(x)=ξjξi(x) for any two cubic functions ξi and ξj, the functions ξi for 1≤i≤s can be relabeled such that P0=[1, 2, ..., I0] and Pj=[Ij-1+1, Ij-1+2, ..., Ij] for 0<j<N1. Thus IN1-1=s. For simplicity, let us also assume y=0. It can be seen that the worst possible choice (in terms of the longest of the shortest paths between (x,y=0) and (x',y')) for y' is y'=[N1/2]. Let αj denote the function ξjξj-1...ξ2ξ1. Therefore, the maximum shortest distance between the two vertices (x,y=0) and (x',y'=[N1/2]) is realized by the following path: (x,0), (α1(x),0), ..., (αI0(x),0), (αI0(x),1), (αI0+1(x),1), ..., (αI1(x),1), (αI1(x),2), ..., (αIN1-1(x)=αs(x)=x', N1-1), (x', N1-2), ..., (x', [N1/2]). The path length is about [n/2]+[3·N1/2]. Note that when N1 is small, the diameter is close to [n/2]. Now consider two vertices (x,y) and (x',y'). Let HD(x,x') denote the subset of [0, 1, ..., n-1] which contains all indexes in which x differs from x'; that is, the cardinality of HD is the Hamming distance of x and x'. Let H̄D(x,x') denote [0, 1, ..., n-1]-HD(x,x'). When the Hamming distance between x and x' is more than [n/2], a j may have to be chosen from H̄D(x,x') so that (x,y) is connected to (βj(x),y) before the complement function ɣ can be applied. This algorithm will be better understood from the detailed examples of the routing algorithm presented below.
The routing algorithm for the SMHn(S1) topology is very simple and efficient. The following is an example of a distributed routing algorithm according to a preferred embodiment of the present invention. Assume that processor node (x,y) has received a packet of information or data with (x',y') as its destination address, and let M(y)=[j | 0≤j≤n-1 and j mod N1=y].
The routing algorithm for forwarding communication packets between nodes in an SMHn(S1) multiprocessor system is as follows:
Step 1 If (x,y)=(x',y') then terminate (the destination is reached), else proceed to Step 2. Step 2 If x=x' and y≠y' then:
if y<y' and (y'-y)≤[N1/2] then send the packet to node (x, (y+1) mod N1); else if y<y' and (y'-y)>[N1/2] then send the packet to node (x, (y-1) mod N1); else if y>y' and (y-y')≤[N1/2] then send the packet to node (x, (y-1) mod N1); else if y>y' and (y-y')>[N1/2] then send the packet to node (x, (y+1) mod N1). Each of these types of packet transmissions is called an inner-block transfer.
This step is possible since the block topology is restricted to a cycle-connected topology, in which there is a cycle path in S1. Assume, without loss of generality, that this path is
0 →1 →2 →... →y →y+1 →... →N1-1 →0.
Step 3 Compute HD(x,x'), H̄D(x,x'), and M(y). Note that M(y) is fixed for each node (x,y); therefore, it can be precomputed and stored at each node location.
Step 4 If |HD(x,x')|≤[n/2] then:
if HD(x,x')∩M(y)≠∅ then randomly pick a j in HD(x,x')∩M(y) and send the packet to (βj(x),y); else (i.e., HD(x,x')∩M(y)=∅) send the packet to (x, (y+1) mod N1). Each of these types of packet transmissions is called an inter-block transfer.
Step 5 If |HD(x,x')|>[n/2] then:
if y=n mod N1 then send the packet to (ɣ(x),y); else if H̄D(x,x')∩M(y)≠∅ then randomly pick a j from H̄D(x,x')∩M(y) and send the packet to (βj(x),y); else send the packet to (x, (y+1) mod N1).
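The five steps translate directly into executable form. The following Python sketch is offered for illustration only: the identifiers are hypothetical, a ring block topology CN1 is assumed as required in Step 2, and the four cases of Step 2 are condensed into a single shortest-direction rule around the cycle. The function computes the next hop taken by the node currently holding the packet; iterating it, for instance with n=11 and N1=4, reproduces the hop sequence worked through in the example below.

    import random

    def route_step(n, n1, src, dst):
        # One routing decision at node src=(x,y) for a packet addressed to
        # dst=(x',y') in SMHn(C_N1). Here x is a tuple of n bits indexed by
        # j, so beta_j complements bit j and gamma complements every bit.
        x, y = src
        xp, yp = dst
        if src == dst:                                  # Step 1: arrived
            return src
        if x == xp:                                     # Step 2: inner-block,
            fwd = (yp - y) % n1                         # the short way around
            return (x, (y + 1) % n1 if fwd <= n1 // 2 else (y - 1) % n1)
        hd = {j for j in range(n) if x[j] != xp[j]}     # Step 3: HD(x,x'),
        hd_bar = set(range(n)) - hd                     # its complement, and
        m_y = {j for j in range(n) if j % n1 == y}      # M(y) (precomputable)

        def to_beta(j):                                 # send to (beta_j(x), y)
            return (tuple(b ^ (1 if k == j else 0) for k, b in enumerate(x)), y)

        if len(hd) <= n // 2:                           # Step 4
            if hd & m_y:
                return to_beta(random.choice(sorted(hd & m_y)))
            return (x, (y + 1) % n1)
        if y == n % n1:                                 # Step 5: complement edge
            return (tuple(1 - b for b in x), y)
        if hd_bar & m_y:
            return to_beta(random.choice(sorted(hd_bar & m_y)))
        return (x, (y + 1) % n1)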
Steps 4 and 5 improve the diameter from the general bound d≤(d2+1)·d1 to d≤[n/2]+[3·N1/2]. This is accomplished by using as many inter-block transfers as possible to achieve the shortest path between the source and destination in a substituted topology. As an example of the operation of the above routing algorithm, consider SMH11(C4), where the basic topology MH11 is a Modified Hypercube of dimension 11 and the block topology C4 is a cycle of four nodes (a 2-cube, not modified). According to the previous terminology assumptions, the address of each node is denoted by (x=x0x1...x10, y=y0y1) and the nodes of block x are connected as follows:
1) Node (x,y=00) is connected to the following five nodes: (β0(x),00), (β4(x),00), (β8(x),00), (x,01) and (x,11).
2) Node (x,y=01) is connected to the following five nodes: (β1(x),01), (β5(x),01), (β9(x),01), (x,00) and (x,10).
3) Node (x,y=10) is connected to the following five nodes: (β2(x),10), (β6(x),10), (β10(x),10), (x,01) and (x,11).
4) Node (x,y=11) is connected to the following five nodes: (ɣ(x),11), (β3(x),11), (β7(x),11), (x,10) and (x,00). (The remaining connections within this Substituted Topology are intuitively obvious.)
Assume that Node (00000000000, 01) wants to send a packet to Node (11101011010, 11). Then the following sequence of operations is carried out by each node which receives this packet.
1) Source node (00000000000, 01) first computes HD=[0, 1, 2, 4, 6, 7, 9], H̄D=[3, 5, 8, 10], and M(01)=[1, 5, 9]. Since |HD|=7>[n/2]=5 and 5 is the only index in H̄D∩M(01), this packet is sent to Node (00000100000, 01).
2) After Node (00000100000, 01) receives this packet, it computes HD=[0, 1, 2, 4, 5, 6, 7, 9], H̄D=[3, 8, 10], and M(01)=[1, 5, 9]. Since H̄D∩M(01)=∅, this packet is sent to Node (00000100000, 10).
3) After Node (00000100000, 10) receives this packet, it computes HD=[0, 1, 2, 4, 5, 6, 7, 9], H̄D=[3, 8, 10], and M(10)=[2, 6, 10]. Since |HD|=8>[n/2]=5 and 10 is the only index in H̄D∩M(10), the packet is sent to Node (00000100001, 10).
4) For Node (00000100001, 10), HD=[0, 1, 2, 4, 5, 6, 7, 9, 10], H̄D=[3, 8], and M(10)=[2, 6, 10]. Since |HD|>[n/2] and H̄D∩M(10)=∅, Node (00000100001, 10) will send this packet to Node (00000100001, 11).
5) Since |HD|>[n/2] and 11₁₀ mod 4₁₀=11₂ (i.e., y=n mod N1), the complement function is indicated to find the shortest path, and this packet is forwarded to Node (ɣ(00000100001), 11); that is, Node (11111011110, 11). (The subscript 10 indicates the decimal number system and the subscript 2 indicates the binary number system.)
6) For Node (11111011110, 11), HD=[3, 8] and M(11)=[3, 7]. Therefore this packet is sent to Node (11101011110, 11).
7) For Node (11101011110, 11), HD=[8] and M(11)=[3, 7]. Since HD∩M(11)=∅, this packet is sent to Node (11101011110, 00).
8) For Node (11101011110, 00), HD=[8] and M(00)=[0, 4, 8]. Therefore, this packet is sent to Node (11101011010, 00).
9) At last Node (11101011010, 00) sends the packet to its destination Node (11101011010, 11) within the same block.
Fixing the Degree or Diameter
When the degree of the resulting topology has to be 3, the only possible SMHn topology is SMHn(Cn+1), where Cn+1 is a ring (cycle) topology with n+1 nodes. According to the previously defined terminology, let PROP(SMHn(Cn+1))=(N,p,d). Then the number of nodes N=(n+1)·2^n, the number of ports or degree p=3, and the diameter d is roughly 2·n (or 2·(log2N-log2(n+1))). This topology is similar to the previously proposed cube-connected cycles. However, its diameter is shorter than the diameter of the cube-connected cycles due to the use of a Modified Hypercube as the basic topology. Since the degree of a ring is two, in order to keep the degree of the resulting substituted topology at 3, the number of nodes in the ring cannot be too small. This will effectively keep the diameter large (i.e., compared with log2N/2).
When the desired degree is 4 and the two topologies used in the substituting scheme are MHn and S1, then S1 can be either a ring of k=[(n+1)/2] nodes or a topology of degree 3 with n+1 nodes. In the former case, N=k·2^n and the diameter d is roughly [n/2]+k+[k/2] (i.e., less than 3/2·log2N). In the latter case, S1 can be any topology of degree 3 mentioned above. For instance, the block topology S1 can be a chordal ring of n+1 nodes. Therefore, in the resulting topology N=(n+1)·2^n and d is roughly [n/2]+(n+1)+√(n+1), which is less than 3/2·log2N. It can be seen from the foregoing that there is a family of SMHn topologies similar to the cube-connected cycles but with shorter diameter and better properties, such as those described above. The two simple cases discussed above for fixing the degree are not limiting, and other examples are easily derived by those skilled in the art. By fixing the degree, a designer of a multiprocessor system can start with the maximum desired number of ports per microprocessor and build the largest multiprocessor feasible for the amount of overhead the system can support due to the size of the diameter of the system.
The topologies discussed above can be programmed to emulate other structures such as those based on the cube-connected cycle (CCC). All CCC algorithms can easily be embedded into the SMHn topology. However, the basic SMHn topology is more robust and has better mapping flexibility than the CCC, and hence is preferred.
When the diameter of the resulting topology is the primary concern, S1 should be a topology with a small number of nodes. For instance, in SMHn(C4), N=4·2^n=2^(n+2), the degree p=2+[(n+1)/4]=[((log2N)-1)/4]+2, and the diameter d=[n/2]+5 (i.e., roughly 1/2·log2N when N is large).
In SMHn(H3), where H3 is the binary hypercube of 8 nodes, N=2^(n+3), p=[(n+1)/8]+3, and d=[n/2]+8+3=[(log2N-3)/2]+11. Therefore, the diameter of the resulting topology can be fairly close to 1/2·log2N and the degree can be close to 1/4 or 1/8 of log2N.
Using the above example and applying it to the prior art NCUBE/10 machine, a more efficient structure can be created using fewer ports and resulting in a greater number of nodes. The basic NCUBE/10 processor node supports ten I/O ports in a 10-cube configuration to allow it to communicate with other processors in the system. In this prior art system, the number of processors N=1,024, the diameter d=10, and the number of ports or degree p=10. Using an SMH10(MH2) topology applied to the prior art NCUBE/10 processor nodes, an improvement in the number of nodes that the multiprocessor can support can be achieved while also reducing the diameter and the actual number of ports required for implementation. For example, the basic topology is a Modified Hypercube MH10. In this Modified Hypercube, the number of nodes N2=2^10 or 1,024 nodes.
The diameter of MH10 is d2=5 and the number of ports required is p2=11. By taking a simple Modified Hypercube MH2 as the block topology, an improvement in the overall structure can be gained. For MH2, the number of nodes N1=2^2 or 4, the diameter d1=1, and the number of ports required p1=3. Using the formulas and the terminology defined above, for SMH10(MH2), N=N1·N2=4,096 nodes. The diameter of this system is d=9, and the number of ports required in this Substituted and Modified Hypercube topology is a maximum of 6. Hence, both the number of ports required to implement this system and the diameter have actually been reduced. The result is a superior multiprocessor system with four times the nodes of the NCUBE/10 machine and better robustness.
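The node and port counts in this comparison follow directly from the general substitution formulas; as a short illustrative check in Python (variable names hypothetical):

    import math

    n = 10                       # basic topology MH10
    N2, p2 = 2 ** n, n + 1       # N2 = 1,024 nodes, p2 = 11 ports per node
    N1, p1 = 4, 3                # block topology MH2: N1 = 4 nodes, p1 = 3 ports

    N = N1 * N2                  # 4,096 nodes
    p = p1 + math.ceil(p2 / N1)  # 3 + [11/4] = 6 ports
    print(N, p)                  # prints: 4096 6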
Thus, a new class of SMH topologies has been invented. Its degree can range from 3 to log2N. Therefore, a designer of a multicomputer can choose the topology in this class which fits his needs. Compared with similar topologies of the same degree and the same number of nodes, the diameter of this class of topologies is usually shorter, and the topologies are also more robust. Just as with the hypercube topology, the routing algorithm that can be used in this class of topologies is very simple and efficient.
The preferred embodiments of the present invention present many advantages over the prior art. The diameter has been reduced, since the Modified Hypercube is employed as the basic topology. The degree of the topology can be reduced by the substituting scheme; this also offsets the increased degree resulting from using a Modified Hypercube instead of a standard hypercube as the basic topology in the substituting scheme. The robustness of the SMH topologies is better than that of the hypercube. This is due to the fact that each node in the hypercube is replaced by a set of nodes (i.e., a block topology); therefore, the number of paths between two nodes in the original hypercube is less than the number of paths between two nodes chosen from the two block topologies in an SMH topology which correspond to those two nodes in the hypercube. There is no congestion or bottleneck in the communication flow of SMH topologies, and the routing algorithm is very simple and efficient.
Although specific logic configurations have been illustrated and described for the preferred embodiments of the present invention set forth herein, it will be appreciated by those of ordinary skill in the art that any conventional logical arrangement which is calculated to achieve the same purpose may be substituted for the specific configurations shown. Thus, although the preferred embodiments have been described in terms of interprocessor communication for parallel processing computers, those skilled in the art will readily recognize the application of these techniques to inter-node communication where the nodes may be communication routers, telephone switching exchanges, and other types of communication networks.
While the present invention has been described in connection with the preferred embodiments thereof, it will be understood that many modifications will be readily apparent to those of ordinary skill in the art, and this application is intended to cover any adaptations or variations thereof. Therefore, it is manifestly intended that this invention be limited only by the claims and the equivalents thereof.

Claims

WHAT IS CLAIMED IS:
1. A communications network, comprising: a plurality of nodes, each having a unique address; first communication paths connected between a plurality of said nodes and arranged as a hypercube configuration; and second communication paths connected between a plurality of said nodes not directly connected by said first communication paths.
2. A network according to claim 1 wherein said second communication paths connect nodes having inverse addresses.
3. A network according to claim 1 wherein each of said nodes is a single processing element of a multiprocessor computer.
4. A network according to claim 1 wherein said second communication paths are limited in number and increase the number of required ports on each of said nodes by 1.
5. A communications system, comprising: a first communications network arranged as a modified hypercube such that each vertex is connected by edges in a hypercube configuration and each vertex is further connected to vertices not normally directly connected by said hypercube configuration; and each of said vertices comprising a second communications network.
6. The system according to claim 5 wherein each of said second communications networks is arranged as a modified hypercube topology.
7. The system according to claim 5 wherein each of said second communication networks is arranged as a cycle-connected topology.
8. The system according to claim 5 wherein each of said second communications networks is arranged as a hypercube topology.
9. A method of constructing a modified hypercube topology, comprising the steps of:
(a) selecting a plurality of nodes;
(b) arranging said plurality of nodes in a hypercube topology; and
(c) adding additional communication paths between a plurality of said nodes not directly connected in said hypercube topology.
10. The method according to claim 9 wherein said adding step is limited to adding one additional communication path for each node in said system.
11. The method of routing a message from a source node having a source address to a destination node having a destination address in a Modified Hypercube topology, comprising the steps of:
(a) calculating the distance from the source node to the destination node;
(b) sending the message to a first adjacent node having an address that has a Hamming distance of one if the distance from the first node to the destination node is less than or equal to the diameter of the Modified Hypercube;
(c) sending the message to a second adjacent node having a complemented address if the distance from the second node to the destination node is greater than the diameter of the Modified Hypercube; and
(d) repeating steps (a) through (c) until the message arrives at the destination.
12. The method of sending a message from a source node in a source block to a destination node in a destination block in a substituted topology, comprising the steps of:
(a) performing an inter-block transfer between the source block and a first block if the distance between the first block and the destination block is less than the distance between the source block and the destination block; and
(b) performing an inner-block transfer from the source node in the source block to a second node in the source block if the distance between the first block and the destination block is greater than the distance between the source block and the destination block.
PCT/US1988/002782 1987-08-14 1988-08-12 Hypercube topology for multiprocessor systems with added communication paths between nodes or substituted corner topologies WO1989001665A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US085,980 1979-10-18
US8598087A 1987-08-14 1987-08-14

Publications (1)

Publication Number Publication Date
WO1989001665A1 true WO1989001665A1 (en) 1989-02-23

Family

ID=22195201

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1988/002782 WO1989001665A1 (en) 1987-08-14 1988-08-12 Hypercube topology for multiprocessor systems with added communication paths between nodes or substituted corner topologies

Country Status (1)

Country Link
WO (1) WO1989001665A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1992006436A2 (en) * 1990-10-03 1992-04-16 Thinking Machines Corporation Parallel computer system
FR2668626A1 (en) * 1990-10-30 1992-04-30 Thomson Csf METHOD FOR CONFIGURING A COMPUTER SYSTEM WITH MESH.
EP0817097A2 (en) * 1996-07-01 1998-01-07 Sun Microsystems, Inc. Interconnection subsystem for a multiprocessor computer system with a small number of processors using a switching arrangement of limited degree

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4247892A (en) * 1978-10-12 1981-01-27 Lawrence Patrick N Arrays of machines such as computers
EP0132926A2 (en) * 1983-05-31 1985-02-13 W. Daniel Hillis Parallel processor
US4739476A (en) * 1985-08-01 1988-04-19 General Electric Company Local interconnection scheme for parallel processing architectures
US4730322A (en) * 1985-09-27 1988-03-08 California Institute Of Technology Method and apparatus for implementing a maximum-likelihood decoder in a hypercube network
US4729095A (en) * 1986-05-19 1988-03-01 Ncube Corporation Broadcast instruction for use in a high performance computer system
US4766534A (en) * 1986-10-16 1988-08-23 American Telephone And Telegraph Company, At&T Bell Laboratories Parallel processing network and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
COMMUNICATIONS OF ACM, Vol. 24, No. 5, May 1981, F.P. PREPARATA, "The Cube-connected Cycles: A Versatile Network for Parallel Computation", see pages 300, 302-303. *
IEEE MICRO OCTOBER 1986, JOHN P. HAYES et al., "A Microprocessor-based Hypercube Supercomputer", see pages 7, 14. *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1992006436A2 (en) * 1990-10-03 1992-04-16 Thinking Machines Corporation Parallel computer system
WO1992006436A3 (en) * 1990-10-03 1992-10-15 Thinking Machines Corp Parallel computer system
US5333268A (en) * 1990-10-03 1994-07-26 Thinking Machines Corporation Parallel computer system
FR2668626A1 (en) * 1990-10-30 1992-04-30 Thomson Csf METHOD FOR CONFIGURING A COMPUTER SYSTEM WITH MESH.
EP0484200A1 (en) * 1990-10-30 1992-05-06 Thomson-Csf Configuration process of a computerised system
EP0817097A2 (en) * 1996-07-01 1998-01-07 Sun Microsystems, Inc. Interconnection subsystem for a multiprocessor computer system with a small number of processors using a switching arrangement of limited degree
EP0817097A3 (en) * 1996-07-01 2000-07-05 Sun Microsystems, Inc. Interconnection subsystem for a multiprocessor computer system with a small number of processors using a switching arrangement of limited degree

Similar Documents

Publication Publication Date Title
US5170482A (en) Improved hypercube topology for multiprocessor computer systems
EP0733237B1 (en) Multidimensional interconnection and routing network for an mpp computer
EP0197103B1 (en) Load balancing for packet switching nodes
US5313645A (en) Method for interconnecting and system of interconnected processing elements by controlling network density
Johnsson et al. Optimum broadcasting and personalized communication in hypercubes
US5701416A (en) Adaptive routing mechanism for torus interconnection network
US5630162A (en) Array processor dotted communication network based on H-DOTs
Finkel Processor interconnection strategies
US11531637B2 (en) Embedding rings on a toroid computer network
US11372791B2 (en) Embedding rings on a toroid computer network
Yeh et al. Routing and embeddings in cyclic Petersen networks: an efficient extension of the Petersen graph
WO1989001665A1 (en) Hypercube topology for multiprocessor systems with added communication paths between nodes or substituted corner topologies
US11614946B2 (en) Networked computer
Yokota et al. A prototype router for the massively parallel computer RWC-1
US11169956B2 (en) Networked computer with embedded rings field
JPH07114515A (en) Decentralized memory computer with network for synchronous communication
Prakash et al. VLSI implementation of a wormhole router using virtual channels
Yang et al. Adaptive wormhole routing in k-ary n-cubes
Scheidig et al. An efficient organization for large, network-based multicomputer systems
Ravikumar VLSI Implementation of a Wormhole Router Using Virtual Channels
Kirmani Design and Implementa
Parhami Other Mesh-Related Architectures
Séguin et al. Computer Science Division Electrical Engineering and Computer Sciences University of California, Berkeley, CA 94720

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP KR

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE FR GB IT LU NL SE