WO2004059467A2 - A method for accessing a bus in a clustered instruction level parallelism processor - Google Patents

A method for accessing a bus in a clustered instruction level parallelism processor

Info

Publication number
WO2004059467A2
WO2004059467A2 (application PCT/IB2003/005584)
Authority
WO
WIPO (PCT)
Prior art keywords
bus
clusters
switching means
cluster
sending
Prior art date
Application number
PCT/IB2003/005584
Other languages
French (fr)
Other versions
WO2004059467A3 (en)
Inventor
Orlando M. Pires Dos Reis Moreira
Andrei Terechko
Victor M. G. Van Acht
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Priority to AU2003283672A priority Critical patent/AU2003283672A1/en
Priority to US10/540,409 priority patent/US20060095710A1/en
Priority to EP03775653A priority patent/EP1581862A2/en
Priority to JP2004563420A priority patent/JP2006512655A/en
Publication of WO2004059467A2 publication Critical patent/WO2004059467A2/en
Publication of WO2004059467A3 publication Critical patent/WO2004059467A3/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/30: Arrangements for executing machine instructions, e.g. instruction decode
    • G06F 9/38: Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F 9/3824: Operand accessing
    • G06F 9/3826: Bypassing or forwarding of data results, e.g. locally between pipeline stages or within a pipeline stage
    • G06F 9/3828: Bypassing or forwarding with global bypass, e.g. between pipelines, between clusters
    • G06F 9/3885: Concurrent instruction execution using a plurality of independent parallel functional units
    • G06F 9/3889: Parallel functional units controlled by multiple instructions, e.g. MIMD, decoupled access or execute
    • G06F 9/3891: Parallel functional units organised in groups of units sharing resources, e.g. clusters
    • G06F 13/00: Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38: Information transfer, e.g. on bus
    • G06F 13/40: Bus structure

Abstract

The basic idea of the invention is to add switches along a bus, in order to divide the bus into smaller independent segments by opening/closing said switches. A clustered Instruction Level Parallelism processor comprises a plurality of clusters (C1 - C6), each comprising at least one register file (RF) and at least one functional unit (FU), a bus means (100) for connecting said clusters (C1 - C6), wherein said bus (100) comprises a plurality of bus segments (100a, 100b, 100c), and switching means (200) arranged between adjacent bus segments (100a, 100b, 100c). Said switching means (200) are used for connecting or disconnecting adjacent bus segments (100a, 100b, 100c). Furthermore, a method for accessing a bus (100) in a clustered Instruction Level Parallelism processor is shown. Said bus (100) comprises at least one switching means (200) along said bus (100). A cluster can either perform a sending operation based on a source register and a transfer word, or a receiving operation based on a destination register and a transfer word. Said switching means are then opened/closed according to said transfer word.

Description

Clustered ILP processor and a method for accessing a bus in a clustered ILP processor
The invention relates to a clustered Instruction Level Parallelism processor and a method for accessing a bus in a clustered Instruction Level Parallelism processor.
One main problem in the area of Instruction Level Parallelism (ILP) processors is the scalability of register file resources. In the past, ILP architectures have been designed around centralised resources to cover the need for a large number of registers to hold the results of all parallel operations currently being executed. The usage of a centralised register file eases data sharing between functional units and simplifies register allocation and scheduling. However, the scalability of such a single centralised register file is limited, since huge monolithic register files with a large number of ports are hard to build and limit the cycle time of the processor.
Recent developments in the areas of VLSI technologies and computer architectures suggest that a decentralised organisation might be preferable in certain areas. It is predicted that the performance of future processors will be limited by communication constraints rather than computation constraints. One solution to this problem is to partition resources and to physically distribute them over the processor, avoiding the long wires that have a negative effect on communication speed as well as on latency. This can be achieved by clustering. In a clustered processor several resources, like functional units and register files, are distributed over separate clusters. In particular, for clustered ILP architectures each cluster comprises a set of functional units and a local register file. The main idea behind clustered processors is to allocate those parts of a computation which interact frequently to the same cluster, whereas parts which communicate rarely, or whose communication is not critical, are allocated to different clusters. However, the problem is how to handle inter-cluster communication (ICC) on the hardware level (wires and logic) as well as on the software level (allocating variables to registers and scheduling). The most widely used ICC scheme is the full point-to-point connectivity topology, i.e. every two clusters have a dedicated wiring allowing the exchange of data. On the one hand, point-to-point ICC with full connectivity simplifies instruction scheduling, but on the other hand its scalability is limited due to the amount of wiring needed: N(N-1) connections, with N being the number of clusters, e.g. already 12 unidirectional connections for N = 4. Accordingly, the quadratic growth of the wiring limits the scalability to 2 - 10 clusters.
Furthermore, it is also possible to use partially connected networks for point-to-point ICC. Here the clusters are not connected to all other clusters (fully connected) but are, e.g., merely connected to adjacent clusters. Although this decreases the wiring complexity, programming the processor becomes harder, a problem that existing automatic scheduling and allocation tools do not solve satisfactorily.
Yet another ICC scheme is the global bus connectivity. The clusters are fully connected to each other via a bus, while requiring much less hardware resources compared to the above full point-to-point connectivity topology. Additionally, this scheme allows a value multicast, i.e. the same value can be sent to several clusters at the same time, or in other words several clusters can get the same value by reading the bus at the same time. The scheme is furthermore based on static scheduling, hence neither an arbiter nor any control signals are necessary. However, since the bus constitutes a shared resource, only one transfer per cycle is possible, so the communication bandwidth is very low. Moreover, the latency of the ICC will increase due to the propagation delay of the bus. The latency will further increase with an increasing number of clusters, limiting the scalability of a processor with such an ICC scheme.
The problem with the limited communication bandwidth can be partially overcome by using a multi-bus, where two busses are used for the ICC instead of one.
Although this will increase the communication bandwidth, it will also increase the hardware overhead without decreasing the latency of the bus.
In another ICC communication scheme local busses are used. This ICC scheme is a partially connected communication scheme: the local busses merely connect a certain number of clusters, but not all of them at once. The disadvantage of this scheme is that it is harder to program, since, e.g., if a value is to be sent between clusters connected to different local busses, it cannot be sent directly within one cycle; at least two cycles are needed.
Accordingly, the advantages and disadvantages of the known ICC schemes can be summarised as follows. The point-to-point topology has a high bandwidth, but the complexity of the wiring increases with the square of the number of clusters, and a multicast, i.e. sending a value to several other clusters, is not possible. On the other hand, the bus topology has a lower complexity, since the complexity increases linearly with the number of clusters, and allows multicast, but has a lower bandwidth. The ICC schemes can either be fully connected or partially connected. A fully-connected scheme has a higher bandwidth and a lower software complexity, but comes with a higher wiring complexity and is less scalable. A partially-connected scheme unites good scalability with lower hardware complexity, but has a lower bandwidth and a higher software complexity. It is therefore an object of the invention to improve the bandwidth of a bus within an ICC scheme for a clustered ILP processor, while decreasing the latency of said bus and without unduly increasing the complexity of the underlying programming system.
This problem is solved by an ILP processor according to claim 1 and a method for accessing a bus in a clustered Instruction Level Parallelism processor according to claim 5.
The basic idea of the invention is to add switches along the bus, in order to divide the bus into smaller independent segments by opening/closing said switches.
According to the invention, a clustered Instruction Level Parallelism processor comprises a plurality of clusters C1-C4, a bus means 100 with a plurality of bus segments 100a, 100b, 100c, and switching means 200a, 200b arranged between adjacent bus segments 100a, 100b, 100c. Said bus means 100 is used for connecting said clusters C1-C4, each of which comprises at least one register file RF and at least one functional unit FU. Said switching means 200 are used for connecting or disconnecting adjacent bus segments 100a, 100b, 100c. By splitting the bus into different segments, the latency of the bus within one bus segment is improved. Although the overall latency of the total bus, i.e. with all switches closed, still increases linearly with the number of clusters, data moves between local or adjacent clusters can have lower latencies than moves over different bus segments, i.e. over different switches. A slow-down of local communication, i.e. between neighbouring clusters, due to the global interconnect requirements of a bus ICC can be avoided by opening switches, so that shorter busses, i.e. bus segments, with lower latencies are obtained. Furthermore, incorporating the switches is cheap and easy to implement, while increasing the available bandwidth of the bus and mitigating the latency problems caused by a long bus, without giving up a fully-connected ICC. According to an aspect of the invention, said bus means 100 is a multi-bus comprising at least two busses, which will increase the communication bandwidth.
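To make the latency argument concrete, consider the following toy model (our own illustrative sketch, not part of the patent; the delay constants and the assumption that delay grows with the number of traversed segments are hypothetical):

    # Toy latency model for a segmented bus: a move pays a base delay plus
    # a per-segment delay for every bus segment it has to traverse.
    def move_latency(src, dst, cluster_segment, base_delay=1.0, segment_delay=0.5):
        """Latency of moving a value from cluster src to cluster dst.

        cluster_segment[i] is the index of the bus segment cluster i is
        attached to; all switches between the two segments must be closed.
        """
        spanned = abs(cluster_segment[dst] - cluster_segment[src]) + 1
        return base_delay + segment_delay * spanned

    # Six clusters, two per segment, as in the third embodiment (Fig. 7).
    seg = [0, 0, 1, 1, 2, 2]
    print(move_latency(2, 3, seg))  # local move, 1 segment   -> 1.5
    print(move_latency(0, 5, seg))  # global move, 3 segments -> 2.5

Under such a model a segment-local move is always cheaper than a whole-bus move, which is exactly why opening the switches pays off for local traffic.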
The invention also relates to a method for accessing a bus 100 in a clustered Instruction Level Parallelism processor. Said bus 100 comprises at least one switching means 200 along said bus 100. A cluster C1-C4 can either perform a sending operation based on a source register and a transfer word, or a receiving operation based on a destination register and a transfer word. Said switching means 200 are then opened/closed according to said transfer word.
From a software viewpoint, the scheduling of a split or segmented bus is not much more complex than that of a global bus ICC, while merely a few logic gates are needed to control a switch.
According to a further aspect of the invention, said transfer word represents the sending direction for the sending operation and the receiving direction for the receiving operation, allowing the control of the switches according to the direction of a data move.
The invention will now be described in more detail with reference to the drawing, in which:
Fig. 1 shows a point-to-point inter-cluster communication ICC scheme;
Fig. 2 shows an ICC scheme via a bus;
Fig. 3 shows an ICC scheme via a multi-bus;
Fig. 4 shows an ICC scheme via local busses;
Fig. 5 shows an ICC scheme via a segmented bus according to a first embodiment;
Fig. 6 shows an ICC scheme via a segmented bus according to a second embodiment; and
Fig. 7 shows an ICC scheme via a segmented bus according to a third embodiment.
The most widely used ICC scheme is the full point-to-point connectivity topology, i.e. every two clusters have a dedicated wiring allowing the exchange of data. A typical ILP processor with four clusters is shown in Fig. 1.
Fig. 2 shows another ICC scheme with a global bus connectivity. The clusters are fully connected to each other via a bus, while requiring much less hardware resources compared to the ICC scheme shown in Fig. 1. Additionally, this scheme allows a value multicast, i.e. the same value can be sent to several clusters at the same time, or in other words several clusters can get the same value by reading the bus at the same time. The problem of the limited communication bandwidth can be partially overcome by using a multi-bus as shown in Fig. 3, where two busses are used for the ICC instead of one. Although this will increase the communication bandwidth, it will also increase the hardware overhead without decreasing the latency of the bus. Fig. 4 shows another ICC communication scheme using local busses. This ICC scheme is a partially connected communication scheme: the local busses merely connect a certain number of clusters, but not all of them at once, e.g. clusters 1 to 3 are connected to one local bus and clusters 2 to 4 are connected to a second local bus. The disadvantage of this scheme is that it is harder to program, since, e.g., if a value is to be sent from cluster 1 to cluster 4, it cannot be sent directly within one cycle; at least two cycles are needed.
Fig. 5 shows an inter-cluster communication ICC scheme via a segmented bus according to a first embodiment. Said ICC scheme may be incorporated into a VLIW processor. The scheme comprises four clusters C1 - C4 connected to each other via a bus 100 and one switch 200 segmenting the bus. When the switch 200 is open, one data move can be performed between cluster 1 C1 and cluster 2 C2 and/or another between cluster 3 C3 and cluster 4 C4 within one cycle. On the other hand, when the switch 200 is closed, data can be moved within one cycle from cluster 1 C1 or cluster 2 C2 to either cluster 3 C3 or cluster 4 C4.
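This scheduling rule can be captured in a few lines of code. The following sketch is a behavioural model of our own (the function names and segment numbering are assumptions, not the patent's): it treats the bus as two segments joined by one switch and checks whether a set of moves can share a cycle.

    # Illustrative model of the first embodiment: four clusters, one switch.
    # Clusters 0 and 1 sit on segment 0; clusters 2 and 3 on segment 1.
    SEGMENT = {0: 0, 1: 0, 2: 1, 3: 1}

    def segments_used(src, dst, switch_closed):
        """Bus segments occupied by a move, or None if the move is impossible."""
        if switch_closed:
            return {0, 1}                 # closed switch joins both segments
        if SEGMENT[src] != SEGMENT[dst]:
            return None                   # open switch: no path between segments
        return {SEGMENT[src]}

    def schedulable(moves, switch_closed):
        """True if all moves fit in one cycle without sharing a bus segment."""
        used = set()
        for src, dst in moves:
            segs = segments_used(src, dst, switch_closed)
            if segs is None or segs & used:
                return False
            used |= segs
        return True

    print(schedulable([(0, 1), (2, 3)], switch_closed=False))  # True: two local moves
    print(schedulable([(0, 3)], switch_closed=True))           # True: one global move
    print(schedulable([(0, 1), (2, 3)], switch_closed=True))   # False: bus is one shared resource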
With this scheme the scalability of the hardware resources, like the number of clusters and switches, is linear, as in the case of the known bus ICC shown in Fig. 2.
Although the ICC scheme according to the first embodiment only shows a single bus 100, the principles of the invention can readily be applied to multi-bus ICC schemes as shown in Fig. 3 and to ICC schemes using local busses as shown in Fig. 4. Merely some switches 200 need to be incorporated into the multi-bus or the local busses in order to achieve a split or segmented bus.
Fig. 6 shows an inter-cluster communication ICC scheme via a segmented bus according to a second embodiment. Here the clusters C1 - C4 as well as the switch control are shown in more detail. Each cluster C1 - C4 comprises a register file RF and a functional unit FU, and is connected to one bit of the bus 100 via an interface consisting of merely three OR gates G per bit. Alternatively, AND, NAND or NOR gates G can be used as the interface.
However, each cluster C1 - C4 can obviously comprise more than one register file RF and more than one functional unit FU. The functional units FU may be specialised functional units FU dedicated to bus operations. Furthermore, there may be several functional units writing to the bus. The representation of the bypass logic of the register file is omitted, since it is not essential for the understanding of the split or segmented bus according to the invention. Although only one bit of the bus word is shown, the bus can obviously have any desired word size. Moreover, the bus according to the second embodiment is implemented with two wires per bit: one wire carries the left-to-right value, while the other carries the right-to-left value of the bus. However, other implementations of the bus are also possible.
The bus-splitting switch can be implemented with just a few MOS transistors M1, M2 for each bus line.
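The behaviour of one bus bit can be mimicked in software: each segment carries the OR of whatever the attached clusters drive, and a closed switch makes adjacent segments agree. The following toy model is our own construction, not the patent's circuit (it collapses the two directional wires into one for brevity):

    # Behavioural toy model of one bus bit: clusters drive their segment
    # through an OR-gate interface; a closed switch joins adjacent segments.
    def bus_bit_values(drivers, cluster_segment, switches_closed):
        """Value seen on each bus segment for one bus bit (wired-OR model).

        drivers[i] is the bit cluster i drives (0 when idle, so an idle
        cluster does not disturb the OR). switches_closed[s] tells whether
        the switch between segment s and segment s+1 is closed.
        """
        values = [0] * (len(switches_closed) + 1)
        for cluster, bit in enumerate(drivers):
            values[cluster_segment[cluster]] |= bit
        changed = True
        while changed:                      # propagate across closed switches
            changed = False
            for s, closed in enumerate(switches_closed):
                joined = values[s] | values[s + 1]
                if closed and (values[s] != joined or values[s + 1] != joined):
                    values[s] = values[s + 1] = joined
                    changed = True
        return values

    # Fig. 7 layout: C1..C6, two clusters per segment; 200a closed, 200b open.
    seg = [0, 0, 1, 1, 2, 2]
    print(bus_bit_values([0, 0, 1, 0, 0, 0], seg, [True, False]))  # [1, 1, 0]

Here cluster C3 drives a 1: segments 100a and 100b see it because switch 200a is closed, while segment 100c stays idle and remains free for an independent transfer.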
The access control of the bus can be performed by the clusters C1 - C4 by issuing a local_mov or a global_mov operation. The arguments of these operations are the source register and the target register. The local_mov operation merely uses a segment of the bus by opening the bus-splitting switch, while the global_mov uses the whole bus 100 by closing the bus-splitting switch 200.
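A scheduler's view of these two operations might look as follows (an illustrative Python sketch; the operation names local_mov and global_mov follow the text above, while the data layout and helper names are our assumptions):

    # Illustrative encoding of the second embodiment's two move operations:
    # local_mov leaves the bus-splitting switch open (segment-local transfer),
    # global_mov closes it (whole-bus transfer).
    from dataclasses import dataclass

    @dataclass
    class Move:
        op: str        # "local_mov" or "global_mov"
        src_reg: str   # source register, e.g. "C1.r3"
        dst_reg: str   # target register, e.g. "C2.r7"

    def switch_closed(move):
        """Return True if the bus-splitting switch must be closed for this move."""
        if move.op == "global_mov":
            return True                # use the whole bus 100
        if move.op == "local_mov":
            return False               # use only one bus segment
        raise ValueError("unknown operation: " + move.op)

    print(switch_closed(Move("local_mov", "C1.r3", "C2.r7")))   # False: switch open
    print(switch_closed(Move("global_mov", "C1.r3", "C4.r0")))  # True: switch closed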
Alternatively, in order to allow multicast, the operation to move data may accept more than one target register, i.e. a list of target registers belonging to different clusters C1 - C4. This may also be implemented by a register/cluster mask in a bit vector.
Fig. 7 shows an inter-cluster communication ICC scheme via a segmented bus according to a third embodiment of the invention. Fig. 7 depicts six clusters C1 - C6, a bus 100 with three segments 100a, 100b, 100c and two switches 200a, 200b, i.e. two clusters are associated with each bus segment. Obviously, the number of clusters, switches and bus segments may vary from this example. The clusters C1 - C6, the interface of the clusters and the bus 100 as well as the switches 200 can be embodied as described in the second embodiment with reference to Fig. 6. In the third embodiment the switches are considered to be closed by default. The bus access can be performed by the clusters C1 - C6 either by a send operation or a receive operation. In those cases where a cluster needs to send data, i.e. perform a data move, to another cluster via the bus, said cluster performs a send operation, wherein said send operation has two arguments, namely the source register and the sending direction, i.e. the direction in which the data is to be sent. The sending direction can be 'left' or 'right', and to provide for multicast it can also be 'all', i.e. 'left' and 'right'.
For example, if cluster 3 C3 needs to move data to cluster 1 C1, it will issue a send operation with, as arguments, a source register, i.e. one of its registers where the data to be moved is stored, and a sending direction indicating the direction in which the data is to be moved. Here, the sending direction is left. Therefore, the switch 200b between cluster 4 C4 and cluster 5 C5 will be opened, since the bus segment 100c with the clusters 5 and 6 C5, C6 is not required for this data move. In other, more general words: when a cluster issues a send operation, the switch arranged closest to it on the side opposite the sending direction is opened, whereby the usage of the bus is limited to only those segments which are actually required to perform the data move, i.e. those segments between the sending and the receiving cluster.
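The rule "open the switch nearest the cluster on the side opposite the transfer direction" is easy to express in code. The sketch below is our own illustration of it (the segment and switch numbering for Fig. 7 is assumed: switch 0 corresponds to 200a, switch 1 to 200b):

    # Illustrative switch control for the third embodiment (Fig. 7).
    # Segments: 0 = {C1, C2}, 1 = {C3, C4}, 2 = {C5, C6}.
    # Switch s joins segment s and segment s+1 (s = 0 -> 200a, s = 1 -> 200b).
    N_SEGMENTS = 3
    SEGMENT_OF = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 2}  # cluster number -> segment

    def switches_to_open(cluster, direction):
        """Switches to open for a send (or receive) in the given direction.

        All switches are closed by default; opening the switch nearest the
        cluster on the side opposite the transfer direction frees the part
        of the bus that the move does not need.
        """
        seg = SEGMENT_OF[cluster]
        if direction == "left":
            return {seg} if seg < N_SEGMENTS - 1 else set()
        if direction == "right":
            return {seg - 1} if seg > 0 else set()
        if direction == "all":
            return set()               # multicast: keep every switch closed
        raise ValueError("unknown direction: " + direction)

    # Cluster 3 sends left (towards C1): switch 1 (200b, between C4 and C5) opens.
    print(switches_to_open(3, "left"))  # {1}
    print(switches_to_open(3, "all"))   # set(): the whole bus stays connected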
If the cluster 3 C3 needs to send the same data to clusters 1 and 6 C1, C6, i.e. a multicast, then the sending direction will be 'all'. Therefore, all switches 200a between cluster 3 and cluster 1 as well as all switches 200b between clusters 3 and 6 will remain closed.
According to a further example, if cluster 3 C3 needs to receive data from cluster 1 C1, it will issue a receive operation with, as arguments, a destination register, i.e. one of its registers where the received data is to be stored, and a receiving direction indicating the direction from which the data is to be received. Here, the receiving direction is left. Therefore, the switch 200b between cluster 4 and cluster 5 C4, C5 will be opened, since the bus segment 100c with the clusters 5 and 6 C5, C6 is not required for this data move. In other, more general words: when a cluster issues a receive operation, the switch arranged closest to it on the side opposite the receiving direction is opened, whereby the usage of the bus is limited to only those segments which are actually required to perform the data move, i.e. those segments between the sending and the receiving cluster.
To provide for multicast, the receiving direction may also be left unspecified; in that case all switches will remain closed.
According to a fourth embodiment, which is based on the third embodiment, the switches do not have any default state. Instead, a switch configuration word is provided for programming the switches 200. Said switch configuration word determines which switches 200 are open and which ones are closed. It may be issued in each cycle like a normal operation, such as a sending/receiving operation. Therefore, the bus access is performed by a sending/receiving operation together with a switch configuration word, in contrast to a bus access by a sending/receiving operation with the sending/receiving direction as an argument as described in the third embodiment.
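Such a switch configuration word is naturally a bitmask with one bit per switch. The following sketch shows one possible encoding (our assumption; the patent does not fix bit values or helper names), again for the two switches of Fig. 7:

    # Illustrative switch configuration word (fourth embodiment):
    # one bit per switch; bit s = 1 is taken to mean "switch s closed".
    N_SWITCHES = 2  # switches 200a and 200b in Fig. 7

    def make_config_word(closed_switches):
        """Pack the set of closed switches into a configuration word."""
        word = 0
        for s in closed_switches:
            if not 0 <= s < N_SWITCHES:
                raise ValueError("no such switch: %d" % s)
            word |= 1 << s
        return word

    def decode_config_word(word):
        """Human-readable state of every switch encoded in the word."""
        return ["closed" if (word >> s) & 1 else "open" for s in range(N_SWITCHES)]

    # Close only switch 0 (200a): clusters C1..C4 share one half of the bus,
    # while segment 100c (C5, C6) stays free for a concurrent local move.
    word = make_config_word({0})
    print(bin(word), decode_config_word(word))  # 0b1 ['closed', 'open']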

Claims

CLAIMS:
1. A clustered Instruction Level Parallelism processor, comprising: a plurality of clusters each comprising at least one register file and at least one functional unit; a bus means for connecting said clusters, said bus comprising a plurality of bus segments, and switching means, arranged between adjacent bus segments, for connecting or disconnecting adjacent bus segments.
2. Processor according to claim 1, wherein each cluster is coupled to at least one bus segment.
3. Processor according to claims 1 or 2, wherein two or more clusters are coupled to the same bus segment.
4. Processor according to claim 1, 2 or 3, wherein said bus means is a multi-bus comprising at least two busses.
5. Method for accessing a bus in a clustered Instruction Level Parallelism processor, wherein said bus comprises at least one switching means along said bus, comprising the steps of: performing a sending operation based on a source register and a transfer word, and/or performing a receiving operation based on a destination register and a transfer word; and opening/closing said switching means according to said transfer word.
6. Method according to claim 5, wherein said transfer word represents the sending direction for the sending operation and the receiving direction for the receiving operation.
7. Method according to claim 6, wherein the default state of said switching means is closed.
8. Method according to claim 7, wherein the one of said switching means which is closest to a cluster performing said sending operation or said receiving operation, in the direction opposite to said sending or receiving direction, is opened.
9. Method according to claim 6, wherein said sending direction or said receiving direction is left, right or all.
10. Method according to claim 9, wherein no switching means is opened, if said sending direction or receiving direction is all.
11. Method according to claim 5, wherein said transfer word represents a switch configuration word, wherein said switching means are opened/closed according to said configuration word.
PCT/IB2003/005584 2002-12-30 2003-11-28 A method for accessing a bus in a clustered instruction level parallelism processor WO2004059467A2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
AU2003283672A AU2003283672A1 (en) 2002-12-30 2003-11-28 A method for accessing a bus in a clustered instruction level parallelism processor
US10/540,409 US20060095710A1 (en) 2002-12-30 2003-11-28 Clustered ilp processor and a method for accessing a bus in a clustered ilp processor
EP03775653A EP1581862A2 (en) 2002-12-30 2003-11-28 A method for accessing a bus in a clustered instruction level parallelism processor
JP2004563420A JP2006512655A (en) 2002-12-30 2003-11-28 Clustered ILP processor and method of accessing a bus in a clustered ILP processor

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP02080588.3 2002-12-30
EP02080588 2002-12-30

Publications (2)

Publication Number Publication Date
WO2004059467A2 true WO2004059467A2 (en) 2004-07-15
WO2004059467A3 WO2004059467A3 (en) 2004-12-29

Family

ID: 32668861

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2003/005584 WO2004059467A2 (en) 2002-12-30 2003-11-28 A method for accessing a bus in a clustered instruction level parallelism processor

Country Status (8)

Country Link
US (1) US20060095710A1 (en)
EP (1) EP1581862A2 (en)
JP (1) JP2006512655A (en)
KR (1) KR20050089084A (en)
CN (1) CN1732436A (en)
AU (1) AU2003283672A1 (en)
TW (1) TW200506722A (en)
WO (1) WO2004059467A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1814038A2 (en) * 2006-01-31 2007-08-01 Broadcom Corporation Cache coherent split bus
US7751329B2 (en) 2007-10-03 2010-07-06 Avaya Inc. Providing an abstraction layer in a cluster switch that includes plural switches

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9781062B2 (en) * 2014-01-08 2017-10-03 Oracle International Corporation Using annotations to extract parameters from messages
US9672043B2 (en) 2014-05-12 2017-06-06 International Business Machines Corporation Processing of multiple instruction streams in a parallel slice processor
US9720696B2 (en) 2014-09-30 2017-08-01 International Business Machines Corporation Independent mapping of threads
US9977678B2 (en) 2015-01-12 2018-05-22 International Business Machines Corporation Reconfigurable parallel execution and load-store slice processor
US10133581B2 (en) 2015-01-13 2018-11-20 International Business Machines Corporation Linkable issue queue parallel execution slice for a processor
US10133576B2 (en) 2015-01-13 2018-11-20 International Business Machines Corporation Parallel slice processor having a recirculating load-store queue for fast deallocation of issue queue entries
EP3144820A1 (en) 2015-09-18 2017-03-22 Stichting IMEC Nederland Inter-cluster data communication network for a dynamic shared communication platform
US9983875B2 (en) 2016-03-04 2018-05-29 International Business Machines Corporation Operation of a multi-slice processor preventing early dependent instruction wakeup
US10037211B2 (en) 2016-03-22 2018-07-31 International Business Machines Corporation Operation of a multi-slice processor with an expanded merge fetching queue
US10346174B2 (en) 2016-03-24 2019-07-09 International Business Machines Corporation Operation of a multi-slice processor with dynamic canceling of partial loads
US10761854B2 (en) 2016-04-19 2020-09-01 International Business Machines Corporation Preventing hazard flushes in an instruction sequencing unit of a multi-slice processor
US10037229B2 (en) 2016-05-11 2018-07-31 International Business Machines Corporation Operation of a multi-slice processor implementing a load/store unit maintaining rejected instructions
US9934033B2 (en) 2016-06-13 2018-04-03 International Business Machines Corporation Operation of a multi-slice processor implementing simultaneous two-target loads and stores
US10042647B2 (en) 2016-06-27 2018-08-07 International Business Machines Corporation Managing a divided load reorder queue
US10318419B2 (en) 2016-08-08 2019-06-11 International Business Machines Corporation Flush avoidance in a load store unit
CN111061510B (en) * 2019-12-12 2021-01-05 湖南毂梁微电子有限公司 Extensible ASIP structure platform and instruction processing method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0494056A2 (en) * 1990-12-31 1992-07-08 International Business Machines Corporation Dynamically partitionable and allocable bus structure
EP0778531A1 (en) * 1995-12-04 1997-06-11 Kabushiki Kaisha Toshiba Low power consumption data transfer bus
WO2001073566A2 (en) * 2000-03-28 2001-10-04 Analog Devices, Inc. Electronic circuits with dynamic bus partitioning

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5887138A (en) * 1996-07-01 1999-03-23 Sun Microsystems, Inc. Multiprocessing computer system employing local and global address spaces and COMA and NUMA access modes
US6219776B1 (en) * 1998-03-10 2001-04-17 Billions Of Operations Per Second Merged array controller and processing element
WO2000028430A1 (en) * 1998-11-10 2000-05-18 Fujitsu Limited Parallel processor system
US6334177B1 (en) * 1998-12-18 2001-12-25 International Business Machines Corporation Method and system for supporting software partitions and dynamic reconfiguration within a non-uniform memory access system
US6978459B1 (en) * 2001-04-13 2005-12-20 The United States Of America As Represented By The Secretary Of The Navy System and method for processing overlapping tasks in a programmable network processor environment
US6957318B2 (en) * 2001-08-17 2005-10-18 Sun Microsystems, Inc. Method and apparatus for controlling a massively parallel processing environment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0494056A2 (en) * 1990-12-31 1992-07-08 International Business Machines Corporation Dynamically partitionable and allocable bus structure
EP0778531A1 (en) * 1995-12-04 1997-06-11 Kabushiki Kaisha Toshiba Low power consumption data transfer bus
WO2001073566A2 (en) * 2000-03-28 2001-10-04 Analog Devices, Inc. Electronic circuits with dynamic bus partitioning

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1814038A2 (en) * 2006-01-31 2007-08-01 Broadcom Corporation Cache coherent split bus
EP1814038A3 (en) * 2006-01-31 2008-01-02 Broadcom Corporation Cache coherent split bus
US7751329B2 (en) 2007-10-03 2010-07-06 Avaya Inc. Providing an abstraction layer in a cluster switch that includes plural switches

Also Published As

Publication number Publication date
EP1581862A2 (en) 2005-10-05
CN1732436A (en) 2006-02-08
WO2004059467A3 (en) 2004-12-29
US20060095710A1 (en) 2006-05-04
AU2003283672A8 (en) 2004-07-22
KR20050089084A (en) 2005-09-07
JP2006512655A (en) 2006-04-13
AU2003283672A1 (en) 2004-07-22
TW200506722A (en) 2005-02-16

Similar Documents

Publication Publication Date Title
US20060095710A1 (en) Clustered ilp processor and a method for accessing a bus in a clustered ilp processor
US10282338B1 (en) Configuring routing in mesh networks
US8737392B1 (en) Configuring routing in mesh networks
US6738891B2 (en) Array type processor with state transition controller identifying switch configuration and processing element instruction address
US8151088B1 (en) Configuring routing in mesh networks
EP1239374B1 (en) Shared program memory for use in multicore DSP devices
KR101076869B1 (en) Memory centric communication apparatus in coarse grained reconfigurable array
KR20060110858A (en) A single chip protocol converter
US20020186042A1 (en) Heterogeneous integrated circuit with reconfigurable logic cores
KR100951856B1 (en) SoC for Multimedia system
US10282170B2 (en) Method for a stage optimized high speed adder
DiTomaso et al. Extending the energy efficiency and performance with channel buffers, crossbars, and topology analysis for network-on-chips
CN105138494A (en) Multi-channel computer system
JP4644410B2 (en) Method for controlling communication of a single computer within a computer network
CN116185599A (en) Heterogeneous server system and method of use thereof
JP2004535613A (en) Data processing method and data processing device
CN110096475B (en) Many-core processor based on hybrid interconnection architecture
US20060101233A1 (en) Clustered instruction level parallelism processor
CN100373329C (en) Data processing system with clustered ILP processor
US20140269753A1 (en) Method for implementing a line speed interconnect structure
WO2004063934A1 (en) System and method for scalable interconnection of adaptive processor nodes for clustered computer systems
Kodi et al. Co-design of channel buffers and crossbar organizations in NoCs architectures
Wang et al. Design and implementation of fault-tolerant and cost effective crossbar switches for multiprocessor systems
CN115658594A (en) Heterogeneous multi-core processor architecture based on NIC-400 cross matrix
Yan et al. An overview of Reconfigurable Multiple Bus Machine (RMBM)

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2003775653

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2006095710

Country of ref document: US

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 10540409

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 20038A79415

Country of ref document: CN

Ref document number: 1020057012338

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: 2004563420

Country of ref document: JP

WWP Wipo information: published in national office

Ref document number: 1020057012338

Country of ref document: KR

WWP Wipo information: published in national office

Ref document number: 2003775653

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 10540409

Country of ref document: US