WO2023285304A1 - Subscription to sync zones - Google Patents
- Publication number
- WO2023285304A1 (PCT/EP2022/069053)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sync
- processors
- configurable
- request
- synchronisation
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/16—Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
- G06F15/163—Interprocessor communication
- G06F15/173—Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
- G06F15/17306—Intercommunication techniques
- G06F15/17325—Synchronisation; Hardware support therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/52—Program synchronisation; Mutual exclusion, e.g. by means of semaphores
- G06F9/522—Barrier synchronisation
Definitions
- the present disclosure relates to a data processing device comprising a plurality of processors and in particular to the co-ordination of synchronisations involving ones of the plurality of processors.
- a processing device for performing the processing of that data may be provided.
- the processing device may function as a work accelerator to which processing of certain data is offloaded from a host system.
- Such a processing unit may have specialised hardware for performing specific types of processing.
- As an example, one area of computing in which such a specialised accelerator subsystem may be of use is machine intelligence.
- a machine intelligence algorithm is based around performing iterative updates to a "knowledge model", which can be represented by a graph of multiple interconnected nodes.
- the implementation of each node involves the processing of data, and the interconnections of the graph correspond to data to be exchanged between the nodes.
- at least some of the processing of each node can be carried out independently of some or all of the other nodes in the graph, and large graphs therefore expose great opportunities for multi-threading. Therefore, a processing device specialised for machine intelligence applications may support a large degree of multi-threading.
- One form of parallelism can be achieved by means of an arrangement of multiple processor tiles on the same chip (i.e. same die), each processor tile comprising its own separate respective execution unit and memory (including program memory and data memory). Thus separate portions of program code can be run in parallel on different ones of the tiles.
- each processing unit performs a compute phase and an exchange phase in an alternating cycle.
- each processing unit performs one or more computation tasks locally on the processing unit, but does not communicate any results of its computations with any others of the processing units.
- each processing unit is allowed to exchange one or more results of the computations from the preceding compute phase to and/or from one or more others of the processing units. Furthermore, according to the BSP principle, a barrier synchronization is placed at the juncture transitioning from the compute phase into the exchange phase, transitioning from the exchange phase into the compute phase, or both.
- a central sync controller may be provided for receiving sync requests from each of a set of processors that are to sync together, and returning sync acknowledgments once sync requests have been received from all processors participating together in the synchronisation.
- all of the processors belonging to a processing device may participate in a synchronisation. However, for some synchronisation points, it may be that some of the processors have no data to exchange with other processors. Therefore, it has been proposed to allow some processors of the processing device to operate asynchronously with other processors of the processing device. However, at a later time, two or more groups of processors, which may be behaving asynchronously with respect to one another, may be required to synchronise together at a synchronisation point. It is, therefore, desirable to provide a mechanism enabling different groupings of tiles to synchronise in a flexible manner.
- a data processing device comprising: a plurality of processors; and a sync controller comprising circuitry configured to receive requests from the processors to participate in synchronisations and, in response to receiving the requests, return acknowledgments to the processors, wherein each of the processors comprises: an execution unit configured to execute a set of computer readable instructions held in memory of the respective processor; and a register storing, for each of a set of configurable sync groups, an indication as to whether or not the respective processor belongs to the respective configurable sync group, wherein for a first of the processors: the indication for a first of the configurable sync groups indicates that the first of the processors does not belong to the first of the configurable sync groups; and the indication for a second of the configurable sync groups indicates that the first of the processors does belong to the second of the configurable sync groups, wherein the first of the processors comprises circuitry configured to, in response to the indication for the first of the configurable sync groups indicating that the first of the processors does not belong to the first of the configurable sync groups, assert a request to participate in the synchronisation for the first of the configurable sync groups.
- a set of configurable sync groupings (which may be referred to as sync zones) are defined. Any of the processors may belong to any of the sync zones. Each of the processors comprises a register indicating to which of the sync zones it belongs. If a processor does not belong to a sync zone, it continually asserts a sync request for that sync zone to the sync controller. If a processor does belong to a sync zone, it will only assert its sync request for that sync zone upon arriving at a synchronisation point for that sync zone indicated in its compiled code set.
- the sync controller, once all of the processors belonging to a particular sync zone have reached the synchronisation point, will have received a sync request for that sync zone from all processors in the device (both those belonging to the zone and those not belonging to it), and can proceed to cause the sync acknowledgments to be sent to the processors of the device.
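The subscription scheme above can be sketched in a few lines. This is an illustrative model under stated assumptions, not the patent's circuitry: the class and function names (`Tile`, `sync_request`, `sync_ack`) are invented for the example.

```python
class Tile:
    """A processor tile with a per-zone membership register."""
    def __init__(self, tile_id, zone_membership):
        self.tile_id = tile_id
        # zone_membership[z] is True if this tile belongs to sync zone z.
        self.zone_membership = zone_membership
        self.at_barrier = set()  # zones whose sync point this tile reached

    def sync_request(self, zone):
        # A tile that does NOT belong to a zone continually asserts its
        # request for that zone; a member asserts only at its sync point.
        if not self.zone_membership.get(zone, False):
            return True
        return zone in self.at_barrier


def sync_ack(tiles, zone):
    """The sync controller acknowledges once every tile in the device
    asserts a request for the zone (members and non-members alike)."""
    return all(t.sync_request(zone) for t in tiles)


tiles = [
    Tile(0, {"zone1": True}),
    Tile(1, {"zone1": True}),
    Tile(2, {"zone1": False}),  # not subscribed: auto-asserts for zone1
]

assert not sync_ack(tiles, "zone1")   # members not yet at the barrier
tiles[0].at_barrier.add("zone1")
tiles[1].at_barrier.add("zone1")
assert sync_ack(tiles, "zone1")       # all requests seen: acknowledge
```

The key property the sketch shows is that the controller can wait on *all* tiles unconditionally, because non-members' requests are always present.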
- the circuitry of the sync controller is configured to, in response to all of the processors of the data processing device issuing a request to participate in the synchronisation for the first of the configurable sync groups, issue a corresponding acknowledgment to each of the processors.
- the circuitry of the sync controller is configured to: in response to the requests to participate in the synchronisation for the first of the configurable sync groups, issue a further request to an external sync controller for the processors to synchronise with further processors belonging to further devices; and subsequently, in response to receipt of a further acknowledgment of the further request from the external sync controller, return a corresponding acknowledgment to each of the processors.
- the external sync controller comprises: storage storing a set of configuration settings for the first of the configurable sync groups; and circuitry configured to: in response to the further request received from the sync controller, exchange one or more additional requests and one or more additional acknowledgments with further devices in dependence upon the configuration settings for the first of the configurable sync groups.
- the execution unit of the respective processor is configured to, upon reaching a first barrier synchronisation enforced between the processors belonging to the first of the configurable sync groups, issue a request to participate in the synchronisation for the first of the configurable sync groups, wherein for the first of the processors, the respective execution unit is configured to, whilst the execution units of each of the processors belonging to the first of the configurable sync groups are paused waiting at the first synchronisation barrier, proceed with computation or data exchange without waiting at the first barrier synchronisation.
- the synchronisation for the second of the configurable sync groups is a second barrier synchronisation.
- the execution unit of the first of the processors is configured to: in response to receipt of an acknowledgment to the request to participate in the synchronisation for the second of the configurable sync groups, proceed past the second barrier synchronisation.
- the execution unit of the first of the processors is configured to: proceed past the second barrier synchronisation by entering an exchange phase in which the first of the processors at least one of: sends or receives data.
- the execution unit of the first of the processors is configured to, following assertion of the request to participate in the synchronisation for the first of the configurable sync groups, execute an update instruction to update the indication for the first of the configurable sync groups to specify that the first of the processors does belong to the first of the configurable sync groups.
- the execution unit of the first of the processors is configured to, following assertion of the request to participate in the synchronisation for the second of the configurable sync groups, execute an update instruction to update the indication for the second of the configurable sync groups to specify that the first of the processors does not belong to the second of the configurable sync groups.
- the data processing device comprises aggregation circuitry configured to: in response to all of the processors providing a respective request to participate in the synchronisation for the first of the configurable sync groups, provide a first aggregate sync request to the sync controller; and in response to all of the processors providing the request to participate in the synchronisation for the second of the configurable sync groups, provide a second aggregate sync request to the sync controller, wherein the circuitry of the sync controller is configured to return acknowledgments to the processors in response to each of the first aggregate sync request and the second aggregate sync request.
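The aggregation circuitry amounts to a logical AND over the per-tile request lines of a zone. A minimal sketch, with assumed function names mirroring the pairwise and per-column aggregation described later for Figures 11-13:

```python
def aggregate_pair(req_a, req_b):
    """Combine the request state of a pair of tiles (logical AND)."""
    return req_a and req_b

def aggregate_column(tile_requests):
    """Fold pairwise aggregation along a column of tiles."""
    agg = True
    for req in tile_requests:
        agg = aggregate_pair(agg, req)
    return agg

def aggregate_device(columns):
    """Combine per-column aggregates into the device-wide request."""
    return all(aggregate_column(col) for col in columns)

# Two columns of four tiles each; one tile has not yet requested.
columns = [[True, True, True, True], [True, True, False, True]]
assert aggregate_device(columns) is False
columns[1][2] = True
assert aggregate_device(columns) is True
```

One aggregate request per sync zone is then presented to the sync controller, which only acknowledges once that aggregate goes high.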
- the data processing device comprises: a first sync request wire connected to the first of the processors; and a second sync request wire connected to the first of the processors, wherein the circuitry of the first of the processors is configured to assert the request to participate in the synchronisation for the first of the configurable sync groups by asserting a signal on the first sync request wire, wherein the circuitry of the first of the processors is configured to assert the request to participate in the synchronisation for the second of the configurable sync groups by asserting a signal on the second sync request wire.
- the circuitry of the first of the processors comprises a first multiplexer configured to, in dependence upon the indication that the first of the processors does not belong to the first of the configurable sync groups, select a first input so as to output a first signal representative of the request to participate in the synchronisation for the first of the configurable sync groups.
- the circuitry of the first of the processors comprises a second multiplexer configured to, in dependence upon the indication that the first of the processors does belong to the second of the configurable sync groups, select a second input so as to output a second signal controlled by the execution unit.
- the execution unit of the first of the processors is configured to, upon reaching the synchronisation point, execute a sync instruction to cause the second signal to be set to a state so as to represent the request to participate in the synchronisation for the second of the configurable sync groups.
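The multiplexer selection in the preceding clauses can be modelled as follows. This is a behavioural sketch only; `ALWAYS_ASSERTED` and `sync_request_mux` are illustrative names, not the patent's:

```python
ALWAYS_ASSERTED = True  # first mux input: a permanently asserted request

def sync_request_mux(belongs_to_zone, execution_unit_signal):
    """Select the signal driven onto a zone's sync request wire.

    If the membership bit for the zone is clear, the mux selects the
    constantly asserted input; if set, it selects a signal under the
    control of the execution unit (raised by the sync instruction).
    """
    if not belongs_to_zone:
        return ALWAYS_ASSERTED       # non-member: request always asserted
    return execution_unit_signal     # member: under program control

# Non-member tile: request asserted regardless of program state.
assert sync_request_mux(False, False) is True
# Member tile: request follows the execution unit's sync instruction.
assert sync_request_mux(True, False) is False
assert sync_request_mux(True, True) is True
```

The membership register bit thus acts purely as the mux select; no change to the tile's program is needed to opt it out of a zone.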
- the second of the configurable sync groups comprises further processors belonging to one or more further data processing devices, wherein the first of the processors is configured to participate in the synchronisation for the second of the configurable sync groups by exchanging data with one or more of the further processors.
- the execution unit of the first of the processors is configured to, upon reaching the synchronisation point for the second of the configurable sync groups, execute a sync instruction to cause the assertion of the request to participate in the synchronisation for the second of the configurable sync groups.
- the circuitry of the first of the processors is configured to: receive from the execution unit of the first of the processors, a control signal indicating that the execution unit has reached the synchronisation point; and convert the control signal to the request to participate in the synchronisation for the second of the configurable sync groups.
- the first of the processors comprises a first interface configured to receive a first acknowledgment signal for the first of the configurable sync groups, wherein the circuitry of the first of the processors is configured to invert the first acknowledgment signal to produce the request to participate in the synchronisation for the first of the configurable sync groups, wherein the first of the processors comprises a second interface configured to receive a second acknowledgment signal for the second of the configurable sync groups, wherein the circuitry of the first of the processors is configured to invert the second acknowledgment signal to produce the request to participate in the synchronisation for the second of the configurable sync groups.
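The invert-the-acknowledgment scheme implies a level-toggle handshake: each new request is the inverse of the last acknowledgment level, so every synchronisation flips both wires exactly once. A hedged sketch of that signalling pattern (the helper name is an assumption, and the controller is reduced to echoing the request level):

```python
def next_request_level(last_ack_level):
    """The request for the next synchronisation is the inverse of the
    most recently received acknowledgment level."""
    return not last_ack_level

ack = False
trace = []
for _ in range(3):          # three successive synchronisations
    req = next_request_level(ack)
    trace.append((req, ack))
    ack = req               # controller acknowledges by matching the request

assert trace == [(True, False), (False, True), (True, False)]
```

Because the request is derived from the acknowledgment, a tile cannot re-assert a request for a zone until the previous synchronisation on that zone has been acknowledged.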
- the data processing device is an integrated circuit.
- a method implemented in a data processing device comprising a plurality of processors, the method comprising: at each of the processors: storing, for each of a set of configurable sync groups, an indication as to whether or not the respective processor belongs to the respective configurable sync group; and executing a set of computer readable instructions held in memory of the respective processor, wherein for a first of the processors: the indication for a first of the configurable sync groups indicates that the first of the processors does not belong to the first of the configurable sync groups; and the indication for a second of the configurable sync groups indicates that the first of the processors does belong to the second of the configurable sync groups, wherein the method comprises: at a sync controller, receiving requests from the processors to participate in synchronisations and, in response to receiving the requests, returning acknowledgments to the processors; at a first of the processors, in response to the indication for the first of the configurable sync groups indicating that the first of the processors does not belong to the first of the configurable sync groups, asserting a request to participate in the synchronisation for the first of the configurable sync groups.
- the method comprises: at the sync controller, in response to all of the processors of the data processing device issuing a request to participate in the synchronisation for the first of the configurable sync groups, issuing a corresponding acknowledgment to each of the processors.
- the method comprises, at the sync controller: in response to the requests to participate in the synchronisation for the first of the configurable sync groups, issuing a further request to an external sync controller for the processors to synchronise with further processors belonging to further devices; and subsequently, in response to receipt of a further acknowledgment of the further request from the external sync controller, returning a corresponding acknowledgment to each of the processors.
- the method comprises, at the external sync controller: storing a set of configuration settings for the first of the configurable sync groups; and in response to the further request received from the sync controller, exchanging one or more additional requests and one or more additional acknowledgments with further devices in dependence upon the configuration settings for the first of the configurable sync groups.
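The hierarchical propagation in the two clauses above can be sketched as follows. All class and field names here are illustrative assumptions: a device-level controller forwards its aggregated request upward, and the external controller acknowledges per the stored per-zone configuration once every configured device has requested.

```python
class ExternalSyncController:
    def __init__(self, zone_config):
        # zone_config maps a sync zone to the set of participating devices.
        self.zone_config = zone_config
        self.pending = {}   # zone -> set of devices that have requested

    def request(self, device, zone):
        """Record a device's request; acknowledge once all devices in
        the zone's configuration settings have requested."""
        members = self.zone_config[zone]
        self.pending.setdefault(zone, set()).add(device)
        if self.pending[zone] == members:
            self.pending[zone] = set()
            return True      # acknowledgment propagated back down
        return False

ext = ExternalSyncController({"zone2": {"dev0", "dev1"}})
assert ext.request("dev0", "zone2") is False   # still waiting for dev1
assert ext.request("dev1", "zone2") is True    # all members: acknowledge
```

Each device-level sync controller then fans the returned acknowledgment out to its own processors.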
- the method comprises, for each of the processors belonging to the first of the configurable sync groups, upon reaching a first barrier synchronisation enforced between the processors belonging to the first of the configurable sync groups, issuing a request to participate in the synchronisation for the first of the configurable sync groups; and for the first of the processors, whilst the execution units of each of the processors belonging to the first of the configurable sync groups are paused waiting at the first synchronisation barrier, proceeding with computation or data exchange without waiting at the first barrier synchronisation.
- the synchronisation for the second of the configurable sync groups is a second barrier synchronisation.
- the method comprises at the first of the processors, and in response to receipt of an acknowledgment to the request to participate in the synchronisation for the second of the configurable sync groups, proceeding past the second barrier synchronisation.
- the method comprises at the first of the processors, proceeding past the second barrier synchronisation by entering an exchange phase in which the first of the processors at least one of: sends or receives data.
- the method comprises at the first of the processors, following assertion of the request to participate in the synchronisation for the first of the configurable sync groups, executing an update instruction to update the indication for the first of the configurable sync groups to specify that the first of the processors does belong to the first of the configurable sync groups.
- the method comprises at the first of the processors, following assertion of the request to participate in the synchronisation for the second of the configurable sync groups, executing an update instruction to update the indication for the second of the configurable sync groups to specify that the first of the processors does not belong to the second of the configurable sync groups.
- the method comprises: in response to all of the processors providing a respective request to participate in the synchronisation for the first of the configurable sync groups, providing a first aggregate sync request to the sync controller; in response to all of the processors providing the request to participate in the synchronisation for the second of the configurable sync groups, providing a second aggregate sync request to the sync controller; and the sync controller, returning acknowledgments to the processors in response to each of the first aggregate sync request and the second aggregate sync request.
- the data processing device comprises: a first sync request wire connected to the first of the processors; and a second sync request wire connected to the first of the processors, wherein the method comprises: the first of the processors asserting the request to participate in the synchronisation for the first of the configurable sync groups by asserting a signal on the first sync request wire; and the second of the processors asserting the request to participate in the synchronisation for the second of the configurable sync groups by asserting a signal on the second sync request wire.
- the method comprises: at a first multiplexer belonging to the first of the processors, and in dependence upon the indication that the first of the processors does not belong to the first of the configurable sync groups, selecting a first input so as to output a first signal representative of the request to participate in the synchronisation for the first of the configurable sync groups.
- the method comprises: at a second multiplexer belonging to the first of the processors, and in dependence upon the indication that the first of the processors does belong to the second of the configurable sync groups, selecting a second input so as to output a second signal controlled by the execution unit.
- the method comprises: the first of the processors, upon reaching the synchronisation point, executing a sync instruction to cause the second signal to be set to a state so as to represent the request to participate in the synchronisation for the second of the configurable sync groups.
- the second of the configurable sync groups comprises further processors belonging to one or more further data processing devices, wherein the method comprises, the first of the processors participating in the synchronisation for the second of the configurable sync groups by exchanging data with one or more of the further processors.
- the method comprises: the first of the processors, upon reaching the synchronisation point for the second of the configurable sync groups, executing a sync instruction to cause the assertion of the request to participate in the synchronisation for the second of the configurable sync groups.
- the method comprises: at the first of the processors: receiving from the execution unit of the first of the processors, a control signal indicating that the execution unit has reached the synchronisation point; and converting the control signal to the request to participate in the synchronisation for the second of the configurable sync groups.
- the first of the processors comprises a first interface
- the method comprises: receiving, at the first interface, a first acknowledgment signal for the first of the configurable sync groups; at the first of the processors, inverting the first acknowledgment signal to produce the request to participate in the synchronisation for the first of the configurable sync groups
- the first of the processors comprises a second interface
- the method comprises: receiving, at the second interface, a second acknowledgment signal for the second of the configurable sync groups; and at the first of the processors, inverting the second acknowledgment signal to produce the request to participate in the synchronisation for the second of the configurable sync groups.
- the data processing device is an integrated circuit.
- Figure 1 illustrates an example of a multi-tile processing unit
- Figure 2 is a schematic diagram illustrating the compute and exchange phases within a multi-tile processing unit
- Figure 3 illustrates exchange of data in a bulk synchronous parallel system
- Figure 4 is a schematic illustration of internal and external synchronisation barriers
- Figure 5 is a schematic illustration of an integrated circuit comprising a multi-tile processing unit and sync controller circuitry
- Figure 6 is a schematic illustration of a processor tile
- Figure 7 illustrates a timeline of the state of a sync request wire of a tile and the corresponding sync acknowledgment wire of that tile
- Figure 8A is a schematic illustration of a sync output interface of a tile for outputting sync requests towards a sync controller for the processing unit;
- Figure 8B is a schematic illustration of a sync input interface comprising circuitry for producing a sync ack pulse in response to an edge in the sync acknowledgment signal;
- Figure 9 is a schematic illustration of the sync aggregation circuitry for aggregating sync request state of all tiles of the processing unit for delivery to the sync controller;
- Figure 10 is a schematic illustration of the sync distribution wiring for delivering a sync acknowledgment signal to all of tiles of the processing unit;
- Figure 11 is a schematic illustration of circuitry for aggregating sync request state output by two pair tiles with upstream sync request state
- Figure 12 is a schematic illustration of circuitry for aggregating sync request state for a column of tiles
- Figure 13 is a schematic illustration of circuitry for aggregating sync request state from multiple columns of tiles
- Figure 14A is a schematic illustration of circuitry within a sync controller for providing a sync acknowledgment in response to receipt of a sync request for the processing unit;
- Figure 14B is a schematic illustration of a sync signalling scheme used for signalling external sync request and acknowledgements
- Figure 15 is a schematic illustration of the division of tiles of a processing unit between different sync zones and the transmission of sync requests by the tiles of those zones;
- Figure 16 is a schematic illustration of the division of tiles of a processing unit between different sync zones and the delivery of sync acknowledgments to the tiles of those zones;
- Figure 17 is a schematic illustration of the exchange of sync requests and acknowledgments between GSPs in the system
- Figure 18 is a schematic illustration of a system in which external sync zones are implemented
- Figure 19 is an illustration of an example sync network
- Figure 20 illustrates an example of a method for performing synchronisations between different configurable sync groups of processors
- Figure 21 illustrates an example of a method for signalling sync requests.
- FIG. 1 illustrates an example processing unit 2 comprising a plurality of processors 4.
- the processors 4 are presented as being tiles 4. However, the tiles 4 may be described more generally as being processors 4.
- Each such processing unit 2 is formed on an integrated circuit.
- the multi-tile processing unit 2 shown is described in US Patent Application no: 15/886065, which is incorporated by reference.
- the processing unit 2 comprises an array 6 of multiple processor tiles 4 and an interconnect 34 connecting between the tiles 4.
- the processing unit 2 may be implemented alone as one of multiple dies packaged in the same IC package.
- the interconnect 34 may also be referred to herein as the "exchange fabric” 34 as it enables the tiles 4 to exchange data with one another.
- Each tile 4 comprises a respective instance of an execution unit and memory.
- the processing unit 2 may comprise of the order of hundreds of tiles 4, or even over a thousand.
- an "array" as referred to herein does not necessarily imply any particular number of dimensions or physical layout of the tiles 4.
- each processing unit 2 also comprises one or more external links, enabling the processing unit 2 to be connected to one or more other processing units (e.g. one or more other instances of the same processing unit 2). These external links may enable the processing unit 2 to be connected to: a host system; and one or more other instances of the processing unit 2 on the same IC package or card, or on different cards.
- the processing unit 2 receives work from the host, in the form of application data which it processes.
- the interconnect 34 is configured to enable the different tiles 4 in the array 6 to communicate with one another. However, as well as there potentially being dependencies between threads on the same tile 4, there may also exist dependencies between the portions of the program running on different tiles 4 in the array 6. A technique is therefore required to prevent a piece of code on one tile 4 from running ahead of data upon which it is dependent being made available by another piece of code on another tile 4. This is achieved using a data consistency model.
- FIGS 2 and 3 illustrate an implementation of a BSP exchange scheme, in which each tile 4 performs a compute phase 33 and an exchange phase 32 in an alternating cycle, separated from one to the other by a barrier synchronization 30 between tiles 4.
- a barrier synchronization is placed between each compute phase 33 and the following exchange phase 32.
- each tile 4 performs one or more computation tasks locally on-tile, but does not communicate any results of these computations with any others of the tiles 4.
- each tile 4 is allowed to exchange one or more results of the computations from the preceding compute phase to and/or from one or more others of the tiles 4, but does not perform any new computations until it has received from other tiles 4 any data on which its task(s) has/have dependency. It is not excluded that other operations such as internal control-related operations may be performed in the exchange phase 32.
- the communication external to the tile group may optionally utilise the BSP mechanism, but alternatively may not utilize BSP and may instead use some other synchronization mechanism of its own.
- a barrier synchronization 30 is placed at the juncture transitioning from the compute phase 33 into the exchange phase 32, or the juncture transitioning from the exchange phase 32 into the compute phase 33, or both. That is to say, either: (a) all tiles 4 are required to complete their respective compute phases 33 before any in the group is allowed to proceed to the next exchange phase 32, or (b) all tiles 4 in the group are required to complete their respective exchange phases 32 before any tile in the group is allowed to proceed to the next compute phase 33, or (c) both of these conditions are enforced. In all three variants, it is the individual tiles 4 which alternate between phases, and the assembly which synchronizes. The sequence of exchange and compute phases may then repeat over multiple repetitions.
- each repetition of exchange phase and compute phase is sometimes referred to as a "superstep" (though note that in the literature the terminology is not always used consistently: sometimes each individual exchange phase and compute phase individually is called a superstep, whereas elsewhere, as in the terminology adopted herein, the exchange and compute phases together are referred to as a superstep).
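The alternating compute/exchange cycle with a barrier between the phases can be illustrated with standard threads. This is a software analogue only (the patent describes a hardware mechanism); `threading.Barrier` stands in for the barrier synchronization 30:

```python
import threading

N_TILES = 4
barrier = threading.Barrier(N_TILES)
results = [None] * N_TILES
exchanged = [None] * N_TILES

def tile(tile_id):
    # Compute phase 33: purely local work, no communication.
    results[tile_id] = tile_id * tile_id
    barrier.wait()                       # barrier synchronization 30
    # Exchange phase 32: read a neighbour's result; this is safe because
    # the barrier guarantees every tile finished its compute phase.
    exchanged[tile_id] = results[(tile_id + 1) % N_TILES]

threads = [threading.Thread(target=tile, args=(i,)) for i in range(N_TILES)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert exchanged == [1, 4, 9, 0]
```

One pass through `tile` corresponds to one superstep in the terminology adopted herein (compute phase plus the following exchange phase).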
- Figure 3 illustrates the BSP principle as implemented amongst a group 4i, 4ii, 4iii of some or all of the tiles in the array 6, in the case which imposes: (a) a barrier synchronization from compute phase 33 to exchange phase 32 (see above). Note that, in this arrangement, some tiles 4 are allowed to begin computing 33 whilst some others are still exchanging.
- the BSP model may be used for the exchange of data between tiles 4 on the processing unit 2.
- the communication between tiles 4 of a processing unit 2 occurs in time deterministic fashion, in which data packets are transmitted without headers as in our earlier application US Patent Application no: 15/886065. Additionally, the BSP model may also be used for the exchange of data between processing units 2.
- FIG 4 illustrates an example BSP program flow involving both internal (i.e. between tiles 4 of a single processing unit 2) and external (i.e. between processing units 2) synchronizations.
- the flow comprises internal exchanges 50 (of data between tiles 4 of the same processing unit 2) and an external exchange 50' (of data between tiles 4 of different processing units 2).
- the program flow in Figure 4 illustrates a program flow for a first processing unit 2i and a second processing unit 2ii.
- the internal BSP supersteps (comprising the internal exchanges 50 of data between tiles 4 of the same processing unit 2) are kept separate from the external sync and exchange (comprising the external exchanges 50' of data between tiles 4 of different processing units 2).
- the program may be arranged to perform a sequence of synchronizations, exchange phases and compute phases comprising, in the following order: (i) a first compute phase, then (ii) an internal barrier synchronization 30, then (iii) an internal exchange phase 50, then (iv) an external barrier synchronization 80, then (v) an external exchange phase 50'.
- the external barrier 80 is imposed after the internal exchange phase 50, such that the program only proceeds to the external exchange 50' after the internal exchange 50.
- a compute phase may be included between (iii) internal exchange and (iv) external barrier.
- This overall sequence is enforced by the program (e.g. being generated as such by the compiler).
- the program is programmed to act in this way by means of a SYNC instruction executed by the tiles 4.
- the internal synchronization and exchange does not extend to any tiles or other entities on another processing unit 2.
- the sequence (i)-(v) (with the aforementioned optional compute phase between (iii) and (iv)) may be repeated in a series of overall iterations. Per iteration there may be multiple instances of the internal compute, sync and exchange (i)-(iii), retaining that order, prior to the external sync & exchange.
- any of the tiles 4 may each be performing their own instance of the internal synchronization and exchange (ii)-(iii) in parallel with the other tiles 4.
- some tiles 4 may perform local input/output during a compute phase. For example, they may exchange data with a host or other type of external storage.
- the tiles 4 taking part in the barrier synchronisation are referred to as a synchronisation group.
- a set of configurable synchronisation groups is supported by the processing unit 2, where each of these configurable synchronisation groups is referred to herein as a synchronisation zone.
- Each of the tiles 4 may subscribe to a particular synchronisation zone, thus permitting an arbitrary group of tiles to sync together.
- Each of the synchronisation zones is individually configurable to comprise different synchronisation groups of tiles 4 in dependence upon settings for the respective synchronisation zone. By modifying these settings, individual tiles 4 may be associated with or disassociated from synchronisation zones.
- a synchronisation zone supported for a particular processing unit 2 may be configured as internal, in which case the tiles 4 of that processing unit 2 that are subscribed to that zone only sync with one another.
- a synchronisation zone supported for a particular processing unit 2 may be configured as external, in which case, the zone extends across multiple processing units 2, with tiles 4 of one processing unit 2 participating in the zone synchronising with tiles 4 of another processing unit 2 participating in the zone.
- the sync logic comprises an internal sync controller 55 and an external sync controller 58, which are described in more detail later.
- the sync logic may, prior to acknowledging the request, propagate an external sync request to a further entity of the sync zone.
- the further entity could be a proxy for exchanging data with a host system or sync logic associated with another processing unit 2.
- an external sync request is propagated to sync logic associated with another processing unit 2
- the action taken by the sync logic associated with that other processing unit 2 in response to the external sync request depends upon whether that logic is defined as the master for the sync zone or as a propagation node for the sync zone.
- the propagation nodes for a sync zone propagate their received external sync requests towards the master defined for the sync zone.
- the sync master, once it has received external sync requests for each of the processing units 2 containing tiles 4 belonging to the sync zone, returns external sync acknowledgments to the sync logic associated with each of those other processing units 2 (apart from its own processing unit 2) containing tiles 4 belonging to the sync zone.
- the sync master also causes sync acknowledgments to be returned to each of the tiles 4 in its own processing unit 2.
- the sync logic (which comprises a propagation node) associated with each of the processing units 2, upon receiving an external sync acknowledgment originating from the sync master, returns sync acknowledgments to those tiles 4 of its processing unit 2.
- the tiles 4 of the sync zone pass the barrier synchronisation and exchange data with one another during the exchange phase. This exchange of data between different processing units 2 is done in a non-time deterministic manner as described in our earlier application, US app no: 15/886,065.
- the term "sync network" is used to refer to the connected sync propagation nodes/circuits for a sync zone that are used to exchange sync requests/acknowledgments so as to co-ordinate a barrier synchronisation between tiles 4 belonging to a sync zone.
- Sync requests transmitted towards the master node defined in the sync network are said to be transmitted "upstream” in the sync network.
- Sync acknowledgements transmitted towards the slave nodes defined in the sync network are said to be transmitted "downstream” in the sync network.
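- As an illustrative sketch of the upstream/downstream behaviour just described (the node topology, class and method names here are invented for illustration, not taken from the description), propagation nodes forward sync requests upstream towards the master, and the master releases acknowledgments downstream only once requests from every participating processing unit have arrived:

```python
class SyncNode:
    def __init__(self, name):
        self.name = name
        self.upstream = None     # next node towards the master (None => this is the master)
        self.downstream = []     # nodes that receive acknowledgments from this one
        self.pending = set()     # units whose sync requests have arrived
        self.expected = set()    # units the master must hear from (master only)
        self.acked = False

    def receive_request(self, unit):
        self.pending.add(unit)
        if self.upstream is not None:
            # Propagation node: forward the sync request upstream.
            self.upstream.receive_request(unit)
        elif self.pending == self.expected:
            # Master: all participating units have requested; acknowledge.
            self.send_ack()

    def send_ack(self):
        self.acked = True
        for node in self.downstream:
            node.send_ack()      # acknowledgments travel downstream

# Three units; node 'a' is the master, 'b' and 'c' propagate towards it.
a, b, c = SyncNode("a"), SyncNode("b"), SyncNode("c")
b.upstream = a; c.upstream = b
a.downstream = [b]; b.downstream = [c]
a.expected = {"a", "b", "c"}

a.receive_request("a")
b.receive_request("b")
assert not a.acked       # master still waiting for unit "c"
c.receive_request("c")   # final request arrives; acks flow downstream
```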
- FIG. 5 illustrates an example of an integrated circuit 500a (i.e. a chip 500a).
- the connected chips, of which the example chip 500a is one, are referred to as chips 500.
- Each chip 500 comprises a processing unit 2 comprising tiles 4.
- Each chip 500 may also be referred to as a processing device 500 or as an accelerator subsystem 500, since the processing unit 2 of each chip 500 functions as an accelerator for processing workloads provided by a host system.
- although the processing devices 500 are described as being chips 500 throughout this description, it is not excluded that in some embodiments, such processing devices 500 could be implemented on the same integrated circuit.
- the specific chip shown in Figure 5 is referred to as chip 500a.
- the specific processing unit shown in Figure 5 is referred to as processing unit 2a.
- the features of the chip 500a and processing unit 2a described below are also features of each of the chips 500 and processing units 2.
- Each of the tiles 4 in the processing unit 2a may participate in different types of barrier sync.
- A first type of barrier sync is an internal sync, in which only tiles 4 of the same processing unit 2a participate.
- a second type of sync is an external wired sync in which the sync zone for the sync, in addition to including tiles 4 of processing unit 2a, also includes tiles 4 on one or more chips 500 that are accessible over local wired connections.
- the sync messages are exchanged between the chips 500 over dedicated wires used for the transmission of different types of sync message.
- the application data that is exchanged between the chips 500 during the exchange phase for an external wired sync is sent over PCIe connections between the chips 500 participating in the sync.
- a third type of sync is an external sync with host involvement.
- a host sync proxy (HSP) participates in the barrier sync by exchanging sync messages with the processing unit 2a, prior to an exchange phase in which data is exchanged between the host and the tiles 4 of the processing unit 2a.
- a fourth type of sync is an external packet-based sync in which the sync group for the sync, in addition to including tiles 4 of processing unit 2a, also includes tiles 4 on one or more chips 500 that are accessible over a packet-switched network (e.g. an Ethernet network).
- the sync messages are also sent over the same packet-switched network.
- a plurality of sync zones are provided for the processing unit 2a.
- Each sync zone is individually configurable to comprise different sync groupings of tiles 4.
- Each of the sync zones may be configured as an external sync zone (in which case the corresponding sync group includes tiles 4 of other processing units 2) for an external barrier synchronisation or as an internal sync zone (in which case the sync group for that sync zone is limited to tiles 4 of the processing unit 2a) for an internal barrier synchronisation.
- the sync zones may be categorised into different sets depending upon the hardware provided for that sync zone and, consequently, the types of sync that can be implemented using that sync zone.
- a first set of the sync zones are sync zones that may be configured for use for either the first type of sync discussed above (i.e. internal sync) or the second type of sync discussed above (i.e. external wired sync).
- the first 22 of these zones (labelled sync zones 1-22) belong to the first set of sync zones.
- sync zones 1 and 2 may be used for barrier synchronisations following which data exchange is carried out between the host and the tiles 4 of the processing unit 2a.
- a second set of the sync zones are sync zones that may be used either for the first type of sync discussed above (i.e. internal sync) or the fourth type of sync discussed above (i.e. external packet- based sync).
- the last eight of these zones (labelled sync zones 23-30) belong to the second set of sync zones.
- Each tile 4 has a sync request wire for each sync zone.
- the state of this wire is referred to herein as tile sync request state.
- when the state of the wire is set to indicate that a sync request is asserted by the tile 4, the resulting sync request is referred to as a tile sync request.
- Each tile 4 comprises an execution unit 52, which may control the state of the sync request wires. For any such wire, the signal output by the execution unit 52 and used to assert a tile sync request on that wire is referred to as the sync control signal.
- Each tile 4 also has a sync acknowledgment wire for each sync zone.
- the state of this wire is referred to herein as the internal sync acknowledgment state.
- the execution unit 52 is responsive to pulses generated in response to edges in the internal sync acknowledgment state. Such a pulse is referred to herein as a sync ack pulse.
- Aggregation circuitry is provided in the processing unit 2a for aggregating the sync request state of all of the tiles 4 in the processing unit 2a.
- the state of the signal output by each such unit of aggregation circuitry is referred to herein as aggregate sync request state, and a sync request signalled by the aggregate sync request state is referred to as an aggregate sync request.
- the aggregate sync request state of all of the tiles 4 of the processing unit 2a is referred to as internal aggregate sync request state and a sync request signalled by such state is referred to as an internal sync request.
- Such an internal sync request is provided as an input to the internal sync controller 55, which responds by outputting a corresponding internal sync acknowledgment. This internal sync acknowledgment is propagated to all of the tiles 4 of the processing unit 2a.
- for certain configured sync zones, the internal sync controller 55 outputs a sync request to the external sync controller (the GSP 58) in response to the internal sync request.
- This sync request is referred to as an external sync request.
- the GSP 58 responds by returning a sync acknowledgment to the internal sync controller 55.
- This returned acknowledgment is referred to as external sync acknowledgment.
- Figure 5 shows that the processing unit 2a includes sync controller circuitry 55 (shown as the IPU sync controller 55) between the tiles 4 and the GSP 58.
- the IPU sync controller 55 may also be referred to as the internal sync controller 55, since it acknowledges internal sync requests for internal barrier synchronisations without requiring input from the GSP 58.
- the IPU sync controller 55 receives internal sync requests represented by aggregate sync request state output by the tiles 4, and performs an action in dependence upon settings in a register 501 of the GSP 58.
- the settings in the register 501 define for each sync zone whether that sync zone is defined as internal or as external. Indications of the settings in register 501 are provided to the IPU sync controller 55 over interface 502 between the GSP 58 and the IPU sync controller 55. Any of the 30 sync zones may be defined as either external or internal.
- When an internal sync request is received at the IPU sync controller 55 and the sync zone for that sync request is defined in register 501 as being an external sync zone, the IPU sync controller 55 responds by providing an external sync request to the GSP 58 on an interface of the GSP 58 associated with the particular sync zone for the sync. As shown in Figure 5, the GSP 58 has a number of interfaces (labelled as IS0 to IS29), each of which is associated with one of the sync zones provided for the processing unit 2a. The sync controller 55 provides the external sync request over one of the interfaces (IS0 to IS29) that is associated with the same sync zone as the internal sync request.
- the GSP 58 will return an external sync acknowledgment, which is sent over the same one of the interfaces IS0 to IS29 over which the external sync request was provided.
- the sync controller 55 outputs an internal sync acknowledgement to each of the tiles 4 in the processing unit 2a.
- When an internal sync request associated with a particular sync zone is received at the IPU sync controller 55, if that sync zone is defined in register 501 as being an internal sync zone, the IPU sync controller 55 causes an internal sync acknowledgment to be sent to the tiles 4 of the processing unit 2a. The IPU sync controller 55 performs this action without waiting for an external sync acknowledgment from the GSP 58. The IPU sync controller 55 may, however, also pass an external sync request signal to the GSP 58, such that it is asserted on an interface of the GSP 58 that is associated with the sync zone. This enables the GSP 58 to log trace data for the sync.
- the IPU sync controller 55 includes a plurality of sets of wires, with each set of wires being associated with a different sync zone.
- Each set of wires includes at least a sync request wire, on which an internal sync request for the respective sync zone is received, and a sync acknowledgment wire on which an internal sync acknowledgment for the respective sync zone is sent to the tiles 4.
- each set of wires of the IPU sync controller 55 is also associated with a different one of the GSP 58 interfaces IS0 to IS29 and is used to pass external sync requests to the GSP 58 and receive external sync acknowledgments from the GSP 58 for the respective sync zone.
- In order to ensure that each tile 4 indicates in which sync zone it is to participate, each individual tile 4 also has a plurality of dedicated sync request wires, each of which is associated with one of the sync zones defined for the processing unit 2a. Each tile 4, when it is to participate in a barrier synchronisation associated with a particular sync zone, issues a tile sync request on a sync request wire associated with that sync zone. Each tile 4 also has a plurality of dedicated sync acknowledgment wires, each of which is associated with one of the sync zones defined for the processing unit 2a.
- Each tile 4, after issuing a tile sync request on a sync request wire for a sync zone, receives from the sync controller 55 the internal sync acknowledgment on its sync acknowledgment wire associated with that sync zone. In response, the tile 4 then progresses to the exchange phase following the barrier synchronisation.
- the tile 4 comprises a memory 51 for storing both instructions for execution by the execution unit 52 and data that the execution unit 52 is configured to perform operations on when executing the instructions.
- the memory 51 comprises a local program for its respective tile 4, where that local program comprises instructions to be executed by the execution unit 52.
- the set of instructions (or code) for each tile 4 comprise indications of synchronisation points at which the tile 4 is to participate in a barrier synchronisation.
- the indications comprise SYNC instructions, which are for execution by the respective execution unit 52 when the tile 4 reaches the synchronisation point in its local program.
- the tile 4 comprises a sync zone register 53, which stores, for each of the plurality of sync zones defined for processing unit 2, an indication as to whether or not the tile 4 belongs to the respective sync zone.
- the sync zone register 53 comprises a bitmap, where each bit indicates whether or not the tile 4 belongs to a different one of the sync zones.
- the execution unit 52 of the tile 4 may execute instructions to modify the indications held in the sync zone register 53.
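- Such a bitmap register could be modelled as follows. This is a minimal Python sketch under the assumption of 30 zones indexed from 0; the class and method names are invented for illustration and do not come from the description:

```python
# Illustrative model of a per-tile sync zone register held as a bitmap.
# Bit z set => the tile belongs to (is subscribed to) sync zone z.
NUM_ZONES = 30

class SyncZoneRegister:
    def __init__(self):
        self.bits = 0                       # no zones subscribed initially

    def subscribe(self, zone):
        self.bits |= (1 << zone)            # set the zone's bit

    def unsubscribe(self, zone):
        self.bits &= ~(1 << zone)           # clear the zone's bit

    def participates(self, zone):
        return bool(self.bits & (1 << zone))

reg = SyncZoneRegister()
reg.subscribe(5)        # tile joins one zone
reg.subscribe(23)       # and another
reg.unsubscribe(5)      # later leaves the first zone again
```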
- the sync zones to which each of the tiles 4 of the processing unit 2a belong are fixed for a particular application.
- the execution units 52 of one or more tiles 4 of the processing unit 2a execute instructions to modify the sync zone indications held in their registers 53 in order to change the sync zones to which they belong.
- the tile 4 comprises a data output interface 54, which is used for sending, during internal exchange phases, data to other tiles 4 belonging to the same processing unit 2a and for sending, during external exchange phases, data to destinations external to the device 500a.
- the tile 4 comprises a data input interface 59, which is used for receiving, during internal exchange phases, data from other tiles 4 belonging to the same processing unit 2a and for receiving, during external exchange phases, data from sources external to the device 500a.
- the tile 4 comprises a plurality of sync output interfaces 60, which are used for outputting tile sync request state from the tile 4 towards the internal sync controller 55.
- the tile 4 also comprises a plurality of sync input interfaces 61 for receiving internal sync acknowledgments from the sync controller 55 and notifying the execution unit 52.
- Each of the sync output interfaces 60 is associated with a different sync zone and is used for sending a tile sync request for its associated sync zone on a corresponding sync request wire.
- Each of the sync input interfaces 61 is associated with a different sync zone and is used for receiving an internal sync acknowledgment for its associated sync zone on a corresponding sync acknowledgment wire.
- although the tile 4 is shown as comprising only two sync output interfaces 60 and two sync input interfaces 61, in practice the tile 4 would comprise more than two (e.g. 30) of each type of interface 60, 61.
- the tile 4 is configured to output a tile sync request for a sync zone by setting the state of the relevant sync request wire to the opposite of the state of the corresponding sync acknowledgment wire for the sync zone. For example, if the internal sync acknowledgment signal for a particular sync zone is set low, in order to assert a tile sync request, the signal on the corresponding sync request wire is set high. Conversely, if the internal sync acknowledgment signal for a particular sync zone is set high, in order to assert a tile sync request, the signal on the corresponding sync request wire is set low.
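- This toggle-style signalling can be modelled in a few lines (an illustrative Python sketch, not the hardware; wire states are modelled as 0/1 integers and the class name is invented):

```python
class ToggleSync:
    def __init__(self):
        self.req_wire = 0
        self.ack_wire = 0

    def request_asserted(self):
        # A request is asserted whenever the two wires differ.
        return self.req_wire != self.ack_wire

    def issue_request(self):
        # Drive the request wire to the opposite of the ack wire.
        self.req_wire = 1 - self.ack_wire

    def acknowledge(self):
        # The ack wire toggles to match, deasserting the request.
        self.ack_wire = self.req_wire

t = ToggleSync()
t.issue_request()                       # req1: wires now differ (1 vs 0)
asserted_before = t.request_asserted()
t.acknowledge()                         # ack1: wires match again (1 vs 1)
t.issue_request()                       # req2: request wire driven low this time
```

Note that successive requests alternate the absolute level of the request wire, as in the req1/req2/req3 sequence of Figure 7; only the difference between the two wires carries meaning.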
- FIG. 7 illustrates a timeline over which a plurality of tile sync requests are issued by a tile 4.
- the sync request and acknowledgment signals shown are for a particular sync zone, labelled 'z', in which the tile 4 is configured to participate.
- the sync request wire, output interface 60, sync acknowledgment wire, and input interface 61 that are discussed are those for sync zone z.
- the state of the sync acknowledgment wire of the tile 4 is held low.
- the execution unit 52 of the tile 4 executes a SYNC instruction to cause a tile sync request (i.e. req1) to be asserted on the sync request wire.
- Circuitry of the output interface 60 causes the tile sync request (i.e. req1) to be issued by setting the state of the sync request wire to be opposite to the state of the sync acknowledgment wire. Since the acknowledgment wire is held in a low state, the tile sync request is issued by setting the state of the sync request wire to be high.
- the tile sync request is represented by the transition of the sync request signal from low to high.
- an internal sync acknowledgment (i.e. ack1) is received at the input interface 61 of the tile 4.
- the internal sync acknowledgment is detected when the sync acknowledgement wire changes state, i.e. when an edge in the received sync acknowledgment is detected.
- the sync request wire of the tile 4 is held in a high state.
- the internal sync acknowledgment 'ack1' is received at the tile 4 once the sync acknowledgment wire is also set to a high state.
- the sync request wire and the sync acknowledgment wire are then both held in the high state. Since both wires are held in the same state, the tile sync request (i.e. req1) is no longer asserted.
- the transition point at which ack1 is received, therefore, also reflects the point at which req1 is deasserted.
- the execution unit 52 moves into the exchange phase during which it may execute one or more SEND instructions to cause data from memory 51 to be sent over the data output interface 54.
- the execution unit 52 executes a further SYNC instruction to cause a further tile sync request (i.e. req2) to be issued.
- the further tile sync request is issued by setting the sync request wire to a low state.
- the tile sync request remains asserted until the corresponding acknowledgment (i.e. ack2) is received when the sync acknowledgment wire is set to a low state.
- the execution unit 52 proceeds to execute a further SYNC instruction causing the next tile sync request (i.e. req3) to be issued by setting the sync request wire to be set to a high state.
- This tile sync request remains asserted until the sync acknowledgment wire is also set to a high state, marking the receipt of ack3.
- the values of the sync zone register 53 may be used to control the state of the sync request wires.
- the control of the state of the sync request wires in dependence upon the values within the sync zone register 53 enables a barrier sync for a particular sync zone to proceed even if the tile 4 does not belong to that particular sync zone and is not configured to participate.
- the sync zone register 53 comprises an indication that the tile 4 does not belong to that sync zone.
- the sync output interface 60 for that particular sync zone is configured to output a tile sync request in response to the indication in the register 53 that the tile 4 does not belong to that sync zone.
- the sync output interface 60 continues to output the tile sync request irrespective of the activity of the execution unit 52.
- the execution unit 52 may continue to execute additional instructions for performing computations during a compute phase or may execute SYNC instructions to participate in barrier synchronisations with respect to other sync zones. In this way, the tile 4 may operate asynchronously with respect to the compute-exchange cycles of the other tiles 4 that do belong to the sync zone in which the tile 4 of Figure 6 is not participating.
- the sync output interface 60 comprises an inverter 82 configured to invert the internal sync acknowledgment signal, so as to provide a signal that may be output to provide a tile sync request when the tile 4 is not participating in the sync zone associated with the interface 60.
- the sync output interface 60 comprises a multiplexer 81, which is controlled in dependence upon the indication in the register 53 to select between outputting the inverted form of the internal sync acknowledgment supplied by the inverter 82 or outputting a sync signal controlled by the execution of SYNC instructions by the execution unit 52.
- a control signal reflecting the indication in register 53 that the tile 4 does not participate is received at multiplexer 81 and controls multiplexer 81 to select the inverted form of the internal sync acknowledgment.
- the interface 60 outputs this inverted form of the internal sync acknowledgment on the sync request wire. Since the inverted form of the internal sync acknowledgment reflects an asserted tile sync request, in this way, when the tile 4 is not participating in the sync zone, the tile sync request for the sync zone is continually asserted, irrespective of the activity of the execution unit 52.
- a control signal reflecting the indication in register 53 that the tile 4 does participate is received at the multiplexer 81 and controls multiplexer 81 such that the state of the sync request wire is controlled by the execution unit 52.
- the execution unit 52 provides a sync signal, and sets this sync signal to high in order to assert a tile sync request and sets the sync signal to low in order to deassert a sync request.
- the XOR gate 83 is used to provide the tile sync request according to the signalling scheme for signalling sync requests to the sync controller 55.
- the XOR gate 83 receives the internal sync ack signal and either outputs this sync ack, in the case that the execution unit 52 is not asserting the sync control signal, or outputs an inverted form of the internal sync ack, in the case that the execution unit is asserting the sync control signal.
- the multiplexer 81 when the multiplexer 81 is controlled to select the output from the XOR gate 83, the multiplexer 81, and hence the output interface 60, outputs a tile sync request when controlled to do so by the execution unit 52.
- the execution unit 52 executes SYNC instructions, which cause the state of the sync request wire output from interface 60 to be set to be the opposite to the current state of the sync acknowledgment wire.
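- The combinational behaviour of the inverter 82, XOR gate 83 and multiplexer 81 described above can be sketched as a single function (an illustrative Python model with signals as 0/1 integers, not a definitive implementation):

```python
def sync_output(ack, participates, sync_control):
    """Model of the sync output interface 60 driving the sync request wire."""
    inverted_ack = 1 - ack        # inverter 82: a permanently-asserted request
    xor_out = ack ^ sync_control  # XOR gate 83: request follows the sync control signal
    # Multiplexer 81: a non-participating tile always outputs the inverted ack,
    # so its tile sync request is continually asserted.
    return xor_out if participates else inverted_ack

# Non-participating tile: the request wire is always opposite the ack wire.
non_participating = [sync_output(a, False, 0) for a in (0, 1)]
# Participating tile: the request wire differs from the ack wire only while
# the execution unit asserts the sync control signal.
participating_idle = sync_output(0, True, 0)
participating_req = sync_output(0, True, 1)
```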
- FIG 8B illustrates an example of a sync input interface 61, which receives the internal sync acknowledgement signal and in dependence upon the state of this signal, outputs sync ack pulses to the execution unit 52.
- the example sync input interface 61 may be associated with any of the sync zones, and each of the sync input interfaces 61 for the different sync zones comprises the same circuitry.
- the example sync input interface 61 may be associated with the same sync zone as the example sync output interface 60 that is illustrated in Figure 8A.
- the interface 61 comprises a flip flop 85, which stores the state of the internal sync ack signal received at the interface 61.
- the flip flop 85 outputs this latched state.
- the interface 61 also comprises an XOR gate 86, which receives the state of the sync ack wire as one input and receives the output of the flip flop 85 as another input.
- the XOR gate 86 outputs a high signal when these two inputs differ.
- when the state of the sync ack wire for interface 61 changes, the state of this sync ack wire will temporarily not match the output of the flip flop 85.
- the XOR gate 86 then receives one high input and one low input, and as a consequence outputs a high signal.
- the interface 61 provides a pulse (the sync ack pulse) in response to an edge in its received sync ack signal.
- the sync ack pulse is output from the interface 61 to the execution unit 52. If the execution unit 52 has executed a SYNC instruction for the sync zone corresponding to the sync ack pulse, it stalls whilst waiting for this sync ack pulse.
- In response to receipt of the sync ack pulse, the execution unit 52 passes the barrier and proceeds to the exchange phase in which data is exchanged between its tile 4 and the other tiles 4. If the execution unit 52 has not executed such a SYNC instruction, but is part of a tile 4 that is indicated in the sync zone register 53 as not participating in the sync zone associated with the sync ack pulse, the execution unit 52 ignores the sync ack pulse.
- the internal sync ack signal that is received at the interface 61 is also provided to the corresponding sync output interface 60 that is associated with the sync input interface 61.
- this internal sync ack signal is provided as an input both to the XOR gate 83 and to the inverter 82 and is, in this way, used to provide the tile sync request signal.
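- The edge detection performed by the flip flop 85 and XOR gate 86 can be sketched as follows (an illustrative Python model clocked by successive calls; the class name is invented):

```python
class EdgeDetector:
    def __init__(self):
        self.latched = 0                    # flip flop 85: previous ack state

    def clock(self, ack_wire):
        pulse = ack_wire ^ self.latched     # XOR gate 86: high when states differ
        self.latched = ack_wire             # flip flop captures the current state
        return pulse

ed = EdgeDetector()
# A pulse is produced on each edge (0->1 and 1->0) of the ack signal.
pulses = [ed.clock(v) for v in [0, 0, 1, 1, 0]]
```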
- Each of the tiles 4 in the processing unit 2a operates similarly to assert tile sync requests in dependence upon the state of its sync acknowledgment wires.
- Aggregation circuitry is provided in the processing unit 2a for aggregating the tile sync requests output by the tiles 4 to provide an internal sync request that is provided to the sync controller 55.
- the aggregation circuitry performs such aggregation for each sync zone to provide an aggregate sync request state for each sync zone.
- the aggregation circuitry is configured to aggregate the state of the tile sync request outputs such that the aggregate signal changes state in response to the tile sync request state of each tile 4 changing state.
- when each tile's sync request wire for a particular sync zone in the processing unit 2 is set to a low state, the aggregate signal will also be low.
- the aggregation circuitry causes the aggregate signal to change state to a high state in response to the state of all of the tile sync request wires for the sync zone being set to a high state.
- Figure 9 illustrates an example as to how the tile sync requests are aggregated across the processing unit 2a.
- the tiles 4 are arranged in pairs (referred to as 'pair tiles'), with the pairs being arranged in columns.
- the aggregation circuitry comprises sets of circuitry 910 and circuitry 920.
- Each of the pair tiles is associated with a set of circuitry 910 that is configured to aggregate the sync request state from its associated tiles 4.
- Each set of circuitry 910 receives sync request state from upstream in the sync network and aggregates this state with the sync request state output by the associated pair tiles 4.
- circuitry 920 is configured to receive the aggregated sync request state from the different ones of the columns.
- the aggregation circuitry 910, 920 is configured to perform the aggregation of sync request state in dependence upon the state of the internal sync acknowledgment signal (which is output by the sync controller 55) for the sync zone.
- Figure 10 illustrates how an internal sync acknowledgment may be distributed to different tiles 4 of the processing unit 2a and to the aggregation circuitry 910, 920.
- the sync controller 55 changes the state of the internal sync acknowledgment signal in response to receipt of an internal sync request.
- the internal sync acknowledgement signal is provided to all of the tiles 4, the aggregation circuitry 910 and the aggregation circuitry 920.
- the internal sync acknowledgment signal output by the sync controller 55 is provided with the same state on all of the sync acknowledgment wires used to distribute the signal to the tiles 4 and the circuitry 910, 920.
- the aggregation circuitry 910 comprises an OR gate 1100, and an AND gate 1110.
- Each of the gates 1100, 1110 receives sync request state from each of the two pair tiles 4 associated with circuitry 910.
- Each of the gates 1100, 1110 additionally receives an upstream sync request signal, which reflects aggregated sync request state for tiles 4 further up the relevant column.
- the circuitry 910 comprises a multiplexer 1120, which is controlled in dependence upon the internal sync acknowledgment signal to select between the output of the OR gate 1100 and the output of the AND gate 1110. If the internal sync acknowledgment signal is high, the OR gate 1100 is selected, whereas if the internal sync acknowledgment signal is low, the AND gate 1110 is selected. The consequence of this selection is that the circuitry 910 only outputs a signal that is opposite to the internal sync acknowledgment signal if all of the tile sync request signals (i.e. the signals from both tiles 4 and the upstream sync request signal) received at circuitry 910 have the opposite state to the internal sync acknowledgment signal.
- the OR gate 1100 is selected, and so the tile sync request state output by circuitry 910 will also be high unless all inputs to the OR gate 1100 are low.
- the AND gate 1110 is selected, and so the tile sync request state output by circuitry 910 will also be low, unless all inputs to the AND gate 1110 are high.
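The mux-selected OR/AND pair described above can be modelled as a single combinational function: the output of circuitry 910 only takes the state opposite to the acknowledgment once every input already has that opposite state. A behavioural sketch in Python (function and parameter names are illustrative, not taken from the patent):

```python
def aggregate_910(ack, upstream_req, tile_req_a, tile_req_b):
    """Behavioural model of aggregation circuitry 910.

    ack          -- internal sync acknowledgment state (0 or 1)
    upstream_req -- aggregated request state from higher in the column
    tile_req_*   -- sync request wire states of the two pair tiles
    """
    inputs = (tile_req_a, tile_req_b, upstream_req)
    if ack:
        # ack high: mux 1120 selects OR gate 1100; the output stays
        # high (matching ack) until every input has dropped low
        return int(any(inputs))
    # ack low: mux 1120 selects AND gate 1110; the output stays
    # low (matching ack) until every input has gone high
    return int(all(inputs))
```

Either way, the output only flips to the opposite of the acknowledgment state once all three inputs have flipped, which is the toggling-scheme condition for an asserted aggregate request.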
- Multiple instances of circuitry 910 are chained together to provide aggregate sync request state for a column.
- Figure 12 illustrates how the aggregate sync request state may be provided for a column 1200 of tiles 4.
- Each of the sets of circuitry 910b-e is configured to, as shown in Figure 11, receive the tile sync request state output by its associated tiles 4, and additionally receive the aggregate sync request state (also referred to as the upstream sync request state) provided by the adjacent set of circuitry 910.
- circuitry 910b receives the upstream sync request state output by circuitry 910a
- circuitry 910c receives the upstream sync request state output by circuitry 910b, and so on.
- Since circuitry 910a is located at the top of the column 1200, with no further aggregation circuitry 910 located above, the circuitry 910a receives as its upstream sync request state, the output of invertor 1210.
- the invertor 1210 inverts the sync acknowledgment signal and in so doing provides as its output, an asserted sync request signal.
- Each of the sets of circuitry 910a-e will output an asserted sync request once it receives an asserted tile sync request from its associated tiles 4 and an asserted sync request from higher in the column 1200. In this way, once all of the tiles 4 also provide an asserted sync request signal, the aggregate output by the circuitry 910e will also be the same asserted sync request signal.
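The chaining down a column can be sketched as a fold over the tile pairs, seeded by the inverted acknowledgment from invertor 1210. A self-contained illustrative model (names are assumptions, not from the patent):

```python
def column_aggregate(ack, tile_req_pairs):
    """Model of a column of chained circuitry 910 instances (Figure 12).

    ack            -- internal sync acknowledgment state (0 or 1)
    tile_req_pairs -- request wire states of each pair of tiles,
                      ordered from the top of the column downwards
    """
    upstream = 1 - ack  # invertor 1210 seeds an asserted request
    for req_a, req_b in tile_req_pairs:
        inputs = (req_a, req_b, upstream)
        # mux 1120: OR gate when ack is high, AND gate when ack is low
        upstream = int(any(inputs)) if ack else int(all(inputs))
    return upstream  # output of the bottom instance (e.g. 910e)
```

With ack low, the column output only goes high once every tile in the column has driven its request wire high; a single tile that has not yet reached the barrier holds the aggregate low.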
- Figure 13 illustrates an example of the aggregation circuitry 920, and shows how the circuitry 920 aggregates the sync request state of different columns.
- the circuitry 920 includes an OR gate 1300 and an AND gate 1310. Both the AND gate 1310 and the OR gate 1300 receive as inputs, the aggregated sync request state from two of the columns, and further aggregated sync request state.
- the further aggregated sync request state is shown in Figure 13 as "Exchange aggregated sync request state" since the state is aggregated in the direction of the data exchange wiring, which runs perpendicular to the columns.
- the exchange aggregated sync request state may be output by a further instance of the circuitry 920 that is upstream in the sync network or, if there are no further instances of the circuitry 920 that are upstream, may be provided by an inverted form of the internal sync acknowledgment signal.
- the circuitry 920 comprises a multiplexer 1320, which is controlled in dependence upon the internal sync acknowledgment signal to select between the output of the OR gate 1300 and the output of the AND gate 1310. If the internal sync acknowledgment signal is high, the OR gate 1300 is selected, whereas if the internal sync acknowledgment signal is low, the AND gate 1310 is selected. The consequence of this selection is that the circuitry 920 only outputs a signal that is opposite to the internal sync acknowledgment signal if all of the aggregate sync request state (i.e. the aggregate sync request state from both associated columns and the exchange aggregated sync request state) received at circuitry 920 have the opposite state to the internal sync acknowledgment signal.
- the OR gate 1300 is selected, and so the aggregate sync request state output by circuitry 920 will also be high unless all inputs to the OR gate 1300 are low.
- the AND gate 1310 is selected, and so the aggregate sync request state output by circuitry 920 will also be low, unless all inputs to the AND gate 1310 are high.
- Multiple instances of circuitry 920 are chained together to provide aggregate sync request state for the processing unit 2a.
- the consequence of the aggregation performed by the instances of the circuitry 910 and the instances of the circuitry 920 is that the aggregate sync request state that is provided represents an internal sync request when all of the tiles 4 have set their sync request output signal appropriately.
- Figure 14A illustrates circuitry within the sync controller 55.
- Figure 14A shows the circuitry provided in sync controller 55 that is associated with a single sync zone.
- sync controller 55 comprises a separate instance of such circuitry for each sync zone.
- the circuitry within the sync controller 55 provides an internal sync ack in response to receipt of an internal sync request.
- the circuitry of the sync controller 55 also communicates with the GSP 58 to send and receive external sync request and acknowledgment signals.
- the external sync request and acknowledgment signals are provided according to an alternative sync signalling scheme, which is described in more detail with reference to Figure 14B.
- a register 501 is provided in the GSP 58 and indicates for each of the sync zones supported for the processing unit 2a, which of those zones is configured as being internal (including only tiles 4 of the processing unit 2a) and which is configured as being external (also including tiles 4 of other processing units 2).
- If the sync zone is configured as being internal, a signal indicating as such (shown as the 'enable internal sync' signal) is provided by circuitry of the GSP 58 to the OR gate 1440. Consequently, the OR gate 1440 outputs a high signal to the multiplexer 1450. The signal is used to control the multiplexer 1450 to output the internal sync request state (i.e. the aggregate sync request state received at the controller 55).
- the internal sync ack state is consequently set to be the same as the internal sync request state. In this way, when the sync zone is configured to be internal, the sync controller 55 immediately acknowledges the internal sync request by setting the internal sync acknowledgment state to be the same as the internal sync request state.
- If the sync zone is configured as being external, the enable internal sync signal is set low and, therefore, the output of the OR gate 1440 will be set to be low until the GSP 58 provides an external sync acknowledgment signal.
- the sync controller 55 provides an external sync request to the GSP 58.
- the external sync requests and sync acknowledgments are represented according to a different sync scheme to the scheme (exemplified in Figure 7) that is used for the tile sync requests and acknowledgments and for the internal sync request and acknowledgments.
- This Figure illustrates an example of a sync handshake between a downstream propagation node and an upstream propagation node in the sync network.
- the downstream propagation node may, for example, be the GSP 58 on one chip 500, whilst the upstream propagation node is the GSP 58 on another chip 500.
- Figure 14B illustrates the state of an external sync request signal and the state of an external sync acknowledgment signal. These are each provided on separate wires and so the state of the signals reflect the state of the wires.
- the downstream propagation node provides an external sync request (shown as 1. sync request asserted) by setting the external sync request signal to be high. This causes an external sync request to be detected at the upstream propagation node.
- the downstream propagation node will keep the external sync request asserted until it receives an external sync acknowledgment. In effect, the downstream propagation node stalls until it receives the external sync acknowledgment.
- the upstream propagation node in response to the external sync request, provides an external sync acknowledgment (shown as 2. sync acknowledgment asserted) by setting the external sync acknowledgment signal to be high. This causes an external sync acknowledgment to be detected at the downstream propagation node.
- the downstream propagation node in response to the external sync acknowledgment, deasserts the external sync request (shown as 3. sync request deasserted) by setting the external sync request signal low.
- the upstream propagation node detects that the sync request signal has been deasserted and, in response to the deassertion of the sync request signal, deasserts the external sync acknowledgment (shown as 4. sync acknowledgment deasserted) by setting the state of the external sync acknowledgment signal to be low. With the external sync acknowledgment signal deasserted, the sync handshake between the two nodes of the sync network is then complete.
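The four steps of this handshake can be sketched as a sequence of wire transitions, each triggered by the other node's previous transition. An illustrative model (not from the patent) that records the transitions in order:

```python
def external_sync_handshake():
    """Model of the Figure 14B handshake between a downstream node
    (which drives the request wire) and an upstream node (which
    drives the acknowledgment wire). Returns the wire transitions
    in the order they occur."""
    transitions = []
    req = ack = 0

    req = 1                      # 1. sync request asserted (downstream)
    transitions.append(("req", req))
    if req:                      # upstream detects the request
        ack = 1                  # 2. sync acknowledgment asserted
        transitions.append(("ack", ack))
    if ack:                      # downstream detects the acknowledgment
        req = 0                  # 3. sync request deasserted
        transitions.append(("req", req))
    if not req:                  # upstream detects the deassertion
        ack = 0                  # 4. sync acknowledgment deasserted
        transitions.append(("ack", ack))
    return transitions           # handshake complete, both wires low
```

Because the downstream node holds its request until acknowledged, and the upstream node holds its acknowledgment until the request drops, each node can observe every step of the other even with arbitrary wire delays.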
- the sync controller 55 comprises circuitry for converting an internal sync request to an external sync request and for converting the external sync acknowledgment to an internal sync acknowledgement.
- the circuitry comprises a XOR gate 1410, which is part of the circuitry for generating the external sync request from an internal sync request.
- the XOR gate 1410 receives as one input, the aggregate sync request state of the processing unit 2a, and as another input, the current state of the internal sync ack signal.
- the XOR gate 1410 outputs a high signal (indicating a sync request) if there is a mismatch between this aggregate sync request state and the sync ack state.
- Such a mismatch is indicative that a tile sync request has been asserted by all of the tiles 4 of the processing unit 2a.
- When a high signal is output from the XOR gate 1410, it is provided to the OR gate 1420, which responds by also outputting a high signal, which is provided to the AND gate 1430.
- the AND gate 1430 receives as one input, the output of OR gate 1420, and as another input the inverted state of the external sync acknowledgment signal.
- the AND gate 1430 therefore, only outputs a high signal if the external sync acknowledgment signal is currently low (indicating that the external sync acknowledgment is not currently asserted).
- the output of the AND gate 1430 provides the external sync request signal to the GSP 58.
- the GSP 58 is configured to exchange external sync requests and acknowledgments with additional GSPs 58 in the system 550.
- the GSP 58 provides the external sync ack signal, which is provided to invertor 1460 and the OR gate 1440.
- the invertor 1460 inverts the external sync ack signal (which is now set high) to produce a low signal, which is provided to the AND gate 1430.
- the AND gate 1430 outputs a low signal, causing the external sync request to be deasserted.
- the OR gate 1440 provides a high output to the multiplexer 1450. This signal is used to control the multiplexer 1450 such that the internal sync request state is output from the multiplexer 1450. Therefore, in response to the external sync ack signal, the multiplexer 1450 is controlled to set the internal sync ack state to be the same as the internal sync request state, thus causing an internal sync acknowledgement to be sent to the tiles 4.
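Taken together, the gates of Figure 14A amount to a small combinational function. A hedged behavioural sketch follows (signal names are chosen here for illustration, and any further input of OR gate 1420 is omitted):

```python
def sync_controller_eval(internal_req, prev_internal_ack,
                         external_ack, internal_zone):
    """One evaluation of the Figure 14A circuitry for a sync zone.

    Returns (external_req, internal_ack)."""
    # OR gate 1440 / mux 1450: the internal ack follows the internal
    # request when the zone is internal or an external ack is present
    if internal_zone or external_ack:
        internal_ack = internal_req
    else:
        internal_ack = prev_internal_ack
    # XOR gate 1410: a pending request is a req/ack mismatch
    pending = internal_req ^ internal_ack
    # AND gate 1430 (fed via OR gate 1420 and invertor 1460): only
    # raise the external request while the external ack is deasserted
    external_req = int(pending and not external_ack)
    return external_req, internal_ack
```

For an internal zone the acknowledgment tracks the request immediately, so no external request is ever raised; for an external zone the request is held high until the GSP 58 returns the external acknowledgment, which both deasserts the external request and releases the internal acknowledgment to the tiles.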
- Figure 15 illustrates how different groupings of tiles 4 may be subscribed to three different sync zones.
- Figure 15 shows the indications in each of registers 53 for these three sync zones.
- the registers 53 for tile 4a and tile 4b indicate that these tiles belong to a first sync zone (labelled as 'Z1') and to a third sync zone (labelled as 'Z3'), but do not belong to a second sync zone (labelled as 'Z2').
- the registers 53 for tile 4c and tile 4d indicate that these tiles belong to the second sync zone (labelled as 'Z2') and to the third sync zone (labelled as 'Z3'), but do not belong to the first sync zone (labelled as 'Z1').
- the sync aggregation circuitry 1500 shown in Figure 15 may comprise the aggregation circuitry 910 and 920 discussed above.
- Z1 is shown as including a group 1510a of two tiles 4a, 4b and Z2 is shown as including a group 1510b of two tiles 4c, 4d.
- tile sync request wires associated with Z1 are shown in Figure 15 as SRZ1.
- the assertion of a tile sync request is represented by setting the state of the relevant sync request wire to be opposite to the state of the relevant sync acknowledgment wire.
- Since tiles 4a and 4b do belong to Z1, these tiles 4a, 4b only assert a tile sync request on their sync request wire for Z1 when they reach a barrier synchronisation that is associated with Z1 in their compiled code set.
- the execution unit 52 of the respective tile 4a, 4b executes a SYNC instruction taking an indication of Z1 as an operand, which causes a tile sync request to be asserted on the sync request wire of the tile 4 that is associated with Z1.
- At this point, all of the tiles 4 (including those not belonging to Z1) in the processing unit 2a are asserting a tile sync request for Z1.
- the sync aggregation circuitry 1500 aggregates the tile sync requests to provide the internal sync request (shown as AZ1) for Z1 to the sync controller 55.
- the sync distribution wiring shown in Figure 16 may comprise the wiring shown in Figure 10 for providing internal sync acknowledgments to the tiles 4.
- the sync controller 55 asserts an internal sync acknowledgment signal (shown as SAZ1) that is associated with Z1. SAZ1 is provided by sync distribution wiring 1600 to each of the tiles 4a-d.
- the execution units 52 of tiles 4a, 4b pass the barrier synchronisation and proceed to the exchange phase. If the exchange phase is an internal exchange phase, one or more of the execution units 52 of tiles 4a, 4b execute instructions to exchange data between tiles 4a, 4b. If the exchange phase is an external exchange phase, one or more of the execution units 52 of tiles 4a, 4b execute instructions to exchange data with devices external to the device 500a.
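The effect of the subscription registers 53 on a barrier can be sketched as follows: a tile that does not belong to a zone asserts its request wire for that zone without waiting, so only subscribed tiles gate the barrier. This is an illustrative model under that assumption, not the patent's implementation:

```python
def tile_request_state(ack, subscribed, at_barrier):
    """State a tile drives on its request wire for one sync zone.
    Asserting means driving the opposite of the acknowledgment wire."""
    if subscribed and not at_barrier:
        return ack        # not yet asserted: wire matches ack
    return 1 - ack        # asserted (non-members assert immediately)

# Zone Z1 with ack low: tiles 4a, 4b subscribed and at the barrier,
# tiles 4c, 4d not subscribed -- every request wire reads asserted,
# so the aggregate internal sync request AZ1 is raised.
ack = 0
tiles = [(True, True), (True, True), (False, False), (False, False)]
assert all(tile_request_state(ack, s, b) == 1 - ack for s, b in tiles)
```

A subscribed tile that has not yet executed its SYNC instruction is the only thing that can hold the aggregate request back.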
- Since tiles 4c and 4d do belong to Z2, these tiles 4c, 4d only assert a tile sync request on their sync request wire for Z2 when they reach a barrier synchronisation that is associated with Z2 in their compiled code set.
- the execution units 52 of the tiles 4c, 4d each execute a SYNC instruction taking an indication of Z2 as an operand. Each such SYNC instruction causes the logic in the respective tile 4 to assert an internal sync request on its sync request wire for Z2.
- the sync aggregation circuitry 1500 provides the aggregated sync request (shown as AZ2) for Z2 to the sync controller 55.
- Figure 16 illustrates how sync acknowledgments for the Z2 sync are returned to the tiles 4.
- the sync controller 55 asserts an internal sync acknowledgment signal (shown as SAZ2) that is associated with Z2.
- SAZ2 is provided by sync distribution wiring 1600 to each of the tiles 4a-d.
- the execution units 52 of tiles 4c, 4d pass the barrier synchronisation and proceed to the exchange phase. If the exchange phase is an internal exchange phase, one or more of the execution units 52 of tiles 4c, 4d execute instructions to exchange data between tiles 4c, 4d. If the exchange phase is an external exchange phase, one or more of the execution units 52 of tiles 4c, 4d execute instructions to exchange data with devices external to the device 500a.
- each of the tiles 4a-d of the processing unit 2a issues a tile sync request when it reaches a barrier synchronisation associated with Z3.
- the execution unit 52 of each tile 4 executes a SYNC instruction taking an indication of Z3 as an operand, thus causing a tile sync request to be issued on the sync request wire of the respective tile 4.
- the sync aggregation circuitry 1500 aggregates the state of the sync request wires and provides an internal sync request (AZ3) to the sync controller 55.
- the sync controller 55 in response to receipt of AZ3, causes an internal sync acknowledgment to be returned to each of the tiles 4a-d in the processing unit 2a.
- the sync distribution wiring 1600 causes the internal sync acknowledgment signal to be asserted on the sync acknowledgment wires associated with Z3. Since all of these tiles 4a-d belong to Z3, in response to the internal sync acknowledgment, the execution unit 52 of each tile 4a-d passes the barrier synchronisation and enters the exchange phase (which may be an internal or an external exchange phase).
- the sync controller 55 will, if the sync zone for which an internal sync request is received is configured as an internal sync zone, acknowledge the sync request without providing an external sync request to the GSP 58. However, if the sync zone is configured as an external sync zone, the sync controller 55 will forward the external sync request to the GSP 58 and await receipt of an external sync acknowledgment from the GSP 58 before forwarding the internal sync acknowledgment to the tiles 4.
- the GSP 58 itself contains different configuration settings that indicate how external sync requests should be propagated for different external sync zones.
- the sync network 700 includes a sync master 710 and multiple sync propagation nodes 720a, 720b, 720c.
- each of the sync master 710 and the sync propagation nodes 720a, 720b, 720c is a GSP 58.
- the sync network 700 further comprises a plurality of sets of slave nodes 730a, 730b, 730c, 730d from which sync requests originate.
- the slave nodes 730a, 730b, 730c, 730d together form a sync group defined for the sync network 700.
- the tiles 4 function as the slave nodes for a barrier sync, and the sync requests which originate from those slave nodes are the tile sync requests discussed above.
- the slave nodes are divided into different sets. For example, there is a first set of slave nodes 730a, a second set of slave nodes 730b, a third set of slave nodes 730c, and a fourth set of slave nodes 730d.
- each of the sets of slave nodes 730a, 730b, 730c, 730d are tiles 4 of a different processing unit 2a.
- Each slave node issues a sync request upstream in the sync network 700.
- the sync requests from a group of slave nodes are aggregated and provided to a node higher in the sync network.
- sync propagation node 720a is associated with the first set of slave nodes 730a.
- sync master 710 and sync propagation nodes 720a, 720b, 720c are GSPs 58
- each of the sets of slave nodes are tiles 4 on the same chip 500 as their associated GSP 58.
- Sync propagation nodes 720b, 720c receive aggregated sync requests originating from their associated slave nodes 730b, 730c, but do not receive sync requests from other sync propagation nodes. In response to receipt of a sync request originating from its associated slave nodes 730b, 730c, each sync propagation node 720b, 720c propagates a sync request upstream in the sync network 700 to sync propagation node 720a.
- Sync propagation node 720a waits until it receives a sync request from each of its downstream nodes. These downstream nodes comprise the sync propagation nodes 720b, 720c and the set of slave nodes 730a associated with sync propagation node 720a. When sync propagation node 720a has received all of the sync requests from each of its downstream nodes, it issues a sync request to the sync master 710.
- the sync master 710 waits until it receives a sync request from each of its downstream nodes. These downstream nodes comprise the sync propagation node 720a and the set of slave nodes 730d associated with the sync master 710. When the sync master 710 has received all of the sync requests from each of its downstream nodes, it issues sync acknowledgments back to the sync propagation node 720a and to the slave nodes 730d.
- the sync propagation node 720a upon receiving a sync acknowledgment from the sync master 710, issues sync acknowledgments to each of the downstream sync propagation nodes 720b, 720c and to its associated slave nodes 730a. Likewise, the sync propagation nodes 720b, 720c, in response to receipt of these sync acknowledgments, each issue sync acknowledgments to their associated slave nodes 730b, 730c. All of the slave nodes 730a-d of the sync network 700, in response to receipt of the sync acknowledgments, pass the barrier synchronisation and exchange data during the exchange phase.
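The behaviour of the sync network can be sketched as a tree walk: the master releases the barrier only once requests have arrived from every downstream branch, after which acknowledgments fan back out to every node. An illustrative model (node names follow Figure 19, but the dictionary encoding is an assumption made here):

```python
def sync_round(children, requested):
    """children  -- maps each node to its downstream nodes
    requested -- set of slave (leaf) nodes that have asserted a request
    Returns the set of nodes the acknowledgment reaches: empty while
    any slave in the sync group has not yet requested."""
    def all_requested(node):
        kids = children.get(node, [])
        if not kids:                      # leaf: a slave node
            return node in requested
        return all(all_requested(k) for k in kids)

    if not all_requested("master"):
        return set()                      # the master stalls the barrier
    released = set()
    def ack(node):                        # acks propagate back downstream
        released.add(node)
        for k in children.get(node, []):
            ack(k)
    ack("master")
    return released

tree = {"master": ["720a", "730d"],
        "720a": ["720b", "720c", "730a"],
        "720b": ["730b"], "720c": ["730c"]}
```

With one set of slave nodes still short of the barrier, no acknowledgment is issued anywhere; once all four sets have requested, every node in the network is released.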
- the example in Figure 19 shows a specific arrangement of a sync network 700 in which the sync master 710 receives a sync request from only one downstream sync propagation node 720a.
- the sync master 710 may receive sync requests from more than one downstream sync propagation node.
- In the example, sync propagation node 720a receives sync requests from two downstream sync propagation nodes 720b, 720c; alternatively, it may receive sync requests from a different number of downstream sync propagation nodes.
- Figure 17 illustrates a system 550 comprising a plurality of devices 500a-c.
- Figure 17 shows how the GSPs 58 of these devices 500a-c exchange external sync requests and external sync acknowledgments at an external barrier synchronisation.
- Each processing unit 2a-c issues an external sync request to its associated GSP 58.
- Such an external sync request is issued by the sync controller 55 of the processing unit 2 when that sync controller 55 receives aggregate sync request state (i.e. an internal sync request) indicating that each of the tiles 4 of its processing unit 2 has issued a tile sync request.
- Each of the external sync requests shown in Figure 17 as being sent by the sync controller 55 to the GSP 58 is associated with the same sync zone.
- Each GSP 58 stores configuration settings for different sync zones indicating how it will respond to received external sync requests from those zones. These configuration settings indicate which interfaces of the GSP 58 are enabled for particular sync zones and the directionality (i.e. whether sync requests are sent or received on those interfaces) for the enabled interfaces.
- the GSP 58 of device 500b is configured to, in response to receipt of the external sync request from the sync controller 55 of processing unit 2b, propagate the external sync request upstream to GSP 58 of device 500a.
- the GSP 58 of device 500a is configured to, in response to receipt of both the external sync request from the sync controller 55 of processing unit 2a and the external sync request from the GSP 58 of device 500b, propagate an external sync request upstream to GSP 58 of device 500c.
- the GSPs 58 of devices 500a and 500b, therefore, both act as intermediate nodes (i.e. propagation nodes) in the sync network.
- the GSP 58 of device 500c is configured to receive the external sync request from GSP 58 of device 500a and an external sync request from the sync controller 55 of device 500c.
- the GSP 58 of device 500c issues an external sync acknowledgment to the sync controller 55 of device 500c and an external sync acknowledgment to the GSP 58 of device 500a.
- the GSP 58 of device 500c therefore, acts as the master node for the sync network.
- the GSP 58 of device 500a issues external sync acknowledgments to the sync controller 55 of device 500a and to the GSP 58 of device 500b.
- the GSP 58 of device 500b issues an external sync acknowledgement to the sync controller 55 of device 500b.
- Each of the sync controllers 55 of the devices 500a-c, in response to receipt of the respective sync acknowledgments, issues sync acknowledgments to all of the tiles 4 of its respective processing unit 2a-c, as described above with respect to Figure 14A.
- the tiles 4 belonging to the sync zone (as indicated in their sync zone register 53), in response to receipt of such external sync acknowledgments, proceed to the external exchange phase.
- The sync zone may, for example, be sync zone Z3 that was discussed above with reference to Figures 15 and 16.
- Figure 18 illustrates how data exchange may be performed between different processing units 2 during external exchange phases for sync zones Z1 and Z2.
- sync zone Z1 that was discussed above with reference to Figures 15 and 16 comprises, in addition to tiles 4a and 4b, tiles 4e and 4f, which belong to processing unit 2b.
- the registers 53 of tiles 4e and 4f comprise indications that these tiles belong to sync zone Z1.
- Tiles 4a and 4b, when they reach the barrier synchronisation for Z1, each issue a tile sync request towards the sync controller 55 of device 500a in the manner described above with respect to Figure 15.
- tiles 4e and 4f, when they reach the barrier synchronisation for Z1, each issue a tile sync request towards the sync controller 55 of device 500b.
- these sync controllers 55 each forward an external sync request to the GSP 58 of their device 500a, 500b.
- the GSPs 58 of the devices 500a, 500b exchange an external sync request and an external sync acknowledgement and then cause internal sync acknowledgments to be sent (via the sync controller 55 of their device 500a, 500b) to the tiles 4a, 4b, 4e, 4f of the device 500a, 500b to which they belong.
- sync zone Z2 comprises in addition to tiles 4c and 4d, which were discussed above with respect to Figure 15, tiles 4h and 4g, which belong to processing unit 2c.
- the registers 53 of tiles 4h and 4g comprise indications that these tiles belong to sync zone Z2.
- Tiles 4c and 4d when they reach the barrier synchronisation for Z2, each issue a tile sync request towards the sync controller 55 of device 500a in the manner described above with respect to Figure 15.
- tiles 4h and 4g when they reach the barrier synchronisation for Z2, each issue a tile sync request towards the sync controller 55 of device 500c.
- each of these sync controllers 55 forwards external sync requests to the GSPs 58 of their devices 500a, 500c.
- the GSPs 58 of devices 500a, 500c exchange an external sync request and an external sync acknowledgement and then cause internal sync acknowledgments to be sent (via the sync controller 55 of their device 500a, 500c) to the tiles 4c, 4d, 4h, 4g of the device 500a, 500c to which they belong.
- Data exchange then takes place between one or more of tiles 4c, 4d and one or more of tiles 4h, 4g via interfaces 580 between the devices 500a, 500c.
- tile 4d is shown as sending data 590b to tile 4g.
- Figure 20 illustrates a method 2000 for co-ordinating synchronisations between the processors 4 based on configurable sync groups (i.e. the sync zones discussed above). The method is performed on a single device 500.
- Indications as to which sync zones the respective processor 4 belongs to are stored in the register 53 of the processor.
- Each of the processors 4 stores in its register 53, an indication for each of the sync zones, whether or not that processor 4 belongs to the respective sync zone.
- each of the processors 4 executes instructions held in its memory 51.
- S2020 may be performed at the same time as other steps in method 2000 are being performed.
- the sync controller 55 of the device 500 receives sync requests from the processors 4, and in response, returns sync acknowledgments.
- the sync requests are received at the sync controller 55 in the form of aggregated (or internal) sync requests that result from the individual tile sync requests.
- These tile sync requests include the requests issued at S2040 and S2050. Hence, S2030 is not complete when S2040 and S2050 are performed.
- a first of the processors 4 which does not belong to a first sync zone, issues a sync request for the first sync zone.
- the first of the processors 4 asserts the request in response to the indication in the register 53 of the first of the processors 4 that the first of the processors 4 does not belong to the first sync zone.
- the first of the processors 4 at which S2040 is performed may, for example, be the tile 4c shown in Figure 15, with the first sync zone being Z1.
- the sync controller 55 may, once all of the processors have asserted a sync request for Z1, return acknowledgments to all of the processors 4 in the device 500.
- the first of the processors 4, which does belong to a second sync zone, issues a sync request for the second sync zone.
- the first of the processors 4 asserts the request in response to the execution unit 52 reaching a synchronisation point (e.g. barrier) for the second sync zone in its code in memory 51.
- the execution unit 52 executes a SYNC instruction to cause the sync request to be asserted.
- the first of the processors 4 at which S2050 is performed may, for example, be the tile 4c shown in Figure 15, with the second sync zone being Z2.
- the sync controller 55 may, once all of the processors have asserted a sync request for the second sync zone, return acknowledgments to all of the processors 4 in the device 500.
- Figure 21 illustrates a method 2100 for co-ordinating synchronisations using a new scheme for signalling sync requests and acknowledgments.
- the method 2100 is performed by a single device 500.
- each of the processors 4 receives a signal representing a state of a sync acknowledgment wire for the respective processor 4.
- Each such sync acknowledgment wire on which the signal is received at S2110 is associated with the same sync zone.
- Each such signal received at each of the processors represents the same state (i.e. either high or low).
- each of the processors 4 asserts a sync request by setting the state of the sync request wire for the respective processor in dependence upon the received signal so as to be the opposite to the state of the sync acknowledgment wire for the respective processor 4.
- the aggregation circuitry 920, 910 in response to detecting that each of the sync request wires has been set to the opposite of the state of the sync acknowledgment wires, outputs an aggregate sync request (i.e. an internal sync request) for a first of the barrier synchronisations to the sync controller 55.
- the sync controller 55 in response to the aggregate sync request, returns a sync acknowledgment to each of the processors 4. This is achieved for each processor 4, by causing the state of the sync acknowledgment wire of the respective processor 4 to be set to be the same as the state of the sync request wire of the respective processor 4.
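Successive barriers under this scheme alternate the wire polarity rather than pulsing it. A sketch of two consecutive barriers of method 2100 (an illustrative model only):

```python
def barrier(ack_state, n_processors):
    """One barrier under the toggling scheme of method 2100.

    S2120: every processor asserts by driving its request wire to the
           opposite of its acknowledgment wire.
    S2130: the aggregate request is detected once all request wires
           differ from the acknowledgment state.
    S2140: the controller acknowledges by setting the ack wires to
           match the request wires. Returns the new ack state."""
    requests = [1 - ack_state] * n_processors           # S2120
    assert all(r != ack_state for r in requests)        # S2130
    return requests[0]                                  # S2140
```

Starting from low acknowledgment wires, the first barrier leaves all wires high and the second returns them low, so each barrier costs a single transition per wire.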
- the synchronisation points are BSP barrier synchronisations
- the synchronisation points may be different types of synchronisation points.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Multi Processors (AREA)
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2024501746A JP2024525099A (en) | 2021-07-14 | 2022-07-08 | Joining a sync zone |
KR1020247004969A KR20240023708A (en) | 2021-07-14 | 2022-07-08 | Subscription to synchronization zones |
EP22748304.7A EP4348427A1 (en) | 2021-07-14 | 2022-07-08 | Subscription to sync zones |
CN202280049523.8A CN117716340A (en) | 2021-07-14 | 2022-07-08 | Subscribing to a synchronization zone |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GBGB2110148.0A GB202110148D0 (en) | 2021-07-14 | 2021-07-14 | Synchronisation for a multi-tile processing unit |
GB2110148.0 | 2021-07-14 | ||
GB2209635.8 | 2022-06-30 | ||
GBGB2209635.8A GB202209635D0 (en) | 2021-07-14 | 2022-06-30 | Subscription to sync zones |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023285304A1 true WO2023285304A1 (en) | 2023-01-19 |
Family
ID=82748222
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2022/069053 WO2023285304A1 (en) | 2021-07-14 | 2022-07-08 | Subscription to sync zones |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2023285304A1 (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190121784A1 (en) * | 2017-10-20 | 2019-04-25 | Graphcore Limited | Synchronization in a multi-tile, multi-chip processing arrangement |
US20210200602A1 (en) * | 2019-12-30 | 2021-07-01 | Graphcore Limited | Sync Groupings |
- 2022-07-08: WO PCT/EP2022/069053 patent/WO2023285304A1/en, active, Application Filing
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11651226B2 (en) | System having multiple processing unit sets for training neural networks | |
KR20190044567A (en) | Synchronization amongst processor tiles | |
GB2569271A (en) | Synchronization with a host processor | |
EP3729261B1 (en) | A centralized-distributed mixed organization of shared memory for neural network processing | |
US11637682B2 (en) | Extended sync network | |
US20230016049A1 (en) | Subscription to Sync Zones | |
WO2023285304A1 (en) | Subscription to sync zones | |
US20240004726A1 (en) | Barrier Sync Signalling | |
Gao et al. | Impact of reconfigurable hardware on accelerating mpi_reduce | |
US20230023957A1 (en) | Communication Between Stacked Die | |
US12073262B2 (en) | Barrier synchronization between host and accelerator over network | |
US20210241089A1 (en) | System Having Multiple Processing Unit Sets For Training Neural Networks | |
US11709794B2 (en) | Exchange between stacked die | |
US11675686B2 (en) | Tracing activity from multiple components of a device | |
US20230024224A1 (en) | Tracing Synchronisation Activity of a Processing Unit | |
US11726937B2 (en) | Control of data sending from a multi-processor device | |
US20230281144A1 (en) | External Exchange Connectivity | |
US11841732B2 (en) | Predictive clock control | |
US11625357B2 (en) | Control of data transfer between processors | |
US11907725B2 (en) | Communication in a computer having multiple processors | |
US12112043B2 (en) | Data flow control device in streaming architecture chip |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22748304; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: 2022748304; Country of ref document: EP |
| ENP | Entry into the national phase | Ref document number: 2024501746; Country of ref document: JP; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 202280049523.8; Country of ref document: CN |
| ENP | Entry into the national phase | Ref document number: 2022748304; Country of ref document: EP; Effective date: 20240103 |
| ENP | Entry into the national phase | Ref document number: 20247004969; Country of ref document: KR; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 1020247004969; Country of ref document: KR |
| NENP | Non-entry into the national phase | Ref country code: DE |