US20190363717A1 - Multi-chip structure having configurable network-on-chip - Google Patents

Multi-chip structure having configurable network-on-chip

Info

Publication number
US20190363717A1
Authority
US
United States
Prior art keywords
chip
noc
configurable
processing system
configuration data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US15/990,506
Other versions
US10505548B1 (en)
Inventor
Ian A. Swarbrick
Ahmad R. Ansari
David P. Schultz
Kin Yip Sit
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xilinx Inc
Original Assignee
Xilinx Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xilinx Inc
Priority to US15/990,506
Assigned to XILINX, INC. Assignors: ANSARI, AHMAD R.; SCHULTZ, DAVID P.; SIT, KIN YIP; SWARBRICK, IAN A.
Publication of US20190363717A1
Application granted
Publication of US10505548B1
Legal status: Active

Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03K PULSE TECHNIQUE
    • H03K19/00 Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits
    • H03K19/02 Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components
    • H03K19/173 Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components using elementary logic circuits as components
    • H03K19/177 Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components using elementary logic circuits as components arranged in matrix form
    • H03K19/17748 Structural details of configuration resources
    • H03K19/1776 Structural details of configuration resources for memories
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/14 Protection against unauthorised use of memory or access to memory
    • G06F12/1416 Protection against unauthorised use of memory or access to memory by checking the object accessibility, e.g. type of access defined by the memory independently of subject rights
    • G06F12/1425 Protection against unauthorised use of memory or access to memory by checking the object accessibility, e.g. type of access defined by the memory independently of subject rights the protection being physical, e.g. cell, word, block
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/71 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information
    • G06F21/76 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information in application-specific integrated circuits [ASIC] or field-programmable devices, e.g. field-programmable gate arrays [FPGA] or programmable logic devices [PLD]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/82 Protecting input, output or interconnection devices
    • G06F21/85 Protecting input, output or interconnection devices interconnection devices, e.g. bus-connected or in-line devices
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03K PULSE TECHNIQUE
    • H03K19/00 Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits
    • H03K19/02 Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components
    • H03K19/173 Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components using elementary logic circuits as components
    • H03K19/177 Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components using elementary logic circuits as components arranged in matrix form
    • H03K19/17748 Structural details of configuration resources
    • H03K19/17768 Structural details of configuration resources for security

Definitions

  • Examples of the present disclosure generally relate to multi-chip structures and, in particular, to multi-chip structures that implement a configurable Network-on-Chip (NoC) for communication between chips.
  • Other SoCs can have different components embedded therein for different applications.
  • the SoC provides many advantages over traditional processor-based designs. It is an attractive alternative to multi-chip designs because the integration of components into a single device increases overall speed while decreasing size.
  • the SoC is also an attractive alternative to fully customized chips, such as an application specific integrated circuit (ASIC), because ASIC designs tend to have a significantly longer development time and larger development costs.
  • a configurable SoC, which includes programmable logic, has been developed to implement a programmable semiconductor chip that can obtain the benefits of both programmable logic and an SoC.
  • a multi-chip structure that implements a configurable Network-on-Chip (NoC) for communication between chips is described herein.
  • a minimal configuration for the configurable NoC of each chip can be enabled to establish communications between the chips to permit communications for further configuration.
  • An example of the present disclosure is an apparatus.
  • the apparatus includes a first chip comprising a first processing system and a first configurable Network-on-Chip (NoC) connected to the first processing system, and includes a second chip comprising a second processing system and a second configurable NoC connected to the second processing system.
  • the first configurable NoC is connected to the second configurable NoC via an external connector.
  • the first processing system is operable to obtain first information from off of the first chip and configure the first configurable NoC based on the first information.
  • the second processing system is operable to obtain second information from off of the second chip and configure the second configurable NoC based on the second information.
  • the first processing system and the second processing system are communicatively coupled with each other via the first configurable NoC and the second configurable NoC when the first configurable NoC and the second configurable NoC are configured based on the first information and the second information, respectively.
  • Another example of the present disclosure is a method for operating multiple integrated circuits.
  • a configurable Network-on-Chip (NoC) of the respective chip is configured based on initial configuration data.
  • the configurable NoCs of the multiple chips are connected via external connectors external to the multiple chips.
  • System configuration data is communicated between the controllers of the multiple chips via the configurable NoCs of the multiple chips configured based on the initial configuration data.
  • the configurable NoC of the respective chip is configured based on the system configuration data.
  • a first processing system on a first chip is communicatively connected to a second processing system on a second chip via a first configurable Network-on-Chip (NoC) on the first chip and a second configurable NoC on the second chip.
  • a first transaction request is transmitted from the first processing system through the first configurable NoC and the second configurable NoC to the second processing system.
  • a second transaction request corresponding to the first transaction request is transmitted from the second processing system to a configurable component on the second chip via a peripheral interconnect on the second chip.
  • the second processing system is operable to configure the second configurable NoC via the peripheral interconnect.
  • FIG. 1 is a block diagram of a multi-chip structure according to an example.
  • FIG. 2 is a block diagram depicting a multi-chip structure with multiple chips each having a system-on-chip (SoC) according to an example.
  • FIG. 3 is a block diagram depicting a network-on-chip (NoC) of a SoC according to an example.
  • FIG. 4 is a block diagram depicting connections between endpoint circuits in a SoC through the NoC according to an example.
  • FIG. 5 is a block diagram depicting a NoC packet switch according to an example.
  • FIG. 6 depicts example configurations of a NoC packet switch according to an example.
  • FIG. 7 is a block diagram depicting connections to a register block of a NoC packet switch through a NoC Peripheral Interconnect (NPI) according to an example.
  • FIG. 8 is a block diagram depicting a multi-chip structure with interconnected NoCs according to an example.
  • FIG. 9 is a flowchart for operating a multi-chip structure according to an example.
  • FIG. 10 is a flowchart for operating a multi-chip structure according to an example.
  • Examples described herein provide for a multi-chip structure that implements a configurable Network-on-Chip (NoC) for communication between chips.
  • each chip of the multi-chip structure reads data from off-chip that indicates how a configurable NoC of the respective chip is to be configured for a minimal configuration to establish communications between the chips.
  • Each chip configures its NoC according to the minimal configuration, and thereafter, the chips may communicate with others of the chips through the NoCs.
  • the communication between the chips may include communicating system-level configuration data, which may be used to re-configure the NoCs, for example.
  • the NoCs may be configured using a peripheral interconnect to write data to register blocks of switches of the respective NoC.
  • a master on one chip can communicate with slave endpoint circuits (e.g., the register blocks of the switches) on another chip via the interconnected NoCs and the peripheral interconnect of the chip on which the slave endpoint circuit is disposed.
  • FIG. 1 is a block diagram of a multi-chip structure, such as a two-and-a-half-dimensional integrated circuit (2.5DIC) structure, according to an example.
  • the 2.5DIC structure includes a first chip 51 , a second chip 52 , a third chip 53 , and a memory chip 62 attached to an interposer 70 or another substrate.
  • the 2.5DIC structure may have fewer or more chips, and the memory chip 62 may be outside of, but communicatively coupled to, the 2.5DIC structure.
  • Each of the first chip 51 , second chip 52 , and third chip 53 can be an integrated circuit (IC), such as a system-on-chip (SoC) as described below.
  • the memory chip 62 can comprise any form of memory for storing data, such as a configuration file.
  • the first chip 51 , second chip 52 , third chip 53 , and memory chip 62 are attached to the interposer 70 by electrical connectors 72 , such as microbumps, controlled collapse chip connection (C4) bumps, or the like.
  • Electrical connectors 74 are on a side of the interposer 70 opposite from the chips 51 , 52 , 53 , 62 for attaching the 2.5DIC structure to another substrate, such as a package substrate, for example.
  • the electrical connectors 74 may be C4 bumps, ball grid array (BGA) balls, or the like.
  • the interposer 70 includes electrical interconnects that electrically connect various ones of the chips 51 , 52 , 53 , 62 .
  • the electrical interconnects can include one or more metallization layers or redistribution layers on the side of the interposer 70 on which the chips 51 , 52 , 53 , 62 are attached, one or more through substrate vias (TSVs) through the bulk substrate (e.g., silicon substrate) of the interposer 70 , and/or one or more metallization layers or redistribution layers on the side of the interposer 70 opposing the side on which the chips 51 , 52 , 53 , 62 are attached.
  • various signals, packets, etc. can be communicated between various ones of the chips 51 , 52 , 53 , 62 .
  • the multi-chip structure can include various stacked chips, such as in a three-dimensional IC (3DIC) structure.
  • two or more memory chips may be stacked on each other with the bottom memory chip being attached to the interposer 70 .
  • Other multi-chip structures may be implemented in other examples, such as without an interposer.
  • FIG. 2 is a block diagram depicting a multi-chip structure with multiple chips each having a SoC according to an example.
  • the multi-chip structure includes a first SoC 101 (e.g., on the first chip 51 of FIG. 1 ), a second SoC 102 (e.g., on the second chip 52 ), and a third SoC 103 (e.g., on the third chip 53 ).
  • Each SoC 101 , 102 , 103 is an IC comprising a processing system 104 , a network-on-chip (NoC) 106 , a configuration interconnect 108 , and one or more programmable logic regions 110 .
  • Each SoC 101 , 102 , 103 can be coupled to external circuits, and as illustrated, the first SoC 101 is coupled to nonvolatile memory (NVM) 112 (e.g., on the memory chip 62 in FIG. 1 ).
  • the NVM 112 can store data that can be loaded to the SoCs 101 , 102 , 103 for configuring the SoCs 101 , 102 , 103 , such as configuring the NoC 106 and the programmable logic region(s) 110 .
  • the NVM 112 is on the memory chip 62 attached to the interposer 70 ; however, in other examples, memory, such as flash memory, can be external to the multi-chip structure and communicatively coupled to the SoC 101 , such as via a serial peripheral interface (SPI).
  • the memory may be attached to a same package substrate to which the multi-chip structure is attached, and may communicate with the SoC 101 via the package substrate.
  • the processing system 104 of each SoC 101 , 102 , 103 is connected to the programmable logic region(s) 110 through the NoC 106 and through the configuration interconnect 108 .
  • the processing system 104 of each SoC 101 , 102 , 103 can include one or more processor cores.
  • the processing system 104 can include a number of ARM-based embedded processor cores.
  • the programmable logic region(s) 110 of each SoC 101 , 102 , 103 can include any number of configurable logic blocks (CLBs), which may be programmed or configured using the processing system 104 through the configuration interconnect 108 of the respective SoC 101 , 102 , 103 .
  • the configuration interconnect 108 can enable, for example, frame-based programming of the fabric of the programmable logic region(s) 110 by a processor core of the processing system 104 (such as a platform management controller (PMC) described further below).
  • the NoC 106 includes end-to-end Quality-of-Service (QoS) features for controlling data-flows therein.
  • the NoC 106 first separates data-flows into designated traffic classes. Data-flows in the same traffic class can either share or have independent virtual or physical transmission paths.
  • the QoS scheme applies two levels of priority across traffic classes. Within and across traffic classes, the NoC 106 applies a weighted arbitration scheme to shape the traffic flows and provide bandwidth and latency that meets the user requirements. Examples of the NoC 106 are discussed further below.
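  • As an illustration of the two-level priority and weighted arbitration described above, the following sketch models the scheme in miniature (the flow structure, field names, and credit-based policy are illustrative assumptions, not details taken from this disclosure):

```python
# Toy model of the QoS scheme: each data-flow carries a priority level
# (two levels, per the description above) and a weight. The "credits"
# bookkeeping and every field name below are illustrative assumptions.
def arbitrate(flows):
    """Pick the next flow to service among competing data-flows."""
    top = min(f["priority"] for f in flows)          # higher-priority class wins
    candidates = [f for f in flows if f["priority"] == top]
    # Weighted arbitration within a class: serve the flow that is
    # furthest below its weighted fair share of service.
    chosen = min(candidates, key=lambda f: f["credits"] / f["weight"])
    chosen["credits"] += 1
    return chosen

flows = [
    {"name": "a", "priority": 0, "weight": 3, "credits": 0},
    {"name": "b", "priority": 0, "weight": 1, "credits": 0},
    {"name": "c", "priority": 1, "weight": 5, "credits": 0},
]
served = [arbitrate(flows)["name"] for _ in range(4)]  # flow "c" waits
```

In this toy model, only flows at the most urgent priority level compete, and among those the flow furthest below its weighted fair share is served next, which shapes bandwidth in proportion to the weights.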
  • the NoC 106 is independent from the configuration interconnect 108 , for example.
  • each SoC 101 , 102 , 103 can be selectively communicatively connected together via the NoC 106 of the respective SoC 101 , 102 , 103 . Further, the NoCs 106 of the SoCs 101 , 102 , 103 are communicatively connected, such as through external electrical connections on an interposer (e.g., interposer 70 ).
  • FIG. 3 is a block diagram depicting the NoC 106 of a SoC according to an example.
  • the NoC 106 includes NoC master units (NMUs) 202 , NoC slave units (NSUs) 204 , a network 214 , NoC peripheral interconnect (NPI) 210 , and register blocks 212 .
  • Each NMU 202 is an ingress circuit that connects a master circuit to the NoC 106 .
  • Each NSU 204 is an egress circuit that connects the NoC 106 to a slave endpoint circuit.
  • the NMUs 202 are connected to the NSUs 204 through the network 214 .
  • the network 214 includes NoC packet switches 206 and routing 208 between the NoC packet switches 206 .
  • Each NoC packet switch 206 performs switching of NoC packets.
  • the NoC packet switches 206 are connected to each other and to the NMUs 202 and NSUs 204 through the routing 208 to implement a plurality of physical channels.
  • the NoC packet switches 206 also support multiple virtual channels per physical channel.
  • the NPI 210 includes circuitry to program the NMUs 202 , NSUs 204 , and NoC packet switches 206 .
  • the NMUs 202 , NSUs 204 , and NoC packet switches 206 can include register blocks 212 that determine functionality thereof.
  • the NPI 210 includes a peripheral interconnect coupled to the register blocks 212 for programming thereof to set functionality.
  • the register blocks 212 in the NoC 106 support interrupts, QoS, error handling and reporting, transaction control, power management, and address mapping control.
  • Configuration data for the NoC 106 can be stored in the NVM 112 and provided to the NPI 210 for programming the NoC 106 and/or other slave endpoint circuits.
  • FIG. 4 is a block diagram depicting connections between endpoint circuits in a SoC through the NoC 106 according to an example.
  • endpoint circuits 302 are connected to endpoint circuits 304 through the NoC 106 .
  • the endpoint circuits 302 are master circuits, which are coupled to NMUs 202 of the NoC 106 .
  • the endpoint circuits 304 are slave circuits coupled to the NSUs 204 of the NoC 106 .
  • Each endpoint circuit 302 and 304 can be a circuit in the processing system 104 , a circuit in a programmable logic region 110 , or a circuit in another subsystem.
  • Each endpoint circuit in the programmable logic region 110 can be a dedicated circuit (e.g., a hardened circuit) or a circuit configured in programmable logic.
  • the network 214 includes a plurality of physical channels 306 .
  • the physical channels 306 are implemented by programming the NoC 106 .
  • Each physical channel 306 includes one or more NoC packet switches 206 and associated routing 208 .
  • An NMU 202 connects with an NSU 204 through at least one physical channel 306 .
  • a physical channel 306 can also have one or more virtual channels 308 .
  • FIG. 5 is a block diagram depicting a NoC packet switch 206 according to an example.
  • the NoC packet switch 206 has four bi-directional connections or ports (each labeled a “side” for convenience). In other examples, a NoC packet switch 206 can have more or fewer connections or ports.
  • the NoC packet switch 206 has a first side Side 0, a second side Side 1, a third side Side 2, and a fourth side Side 3.
  • the NoC packet switch 206 includes a register block 212 for configuring the functionality of the NoC packet switch 206 .
  • the register block 212 includes addressable registers, for example.
  • the register block 212 includes a configuration register and a routing table.
  • the configuration register can set a configuration mode of the NoC packet switch 206 , as described in FIG. 6 , for example, and the routing table can identify how packets received at the NoC packet switch 206 are to be routed based on the configuration mode.
  • FIG. 6 illustrates example configurations of a NoC packet switch 206 according to an example.
  • FIG. 6 shows a first configuration 602 , a second configuration 604 , and a third configuration 606 .
  • a NoC packet switch 206 can have more, fewer, or different configurations in other examples.
  • the configurations can be implemented using the configuration register and routing table in the NoC packet switch 206 .
  • the NoC packet switch 206 acts as a pass-through.
  • a packet entering on the first side Side 0 exits on the third side Side 2, and vice versa.
  • a packet entering on the second side Side 1 exits on the fourth side Side 3, and vice versa.
  • a packet entering on the first side Side 0 exits on the second side Side 1, and a packet entering on the second side Side 1 exits on the first side Side 0.
  • a packet entering on one of the first side Side 0, the third side Side 2, or the fourth side Side 3 exits on another one of the first side Side 0, the third side Side 2, or the fourth side Side 3 based on a destination identification of the packet being routed.
  • a packet entering on one of the first side Side 0, the second side Side 1, or the third side Side 2 exits on another one of the first side Side 0, the second side Side 1, or the third side Side 2 based on a destination identification of the packet being routed.
  • the NoC packet switch 206 illustrated in FIG. 6 has connectivity using 3 sides, and in other examples, connectivity can use fewer (e.g., 2) connections or more (e.g., 4) connections depending on where connectivity is desired to be established. Additional details of example configurations will be described in the context of further examples.
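  • The configurations of FIG. 6 can be sketched as follows (a minimal model; the mode names, table encodings, and function signature are assumptions, while the side numbering follows the description above):

```python
# Illustrative model of the FIG. 6 switch configurations.
PASS_THROUGH = {0: 2, 2: 0, 1: 3, 3: 1}  # first configuration: 0<->2, 1<->3
TURN = {0: 1, 1: 0}                      # second configuration: 0<->1

def egress_side(mode, entry_side, dest_id=None, routing_table=None):
    """Return the side on which a packet exits, given the configured mode."""
    if mode == "pass_through":
        return PASS_THROUGH[entry_side]
    if mode == "turn":
        return TURN[entry_side]
    # Destination-routed configurations: a routing table maps the packet's
    # destination identification to an egress side (e.g., among sides 0/2/3).
    return routing_table[dest_id]
```

For example, a destination-routed switch configured with `routing_table={5: 3}` would send a packet with destination identification 5 out of the fourth side Side 3.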
  • FIG. 7 is a block diagram depicting connections to a register block 212 of a NoC packet switch 206 through the NPI 210 in a SoC 101 , 102 , 103 according to an example.
  • the NPI 210 includes a root node 404 , interconnected NPI switches 408 , and a protocol block 410 .
  • the root node 404 resides on a platform management controller (PMC) 402 , which, as shown in subsequent examples, further resides in the processing system 104 of the SoC 101 , 102 , 103 .
  • the PMC 402 includes a local boot read only memory (ROM) 403 for storing boot sequence instructions, for example.
  • the root node 404 can packetize a transaction request, such as a write or read request, into a format implemented by the NPI 210 and can transmit a memory-mapped transaction request to interconnected NPI switches 408 .
  • the transaction request can be routed through the interconnected NPI switches 408 to a protocol block 410 connected to the register block 212 to which the transaction request is directed.
  • the protocol block 410 can then translate the memory-mapped transaction request into a format implemented by the register block 212 and transmit the translated request to the register block 212 for processing.
  • the register block 212 can further transmit a response to the transaction request through the protocol block 410 and the interconnected NPI switches 408 to the root node 404 , which then responds to the master circuit that issued the transaction request.
  • the root node 404 can translate a transaction request between a protocol used by the one or more master circuits, such as the PMC 402 , and a protocol used by the NPI 210 .
  • the master circuits can implement the Advanced eXtensible Interface fourth generation (AXI4) protocol
  • the NPI 210 can implement an NPI Protocol.
  • the protocol blocks 410 can also translate the transaction request from the protocol implemented on the NPI 210 to a protocol implemented by the register blocks 212 of the NoC packet switches 206 .
  • the protocol blocks 410 can translate between NPI Protocol and the Advanced Microcontroller Bus Architecture (AMBA) 3 Advanced Peripheral Bus (APB3) protocol.
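  • The NPI write path described above can be sketched end to end (a simplified model that omits the routing through the interconnected NPI switches 408 ; the dict-based packet format, the use of a plain dict to stand in for a register block 212 , and all names are assumptions):

```python
# Simplified sketch of an NPI write: the root node packetizes a
# memory-mapped request, and a protocol block translates it into an
# access to the addressed register block, then returns a response.
def root_node_packetize(op, addr, data=None):
    return {"op": op, "addr": addr, "data": data}

def protocol_block_deliver(register_block, packet):
    """Translate a memory-mapped request into a register access and respond."""
    if packet["op"] == "write":
        register_block[packet["addr"]] = packet["data"]
        return {"status": "ok"}
    return {"status": "ok", "data": register_block.get(packet["addr"])}

register_block = {}  # stands in for a switch's register block 212
resp = protocol_block_deliver(register_block,
                              root_node_packetize("write", 0x100, 0xA5))
```

A subsequent read request for the same address would return the written value in the response, mirroring the request/response flow back to the root node described above.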
  • the PMC 402 may execute instructions stored in the boot ROM 403 to issue transaction requests (e.g., write requests) through the NPI 210 (e.g., the root node 404 , interconnected NPI switches 408 , and protocol blocks 410 ) to register blocks 212 of NoC packet switches 206 to initially program the NoC packet switches 206 to initially configure the NoC 106 for that respective SoC 101 , 102 , 103 .
  • the PMC 402 may subsequently reprogram the NoC packet switches 206 .
  • the PMC 402 is further connected to the configuration interconnect 108 , which is in turn connected to the programmable logic regions 110 .
  • the PMC 402 is configured to program the fabric of the programmable logic regions 110 using, for example, a frame-based programming mechanism through the configuration interconnect 108 .
  • the configuration interconnect 108 is a delivery mechanism for programming programmable units on the respective SoC that is independent of the delivery mechanism of the NPI 210 for programming other programmable units (e.g., slave endpoint circuits like the register blocks 212 of the NoC packet switches 206 ) on the respective SoC 101 , 102 , 103 .
  • FIG. 8 is a block diagram depicting a multi-chip structure with interconnected NoCs 106 according to an example.
  • FIG. 8 illustrates some aspects of the multi-chip structure of FIG. 2 in more detail while omitting other aspects so as not to obscure aspects described here.
  • each SoC 101 , 102 , 103 includes a processing system (PS) 104 , programmable logic regions (PL) 110 , and components that form a NoC 106 .
  • the processing system 104 includes a PMC 402 , which further includes boot ROM 403 and a root node 404 of an NPI 210 .
  • the processing system 104 and programmable logic regions 110 include various ones of the NMUs 202 (boxes labeled with an "M" in FIG. 8 ) and the NSUs 204 .
  • the NoC 106 includes routing 208 and NoC packet switches 206 (boxes labeled with an “x”) at various intersections of routing 208 .
  • the NMUs 202 are connected to the routing 208 , and the NSUs 204 are also connected to the routing 208 .
  • the NoC packet switches 206 are capable of being configured to connect and direct communications between various ones of the NMUs 202 and the NSUs 204 .
  • the NPI 210 of the NoC 106 is generally illustrated as dashed lines emanating from the root node 404 . More specifically, the NPI 210 includes interconnected NPI switches 408 and protocol blocks 410 connected to register blocks 212 of the NoC packet switches 206 , as described with respect to FIG. 7 previously.
  • Routing 208 of each NoC 106 is connected to external connectors 802 to interconnect the NoCs 106 of the SoCs 101 , 102 , 103 .
  • the external connectors 802 can be or include, for example, bumps attaching the respective chips to an interposer and/or metallization layers or redistribution layers on the interposer, such as described with respect to FIG. 1 .
  • Routing 208 of the NoC 106 of SoC 101 is connected to routing 208 of the NoC 106 of SoC 102 via external connectors 802
  • routing 208 of the NoC 106 of SoC 102 is connected to routing 208 of the NoC 106 of SoC 103 via external connectors 802 .
  • each SoC 101 , 102 , 103 undergoes a multi-stage boot sequence.
  • each SoC 101 , 102 , 103 configures, for example, a minimal number of NoC packet switches 206 to establish communication between the SoCs 101 , 102 , 103 through the NoCs 106 .
  • communications between the SoCs 101 , 102 , 103 occur only through the interconnected NoCs 106 and external connectors 802 , as shown in FIG. 8 .
  • system configuration data for a system-level configuration can be communicated between the SoCs 101 , 102 , 103 on the interconnected NoCs 106 for configuring programmable components of the SoCs 101 , 102 , 103 , in a second stage of the boot sequence.
  • fabric configuration data for programming the fabric of programmable logic regions 110 can be communicated between the SoCs 101 , 102 , 103 on the interconnected NoCs 106 .
  • the PMC 402 of each SoC 101 , 102 , 103 executes boot instructions stored on the boot ROM 403 .
  • the execution of these instructions causes the PMC 402 to read data from off-chip of the respective SoC 101 , 102 , 103 .
  • the data can be stored on another chip attached to the interposer to which the chip of the SoC 101 , 102 , 103 is attached and/or input by a user implementing the SoC 101 , 102 , 103 .
  • the data is stored on e-fuses on a memory device attached to the interposer.
  • Various hardened input/output (IO) interfaces, which are not specifically illustrated in FIG. 8 , may be implemented to read the data from off-chip.
  • the information that is read identifies which NoC packet switches 206 on the respective SoC 101 , 102 , 103 are to be configured in the first stage, identifies the configuration of those NoC packet switches 206 , and identifies where the chip of the respective SoC 101 , 102 , 103 is in relation to the other chips of the other SoCs 101 , 102 , 103 (e.g., where the chip is in the stack of chips).
  • each chip of the SoCs 101 , 102 , 103 can be manufactured by the same processes, e.g., the chips of the SoCs 101 , 102 , 103 can be the same, and the arrangement of the chips on, e.g., the interposer can determine what information is read to configure the SoCs 101 , 102 , 103 .
  • Execution of the instructions from the boot ROM 403 further causes each PMC 402 , based on the information that has been read, to transmit memory-mapped transaction requests through the root node 404 and NPI 210 to the register blocks 212 of the NoC packet switches 206 identified by the read information to write information to those register blocks 212 and thereby configure the NoC packet switches 206 .
  • with the NoC packet switches 206 configured, communication between the PMCs 402 of the SoCs 101 , 102 , 103 can commence over the NoCs 106 , which can permit inter-chip communication to communicate system-level configuration data, for example. More details are described in the context of the example of FIG. 8 .
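  • The first boot stage described above can be sketched as a short routine (the data layout, field names, and helper signature are assumptions for illustration):

```python
# Sketch of the first boot stage: the PMC reads off-chip data naming the
# minimal set of NoC packet switches, then writes their register blocks
# through the NPI to establish inter-chip communication.
def first_stage_boot(off_chip_data, npi_write):
    chip_id = off_chip_data["chip_id"]        # position among the chips
    for entry in off_chip_data["switches"]:
        # Program the configuration/routing registers of each listed switch.
        npi_write(entry["reg_addr"], entry["config"])
    return chip_id

written = {}
chip_id = first_stage_boot(
    {"chip_id": 0b00,
     "switches": [{"reg_addr": 0x1000, "config": 0x1},
                  {"reg_addr": 0x2000, "config": 0x2}]},
    lambda addr, value: written.__setitem__(addr, value),
)
```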
  • each PMC 402 of the SoCs 101 , 102 , 103 reads data from off-chip.
  • the PMC 402 of the SoC 101 reads data that indicates that the SoC 101 is to be the master and first chip (e.g., identified as ‘00’) in the configuration of SoCs 101 , 102 , 103 , that two NoC packet switches 206 a and 206 b are to be configured, and that indicates the identification and configuration of the NoC packet switches 206 a and 206 b .
  • The data that indicates the identification and configuration of the NoC packet switches 206 a and 206 b can include an identification (e.g., a 9-bit identification) and a configuration code (e.g., a 2-bit code) for the respective NoC packet switch 206 a , 206 b .
  • the PMC 402 of the SoC 101 can determine addresses of register blocks 212 of the NoC packet switch 206 a , 206 b for programming routing tables of the NoC packet switch 206 a , 206 b based on the identification data that was read, and can determine a configuration of the NoC packet switch 206 a , 206 b based on the configuration code.
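The decode step above can be sketched in a few lines. The bit layout (9-bit identification in the upper bits, 2-bit code in the lower bits of an 11-bit entry), the register base address, and the per-switch stride below are assumptions for illustration only, not the device's actual register map.

```python
# Hypothetical decoding of one switch entry from the off-chip data:
# upper 9 bits identify the NoC packet switch, lower 2 bits carry the
# configuration code. Base address and stride are assumed values.
NPI_REG_BASE = 0xF6000000     # assumed base of the NoC register space
SWITCH_REG_STRIDE = 0x1000    # assumed size of one register block 212

def decode_switch_entry(entry: int):
    """Split an 11-bit entry into (switch_id, config_code)."""
    switch_id = (entry >> 2) & 0x1FF
    config_code = entry & 0x3
    return switch_id, config_code

def register_block_address(switch_id: int) -> int:
    """Address the PMC would target through the root node 404 and NPI 210."""
    return NPI_REG_BASE + switch_id * SWITCH_REG_STRIDE

assert decode_switch_entry(0b00000101001) == (10, 1)
assert register_block_address(10) == 0xF600A000
```

The PMC 402 would then issue NPI writes to the derived address; only the address arithmetic is modeled here.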
  • the PMC 402 of the SoC 101 then, through the root node 404 and NPI 210 of the SoC 101 , writes the configuration and routing tables to register blocks 212 of the NoC packet switches 206 a and 206 b .
  • the configuration of NoC packet switch 206 a can be the first configuration 602 of FIG. 6
  • the configuration of NoC packet switch 206 b can be the second configuration 604 of FIG. 6 .
  • the routing tables of the NoC packet switch 206 b can direct memory-mapped packets through different sides of the NoC packet switch 206 b based on an address in the respective memory-mapped packet.
  • a chip identification can be appended to addresses of the memory-mapped packets, and the NoC packet switch 206 b can direct packets based on the chip identification. For example, packets having a chip identification of ‘00’ (e.g., for the SoC 101 ) are routed to the fourth side Side 3 of the NoC packet switch 206 b , and packets having a chip identification greater than ‘00’ are routed to the first side Side 0 of the NoC packet switch 206 b.
  • the PMC 402 of the SoC 102 reads data that indicates that the SoC 102 is to be a slave and second chip (e.g., identified as ‘01’) in the configuration of SoCs 101 , 102 , 103 , that two NoC packet switches 206 c and 206 d are to be configured, and that indicates the identification and configuration of the NoC packet switches 206 c and 206 d , as described above in the context of the SoC 101 .
  • the PMC 402 of the SoC 102 can determine addresses of register blocks 212 of the NoC packet switch 206 c , 206 d for programming routing tables of the NoC packet switch 206 c , 206 d based on the identification data that was read, and can determine a configuration of the NoC packet switch 206 c , 206 d based on the configuration code.
  • the PMC 402 of the SoC 102 then, through the root node 404 and NPI 210 of the SoC 102 , writes the configuration and routing tables to register blocks 212 of the NoC packet switches 206 c and 206 d .
  • The configuration of NoC packet switch 206 c can be the first configuration 602 of FIG. 6 .
  • the routing tables of the NoC packet switch 206 d can direct memory-mapped packets through different sides of the NoC packet switch 206 d based on an address in the respective memory-mapped packet. For example, packets having a chip identification of ‘01’ (e.g., for the SoC 102 ) are routed to the fourth side Side 3 of the NoC packet switch 206 d ; packets having a chip identification greater than ‘01’ are routed to the first side Side 0 of the NoC packet switch 206 d ; and packets having a chip identification less than ‘01’ are routed to the third side Side 2 of the NoC packet switch 206 d.
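Across the three chips, the routing rules for the NoC packet switches 206 b , 206 d , and 206 f reduce to one comparison of a packet's chip identification against the local chip identification. A minimal sketch, assuming a 2-bit chip identification and the side numbering of FIG. 5:

```python
# Sketch of the chip-identification routing rule: deliver locally on a
# match, forward toward higher-numbered chips on Side 0, and toward
# lower-numbered chips on Side 2. Side roles are taken from the text.
SIDE0, SIDE2, SIDE3 = 0, 2, 3

def route(local_chip_id: int, packet_chip_id: int) -> int:
    if packet_chip_id == local_chip_id:
        return SIDE3   # deliver to the local chip
    if packet_chip_id > local_chip_id:
        return SIDE0   # forward toward higher-numbered chips
    return SIDE2       # forward toward lower-numbered chips

# Switch 206d on the SoC 102 (chip id '01'):
assert route(0b01, 0b01) == SIDE3
assert route(0b01, 0b10) == SIDE0
assert route(0b01, 0b00) == SIDE2
```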
  • the PMC 402 of the SoC 103 reads data that indicates that the SoC 103 is to be a slave and third chip (e.g., identified as ‘10’) in the configuration of SoCs 101 , 102 , 103 , that two NoC packet switches 206 e and 206 f are to be configured, and that indicates the identification and configuration of the NoC packet switches 206 e and 206 f , as described above in the context of the SoC 101 .
  • the PMC 402 of the SoC 103 can determine addresses of register blocks 212 of the NoC packet switch 206 e , 206 f for programming routing tables of the NoC packet switch 206 e , 206 f based on the identification data that was read, and can determine a configuration of the NoC packet switch 206 e , 206 f based on the configuration code.
  • the PMC 402 of the SoC 103 then, through the root node 404 and NPI 210 of the SoC 103 , writes the configuration and routing tables to register blocks 212 of the NoC packet switches 206 e and 206 f .
  • The configuration of NoC packet switch 206 e can be the first configuration 602 of FIG. 6 .
  • The routing tables of the NoC packet switch 206 f can direct memory-mapped packets through different sides of the NoC packet switch 206 f based on an address in the respective memory-mapped packet. For example, packets having a chip identification of ‘10’ (e.g., for the SoC 103 ) are routed to the fourth side Side 3 of the NoC packet switch 206 f , and packets having a chip identification less than ‘10’ are routed to the third side Side 2 of the NoC packet switch 206 f.
  • the PMC 402 of the SoC 101 can communicate with the PMC 402 of the SoC 102 via the NMU 202 a on the processing system 104 of the SoC 101 , the NoC packet switches 206 a , 206 b , 206 d , 206 c and corresponding routing 208 , and the NSU 204 a on the processing system 104 of the SoC 102 .
  • the PMC 402 of the SoC 101 can communicate with the PMC 402 of the SoC 103 via the NMU 202 a on the processing system 104 of the SoC 101 , the NoC packet switches 206 a , 206 b , 206 d , 206 f , 206 e and corresponding routing 208 , and the NSU 204 b on the processing system 104 of the SoC 103 .
  • Each PMC 402 has a dedicated portion of the address map of the NoC 106 .
  • the PMCs 402 of the SoCs 101 , 102 , 103 can communicate with each other by including the chip identification (e.g., ‘00’, ‘01’, and ‘10’) in the memory-mapped packet to be communicated via the interconnected NoCs 106 .
  • the NoC packet switches 206 a - f can route the packets according to the chip identification, as described above.
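The chip identification appended to the address can be pictured as extra high-order address bits; the 48-bit local address width and the example offset below are assumptions for illustration only.

```python
# Toy encoding of a global address as (chip id, local address); the NoC
# packet switches route on the chip-id bits, as described above.
LOCAL_ADDR_BITS = 48  # assumed local address width

def global_address(chip_id: int, local_addr: int) -> int:
    return (chip_id << LOCAL_ADDR_BITS) | local_addr

def split_address(addr: int):
    return addr >> LOCAL_ADDR_BITS, addr & ((1 << LOCAL_ADDR_BITS) - 1)

# A packet from the SoC 101 targeting the PMC region on the third chip
# (chip id '10'); the offset is an arbitrary example value.
addr = global_address(0b10, 0x00FF_0000)
assert split_address(addr) == (0b10, 0x00FF_0000)
```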
  • the communication via the interconnected NoCs 106 is according to the Advanced eXtensible Interface fourth generation (AXI4) protocol.
  • system configuration data can be communicated from the PMC 402 of the SoC 101 to the PMCs of the SoCs 102 , 103 .
  • the PMC 402 of the SoC 101 can access system configuration data from memory, e.g., flash memory, that is off-chip from the SoC 101 .
  • the memory may be the NVM 112 on the memory chip 62 in FIGS. 1 and 2 .
  • the SoC 101 can implement any IO interface and other IP to enable the PMC 402 to access the system configuration data from the memory.
  • A memory controller may be connected to the processing system 104 (e.g., to the PMC 402 ), and the memory controller can be connected through an IO interface to memory.
  • the PMC 402 of the SoC 101 can then communicate this system configuration data to the PMCs 402 of the slave SoCs 102 , 103 via the interconnected NoCs 106 (e.g., with the configured NoC packet switches 206 a - f ).
  • the NoC 106 can be quiesced locally, and the PMCs 402 on each SoC 101 , 102 , 103 can further configure components, including the local NoC 106 , for system-level operations.
  • the configuration of the NoC packet switches 206 a - f may remain or may be changed by the system configuration data.
  • the NoCs 106 of the SoCs 101 , 102 , 103 can be reconfigured, and such reconfiguration can maintain communication through interconnected NoCs 106 between the SoCs 101 , 102 , 103 .
  • various functionality of the NoC packet switches 206 can be configured, such as routing tables, QoS setting, and others.
  • the fabric configuration data can be accessed via the processing system 104 (e.g., PMC 402 ) of the SoC 101 and communicated to the other processing systems 104 of the SoCs 102 , 103 .
  • the fabric configuration data may be accessed through an interface with a user device such that the fabric configuration data is downloaded from the user device, or may be accessed from off-chip memory, for example. Appropriate IO interfaces may be implemented to access the fabric configuration data.
  • the processing system 104 (e.g., PMC 402 ) of the SoC 101 then communicates the fabric configuration data to the other processing systems 104 of the SoCs 102 , 103 via the interconnected NoCs 106 , which are configured according to the system configuration data, for example.
  • the PMC 402 of the respective processing system 104 programs one or more programmable logic regions 110 via the local configuration interconnect 108 of the respective SoC 101 , 102 , 103 .
  • Applications implemented in the programmable logic regions 110 of the SoCs 101 , 102 , 103 can be subsequently executed, which may permit communication between different programmable logic regions 110 via the NoC 106 of the respective SoC 101 , 102 , 103 for local communications and/or via the interconnected NoCs of the SoCs 101 , 102 , 103 for communications between SoCs 101 , 102 , 103 .
  • FIG. 9 is a flowchart for operating a multi-chip structure according to an example.
  • data is read from off-chip.
  • the data indicates, among other things, which NoC packet switches 206 are to be configured on the respective chip and the configuration of those NoC packet switches 206 .
  • the NoC packet switches 206 indicated by the read data are configured via the NPI 210 of the chip and based on the read data. Configuring these NoC packet switches 206 establishes at least a minimal interconnection between the chips through the NoCs 106 .
  • the master obtains system configuration data from off-chip, and at block 908 , the master communicates the system configuration data to the slaves via the interconnected NoCs 106 .
  • a system-level configuration is implemented based on the received system configuration data.
  • the master obtains fabric configuration data from off-chip, and at block 914 , the fabric configuration data is communicated to slaves via the interconnected NoCs 106 .
  • the fabric configuration data is implemented in the fabric of the respective SoC (e.g., in the programmable logic region(s)) based on the fabric configuration data.
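The flowchart's sequence can be caricatured as plain Python to show the data flow; every field and step name below is a placeholder for the hardware behavior described above, not an actual interface.

```python
# Toy two-stage boot: each chip applies its own off-chip data, then the
# master distributes system and fabric configuration over the NoCs.
def boot_all(chips):
    # First stage: each chip reads off-chip data and configures the NoC
    # packet switches it names, establishing a minimal interconnect.
    for c in chips:
        c["switches"] = dict(c["offchip_data"]["switch_cfg"])
    master = next(c for c in chips if c["offchip_data"]["role"] == "master")
    slaves = [c for c in chips if c is not master]
    # Second stage: the master obtains system configuration data and
    # communicates it to the slaves via the interconnected NoCs.
    sys_cfg = {"qos": "default"}
    for c in chips:
        c["sys_cfg"] = sys_cfg
    # Finally, fabric configuration data is distributed and implemented
    # in the programmable logic region(s) of each chip.
    fabric_cfg = {"bitstream": "app"}
    for c in chips:
        c["fabric"] = fabric_cfg
    return master, slaves

chips = [
    {"offchip_data": {"role": "master", "switch_cfg": {"206a": 1, "206b": 2}}},
    {"offchip_data": {"role": "slave",  "switch_cfg": {"206c": 1, "206d": 2}}},
]
master, slaves = boot_all(chips)
assert master["switches"] == {"206a": 1, "206b": 2}
assert slaves[0]["sys_cfg"] == {"qos": "default"}
```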
  • a master PMC 402 (such as the PMC 402 on the SoC 101 ) can communicate with programmable slave endpoint circuits on other SoCs 101 , 102 , 103 via the interconnected NoCs 106 and the NPI 210 local to the SoC 101 , 102 , 103 of the respective programmable slave endpoint circuit.
  • the NoCs 106 can be configured for such communications by the first stage boot sequence to establish minimal interconnections for communications between the SoCs 101 , 102 , 103 and/or by the second stage boot sequence to establish a system-level configuration. Referring back to FIG.
  • register blocks 212 were described as being in the NoC packet switches 206 for configuring the NoC packet switches 206 .
  • other programmable slave endpoint circuits can also include register blocks 212 for configuring those slave endpoint circuits or maintaining data generated by those slave endpoint circuits, such as performance data.
  • Some example programmable slave endpoint circuits can include a memory controller, a clock generator, a temperature sensor, etc.
  • Suppose that the processing system 104 of the SoC 101 needs to re-configure or read data from a clock generator on the SoC 102 .
  • The processing system 104 (e.g., PMC 402 ) creates a memory-mapped transaction request (e.g., an AXI4 read or write request), and transmits that memory-mapped transaction request from an NMU 202 (e.g., NMU 202 a ) into the NoC 106 on the SoC 101 .
  • the NoC packet switches 206 of the NoC 106 of the SoC 101 route the memory-mapped transaction request to external connectors 802 , which are connected to the NoC 106 of the SoC 102 .
  • the NoC packet switches 206 of the NoC 106 of the SoC 102 then route the memory-mapped transaction request to an NSU 204 (e.g., NSU 204 a ) of the processing system 104 of the SoC 102 .
  • the PMC 402 of the processing system 104 of the SoC 102 then passes the memory-mapped transaction request to the root node 404 , which translates the memory-mapped transaction request to another format implemented on the NPI 210 of the SoC 102 .
  • the root node 404 of the SoC 102 transmits the translated memory-mapped transaction request through the interconnected NPI switches 408 and appropriate protocol block 410 of the NPI 210 on the SoC 102 to the clock generator on the SoC 102 .
  • the clock generator can process the transaction request and transmit a response.
  • the response can be communicated along the same route in reverse order, e.g., through the protocol block 410 , interconnected NPI switches 408 , and root node 404 of the NPI 210 , PMC 402 , NSU 204 , and NoC 106 on the SoC 102 , and the NoC 106 and NMU 202 to the processing system 104 on the SoC 101 .
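The hop sequence just described, and the reversed response path, can be written down explicitly; this is bookkeeping only, with hop labels taken from the reference numerals in the text.

```python
# Request path from the master on the SoC 101 to the clock generator on
# the SoC 102; the response retraces the same hops in reverse.
request_path = [
    "NMU 202a (SoC 101)",
    "NoC packet switches 206 (SoC 101)",
    "external connectors 802",
    "NoC packet switches 206 (SoC 102)",
    "NSU 204a (SoC 102)",
    "PMC 402 (SoC 102)",
    "root node 404 (SoC 102)",
    "NPI switches 408 (SoC 102)",
    "protocol block 410 (SoC 102)",
    "clock generator (SoC 102)",
]
response_path = list(reversed(request_path))
assert response_path[0] == "clock generator (SoC 102)"
assert response_path[-1] == "NMU 202a (SoC 101)"
```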
  • FIG. 10 is a flowchart for operating a multi-chip structure according to an example.
  • a memory-mapped transaction request is transmitted from a master on a first chip through a NoC 106 on the first chip.
  • the NoC 106 on the first chip is connected to a NoC 106 on a second chip.
  • the memory-mapped transaction request is received at a slave on the second chip through the NoC 106 on the second chip.
  • the memory-mapped transaction request is transmitted through an NPI 210 on the second chip.
  • the memory-mapped transaction request is received and processed at the slave endpoint circuit on the second chip.
  • the slave endpoint circuit on the second chip transmits a response to the memory-mapped transaction request to the master on the first chip via the NPI 210 on the second chip and the NoCs 106 on the first and second chips.
  • Examples described herein can achieve benefits. For example, configuration data of the SoCs can be moved off-chip from the SoCs, thereby reducing space and resources on the SoC. Memory chips may be easily and cheaply manufactured and programmed, and separate chips of the SoCs and memory chips may reduce cost and complexity of producing the systems. Further, by implementing a configurable NoC, a flexible, low-overhead communications interconnect can be implemented in the SoCs. The information read from off-chip by the chips can enable a minimal configuration for the NoCs to establish communications between the SoCs to permit communications for further configuration. Other benefits and advantages may be obtained by various examples.

Abstract

A multi-chip structure that implements a configurable Network-on-Chip (NoC) for communication between chips is described herein. In an example, an apparatus includes a first chip comprising a first processing system and a first configurable NoC connected to the first processing system, and includes a second chip comprising a second processing system and a second configurable NoC connected to the second processing system. The first and second configurable NoCs are connected together via an external connector. The first and second processing systems are operable to obtain first and second information from off of the first and second chip and configure the first and second configurable NoCs based on the first and second information, respectively. The first and second processing systems are communicatively coupled with each other via the first and second configurable NoCs when the first and second configurable NoCs are configured based on the first and second information, respectively.

Description

    TECHNICAL FIELD
  • Examples of the present disclosure generally relate to multi-chip structures and, in particular, to multi-chip structures that implement a configurable Network-on-Chip (NoC) for communication between chips.
  • BACKGROUND
  • Advances in integrated circuit technology have made it possible to embed an entire system, such as including a processor core, a memory controller, and a bus, in a single semiconductor chip. This type of chip is commonly referred to as a system-on-chip (SoC). Other SoCs can have different components embedded therein for different applications. The SoC provides many advantages over traditional processor-based designs. It is an attractive alternative to multi-chip designs because the integration of components into a single device increases overall speed while decreasing size. The SoC is also an attractive alternative to fully customized chips, such as an application specific integrated circuit (ASIC), because ASIC designs tend to have a significantly longer development time and larger development costs. A configurable SoC (CSoC), which includes programmable logic, has been developed to implement a programmable semiconductor chip that can obtain benefits of both programmable logic and SoC.
  • SUMMARY
  • A multi-chip structure that implements a configurable Network-on-Chip (NoC) for communication between chips is described herein. A minimal configuration for the configurable NoC of each chip can be enabled to establish communications between the chips to permit communications for further configuration.
  • An example of the present disclosure is an apparatus. The apparatus includes a first chip comprising a first processing system and a first configurable Network-on-Chip (NoC) connected to the first processing system, and includes a second chip comprising a second processing system and a second configurable NoC connected to the second processing system. The first configurable NoC is connected to the second configurable NoC via an external connector. The first processing system is operable to obtain first information from off of the first chip and configure the first configurable NoC based on the first information. The second processing system is operable to obtain second information from off of the second chip and configure the second configurable NoC based on the second information. The first processing system and the second processing system are communicatively coupled with each other via the first configurable NoC and the second configurable NoC when the first configurable NoC and the second configurable NoC are configured based on the first information and the second information, respectively.
  • Another example of the present disclosure is a method for operating multiple integrated circuits. Locally at each chip of multiple chips by a controller of the respective chip, a configurable Network-on-Chip (NoC) of the respective chip is configured based on initial configuration data. The configurable NoCs of the multiple chips are connected via external connectors external to the multiple chips. System configuration data is communicated between the controllers of the multiple chips via the configurable NoCs of the multiple chips configured based on the initial configuration data. Locally at each chip by the controller of the respective chip, the configurable NoC of the respective chip is configured based on the system configuration data.
  • Another example of the present disclosure is a method for operating multiple integrated circuits. A first processing system on a first chip is communicatively connected to a second processing system on a second chip via a first configurable Network-on-Chip (NoC) on the first chip and a second configurable NoC on the second chip. A first transaction request is transmitted from the first processing system through the first configurable NoC and the second configurable NoC to the second processing system. A second transaction request corresponding to the first transaction request is transmitted from the second processing system to a configurable component on the second chip via a peripheral interconnect on the second chip. The second processing system is operable to configure the second configurable NoC via the peripheral interconnect.
  • These and other aspects may be understood with reference to the following detailed description.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • So that the manner in which the above recited features can be understood in detail, a more particular description, briefly summarized above, may be had by reference to example implementations, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical example implementations and are therefore not to be considered limiting of its scope.
  • FIG. 1 is a block diagram of a multi-chip structure according to an example.
  • FIG. 2 is a block diagram depicting a multi-chip structure with multiple chips each having a system-on-chip (SoC) according to an example.
  • FIG. 3 is a block diagram depicting a network-on-chip (NoC) of a SoC according to an example.
  • FIG. 4 is a block diagram depicting connections between endpoint circuits in a SoC through the NoC according to an example.
  • FIG. 5 is a block diagram depicting a NoC packet switch according to an example.
  • FIG. 6 illustrates example configurations of a NoC packet switch according to an example.
  • FIG. 7 is a block diagram depicting connections to a register block of a NoC packet switch through a NoC Peripheral Interconnect (NPI) according to an example.
  • FIG. 8 is a block diagram depicting a multi-chip structure with interconnected NoCs according to an example.
  • FIG. 9 is a flowchart for operating a multi-chip structure according to an example.
  • FIG. 10 is a flowchart for operating a multi-chip structure according to an example.
  • To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements of one example may be beneficially incorporated in other examples.
  • DETAILED DESCRIPTION
  • Examples described herein provide for a multi-chip structure that implements a configurable Network-on-Chip (NoC) for communication between chips. In some examples, each chip of the multi-chip structure reads data from off-chip that indicates how a configurable NoC of the respective chip is to be configured for a minimal configuration to establish communications between the chips. Each chip configures its NoC according to the minimal configuration, and thereafter, the chips may communicate with others of the chips through the NoCs. The communication between the chips may include communicating system-level configuration data, which may be used to re-configure the NoCs, for example. The NoCs may be configured using a peripheral interconnect to write data to register blocks of switches of the respective NoC. Further, once the NoCs are configured to permit communication between chips, a master on one chip can communicate with slave endpoint circuits (e.g., the register blocks of the switches) on another chip via the interconnected NoCs and the peripheral interconnect of the chip on which the slave endpoint circuit is disposed.
  • Various features are described hereinafter with reference to the figures. It should be noted that the figures may or may not be drawn to scale and that the elements of similar structures or functions are represented by like reference numerals throughout the figures. It should be noted that the figures are only intended to facilitate the description of the features. They are not intended as an exhaustive description of the claimed invention or as a limitation on the scope of the claimed invention. In addition, an illustrated example need not have all the aspects or advantages shown. An aspect or an advantage described in conjunction with a particular example is not necessarily limited to that example and can be practiced in any other examples even if not so illustrated or if not so explicitly described.
  • FIG. 1 is a block diagram of a multi-chip structure, such as a two-and-a-half-dimensional integrated circuit (2.5DIC) structure, according to an example. The 2.5DIC structure includes a first chip 51, a second chip 52, a third chip 53, and a memory chip 62 attached to an interposer 70 or another substrate. In other examples, the 2.5DIC structure may have fewer or more chips, and the memory chip 62 may be outside of, but communicatively coupled to, the 2.5DIC structure. Each of the first chip 51, second chip 52, and third chip 53 can be an integrated circuit (IC), such as a system-on-chip (SoC) as described below. The memory chip 62 can comprise any form of memory for storing data, such as a configuration file. The first chip 51, second chip 52, third chip 53, and memory chip 62 are attached to the interposer 70 by electrical connectors 72, such as microbumps, controlled collapse chip connection (C4) bumps, or the like. Electrical connectors 74 are on a side of the interposer 70 opposite from the chips 51, 52, 53, 62 for attaching the 2.5DIC structure to another substrate, such as a package substrate, for example. The electrical connectors 74 may be C4 bumps, ball grid array (BGA) balls, or the like.
  • The interposer 70 includes electrical interconnects that electrically connect various ones of the chips 51, 52, 53, 62. The electrical interconnects can include one or more metallization layers or redistribution layers on the side of the interposer 70 on which the chips 51, 52, 53, 62 are attached, one or more through substrate vias (TSVs) through the bulk substrate (e.g., silicon substrate) of the interposer 70, and/or one or more metallization layers or redistribution layers on the side of the interposer 70 opposing the side on which the chips 51, 52, 53, 62 are attached. Hence, various signals, packets, etc. can be communicated between various ones of the chips 51, 52, 53, 62.
  • In other examples, more or fewer chips may be included, and the chips may be in other configurations. For example, more or fewer chips that include a SoC may be implemented, such as two, four, or more chips, and more or fewer memory chips may be included. In some examples, the multi-chip structure can include various stacked chips, such as in a three-dimensional IC (3DIC) structure. For example, two or more memory chips may be stacked on each other with the bottom memory chip being attached to the interposer 70. Other multi-chip structures may be implemented in other examples, such as without an interposer. Various modifications may be made that would be readily apparent to a person having ordinary skill in the art.
  • FIG. 2 is a block diagram depicting a multi-chip structure with multiple chips each having a SoC according to an example. The multi-chip structure includes a first SoC 101 (e.g., on the first chip 51 of FIG. 1), a second SoC 102 (e.g., on the second chip 52), and a third SoC 103 (e.g., on the third chip 53). Each SoC 101, 102, 103 is an IC comprising a processing system 104, a network-on-chip (NoC) 106, a configuration interconnect 108, and one or more programmable logic regions 110. Each SoC 101, 102, 103 can be coupled to external circuits, and as illustrated, the first SoC 101 is coupled to nonvolatile memory (NVM) 112 (e.g., on the memory chip 62 in FIG. 1). The NVM 112 can store data that can be loaded to the SoCs 101, 102, 103 for configuring the SoCs 101, 102, 103, such as configuring the NoC 106 and the programmable logic region(s) 110. As illustrated in FIGS. 1 and 2, the NVM 112 is on the memory chip 62 attached to the interposer 70; however, in other examples, memory, such as flash memory, can be external to the multi-chip structure and communicatively coupled to the SoC 101, such as via a serial peripheral interface (SPI). For example, the memory may be attached to a same package substrate to which the multi-chip structure is attached, and may communicate with the SoC 101 via the package substrate. In general, the processing system 104 of each SoC 101, 102, 103 is connected to the programmable logic region(s) 110 through the NoC 106 and through the configuration interconnect 108.
  • The processing system 104 of each SoC 101, 102, 103 can include one or more processor cores. For example, the processing system 104 can include a number of ARM-based embedded processor cores. The programmable logic region(s) 110 of each SoC 101, 102, 103 can include any number of configurable logic blocks (CLBs), which may be programmed or configured using the processing system 104 through the configuration interconnect 108 of the respective SoC 101, 102, 103. For example, the configuration interconnect 108 can enable, for example, frame-based programming of the fabric of the programmable logic region(s) 110 by a processor core of the processing system 104 (such as a platform management controller (PMC) described further below).
  • The NoC 106 includes end-to-end Quality-of-Service (QoS) features for controlling data-flows therein. In examples, the NoC 106 first separates data-flows into designated traffic classes. Data-flows in the same traffic class can either share or have independent virtual or physical transmission paths. The QoS scheme applies two levels of priority across traffic classes. Within and across traffic classes, the NoC 106 applies a weighted arbitration scheme to shape the traffic flows and provide bandwidth and latency that meets the user requirements. Examples of the NoC 106 are discussed further below. The NoC 106 is independent from the configuration interconnect 108, for example. The processing system 104, programmable logic regions 110, and/or other components of each SoC 101, 102, 103 can be selectively communicatively connected together via the NoC 106 of the respective SoC 101, 102, 103. Further, the NoCs 106 of the SoCs 101, 102, 103 are communicatively connected, such as through external electrical connections on an interposer (e.g., interposer 70).
  • FIG. 3 is a block diagram depicting the NoC 106 of a SoC according to an example. The NoC 106 includes NoC master units (NMUs) 202, NoC slave units (NSUs) 204, a network 214, NoC peripheral interconnect (NPI) 210, and register blocks 212. Each NMU 202 is an ingress circuit that connects a master circuit to the NoC 106. Each NSU 204 is an egress circuit that connects the NoC 106 to a slave endpoint circuit. The NMUs 202 are connected to the NSUs 204 through the network 214. In an example, the network 214 includes NoC packet switches 206 and routing 208 between the NoC packet switches 206. Each NoC packet switch 206 performs switching of NoC packets. The NoC packet switches 206 are connected to each other and to the NMUs 202 and NSUs 204 through the routing 208 to implement a plurality of physical channels. The NoC packet switches 206 also support multiple virtual channels per physical channel. The NPI 210 includes circuitry to program the NMUs 202, NSUs 204, and NoC packet switches 206. For example, the NMUs 202, NSUs 204, and NoC packet switches 206 can include register blocks 212 that determine functionality thereof. The NPI 210 includes a peripheral interconnect coupled to the register blocks 212 for programming thereof to set functionality. The register blocks 212 in the NoC 106 support interrupts, QoS, error handling and reporting, transaction control, power management, and address mapping control. Configuration data for the NoC 106 can be stored in the NVM 112 and provided to the NPI 210 for programming the NoC 106 and/or other slave endpoint circuits.
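A register block 212, as reachable through the NPI 210, can be modeled as a tiny addressable store; the register names and offsets here are invented for illustration and do not reflect the actual register map.

```python
# Toy register block with a configuration register and one routing-table
# entry; NPI writes program it, NPI reads return the programmed values.
class RegisterBlock:
    OFFSETS = {"config": 0x0, "routing_table": 0x4}  # assumed offsets

    def __init__(self):
        self.regs = {off: 0 for off in self.OFFSETS.values()}

    def write(self, offset: int, value: int) -> None:
        self.regs[offset] = value   # memory-mapped write via the NPI

    def read(self, offset: int) -> int:
        return self.regs[offset]    # memory-mapped read via the NPI

rb = RegisterBlock()
rb.write(RegisterBlock.OFFSETS["config"], 0b01)         # pick a configuration
rb.write(RegisterBlock.OFFSETS["routing_table"], 0xA5)  # program a routing entry
assert rb.read(0x0) == 0b01 and rb.read(0x4) == 0xA5
```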
  • FIG. 4 is a block diagram depicting connections between endpoint circuits in a SoC through the NoC 106 according to an example. In the example, endpoint circuits 302 are connected to endpoint circuits 304 through the NoC 106. The endpoint circuits 302 are master circuits, which are coupled to NMUs 202 of the NoC 106. The endpoint circuits 304 are slave circuits coupled to the NSUs 204 of the NoC 106. Each endpoint circuit 302 and 304 can be a circuit in the processing system 104, a circuit in a programmable logic region 110, or a circuit in another subsystem. Each endpoint circuit in the programmable logic region 110 can be a dedicated circuit (e.g., a hardened circuit) or a circuit configured in programmable logic.
  • The network 214 includes a plurality of physical channels 306. The physical channels 306 are implemented by programming the NoC 106. Each physical channel 306 includes one or more NoC packet switches 206 and associated routing 208. An NMU 202 connects with an NSU 204 through at least one physical channel 306. A physical channel 306 can also have one or more virtual channels 308.
  • FIG. 5 is a block diagram depicting a NoC packet switch 206 according to an example. As illustrated, the NoC packet switch 206 has four bi-directional connections or ports (each labeled a “side” for convenience). In other examples, a NoC packet switch 206 can have more or fewer connections or ports. The NoC packet switch 206 has a first side Side 0, a second side Side 1, a third side Side 2, and a fourth side Side 3. The NoC packet switch 206 includes a register block 212 for configuring the functionality of the NoC packet switch 206. The register block 212 includes addressable registers, for example. The register block 212 includes a configuration register and a routing table. The configuration register can set a configuration mode of the NoC packet switch 206, as described in FIG. 6, for example, and the routing table can identify how packets received at the NoC packet switch 206 are to be routed based on the configuration mode.
  • FIG. 6 illustrates example configurations of a NoC packet switch 206 according to an example. FIG. 6 shows a first configuration 602, a second configuration 604, and a third configuration 606. A NoC packet switch 206 can have more, fewer, or different configurations in other examples. The configurations can be implemented using the configuration register and routing table in the NoC packet switch 206. In a default configuration, the NoC packet switch 206 acts as a pass-through. A packet entering on the first side Side 0 exits on the third side Side 2, and vice versa. Further, a packet entering on the second side Side 1 exits on the fourth side Side 3, and vice versa. In the first configuration 602, a packet entering on the first side Side 0 exits on the second side Side 1, and a packet entering on the second side Side 1 exits on the first side Side 0. In the second configuration 604, a packet entering on one of the first side Side 0, the third side Side 2, or the fourth side Side 3 exits on another one of the first side Side 0, the third side Side 2, or the fourth side Side 3 based on a destination identification of the packet being routed. In the third configuration 606, a packet entering on one of the first side Side 0, the second side Side 1, or the third side Side 2 exits on another one of the first side Side 0, the second side Side 1, or the third side Side 2 based on a destination identification of the packet being routed. The NoC packet switch 206 illustrated in FIG. 6 has connectivity using 3 sides, and in other examples, connectivity can use fewer (e.g., 2) connections or more (e.g., 4) connections depending on where connectivity is desired to be established. Additional details of example configurations will be described in the context of further examples.
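The pass-through and table-driven configurations described above can be modeled as a small behavioral sketch. This is an illustrative model only, under assumed encodings; the patent specifies the behavior (which side a packet exits given its entry side or destination identification), not a concrete data layout or register format.

```python
# Illustrative model of a 4-sided NoC packet switch (Sides 0-3).
# Configuration names and encodings are assumptions for this sketch.

DEFAULT = "default"      # pass-through: Side 0 <-> Side 2, Side 1 <-> Side 3
CONFIG_602 = "602"       # first configuration: Side 0 <-> Side 1
CONFIG_604 = "604"       # routes among Sides 0, 2, 3 by destination ID
CONFIG_606 = "606"       # routes among Sides 0, 1, 2 by destination ID

class PacketSwitch:
    def __init__(self, config, routing_table=None):
        # routing_table maps a packet's destination ID to an exit side;
        # it is consulted only in the table-driven modes (604 and 606).
        self.config = config
        self.routing_table = routing_table or {}

    def exit_side(self, entry_side, dest_id=None):
        if self.config == DEFAULT:
            return {0: 2, 2: 0, 1: 3, 3: 1}[entry_side]
        if self.config == CONFIG_602:
            return {0: 1, 1: 0}[entry_side]
        # Table-driven modes: look up the destination ID; a packet
        # exits on another side, never the side it entered on.
        exit_ = self.routing_table[dest_id]
        assert exit_ != entry_side, "routing table must not reflect packets"
        return exit_

# Pass-through behavior in the default configuration:
sw = PacketSwitch(DEFAULT)
assert sw.exit_side(0) == 2 and sw.exit_side(3) == 1
```

In this model, reprogramming the switch amounts to writing a new configuration mode and routing table, which mirrors how the register block 212 holds a configuration register and routing table.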
  • FIG. 7 is a block diagram depicting connections to a register block 212 of a NoC packet switch 206 through the NPI 210 in a SoC 101, 102, 103 according to an example. To connect to a register block 212, the NPI 210 includes a root node 404, interconnected NPI switches 408, and a protocol block 410. The root node 404 resides on a platform management controller (PMC) 402, which, as shown in subsequent examples, further resides in the processing system 104 of the SoC 101, 102, 103. The PMC 402 includes a local boot read only memory (ROM) 403 for storing boot sequence instructions, for example.
  • Generally, the root node 404 can packetize a transaction request, such as a write or read request, into a format implemented by the NPI 210 and can transmit a memory-mapped transaction request to interconnected NPI switches 408. The transaction request can be routed through the interconnected NPI switches 408 to a protocol block 410 connected to the register block 212 to which the transaction request is directed. The protocol block 410 can then translate the memory-mapped transaction request into a format implemented by the register block 212 and transmit the translated request to the register block 212 for processing. The register block 212 can further transmit a response to the transaction request through the protocol block 410 and the interconnected NPI switches 408 to the root node 404, which then responds to the master circuit that issued the transaction request.
  • The root node 404 can translate a transaction request between a protocol used by the one or more master circuits, such as the PMC 402, and a protocol used by the NPI 210. For example, the master circuits can implement the Advanced eXtensible Interface fourth generation (AXI4) protocol, and the NPI 210 can implement an NPI Protocol. The protocol blocks 410 can also translate the transaction request from the protocol implemented on the NPI 210 to a protocol implemented by the register blocks 212 of the NoC packet switches 206. In some examples, the protocol blocks 410 can translate between NPI Protocol and the Advanced Microcontroller Bus Architecture (AMBA) 3 Advanced Peripheral Bus (APB3) protocol.
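The two translation steps above — root node packetizing a master's memory-mapped request into the NPI format, and a protocol block unpacking it into a register-level (e.g., APB3-style) access — can be sketched as a simple pipeline. All field names and the bit partitioning of the address are assumptions for illustration; the patent does not specify the NPI packet layout.

```python
# Sketch of the NPI request path: root node packetizes an AXI4-style
# request into an assumed NPI packet format, and a protocol block
# translates it into an APB3-style register transaction.

def root_node_packetize(axi_request):
    # Wrap the master's request in the (assumed) NPI packet format;
    # here the high address bits are taken to select the target block.
    return {"npi_dest": axi_request["addr"] >> 8,
            "npi_payload": axi_request}

def protocol_block_to_apb3(npi_packet):
    # Translate the NPI packet into an APB3-style register access;
    # here the low address bits are taken as the register offset.
    req = npi_packet["npi_payload"]
    return {"paddr": req["addr"] & 0xFF,
            "pwrite": req["op"] == "write",
            "pwdata": req.get("data", 0)}

axi = {"op": "write", "addr": 0x1234, "data": 0xCAFE}
apb = protocol_block_to_apb3(root_node_packetize(axi))
assert apb == {"paddr": 0x34, "pwrite": True, "pwdata": 0xCAFE}
```

The interconnected NPI switches 408 would forward the packet between these two steps using `npi_dest`; that hop is omitted here for brevity.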
  • As described in further detail subsequently, within and separately for each SoC 101, 102, 103, the PMC 402 may execute instructions stored in the boot ROM 403 to issue transaction requests (e.g., write requests) through the NPI 210 (e.g., the root node 404, interconnected NPI switches 408, and protocol blocks 410) to register blocks 212 of NoC packet switches 206 to initially program the NoC packet switches 206, thereby establishing an initial configuration of the NoC 106 for that respective SoC 101, 102, 103. The PMC 402 may subsequently reprogram the NoC packet switches 206.
  • The PMC 402 is further connected to the configuration interconnect 108, which is in turn connected to the programmable logic regions 110. The PMC 402 is configured to program the fabric of the programmable logic regions 110 using, for example, a frame-based programming mechanism through the configuration interconnect 108. The configuration interconnect 108 is a delivery mechanism for programming programmable units on the respective SoC that is independent of the delivery mechanism of the NPI 210 for programming other programmable units (e.g., slave endpoint circuits like the register blocks 212 of the NoC packet switches 206) on the respective SoC 101, 102, 103.
  • FIG. 8 is a block diagram depicting a multi-chip structure with interconnected NoCs 106 according to an example. FIG. 8 illustrates some aspects of the multi-chip structure of FIG. 2 in more detail while omitting other aspects so as not to obscure aspects described here. Generally, each SoC 101, 102, 103 includes a processing system (PS) 104, programmable logic regions (PL) 110, and components that form a NoC 106. The processing system 104 includes a PMC 402, which further includes boot ROM 403 and a root node 404 of an NPI 210. The processing system 104 and programmable logic regions 110 include various ones of NMUs 202 (boxes labeled with an “M” in FIG. 8) and NSUs 204 (boxes labeled with an “S”). The NoC 106 includes routing 208 and NoC packet switches 206 (boxes labeled with an “x”) at various intersections of routing 208. The NMUs 202 are connected to the routing 208, and the NSUs 204 are also connected to the routing 208. The NoC packet switches 206 are capable of being configured to connect and direct communications between various ones of the NMUs 202 and the NSUs 204. The NPI 210 of the NoC 106 is generally illustrated as dashed lines emanating from the root node 404. More specifically, the NPI 210 includes interconnected NPI switches 408 and protocol blocks 410 connected to register blocks 212 of the NoC packet switches 206, as described with respect to FIG. 7 previously.
  • Routing 208 of each NoC 106 is connected to external connectors 802 to interconnect the NoCs 106 of the SoCs 101, 102, 103. The external connectors 802 can be or include, for example, bumps attaching the respective chips to an interposer and/or metallization layers or redistribution layers on the interposer, such as described with respect to FIG. 1. Routing 208 of the NoC 106 of SoC 101 is connected to routing 208 of the NoC 106 of SoC 102 via external connectors 802, and routing of the NoC 106 of SoC 102 is connected to routing 208 of the NoC 106 of SoC 103 via external connectors 802.
  • Generally, each SoC 101, 102, 103 undergoes a multi-stage boot sequence. In a first stage, each SoC 101, 102, 103 configures, for example, a minimal number of NoC packet switches 206 to establish communication between the SoCs 101, 102, 103 through the NoCs 106. In some examples described herein, communications between the SoCs 101, 102, 103 only occurs through the interconnected NoCs 106 and external connectors 802, as shown in FIG. 8. With communications between the SoCs 101, 102, 103 established through the NoCs 106, system configuration data for a system-level configuration can be communicated between the SoCs 101, 102, 103 on the interconnected NoCs 106 for configuring programmable components of the SoCs 101, 102, 103, in a second stage of the boot sequence. After the system-level configuration is established, fabric configuration data for programming the fabric of programmable logic regions 110 can be communicated between the SoCs 101, 102, 103 on the interconnected NoCs 106.
  • In the first stage of the boot sequence, the PMC 402 of each SoC 101, 102, 103 executes boot instructions stored on the boot ROM 403. The execution of these instructions causes the PMC 402 to read data from off-chip of the respective SoC 101, 102, 103. The data can be stored on another chip attached to the interposer to which the chip of the SoC 101, 102, 103 is attached and/or input by a user implementing the SoC 101, 102, 103. In some examples, the data is stored on e-fuses on a memory device attached to the interposer. Various hardened input/output (IO) interfaces, which are not specifically illustrated in FIG. 8, may be implemented to read the data from off-chip. The information that is read identifies which NoC packet switches 206 on the respective SoC 101, 102, 103 are to be configured in the first stage, identifies the configuration of those NoC packet switches 206, and identifies where the chip of the respective SoC 101, 102, 103 is in relation to the other chips of the other SoCs 101, 102, 103 (e.g., where the chip is in the stack of chips). By being configured to read this information from off-chip, each chip of the SoCs 101, 102, 103 can be manufactured by the same processes, e.g., the chips of the SoCs 101, 102, 103 can be the same, and the arrangement of the chips on, e.g., the interposer can determine what information is read to configure the SoCs 101, 102, 103.
  • Execution of the instructions from the boot ROM 403 further causes each PMC 402, based on the information that has been read, to transmit memory-mapped transaction requests through the root node 404 and NPI 210 to the register blocks 212 of the NoC packet switches 206 identified by the read information to write information to those register blocks 212 and thereby configure the NoC packet switches 206. With the NoC packet switches 206 configured, communication between the PMCs 402 of the SoCs 101, 102, 103 can commence over the NoCs 106, which can permit inter-chip communication to communicate system-level configuration data, for example. More details are described in the context of the example of FIG. 8.
  • In the context of FIG. 8, each PMC 402 of the SoCs 101, 102, 103 reads data from off-chip. The PMC 402 of the SoC 101 reads data that indicates that the SoC 101 is to be the master and first chip (e.g., identified as ‘00’) in the configuration of SoCs 101, 102, 103, that two NoC packet switches 206 a and 206 b are to be configured, and that indicates the identification and configuration of the NoC packet switches 206 a and 206 b. For example, the data that indicates the identification and configuration of the NoC packet switches 206 a and 206 b can include an identification (e.g., a 9-bit identification) and a configuration code (e.g., a 2-bit code) for the respective NoC packet switch 206 a, 206 b. The PMC 402 of the SoC 101 can determine addresses of register blocks 212 of the NoC packet switch 206 a, 206 b for programming routing tables of the NoC packet switch 206 a, 206 b based on the identification data that was read, and can determine a configuration of the NoC packet switch 206 a, 206 b based on the configuration code. The PMC 402 of the SoC 101 then, through the root node 404 and NPI 210 of the SoC 101, writes the configuration and routing tables to register blocks 212 of the NoC packet switches 206 a and 206 b. For example, the configuration of NoC packet switch 206 a can be the first configuration 602 of FIG. 6, and the configuration of NoC packet switch 206 b can be the second configuration 604 of FIG. 6. The routing tables of the NoC packet switch 206 b can direct memory-mapped packets through different sides of the NoC packet switch 206 b based on an address in the respective memory-mapped packet. A chip identification can be appended to addresses of the memory-mapped packets, and the NoC packet switch 206 b can direct packets based on the chip identification. For example, packets having a chip identification of ‘00’ (e.g., for the SoC 101) are routed to the fourth side Side 3 of the NoC packet switch 206 b, and packets having a chip identification greater than ‘00’ are routed to the first side Side 0 of the NoC packet switch 206 b.
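The off-chip boot record described above pairs a per-switch identification (e.g., 9 bits) with a configuration code (e.g., 2 bits). A hypothetical parse of such a record is sketched below; the bit packing (identification in the upper bits, code in the low two bits) is an assumption for illustration, as the patent does not define the record layout.

```python
# Hypothetical parse of the first-stage boot record: a chip identification
# plus, per switch, a 9-bit switch identification and a 2-bit config code.

def parse_boot_record(chip_id_bits, switch_words):
    # Assumed word layout: bits [10:2] = 9-bit switch ID, bits [1:0] = code.
    switches = []
    for word in switch_words:
        switch_id = (word >> 2) & 0x1FF
        config_code = word & 0x3
        switches.append((switch_id, config_code))
    return {"chip_id": chip_id_bits, "switches": switches}

# SoC 101 ('00') configures two switches, e.g., 206a with code 0b01 and
# 206b with code 0b10 (both codes are assumed values for the sketch).
rec = parse_boot_record(0b00, [(0x00A << 2) | 0b01, (0x00B << 2) | 0b10])
assert rec["chip_id"] == 0
assert rec["switches"] == [(0x00A, 1), (0x00B, 2)]
```

The PMC would then map each parsed switch identification to the addresses of that switch's register block 212 and issue NPI writes accordingly.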
  • The PMC 402 of the SoC 102 reads data that indicates that the SoC 102 is to be a slave and second chip (e.g., identified as ‘01’) in the configuration of SoCs 101, 102, 103, that two NoC packet switches 206 c and 206 d are to be configured, and that indicates the identification and configuration of the NoC packet switches 206 c and 206 d, as described above in the context of the SoC 101. The PMC 402 of the SoC 102 can determine addresses of register blocks 212 of the NoC packet switch 206 c, 206 d for programming routing tables of the NoC packet switch 206 c, 206 d based on the identification data that was read, and can determine a configuration of the NoC packet switch 206 c, 206 d based on the configuration code. The PMC 402 of the SoC 102 then, through the root node 404 and NPI 210 of the SoC 102, writes the configuration and routing tables to register blocks 212 of the NoC packet switches 206 c and 206 d. For example, the configuration of NoC packet switch 206 c can be the first configuration 602 of FIG. 6, and the configuration of NoC packet switch 206 d can be the second configuration 604 of FIG. 6. The routing tables of the NoC packet switch 206 d can direct memory-mapped packets through different sides of the NoC packet switch 206 d based on an address in the respective memory-mapped packet. For example, packets having a chip identification of ‘01’ (e.g., for the SoC 102) are routed to the fourth side Side 3 of the NoC packet switch 206 d; packets having a chip identification greater than ‘01’ are routed to the first side Side 0 of the NoC packet switch 206 d; and packets having a chip identification less than ‘01’ are routed to the third side Side 2 of the NoC packet switch 206 d.
  • The PMC 402 of the SoC 103 reads data that indicates that the SoC 103 is to be a slave and third chip (e.g., identified as ‘10’) in the configuration of SoCs 101, 102, 103, that two NoC packet switches 206 e and 206 f are to be configured, and that indicates the identification and configuration of the NoC packet switches 206 e and 206 f, as described above in the context of the SoC 101. The PMC 402 of the SoC 103 can determine addresses of register blocks 212 of the NoC packet switch 206 e, 206 f for programming routing tables of the NoC packet switch 206 e, 206 f based on the identification data that was read, and can determine a configuration of the NoC packet switch 206 e, 206 f based on the configuration code. The PMC 402 of the SoC 103 then, through the root node 404 and NPI 210 of the SoC 103, writes the configuration and routing tables to register blocks 212 of the NoC packet switches 206 e and 206 f. For example, the configuration of NoC packet switch 206 e can be the first configuration 602 of FIG. 6, and the configuration of NoC packet switch 206 f can be the second configuration 604 of FIG. 6. The routing tables of the NoC packet switch 206 f can direct memory-mapped packets through different sides of the NoC packet switch 206 f based on an address in the respective memory-mapped packet. For example, packets having a chip identification of ‘10’ (e.g., for the SoC 103) are routed to the fourth side Side 3 of the NoC packet switch 206 f, and packets having a chip identification less than ‘10’ are routed to the third side Side 2 of the NoC packet switch 206 f.
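The inter-chip routing rule shared by switches 206 b, 206 d, and 206 f reduces to a three-way comparison between the packet's chip identification and the local chip's identification. A minimal sketch (the side assignments follow the examples above; the function itself is illustrative):

```python
# Chip-ID routing rule for the stack-facing switches (206b/206d/206f):
# equal -> deliver locally (Side 3), greater -> forward up the stack
# (Side 0), less -> forward down the stack (Side 2).

SIDE_0, SIDE_2, SIDE_3 = 0, 2, 3

def route_by_chip_id(local_chip_id, packet_chip_id):
    if packet_chip_id == local_chip_id:
        return SIDE_3              # packet is for this chip
    if packet_chip_id > local_chip_id:
        return SIDE_0              # toward higher-numbered chips
    return SIDE_2                  # toward lower-numbered chips

# On SoC 102 (chip '01'): local traffic exits Side 3, traffic for SoC 103
# ('10') exits Side 0, and traffic for SoC 101 ('00') exits Side 2.
assert route_by_chip_id(0b01, 0b01) == SIDE_3
assert route_by_chip_id(0b01, 0b10) == SIDE_0
assert route_by_chip_id(0b01, 0b00) == SIDE_2
```

Because the rule depends only on the comparison, the same routing-table template can be programmed into every chip, with only the local chip identification differing, which is consistent with the chips being manufactured identically.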
  • With the respective SoCs 101, 102, 103 having configured the NoC packet switches 206 a-f, communication can be established between the SoCs 101, 102, 103. For example, the PMC 402 of the SoC 101 can communicate with the PMC 402 of the SoC 102 via the NMU 202 a on the processing system 104 of the SoC 101, the NoC packet switches 206 a, 206 b, 206 d, 206 c and corresponding routing 208, and the NSU 204 a on the processing system 104 of the SoC 102. Similarly, the PMC 402 of the SoC 101 can communicate with the PMC 402 of the SoC 103 via the NMU 202 a on the processing system 104 of the SoC 101, the NoC packet switches 206 a, 206 b, 206 d, 206 f, 206 e and corresponding routing 208, and the NSU 204 b on the processing system 104 of the SoC 103. Each PMC 402 has a dedicated portion of the address map of the NoC 106. With this portion of the address map, the PMCs 402 of the SoCs 101, 102, 103 can communicate with each other by including the chip identification (e.g., ‘00’, ‘01’, and ‘10’) in the memory-mapped packet to be communicated via the interconnected NoCs 106. The NoC packet switches 206 a-f can route the packets according to the chip identification, as described above. In some examples, the communication via the interconnected NoCs 106 is according to the Advanced eXtensible Interface fourth generation (AXI4) protocol.
  • With the PMCs 402 of the SoCs 101, 102, 103 being able to communicate with each other, system configuration data can be communicated from the PMC 402 of the SoC 101 to the PMCs of the SoCs 102, 103. For example, the PMC 402 of the SoC 101 can access system configuration data from memory, e.g., flash memory, that is off-chip from the SoC 101. For example, the memory may be the NVM 112 on the memory chip 62 in FIGS. 1 and 2. The SoC 101 can implement any IO interface and other IP to enable the PMC 402 to access the system configuration data from the memory. For example, a memory controller may be connected to the processing system 104 (e.g., to the PMC 402), and the memory controller can be connected through an IO interface to memory. The PMC 402 of the SoC 101 can then communicate this system configuration data to the PMCs 402 of the slave SoCs 102, 103 via the interconnected NoCs 106 (e.g., with the configured NoC packet switches 206 a-f).
  • With the system configuration data communicated to the individual PMCs 402 of the SoCs 101, 102, 103, the NoC 106 can be quiesced locally, and the PMCs 402 on each SoC 101, 102, 103 can further configure components, including the local NoC 106, for system-level operations. The configuration of the NoC packet switches 206 a-f may remain or may be changed by the system configuration data. The NoCs 106 of the SoCs 101, 102, 103 can be reconfigured, and such reconfiguration can maintain communication through the interconnected NoCs 106 between the SoCs 101, 102, 103. With the configuration of the NoCs 106, various functions of the NoC packet switches 206 can be configured, such as routing tables, QoS settings, and others.
  • With the system configured according to the system configuration data, the fabric configuration data can be accessed via the processing system 104 (e.g., PMC 402) of the SoC 101 and communicated to the other processing systems 104 of the SoCs 102, 103. The fabric configuration data may be accessed through an interface with a user device such that the fabric configuration data is downloaded from the user device, or may be accessed from off-chip memory, for example. Appropriate IO interfaces may be implemented to access the fabric configuration data. The processing system 104 (e.g., PMC 402) of the SoC 101 then communicates the fabric configuration data to the other processing systems 104 of the SoCs 102, 103 via the interconnected NoCs 106, which are configured according to the system configuration data, for example.
  • With the fabric configuration data received at the various processing systems 104 of the SoCs 101, 102, 103, the PMC 402 of the respective processing system 104 programs one or more programmable logic regions 110 via the local configuration interconnect 108 of the respective SoC 101, 102, 103. The programmable logic regions 110 of the SoCs 101, 102, 103 can subsequently operate, which may permit communication between different programmable logic regions 110 via the NoC 106 of the respective SoC 101, 102, 103 for local communications and/or via the interconnected NoCs of the SoCs 101, 102, 103 for communications between the SoCs 101, 102, 103.
  • FIG. 9 is a flowchart for operating a multi-chip structure according to an example. At block 902, at each chip, data is read from off-chip. The data indicates, among other things, which NoC packet switches 206 are to be configured on the respective chip and the configuration of those NoC packet switches 206. At block 904, at each chip, the NoC packet switches 206 indicated by the read data are configured via the NPI 210 of the chip and based on the read data. Configuring these NoC packet switches 206 establishes at least a minimal interconnection between the chips through the NoCs 106. At block 906, the master obtains system configuration data from off-chip, and at block 908, the master communicates the system configuration data to the slaves via the interconnected NoCs 106. At block 910, at each chip, a system-level configuration is implemented based on the received system configuration data. At block 912, the master obtains fabric configuration data from off-chip, and at block 914, the fabric configuration data is communicated to the slaves via the interconnected NoCs 106. At block 916, at each chip, the fabric configuration data is implemented in the fabric of the respective SoC (e.g., in the programmable logic region(s)).
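The flow of blocks 902 through 916 can be summarized as an ordered sequence of stages. The sketch below is a high-level trace of that ordering only; the function and step names are illustrative, not part of the described implementation.

```python
# High-level trace of the FIG. 9 boot flow: per-chip NoC bring-up, then
# master-driven distribution of system and fabric configuration data.

def boot_multi_chip(chips, master):
    log = []
    for chip in chips:                       # blocks 902-904: every chip
        log.append(f"{chip}: read off-chip data, configure NoC switches via NPI")
    log.append(f"{master}: obtain system configuration data")     # block 906
    for chip in chips:
        if chip != master:                   # block 908: master -> slaves
            log.append(f"{master} -> {chip}: system configuration data")
    for chip in chips:                       # block 910: every chip
        log.append(f"{chip}: apply system-level configuration")
    log.append(f"{master}: obtain fabric configuration data")      # block 912
    for chip in chips:
        if chip != master:                   # block 914: master -> slaves
            log.append(f"{master} -> {chip}: fabric configuration data")
    for chip in chips:                       # block 916: every chip
        log.append(f"{chip}: program programmable logic regions")
    return log

steps = boot_multi_chip(["SoC101", "SoC102", "SoC103"], master="SoC101")
assert steps[0].startswith("SoC101: read off-chip data")
assert any("SoC101 -> SoC103: fabric configuration data" in s for s in steps)
```

Note the ordering constraint the flow enforces: blocks 902-904 must complete on every chip before block 908 can use the interconnected NoCs to reach the slaves.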
  • With the NoCs 106 configured on and interconnected between the SoCs 101, 102, 103, a master PMC 402 (such as the PMC 402 on the SoC 101) can communicate with programmable slave endpoint circuits on other SoCs 101, 102, 103 via the interconnected NoCs 106 and the NPI 210 local to the SoC 101, 102, 103 of the respective programmable slave endpoint circuit. The NoCs 106 can be configured for such communications by the first stage boot sequence to establish minimal interconnections for communications between the SoCs 101, 102, 103 and/or by the second stage boot sequence to establish a system-level configuration. Referring back to FIG. 7, register blocks 212 were described as being in the NoC packet switches 206 for configuring the NoC packet switches 206. In other examples, other programmable slave endpoint circuits can also include register blocks 212 for configuring those slave endpoint circuits or maintaining data generated by those slave endpoint circuits, such as performance data. Some example programmable slave endpoint circuits can include a memory controller, a clock generator, a temperature sensor, etc.
  • For example, assume that the processing system 104 of the SoC 101 needs to re-configure or read data from a clock generator on the SoC 102. The processing system 104 (e.g., PMC 402) of the SoC 101 creates a memory-mapped transaction request (e.g., an AXI4 read or write request), and transmits that memory-mapped transaction request from an NMU 202 (e.g., NMU 202 a) into the NoC 106 on the SoC 101. The NoC packet switches 206 of the NoC 106 of the SoC 101 route the memory-mapped transaction request to external connectors 802, which are connected to the NoC 106 of the SoC 102. The NoC packet switches 206 of the NoC 106 of the SoC 102 then route the memory-mapped transaction request to an NSU 204 (e.g., NSU 204 a) of the processing system 104 of the SoC 102. The PMC 402 of the processing system 104 of the SoC 102 then passes the memory-mapped transaction request to the root node 404, which translates the memory-mapped transaction request to another format implemented on the NPI 210 of the SoC 102. The root node 404 of the SoC 102 transmits the translated memory-mapped transaction request through the interconnected NPI switches 408 and appropriate protocol block 410 of the NPI 210 on the SoC 102 to the clock generator on the SoC 102. The clock generator can process the transaction request and transmit a response. The response can be communicated along the same route in reverse order, e.g., through the protocol block 410, interconnected NPI switches 408, and root node 404 of the NPI 210, PMC 402, NSU 204, and NoC 106 on the SoC 102, and the NoC 106 and NMU 202 to the processing system 104 on the SoC 101.
  • FIG. 10 is a flowchart for operating a multi-chip structure according to an example. At block 1002, a memory-mapped transaction request is transmitted from a master on a first chip through a NoC 106 on the first chip. The NoC 106 on the first chip is connected to a NoC 106 on a second chip. At block 1004, the memory-mapped transaction request is received at a slave on the second chip through the NoC 106 on the second chip. At block 1006, the memory-mapped transaction request is transmitted through an NPI 210 on the second chip. At block 1008, the memory-mapped transaction request is received and processed at the slave endpoint circuit on the second chip. At block 1010, the slave endpoint circuit on the second chip transmits a response to the memory-mapped transaction request to the master on the first chip via the NPI 210 on the second chip and the NoCs 106 on the first and second chips.
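The request/response path of blocks 1002 through 1010 traverses both NoCs and the second chip's NPI, with the response retracing the route in reverse. A minimal sketch of the hop sequence (the component names come from the description above; modeling the path as a list is purely illustrative):

```python
# Hop sequence for a cross-chip register access (FIG. 10): forward from
# the master on chip 1 to a slave endpoint on chip 2, then back along
# the same route in reverse order (block 1010).

def cross_chip_access(request):
    forward_path = [
        "NMU (chip 1)", "NoC (chip 1)", "external connector",
        "NoC (chip 2)", "NSU (chip 2)", "PMC/root node (chip 2)",
        "NPI switches (chip 2)", "protocol block (chip 2)",
        "slave endpoint (chip 2)",
    ]
    # Append the reverse route, excluding the endpoint itself.
    return forward_path + forward_path[-2::-1]

path = cross_chip_access({"op": "read"})
assert path[0] == "NMU (chip 1)" and path[-1] == "NMU (chip 1)"
assert path.count("slave endpoint (chip 2)") == 1
```

The sketch makes the layering visible: the NoCs carry the transaction between chips, while the NPI is used only locally on the chip that owns the target slave endpoint circuit.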
  • Examples described herein can achieve benefits. For example, configuration data of the SoCs can be moved off-chip from the SoCs, thereby reducing space and resources on the SoC. Memory chips may be easily and cheaply manufactured and programmed, and separate chips of the SoCs and memory chips may reduce cost and complexity of producing the systems. Further, by implementing a configurable NoC, a flexible, low-overhead communications interconnect can be implemented in the SoCs. The information read from off-chip by the chips can enable a minimal configuration for the NoCs to establish communications between the SoCs to permit communications for further configuration. Other benefits and advantages may be obtained by various examples.
  • While the foregoing is directed to specific examples, other and further examples may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.

Claims (20)

1. An apparatus comprising:
a first chip comprising a first processing system and a first configurable Network-on-Chip (NoC) connected to the first processing system; and
a second chip comprising a second processing system and a second configurable NoC connected to the second processing system;
wherein:
the first configurable NoC is connected to the second configurable NoC via an external connector;
the first processing system is operable to obtain first information from off of the first chip and configure the first configurable NoC based on the first information;
the second processing system is operable to obtain second information from off of the second chip and configure the second configurable NoC based on the second information; and
the first processing system and the second processing system are communicatively coupled with each other via the first configurable NoC and the second configurable NoC when the first configurable NoC and the second configurable NoC are configured based on the first information and the second information, respectively.
2. The apparatus of claim 1 further comprising an interposer, the first chip and the second chip each being attached to the interposer, the external connector being on the interposer.
3. The apparatus of claim 1, wherein:
the first processing system comprises a first controller and first read-only memory (ROM);
executing instructions stored on the first ROM by the first controller causes the first controller to obtain the first information from off of the first chip and configure the first configurable NoC;
the second processing system comprises a second controller and second ROM; and
executing instructions stored on the second ROM by the second controller causes the second controller to obtain the second information from off of the second chip and configure the second configurable NoC.
4. The apparatus of claim 1, wherein:
the first configurable NoC comprises first routing and first switches connected to the first routing;
the first switches comprise respective first programmable register blocks to configure functionality of the first switches;
the first processing system is operable to write at least some of the first information to at least some of the first programmable register blocks to configure the first configurable NoC based on the first information;
the second configurable NoC comprises second routing and second switches connected to the second routing;
the second switches comprise respective second programmable register blocks to configure functionality of the second switches; and
the second processing system is operable to write at least some of the second information to at least some of the second programmable register blocks to configure the second configurable NoC based on the second information.
5. The apparatus of claim 1, wherein:
the first configurable NoC comprises a first peripheral interconnect connected between the first processing system and first configurable components of the first configurable NoC, the first processing system being operable to configure the first configurable components of the first configurable NoC via the first peripheral interconnect; and
the second configurable NoC comprises a second peripheral interconnect connected between the second processing system and second configurable components of the second configurable NoC, the second processing system being operable to configure the second configurable components of the second configurable NoC via the second peripheral interconnect.
6. The apparatus of claim 5, wherein the first processing system is operable to communicate with a programmable component of the second chip via the first configurable NoC, the second configurable NoC, the second processing system, and the second peripheral interconnect.
7. The apparatus of claim 1, wherein:
the first processing system and the second processing system are operable to communicate configuration data via the first configurable NoC and the second configurable NoC when the first configurable NoC and the second configurable NoC are configured based on the first information and the second information, respectively;
the first processing system is operable to further configure the first configurable NoC based on the configuration data; and
the second processing system is operable to further configure the second configurable NoC based on the configuration data.
8. The apparatus of claim 1, wherein:
the first chip further comprises a first programmable logic region and a first configuration interconnect connected between the first processing system and the first programmable logic region;
the first processing system being operable to configure the first programmable logic region via the first configuration interconnect;
the second chip further comprises a second programmable logic region and a second configuration interconnect connected between the second processing system and the second programmable logic region; and
the second processing system being operable to configure the second programmable logic region via the second configuration interconnect.
9. A method for operating multiple integrated circuits, the method comprising:
configuring, locally at each chip of multiple chips by a controller of the respective chip, a configurable Network-on-Chip (NoC) of the respective chip based on initial configuration data, wherein the configurable NoCs of the multiple chips are connected via external connectors external to the multiple chips;
communicating system configuration data between the controllers of the multiple chips via the configurable NoCs of the multiple chips configured based on the initial configuration data; and
configuring, locally at each chip by the controller of the respective chip, the configurable NoC of the respective chip based on the system configuration data.
10. The method of claim 9 further comprising:
communicating fabric configuration data between the controllers of the multiple chips via the configurable NoCs of the multiple chips configured based on the system configuration data; and
configuring, locally at each chip by the controller of the respective chip, one or more programmable logic regions of the respective chip based on the fabric configuration data, wherein the configurable NoC of the respective chip is not used to configure the one or more programmable logic regions.
11. The method of claim 9, wherein configuring the configurable NoC of the respective chip based on the initial configuration data and the system configuration data each includes communicating between the controller of the respective chip and first programmable components of the configurable NoC of the respective chip via a peripheral interconnect.
12. The method of claim 11 further comprising communicating between the controller of a first chip of the multiple chips and a second programmable component of a second chip of the multiple chips via the configurable NoCs of the first chip and the second chip configured based on the initial configuration data or the system configuration data and via the peripheral interconnect of the second chip.
13. The method of claim 9, wherein the initial configuration data is obtained by each controller of the multiple chips from off-chip from the respective chip.
14. The method of claim 9, wherein each configurable NoC of the multiple chips includes:
egress circuits;
ingress circuits;
programmable switches; and
routing, wherein the programmable switches are interconnected by the routing, the interconnected programmable switches and routing being connected to and between the egress circuits and the ingress circuits.
15. The method of claim 14, wherein the programmable switches each include a register block, the register block being writable to program the programmable switches.
16. A method for operating multiple integrated circuits, the method comprising:
communicatively connecting a first processing system on a first chip to a second processing system on a second chip via a first configurable Network-on-Chip (NoC) on the first chip and a second configurable NoC on the second chip;
transmitting a first transaction request from the first processing system through the first configurable NoC and the second configurable NoC to the second processing system; and
transmitting a second transaction request corresponding to the first transaction request from the second processing system to a configurable component on the second chip via a peripheral interconnect on the second chip, wherein the second processing system is operable to configure the second configurable NoC via the peripheral interconnect.
17. The method of claim 16 further comprising translating, by the second processing system, the first transaction request into the second transaction request.
18. The method of claim 16, wherein the configurable component on the second chip is in a circuit block that is not part of the second configurable NoC.
19. The method of claim 16, wherein communicatively connecting the first processing system to the second processing system comprises:
configuring, locally at the first chip by the first processing system, the first configurable NoC based on initial configuration data obtained by the first processing system from off of the first chip; and
configuring, locally at the second chip by the second processing system, the second configurable NoC based on initial configuration data obtained by the second processing system from off of the second chip.
20. The method of claim 16, wherein configuring the second configurable NoC comprises configuring, by the second processing system, programmable switches of the second configurable NoC via the peripheral interconnect.
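The staged configuration flow recited in claims 9 and 15 — each chip's controller first programs the register blocks of its local NoC switches from initial configuration data, then the chips exchange system configuration data over the newly configured NoCs and reconfigure locally — can be illustrated with a minimal sketch. This is not from the patent text; all names (`Switch`, `Chip`, the register keys) are hypothetical, and the "external connector" between chips is modeled as a simple object reference.

```python
# Illustrative sketch (hypothetical names) of the staged configuration flow
# of claims 9 and 15: local configuration from initial data, exchange of
# system configuration data over the configured NoCs, then local
# reconfiguration based on that system data.

class Switch:
    """A NoC switch whose register block is writable to program it (claim 15)."""
    def __init__(self):
        self.registers = {}

    def write(self, reg, value):
        self.registers[reg] = value


class Chip:
    def __init__(self, name, num_switches=2):
        self.name = name
        self.switches = [Switch() for _ in range(num_switches)]
        self.noc_up = False
        self.link = None   # stands in for the external connector (claim 9)
        self.inbox = None

    def configure_noc(self, config_data):
        # The controller writes configuration to each switch's register
        # block, as via a peripheral interconnect (claims 11 and 20).
        for sw, regs in zip(self.switches, config_data):
            for reg, value in regs.items():
                sw.write(reg, value)
        self.noc_up = True

    def send(self, payload):
        # Communication requires both NoCs to have been configured first.
        assert self.noc_up and self.link.noc_up, "NoC not configured yet"
        self.link.inbox = payload


initial = [{"route": 1}, {"route": 2}]
a, b = Chip("A"), Chip("B")
a.link, b.link = b, a

# Phase 1: configure locally, at each chip, from initial configuration data.
a.configure_noc(initial)
b.configure_noc(initial)

# Phase 2: communicate system configuration data over the configured NoCs.
a.send([{"route": 1, "qos": 3}, {"route": 2, "qos": 3}])

# Phase 3: configure locally, at each chip, based on the system data.
b.configure_noc(b.inbox)
print(b.switches[0].registers)  # {'route': 1, 'qos': 3}
```

The same two-phase pattern extends to the fabric configuration data of claim 10, where the NoC carries the data between controllers but each controller programs its programmable logic regions through a separate configuration interconnect.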
US15/990,506 2018-05-25 2018-05-25 Multi-chip structure having configurable network-on-chip Active US10505548B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/990,506 US10505548B1 (en) 2018-05-25 2018-05-25 Multi-chip structure having configurable network-on-chip

Publications (2)

Publication Number Publication Date
US20190363717A1 (en) 2019-11-28
US10505548B1 US10505548B1 (en) 2019-12-10

Family

ID=68615349

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/990,506 Active US10505548B1 (en) 2018-05-25 2018-05-25 Multi-chip structure having configurable network-on-chip

Country Status (1)

Country Link
US (1) US10505548B1 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112822124A (en) * 2020-12-31 2021-05-18 深圳云天励飞技术股份有限公司 Multi-chip communication system, method, chip and storage medium
US11036660B2 (en) * 2019-03-28 2021-06-15 Intel Corporation Network-on-chip for inter-die and intra-die communication in modularized integrated circuit devices
US11169822B2 (en) * 2019-02-14 2021-11-09 Xilinx, Inc. Configuring programmable logic region via programmable network
US11223361B2 (en) * 2018-09-28 2022-01-11 Intel Corporation Interface for parallel configuration of programmable devices
US11264361B2 (en) 2019-06-05 2022-03-01 Invensas Corporation Network on layer enabled architectures
CN114205241A (en) * 2021-11-19 2022-03-18 芯盟科技有限公司 Network-on-chip
US20220116044A1 (en) * 2018-12-27 2022-04-14 Intel Corporation Network-on-chip (noc) with flexible data width
US20220156215A1 (en) * 2018-10-18 2022-05-19 Shanghai Cambricon Information Technology Co., Ltd. Network-on-chip data processing method and device
US11386020B1 (en) * 2020-03-03 2022-07-12 Xilinx, Inc. Programmable device having a data processing engine (DPE) array
US11424744B2 (en) * 2018-12-28 2022-08-23 Intel Corporation Multi-purpose interface for configuration data and user fabric data
US11789883B2 (en) * 2018-08-14 2023-10-17 Intel Corporation Inter-die communication of programmable logic devices
US11971836B2 (en) * 2018-10-18 2024-04-30 Shanghai Cambricon Information Technology Co., Ltd. Network-on-chip data processing method and device

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11296706B2 (en) * 2018-06-27 2022-04-05 Intel Corporation Embedded network on chip accessible to programmable logic fabric of programmable logic device in multi-dimensional die systems
US10893005B2 (en) * 2018-09-17 2021-01-12 Xilinx, Inc. Partial reconfiguration for Network-on-Chip (NoC)
US11704271B2 (en) * 2020-08-20 2023-07-18 Alibaba Group Holding Limited Scalable system-in-package architectures
US11520717B1 (en) 2021-03-09 2022-12-06 Xilinx, Inc. Memory tiles in data processing engine array
US11848670B2 (en) 2022-04-15 2023-12-19 Xilinx, Inc. Multiple partitions in a data processing array

Family Cites Families (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2002233500A1 (en) 2001-02-14 2002-08-28 Clearspeed Technology Limited An interconnection system
US6781407B2 (en) 2002-01-09 2004-08-24 Xilinx, Inc. FPGA and embedded circuitry initialization and processing
US7420392B2 (en) 2001-09-28 2008-09-02 Xilinx, Inc. Programmable gate array and embedded circuitry initialization and processing
US7356633B2 (en) 2002-05-03 2008-04-08 Sonics, Inc. Composing on-chip interconnects with configurable interfaces
US7149829B2 (en) 2003-04-18 2006-12-12 Sonics, Inc. Various methods and apparatuses for arbitration among blocks of functionality
US8020163B2 (en) 2003-06-02 2011-09-13 Interuniversitair Microelektronica Centrum (Imec) Heterogeneous multiprocessor network on chip devices, methods and operating systems for control thereof
US7653820B1 (en) 2003-10-31 2010-01-26 Xilinx, Inc. System and method for securing using decryption keys during FPGA configuration using a microcontroller
US7185309B1 (en) 2004-01-30 2007-02-27 Xilinx, Inc. Method and apparatus for application-specific programmable memory architecture and interconnection network on a chip
US7689726B1 (en) 2004-10-01 2010-03-30 Xilinx, Inc. Bootable integrated circuit device for readback encoding of configuration data
US7328335B1 (en) 2004-10-01 2008-02-05 Xilinx, Inc. Bootable programmable logic device for internal decoding of encoded configuration data
US7281093B1 (en) 2004-12-21 2007-10-09 Xilinx, Inc. Memory apparatus for a message processing system and method of providing same
US7199608B1 (en) 2005-02-17 2007-04-03 Xilinx, Inc. Programmable logic device and method of configuration
US7380035B1 (en) 2005-03-24 2008-05-27 Xilinx, Inc. Soft injection rate control for buses or network-on-chip with TDMA capability
US7788625B1 (en) 2005-04-14 2010-08-31 Xilinx, Inc. Method and apparatus for precharacterizing systems for use in system level design of integrated circuits
US7301822B1 (en) 2005-05-18 2007-11-27 Xilinx, Inc. Multi-boot configuration of programmable devices
US7650248B1 (en) 2006-02-10 2010-01-19 Xilinx, Inc. Integrated circuit for in-system signal monitoring
US7454658B1 (en) 2006-02-10 2008-11-18 Xilinx, Inc. In-system signal analysis using a programmable logic device
US7831801B1 (en) 2006-08-30 2010-11-09 Xilinx, Inc. Direct memory access-based multi-processor array
US7521961B1 (en) 2007-01-23 2009-04-21 Xilinx, Inc. Method and system for partially reconfigurable switch
US7500060B1 (en) 2007-03-16 2009-03-03 Xilinx, Inc. Hardware stack structure using programmable logic
US9292436B2 (en) 2007-06-25 2016-03-22 Sonics, Inc. Various methods and apparatus to support transactions whose data address sequence within that transaction crosses an interleaved channel address boundary
US7576561B1 (en) 2007-11-13 2009-08-18 Xilinx, Inc. Device and method of configuring a device having programmable logic
US8006021B1 (en) 2008-03-27 2011-08-23 Xilinx, Inc. Processor local bus bridge for an embedded processor block core in an integrated circuit
US7772887B2 (en) 2008-07-29 2010-08-10 Qualcomm Incorporated High signal level compliant input/output circuits
US8214694B1 (en) 2009-03-12 2012-07-03 Xilinx, Inc. Lightweight probe and data collection within an integrated circuit
US9065722B2 (en) * 2012-12-23 2015-06-23 Advanced Micro Devices, Inc. Die-stacked device with partitioned multi-hop network
US9230112B1 (en) 2013-02-23 2016-01-05 Xilinx, Inc. Secured booting of a field programmable system-on-chip including authentication of a first stage boot loader to mitigate against differential power analysis
US9336010B2 (en) 2013-03-15 2016-05-10 Xilinx, Inc. Multi-boot or fallback boot of a system-on-chip using a file-based boot device
US9165143B1 (en) 2013-03-15 2015-10-20 Xilinx, Inc. Image file generation and loading
US9030227B1 (en) * 2013-08-20 2015-05-12 Altera Corporation Methods and apparatus for providing redundancy on multi-chip devices
US9152794B1 (en) 2013-09-05 2015-10-06 Xilinx, Inc. Secure key handling for authentication of software for a system-on-chip
US20150103822A1 (en) 2013-10-15 2015-04-16 Netspeed Systems Noc interface protocol adaptive to varied host interface protocols
US20150109024A1 (en) * 2013-10-22 2015-04-23 Vaughn Timothy Betz Field Programmable Gate-Array with Embedded Network-on-Chip Hardware and Design Flow
US9411688B1 (en) 2013-12-11 2016-08-09 Xilinx, Inc. System and method for searching multiple boot devices for boot images
US9699079B2 (en) 2013-12-30 2017-07-04 Netspeed Systems Streaming bridge design with host interfaces and network on chip (NoC) layers
US9652410B1 (en) 2014-05-15 2017-05-16 Xilinx, Inc. Automated modification of configuration settings of an integrated circuit
US9652252B1 (en) 2014-10-29 2017-05-16 Xilinx, Inc. System and method for power based selection of boot images
US9323876B1 (en) 2014-11-24 2016-04-26 Xilinx, Inc. Integrated circuit pre-boot metadata transfer
US10243882B1 (en) 2017-04-13 2019-03-26 Xilinx, Inc. Network on chip switch interconnect

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11789883B2 (en) * 2018-08-14 2023-10-17 Intel Corporation Inter-die communication of programmable logic devices
US11223361B2 (en) * 2018-09-28 2022-01-11 Intel Corporation Interface for parallel configuration of programmable devices
US11971836B2 (en) * 2018-10-18 2024-04-30 Shanghai Cambricon Information Technology Co., Ltd. Network-on-chip data processing method and device
US20220156215A1 (en) * 2018-10-18 2022-05-19 Shanghai Cambricon Information Technology Co., Ltd. Network-on-chip data processing method and device
US11342918B2 (en) * 2018-12-27 2022-05-24 Intel Corporation Network-on-chip (NOC) with flexible data width
US11700002B2 (en) * 2018-12-27 2023-07-11 Intel Corporation Network-on-chip (NOC) with flexible data width
US20220116044A1 (en) * 2018-12-27 2022-04-14 Intel Corporation Network-on-chip (noc) with flexible data width
US11424744B2 (en) * 2018-12-28 2022-08-23 Intel Corporation Multi-purpose interface for configuration data and user fabric data
US11169822B2 (en) * 2019-02-14 2021-11-09 Xilinx, Inc. Configuring programmable logic region via programmable network
US11036660B2 (en) * 2019-03-28 2021-06-15 Intel Corporation Network-on-chip for inter-die and intra-die communication in modularized integrated circuit devices
US20220150184A1 (en) * 2019-06-05 2022-05-12 Invensas Corporation Symbiotic Network On Layers
US11270979B2 (en) * 2019-06-05 2022-03-08 Invensas Corporation Symbiotic network on layers
US11824046B2 (en) * 2019-06-05 2023-11-21 Invensas Llc Symbiotic network on layers
US11264361B2 (en) 2019-06-05 2022-03-01 Invensas Corporation Network on layer enabled architectures
US11386020B1 (en) * 2020-03-03 2022-07-12 Xilinx, Inc. Programmable device having a data processing engine (DPE) array
WO2022142919A1 (en) * 2020-12-31 2022-07-07 深圳云天励飞技术股份有限公司 Multi-chip communication system and method, chip and storage medium
CN112822124A (en) * 2020-12-31 2021-05-18 深圳云天励飞技术股份有限公司 Multi-chip communication system, method, chip and storage medium
CN114205241A (en) * 2021-11-19 2022-03-18 芯盟科技有限公司 Network-on-chip

Also Published As

Publication number Publication date
US10505548B1 (en) 2019-12-10

Similar Documents

Publication Publication Date Title
US10505548B1 (en) Multi-chip structure having configurable network-on-chip
US11201623B2 (en) Unified programmable computational memory and configuration network
JP7244497B2 (en) Integration of programmable devices and processing systems into integrated circuit packages
US5908468A (en) Data transfer network on a chip utilizing a multiple traffic circle topology
US6266797B1 (en) Data transfer network on a computer chip using a re-configurable path multiple ring topology
KR102381158B1 (en) Standalone interface for integrating stacked silicon interconnect (SSI) technology
US11263169B2 (en) Configurable network-on-chip for a programmable device
US6275975B1 (en) Scalable mesh architecture with reconfigurable paths for an on-chip data transfer network incorporating a network configuration manager
US20220198115A1 (en) Modular periphery tile for integrated circuit device
US11169822B2 (en) Configuring programmable logic region via programmable network
KR20220095203A (en) Multichip Stackable Devices
US20230376248A1 (en) System-on-chip having multiple circuits and memory controller in separate and independent power domains
CN111357016B (en) On-chip communication system for neural network processor
US9176916B2 (en) Methods and systems for address mapping between host and expansion devices within system-in-package (SiP) solutions
CN107622993B (en) Shared through-silicon vias in 3D integrated circuits
US9170974B2 (en) Methods and systems for interconnecting host and expansion devices within system-in-package (SiP) solutions
US10929331B1 (en) Layered boundary interconnect
US20230283547A1 (en) Computer System Having a Chip Configured for Memory Attachment and Routing

Legal Events

Date Code Title Description
AS Assignment

Owner name: XILINX, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SWARBRICK, IAN A.;ANSARI, AHMAD R.;SCHULTZ, DAVID P.;AND OTHERS;SIGNING DATES FROM 20180522 TO 20180525;REEL/FRAME:045907/0847

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4